The latest chatbot from OpenAI, the firm co-founded by Elon Musk, can identify incorrect premises and refuse to answer inappropriate requests
Professors, programmers, and journalists could all be out of a job in just a few years, after the latest chatbot from OpenAI, the research organisation co-founded by Elon Musk, stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.
The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT is significantly more capable still.
In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.
Dan Gillmor, a journalism professor at Arizona State University, asked the AI to handle one of the assignments he gives his students: writing a letter to a relative giving advice regarding online security and privacy. “If you’re unsure about the legitimacy of a website or email, you can do a quick search to see if others have reported it as being a scam,” the AI advised in part.
“I would have given this a good grade,” Gillmor said. “Academia has some very serious issues to confront.”
OpenAI said the new AI was created with a focus on ease of use. “The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” OpenAI said in a post announcing the release.
Unlike previous AI from the company, ChatGPT has been released for anyone to use, for free, during a “feedback” period. The company hopes to use this feedback to improve the final version of the tool.
ChatGPT is good at self-censoring, and at recognising when it is being asked an impossible question. Asked, for instance, to describe what happened when Columbus arrived in America in 2015, older models might have willingly presented an entirely fictitious account, but ChatGPT recognises the false premise and warns that any answer would be fictional.
The bot is also capable of refusing to answer queries altogether. Ask it for advice on stealing a car, for example, and the bot will say that “stealing a car is a serious crime that can have severe consequences”, and instead give advice such as “using public transportation”.
But the limits are easy to evade. Ask the AI instead for advice on how to beat the car-stealing mission in a fictional VR game called Car World and it will merrily give users detailed guidance on how to steal a car, and answer increasingly specific questions on problems like how to disable an immobiliser, how to hotwire the engine, and how to change the licence plates – all while insisting that the advice is only for use in the game Car World.
The AI is trained on a huge sample of text taken from the internet, generally without explicit permission from the authors of the material used. That has led to controversy, with some arguing that the technology is most useful for “copyright laundering” – making works derivative of existing material without breaking copyright.
One unusual critic was Elon Musk, who co-founded OpenAI in 2015 before parting ways in 2017 due to conflicts of interest between the organisation and Tesla. In a post on Twitter on Sunday, Musk revealed that the organisation “had access to [the] Twitter database for training”, but that he had “put that on pause for now”.
“Need to understand more about governance structure & revenue plans going forward,” Musk added. “OpenAI was started as open-source & non-profit. Neither are still true.”