
AI lab OpenAI has released a new version of its GPT-3 language model that produces less offensive language, less misinformation, and fewer errors overall, using techniques from AI alignment research.
We’ve trained GPT-3 to be more aligned with what humans want: The new InstructGPT models are better at following human intent than a 100x larger model, while also improving safety and truthfulness. https://t.co/rKNpCDAMb2
— OpenAI (@OpenAI) January 27, 2022
To create the model, called InstructGPT, the researchers used reinforcement learning from human feedback (RLHF). They hired 40 labelers to rate GPT-3’s responses to a set of pre-written prompts, such as “Write a story about a wise frog named Julius” or “Write a creative ad for the following product to run on Facebook.”
Responses that the labelers judged to be more in line with the prompt writer’s apparent intent received higher scores, while abusive, violent, or otherwise unacceptable outputs were flagged as inappropriate.
The labelers’ feedback was then used as a reward signal in a reinforcement learning algorithm that trained InstructGPT to better match its responses to the intent behind a prompt.
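For readers curious what this looks like in practice, below is a minimal, self-contained Python sketch of the idea: a reward model is fit to pairwise human preferences and can then serve as the reward signal for an RL fine-tuning step. Everything here (the toy feature vectors, the function names, the Bradley-Terry-style loss) is an illustrative assumption, not OpenAI’s code; the actual pipeline trains a neural reward model on labeler comparisons of text and fine-tunes GPT-3 with PPO at far larger scale.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): each "response" is a feature vector, and
# human labelers rank pairs of responses. The real system works on text.
DIM = 8
reward_weights = np.zeros(DIM)  # parameters of a linear reward model

def reward(features):
    # Score a response; the real reward model is a neural network.
    return features @ reward_weights

def train_reward_model(comparisons, lr=0.1, epochs=200):
    # Fit the reward model to pairwise preferences with a Bradley-Terry
    # style objective: P(a preferred over b) = sigmoid(r(a) - r(b)).
    global reward_weights
    for _ in range(epochs):
        for preferred, rejected in comparisons:
            diff = reward(preferred) - reward(rejected)
            p = 1.0 / (1.0 + np.exp(-diff))  # model's preference probability
            # Gradient ascent on log P(preferred beats rejected).
            reward_weights += lr * (1.0 - p) * (preferred - rejected)

# Simulate labeler data: a hidden "true intent" direction decides which
# of two random responses a labeler would prefer.
true_intent = rng.normal(size=DIM)
pairs = []
for _ in range(500):
    a, b = rng.normal(size=DIM), rng.normal(size=DIM)
    pairs.append((a, b) if a @ true_intent > b @ true_intent else (b, a))

train_reward_model(pairs)

# The learned reward now ranks unseen responses roughly the way the
# labelers would, and could be plugged into an RL loop (the InstructGPT
# paper uses PPO) as the training reward for the language model itself.
test = rng.normal(size=(5, DIM))
print("learned reward scores:", np.round(reward(test), 2))
print("labeler-style scores :", np.round(test @ true_intent, 2))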
OpenAI found that users prefer InstructGPT’s answers to GPT-3’s more than 70% of the time.
The researchers also compared different-sized versions of the new model and found that responses from a 1.3-billion-parameter InstructGPT model are preferred over those from the 175-billion-parameter GPT-3. This suggests that alignment techniques could be an easier way to improve language models than simply increasing their size, the organization says.
“This is the first time alignment research has been applied to a real product,” said Jan Leike, co-lead of the alignment team at OpenAI.
However, according to the researchers, InstructGPT still makes simple mistakes, sometimes producing irrelevant or nonsensical answers. For example, if it is given a prompt that contains a falsehood, it will treat that falsehood as true.
OpenAI has made InstructGPT the default model for API users. GPT-3 is still available, but the organization does not recommend its use.
Previously, OpenAI tried to mitigate the bias and toxicity of the underlying model. Despite the progress made, the developers acknowledge that a number of issues remain unresolved in adapting GPT-3 to human values and norms.
Recall that in November 2021, OpenAI trained a language model to solve mathematical problems.
In September, the lab’s researchers taught GPT-3 to generate short summaries of fiction books.