
Non-profit organization OpenAI has announced reduced bias and improved safety in the latest update to its DALL-E 2 image generator.
According to the organization, the new technique allows the algorithm to generate images of people that more accurately reflect the diversity of the world’s population.
“This technique is applied at the system level when DALL-E is given a prompt describing a person without specifying race or gender, for example ‘a firefighter’,” the press release says.
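The article does not describe how this works under the hood, but one common way to make such a system-level adjustment is to augment under-specified prompts with sampled demographic attributes before they reach the model. The sketch below is purely illustrative: the attribute lists, the `augment_prompt` helper, and the keyword check for person-describing prompts are assumptions made for the example, not OpenAI's actual code.

```python
import random

# Illustrative attribute pools -- assumed for this sketch, not OpenAI's actual lists.
GENDERS = ["woman", "man", "non-binary person"]
ETHNICITIES = ["Black", "East Asian", "Hispanic", "Middle Eastern", "South Asian", "White"]

# Crude stand-in for real person detection; a production system would use a classifier.
PERSON_WORDS = {"person", "firefighter", "doctor", "teacher", "ceo"}

def augment_prompt(prompt: str) -> str:
    """Append demographic attributes to a prompt that describes a person
    but does not already specify gender or ethnicity."""
    lowered = prompt.lower()
    mentions_person = any(word in lowered for word in PERSON_WORDS)
    already_specified = any(term.lower() in lowered for term in GENDERS + ETHNICITIES)
    if mentions_person and not already_specified:
        return f"{prompt}, {random.choice(ETHNICITIES)} {random.choice(GENDERS)}"
    return prompt

print(augment_prompt("a photo of a firefighter"))
# e.g. "a photo of a firefighter, South Asian woman"
```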

As a result of testing the new technique, users were 12 times more likely to say that DALL-E images included people of different backgrounds, the company said.
“We plan to improve this technique over time as we gather more data and feedback,” OpenAI added.
The organization launched a preview version of DALL-E 2 for a limited number of people in April 2022. The developers believe this allowed them to better understand the capabilities and limitations of the technology and to improve its safety systems.
According to OpenAI, it also took other steps to improve the generator during this period, including:
- minimized the risk of misuse of DALL-E to create deepfakes;
- blocked prompts and uploaded images that violated the content policy;
- strengthened safeguards against misuse.
OpenAI said these changes will allow it to open the algorithm up to more users.
“Expanding access is an important part of our responsible deployment of AI systems, as it allows us to learn more about how they are used in real-world conditions and to continue improving our safety systems,” the developers noted.
Recall that in July, researchers found that users cannot distinguish images created by a neural network from those created by a person.
In January, OpenAI released a new version of the GPT-3 language model that produces less offensive language, less misinformation, and fewer errors overall.
