Over the past 12 months, the field of artificial intelligence has seen a flood of events, from major deals and large-scale updates to scandals. The technology was discussed so often that the compilers of the British Collins dictionary named the term “AI” their word of the year, while the Cambridge Dictionary chose the verb “hallucinate.”
We invite you to look back at the most notable developments in artificial intelligence in 2023.
What happened at OpenAI
OpenAI still holds the title of industry hero. The beginning of 2023 was marked by a multibillion-dollar deal: Microsoft announced a “multi-year” investment in the Californian AI startup. Toward the end of the year, it emerged that the company was negotiating with investors over a possible sale of shares that would raise its valuation to $80–90 billion.
In February, the company introduced a paid version of the ChatGPT chatbot, and in March it gave developers API access to the ChatGPT model and to Whisper, its speech-to-text model. That same month, OpenAI introduced GPT-4, a large multimodal model.
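To illustrate what that API access meant in practice, here is a minimal sketch of the JSON request body the chat completions endpoint expected in its 2023-era public documentation. The `build_chat_request` helper and the `gpt-3.5-turbo` default are illustrative assumptions, not details from the article:

```python
import json

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Serialize a chat completion request body as JSON.

    The 'model' and 'messages' fields follow the shape of OpenAI's
    chat completions API; no network call is made here.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

# Example: the serialized body a client would POST to the endpoint.
body = build_chat_request("Summarize 2023 in AI in one sentence.")
print(body)
```

A client library would send this body with an API key in the request headers; the sketch stops at constructing the payload so it stays self-contained.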
However, the company ran into regulatory difficulties: in April, the Italian authorities ordered ChatGPT blocked, accusing OpenAI of “illegal collection of personal data.” Management subsequently settled the claims, and the chatbot became available again in that jurisdiction.
Before that, a technology ethics group had called on the US Federal Trade Commission to investigate the company, and regulators in Great Britain also raised claims.
Against this backdrop, OpenAI CEO Sam Altman spoke before the US Congress and called on the government to regulate the use and development of artificial intelligence. President Joe Biden's administration subsequently issued an executive order setting new standards for AI safety and security.
Development of ChatGPT
In the fall, OpenAI released a large-scale update, thanks to which ChatGPT learned to “see, hear and speak.”
By that point, the chatbot could already analyze data, write Python code, build graphs and solve mathematical problems. The neural network even managed to scientifically refute the “flat Earth” theory and pass a neurology exam.
In November, Altman revealed plans to create artificial general intelligence (AGI) and shared details about GPT-5. Just a few days later, perhaps the biggest corporate scandal of the year broke out at OpenAI.
Fired – not fired
On November 17, OpenAI's board of directors removed Altman from his position as CEO. The stated reasons were allegedly an internal review and the fact that the chief executive “was not always frank” with the board.
Notably, Reuters sources reported that shortly before this decision, the company's researchers had warned the board of directors about a major AI breakthrough that “could threaten humanity.” However, nothing more came of this story.
Most of the company's employees and Altman's associates opposed the decision and threatened to quit. Microsoft CEO Satya Nadella also expressed his dissatisfaction, according to Bloomberg.
Altman's dismissal lasted less than a week: on November 22, OpenAI announced his return as CEO. The previous board resigned in full. The new board of directors includes:
- former Salesforce co-CEO Bret Taylor (chairman);
- former US Treasury Secretary Larry Summers;
- Quora co-founder Adam D'Angelo (was on the previous board).
In early February, Google introduced the Bard chatbot, built on a lightweight version of the LaMDA language model. The corporation later launched an “Experiment updates” blog for it, listing changes to the conversational AI algorithm.
In March, Bard became available to a limited number of waitlist members in the US and UK.
At the end of spring, at the I/O 2023 developer conference, Google presented new AI functions for its services. These covered not only the chatbot, but also the PaLM 2 multimodal model, the integration of conversational AI into Google Search, and the Duet AI toolkit.
On July 13, the corporation launched Bard for users in the European Union and Brazil and also presented a number of new chatbot functions. In December, the tool received a Gemini update that improved its performance.
Mark Zuckerberg's Meta spent the year trying to keep up with the tech giants and the industry leader. In February, the corporation released the LLaMA large language model for AI researchers, in versions with 13 billion and 65 billion parameters. At the time, the developers said the smaller LLaMA-13B version performed better “in most tests” than OpenAI's GPT-3.
In the third quarter of the year, reports emerged that Meta intended to create a competitor to ChatGPT. At the same time, the company introduced the AudioCraft AI tool for generating music based on text descriptions.
At the end of September, the corporation held a presentation showing a voice AI assistant, neural networks with different personalities, and smart glasses. Two months later, Meta announced Emu Video and Emu Edit, generative AI tools for content creation and editing.
At the same time, according to media reports, the corporation disbanded the team responsible for regulating and preventing potential threats in AI development. In December, Zuckerberg's company and IBM announced the creation of the AI Alliance for joint work on technology development, which more than 50 technology firms have joined.
In March 2023, Microsoft introduced the Kosmos-1 neural network, which accepts text, images, audio and video as input. The researchers called the system a “multimodal large language model.” In their view, such algorithms will become the basis of AGI, capable of performing tasks at a human level.
That same month, Chinese search giant Baidu unveiled the AI-powered Ernie Bot. According to the company, the chatbot is meant to transform its search engine and could improve the efficiency of cloud computing, smart cars, home appliances, and other core businesses. However, the company warned that the system still has a number of problems.
Tongyi Qianwen, from Chinese tech giant Alibaba, intends to compete with ChatGPT. The tool supports English and Chinese. Initially, it will appear in the corporate messenger DingTalk.
The chatbot will perform a number of tasks, including:
- converting conversations in meetings into written notes;
- writing emails;
- drawing up business proposals.
Elon Musk is also trying to keep up with the tech giants. In November, the billionaire announced Grok, a neural network developed by his company xAI. It is modeled on the guidebook from Douglas Adams's science fiction series The Hitchhiker's Guide to the Galaxy.
On the back of the news, a wave of cryptocurrencies of the same name swept the market; some of them turned out to be scams.
Concerns and risks
However, as the technology has developed, concerns about its safety have grown. Some experts have voiced fears of a loss of control over AI development.
In March, more than a thousand industry experts, including Elon Musk and Steve Wozniak, called for a six-month pause on training language models more powerful than OpenAI's GPT-4. Their open letter detailed the potential threats to society and civilization from competitive AI systems, including economic and political shocks.
However, some artificial intelligence experts criticized the initiative, calling it “hype around AI” and a distortion of several scientists' papers. Microsoft founder Bill Gates took a similar position: in his view, the world should focus on using the technology for the benefit of society.
Europol warned about threats from ChatGPT, and the European Commission saw risks for democracy in generative AI tools.
Former Google CEO Eric Schmidt said in May that AI poses an “existential risk” that could leave many people “hurt or killed.” Before that, the head of OpenAI acknowledged that the technology will radically change “usual society,” but said developers “must be careful.”
In August, a majority of participants in a Gartner survey identified generative AI as a threat to organizations. Meta, however, called concerns about the technology's existential risks “premature.” Biden made a similar statement, but emphasized that companies must be responsible for the safety of their AI tools and products.
In December, Pope Francis, head of the Roman Catholic Church, warned of the dangers of a neural network dictatorship. 2023 also saw a recorded rise in the use of deepfakes by scammers.
The past year demonstrated not only the rapid development of new technologies, but also the responsibility their developers must bear. Judging by the positions and statements of major industry players, they understand this.
2023 turned out to be rich in interesting and pivotal events. However, the community still has a lot of work to do to ease fears around AI and to create safe and useful tools.