
At its I/O 2023 developer conference, Google introduced new AI features for its services.
PaLM 2 multimodal model
Google introduced PaLM 2, an updated version of its PaLM language model, which it now describes as multimodal. The developers did not disclose the model’s technical details.
However, they noted that the model was trained with Google’s JAX framework on TPU v4 chips, and that the training data included corpora of scientific texts, which the company claims makes PaLM 2 better at mathematical and logical problems.
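PaLM 2’s training pipeline itself is not public; for readers unfamiliar with JAX, the following toy sketch (a made-up linear-regression loss, nothing from PaLM 2) illustrates the jit/grad workflow the framework provides, which runs unchanged on CPU, GPU, or TPU:

```python
# Toy illustration of the JAX workflow; this is NOT PaLM 2 code.
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # Simple linear model: mean squared error over a batch.
    w, b = params
    pred = x @ w + b
    return jnp.mean((pred - y) ** 2)

# jit-compile the gradient so the same Python code targets CPU, GPU, or TPU.
grad_fn = jax.jit(jax.grad(loss))

kx, ky = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(kx, (32, 4))
y = jax.random.normal(ky, (32,))
params = (jnp.zeros(4), jnp.float32(0.0))

grads = grad_fn(params, x, y)  # gradients w.r.t. (w, b)
params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)  # one SGD step
```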
The model’s code generation has also improved: PaLM 2 was trained on 20 programming languages, including JavaScript and Python.
In addition, the training data included text corpora in 100 languages, which the company says will improve the model’s performance on “multilingual tasks”.
PaLM 2 is available now via API, as well as in the Colab and Firebase services.
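For developers, this means the model can be called from code. The snippet below is a minimal, hypothetical sketch assuming the google-generativeai Python client and the “models/text-bison-001” PaLM endpoint Google exposed around that time; neither detail comes from the announcement itself, so adjust to whatever the current API offers:

```python
# Minimal sketch of calling the PaLM API from Python.
# Assumes the google-generativeai client (pip install google-generativeai)
# and the "models/text-bison-001" model name -- both are assumptions here.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # key issued via Google's developer console

completion = palm.generate_text(
    model="models/text-bison-001",
    prompt="Write a Python function that checks whether a string is a palindrome.",
    temperature=0.2,
    max_output_tokens=256,
)

print(completion.result)  # the generated text (in this case, code), if any
```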
Bard chatbot
The Bard conversational AI has been moved to the multimodal PaLM 2 model. The chatbot can now process images and include them in its responses, and it has also learned to work with maps and create tables.
The developers said the tool will soon gain support for extensions. Those announced include Google’s own services such as Docs, Sheets, and Gmail, as well as third-party partners such as the Adobe Firefly image generator and the Wolfram Alpha knowledge base.
In addition, the developers expanded Bard with Google Lens and added the ability to export Python code to Replit.
Google also dropped the waiting list for the chatbot and opened access in 180 countries. For now, the tool supports English, Korean, and Japanese, but the company plans to expand that to 40 languages in the near future, including Russian and Ukrainian.
Google Search
During the presentation, the company demonstrated conversational AI integrated directly into the search box. According to the developers, this approach will help users get up to speed on a topic faster, discover new perspectives, and explore information more conveniently.
The company gave an example of how the new search could handle a complex query like “Which is better for a family with a child under three and a dog: Bryce Canyon or Arches National Park?”
In response, the generative AI provided a synthesized overview of the requested locations, along with links for further research.
After that, the search engine offers next steps or suggests refining the query in a conversational mode, where the system remembers the full context of the conversation.
The developers also showed the potential of the new search for shopping. The tool can find a product, compare it with other offers, and help the user make a choice.
The generative AI produces a summary of important factors to consider before buying, along with relevant offers.
The feature is built on the Shopping Graph database, which contains more than 35 billion product listings.
So far, the feature is available through the experimental Search Labs program in the Google app for iOS and Android, and in Chrome on desktop, for users in the US.
Duet AI for Workspace
The Duet AI toolkit will let Workspace users apply generative AI in office applications:
- writing text in Docs and Gmail;
- analyzing and filling in spreadsheets in Sheets;
- generating images and summarizing presentations in Slides;
- automatically summarizing video calls in Meet.
The developers highlighted the Help me write feature for mobile devices: according to them, it will let users quickly draft long texts in the Docs and Gmail apps without typing on a full keyboard.
The company also introduced Sidekick, a sidebar that analyzes the open document, answers questions about its content, and generates suggestions.
Some features are already available to registered Workspace Labs users; the company has also opened a waiting list for everyone else.
Responsible AI
The company devoted part of its keynote to its approach to responsible AI. One of the additions to Search will be labels on images generated by algorithms.
Search will also begin to show when and where a given image first appeared online. Google’s engineers expect this to point users toward fact-checking sites and help them judge an image’s credibility.
The features will start rolling out in the coming weeks.
Android 14
The next version of the mobile OS, Android 14, will also receive a number of AI features. Among them is a wallpaper generator for the home screen and lock screen.
Users will be able to create wallpapers from emoji by choosing the symbols and color palette they want; the emoji will respond to touch.
The Cinematic Wallpapers feature creates 3D wallpapers from the user’s own photos: the AI analyzes the selected image and adds depth with a parallax effect.
The developers also announced a feature that generates wallpapers from a text description.
Cinematic Wallpapers will arrive on Pixel devices in June, with generative wallpapers coming in fall 2023.
In addition, Android 14 will bring a number of security improvements and more lock screen customization. A beta version of the OS is already available for Pixel devices and a small number of other vendors; the final release is scheduled for August-September 2023.
Pixel devices
During the presentation, Google showed several new devices from the Pixel line. Among them:
- the budget Pixel 7a smartphone;
- the Pixel Tablet with its docking station;
- the foldable Pixel Fold smartphone.
All three devices are based on Google’s own Tensor G2 chip. The company noted that owners of the new devices will have access to all the features presented at the fall event where the Pixel 7 was announced.
The Pixel 7a is already on sale starting at $499. The Pixel Tablet and Pixel Fold are available for pre-order and will ship in June, priced from $499 and $1,799 respectively.
Other innovations
In addition, the company introduced a number of new products for other applications and services:
- a waiting list for MusicLM, a service that generates music from text descriptions;
- an immersive view when building routes in Maps;
- Magic Editor in the Photos app for AI-assisted touch-ups;
- automatic video dubbing technology;
- generative AI to simplify publishing apps on the Play Store;
- Project Starline for creating 3D images of people;
- WebGPU support in Chrome to speed up AI web applications.
As a reminder, in April Google merged Brain and DeepMind into a single team.