With the debut of its new smartphones, Samsung has kicked off the first wave of mobile Artificial Intelligence (AI) in 2024.
On Wednesday, January 17, local time, Samsung officially unveiled its next-generation flagship smartphone series, the Galaxy S24, at the Galaxy Unpacked event held in San Jose, California.
Samsung announced a collaboration with Google Cloud to bring Generative AI to the Galaxy S24 series, offering users a unique AI experience. Samsung is the first Google Cloud partner to ship smartphones powered by Gemini Pro and Imagen 2, making the Galaxy S24 the first smartphone configured with both models on Google Cloud's machine learning platform, Vertex AI.
Equipped with Google's AI model Gemini Pro, the Galaxy S24 gains a multimodal model capable of understanding, summarizing, and combining different types of information, including text, code, images, and video. Gemini Pro on Vertex AI also provides core Google Cloud safeguards, such as security, privacy, and data compliance.
Galaxy S24 users can also leverage Google's powerful text-to-image generation model, Imagen 2, the most advanced text-to-image diffusion technology from Google DeepMind to date. With Imagen 2 on Vertex AI, Samsung brings secure and intuitive photo-editing features to users, accessible by activating Generative Edit in the Galaxy S24's Gallery app.
Samsung also says it is one of the first customers to test Gemini Ultra, the largest and most capable version of the Gemini model. Additionally, the S24 series will ship with Gemini Nano, an on-device version of the model that Google describes as its most efficient large language model (LLM) for edge tasks.
Galaxy S24 Introduces Circle to Search
Google introduces a new search experience for Galaxy S24 users called Circle to Search. Users can search for any content they want to know about on their Android phones without switching between apps. With a simple gesture, such as drawing a circle, highlighting, or swiping a line over the content, users can initiate a search and obtain relevant information.
This makes it easy to search quickly when users see attractive clothing on social media, encounter unfamiliar phrases in articles, or spot unusual plants on YouTube. Summaries generated with Generative AI help users better understand concepts, opinions, or topics found online.
Google states that Circle to Search helps users quickly identify objects appearing in images or videos. For example, if a video related to fashion styling shows an outfit without a brand label, users can activate Circle to Search by long-pressing the home button or navigation bar on their phones and quickly find similar items available for purchase online.
With simultaneous text-and-image search and upgrades to Google's AI capabilities, users can more easily understand new concepts encountered online. For instance, if a user sees a picture of a corn dog, they can circle the image, type the query "why are these so popular," and quickly see an answer explaining that it is a Korean-style corn dog, prized for its combination of flavors and textures, with meat and gooey melted cheese inside a crispy outer layer, and boosted by the growing popularity of Korean cuisine.
Circle to Search is set to officially roll out to select high-end Android phones on January 31, including the Galaxy S24 series and Google's Pixel 8 and Pixel 8 Pro.
Google also highlights enhancements to the Galaxy S24's Notes, Voice Recorder, and Keyboard apps compared with previous Samsung phones. For example, users can record lectures with the Voice Recorder app and quickly get summaries of the most important parts. With Imagen 2, users can edit photos via Generative Edit in the Gallery app.