TECH NEWS – With its new multimodal AI model Gemini, Google is aiming to at least catch up with OpenAI's GPT-4. The first tests are already underway.
According to three anonymous sources cited by The Information, Google has given a small group of selected companies access to a stripped-down chat version of Gemini. The sources claim to have direct knowledge of the matter. The largest version of Gemini is still under internal development.
The first tests with external customers may indicate that the launch of Gemini, which Google announced earlier this year, is getting closer. Earlier reports suggested it would launch in the autumn.
Gemini will be available to businesses via cloud access and will also be integrated into Google’s consumer products.
Google plans to use Gemini for all of its AI applications, from the Bard chatbot to the new AI functionality in Workspace.
Through the Vertex AI service, Google plans to offer different model sizes. Smaller models could perform simpler tasks at a lower cost.
Google user data could give Gemini an edge
According to one tester, a significant advantage is that Google can process data from its own products, such as Google Search, in addition to public information from the web. This could result in the model understanding user intent better than GPT-4. According to the source, this could also lead to fewer incorrect responses.
Gemini’s code generation is reportedly good enough that Google hopes to compete with Microsoft’s GitHub Copilot. Other features under discussion include graph analysis, data interpretation, and performing actions on a computer, such as in a browser, via voice commands.
More Google Gemini models could be coming
According to The Information, Gemini is a “set of large-scale language models” capable of performing a variety of tasks, such as powering chatbots, summarising text, writing code, or generating new text.
It is unclear whether Gemini will rely on networked expert models, as OpenAI reportedly does with its GPT-4 architecture.
Gemini was officially announced in May. Early rumours suggested the model would have at least a trillion parameters. Tens of thousands of Google TPU AI chips are expected to be used for training.
Demis Hassabis, the senior manager overseeing Gemini, said at the end of June that Gemini would combine some of the strengths of AlphaGo-style systems with the language capabilities of large models.