TECH NEWS – Apple’s AR headset and OpenAI’s technology can be used together, with some incredible results.
At last year’s WWDC, the Cupertino-based company announced the Apple Vision Pro, a headset for which third-party developers can create apps (provided they learn the visionOS environment). That is how ChatGPT, one of the most important productivity apps today, arrived on the platform. Users can converse with GPT-4 Turbo while wearing the headset, and the app itself is free (of course, the Apple Vision Pro costs at least $3,500, so it certainly won’t be a mass-market product…).
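For context, visionOS apps are built with SwiftUI, much as iOS apps are. The sketch below is only a minimal, illustrative skeleton of the kind of window a third-party developer would start from; the app name and UI text are hypothetical and not taken from OpenAI’s actual code.

```swift
import SwiftUI

// Minimal, illustrative visionOS SwiftUI skeleton (names are hypothetical).
@main
struct AssistantApp: App {
    var body: some Scene {
        WindowGroup {
            VStack(spacing: 16) {
                Text("Ask the assistant")
                    .font(.title)
                TextField("Type or dictate a question…", text: .constant(""))
                    .textFieldStyle(.roundedBorder)
            }
            .padding()
        }
    }
}
```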
OpenAI’s ChatGPT app gives us a combination of AI and AR: you can generate text, images, and more in a virtual space, all powered by the latest version of the company’s large language model (LLM), GPT-4 Turbo. You can ask questions, get answers, and request advice. This is OpenAI’s first tentative step into augmented reality, yet it already lets users experience AI in a 3D space with unprecedented immersion. It is quite easy to get absorbed in content-generating AI.
Dictation support was previously added to the ChatGPT app on both iOS and Android. OpenAI uses the same technology in its visionOS app, where users can dictate queries to the AI and also feed it images from the real world, which is especially useful in practical situations (solving complex tasks, looking up recipes, and so on). Developers have an easier job too, because the Apple Vision Pro offers Optic ID and spatial audio, and VisionKit is also available to them, so they can take advantage of every aspect of the headset.
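Under the hood, this kind of multimodal query boils down to a request to OpenAI’s Chat Completions API. The Swift sketch below shows one plausible way an app could send a dictated question together with a captured image; the model identifier, function name, and prompt are illustrative, and an API key is assumed to be available.

```swift
import Foundation

// Sketch: send a dictated query plus a real-world image to the OpenAI
// Chat Completions API, roughly the kind of request a visionOS client
// would make. Model name is illustrative; an API key is assumed.
func askGPT4Turbo(query: String, imageData: Data, apiKey: String) async throws -> String {
    let endpoint = URL(string: "https://api.openai.com/v1/chat/completions")!
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    // Attach the image as a base64 data URL alongside the dictated text.
    let body: [String: Any] = [
        "model": "gpt-4-turbo",   // illustrative model identifier
        "messages": [[
            "role": "user",
            "content": [
                ["type": "text", "text": query],
                ["type": "image_url",
                 "image_url": ["url": "data:image/jpeg;base64,\(imageData.base64EncodedString())"]]
            ]
        ]]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)

    // Pull the assistant's reply out of the response JSON.
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    let choices = json?["choices"] as? [[String: Any]]
    let message = choices?.first?["message"] as? [String: Any]
    return message?["content"] as? String ?? ""
}
```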
While the app may not seem like a big deal at first, you really have to try it to see how revolutionary the combination of OpenAI’s Microsoft-backed technology and Apple’s hardware is (it’s no coincidence that Apple markets the headset as offering tomorrow’s technology today).