Nvidia's Avatar Cloud Engine (ACE) has yet to be seen in action, but the company is getting pretty close…
Nvidia has announced production microservices for its ACE technology, allowing game, tool, and application developers to easily integrate AI models into avatars and NPCs to make next-generation applications and games more powerful. Over the past year, Nvidia has expanded ACE in several ways, including NeMo SteerLM, which lets developers experiment with customizable attributes for digital avatars, and a partnership with Inworld AI, whose technology makes NPCs more context-aware.
Nvidia has also partnered with several developers this year to deploy Audio2Face (A2F) and Riva Automatic Speech Recognition (ASR) technologies with ACE microservices to further enhance AI-powered NPCs. The list of developers is already impressive: Ubisoft, Tencent, UneeQ, Ourpalm, NetEase Games, miHoYo, Convai, Charisma AI, and Inworld. The player's speech can be transcribed and fed into Convai's fine-tuned LLM and RAG models to create believable interactions with NPCs in games: characters can respond not only to the player's tone of voice, but also to other NPCs, based on the text derived from the player's input.
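To make the flow concrete, here is a minimal sketch of that dialogue loop: speech recognition, then an LLM with retrieval, then speech synthesis driving facial animation. The function names below (transcribe_speech, generate_reply, synthesize_voice, animate_face) are hypothetical stand-ins for the Riva ASR, Convai LLM/RAG, TTS, and Audio2Face stages; the real ACE components are network microservices, not local calls like these.

```python
def transcribe_speech(audio: bytes) -> str:
    """Stand-in for Riva ASR: player's voice -> text."""
    return "Where can I find the blacksmith?"

def generate_reply(player_text: str, npc_lore: list[str]) -> str:
    """Stand-in for a fine-tuned LLM with RAG: retrieve relevant lore,
    then produce an in-character reply grounded in it."""
    context = [fact for fact in npc_lore if "blacksmith" in fact.lower()]
    return f"Ah, traveller. {' '.join(context)}"

def synthesize_voice(text: str) -> bytes:
    """Stand-in for TTS: NPC reply text -> audio."""
    return text.encode("utf-8")

def animate_face(audio: bytes) -> None:
    """Stand-in for Audio2Face: drive the avatar's lip-sync from audio."""
    print(f"[A2F] animating {len(audio)} bytes of speech")

def npc_dialogue_turn(player_audio: bytes, npc_lore: list[str]) -> None:
    text = transcribe_speech(player_audio)  # 1. speech -> text (ASR)
    reply = generate_reply(text, npc_lore)  # 2. text -> reply (LLM + RAG)
    audio = synthesize_voice(reply)         # 3. reply -> speech (TTS)
    animate_face(audio)                     # 4. speech -> facial animation

npc_dialogue_turn(b"<mic capture>", ["The blacksmith works by the east gate."])
```

The point of the pipeline shape is that each stage is swappable: the same loop works whether the input is typed text or live speech, which is what lets one NPC react both to the player and to other NPCs.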
So it's not just about a character reacting to the text you type: your voice is recognized, and the response comes back not in some boring machine voice but in something more believable (the voice synthesis is not perfect yet). The technology is clearly being built with the future in mind.
All of this could lead many AAA publishers (it's no coincidence that Ubisoft is at the top of the list!) to create not just the template NPCs we saw in The Elder Scrolls V: Skyrim, but non-playable characters that can respond to us in a more meaningful way. The technology is still in the works, but judging from the presentation, the results look promising.
Source: WCCFTech