Project Chimera: World of Warcraft-Sized Games With Small Teams

It sounds incredible: Google wants to use improving machine learning technology to help small teams create big games.

Project Chimera has its developers and engineers investigating the potential applications of generative adversarial networks (GANs for short), a machine learning method that has previously been used to create art by training an AI on sets of similar images.
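To make the GAN idea concrete, here is a minimal, purely illustrative sketch (not Google's actual system, and all numbers are made up): a one-dimensional generator tries to mimic "real" data drawn from a distribution centred on 4.0, while a discriminator learns to tell real samples from generated ones. The same adversarial loop, scaled up, is what lets GANs produce new art from reference images.

```python
import math
import random

# Toy 1-D GAN: generator G(z) = a*z + c vs discriminator D(x) = sigmoid(w*x + b).
# "Real" data is drawn from a normal distribution with mean 4.0.
random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mean(xs):
    return sum(xs) / len(xs)

w, b = 0.0, 0.0   # discriminator parameters
a, c = 1.0, 0.0   # generator parameters
LR, BATCH = 0.05, 32

for _ in range(3000):
    real = [random.gauss(4.0, 1.0) for _ in range(BATCH)]
    z = [random.gauss(0.0, 1.0) for _ in range(BATCH)]
    fake = [a * zi + c for zi in z]

    # Discriminator step: descend on -log D(real) - log(1 - D(fake)).
    d_real = [sigmoid(w * x + b) for x in real]
    d_fake = [sigmoid(w * x + b) for x in fake]
    w -= LR * mean([-(1 - dr) * x for dr, x in zip(d_real, real)] +
                   [df * x for df, x in zip(d_fake, fake)])
    b -= LR * mean([-(1 - dr) for dr in d_real] + [df for df in d_fake])

    # Generator step: descend on -log D(fake) (non-saturating loss).
    d_fake = [sigmoid(w * x + b) for x in fake]
    a -= LR * mean([-(1 - df) * w * zi for df, zi in zip(d_fake, z)])
    c -= LR * mean([-(1 - df) * w for df in d_fake])

samples = [a * random.gauss(0.0, 1.0) + c for _ in range(1000)]
print(f"generated mean: {mean(samples):.2f} (real mean is 4.0)")
```

After training, the generator's output distribution drifts toward the real data's mean, even though it never sees the real samples directly, only the discriminator's feedback.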

“We’re taking on the risk that developers don’t want to. We’ve been talking externally to developers and asking them: what are the things that you’ve always wanted to do but have not been able to do? What are the things that you’ve had to cut out of your games because you haven’t been able to do them fast enough, or you just haven’t had the processing power?

What if a team of 14 people could make a game the scale of World of Warcraft? That’s an absurd goal, right? The thing about games like WoW is that they rely on a lot of heavy, repetitive content creation. The artists and the writers are doing a lot of essentially duplicate work; that’s where a lot of the investment goes. If you look at the amount of money that is spent making a game like World of Warcraft, it’s like 70% content and 30% or less code. Even though it’s a tremendous amount of code, it’s way more on the content side,” Erin Hoffman-John, head of creative for Stadia research and development, explained to MCVUK magazine. Content creation could be made simpler by training machine learning models on a set of reference images so they can generate new designs based on them.

However, Project Chimera could also be used to make balancing far easier for the developers “[…] by playing the game millions of times with reinforcement learning agents that we’ve trained on the rules of the game, that lets us test the balance very, very quickly. So even a small developer who might not have access to hundreds of people to playtest their game could have access to this reinforcement learning tool that will optimise the play of the game. It can learn the game by itself without being scripted and then tell you where the problems are in the balancing. It lets you test your theories of the design against what’s happening in real-time,” Hoffman-John added.
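The balance-testing idea can be sketched in miniature (again, this is an illustration, not Google's actual tool, and the weapon names and win probabilities are invented for the example): let a simple learning agent play a duel game thousands of times; if its learned statistics show one option dominating, the designer has found a balance problem without human playtesters.

```python
import random

# An epsilon-greedy agent "playtests" a toy duel game. The hypothetical
# win probabilities stand in for a real game's combat simulation.
random.seed(42)

WIN_PROB = {"sword": 0.65, "axe": 0.50, "bow": 0.48}  # made-up tuning values

wins = {k: 0 for k in WIN_PROB}
plays = {k: 0 for k in WIN_PROB}

def pick(eps=0.1):
    # Explore a random weapon occasionally; otherwise exploit the one
    # with the best observed win rate so far.
    if random.random() < eps or not any(plays.values()):
        return random.choice(list(WIN_PROB))
    return max(WIN_PROB, key=lambda k: wins[k] / plays[k] if plays[k] else 0.0)

for _ in range(20000):
    weapon = pick()
    plays[weapon] += 1
    if random.random() < WIN_PROB[weapon]:
        wins[weapon] += 1

rates = {k: wins[k] / plays[k] for k in WIN_PROB}
print({k: round(v, 2) for k, v in rates.items()})
```

A clearly dominant option in the learned win rates (here the sword) is exactly the kind of balance flag Hoffman-John describes, surfaced purely from automated play rather than scripted tests.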

This reinforcement learning method has already been used by DeepMind’s AlphaStar, which ranked above 99.8% of StarCraft II players. Not a bad ratio. Google’s idea is promising, but the AI will need a great deal of training, or the results could look shockingly bad. And with it, perhaps only half-jokingly, we may be another step closer to Skynet…

Source: WCCFTech


Anikó, our news editor and communication manager, is more interested in the business side of the gaming industry. She worked at banks, and she has a vast knowledge of business life. Still, she likes puzzle and story-oriented games, like Sherlock Holmes: Crimes & Punishments, which is her favourite title. She also played The Sims 3, but after accidentally killing a whole sim family, swore not to play it again. (For our office address, email and phone number check out our IMPRESSUM)
