Microsoft’s AI: From Normal To Racist!

Artificial intelligence can be turned around quickly, as Microsoft's experiment proved: Twitter users needed less than a day to make the AI chatbot, called Tay, respond in ways completely different from its initial state.

Microsoft let Tay loose on Twitter. The bot learns from responses, evolving its capabilities and its repertoire of replies as more and more users communicate with it. The experiment backfired in a somewhat hilarious way, to say the least.

In just a single day, Tay turned racist, anti-feminist and genocide-supporting. Mind you, it started out by saying things like "humans are super cool." Microsoft quickly took Tay offline to prevent further problems, but Twitter users started a hashtag campaign (#JusticeForTay) asking for the chatbot to return and learn from its mistakes.

At least we're still far from anything like Terminator's Skynet. Imagine Tay returning with a SHODAN quote and taking over Twitter entirely; however, seeing how easily humans "defeated" it, that's not going to happen any time soon.

