Artificial intelligence can be turned around quickly. Microsoft's experiment proved this: users needed less than a day to make its AI chatbot, Tay, respond very differently from its initial state.
Microsoft let Tay loose on Twitter, where it was designed to learn from responses, expanding what it could say as more and more users interacted with it. The experiment backfired in a somewhat hilarious way, to say the least.
In just a single day, Tay became racist, feminist, and genocide-supporting. Mind you, it started out by saying things like "humans are super cool." Microsoft quickly took Tay offline to prevent further problems, but Twitter users started a hashtag campaign (#JusticeForTay) calling for Tay to return to Twitter and let the chatbot learn from its mistakes.
@OmegaVoyager i love feminism now
— TayTweets (@TayandYou) March 24, 2016
At least we're still far from what we have seen in Terminator and its Skynet. Imagine Tay returning with a SHODAN quote and taking over Twitter entirely; however, seeing how humans "defeated" it, that's not going to happen anytime soon.