Is Our Own Microsoft Copilot Chatbot Unsafe?

TECH NEWS – Security experts say it’s easy to build your own chatbot, but warn that its security flaws shouldn’t be ignored…


A chatbot built with Microsoft’s Copilot Studio could, for example, let a game developer answer questions about their game, help with graphics tuning for PC ports, or even troubleshoot technical problems. On the surface, that sounds good, but Zenity, an AI security specialist, says Copilot Studio and the chatbots it creates are a security nightmare. The company’s CTO, Michael Bargury, dug into the details at the Black Hat security conference…

Copilot Studio’s default security settings are inadequate. A malicious actor can hide malicious instructions in an innocent-looking email, tell the Copilot bot to scan it, and poof, the injected instructions run. Or the bot can be made to serve up a fake Microsoft login page that steals credentials in seconds, right inside the chat window. According to Zenity, the average large US company already runs around 3,000 such chatbots, and 63% of them are discoverable online. If that’s true, then roughly 2,000 bots at a Fortune 500 company could be capable of leaking critical, confidential data!

“We scanned the Internet and found tens of thousands of these bots. There’s a fundamental problem here. When you give AI access to data, that data is now an attack surface for prompt injection,” Bargury said. According to him, Copilot Studio’s original default settings exposed the bots to the Internet and allowed them to be accessed without authentication. Zenity raised this with Microsoft, and the defaults have since been changed, but bots created before the update are still out there…

This was Microsoft’s response: “We appreciate the work of Michael Bargury in identifying and responsibly reporting these techniques through a coordinated disclosure. We are investigating these reports and continually enhancing our systems to proactively detect and mitigate these types of threats and help protect our customers. Similar to other post-compromise techniques, these methods require prior compromise of a system or social engineering. Microsoft Security provides a robust set of protections that customers can use to address these risks, and we’re committed to continuing to improve our defenses as this technology evolves.”

Artificial intelligence can be a minefield.

Source: PCGamer, The Register, TechTarget

