TECH NEWS – Security experts say it’s easy to build your own chatbot, but don’t ignore its security flaws…
Microsoft’s chatbot could, for example, allow a game developer to answer questions about his game, help with graphics tuning for PC ports, and even provide a cure for technical problems. On the surface, that’s good, but Zenity, an AI security specialist, says Copilot Studio and the chatbots it creates are a security nightmare. The company’s CTO, Michael Bargury, dug into the details at the Black Hat security conference…
Copilot Studio’s default security settings are unenforceable. A malicious user can place malicious code in an innocent email, tell the Copilot bot to scan it, and poof, the code runs. Or Copilot could serve up a fake Microsoft login page that could steal credentials in seconds, all visible to the chatbot. According to Zenity, the average large US company already runs 3,000 such chatbots, and 63% of them can be detected online. If that’s true, then 2,000 bots in a Fortune 500 company could be capable of leaking critical, secret data!
“We scanned the Internet and found tens of thousands of these bots. There’s a fundamental problem here. When you give AI access to data, that data is now an attack surface for rapid injection,” Bargury said. According to him, Copilot’s original default settings exposed the bots to the Internet and could be accessed without authentication. Zenity has raised this with Microsoft, but the bots are still there before the update…
This was Microsoft’s response: “We appreciate the work of Michael Bargury in identifying and responsibly reporting these techniques through a coordinated disclosure. We are investigating these reports and continually enhancing our systems to proactively detect and mitigate these types of threats and help protect our customers. Similar to other post-compromise techniques, these methods require prior compromise of a system or social engineering. Microsoft Security provides a robust set of protections that customers can use to address these risks, and we’re committed to continuing to improve our defenses as this technology evolves.
Artificial intelligence can be a minefield.
Source: PCGamer, The Register, TechTarget
Leave a Reply