TECH NEWS – It’s no coincidence that Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has said that using artificial intelligence in the wrong way could be “lights out for all”.
Check Point has reported that cybercriminals have found a loophole that lets them sidestep ChatGPT’s content moderation limits and make money doing it: for less than six dollars, ChatGPT can be made to generate phishing emails and malicious code. To do so, they have used OpenAI’s API (application programming interface) to create special bots for the popular messaging app Telegram.
Through these bots, anyone on the app can access an unrestricted ChatGPT, with the cybercriminals charging around $5.50 per hundred queries; customers are even shown examples of the harmful content the AI can generate. Other hackers have gotten past ChatGPT’s defenses by creating a special script (again, using OpenAI’s API), which was then published on GitHub. ChatGPT’s “dark side” can impersonate a bank or a company in an email and even indicate where the link to a phishing site (which you should never click on) should be placed.
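To picture the mechanism Check Point describes, here is a minimal, benign sketch of the relay pattern: a Telegram bot that simply forwards each incoming message to OpenAI’s API and sends back the model’s reply. The bot token, API key, and model name are placeholders, and this is an illustration of the pattern only, not the criminals’ actual code.

# Minimal sketch of the relay pattern: a Telegram bot that forwards each
# incoming message to OpenAI's API and returns the model's reply.
# Token, key, and model name are placeholders; illustrative only.
import openai
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

async def relay(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Forward the user's text straight to the API (openai<1.0 interface)...
    response = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": update.message.text}],
    )
    # ...and hand the model's answer back to the Telegram chat.
    await update.message.reply_text(response.choices[0].message.content)

app = ApplicationBuilder().token("YOUR_TELEGRAM_BOT_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, relay))
app.run_polling()

The point of the pattern is that the API endpoint sits behind the ChatGPT web interface’s moderation layer, so a bot like this exposes the model’s raw responses to whoever pays for access.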
If that weren’t scary enough, the chatbot can also refine existing malicious code (malware). Check Point had previously written about how easy it is to create nasty malware without any programming knowledge whatsoever, especially on earlier versions of ChatGPT, before OpenAI started tightening the terms of use around the technology. Microsoft plans to build the same technology into its updated Bing search engine, where a chat-based AI can provide more consumer-friendly, meaningful answers to open-ended questions; how such systems handle copyrighted material is another interesting question.
And since voice impersonation is already possible with AI (anyone can be made to “say” something they would never say in real life), who knows where things go from here?
Source: PCGamer