TECH NEWS – "Vibe hacking" is the term now used for an AI-driven form of cybercrime in which highly sophisticated attacks are carried out with the help of AI models.
Cybercrime is rising worldwide, with attacks becoming more advanced by the day. Because generative AI tools are so accessible, they are easy to abuse, and the nature of ransomware operations has shifted as a result. AI is no longer just writing scary ransom notes—it is also executing the attacks themselves. That makes AI not merely a communication tool but a central weapon in cybercrime.
Anthropic revealed that it had detected and disrupted several attempts by hackers to exploit its Claude AI systems for malicious purposes, including sending phishing emails and bypassing built-in security safeguards. By exposing these tactics and the sophisticated abuses of its Claude models, the company sheds light on the evolving strategies cybercriminals are using to mount their attacks.
One hacker group used Claude Code, Anthropic’s AI coding agent, to run an entire cyberattack campaign targeting 17 organizations, from government agencies and healthcare providers to religious institutions and emergency services. The AI model handled everything from generating ransom demands to executing the full hacking process. Anthropic dubbed this new threat “vibe hacking,” referring to the use of AI to apply emotional and psychological pressure that coerces victims into paying or giving up sensitive data.
The hackers reportedly demanded over $500,000, highlighting AI’s growing role in high-stakes cyber extortion. Anthropic also warned that abuse goes beyond ransomware: scammers have even leveraged AI to fraudulently land jobs at Fortune 500 companies, overcoming shortcomings in English proficiency or technical skills to get through recruitment systems.
Other cases included romance scams carried out via Telegram. Scammers built bots with Claude that could write convincing multilingual messages and generate flattering compliments for victims across the United States, Japan, and South Korea. In response, Anthropic banned accounts, tightened security barriers, and shared its findings with government agencies. It also updated its usage policy to explicitly prohibit scams or the creation of malicious software.
The rise of vibe hacking raises fresh concerns over AI’s potential to exploit victims with greater precision. To combat this, governments and tech companies must strengthen detection tools and ensure that security protections evolve fast enough to keep up with technological advances and prevent manipulative misuse.