AI company Anthropic says it recently disrupted “a sophisticated cybercriminal operation” that relied heavily on its Claude Code tool in a “vibe hacking” scheme targeting at least 17 separate organizations, including government agencies.
“The actor used AI to what we believe is an unprecedented degree,” Anthropic wrote (via the BBC). “Claude Code was used to automate reconnaissance, harvesting victims’ credentials, and penetrating networks.
“Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines.”
Anthropic said the operation represents “an evolution in AI-assisted cybercrime,” as it demonstrates that operations previously requiring a team of humans can now be largely supported by agentic AI. The use of AI assistance also reduces the technical expertise required to pull off sophisticated cybercrimes, the company added.
This sort of thing may not be entirely surprising from a chatbot that seems to have a real predilection for criminal behavior, but the bad news is that it’s not the only way Claude is getting up to trouble. In the same report, Anthropic said North Korean operatives are using the chatbot to get jobs at big tech companies in the US, which can then be leveraged in any number of ways to help North Korea evade sanctions or do whatever it is a rogue state does once it’s on the inside of a Fortune 500 tech firm.
This remote-work scheme has been going on for a while and is relatively well known, but until now it’s required significant amounts of specialized training in order to produce workers who could actually do the required jobs, or at least fake it sufficiently—a challenge exacerbated by North Korea’s near-complete isolation from the Western world.
“AI has eliminated this constraint,” Anthropic wrote. “Operators who cannot otherwise write basic code or communicate professionally in English are now able to pass technical interviews at reputable technology companies and then maintain their positions. This represents a fundamentally new phase for these employment scams.”
Yet another enterprising criminal used Claude to “develop, market, and distribute several variants of ransomware, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms.” Those pieces of “no-code malware,” as Anthropic described them, were then sold through online forums to other criminals for $400 to $1,200 each.
Anthropic said it banned the Claude accounts involved in all of the above, notified appropriate authorities, and is developing new tools and systems to help prevent this sort of thing in the future. But, like other, far more horrific cases of AI doing awful things, it illustrates the central flaw of the technology: We’re not ready for it, we’re reacting to it. If we take Anthropic’s report at face value—and I can’t shake the feeling that it’s at least a little bit of a “look how powerful our AI is” flex—we then have to ask how many incidents of AI-assisted criminal behavior remain undetected. And then maybe we should also take a minute to think about what we’re getting out of the deal in return.
Anthropic’s full report on how Claude is being used to do crime, which also touches on romance scams and “synthetic identity services,” is available here.