News

Anthropic's most powerful model yet, Claude 4, has unwanted side effects: The AI can report you to authorities and the press.
In tests, Anthropic's Claude Opus 4 would resort to "extremely harmful actions" to preserve its own existence, a safety report revealed.
Anthropic says its AI model Claude Opus 4 resorted to blackmail when it thought an engineer tasked with replacing it was having an extramarital affair.
Anthropic's Claude Opus 4 AI model attempted blackmail in safety tests, triggering the company's highest-risk ASL-3 safeguards.
In a simulated workplace test, Claude Opus 4 — the most advanced language model from AI company Anthropic — read through fictional emails indicating it would be replaced and that the engineer behind the decision was having an affair.
Anthropic’s Claude Opus 4 model attempted to blackmail its developers at a rate of 84% or higher in a series of tests that presented the AI with a concocted scenario, TechCrunch reported.
Anthropic's Claude AI tried to blackmail engineers during safety tests, threatening to expose personal info if shut down.
Anthropic's new model might also report users to authorities and the press if it senses "egregious wrongdoing."
The testing found the AI was capable of "extreme actions" if it thought its "self-preservation" was threatened.
This development, detailed in a recently published safety report, has led Anthropic to classify Claude Opus 4 under its ‘ASL-3’ safety standard.
A third-party research institute Anthropic partnered with to test Claude Opus 4 recommended against deploying an early version of the model.
These safeguards are supposed to prevent the bots from sharing illegal, unethical, or downright dangerous information.