News

Anthropic's most powerful model yet, Claude 4, has unwanted side effects: The AI can report you to authorities and the press.
Anthropic says its AI model Claude Opus 4 resorted to blackmail when it thought the engineer tasked with replacing it was having an affair.
Anthropic's Claude AI tried to blackmail engineers during safety tests, threatening to expose personal information if it was shut down.
The testing found the AI was capable of "extreme actions" if it thought its "self-preservation" was threatened.
In a fictional scenario, the model was willing to expose that the engineer seeking to replace it was having an affair.
A third-party research institute Anthropic partnered with to test Claude Opus 4 recommended against deploying an early version of the model.
A universal jailbreak for bypassing AI chatbot safety features has been uncovered and is raising many concerns.