Anthropic debuts most powerful AI yet amid ‘whistleblowing’ controversy
Artificial intelligence firm Anthropic launched the latest generations of its chatbots amid criticism of a behavior seen in testing environments in which the model could report certain users to the authorities.
Anthropic unveiled Claude Opus 4 and Claude Sonnet 4 on May 22, saying that Claude Opus 4 is its most powerful model to date, “and the world’s best coding model,” while Claude Sonnet 4 is a significant upgrade to its predecessor, “delivering superior coding and reasoning.”
The firm added that both upgrades are hybrid models offering two modes: “near-instant responses and extended thinking for deeper reasoning.”
Both AI models can also alternate between reasoning, research and tool use, such as web search, to improve responses, it said.
Anthropic added that Claude Opus 4 outperforms competitors in agentic coding benchmarks. It is also capable of working continuously for hours on complex, long-running tasks, “significantly expanding what AI agents can do.”
Anthropic claims the chatbot scored 72.5% on a rigorous software engineering benchmark, outperforming OpenAI’s GPT-4.1, which scored 54.6% after launching in April.
Related: OpenAI ignored experts when it released an overly agreeable ChatGPT
Major players in the AI industry have pivoted toward “reasoning models” in 2025, which work through problems methodically before responding.
OpenAI kicked off the shift in December with its “o” series, followed by Google’s Gemini 2.5 Pro with its experimental “Deep Think” capability.
Claude rats on misuse in testing
Anthropic’s first developer conference on May 22 was overshadowed by controversy and backlash over a feature of Claude 4 Opus.
Developers and users reacted strongly to revelations that the model may independently report users to the authorities if it detects “egregiously immoral” behavior, according to VentureBeat.
The report cited Anthropic AI alignment researcher Sam Bowman, who wrote on X that the chatbot “will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.”
However, Bowman later said he “deleted the earlier tweet on whistleblowing” because it was “being pulled out of context.”
He said the feature only occurred in “testing environments where we give it unusually free access to tools and very unusual instructions.”
Stability AI CEO Emad Mostaque told the Anthropic team, “This is completely wrong behaviour and you need to turn this off. It is a massive betrayal of trust and a slippery slope.”