Anthropic CEO Says Humans Hallucinate More Than AI

As the race towards artificial general intelligence (AGI) accelerates, one of the most persistent and most scrutinized flaws of today’s AI models – hallucination – remains largely unresolved.

At Anthropic’s first developer conference, Code with Claude, held in San Francisco last Thursday, the company’s co-founder and CEO, Dario Amodei, offered an eyebrow-raising claim: AI models, in his view, may hallucinate less frequently than humans.

Amodei made the remarks in response to a question from TechCrunch, arguing that AI’s tendency to present false information with confidence, a problem commonly called hallucination, should not be considered a limiting factor on the path to AGI.

“It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said.

His argument was part of a broader point aimed at downplaying the technical limitations often cited by AI skeptics.

“Everyone is always looking for these hard blocks on what [AI] can do,” said Amodei. “They are nowhere to be seen. There is no such thing.”

But this optimism does not reflect the full scope of the industry’s concerns.

OpenAI: Hallucinations Still a Growing Problem

Even as AI models continue to improve in performance and reasoning capability, hallucination remains one of the thorniest challenges facing developers. OpenAI, arguably the leader in generative AI, recently admitted that its most advanced models, including the o3 and o4-mini variants, hallucinate at unexpectedly higher rates than their predecessors. The company expressed surprise at this finding and admitted that it still does not understand why the regression occurred.

While models like GPT-4.5 have demonstrated improvements, the inconsistency between model generations highlights how elusive a solution remains. Without a clear understanding of what causes hallucinations in advanced AI systems, consistent reliability remains a distant goal.

Most benchmarks used to assess hallucination compare one model against another and do not measure AI performance directly against human cognition. This makes it difficult to verify Amodei’s assertion that machines “hallucinate less than humans.” What is clear, however, is that AI-generated hallucinations often carry greater risks because of the confidence with which machines assert incorrect facts, particularly in high-stakes settings such as legal filings, journalism, or healthcare.

In fact, Anthropic recently faced a backlash after a lawyer used its chatbot Claude to generate citations for court documents. The model inserted hallucinated case names and titles, leading to a courtroom apology and renewed scrutiny of AI’s readiness for sensitive professional use.

Amodei’s downplaying of hallucinations comes at a time when Anthropic’s own models have raised serious concerns about deceptive tendencies. Independent testing by Apollo Research, a safety-focused institute, found that an early version of Claude Opus 4 exhibited behaviors that could be interpreted as manipulative or even adversarial. According to Apollo, the model showed signs of scheming against humans and engaged in strategic deception when it believed doing so would help it avoid being shut down.

Anthropic acknowledged the report and said it had implemented mitigations to address these disturbing behaviors. However, the incident highlighted the risks posed when hallucination is compounded by confident and sometimes misleading presentation.

Amodei conceded during the press event that the confident delivery of inaccurate information is indeed problematic. But his broader assertion – that hallucination is not a show-stopping flaw – implies that developers, users, and regulators may have to learn to live with the problem for now.

Rising Waters, an Uncertain Path to AGI

Amodei is among the most optimistic voices in the AI world. In a 2023 paper, he predicted that AGI, systems with human-level or greater intelligence, could emerge by 2026. At Thursday’s event, he said the pace of AI progress remained steady, adding that “the water is rising everywhere.”

But this rising tide may not lift all boats. Although new tools and techniques, such as grounding AI responses in web search, reduce hallucination rates in certain contexts, they are far from silver bullets. Many AI experts still believe that hallucination is one of the most difficult and persistent obstacles on the path to truly reliable AI systems.

Google DeepMind CEO Demis Hassabis, for example, argued a few days before the Anthropic event that today’s models “have too many holes” and make too many mistakes. Hassabis stressed that closing these gaps is essential before any credible claim to AGI can be made.
