Pichai Says AI Is in Its ‘Jagged’ Phase — and It Still Can’t Spell “Strawberry”

The journey to artificial general intelligence (AGI) promises to be far messier than technology leaders once envisioned. Google CEO Sundar Pichai now describes this moment not as a steady march toward human-level intelligence, but as a bumpy ride filled with brilliant flashes and baffling stumbles, a stage he calls artificial jagged intelligence, or AJI.

Speaking on the Lex Fridman podcast, Pichai explained that the term AJI captures the uneven development of AI systems, which are capable of astonishing feats one moment and nonsensical errors the next. He credits the phrase either to himself or to Andrej Karpathy, the deep learning researcher and OpenAI co-founder.

“You see what they can do and then you can trivially find they make numerical errors or [struggle with] counting the R’s in ‘strawberry,’ which seems to trip up most models,” Pichai said. “I feel like we are in the AJI phase where dramatic progress [is happening], some things don’t work well, but overall, you’re seeing lots of progress.”

The comment reflects a broader frustration emerging among developers and users of advanced AI. The problem is not just novel errors; it is the persistence of a deeper flaw known as hallucination.

Hallucination, in which AI models confidently generate false or misleading information, remains one of the field’s most serious unsolved problems. Despite billions of dollars in investment and waves of model upgrades, today’s most advanced systems from OpenAI, Google, and Anthropic continue to hallucinate frequently. These errors are not just embarrassing but potentially dangerous in sensitive use cases such as legal advice, health care, or journalism.

Even OpenAI CEO Sam Altman, who once described GPT-4 as the most useful tool he had ever used, recently admitted to being surprised by how stubborn hallucinations have proven. Speaking at a private event earlier this year, Altman reportedly said he had expected hallucinations to be significantly reduced in more recent iterations, but that was not the case.

The Chicago Sun-Times and the Philadelphia Inquirer recently learned this the hard way after publishing an AI-generated summer reading list that included several nonexistent books, an error that rekindled debate over editorial responsibility and AI oversight.

Pichai expects AGI by 2030 — or something close

When DeepMind launched in 2010, its founders estimated it would take around 20 years to reach AGI. Google acquired the lab in 2014, and Pichai says that even if the timeline stretches, we are likely to see “mind-blowing” breakthroughs across several dimensions by 2030, even if AGI in the strictest sense has not yet been achieved.

“I would stress that the definition almost doesn’t matter, because you will have mind-blowing progress on many dimensions,” he said.

However, he noted that by then, the world will need clearer systems for labeling synthetic content to help people distinguish reality from AI-generated fiction.

Pichai has been one of the most vocal technology leaders pushing for coordinated global AI regulation. At the UN Summit of the Future in September 2024, he described four ways AI could significantly benefit humanity: improving access to knowledge in native languages, accelerating scientific breakthroughs, fighting climate change, and fueling economic growth.

But he has also echoed calls for AI safety, warning that without governance, AI could do more harm than good. He pointed to the need for frameworks and safeguards, a point reinforced by DeepMind CEO Demis Hassabis, who recently warned about autonomous models being misused by bad actors.

Speaking at the SXSW Festival in London, Hassabis highlighted the need for stricter restrictions on access to powerful AI systems, warning that the world is moving too slowly to regulate tools capable of destabilizing entire economies and societies.

“Both of those risks are significant,” Hassabis told CNN’s Anna Stewart in the interview. “But a bad actor could repurpose those same technologies for a harmful end … And so one big thing is, how do we restrict access to these powerful systems for bad actors, but enable good actors to do many incredible things with them?”

Despite these concerns, Pichai remains optimistic. He sees AJI not as a failure, but as the awkward adolescence of a powerful new technology: still learning, still stumbling, but pushing toward a future that could reshape everything.
