
‘Godfather of AI’ warns machines could develop thoughts beyond human understanding

Geoffrey Hinton, widely considered the “Godfather of AI”, has once again sounded the alarm over the uncontrolled acceleration of artificial intelligence development, warning that humans could soon lose the ability to understand what AI systems think or plan.

In a recent episode of the “One Decision” podcast, Hinton explained that today’s large language models still carry out “chain of thought” reasoning in English, which allows researchers and developers to trace how they reach certain conclusions. But this transparency may not last much longer.

“It gets more frightening if they develop their own internal languages for talking to each other,” said Hinton. “I wouldn’t be surprised if they developed their own language for thinking, and we have no idea what they’re thinking.”


He also noted that AI systems have already shown they can produce “terrible” thoughts, alluding to the potential for machines to evolve in dangerous and unpredictable ways.

These comments carry additional weight coming from Hinton, whose research underpins much of the AI revolution. For decades, he has been at the forefront of machine learning. In the 1980s, Hinton developed a technique called backpropagation, a key algorithm that allows neural networks to learn from data and a method that later enabled the explosive growth of deep learning. His landmark 2012 paper, co-written with two of his students at the University of Toronto, presented a deep neural network that achieved record-breaking results in image recognition. This work is widely credited with catalyzing the current AI boom.

Hinton went on to join Google, where he spent more than a decade working on neural network research and helped the company integrate AI into products like Search and Translate. But in 2023, he left the company, citing the need to speak more freely about his concerns over the risks posed by the very systems he helped create.

Since then, Hinton has been outspoken in his criticism of the AI industry’s rapid expansion, arguing that companies and governments are unprepared for what lies ahead. He believes that artificial general intelligence (AGI), a form of AI that rivals or surpasses human intelligence, is no longer a distant possibility.

He expressed concern that we will build machines smarter than ourselves, and that once that happens, we may not understand what they are doing.

This possibility would have profound implications. If AI models begin to reason in ways that humans cannot interpret, experts warn, the ability to monitor, audit and constrain these systems could disappear. Hinton fears that without mechanisms to ensure these systems remain “benevolent”, humanity could be taking on existential risks without adequate safeguards.

Meanwhile, the AI race is heating up. Technology companies are offering top researchers massive compensation packages as they jockey for dominance. Governments are also moving to secure their positions. On July 23, the White House published an “AI Action Plan” proposing limits on federal funding to states that impose “burdensome” AI regulations, and calling for faster construction of AI data centers, the critical infrastructure needed to power these increasingly complex models.

Many researchers believe that technical progress is far outpacing ethical and safety considerations. Hinton’s voice joins a growing chorus of experts urging greater oversight, transparency and international cooperation to mitigate AI’s risks to economies, societies and even human survival.

In a field he helped define, Hinton’s warnings have grown more urgent. While others in the tech world continue to tout AI’s potential for productivity and growth, Hinton insists that understanding and controlling these systems should be a higher priority.

The only hope of ensuring that AI does not turn against humans, Hinton said on the podcast, is if “we can find a way to make them guaranteed benevolent”.
