It's the technology topic of the hour. Since the end of last year, artificial intelligence (AI) has hit the public consciousness with full force. Week after week, new programs are introduced with increasingly impressive capabilities. This should be a moment when industry veterans like Geoffrey Hinton are finally reaping the rewards of their success. Instead, Hinton has just thrown in the towel after 50 years of AI development. His enthusiasm has given way to fear.
Hinton describes that fear in a detailed interview with the "New York Times". That he now feels free to speak about it openly has to do with his most recent decision: after more than ten years, Hinton has resigned from Google, because he sees the company and its competitors heading for disaster with their eyes wide open.
For a long time he believed the technology was far inferior to humans. "But what if what happens in these systems is actually much better than what happens in our brains?" he says, summing up his concern. "Few people believed in the idea that this technology could actually become smarter than humans," Hinton recalls. "Most thought that was still a long way off. I thought so too, that it would take at least 30 to 50 years," he explains. "Obviously, I no longer think that."
"Look at where AI was and where it is now," Hinton told the newspaper, appalled at the speed of development. "Take the difference and project it into the future. It's terrifying."
He is even more worried that companies like Google do not seem to see this danger to the same extent. Until last year, he still believed Google was a responsible steward of the technology, one that weighed the risks carefully. But since the company rushed out its own chatbot, Bard, in the spring, that belief is gone, he says, explaining his change of heart. Responsibility has given way to a hasty race for technological supremacy.
The rethink is remarkable because Hinton is one of the key thought leaders behind the technology. As early as 1972, while still a student in Edinburgh, he suggested that real artificial intelligence could only be achieved by relying on a kind of neural connection modeled on the human brain. He succeeded 40 years later: together with his team, he trained a neural network to independently recognize objects in images. Today, neural networks are the basis of all modern AI approaches, and Hinton is often referred to as the "Godfather of AI".
And his work is the direct foundation for two of the most important companies in the field. Shortly after he and two of his students unveiled their neural network, Google bought his company for $44 million. That work is considered the basis for Bard. One of the students, Ilya Sutskever, later moved to OpenAI, where he works as chief scientist on ChatGPT.
Today, Hinton fears the consequences of his work. Because of programs like Midjourney, which create deceptively realistic images, and programs like ChatGPT, which generate text on command, "we may soon no longer be able to know what is actually true," he fears. "I can't see how to stop bad actors from using it for bad things," Hinton says pessimistically.
At the same time, he considers state regulation difficult. The competition between the tech giants ensures that the companies cannot regulate themselves, he believes. And unlike with nuclear weapons, it is hardly possible to monitor development around the world. His radical suggestion: scientists should join forces and deliberately slow down further development. "We shouldn't let this continue to grow unless we know whether we can control it."
Hinton himself once invoked the comparison with nuclear weapons, but in the opposite spirit. Robert Oppenheimer, the father of the atomic bomb, famously said that when you see something that is technically sweet, you go ahead and do it. Hinton used to quote that line often, he admits to the "NYT". Today he would no longer do so. "I console myself with the usual excuse," he explains. "If I hadn't done it, it would have been someone else."
Source: New York Times