2023 is shaping up to be the year of Artificial Intelligence—generative AI chatbots like ChatGPT, Bing, and Google’s Bard are making headlines. Human jobs are at risk, with IBM halting hiring for 7,800 roles. Geoffrey Hinton, known as the ‘Godfather of AI,’ has left Google citing concerns about the technology’s dangers. Despite the benefits that AI can bring, such as increased efficiency and productivity, the fear of AI outpacing humans and becoming too smart for its own good has been raised time and again.
Now, another Google executive, Demis Hassabis, CEO of DeepMind, a startup acquired by Google in 2014, has made a bold claim that AI could reach human-level cognition in as little as a few years.
Hassabis, speaking at the Wall Street Journal’s Future of Everything Festival in New York City, said that developments in the Artificial Intelligence space shouldn’t be slowed down. “I think we’ll have very capable, very general systems in the next few years,” he said, adding that he doesn’t “see any reason why that progress is going to slow down. I think it may even accelerate.” He continued, “So I think we could be just a few years, maybe within a decade away.”
According to Fortune, Google announced last month that it was merging its core AI research team with DeepMind, with Hassabis becoming the CEO of the newly combined unit.
In related news, Geoffrey Hinton, touted as the ‘Godfather of AI,’ quit Google to speak freely about his concerns. Hinton, who has won the Turing Award, the computer science equivalent of the Nobel Prize, has warned that future versions of AI could pose a threat because they often learn unexpected behavior from the vast amounts of data they analyze. This is particularly troubling as AI systems might soon generate and run their own code, which could make truly autonomous weapons and killer robots a reality.