A new study from Google DeepMind suggests that artificial general intelligence (AGI), AI with human-level capabilities, could emerge by 2030. The research outlines a range of potential risks, up to and including the possibility that AGI could lead to human extinction. While the study does not spell out how such an outcome might unfold, it treats the danger as serious enough to demand attention now.

The authors argue that society, not just tech companies, should decide what counts as significant harm, guided by collective values and risk tolerance. The study identifies four main categories of AGI risk: misuse, misalignment, mistakes, and structural risks. Of these, misuse, in which individuals or groups deliberately direct AGI toward harmful ends, is singled out as especially concerning.

Google DeepMind stresses the need for preventive measures against such misuse. The risks are framed as immediate concerns requiring prompt action from AI developers, regulators, and international organizations; without appropriate oversight, the researchers argue, severe consequences, including existential threats, are plausible. Earlier this year, Demis Hassabis, CEO of Google DeepMind, emphasized the need for a global framework to govern AGI.

He called for an international research initiative akin to CERN, focused on the safe development of AGI, along with an oversight body modeled on the International Atomic Energy Agency (IAEA) to monitor risky projects. He also proposed a collaborative international body, along the lines of the United Nations, to oversee how AGI systems are deployed.

What Sets AGI Apart

Unlike traditional AI, which is built for specific tasks, AGI aims to replicate the flexibility of human intelligence: the capacity to learn, understand, and solve problems across many domains rather than being confined to a single function.

Essentially, AGI would be a machine able to adapt to new situations and goals much as humans do. That flexibility is the source of both the excitement and the concern, since an unpredictable general-purpose system could outstrip existing safety protocols. The study underscores the need for a cautious, coordinated global approach to AGI development, given the potential for significant harm.