
Industry leaders: AI poses risk of extinction


Artificial intelligence poses an existential threat to humanity that must be addressed, a group of tech executives and AI scientists said Tuesday in a joint statement. 


What You Need To Know

  • More than 350 executives, researchers and engineers working in AI said in a joint statement Tuesday, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
  • Those who signed the letter include AI-pioneering computer scientists Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei
  • Dan Hendrycks, the Center for AI Safety’s executive director, told The New York Times the statement was kept short in an effort to unite AI experts who might disagree about the specific risks or steps needed to counter them
  • Tuesday’s letter is the latest warning about artificial intelligence issued by members of the tech industry, including people who are working to further develop the technology

The brief statement, released through the Center for AI Safety, says, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The letter was signed by more than 350 executives, researchers and engineers working in AI, including Geoffrey Hinton and Yoshua Bengio, two of the three computer scientists who in 2019 won a Turing Award for their pioneering work on neural networks. OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei and Microsoft’s chief technology and chief scientific officers were among others who signed off on the statement.

An introduction to the statement noted that AI experts, journalists, policymakers and the public are increasingly discussing the risks of AI and that it can be difficult for some experts to voice their concerns. The statement aims to overcome that obstacle and open up discussion, as well as put on the record that a growing number of experts and public figures are taking the potential dangers seriously. 

Dan Hendrycks, the Center for AI Safety’s executive director, told The New York Times the statement was kept short in an effort to unite AI experts who might disagree about the specific risks or steps needed to counter them. 

Artificial intelligence is being celebrated by some as revolutionary technology that could reshape many aspects of life — everything from searching the internet to curing diseases. But some argue it could also steal people’s jobs, be used to promote disinformation or spiral out of the control of humans if systems are allowed to write and execute their own code.

AI has been around for years but has exploded into the mainstream in recent months with the release of chatbots, image generators and other tools.

Tuesday’s letter is the latest warning about artificial intelligence issued by members of the tech industry, including people who are working to further develop the technology.

Earlier this month, Hinton resigned from Google so he could speak out against the dangers of AI. Among Hinton’s concerns is that the internet could become flooded with fake images and text and the average person will “not be able to know what is true anymore,” he told The New York Times.

Altman told a Senate subcommittee this month he believes the AI industry needs government regulation, adding, “If this technology goes wrong, it can go quite wrong.”

Speaking last week to a group of government officials, members of Congress and policy experts, Microsoft President Brad Smith revealed a five-point blueprint for the public governance of AI. Among his proposals: AI-generated content should be labeled and a new government regulatory agency should be created.

In March, more than 1,000 technology leaders and researchers, including Elon Musk of SpaceX, Tesla and Twitter and Steve Wozniak of Apple, wrote an open letter calling for companies to pause for six months the development of AI systems more powerful than OpenAI’s latest GPT-4 release, arguing they “can pose profound risks to society and humanity.”

On Tuesday, the Biden administration issued a request for public input as it works to develop a national strategy governing artificial intelligence.
