Former Google Executive Warns AI Could Help Create Synthetic Viruses, Spark Pandemics

A former Google executive has issued a warning about the potential dangers of artificial intelligence (AI) in creating synthetic viruses that could trigger pandemics. This caution underscores the need for responsible AI development and oversight to prevent unintended consequences.


A former Google executive and AI expert, Mustafa Suleyman, has raised alarms about the misuse of artificial intelligence, which could result in the creation of synthetic viruses capable of sparking pandemics.

Suleyman, co-founder of Google DeepMind, expressed concern that AI could be used to engineer pathogens capable of far greater harm, emphasizing the urgent need for responsible AI development to avoid such catastrophic scenarios.

“The darkest scenario is that people will experiment with pathogens, engineered synthetic pathogens that might end up accidentally or intentionally being more transmissible or more lethal,” he said in a recent podcast episode.

Mr. Suleyman likened the restrictions needed on advanced AI technology to the existing measures that prevent easy access to dangerous pathogens such as anthrax. He emphasized the importance of regulating access to the software that runs such AI models in order to mitigate potential risks.

“That’s where we need containment. We have to limit access to the tools and the know-how to carry out that kind of experimentation,” he said on The Diary of a CEO podcast.

“We can’t let just anyone have access to them. We need to limit who can use the AI software, the cloud systems, and even some of the biological material.”

“And of course on the biology side it means restricting access to some of the substances,” he added, saying that AI development needs to be approached with a “precautionary principle”.

Mr. Suleyman’s remarks align with concerns raised in a recent study, which found that even undergraduate students with no biology background could obtain bioweapon suggestions from AI systems.

Researchers, including some from the Massachusetts Institute of Technology, found that chatbots could propose “four potential pandemic pathogens” within an hour and explain how to create them using synthetic DNA.

The research found chatbots also “supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organisation”.

Such large language models (LLMs), like ChatGPT, “will make pandemic-class agents widely accessible as soon as they are credibly identified, even to people with little or no laboratory training,” the study said.

The study, whose authors included MIT biorisk expert Kevin Esvelt, called for “non-proliferation measures”.

Such measures could include “pre-release evaluations of LLMs by third parties, curating training datasets to remove harmful concepts, and verifiably screening all DNA generated by synthesis providers or used by contract research organisations and robotic ‘cloud laboratories’ to engineer organisms or viruses”.

