
Can artificial intelligence cause a pandemic or epidemic?





Experts have warned that artificial intelligence models could help create pathogens (germs, viruses, and other microorganisms) capable of "causing a pandemic or epidemic."


Specialized artificial intelligence models trained on biological data have made significant progress, helping to speed up the development of vaccines, disease treatments, and more. But the same qualities that make these models useful also pose potential risks.


That's why, in a new research paper published on August 22 in the peer-reviewed journal Science, experts are calling on governments to introduce mandatory oversight and guardrails for advanced biological AI models.

The authors caution that while today's AI models may not "contribute significantly" to biological risks, future systems could help engineer new pathogens capable of causing pandemics.


That warning comes from co-authors at Johns Hopkins University, Stanford University, and Fordham University, who note that AI models trained on, or capable of purposefully manipulating, large amounts of biological data have delivered broad benefits, from accelerating drug and vaccine design to improving crop yields.


But as with any powerful new technology, such biological models also pose significant risks.


"Because of their general nature," the authors write, "the same biological model that can engineer a benign viral vector for delivering gene therapy can be used to engineer a more pathogenic virus that can evade vaccine-induced immunity."


The paper went on to say: "Voluntary commitments among developers to assess the potentially dangerous capabilities of biological models are meaningful and important, but they cannot stand alone. We suggest that national governments, including the United States, pass legislation and establish mandatory rules that would prevent advanced biological models from contributing significantly to large-scale risks, such as the creation of new or improved pathogens capable of causing major epidemics or even pandemics."


While today's AI models are unlikely to "contribute significantly" to biological risks, the authors warn that "the essential ingredients for creating worrisome advanced biological models may already exist or will soon exist."


The experts reportedly recommended that governments define a battery of tests that biological AI models must pass before they are released to the public, so that officials can determine how tightly to restrict access to each model.

"We need to plan now," said Anita Cicero, deputy director of the Johns Hopkins Center for Health Security and one of the paper's authors, according to Time. "Some government oversight and requirements will be necessary to mitigate the risks of particularly powerful tools in the future."


Given the expected advances in AI capabilities and the relative ease of obtaining biological materials and hiring third parties to run experiments remotely, Cicero believes that biological risks from AI could materialize "within the next 20 years, and perhaps even much sooner," unless there is proper oversight.


She added: "We need to think not only about the current version of all the tools available, but also the next versions, because of the exponential growth that we see. These tools will become more powerful."


Source: Fox News, via https://ar.rt.com/y7hb. Published September 5, 2024.
