Columns Indian Scenario

Emerging biosecurity threats in the age of AI

Suryesh K Namdeo & Pawan Dhar

As artificial intelligence (AI) enables the transformation of biology into an engineering discipline, an effective governance model that uses threat forecasting, real-time evaluation, and response strategies is urgently needed to address accidental or deliberate misuse. This article discusses the risks at the interface of AI and biosecurity and what India could do to better prepare for potential AI-bio risks.

Emerging biosecurity threats in the age of AI. Image for representation only.

AI, or artificial intelligence, refers to computer systems that analyse information and make decisions. AI systems can learn from data generated by machines and people, and turn this learning into models that make predictions or recommend actions.

In the last few years, AI has emerged as one of the most consequential technologies with wide-ranging implications for economic growth, privacy, safety, and security. 

New technological paradigms are also emerging from the convergence of AI with other key technologies, bringing potential benefits and risks that are yet to be fully understood. One such technological convergence is currently underway between AI and synthetic biology, a transdisciplinary field that involves the engineering and synthesis of biological agents and organisms.

Several AI-based tools could be used for predicting, finding, designing, and simulating the structure, functions, and mutual interactions of biomolecules. While this potentially brings enormous benefits in supporting vaccine development, medicinal drug discovery, biofuel generation, and overall growth of the bioeconomy, it also creates a new landscape of biosafety and biosecurity risks.

Risks at the interface of AI and biosecurity

AI-based biological design tools (BDTs), originally created to design and discover medicinal drugs, could be used to predict the structures of new toxins with dual-use potential. Further, large language model (LLM) tools such as ChatGPT can now enable actors with limited training to find ways to synthesise pathogens with pandemic potential. AI tools developed based on host-pathogen interactions can be used to learn more about properties like immune evasion, with the potential for dual use.

Moreover, life science-specific LLMs could be used to find effective methods for disseminating harmful biological agents. AI could also be used to circumvent the sequence-, taxonomy-, and list-based biosecurity screening measures currently in place in different countries. Broadly, while LLMs could lower the barriers to misuse of synthetic biology by providing easy access to sensitive information, BDTs could augment the capabilities of malicious actors who already have the training and resources to develop biological weapons.
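To make the screening concern above concrete, the toy sketch below (using made-up ACGT strings, not real biological data) shows why naive list-based sequence screening is brittle: an exact-match blocklist can be evaded by a synonymous variant that changes the DNA sequence while encoding the same protein. Real screening systems are far more sophisticated; this is only a conceptual illustration.

```python
# Toy illustration of exact-match, list-based sequence screening.
# All sequences here are invented ACGT strings for demonstration only.

BLOCKLIST = ["ATGGCACGT"]  # hypothetical "sequence of concern"

def naive_screen(order: str) -> bool:
    """Return True if the order contains a blocklisted sequence verbatim."""
    return any(bad in order for bad in BLOCKLIST)

# A synonymous codon substitution preserves the encoded protein but changes
# the DNA sequence, so an exact-match screen no longer flags it.
original = "ATGGCACGT"   # codons: ATG GCA CGT
variant = "ATGGCCCGT"    # GCA -> GCC (both codons encode alanine)

print(naive_screen(original))  # True: caught by the blocklist
print(naive_screen(variant))   # False: evades the exact-match check
```

This is why the article's later recommendations emphasise strengthening screening protocols against AI-assisted circumvention rather than relying on static lists alone.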

AI could increase the efficiency of CRISPR-based genome editing experiments, which could have dual-use implications when conducted on dangerous pathogens or human subjects. Further, AI could also increase the sophistication, frequency, and diversity of cyber-biosecurity attacks, such as the 2022 ransomware attack on the All India Institute of Medical Sciences (AIIMS) and attacks on vaccine R&D units. Another important concern relates to data bias in AI tools: most training data comes from the Global North and could carry gender, ethnicity, and other socio-economic biases. Such biases could create systemic issues with AI tools.

One of the most alarming possibilities at the convergence of AI and synthetic biology is the potential ability to create customised chemical or biological weapons that could affect a specific section of the population based on certain biological traits. 

The risk landscape will keep evolving as new AI tools are developed and released without safety and security checks. 

International efforts to manage AI-Bio risks

Given the nature and seriousness of these safety and security risks, a few countries and organisations have begun efforts to develop policy frameworks and expert forums to manage and mitigate them. Some of these are mentioned below:

US Executive order on AI safety

Section 4.4 of the October 2023 US Executive Order (EO) on safe, secure, and trustworthy development and use of AI examines the complex interplay between AI and biosecurity. The EO asks the National Academies to assess how generative AI models trained on biological data can elevate biosecurity risks and to propose mitigation strategies. It also highlights the potential of AI to bolster biosecurity within synthetic biology by asking the White House Office of Science and Technology Policy to develop a framework for regulating synthetic nucleic acid procurement, with regulatory oversight to curb misuse amplified by AI capabilities. Finally, it emphasises transparent labelling of AI-generated content and establishing the authenticity of all government-produced or government-funded digital content, including biological data repositories.

UK AI Safety Summit

The UK AI Safety Summit in November 2023 called for a proactive, globally inclusive approach to biosecurity risks arising from AI-enabled life sciences tools, anticipating and collectively addressing emergent threats before they materialise. The summit proposed a future governance framework that balances proportionality and adaptability with predictability and minimal intrusion. It also highlighted the need for a deeper understanding of how AI tools, particularly those specialised in the life sciences, influence each stage of the biological weapons life cycle, so that specific vulnerabilities can be identified and targeted solutions developed, ultimately reducing the risks posed by AI in the life sciences.

AI-Bio Global Forum

The US-based Nuclear Threat Initiative (NTI | Biological) proposes setting up an "AI-Bio Global Forum" to establish risk-reduction measures, and advocates a radically new, adaptable national approach. Scaling promising AI safeguards, exploring advanced guardrails, and strengthening digital-to-biological controls are crucial. The need of the hour is to develop proactive measures to harness the power of AI-bio for good while mitigating its existential threats.

What could India do to better prepare for AI-bio risks?

The rapid development of AI-powered synthetic biology tools presents a potential for accidental or deliberate misuse, raising the spectre of global biological catastrophes. Policymakers require agile governance frameworks to keep pace. India, as a developing country with a rapidly growing bioeconomy, will face distinct challenges here, in part because it lacks a national biosecurity strategy. Despite these limitations, a number of policy steps and precautionary measures can be taken to better prepare the country for emerging AI-bio risks:

  • Formulate a nationwide task force to study biosecurity threats stemming from interdisciplinary approaches to biology, including AI, and provide suggestions for regulatory measures.
  • Conduct a technology foresight study at the interface of AI and synthetic biology to better understand potential risks and benefits and develop policy options to better manage them.
  • Include biosecurity as a key safety and security measure in the national AI strategy and the recently approved national AI mission. It could incorporate a national regulatory framework to review biosecurity measures at the intersection of AI-based BDTs, LLMs, and biological systems.
  • Create a multidisciplinary expert mechanism to conduct biosafety and biosecurity checks of AI tools in the Indian context. This mechanism can advise the government to regulate the usage of such tools in the country.
  • Craft a strategy for enhancing identification, evaluation, and response capabilities.
  • Develop updated, comprehensive policy guidelines on data safety, security, and privacy concerning biological data and its potential use in the development of AI models and tools.
  • Enhance regulatory protocols concerning the synthesis, screening and import of nucleic acids while taking into account the possible ways in which AI can be used to circumvent the existing safety and security measures.
  • Employ AI models to forecast future generations of harmful biological agents and potential disease outbreaks in the Indian context.
  • Support the early establishment of a science advisory mechanism for the UN Biological Weapons Convention and push for the development of safety and security measures at the AI-bio interface in multilateral forums.
  • Support and facilitate industry-led international initiatives for the safety and security screening of AI-bio tools.
  • Identify, categorise, and trace synthetic content generated by AI systems to ensure the authenticity and origin of digital materials.
  • Develop funding policy guidelines for research at the interface of biology and computation that incorporate measures for biological safety and security such as requirements for researchers to self-assess their research on certain safety and security parameters.

The rapid development of AI-powered synthetic biology creates exciting opportunities, but it also raises biosecurity concerns. India lacks a national biosecurity strategy that cuts across disciplines. It is time to implement a series of policy steps, including forming a biosecurity task force, conducting risk assessments, and establishing biosafety regulations, to mitigate potential risks and ensure the safe deployment of AI-bio technologies.