The intersection of artificial intelligence (AI) with biotechnology has ushered in a period of innovation in healthcare. The interdisciplinary nature of AI enables it to analyse and interpret large data sets from multiple domains. In the life sciences, AI-biotech convergence holds the potential to alter the landscape of diagnostics, disease-progression monitoring, precision medicine, and the prediction of public health threats.

While the convergence of AI with biotechnology holds immense potential for innovation in science and medicine, it also carries scope for misuse. Popular discourse on AI governance includes the risk of AI-biotechnology (AI-bio) tools being utilised for malicious purposes, including the production of bioweapons. A detailed understanding is needed to assess how feasible it is to develop bioweapons with AI-bio tools and to formulate safeguards against such threats.
The case for governance
As AI advances into multiple domains, how it should be governed to ensure its responsible and ethical use has come under scrutiny. The UN’s Governing AI for Humanity report underscores the need to govern AI because ‘no one currently understands all of AI’s inner workings enough to fully control its outputs or predict its evolution.’ The US and the UK have taken steps in this regard, but India has yet to develop a framework to address these concerns.
Popular news articles have drawn considerable attention to how large language models (LLMs) like ChatGPT could increase the technical know-how available to malicious actors and provide them with information on how to create viruses with pandemic potential. Undue publicity, however, can exaggerate how easily a new technology can be developed and misused. In the case of al-Qaeda’s bioweapons plan, the group insisted that its attempts to produce a bioweapon stemmed from US reports claiming that doing so was easy and cheap. Overstating the ease with which bioweapons can be manufactured may thus encourage adversaries to pursue their production.
OpenAI’s 2023 stress test found that LLMs can provide information on how to order oligonucleotides, explain experimental protocols, and assist in troubleshooting experiments. Another study found that LLMs can point users to DNA synthesis companies unlikely to flag suspicious oligonucleotide orders, suggest mutations to enhance the pathogenicity of viruses, and identify contract research organisations that could carry out such experiments. The information provided is publicly available, but it is presented in a manner that a non-expert can understand. On closer inspection, it is clear that LLMs democratise knowledge but only slightly lower the barriers to bioweapons development. Experts have long held that bioweapons production requires not just information but also tacit knowledge, the hands-on skill needed to successfully carry out a biological experiment, which is acquired primarily through advanced formal training.
Implications of the AI-Bio threat
AI is essentially a system shaped by the data it receives, and its outputs depend on the user’s intent. A tremendous gap remains between the digital design of a bioweapon and its physical manufacture.
Bridging that gap relies heavily on the intent of the user and their level of scientific training. Existing AI-bio tools can ‘hallucinate’ or provide misleading information, which a non-expert user would be unable to detect. Further, biological experiments require expensive and specialised equipment and materials, which pose a significant barrier to bioweapons production.
However, other AI-bio tools such as Biological Design Tools (BDTs), which currently aid in the design of biological molecules, could advance to the point where they help malevolent actors develop bioweapons capable of evading immune responses or resisting existing therapies. In addition, there are concerns that AI-bio tools may skew scientific knowledge and fuel disinformation and misinformation, particularly during public health emergencies, making bio-attribution (the assigning of responsibility for a biological threat) difficult.
Policy Recommendations
India is a signatory to the Bletchley Declaration, which advocates for the safe development and deployment of AI. The intersection of AI with biotechnology deserves comparable attention, yet it sits in a policy vacuum in India. A policy of dissuasion would deter malicious actors from developing bioweapons using AI-bio tools.
India needs to formulate threat assessments using red-teaming exercises to understand the bioweapons-relevant capabilities of existing AI-bio tools in the Indian context. This entails evaluating what current AI-bio tools can do and mapping India’s security architecture. Such exercises can also ascertain the risks posed by future AI-bio tools, including their intersection with other disruptive technologies such as unmanned aerial vehicles (drones) and 3D printing, which could enhance the delivery of bioweapons.
India can introduce guidelines mandating that DNA synthesis companies adopt a know-your-customer approach to the supply of biological materials. The International Gene Synthesis Consortium (IGSC), the International Biosafety and Biosecurity Initiative for Science (IBBIS), and SecureDNA are examples of organisations and initiatives that screen oligonucleotide orders. Educating and incentivising researchers and academia on potential areas of misuse would build another long-term barrier. A further guardrail would be to engage with AI-bio tool developers to regulate the nature of the biological data they make available.
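To make the screening recommendation concrete, the following is a minimal, purely illustrative sketch in Python of the kind of logic a know-your-customer and sequence-screening pipeline might involve. Every element here, including the customer registry, the placeholder sequence fragments, and the function names, is a hypothetical assumption for illustration; it does not represent the actual screening mechanisms used by IGSC, IBBIS, SecureDNA, or any DNA synthesis company.

```python
# Toy illustration of order screening: a KYC check followed by a comparison of
# the ordered sequence against a (hypothetical) list of sequences of concern.
# All data below are placeholders, not real pathogen sequences or real customers.

SEQUENCES_OF_CONCERN = {
    "ATGCGTACCTGAGGAT",  # placeholder fragment
    "TTGACCGGTACGATCC",  # placeholder fragment
}

VERIFIED_CUSTOMERS = {"university-lab-001", "biotech-firm-042"}  # hypothetical KYC registry


def kmer_matches(order_seq: str, reference: str, k: int = 12) -> bool:
    """Return True if any k-length window of the reference appears in the order."""
    order_seq = order_seq.upper()
    return any(reference[i:i + k] in order_seq for i in range(len(reference) - k + 1))


def screen_order(customer_id: str, order_seq: str) -> str:
    """Apply a know-your-customer check, then a sequence-of-concern check."""
    if customer_id not in VERIFIED_CUSTOMERS:
        return "REJECT: customer not verified (KYC failure)"
    for fragment in SEQUENCES_OF_CONCERN:
        if kmer_matches(order_seq, fragment):
            return "HOLD: matches a sequence of concern; refer to human review"
    return "APPROVE"


if __name__ == "__main__":
    print(screen_order("university-lab-001", "GGGATGCGTACCTGAGGATCCA"))  # HOLD
    print(screen_order("unknown-buyer", "GGGGGGGGGG"))                   # REJECT
```

Real screening systems rely on far more sophisticated matching against curated databases and on layered human review; the sketch only shows that the know-your-customer and sequence-screening steps recommended above can be operationalised programmatically.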
Finally, AI Safety Institutes would foster an environment in which AI and its intersecting technologies are developed and deployed ethically. India’s IndiaAI Mission, which aims to promote ethical AI technologies, can take part in this endeavour.
Discourse on the development of bioweapons with the aid of AI-bio tools has been popularised by media reports. The threat needs to be assessed in the Indian context, where it currently falls into a policy vacuum. Effective guardrails would act as a deterrent to bioweapons development.
A full report can be read here.