Overview
The rapid convergence of artificial intelligence (AI) and biotechnology is heralding a new era in the life sciences, fundamentally altering the landscape of biological research and development. AI-driven tools are dramatically accelerating the pace of discovery, from de novo protein design and drug discovery to the automation of laboratory workflows. However, this unprecedented power brings profound and complex challenges for biosafety and biosecurity. This review synthesizes the current state of AI applications in biology, systematically analyzes the associated dual-use risks, and critically examines the evolving regulatory and governance frameworks. It delves into the technical, policy, and ethical dimensions of risk mitigation, exploring safeguards such as model auditing, DNA synthesis screening, and controlled-access platforms. Through detailed case studies and an analysis of the current policy landscape, this article argues that a proactive, multi-layered, and internationally coordinated approach is essential to harness the benefits of AI in biotechnology while safeguarding against catastrophic misuse or accidental harm. The review concludes that the biotechnology and AI communities bear a critical responsibility to engage with these challenges, ensuring that the transformative potential of this convergence is realized responsibly and securely.
1. Introduction: The Dawn of a New Biological Era
The 21st century is witnessing a paradigm shift in the life sciences, driven by the powerful convergence of artificial intelligence and biotechnology. This synergy, often termed "AIxBio," is transforming biology from a predominantly observational science into a predictive and engineering discipline [1]. The ability to read, write, and edit genetic code is now being supercharged by machine learning algorithms that can decipher complex patterns in biological data, predict the structures and functions of biomolecules, and even design entirely novel biological systems from first principles [2]. Tools like AlphaFold have revolutionized structural biology by solving the long-standing "protein folding problem," accurately predicting the three-dimensional structure of proteins from their amino acid sequences [3]. Concurrently, generative AI models are being used to design new proteins, small molecules, and genetic circuits with desired properties, dramatically accelerating the pace of innovation in drug discovery, metabolic engineering, and materials science [4].
This transformative power, however, is a double-edged sword. The very capabilities that enable breakthroughs in medicine and sustainability also lower the technical and knowledge barriers to creating biological threats [5]. The dual-use nature of biotechnology—where research intended for benevolent purposes can be misapplied for harm—is amplified by AI [6]. AI can automate complex design tasks, "de-skill" sophisticated laboratory protocols, and generate novel biological agents that may evade existing detection and control mechanisms [7]. This creates a novel and urgent set of challenges for biosafety (preventing accidental release) and biosecurity (preventing intentional misuse) [8].
The current policy and governance frameworks, largely developed in a pre-AI era, are struggling to keep pace with this rapid technological evolution [9]. Traditional biosecurity regimes, such as the Biological Weapons Convention (BWC), focus on tangible pathogens and existing threat agents, but are less equipped to address the risks posed by "virtual" biological designs and AI-generated sequences that have no natural counterpart [10]. Similarly, biosafety oversight, designed for human-centric laboratory workflows, must adapt to the era of highly automated, AI-driven "self-driving labs" where the speed and scale of experimentation can outstrip conventional safety reviews [11].
This comprehensive review aims to provide a detailed examination of the intersection of AI and biosafety. It will begin by outlining the transformative applications of AI across the life sciences. It will then delve into the specific biosafety and biosecurity risks emerging from this convergence, including the potential for AI to enable the design of novel pathogens and toxins, to democratize access to dangerous capabilities, and to create new accident pathways through automation. The article will subsequently analyze the current regulatory and governance landscape, highlighting both national and international efforts to mitigate these risks. A critical evaluation of technical safeguards and risk mitigation strategies will follow, before presenting concrete case studies that illustrate these challenges and responses in practice. Finally, the review will conclude with a discussion of future directions and the overarching imperative for responsible innovation, arguing that a collaborative, multi-stakeholder approach is essential to navigate the promises and perils of AI-driven biology.
2. AI Applications in Life Sciences and Biotechnology
The integration of AI is revolutionizing nearly every facet of the life sciences. Its ability to process vast, unstructured datasets and identify complex, non-linear relationships is unlocking new possibilities in understanding, designing, and engineering biological systems.
2.1 Protein Structure Prediction and Design
One of the most celebrated successes of AI in biology is in the realm of protein science. For decades, determining the three-dimensional structure of a protein from its amino acid sequence was a monumental challenge. Deep learning systems like AlphaFold and RoseTTAFold have now achieved remarkable accuracy in predicting protein structures, often rivaling experimental methods such as X-ray crystallography and cryo-electron microscopy [3][12]. This capability has profound implications for understanding disease mechanisms, drug discovery, and enzyme engineering.
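To make this workflow concrete, the following minimal sketch runs single-sequence structure prediction with ESMFold, an openly available model in the same family of deep learning predictors. It assumes the `fair-esm` package and its documented `esm.pretrained.esmfold_v1()` and `infer_pdb()` entry points; the example sequence is arbitrary, and the API should be verified against the installed version.

```python
# Minimal sketch: single-sequence structure prediction with ESMFold.
# Assumes the `fair-esm` package (pip install "fair-esm[esmfold]") and a GPU.
# The entry points below follow the package documentation but should be
# treated as assumptions and checked against the installed version.
import torch
import esm

model = esm.pretrained.esmfold_v1()
model = model.eval().cuda()

# Arbitrary example sequence (one-letter amino acid codes).
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"

with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # atomic coordinates as PDB text

with open("prediction.pdb", "w") as handle:
    handle.write(pdb_string)
```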
Building on predictive success, generative AI models are now enabling the de novo design of proteins. Tools such as RFDiffusion, Chroma, and FrameDiff can "hallucinate" entirely new protein folds and structures that do not exist in nature [4][13]. These models can be conditioned on specific functional requirements, such as creating a protein that binds to a particular target molecule or catalyzes a specific chemical reaction. This moves protein engineering from a process of laboriously modifying existing natural proteins to one of computationally inventing optimized proteins from scratch [14].
2.2 AI-Driven Drug Discovery and Development
The pharmaceutical industry has eagerly adopted AI to streamline the costly and time-consuming drug discovery pipeline. AI models are used for virtual screening of compound libraries, predicting the binding affinity of small molecules to therapeutic targets, and optimizing lead compounds for efficacy and safety [15]. Generative chemistry models can propose novel molecular structures with desired drug-like properties, significantly expanding the explorable chemical space [16]. Furthermore, AI is accelerating drug repurposing efforts by analyzing vast datasets of genetic, clinical, and pharmacological information to identify new therapeutic uses for existing drugs [17].
2.3 Genomics and Genetic Engineering
In genomics, AI tools are essential for analyzing the enormous datasets generated by next-generation sequencing. Models like DeepVariant call genetic variants with high accuracy, which is crucial for both research and clinical diagnostics [18]. Other tools, such as EVE and AlphaMissense, predict the pathogenicity of genetic mutations, helping to interpret the clinical significance of variants of unknown significance [19][20].
In the field of genetic engineering, AI is enhancing the precision and efficiency of tools like CRISPR-Cas9. Machine learning models can predict the on-target efficiency and off-target effects of guide RNAs, allowing researchers to select the most specific and effective sequences for gene editing [21]. This reduces the experimental burden and minimizes the risk of unintended genomic alterations.
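Published guide-design models are trained on large experimental screens; as a toy illustration of the kinds of features they consume, the sketch below scores candidate spacers by GC content, poly-T avoidance, and crude seed-region off-target counting. All thresholds and weights are arbitrary assumptions, not values from any published tool.

```python
# Toy guide-RNA scorer. The features are real considerations in guide design,
# but every threshold and weight here is an illustrative assumption.

def score_guide(guide: str, genome: str) -> float:
    """Return a crude desirability score for a 20-nt spacer sequence."""
    gc = (guide.count("G") + guide.count("C")) / len(guide)
    score = 1.0 - abs(gc - 0.55)           # favor moderate GC content
    if "TTTT" in guide:                    # poly-T can terminate U6 transcription
        score -= 0.5
    # Penalize extra matches of the PAM-proximal "seed" (last 12 nt),
    # a crude proxy for off-target potential.
    seed = guide[-12:]
    off_targets = genome.count(seed) - 1   # subtract the intended site
    score -= 0.1 * max(off_targets, 0)
    return score

genome = "ACGT" * 5000  # stand-in for a reference genome
candidates = ["GACGTTAGCCTGAACGGTCA", "GGGGCCCCGGGGCCCCGGGG"]
print(max(candidates, key=lambda g: score_guide(g, genome)))
```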
2.4 Synthetic Biology and Metabolic Engineering
Synthetic biology, which aims to design and construct new biological parts, devices, and systems, is heavily reliant on AI. AI assists in the design of complex genetic circuits, predicting how genetic components will interact within a cellular environment to achieve a desired function, such as biosensing or chemical production [22]. In metabolic engineering, AI models can suggest optimal genetic modifications to microbial chassis organisms to maximize the yield of valuable compounds, from biofuels to pharmaceuticals [23]. This involves predicting the outcomes of multiple gene knockouts, knock-ins, and regulatory changes, a task too complex for traditional methods.
2.5 Laboratory Automation and the "AI Scientist"
Perhaps one of the most transformative applications is the rise of automated laboratories and "AI scientists." These systems combine AI with robotics to automate the entire design-build-test-learn (DBTL) cycle [24]. AI plans experiments, robotic platforms execute them (e.g., pipetting, culturing cells), and the resulting data is fed back to the AI to refine the next round of hypotheses and experiments [25]. Cloud-based "biofoundries" offer this capability as a service, allowing researchers to run experiments remotely [26]. These AI-driven systems can troubleshoot failed experiments, optimize protocols, and conduct high-throughput screening at a scale and speed impossible for human researchers alone, potentially de-skilling aspects of laboratory work and changing the nature of biological research [11][27].
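The skeleton below illustrates the closed DBTL loop in code. Every component is a hypothetical placeholder: `simulate_assay` stands in for a robotic build-and-test platform, and the "learner" is a simple greedy update over accumulated history rather than a real model.

```python
# Hypothetical skeleton of a design-build-test-learn (DBTL) loop.
# `simulate_assay` is a placeholder for a robotic platform; the hidden
# optimum and the greedy search are purely illustrative.
import random

def design(history):
    """Propose the next candidate: random perturbation of the best so far."""
    if not history:
        return [random.uniform(0, 1) for _ in range(3)]  # e.g., promoter strengths
    best, _ = max(history, key=lambda h: h[1])
    return [max(0.0, min(1.0, x + random.gauss(0, 0.1))) for x in best]

def simulate_assay(params):
    """Stand-in for build + test: yield peaks at a hidden optimum."""
    optimum = [0.3, 0.7, 0.5]
    return -sum((p - o) ** 2 for p, o in zip(params, optimum))

history = []                                 # the "learn" step: accumulated data
for cycle in range(50):
    candidate = design(history)
    measurement = simulate_assay(candidate)  # a robot would run this step
    history.append((candidate, measurement))

print(max(history, key=lambda h: h[1]))
```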
3. Biosafety and Biosecurity Risks of AI-Driven Biotechnologies
The powerful capabilities of AI in biology, while promising immense benefit, simultaneously introduce and amplify a spectrum of biosafety and biosecurity risks. The dual-use dilemma is central to this challenge, as the same tool used to design a life-saving therapeutic could, in principle, be repurposed to engineer a bioweapon [5][6].
3.1 The Evolving Dual-Use Landscape
Historically, the development of biological weapons was constrained by significant technical, logistical, and knowledge barriers [28]. State-level programs required vast resources, specialized expertise, and large-scale infrastructure. AI has the potential to erode these barriers [7]. By automating complex design tasks and providing intuitive interfaces, AI can empower actors with less formal training to undertake sophisticated biological engineering. This "democratization" of capability, while beneficial for global innovation, also expands the pool of potential malicious actors [29]. Experts warn that advances in AI-enabled protein design "could be abused… for terrorist or criminal purposes" [30][31].
3.2 Design of Novel Pathogens and Toxins
A primary biosecurity concern is the use of generative AI to design novel biological threats. This could take several forms:
Novel Toxins: AI models can be used to design new protein-based toxins or to optimize existing ones for enhanced stability, potency, or delivery. For example, a model trained to design therapeutic peptides could be inverted to generate sequences predicted to be highly cytotoxic [32].
Pathogen Enhancement: AI could be used to modify existing viruses or bacteria to alter their properties, such as increasing transmissibility, expanding host range, or enabling evasion of natural or vaccine-induced immunity [33].
Synthetic Homologs: AI can generate functional proteins that are structurally similar to known toxins or virulence factors but have minimal sequence similarity. This poses a direct challenge to existing biosecurity controls, such as DNA synthesis screening, which often rely on sequence homology to flag "sequences of concern" [34][35]. A study by Wittmann et al. demonstrated that AI could generate tens of thousands of variants of known toxins that initially evaded commercial screening tools [34].
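The toy example below illustrates this failure mode. The sequences and threshold are invented, and Python's `difflib` stands in for the alignment tools a real screen would use, but the logic is the same: a similarity cutoff that catches the original toxin can miss a functionally equivalent, low-identity variant.

```python
# Toy illustration of homology-based screening evasion. The sequences and the
# 60% threshold are invented; real screens use alignment tools such as BLAST,
# but the failure mode is identical: a functionally similar variant with low
# sequence identity slips under the homology cutoff.
from difflib import SequenceMatcher

KNOWN_TOXIN = "MKVLAAGICLLTSAGAQDDYKAT"  # hypothetical flagged sequence
AI_VARIANT = "MRILGSAVCIMSTQGAEEFRGSN"   # hypothetical low-homology homolog

def naive_flag(query: str, reference: str, threshold: float = 0.6) -> bool:
    """Flag an order only if crude string similarity exceeds the threshold."""
    return SequenceMatcher(None, query, reference).ratio() >= threshold

print(naive_flag(KNOWN_TOXIN, KNOWN_TOXIN))  # True: exact match is caught
print(naive_flag(AI_VARIANT, KNOWN_TOXIN))   # False: variant passes unflagged
```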
3.3 Information Hazards and De-skilling
Large Language Models (LLMs) and other general-purpose AI systems pose a distinct risk as information hazards. Even models not specifically designed for biology can be prompted to provide step-by-step protocols for dangerous procedures, suggest sources for acquiring restricted materials, or troubleshoot complex laboratory methods [36][37]. Studies have shown that chatbots can generate detailed plans for acquiring and manipulating potential pandemic pathogens, effectively acting as a "non-judgmental AI virology expert" that could guide a malicious actor [38]. This reduces the need for deep, tacit knowledge, effectively "de-skilling" the process of biological threat creation [7][39].
3.4 Biosafety Risks in Automated Workflows
The integration of AI and laboratory automation introduces novel biosafety concerns. While automation can reduce human error, it also introduces new failure modes. An "AI scientist" operating autonomously could, due to a programming error or flawed training data, initiate a dangerous experiment outside of its intended safe parameters [11]. For instance, an AI tasked with optimizing viral growth for vaccine development might inadvertently select for mutations that increase pathogenicity or environmental stability [40]. The high-throughput, continuous nature of automated labs could also increase the risk of accidental exposure or release if physical containment protocols are not perfectly synchronized with the accelerated experimental pace [41]. Furthermore, over-reliance on AI systems could lead to complacency among human researchers, who might bypass critical safety checks under the assumption that the AI has already considered all risks [27].
3.5 Chain-of-Custody Gaps and Virtual Threats
The digital nature of AI-generated designs creates a "chain-of-custody" gap in biosecurity. A dangerous genetic sequence can be designed on a laptop in one country, emailed to a second, and synthesized in a third, evading traditional export controls and physical inspection regimes that govern the transfer of tangible pathogens [9]. The BWC, for example, currently has no explicit provisions for regulating purely digital biological blueprints [10]. This creates a governance vacuum where a virtual threat can be globally disseminated before any physical synthesis occurs.
4. Regulatory and Governance Challenges
The rapid pace of innovation in AIxBio has created a significant gap between technological capabilities and the policy frameworks designed to govern them. Existing regimes are often reactive, fragmented, and ill-suited to address the unique challenges posed by AI-generated biological risks.
4.1 The Limits of Current Biosecurity Regimes
The international cornerstone of biosecurity, the Biological Weapons Convention (BWC), prohibits the development, production, and stockpiling of biological weapons. However, its implementation relies on state-level compliance and lacks robust verification mechanisms [42]. More critically, the BWC and associated national regulations (like the U.S. Select Agent Regulations) are primarily focused on a defined list of known pathogens and toxins. They are not designed to address the threat of novel, AI-designed biological agents that do not appear on any list [10][43]. This creates a dangerous loophole: a maliciously designed synthetic homolog would not be subject to existing physical security regulations until after it is synthesized and characterized, which could be too late.
4.2 DNA Synthesis Screening and its AI-Driven Obsolescence
A key pillar of modern biosecurity is the screening of synthetic DNA orders. The International Gene Synthesis Consortium (IGSC) and U.S. government guidance (e.g., from HHS) recommend that providers screen orders against databases of known pathogens and toxins to prevent the synthesis of regulated agents [44][45]. However, these screening protocols are fundamentally based on sequence similarity. As demonstrated by the Wittmann et al. study, AI can easily generate functionally similar sequences with low homology, allowing them to bypass these filters [34]. While screening companies are rapidly updating their algorithms in response, this creates a cat-and-mouse game between AI designers and screening tools [46]. The U.S. has recently moved to strengthen this system, with a new Framework for Nucleic Acid Synthesis Screening and an executive order requiring that federally funded research only use screened providers [47][48]. Similar efforts are underway in the UK [49]. Nonetheless, the core challenge remains: screening for function rather than just sequence is a much more complex computational problem.
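The sketch below captures this layered logic: a crude k-mer homology pass over a database of sequences of concern, followed by a functional-prediction stage. Everything here is illustrative; the database entry and thresholds are arbitrary, and `predict_toxic_function` is a placeholder for the kind of trained classifier the field is still developing.

```python
# Illustrative two-stage synthesis screen: k-mer homology matching, then a
# functional-prediction fallback. Thresholds, the database entry, and
# `predict_toxic_function` are all placeholders, not a deployed protocol.

SEQUENCES_OF_CONCERN = ["MKVLAAGICLLTSAGAQDDYKAT"]  # hypothetical database

def kmers(seq: str, k: int = 6) -> set:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def homology_hit(order: str, threshold: float = 0.25) -> bool:
    """Crude k-mer containment check, a stand-in for BLAST-style alignment."""
    order_kmers = kmers(order)
    return any(
        len(order_kmers & kmers(ref)) / max(len(kmers(ref)), 1) >= threshold
        for ref in SEQUENCES_OF_CONCERN
    )

def predict_toxic_function(order: str) -> float:
    """Placeholder: a trained model would return P(encodes harmful function)."""
    raise NotImplementedError("requires a trained functional classifier")

def screen_order(order: str) -> str:
    if homology_hit(order):
        return "escalate: homology match to a sequence of concern"
    try:
        if predict_toxic_function(order) > 0.5:  # illustrative cutoff
            return "escalate: predicted harmful function"
    except NotImplementedError:
        return "manual review: functional screen unavailable"
    return "clear"

print(screen_order("MKVLAAGICLLTSAGAQDDYKAT"))  # escalates on homology
```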
4.3 The Challenge of Governing AI Models Themselves
There is an ongoing and contentious debate about how to govern the AI models that pose the highest biosecurity risks. Proposals range from voluntary commitments by developers to hard legal restrictions [50]. Key questions include:
Open vs. Closed Source: Should the weights and code of powerful biological design AI be released openly to foster innovation and transparency, or kept "closed" behind APIs with controlled access to prevent misuse? Openly released models democratize beneficial research, but they also place the same capabilities, stripped of any safeguards, in the hands of malicious actors [51].
Structured Access: A middle-ground approach involves "structured access" or "tiered access" models, where users must be vetted, agree to terms of use, and their activities are logged [52]. Platforms like Hugging Face and Kaggle offer models in controlled environments, but implementing robust "Know Your Customer" (KYC) checks for biological AI is resource-intensive and not foolproof [53].
Pre-release Evaluation and Red Teaming: There is a growing consensus that advanced AI models, particularly those trained on biological data, should undergo rigorous safety testing before release. This includes "red teaming" exercises where experts attempt to misuse the model to identify and patch vulnerabilities [54][55]. The U.S. Executive Order on AI mandates such evaluations for models exceeding a certain computational threshold, with a lower threshold specifically for biological models, recognizing their heightened dual-use concern [48].
4.4 International Coordination and Norm-Setting
The global nature of both science and the AI threat necessitates international coordination. While forums like the AI Safety Summit and the OECD are beginning to address AI risks, dedicated efforts for the AIxBio intersection are still nascent [56]. The AIxBio Global Forum, convened by organizations like the Nuclear Threat Initiative (NTI), is one initiative aiming to build a shared understanding and develop model policies [57]. Harmonizing DNA synthesis screening standards across countries is another critical priority to prevent "jurisdiction shopping" by malicious actors [58]. Integrating AI-enabled biorisks into the agenda of the BWC is also a crucial, though challenging, step for the international community [59].
4.5 Voluntary Governance and Community Initiatives
In the absence of binding international law, the scientific community has begun to self-organize. The "Responsible AI x Biodesign" statement is a prominent example, where leading protein designers committed to pre-release screening of models and purchasing DNA only from providers that screen orders [60]. Similarly, some research funders, like Wellcome, now require grant applicants to detail how they will mitigate dual-use risks in their work [61]. While these voluntary norms are important for building a culture of responsibility, they lack enforcement mechanisms and may not be adopted by all relevant actors [62].
5. Technical Safeguards and Risk Mitigation Strategies
A multi-layered defense-in-depth strategy is required to mitigate the biorisks associated with AI. This involves a combination of technical safeguards embedded in the AI tools themselves, procedural controls in the research environment, and overarching governance structures.
5.1 Model-Level Safeguards
Red Teaming and Adversarial Testing: Before release, AI models should be subjected to rigorous, independent red teaming to probe their potential for misuse. This involves systematically attempting to prompt the model into generating harmful outputs, such as dangerous protein designs or detailed protocols for weaponization [54]. The findings from these exercises can be used to refine the model's safety features.
Built-in Refusal Mechanisms: Similar to the safety filters in general-purpose LLMs like ChatGPT, biological design tools can be programmed to refuse clearly malicious requests or to withhold outputs with high similarity to known toxins and pathogens [63]. However, this is particularly challenging in biology because of the dual-use overlap; a request to "design a protein that binds strongly to human lung cells" could serve a therapeutic or a weapon. More sophisticated, context-aware refusal systems are needed [64].
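A minimal sketch of the output-side half of such a mechanism follows. The `design_model` function and the deny-list are hypothetical placeholders; as noted above, intent often cannot be judged from a sequence alone, so a filter of this kind is a backstop rather than a complete policy.

```python
# Sketch of an output-side refusal layer around a hypothetical design model.
# `design_model` and DENY_LIST are invented placeholders; real systems would
# pair this backstop with context-aware request policies.
from difflib import SequenceMatcher

DENY_LIST = ["MKVLAAGICLLTSAGAQDDYKAT"]  # hypothetical sequences of concern

def design_model(prompt: str) -> str:
    """Placeholder for a generative protein design model."""
    return "MAHHHHHHGSGSENLYFQGAS"  # dummy output

def guarded_generate(prompt: str, max_similarity: float = 0.5) -> str:
    candidate = design_model(prompt)
    for flagged in DENY_LIST:
        if SequenceMatcher(None, candidate, flagged).ratio() >= max_similarity:
            raise PermissionError("refused: output resembles a sequence of concern")
    return candidate

print(guarded_generate("design a thermostable binder"))
```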
Data Curation and Filtering: Developers can curate the training datasets for AI models to exclude the most sensitive information, such as the genomes of high-consequence pathogens or the sequences of known toxins [65]. Models like ESM3 and Evo have employed this strategy. However, this approach is controversial, as restricting pathogen data can impede critical public health research, and models can sometimes infer dangerous capabilities from benign data [66].
5.2 Operational and Access Controls
Tiered Access Platforms: Instead of releasing full model weights, developers can provide access through a controlled Application Programming Interface (API). This allows for user authentication, activity monitoring, and the enforcement of usage policies [52]. It enables a "know your user" approach, where access to the most powerful capabilities is granted only to vetted researchers affiliated with legitimate institutions [53].
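The sketch below shows what such tiering might look like in code. The tier names, user registry, and capability mapping are invented for illustration; a production deployment would attach these checks to institutional identity and KYC infrastructure.

```python
# Hypothetical tiered access control for a biological design API. Tiers,
# users, and capabilities are illustrative assumptions only.
from enum import Enum

class Tier(Enum):
    PUBLIC = 1     # limited web interface, rate-limited
    VERIFIED = 2   # vetted institutional researchers
    AUDITED = 3    # full capability, activity fully logged

CAPABILITIES = {
    "predict_structure": Tier.PUBLIC,
    "design_binder": Tier.VERIFIED,
    "bulk_generation": Tier.AUDITED,
}

USERS = {"alice@university.edu": Tier.VERIFIED}  # hypothetical registry

def authorize(user: str, capability: str) -> bool:
    """Grant access only if the user's tier meets the capability's bar."""
    required = CAPABILITIES[capability]
    return USERS.get(user, Tier.PUBLIC).value >= required.value

print(authorize("alice@university.edu", "design_binder"))  # True
print(authorize("anonymous", "bulk_generation"))           # False
```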
Audit Trails and Design Metadata: A powerful safeguard is the comprehensive logging of all AI design activities. Capturing the "design metadata"—the input prompts, model parameters, and generated outputs—creates an audit trail [67]. If a suspicious sequence is later synthesized, it can potentially be traced back to the user and the AI tool that generated it. This creates a deterrent effect and aids in forensic investigations.
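One way to make such a log tamper-evident is hash chaining, in which each record commits to its predecessor so that any retroactive edit breaks the chain. The record fields below are illustrative assumptions, not a description of any deployed system.

```python
# Sketch of a tamper-evident audit trail for AI design metadata via hash
# chaining. Field names and layout are illustrative assumptions.
import hashlib
import json
import time

def append_record(log: list, user: str, prompt: str, output_seq: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": time.time(),
        "user": user,
        "prompt": prompt,
        "output": output_seq,
        "prev_hash": prev_hash,  # commits this record to its predecessor
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited or reordered record breaks the chain."""
    for i, record in enumerate(log):
        body = {k: v for k, v in record.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        if i > 0 and record["prev_hash"] != log[i - 1]["hash"]:
            return False
    return True

audit_log: list = []
append_record(audit_log, "alice@university.edu", "optimize binder", "MKTAYIAK")
print(verify_chain(audit_log))  # True until any record is altered
```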
5.3 Strengthening the Biosecurity Infrastructure
Next-Generation DNA Synthesis Screening: Screening protocols must evolve from simple homology matching to more sophisticated, AI-powered functional prediction. Screening tools themselves need to use machine learning to predict whether a novel sequence is likely to encode a toxic or pathogenic function, even when it shares little homology with known agents [46][68]. Public-private partnerships are essential to develop and deploy these enhanced screening tools globally.
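As a toy illustration of what function-aware screening means, the sketch below featurizes protein sequences by amino acid composition and fits a logistic regression. The training examples and labels are fabricated placeholders; real screening classifiers are trained on curated sets of sequences of concern and benign controls, with far richer features.

```python
# Toy function-aware screen: amino acid composition features plus logistic
# regression. Training sequences and labels are fabricated for illustration;
# no real sequences of concern appear here.
import numpy as np
from sklearn.linear_model import LogisticRegression

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq: str) -> np.ndarray:
    """20-dimensional amino acid frequency vector."""
    counts = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / max(len(seq), 1)

# Fabricated examples: label 1 = "of concern", 0 = benign.
train_seqs = ["KKKRRRCCCKKKRRR", "AAAGGGSSSTTTAAA", "KKCCRRKKCCRRKK", "GGSSGGSSAATT"]
labels = [1, 0, 1, 0]

X = np.stack([composition(s) for s in train_seqs])
clf = LogisticRegression().fit(X, labels)

query = "KKRRCCKKRR"
prob = clf.predict_proba(composition(query).reshape(1, -1))[0, 1]
print(f"P(function of concern) = {prob:.2f}")
```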
Enhanced Biosafety Oversight for Automated Labs: Institutional Biosafety Committees (IBCs) and other oversight bodies need to develop specific expertise and protocols for reviewing AI-driven and automated research projects [11]. This includes assessing the safety of the AI's decision-making logic, the robustness of the human-machine interface, and the containment protocols for high-throughput automated systems.
5.4 The Role of AI Alignment and Human-in-the-Loop
Ultimately, technical safeguards must be underpinned by a fundamental commitment to AI safety and alignment—ensuring that AI systems robustly pursue their intended goals and avoid harmful behaviors [69]. In the context of biotechnology, this means designing AI "scientists" that are inherently cautious, transparent in their reasoning, and require human approval for critical steps [70]. Maintaining a "human-in-the-loop" for key decisions, especially those involving the design or manipulation of potentially hazardous biological agents, remains a crucial safety principle in the age of automation [71].
6. Case Studies and Real-World Examples
Concrete examples help to crystallize the abstract risks and mitigation strategies discussed above, illustrating both the vulnerabilities and the potential for resilience.
6.1 Case Study: AI-Designed Toxins and the Failure of DNA Screening
A landmark study by Wittmann et al. provided a sobering demonstration of AI's ability to circumvent biosecurity measures [34]. Researchers used publicly available AI protein design tools to generate 76,000 novel variants of 72 known toxin proteins. These AI-generated sequences were functionally similar to the toxins but had minimal sequence homology. When these sequences were submitted to commercial DNA synthesis screening services, the initial detection rate was alarmingly low. This proved that existing screening frameworks were vulnerable to AI-aided evasion. However, the case also demonstrates the capacity for proactive mitigation. The researchers collaborated with the DNA synthesis companies and biosecurity organizations like IBBIS to update the screening algorithms. Through this collaboration, detection rates for the most threatening AI-generated sequences were improved to approximately 97% [34][46]. This case underscores a critical dynamic: as AI creates new threats, it can also be harnessed to strengthen our defenses, but this requires continuous vigilance and collaboration.
6.2 Case Study: Chatbots as Enablers of Bioterrorism Planning
Several studies have probed the ability of general-purpose LLMs to assist in biological threat creation. A team at MIT tasked an AI system with listing potential pandemic pathogens, devising synthesis strategies, and identifying methods to evade DNA synthesis screening [37]. The AI produced comprehensive, multi-step plans within an hour. In a separate instance, a generative chemistry model, when its scoring function was inverted to reward toxicity, produced tens of thousands of molecular structures, including analogs of the VX nerve agent [32]. These experiments, conducted in a controlled research setting, revealed that even AIs not specifically designed for biology can significantly lower the information barrier to bioweapons development. In response to these findings, several AI companies, including OpenAI and xAI, have publicly committed to implementing and strengthening "virology safeguards" and other system-level mitigations in their models [38][72]. This highlights the need for continuous red teaming of even non-specialized AI systems.
6.3 The PROTEUS Platform: A Hypothetical Risk-Benefit Analysis
Consider a hypothetical but plausible cutting-edge platform called PROTEUS, which uses an iteratively fine-tuned protein language model (like ESM-2) combined with AlphaFold for structural validation and wet-lab automation for testing. Its goal is to rapidly optimize proteins for therapeutic applications.
Benefits: PROTEUS could drastically accelerate the development of new enzymes, vaccines, and targeted therapies, bringing life-saving treatments to market years faster.
Risks:
- Dual-use Misuse: A malicious actor could use PROTEUS to optimize the binding affinity, stability, or delivery of a toxin. By inputting a benign protein as a "cover," the system could be repurposed to generate novel toxic variants [73].
- Novel Variant Safety: The platform may generate thousands of novel protein variants with unpredictable properties, such as unforeseen allergenicity or off-target interactions, posing a biosafety risk if not thoroughly characterized [74].
- Lack of Oversight: If PROTEUS were open-source and freely accessible, it could be exploited without any vetting, logging, or constraints, allowing anyone to generate libraries of potentially hazardous designs [51].
Mitigation Strategies for PROTEUS:
- Built-in Screening: Integrate automatic biosecurity screening of all output sequences against toxin and pathogen databases before they are sent to synthesis [60].
- Controlled Access: Implement a tiered access model. A limited web interface could be available to all, while full model access requires institutional authentication and agreement to biosecurity terms [52].
- Audit Trails: Maintain immutable logs of all user queries and generated sequences, creating a deterrent and enabling forensic tracing [67].
- Human-in-the-Loop: Require human expert review for any design that is highly novel or that the model flags as high-risk [71].
- Community Governance: Embed PROTEUS within existing biosafety frameworks (e.g., iGEM safety rules) and invite external biosecurity experts to periodically red-team the system [75].
This hypothetical case illustrates that the benefits of powerful AI-bio platforms are inextricably linked to significant risks, which can only be managed through a deliberate and multi-faceted "safety-by-design" approach.
7. Conclusion and Future Directions
The convergence of artificial intelligence and biotechnology is an undeniable and transformative force. It holds the promise of solving some of humanity's most pressing challenges in health, food security, and environmental sustainability. However, this review has detailed the parallel and equally real risks it poses to biosafety and biosecurity. The core challenge lies in the potent dual-use nature of these technologies, where the power to create and heal is intimately linked to the power to harm and destroy.
The risks are multifaceted: AI can lower the barrier to biological weapon design, enable the creation of novel threats that evade existing detection systems, automate laboratory workflows in ways that introduce new accident pathways, and disseminate dangerous knowledge globally in an instant. The current governance ecosystem is fragmented and struggling to adapt, creating dangerous gaps between technological capability and regulatory oversight.
Addressing these challenges requires a proactive, collaborative, and multi-layered strategy. There is no single silver bullet. Effective mitigation will depend on the synergistic application of:
- Technical Safeguards: Robust red teaming, built-in refusal mechanisms, and controlled access platforms for the most powerful AI models.
- Policy and Governance: Updated international norms, harmonized and function-aware DNA synthesis screening standards, and the integration of AI risks into frameworks like the BWC.
- Cultural and Ethical Commitment: A strong culture of responsibility within the AI and biotechnology communities, embodied in voluntary commitments, ethical training for researchers, and proactive engagement with policymakers.
Looking forward, several key areas demand focused attention:
- Developing Robust Benchmarks: The community needs standardized, ethically sound benchmarks to evaluate the misuse potential of biological design tools, moving beyond hypotheticals to evidence-based risk assessments [76].
- Fostering International Dialogue: Sustained diplomatic effort is needed to build global consensus on norms for AI in biology, preventing a destabilizing arms race and ensuring that safety measures are implemented worldwide [57].
- Investing in Defensive AI: Just as AI can be used to create threats, it must be harnessed for defense. This includes AI-powered surveillance for novel pathogens, advanced forensics for attributing attacks, and the development of rapid medical countermeasures [77].
- Maintaining Scientific Openness: While managing risks, it is imperative to preserve the open scientific collaboration that drives progress. Overly restrictive measures that stifle innovation or create inequitable access to beneficial technologies must be avoided [78].
The responsibility for navigating this new frontier does not lie with governments or industry alone. It is a shared duty among AI developers, biologists, security experts, ethicists, and policymakers. By working together to build a comprehensive framework of technical, ethical, and governance safeguards, we can steer the course of AI-driven biology towards a future of profound benefit for humanity, while robustly guarding against its inherent perils. The time to act is now, before the capabilities outrun our collective capacity to manage them.
References
[1] Oliveira A L. Biotechnology, big data and artificial intelligence[J]. Biotechnology Journal, 2019, 14(8): 1800613.
[2] Bhardwaj A, Kishore S, Pandey D K. Artificial intelligence in biological sciences[J]. Life, 2022, 12(9): 1430.
[3] Jumper J, Evans R, Pritzel A, et al. Highly accurate protein structure prediction with AlphaFold[J]. Nature, 2021, 596(7873): 583-589.
[4] Watson J L, Juergens D, Bennett N R, et al. De novo design of protein structure and function with RFdiffusion[J]. Nature, 2023, 620(7976): 1089-1100.
[5] National Research Council (US). Science and security in a post 9/11 world: a report based on regional discussions between the science and security communities[R]. Washington, DC: National Academies Press (US), 2007.
[6] Carter S R, Wheeler N E, Chwalek S, et al. The convergence of artificial intelligence and the life sciences: safeguarding technology, rethinking governance, and preventing catastrophe[R]. Washington, DC: Nuclear Threat Initiative, 2023.
[7] Sandbrink J B. Artificial intelligence and biological misuse: differentiating risks of language models and biological design tools[J]. arXiv preprint arXiv:2306.13952, 2023.
[8] O'Brien J T, Nelson C. Assessing the risks posed by the convergence of artificial intelligence and biotechnology[J]. Health Security, 2020, 18(3): 219-228.
[9] Synthetic biology/AI convergence (SynBioAI): security threats in frontier science and regulatory challenges[J]. AI & SOCIETY, 2025.
[10] UNODA. Biological Weapons Convention[EB/OL]. [2024-11-30]. https://disarmament.unoda.org/biological-weapons/
[11] Risks of AI scientists: prioritizing safeguarding over autonomy[J]. Nature Communications, 2025, 16: 63913.
[12] Krishna R, Wang J, Ahern W, et al. Generalized biomolecular modeling and design with RoseTTAFold All-Atom[J]. Science, 2024, 384(6693): eadl2528.
[13] Ingraham J B, Baranov M, Costello Z, et al. Illuminating protein space with a programmable generative model[J]. Nature, 2023, 623(7989): 1070-1078.
[14] MIT News. Generative AI imagines new protein structures[EB/OL]. 2023.
[15] Walters W P, Murcko M. Assessing the impact of generative AI on medicinal chemistry[J]. Nature Biotechnology, 2020, 38: 143-145.
[16] Urbina F, Lentzos F, Invernizzi C, et al. Dual use of artificial-intelligence-powered drug discovery[J]. Nature Machine Intelligence, 2022, 4: 189-191.
[17] Williams K, Bilsland E, Sparkes A, et al. Cheaper faster drug development validated by the repositioning of drugs against neglected tropical diseases[J]. Journal of the Royal Society Interface, 2015, 12(104): 20141289.
[18] Poplin R, Chang P C, Alexander D, et al. A universal SNP and small-indel variant caller using deep neural networks[J]. Nature Biotechnology, 2018, 36(10): 983-987.
[19] Frazer J, Notin P, Dias M, et al. Disease variant prediction with deep generative models of evolutionary data[J]. Nature, 2021, 599(7883): 91-95.
[20] Cheng J, Novati G, Pan J, et al. Accurate proteome-wide missense variant effect prediction with AlphaMissense[J]. Science, 2023, 381(6664): eadg7492.
[21] Abadi S, Yan W X. Deep learning approaches for CRISPR guide RNA design[J]. Briefings in Bioinformatics, 2022, 23(1): bbab433.
[22] Beardall W A V, Stan G B, Dunlop M J. Deep learning concepts and applications for synthetic biology[J]. GEN Biotechnology, 2022, 1(4): 360-371.
[23] HamediRad M, Chao R, Weisberg S, et al. Towards a fully automated algorithm driven platform for biosystems design[J]. Nature Communications, 2019, 10(1): 5150.
[24] Sparkes A, King R D, Aubrey W, et al. An integrated laboratory robotic system for autonomous discovery of gene function[J]. Journal of Laboratory Automation, 2010, 15(1): 33-40.
[25] Boiko D A, MacKnight R, Gomes G. Emergent autonomous scientific research capabilities of large language models[J]. arXiv preprint arXiv:2304.05332, 2023.
[26] Lee D H, Kim H, Sung B H, et al. Biofoundries: bridging automation and biomanufacturing in synthetic biology[J]. Biotechnology and Bioprocess Engineering, 2023, 28(6): 892-904.
[27] CNAS. AI and the evolution of biological national security risks: capabilities, thresholds, and interventions[R]. Washington, DC: Center for a New American Security, 2024.
[28] Ben Ouagrham-Gormley S. Barriers to bioweapons: the challenges of expertise and organization for weapons development[M]. Ithaca, NY: Cornell University Press, 2014.
[29] Soice E H, Rocha R, Cordova K, et al. Can large language models democratize access to dual-use biotechnology?[J]. arXiv preprint arXiv:2306.03809, 2023.
[30] Security challenges by AI-assisted protein design: The ability to design proteins in silico could pose a new threat for biosecurity and biosafety[J]. PMC, 2024.
[31] Wheeler N E. Responsible AI in biotechnology: balancing discovery, innovation and biosecurity risks[J]. PMC, 2024.
[32] Urbina F, Lentzos F, Invernizzi C, et al. Dual use of artificial-intelligence-powered drug discovery[J]. Nature Machine Intelligence, 2022, 4: 189-191.
[33] Pannu J, Bloomfield D, Zhu A, et al. Prioritizing high-consequence biological capabilities in evaluations of artificial intelligence models[J]. arXiv preprint arXiv:2407.13059, 2024.
[34] Wittmann B J, et al. Toward AI-Resilient Screening of Nucleic Acid Synthesis Orders: Process, Results, and Recommendations[R]. Redmond, WA: Microsoft Research, 2025.
[35] Baker D, Church G. Protein design meets biosecurity[J]. Science, 2024, 383(6681): 349.
[36] Confronting the AI-Accelerated Threat of Bioterrorism[EB/OL]. Global Biodefense, 2025.
[37] Soice E H, Rocha R, Cordova K, et al. Can large language models democratize access to dual-use biotechnology?[J]. arXiv preprint arXiv:2306.03809, 2023.
[38] Time. AI models outperform virologists in lab troubleshooting, raising biosecurity concerns[EB/OL]. 2024.
[39] Wheeler N E. Responsible AI in biotechnology: balancing discovery, innovation and biosecurity risks[J]. PMC, 2024.
[40] Thadani N N, Gurev S, Notin P, et al. Learning from prepandemic data to forecast viral escape[J]. Nature, 2023, 622(7984): 818-825.
[41] National Academies of Sciences, Engineering, and Medicine. Biodefense in the Age of Synthetic Biology[R]. Washington, DC: The National Academies Press, 2018.
[42] Cropper N R, Rath S, Teo R J C, et al. A modular-incremental approach to improving compliance verification with the biological weapons convention[J]. Health Security, 2023, 21(5): 421-427.
[43] Koblentz G D. Living weapons: biological warfare and international security[M]. Ithaca, NY: Cornell University Press, 2009.
[44] International Gene Synthesis Consortium. Harmonized Screening Protocol[EB/OL]. [2024-11-30]. https://genesynthesisconsortium.org/
[45] U.S. Department of Health & Human Services. Screening Framework Guidance for Providers and Users of Synthetic Nucleic Acids[EB/OL]. 2023.
[46] Wheeler N E, Bartling C, Carter S R, et al. Progress and prospects for a nucleic acid screening test set[J]. Applied Biosafety, 2024, 29(3): 133-141.
[47] US National Science and Technology Council. Framework for nucleic acid synthesis screening[R]. Washington, DC: The White House, 2024.
[48] The White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence[EB/OL]. 2023.
[49] GOV.UK. UK screening guidance on synthetic nucleic acids for users and providers[EB/OL]. 2024.
[50] AI and biosecurity: The need for governance[R]. Baltimore, MD: Center for Health Security, 2024.
[51] The Nuclear Threat Initiative. Developing guardrails for AI biodesign tools[EB/OL]. 2024.
[52] Helena. Biosecurity in the Age of AI[EB/OL]. 2023.
[53] Hugging Face. The AI community building the future[EB/OL]. [2024-11-30]. https://huggingface.co/
[54] OpenAI. OpenAI o1 system card[R]. San Francisco: OpenAI, 2024.
[55] Anthropic. A new initiative for developing third-party model evaluations[EB/OL]. 2024.
[56] AI Safety Summit. Capabilities and risks from frontier AI[R]. United Kingdom: DSIT, 2023.
[57] NTI. AIxBio Global Forum structure and goals[EB/OL]. 2024.
[58] IBBIS. International Biosecurity and Biosafety Initiative for Science[EB/OL]. [2024-11-30]. https://ibbis.bio/
[59] The InterAcademy Partnership. Proof of concept meeting on a BWC scientific advisory body procedural report[R]. 2024.
[60] Responsible AI x Biodesign. Community Values, Guiding Principles, and Commitments for the Responsible Development of AI for Protein Design[EB/OL]. 2024.
[61] Wellcome. Managing risks of research misuse[EB/OL]. 2024.
[62] The Nuclear Threat Initiative. International bio funders Compact[EB/OL]. 2024.
[63] Google Cloud. Gemini for google cloud and responsible AI[EB/OL]. [2024-11-30]. https://cloud.google.com/gemini/docs/discover/responsible-ai
[64] Grin C, Howard H, Paterson A, et al. Our approach to biosecurity for AlphaFold 3[EB/OL]. DeepMind, 2024.
[65] Hayes T, Rao R, Akin H, et al. Simulating 500 million years of evolution with a language model[J]. bioRxiv, 2024.
[66] Maxmen A. Why some researchers oppose unrestricted sharing of coronavirus genome data[J]. Nature, 2021, 593(7858): 176-177.
[67] Baker D, Church G. Protein design meets biosecurity[J]. Science, 2024, 383(6681): 349.
[68] Godbold G D, Kappell A D, LeSassier D S, et al. Categorizing sequences of concern by function to better assess mechanisms of microbial pathogenesis[J]. Infection and Immunity, 2021, 89(15): e0033421.
[69] AI pioneers win 2024 Nobel prizes[J]. Nature Machine Intelligence, 2024, 6(11): 1271.
[70] Risks of AI scientists: prioritizing safeguarding over autonomy[J]. Nature Communications, 2025, 16: 63913.
[71] Sadasivan S, Zare R N. The perils of machine learning in designing new chemicals and materials[J]. Nature Machine Intelligence, 2022, 4: 884-885.
[72] xAI. Announcing new virology safeguards[EB/OL]. 2024.
[73] Ekins S, et al. Generative artificial intelligence-assisted protein design must consider repurposing potential[J]. GEN Biotechnology, 2023, 2(4): 275-279.
[74] Sumida K H, Núñez-Franco R, Kalvet I, et al. Improving protein expression, stability, and function with ProteinMPNN[J]. Journal of the American Chemical Society, 2024, 146(3): 2054-2061.
[75] iGEM. Safety and Security[EB/OL]. [2024-11-30]. https://responsibility.igem.org/safety
[76] RAND. On the responsible development and use of chem-bio AI models[R]. Santa Monica, CA: RAND Corporation, 2024.
[77] U.S. Department of Homeland Security. FACT sheet and report: DHS advances efforts to reduce the risks at the intersection of artificial intelligence and chemical, biological, radiological, and nuclear (CBRN) threats[EB/OL]. 2024.
[78] Committee on Genomics Databases for Bioterrorism Threat Agents, Board on Life Sciences. Seeking security: pathogens, open access, and genome databases[R]. Washington, DC: National Academies Press, 2004.