Overview: Safety Beyond the Laboratory
Throughout the development of the PROTEUS project, we have maintained that safety extends far beyond conventional laboratory protocols. Because PROTEUS is a cutting-edge, AI-driven life science technology, its safety is ultimately demonstrated by how proactively we identify, manage, and govern the novel social and ethical risks it may introduce. We have therefore integrated safety principles into every facet of our Human Practices, constructing a multi-dimensional safety system encompassing Education & Popularization, Communication & Collaboration, Policy & Governance Analysis, and Dual-Use Analysis.
Education & Popularization
We believe the first step towards responsible innovation is public understanding of the technology itself, including its potential and its boundaries.
AI Biosecurity Enlightenment for Youth and the Public
During our science outreach at the BIT Affiliated High School and in public street events, we not only introduced the wonders of synthetic biology but also actively guided students to consider the underlying safety questions. Students spontaneously raised pointed questions such as "Could AI protein design open Pandora's box?" and "What if harmful proteins are designed?", demonstrating the younger generation's keen sensitivity to technology ethics.
Our Action: We affirmed their critical thinking and used the opportunity to explain how scientists establish "safety guardrails" through measures such as ethical review and gene synthesis screening. We compiled these interactions into a public article, "When Cells Become 'Legos', AI is the Super Designer," published on our official account to spread the seeds of safety discussion to a broader audience.
Internal "Self-Education" and Safety Culture Building
We actively avoid operating in an echo chamber. The team organized a study session on the recent Science paper discussing "AI-reformulated proteins evading security screening." This in-depth discussion made us realize that biosecurity is a dynamic game of "attack and defense" requiring constant vigilance.
Our Action: This learning directly prompted us to add a dedicated "AI Biosecurity" section to our project Wiki's Safety page. It also solidified our decision to incorporate structure-based screening into our platform design as a supplement to traditional sequence alignment, proactively addressing this frontier challenge.
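To illustrate the idea, here is a minimal sketch of how structure-based screening could sit alongside traditional sequence alignment. The `compare` callable, the reference library, and the 0.5 cutoff are assumptions made for illustration; in practice the similarity score would come from an external structural aligner such as TM-align or Foldseek, calibrated against curated references.

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Callable, Optional

# A structure comparator returns a similarity score in [0, 1], e.g. a TM-score
# produced by an external aligner such as TM-align or Foldseek (assumed, not bundled).
StructureComparator = Callable[[Path, Path], float]


@dataclass
class ScreenResult:
    design_id: str
    flagged: bool
    closest_reference: Optional[str]
    score: float


def structure_screen(
    design_structures: dict,    # design_id -> path to predicted structure (PDB/mmCIF)
    harmful_references: dict,   # reference_id -> path to structure of a known harmful protein
    compare: StructureComparator,
    threshold: float = 0.5,     # illustrative cutoff, to be calibrated
) -> list:
    """Flag designs whose predicted fold resembles a known harmful protein,
    even when sequence alignment alone would miss the similarity."""
    results = []
    for design_id, design_path in design_structures.items():
        best_ref, best_score = None, 0.0
        for ref_id, ref_path in harmful_references.items():
            score = compare(design_path, ref_path)
            if score > best_score:
                best_ref, best_score = ref_id, score
        results.append(ScreenResult(design_id, best_score >= threshold, best_ref, best_score))
    return results
```

In such a scheme, flagged designs would be routed to manual review rather than released automatically.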
Communication & Collaboration
Through in-depth dialogue with experts from diverse backgrounds, we have continuously reinforced our project's safety framework.
Forward-Looking Discussions with AI Safety Experts
In our email interview with Senior Researcher Huang Niu, we specifically inquired about the unpredictable risks of AI models in protein design and methods for physical plausibility verification.
His Guidance: He recommended a layered strategy coupling "AI preliminary screening + physical-model (e.g., molecular dynamics) refinement" and embedding computational validation modules directly into the platform.
Our Integration: This directly influenced our workflow design. We plan to make AlphaFold 3 structure prediction and rapid energy optimization compulsory validation steps for all AI design outputs, enhancing the reliability of results.
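The following sketch shows how such a two-stage gate could be organized. The `predict` and `minimize` callables are stand-ins for a structure predictor (e.g., AlphaFold 3) and a physics backend (e.g., a molecular dynamics or energy-minimization engine); the real interfaces of those tools differ, and the pLDDT and energy cutoffs are illustrative rather than validated values.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

# Stand-ins for external backends: `predict` wraps a structure predictor
# (e.g., AlphaFold 3) and `minimize` wraps a physics engine; these signatures
# are assumptions made for the sketch, not those tools' actual APIs.
PredictFn = Callable[[str], Tuple[object, float]]   # sequence -> (structure, mean pLDDT)
MinimizeFn = Callable[[object], float]              # structure -> energy after minimization


@dataclass
class ValidationReport:
    sequence: str
    plddt: Optional[float] = None
    energy: Optional[float] = None
    passed: bool = False
    reason: str = ""


def validate_design(
    sequence: str,
    predict: PredictFn,
    minimize: MinimizeFn,
    min_plddt: float = 70.0,   # illustrative confidence cutoff
    max_energy: float = 0.0,   # illustrative: relaxed structure should be energetically favorable
) -> ValidationReport:
    """Two compulsory gates: structure prediction first, then physics-based refinement.
    A design that fails either gate is rejected before any wet-lab work."""
    report = ValidationReport(sequence=sequence)

    structure, plddt = predict(sequence)
    report.plddt = plddt
    if plddt < min_plddt:
        report.reason = f"low prediction confidence (pLDDT {plddt:.1f} < {min_plddt})"
        return report

    energy = minimize(structure)
    report.energy = energy
    if energy > max_energy:
        report.reason = f"unfavorable energy after minimization ({energy:.1f})"
        return report

    report.passed = True
    return report
```

Rejected designs carry the failure reason back into the design loop, so that problematic outputs are revised rather than silently discarded.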
Strategic Communication with Domain Researchers
In our discussion with Researcher Xu Chunfu, we deeply explored how to ensure the model possesses powerful optimization capabilities without generating harmful or uncontrollable designs.
His Advice: He emphasized the importance of rigorous functional and toxicity screening of training datasets and alerted us to potential biases inherent in the model.
Our Integration: Based on this, we re-reviewed and cleaned our protein dataset, excluding sequences related to known toxins and pathogens, and set clearer safety boundaries at the algorithmic level.
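As a rough illustration of the cleaning step, the sketch below combines a keyword filter on record headers with exact k-mer screening against a blocklist built from known harmful sequences. The keywords, k-mer length, and data layout are assumptions; a production pipeline would instead rely on curated toxin/pathogen databases and dedicated biosecurity screening tools.

```python
import re

# Illustrative header keywords and k-mer size; a production pipeline would rely on
# curated toxin/pathogen databases and dedicated biosecurity screening tools instead.
EXCLUDE_KEYWORDS = re.compile(r"toxin|virulence|hemolysin|enterotoxin", re.IGNORECASE)
KMER_SIZE = 20


def kmers(seq: str, k: int = KMER_SIZE) -> set:
    """All length-k subsequences of a protein sequence."""
    return {seq[i : i + k] for i in range(len(seq) - k + 1)}


def build_blocklist(harmful_sequences: list) -> set:
    """Collect k-mers from known harmful proteins for exact-match screening."""
    block = set()
    for seq in harmful_sequences:
        block |= kmers(seq)
    return block


def clean_dataset(records: list, blocklist: set) -> list:
    """Drop (header, sequence) records whose header mentions a risk keyword
    or whose sequence shares an exact k-mer with a known harmful protein."""
    kept = []
    for header, seq in records:
        if EXCLUDE_KEYWORDS.search(header):
            continue
        if kmers(seq) & blocklist:
            continue
        kept.append((header, seq))
    return kept
```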
Policy & Governance Analysis
We proactively situate our project within existing and emerging governance frameworks for scrutiny.
| Perspective | Relevant Policies/Regulations | PROTEUS Countermeasures & Compliance |
|---|---|---|
| Laboratory Biosafety | Regulations on the Biosafety Management of Pathogenic Microorganism Laboratories; BSL-1/2 laboratory standards. | We commit to using only E. coli and B. subtilis from the iGEM White List. All experiments are conducted in appropriately classified laboratories, and waste is sterilized by autoclaving. |
| Genetic Engineering Safety | Measures for the Safety Administration of Genetic Engineering, which categorize work into four safety levels. | Our project uses organisms and procedures that do not involve highly pathogenic microbes and is classified as Safety Level I, posing minimal risk to humans and the environment. |
| AI Data & Privacy Security | Laws such as the Personal Information Protection Law; emerging AI governance principles (e.g., the Beijing AI Principles). | Our platform does not store user-submitted protein sequences during online service; it performs only real-time computation. All training data comes from public databases and undergoes anonymization. |
| AI Model & Biosecurity | Emerging AI for Science ethics and governance frameworks (e.g., the Tianjin Biosecurity Guidelines for Codes of Conduct for Scientists). | We embed ethical constraints in model design. Open-source code and models are released under strict usage licenses that explicitly prohibit malicious use and advocate responsible application. |
Dual-Use Analysis
We have conducted a dual-use potential analysis of the PROTEUS platform with the utmost prudence.
Potential Benefits
- Accelerating Beneficial Research: Significantly speeds up the development of proteins for disease treatment (e.g., optimized antibodies, therapeutic enzymes), green manufacturing (e.g., high-performance industrial enzymes), and environmental protection (e.g., pollutant-degrading enzymes).
- Democratizing R&D Tools: Enables research teams with limited resources to utilize advanced protein design tools, promoting the democratization of scientific discovery.
- Advancing Methodologies: The "AI prediction + experimental validation" closed-loop we explore provides a new paradigm for synthetic biology research itself.
Potential Risks & Mitigation Measures
(1) Risk One: Technology Misuse – Designing Harmful Biological Agents
Risk Description: Malicious actors could potentially use the platform to design toxins or enhance pathogen virulence.
Mitigation Measures:
- ① Source Screening: Implement strict biosecurity screening on databases used for training and optimization.
- ② Built-in Constraints: Set safety rules within the model algorithms to reject generation of sequences highly similar to known harmful proteins (a minimal sketch follows this list).
- ③ Structural Verification: Introduce AlphaFold 3 for structural comparison to identify harmful functional proteins disguised through "sequence recoding".
- ④ Responsible Open-Sourcing: Explicitly prohibit malicious use in open-source licenses and provide full models only to responsible research institutions.
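As a minimal sketch of the built-in constraint in item ②, the code below rejects generated sequences whose similarity to any entry in a harmful-protein list exceeds a threshold. `difflib.SequenceMatcher` and the 0.8 cutoff are crude placeholders for a proper aligner (e.g., BLAST against a curated database of sequences of concern).

```python
from difflib import SequenceMatcher

# difflib's ratio is only a crude stand-in for pairwise sequence identity;
# a real deployment would use a proper aligner against a curated database
# of sequences of concern. The threshold is illustrative.
SIMILARITY_THRESHOLD = 0.8


def too_similar(candidate: str, harmful: list,
                threshold: float = SIMILARITY_THRESHOLD) -> bool:
    """Return True if the generated sequence closely resembles any known harmful protein."""
    return any(
        SequenceMatcher(None, candidate, ref).ratio() >= threshold
        for ref in harmful
    )


def filter_generated(designs: list, harmful: list) -> list:
    """Reject generated sequences that trip the similarity constraint;
    everything else proceeds to downstream structural verification."""
    return [seq for seq in designs if not too_similar(seq, harmful)]
```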
(2) Risk Two: Unpredictability of AI Models
Risk Description: The "black box" nature of AI models could lead to designs that behave unpredictably, are structurally unstable, or fail to function.
Mitigation Measures:
- ① Wet-Lab Validation: Adhere to the DBTL cycle; all key designs must be validated through small-scale wet-lab experiments, the ultimate "real-world" test.
- ② Computational Verification: Use structure prediction and physicochemical property calculation as mandatory filters for design outputs (see the sketch after this list).
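A minimal sketch of the physicochemical filter in item ② is shown below, using Biopython's ProtParam module. The instability and GRAVY cutoffs are illustrative defaults that would need calibration against project-specific targets, and sequences containing non-standard residues would require additional handling.

```python
# Requires Biopython (pip install biopython). Thresholds are illustrative defaults,
# not validated cutoffs; each project would calibrate them against its own targets.
from Bio.SeqUtils.ProtParam import ProteinAnalysis


def passes_property_filter(
    sequence: str,
    max_instability: float = 40.0,                   # >40 is conventionally read as "likely unstable"
    gravy_range: tuple = (-2.0, 2.0),                # crude hydropathy sanity window
) -> tuple:
    """Physicochemical gate applied to every AI design output
    before it is considered for wet-lab validation."""
    analysis = ProteinAnalysis(sequence)
    props = {
        "molecular_weight": analysis.molecular_weight(),
        "isoelectric_point": analysis.isoelectric_point(),
        "instability_index": analysis.instability_index(),
        "gravy": analysis.gravy(),
    }
    ok = (
        props["instability_index"] <= max_instability
        and gravy_range[0] <= props["gravy"] <= gravy_range[1]
    )
    return ok, props
```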
(3) Risk Three: Potential Coupling Risk with Automated Experimentation
Risk Description: If the platform were directly connected to automated experimental equipment in the future, it could potentially be hijacked for remote, automated synthesis of harmful substances.
Mitigation Measures: PROTEUS is currently positioned strictly as a "design tool," not an "execution tool." All experimental steps are performed by personnel in physical laboratories, ensuring "human-in-the-loop" oversight.
Risk-Benefit Conclusion
We conclude that, with the multi-layered, defense-in-depth safety measures described above in place, the significant social benefits and scientific value of the PROTEUS project far outweigh its potential risks. Through proactive governance and transparent communication, we have reduced these risks to an acceptably low level. PROTEUS represents a responsible technological exploration within the AI for Science field, and we are committed to ensuring that it remains a positive force for societal progress.