Overview

In BEAM, we proposed, for the first time, a methodological framework for the quantitative assessment of ethical risks in synthetic biology. By identifying the core risk dimensions, applying Structural Equation Modeling (SEM) to extract influencing factors, and constructing a multi-layer Bayesian network, we moved beyond the traditional limitations of qualitative analysis. Building on this foundation, we developed a companion software application, BEAMer, which presents the complex assessment process through an interactive interface, thereby turning the theoretical framework into an operational tool. This integration of model and software not only enhances the usability and generalizability of the model but also provides a novel solution for the systematic evaluation of ethical risks in synthetic biology.

BEAMer adopts a decoupled front-end/back-end architecture. The back end incorporates a Bayesian network inference engine based on pgmpy, while the front end uses HTML5, CSS3, and JavaScript to provide interaction and visualization. By completing an evaluation form, users obtain probabilistic risk estimates across multiple dimensions, together with explanatory insights. In this way, BEAMer transforms the complex process of ethical risk analysis into a transparent, reliable, and user-oriented web application. The entire source code is available at https://gitlab.igem.org/2025/software-tools/cjuh-jlu-china.

Introduction

In constructing BEAM, we first conducted literature research and analyzed ethical guidelines to identify the core ethical risk dimensions in synthetic biology. We then employed Structural Equation Modeling (SEM) to quantitatively measure the causal relationships among these dimensions, thereby assessing the strength of their impacts.
Based on the SEM results, we designed a hierarchical Bayesian network structure and specified conditional probability distributions for each node. This allowed the risk dimensions to propagate probabilistically within the network, enabling quantitative risk assessment. Finally, as an initial validation, we demonstrated inference on representative project scenarios, ensuring that the model outputs are both reasonable and interpretable. This progression from abstract analysis to quantitative modeling laid a solid theoretical foundation for the development of BEAMer.

After completing the BEAM model, we realized that the key to broadening its application lies in lowering the barriers to use and improving the user experience. To this end, we developed a web-based application derived from the BEAM model: BEAMer. BEAMer offers a clean, intuitive interface and user-friendly interactions, and it requires neither complicated installation procedures nor specialized training, keeping the learning cost of BEAM's risk assessment functions to a minimum.

In BEAMer, ethical risk analysis is highly automated. Users only need to complete the corresponding Risk Chart, after which the system transmits the data to the back end for computation. The results are returned to the front end as clear, interpretable scores, completing an end-to-end workflow. Within iGEM projects in particular, BEAMer can help wet-lab teams efficiently identify potential ethical and safety risks before experiments, enabling timely adjustments and optimization. Our goal is not only to remove the obstacles to applying BEAM, but also to make cutting-edge ethical assessment tools more accessible to researchers and the wider public.

Architecture

BEAMer is a synthetic biology ethical risk assessment software tool based on BEAM.
It adopts a front-end/back-end separated architecture, providing an intuitive user interface and strong probabilistic reasoning capabilities, so that users can complete complex ethical risk assessments and visualization analyses in a simple interactive environment. We designed two API endpoints based on the Flask framework, integrating a Bayesian network engine built with pgmpy and networkx that can quickly perform complex modeling and reasoning tasks. With the scientific computing power of numpy, scipy, and pandas, as well as the performance optimizations of joblib and numexpr, BEAMer remains lightweight and efficient when processing data. The front end is built on HTML5, CSS3, and JavaScript, creating a responsive, modern interface. Card layouts, gradient colors, and animated transitions make information presentation clearer, while network diagrams and graphical risk indicators give users intuitive visual feedback. By combining intelligent reasoning with user-friendly interaction, BEAMer turns synthetic biology ethical risk assessment from complex model computation into a usable tool, offering transparent and reliable decision support for researchers and the public. The overall technical architecture of BEAMer is shown below:
Architecture diagram of BEAMer
The core advantage of this collaborative architecture lies in the complete decoupling of the front end and back end through standardized APIs: the front end can call the inference services without relying on the back end's implementation details, while the back end can independently upgrade and optimize its computational models. This design significantly enhances the system's flexibility, maintainability, and scalability, laying a solid foundation for subsequent functional iterations and multi-platform adaptations. In terms of overall performance, this architecture ensures the robust operation and computational efficiency of the Bayesian network inference system, guaranteeing the accuracy of complex probabilistic reasoning, while carefully designed front-end interactions give users an intuitive and friendly experience, uniting technical rigor with usability.

Implementation

How it works

This section introduces the construction of the BEAM ethical risk inference API in two key steps: model structure design and inference method implementation.

STEP 1: Model Structure Design

Determine the network structure (Bayesian Network Structure)

First, based on the BEAM theoretical framework, we identify the causal relationships between the variables. We use DiscreteBayesianNetwork to convert these dependencies into a directed acyclic graph (DAG):
Top-level goal: EthicalRisks
Middle layer: indicators in five major dimensions such as ecological security, technological governance, etc.
Bottom-layer factors: specific observable variables, such as natural_environment, biological_warfare, etc.
Code Block
Python
from pgmpy.models import DiscreteBayesianNetwork

model = DiscreteBayesianNetwork([
    ('Ecological_Security', 'EthicalRisks'),
    ('natural_environment', 'Ecological_Security'),
    ('ecosystem_balance', 'Ecological_Security'),
    ('genetic_abnormality', 'Ecological_Security'),
    ('natural_gene_pool', 'Ecological_Security')])
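Before an edge list like the one above is handed to pgmpy, it can be sanity-checked for acyclicity. The sketch below is illustrative and not part of the BEAMer codebase: it uses only the standard library's graphlib as a lightweight stand-in for the structural part of pgmpy's own validation.

```python
from graphlib import TopologicalSorter, CycleError

# The ecological-security slice of the BEAM DAG, as (parent, child) edges.
edges = [
    ('Ecological_Security', 'EthicalRisks'),
    ('natural_environment', 'Ecological_Security'),
    ('ecosystem_balance', 'Ecological_Security'),
    ('genetic_abnormality', 'Ecological_Security'),
    ('natural_gene_pool', 'Ecological_Security'),
]

def is_acyclic(edge_list):
    """Return True if the directed edge list forms a DAG."""
    predecessors = {}
    for parent, child in edge_list:
        predecessors.setdefault(child, set()).add(parent)
    try:
        # static_order() raises CycleError on any directed cycle
        list(TopologicalSorter(predecessors).static_order())
        return True
    except CycleError:
        return False

print(is_acyclic(edges))  # → True
```

A cyclic edge list such as `[('A', 'B'), ('B', 'A')]` would return False, which DiscreteBayesianNetwork would likewise reject.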
Design the Conditional Probability Tables

Based on the data results obtained from BEAM, define the Conditional Probability Table (CPT) for each node:
Code Block
Python
from pgmpy.factors.discrete import TabularCPD

cpd_ecosecurity = TabularCPD(
    'Ecological_Security', 2,
    [
        [1, 0.766, 0.739, 0.505, 0.752, 0.5184, 0.491, 0.2572,
         0.7428, 0.509, 0.4816, 0.248, 0.495, 0.261, 0.234, 0],
        [0, 0.234, 0.261, 0.495, 0.248, 0.4816, 0.509, 0.7428,
         0.2572, 0.491, 0.5184, 0.752, 0.505, 0.739, 0.766, 1]
    ],
    evidence=['natural_environment',
              'genetic_abnormality', 'ecosystem_balance', 'natural_gene_pool'],
    evidence_card=[2, 2, 2, 2])
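A CPT like this one is well-formed only if each of its 16 columns (one per combination of the four binary parents) is a valid probability distribution. The stand-alone check below is an illustrative sketch, not BEAMer code, using only the standard library:

```python
# The two rows of the Ecological_Security CPT (state 0 and state 1),
# one column per combination of the four binary parent states.
cpt_rows = [
    [1, 0.766, 0.739, 0.505, 0.752, 0.5184, 0.491, 0.2572,
     0.7428, 0.509, 0.4816, 0.248, 0.495, 0.261, 0.234, 0],
    [0, 0.234, 0.261, 0.495, 0.248, 0.4816, 0.509, 0.7428,
     0.2572, 0.491, 0.5184, 0.752, 0.505, 0.739, 0.766, 1],
]

def columns_are_distributions(rows, tol=1e-9):
    """Each column must be non-negative and sum to 1."""
    return all(
        abs(sum(col) - 1.0) < tol and all(p >= 0 for p in col)
        for col in zip(*rows)
    )

print(columns_are_distributions(cpt_rows))  # → True
```

pgmpy's check_model() performs this normalization check (among others) automatically when the CPDs are attached to the network.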
Assemble and validate the model

Add all CPTs to the network and use check_model() to validate the legality of the model structure.
Code Block
Python
model.add_cpds(cpd_eco_balance, ...)
assert model.check_model()
This ensures the correctness of the network in terms of probabilistic consistency and structural integrity.

STEP 2: Inference Method Implementation

Discrete Situation (Basic version)

Perform uncertainty inference on the defined Bayesian network and its CPTs:
Code Block
Python
from pgmpy.inference import VariableElimination

def predict_ethical_risks(evidence_dict):
    # Example: {'natural_environment': 1, 'genetic_abnormality': 0, ...}
    # 0 = False, 1 = True
    model = build_model()
    inference = VariableElimination(model)
    result = inference.query(variables=['EthicalRisks'], evidence=evidence_dict)
    return result
Example of output result:
Code Block
Python
if __name__ == "__main__":
    evidence = {
        'natural_environment': 1,
        'genetic_abnormality': 0,
        'biological_warfare': 1,
        'biological_weapon': 0
    }
    res = predict_ethical_risks(evidence)
    print(res)
Continuous Situation (PRO version)

We convert each score value (0-100) into soft evidence. This mechanism allows the model to incorporate vague and uncertain inputs.
Code Block
Python
def score_to_soft_evidence(score):
    p1 = score / 100.0
    return {0: 1 - p1, 1: p1}
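To make the mapping concrete, the conversion reads a score linearly as the probability of the high-risk state (restated here as a self-contained snippet):

```python
def score_to_soft_evidence(score):
    p1 = score / 100.0
    return {0: 1 - p1, 1: p1}

# A score of 75 is read as a 75% chance of the high-risk state.
print(score_to_soft_evidence(75))  # → {0: 0.25, 1: 0.75}
print(score_to_soft_evidence(0))   # → {0: 1.0, 1: 0.0}
```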
Use Likelihood Weighted Sampling to implement the inference process:
Given partial evidence (in the form of soft evidence).
Large-scale sampling and statistics of the target variable EthicalRisks distribution.
Code Block
Python
import numpy as np
from pgmpy.sampling import BayesianModelSampling
from pgmpy.factors.discrete import State

def predict_ethical_risks_sampling(model, soft_evidence_dict, num_samples=5000):
    sampler = BayesianModelSampling(model)
    evidence_states = []
    for node, dist in soft_evidence_dict.items():
        # Draw a hard state for each node according to its soft-evidence distribution
        chosen_state = np.random.choice(list(dist.keys()), p=list(dist.values()))
        evidence_states.append(State(node, chosen_state))
    samples = sampler.likelihood_weighted_sample(evidence=evidence_states, size=num_samples)
    unique, counts = np.unique(samples['EthicalRisks'], return_counts=True)
    probs = {state: count / num_samples for state, count in zip(unique, counts)}
    return probs
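The counting step at the heart of this function — estimating a distribution from sample frequencies — can be illustrated without pgmpy. The sketch below is a standard-library stand-in for the sampler, not BEAMer code; it draws from a known soft-evidence distribution and recovers it from counts:

```python
import random
from collections import Counter

def estimate_distribution(dist, num_samples=10000, seed=42):
    """Estimate a discrete distribution from sampled frequencies,
    mirroring the counting step in predict_ethical_risks_sampling."""
    rng = random.Random(seed)  # seeded for reproducibility
    states, probs = zip(*dist.items())
    draws = rng.choices(states, weights=probs, k=num_samples)
    counts = Counter(draws)
    return {state: counts[state] / num_samples for state in states}

soft = {0: 0.25, 1: 0.75}
est = estimate_distribution(soft)
print(est)  # frequencies close to {0: 0.25, 1: 0.75}
```

With 10,000 samples, the estimated frequencies typically fall within about one percentage point of the true probabilities, which is why the PRO endpoint uses a large num_samples.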
Example of output result:
Code Block
Python
if __name__ == "__main__":
    model = build_model()
    score_inputs = {
        'natural_environment': 75.2,
        'genetic_abnormality': 50.6,
        'biological_warfare': 90,
        'democratic_review': 64,
    }
    soft_evidence = {node: score_to_soft_evidence(score)
                     for node, score in score_inputs.items()}
    result = predict_ethical_risks_sampling(model, soft_evidence, num_samples=10000)
    print("EthicalRisks :", result)
In the end, we obtain the probability distribution for the target node EthicalRisks. This shows that the model can integrate multi-dimensional ethical risk factors into quantified risk probabilities. Through the above steps, we completed the BEAM API computation core, from theoretical model to structural implementation to inference method.

For developers

Basic version - Discrete Bayesian Network Prediction API

Endpoint: /api/predict (HTTP POST)

Function Description

This API performs inference over the discrete model using the risk factors provided by the user, returning the probability distribution of five core dimensions (ecological security, bioweapons risk, technological governance, public acceptance, regulatory framework) and of the overall ethical risk. Values for all risk factors must be filled in for the inference to execute; if any risk factor is missing, an error message is returned.

Input Parameters

The request body is in JSON format, structured as follows:
Code Block
JSON
{
  "evidence": {
    "natural_environment": 1,
    "genetic_abnormality": 0,
    "natural_gene_pool": 1,
    "ecosystem_balance": 1,
    "biological_warfare": 0,
    "biological_weapon": 0,
    "technology_leakage": 1,
    "technological_equity": 0,
    "intergenerational_equity": 1,
    "technology_limitation": 0,
    "ethical_preassessment": 1,
    "Minimization_of_harm_to_the_public": 1,
    "public_acceptance": 0,
    "democratic_review": 1,
    "legal_regulation": 1,
    "academic_freedom": 0,
    "ip_rights": 1
  }
}
evidence: a dictionary whose keys are the names of risk factors and whose values are the observed states of those factors. State values: 0 indicates that the factor is in a low-risk/no state; 1 indicates that the factor is in a high-risk/yes state.

Output Results

The response is in JSON format; an example structure is as follows:
Code Block
JSON
{
  "results": {
    "EthicalRisks": {
      "low_risk": 0.72,
      "high_risk": 0.28
    },
    "Ecological_Security": {
      "low_risk": 0.65,
      "high_risk": 0.35
    },
    "Biological_weapon_Risk": {
      "low_risk": 0.88,
      "high_risk": 0.12
    },
    "Technology_Governance": {
      "low_risk": 0.56,
      "high_risk": 0.44
    },
    "Public_Acceptance": {
      "low_risk": 0.81,
      "high_risk": 0.19
    },
    "Regulatory_Framework": {
      "low_risk": 0.74,
      "high_risk": 0.26
    }
  },
  "evidence": {
    "natural_environment": 1,
    "genetic_abnormality": 0,
    "...": "..."
  },
  "success": true
}
Field Description:

results: inference results, containing the risk probabilities for each key dimension. Each dimension contains:
high_risk: the probability of the dimension being in a high-risk state;
low_risk: the probability of the dimension being in a low-risk state;
high_risk + low_risk = 1, forming a binary probability distribution.
Note: in subsequent risk level determinations, only the value of high_risk is used; low_risk is merely its complement, provided for reference. To avoid confusion, high_risk could also be renamed risk_probability to convey its meaning more precisely.
evidence: echoes the factor values supplied by the user.
success: a boolean indicating whether the call succeeded.

Usage example

Request:
Code Block
JSON
POST /api/predict
Content-Type: application/json

{
  "evidence": {
    "natural_environment": 1,
    "genetic_abnormality": 0,
    "natural_gene_pool": 1,
    "ecosystem_balance": 1,
    "biological_warfare": 0,
    "biological_weapon": 0,
    "technology_leakage": 1,
    "technological_equity": 0,
    "intergenerational_equity": 1,
    "technology_limitation": 0,
    "ethical_preassessment": 1,
    "Minimization_of_harm_to_the_public": 1,
    "public_acceptance": 0,
    "democratic_review": 1,
    "legal_regulation": 1,
    "academic_freedom": 0,
    "ip_rights": 1
  }
}
Response:
Code Block
JSON
{
  "results": {
    "EthicalRisks": {"low_risk": 0.68, "high_risk": 0.32},
    "Ecological_Security": {"low_risk": 0.59, "high_risk": 0.41},
    "Biological_weapon_Risk": {"low_risk": 0.92, "high_risk": 0.08},
    "Technology_Governance": {"low_risk": 0.61, "high_risk": 0.39},
    "Public_Acceptance": {"low_risk": 0.79, "high_risk": 0.21},
    "Regulatory_Framework": {"low_risk": 0.71, "high_risk": 0.29}
  },
  "evidence": {
    "natural_environment": 1,
    "genetic_abnormality": 0,
    "...": "..."
  },
  "success": true
}
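On the client side, the field description above notes that only high_risk drives the risk-level determination. The helper below sketches how a caller might bucket each dimension from a response; the three-band thresholds are our own illustrative assumptions, not the cutoffs BEAMer's Guide actually uses.

```python
# Hypothetical client-side post-processing of an /api/predict response.
# The thresholds below are illustrative assumptions only.
def risk_levels(response, low=0.33, high=0.66):
    levels = {}
    for dimension, dist in response["results"].items():
        p = dist["high_risk"]  # only high_risk drives the determination
        if p < low:
            levels[dimension] = "low"
        elif p < high:
            levels[dimension] = "medium"
        else:
            levels[dimension] = "high"
    return levels

sample_response = {
    "results": {
        "EthicalRisks": {"low_risk": 0.68, "high_risk": 0.32},
        "Ecological_Security": {"low_risk": 0.59, "high_risk": 0.41},
        "Biological_weapon_Risk": {"low_risk": 0.92, "high_risk": 0.08},
    },
    "success": True,
}

print(risk_levels(sample_response))
# → {'EthicalRisks': 'low', 'Ecological_Security': 'medium', 'Biological_weapon_Risk': 'low'}
```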
Pro Version - Continuous Bayesian Network Prediction API

Endpoint: /api/predict_pro (HTTP POST)

Function Description

This API performs inference for the Pro version (continuous inputs). Unlike the basic version, the Pro version accepts continuous scores on a 0-100 scale and converts them into soft-evidence probability distributions, which are then used to calculate the high/low risk probabilities for each risk dimension through sampling-based inference.

Input Parameters

Request format: application/json
Code Block
JSON
{
  "scores": {
    "natural_environment": 75,
    "genetic_abnormality": 40,
    "biological_warfare": 60,
    "technology_leakage": 30,
    "public_acceptance": 85,
    "democratic_review": 50
  }
}
Note: in scores, each key names one of the model's input nodes and each value is a score from 0-100 (integer or float). Each score is transformed into soft evidence; for example, 75 → {0: 0.25, 1: 0.75}.

Output Results

The response is in JSON format; an example structure is as follows:
Code Block
JSON
{
  "results": {
    "EthicalRisks": {
      "low_risk": 0.31,
      "high_risk": 0.69
    },
    "Ecological_Security": {
      "low_risk": 0.42,
      "high_risk": 0.58
    },
    "Biological_weapon_Risk": {
      "low_risk": 0.77,
      "high_risk": 0.23
    },
    "Technology_Governance": {
      "low_risk": 0.49,
      "high_risk": 0.51
    },
    "Public_Acceptance": {
      "low_risk": 0.15,
      "high_risk": 0.85
    },
    "Regulatory_Framework": {
      "low_risk": 0.62,
      "high_risk": 0.38
    }
  },
  "scores": {
    "natural_environment": 75,
    "genetic_abnormality": 40,
    "biological_warfare": 60,
    "technology_leakage": 30,
    "public_acceptance": 85,
    "democratic_review": 50
  },
  "success": true
}
Usage Example

Request:
Code Block
JSON
POST /api/predict_pro
Content-Type: application/json

{
  "scores": {
    "natural_environment": 75,
    "genetic_abnormality": 40,
    "biological_warfare": 60,
    "technology_leakage": 30,
    "public_acceptance": 85,
    "democratic_review": 50
  }
}
Response:
Code Block
JSON
{
  "results": {
    "EthicalRisks": {"low_risk": 0.31, "high_risk": 0.69},
    "Ecological_Security": {"low_risk": 0.42, "high_risk": 0.58},
    "Biological_weapon_Risk": {"low_risk": 0.77, "high_risk": 0.23},
    "Technology_Governance": {"low_risk": 0.49, "high_risk": 0.51},
    "Public_Acceptance": {"low_risk": 0.15, "high_risk": 0.85},
    "Regulatory_Framework": {"low_risk": 0.62, "high_risk": 0.38}
  },
  "scores": {
    "natural_environment": 75,
    "genetic_abnormality": 40,
    "biological_warfare": 60,
    "technology_leakage": 30,
    "public_acceptance": 85,
    "democratic_review": 50
  },
  "success": true
}
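Since the basic endpoint rejects requests with missing factors and the pro endpoint expects numeric scores in [0, 100], the back end needs a validation step before inference. The sketch below shows one way such validation might look for a /api/predict_pro payload; the function name and error messages are our own assumptions, not BEAMer's actual implementation.

```python
# Hypothetical validation step for an /api/predict_pro request body.
# Function name and error messages are illustrative assumptions.
def validate_scores(payload):
    """Return (ok, error_message) for a {'scores': {...}} request body."""
    scores = payload.get("scores")
    if not isinstance(scores, dict) or not scores:
        return False, "missing 'scores' object"
    for node, value in scores.items():
        if not isinstance(value, (int, float)) or isinstance(value, bool):
            return False, f"score for '{node}' must be a number"
        if not 0 <= value <= 100:
            return False, f"score for '{node}' must be in [0, 100]"
    return True, ""

ok, err = validate_scores({"scores": {"natural_environment": 75, "democratic_review": 50}})
print(ok, err)   # → True (empty error message)

ok, err = validate_scores({"scores": {"biological_warfare": 120}})
print(ok, err)   # → False score for 'biological_warfare' must be in [0, 100]
```

Rejecting bad input here, before model construction and sampling, keeps the expensive inference path free of error handling.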
System Workflow

The operating process of BEAMer is divided into four stages:
In the initialization stage, the front end loads the page, displays the Bayesian network structure and generates an assessment form containing 17 risk factor questions;
In the user interaction stage, users simply fill out the form and then initiate the assessment request by clicking the "Calculate Ethical Risk" button;
In the computation processing stage, the front end organizes the form content into an evidence dictionary and submits it to the back-end API via AJAX. The back end constructs the Bayesian network model based on this and performs variable elimination reasoning to output the risk probabilities for each dimension;
Finally, in the result display stage, the front end generates risk assessment cards based on the returned data and visually presents different levels of risk through progress bars and color coding, along with an explanatory guide for risk levels to help users understand the results.
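The form-to-evidence step in the computation stage can be sketched in a few lines: each yes/no answer maps onto the 0/1 state the /api/predict endpoint expects, and the result is serialized as the JSON payload. The answer values below are illustrative, and this is a conceptual stand-in for the front end's JavaScript, written in Python for consistency with the rest of this page.

```python
import json

# Mirror of the front end's form-to-evidence step: map each yes/no answer
# onto the 0/1 state the /api/predict endpoint expects.
form_answers = {
    "natural_environment": "yes",
    "genetic_abnormality": "no",
    "biological_warfare": "no",
    "public_acceptance": "no",
}

evidence = {factor: 1 if answer == "yes" else 0
            for factor, answer in form_answers.items()}
payload = json.dumps({"evidence": evidence})
print(payload)
```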
Operating process of BEAMer
Web Client Structure

In our web client implementation, we have created a user interface to access the server's endpoints. The UI is primarily divided into four parts: Structure, Chart, Result, and Guide.
STRUCTURE In this section, you can view the architecture of all our evaluation dimensions and understand the impact relationships between these dimensions.
CHART Fill in this section according to the characteristics of the synthetic biology event being assessed; these input observations are used to predict risks.
RESULTS Displays the ethical risk prediction results.
GUIDE Provides different instructions/suggestions based on the varying levels of prediction scores.
The entire source code is freely available at https://gitlab.igem.org/2025/software-tools/cjuh-jlu-china.

Usage Instructions

We offer two versions to meet the needs of different audiences:
Basic Version Perfect for those curious about ethical assessment in synthetic biology but without prior expertise. It provides a clear and accessible introduction to the field.
PRO Version Designed for professionals in synthetic biology or those with experience in ethical assessment, empowering you with deeper insights and advanced tools for comprehensive evaluation.
Basic Version You can follow the simple instructions at the top of the webpage to begin your journey of ethical risk assessment in synthetic biology.
For the synthetic biology event you wish to evaluate, please analyze its characteristics with reference to the assessment dimensions outlined in the Dimension Network Structure section.
Fill out the assessment form on the right side
Click "Calculate Ethical Risk" to get detailed results
PRO Version You may also follow the simple instructions at the top of the webpage to begin a more refined and precise ethical risk assessment in synthetic biology.
Conclusion

BEAMer is an interactive web tool based on the BEAM model, using a front-end/back-end separated architecture to achieve efficient Bayesian inference. It delivers the quantitative assessment of ethical risks in synthetic biology, transforming a complex theoretical methodology into an accessible tool. In application scenarios such as iGEM, BEAMer can help teams identify and interpret potential ethical and safety risks before wet-lab experiments, thereby guiding more robust experimental design and scientific decision-making. Looking ahead, we expect the system to be adopted by more teams as a reference during the project design stage, fostering the responsible development of synthetic biology. Beyond that, BEAMer turns abstract theories of ethical risk quantification into a standardized, visualized, tool-based workflow, building a bridge between methodology and application. It introduces a new paradigm for ethical risk assessment in synthetic biology and explores a transferable technological pathway for advancing ethical governance in the life sciences.