In the BEAM model, we proposed, for the first time, a methodological framework for the quantitative assessment of ethical risks in synthetic biology. By identifying the core risk dimensions, applying Structural Equation Modeling (SEM) to extract influencing factors, and constructing a multi-layer Bayesian network, we moved beyond the limitations of traditional qualitative analysis.
Building on this foundation, we developed a companion software application, BEAMer, which presents the complex assessment process through an interactive interface, thereby turning the theoretical framework into an operationalized tool. This integration of model and software not only enhances the usability and generalizability of the model but also provides a novel solution for the systematic evaluation of ethical risks in synthetic biology.
BEAMer adopts a decoupled front-end/back-end architecture. The back end incorporates a Bayesian network inference engine based on pgmpy, while the front end uses HTML5, CSS3, and JavaScript for interaction and visualization. By completing an evaluation form, users obtain probabilistic risk estimates across multiple dimensions, together with explanatory insights.
Consequently, BEAMer transforms the complex process of ethical risk analysis into a transparent, reliable, and user-oriented web application.
The entire source code is available at the repository https://gitlab.igem.org/2025/software-tools/cjuh-jlu-china.
Introduction
In the construction of the BEAM, we first conducted literature research and ethical guideline analysis to identify the core ethical risk dimensions in synthetic biology. We then employed Structural Equation Modeling (SEM) to quantitatively measure the causal relationships among these dimensions, thereby scientifically assessing the strength of their impacts. Based on the SEM results, we designed a hierarchical Bayesian network structure and specified conditional probability distributions for each node. This allowed the risk dimensions to propagate probabilistically within the network, enabling quantitative risk assessment. Finally, we demonstrated inference through representative project scenarios as an initial validation, ensuring that the model outputs are both reasonable and interpretable.
This process of turning abstract analysis into quantitative modeling laid a solid theoretical foundation for the development of BEAMer.
After completing the BEAM model, we gradually realized that the key to broadening its application lies in lowering the barriers to usage and enhancing the user experience. To this end, we developed a new web-based application derived from the BEAM model: BEAMer. BEAMer offers a clean, intuitive interface and user-friendly interactions, and it requires neither complicated installation procedures nor specialized system training. These features keep the learning cost of using BEAM's risk assessment functions to a minimum.
In BEAMer, ethical risk analysis is highly automated. Users only need to complete the corresponding Risk Chart, after which the system transmits the data to the back end for computation. The results are then returned to the front end in the form of clear, interpretable scores, completing an end-to-end workflow.
In particular, within iGEM projects, BEAMer can assist wet-lab teams in efficiently identifying potential ethical and safety risks prior to experiments, enabling timely adjustments and optimization. Our goal is not only to remove the obstacles to applying BEAM, but also to make cutting-edge ethical assessment tools more accessible to researchers and the wider public, thereby benefiting more people.
Architecture
BEAMer is a synthetic biology ethical risk assessment software tool based on BEAM. It adopts a decoupled front-end/back-end architecture, providing an intuitive user interface and strong probabilistic reasoning capabilities, and allowing users to complete complex ethical risk assessments and visualization analyses in a simple interactive environment.
We designed two API endpoints based on the Flask framework, integrating a Bayesian network engine built with pgmpy and networkx that can quickly perform complex modeling and reasoning tasks. With the scientific computing power of numpy, scipy, and pandas, as well as the performance optimization mechanisms of joblib and numexpr, BEAMer is lightweight and efficient when processing data.
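The wiring of these two endpoints can be sketched as follows. This is a minimal, hypothetical illustration rather than BEAMer's actual source: the two inference functions are stubs standing in for the pgmpy-based engine, and the payload shapes follow the `/api/predict` and `/api/predict_pro` examples shown later in this section.

```python
# Hypothetical sketch of the two Flask endpoints (NOT BEAMer's real code).
# run_hard_inference / run_soft_inference are stubs for the pgmpy engine.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_hard_inference(evidence):
    """Stub for exact inference (variable elimination) on the BEAM network."""
    return {"EthicalRisks": {"low_risk": 0.5, "high_risk": 0.5}}

def run_soft_inference(scores):
    """Stub for likelihood-weighted sampling under soft evidence."""
    return {"EthicalRisks": {"low_risk": 0.5, "high_risk": 0.5}}

@app.route("/api/predict", methods=["POST"])
def predict():
    # Hard (binary) evidence, as in the /api/predict request example below.
    payload = request.get_json(force=True)
    evidence = payload.get("evidence", {})
    return jsonify({"results": run_hard_inference(evidence),
                    "evidence": evidence,
                    "success": True})

@app.route("/api/predict_pro", methods=["POST"])
def predict_pro():
    # Continuous 0-100 scores, as in the /api/predict_pro request example below.
    payload = request.get_json(force=True)
    scores = payload.get("scores", {})
    return jsonify({"results": run_soft_inference(scores),
                    "scores": scores,
                    "success": True})
```

In the real application, the stubs would be replaced by the inference routines described in "How it works".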
The front end is built with HTML5, CSS3, and JavaScript, creating a responsive, modern interface. Card layouts, gradient colors, and animated transitions make information presentation clearer, while network diagrams and graphical risk indicators provide users with intuitive visual feedback. Through the combination of intelligent reasoning and user-friendly interaction, BEAMer turns synthetic biology ethical risk assessment from complex model computation into a usable tool, offering transparent and reliable decision support for researchers and the public.
The overall technical architecture diagram of BEAMer is shown below:
Architecture diagram of BEAMer
How it works
This section introduces the construction process of the BEAM ethical risk inference API. It is divided into two key steps: Model Structure Design and Inference Method Implementation.
STEP 1: Model Structure Design
Determine the network structure (Bayesian Network Structure)
First, based on the BEAM theoretical framework, we identify the causal relationships between the variables. We use DiscreteBayesianNetwork to convert these dependencies into a directed acyclic graph (DAG).
```python
# Convert the causal dependencies into a directed acyclic graph (excerpt).
from pgmpy.models import DiscreteBayesianNetwork

model = DiscreteBayesianNetwork([
    ('Ecological_Security', 'EthicalRisks'),
    ('natural_environment', 'Ecological_Security'),
    ('ecosystem_balance', 'Ecological_Security'),
    ('genetic_abnormality', 'Ecological_Security'),
    ('natural_gene_pool', 'Ecological_Security')])
```
Each node is then assigned a conditional probability distribution (CPD). For example, the CPD of Ecological_Security given its four parents (2^4 = 16 parent-state combinations):

```python
from pgmpy.factors.discrete import TabularCPD

cpd_ecosecurity = TabularCPD(
    'Ecological_Security', 2,
    [
        [1, 0.766, 0.739, 0.505, 0.752, 0.5184, 0.491, 0.2572,
         0.7428, 0.509, 0.4816, 0.248, 0.495, 0.261, 0.234, 0],
        [0, 0.234, 0.261, 0.495, 0.248, 0.4816, 0.509, 0.7428,
         0.2572, 0.491, 0.5184, 0.752, 0.505, 0.739, 0.766, 1]
    ],
    evidence=['natural_environment',
              'genetic_abnormality', 'ecosystem_balance', 'natural_gene_pool'],
    evidence_card=[2, 2, 2, 2])
```
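As an illustrative sanity check (not part of BEAMer itself), every column of a CPD must sum to 1, i.e. P(low) + P(high) = 1 for each parent-state combination; pgmpy's `check_model()` enforces exactly this. The values below are copied from the CPD above:

```python
# Illustrative check: each of the 16 columns of the Ecological_Security
# CPD is a valid probability distribution over the node's two states.
import numpy as np

cpd_values = np.array([
    [1, 0.766, 0.739, 0.505, 0.752, 0.5184, 0.491, 0.2572,
     0.7428, 0.509, 0.4816, 0.248, 0.495, 0.261, 0.234, 0],
    [0, 0.234, 0.261, 0.495, 0.248, 0.4816, 0.509, 0.7428,
     0.2572, 0.491, 0.5184, 0.752, 0.505, 0.739, 0.766, 1],
])

assert cpd_values.shape == (2, 16)               # 2 states x 16 parent combos
assert np.allclose(cpd_values.sum(axis=0), 1.0)  # columns sum to 1
```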
After all CPDs are registered, the network is validated:

```python
model.add_cpds(cpd_eco_balance, ...)
assert model.check_model()
```
STEP 2: Inference Method Implementation

Given hard (binary) evidence, exact inference over the network is performed with variable elimination:

```python
from pgmpy.inference import VariableElimination

def predict_ethical_risks(evidence_dict):
    """Query P(EthicalRisks) given hard evidence.

    Example: {'natural_environment': 1, 'genetic_abnormality': 0, ...}
    (0 = False, 1 = True)
    """
    model = build_model()
    inference = VariableElimination(model)
    result = inference.query(variables=['EthicalRisks'], evidence=evidence_dict)
    return result
```
```python
if __name__ == "__main__":
    evidence = {
        'natural_environment': 1,
        'genetic_abnormality': 0,
        'biological_warfare': 1,
        'biological_weapon': 0
    }
    res = predict_ethical_risks(evidence)
    print(res)
```
To support the continuous 0-100 scores collected by the Risk Chart, each score is first converted into a soft-evidence distribution over the node's two states:

```python
def score_to_soft_evidence(score):
    """Map a 0-100 score to a distribution over the binary node states."""
    p1 = score / 100.0
    return {0: 1 - p1, 1: p1}
```
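For illustration, the mapping sends a score of 75 to a 0.75 probability of the high (1) state; the function is repeated here so the snippet runs on its own:

```python
# Repeated for a self-contained demonstration of the score mapping.
def score_to_soft_evidence(score):
    p1 = score / 100.0
    return {0: 1 - p1, 1: p1}

print(score_to_soft_evidence(75))  # {0: 0.25, 1: 0.75}
print(score_to_soft_evidence(0))   # {0: 1.0, 1: 0.0}
```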
Inference under soft evidence is then approximated with likelihood-weighted sampling:

```python
import numpy as np
from pgmpy.factors.discrete import State
from pgmpy.sampling import BayesianModelSampling

def predict_ethical_risks_sampling(model, soft_evidence_dict, num_samples=5000):
    sampler = BayesianModelSampling(model)
    evidence_states = []
    for node, dist in soft_evidence_dict.items():
        # Draw one hard state per node according to its soft-evidence distribution.
        chosen_state = np.random.choice(list(dist.keys()), p=list(dist.values()))
        evidence_states.append(State(node, chosen_state))
    samples = sampler.likelihood_weighted_sample(evidence=evidence_states,
                                                 size=num_samples)
    # Note: a fully weighted estimate would also use the samples['_weight'] column.
    unique, counts = np.unique(samples['EthicalRisks'], return_counts=True)
    probs = {state: count / num_samples for state, count in zip(unique, counts)}
    return probs
```
```python
if __name__ == "__main__":
    model = build_model()
    score_inputs = {
        'natural_environment': 75.2,
        'genetic_abnormality': 50.6,
        'biological_warfare': 90,
        'democratic_review': 64,
    }
    soft_evidence = {node: score_to_soft_evidence(score)
                     for node, score in score_inputs.items()}
    result = predict_ethical_risks_sampling(model, soft_evidence, num_samples=10000)
    print("EthicalRisks:", result)
```
Example request body for hard-evidence assessment (0 = False, 1 = True):

```json
{
    "evidence": {
        "natural_environment": 1,
        "genetic_abnormality": 0,
        "natural_gene_pool": 1,
        "ecosystem_balance": 1,
        "biological_warfare": 0,
        "biological_weapon": 0,
        "technology_leakage": 1,
        "technological_equity": 0,
        "intergenerational_equity": 1,
        "technology_limitation": 0,
        "ethical_preassessment": 1,
        "Minimization_of_harm_to_the_public": 1,
        "public_acceptance": 0,
        "democratic_review": 1,
        "legal_regulation": 1,
        "academic_freedom": 0,
        "ip_rights": 1
    }
}
```
Example response, with posterior probabilities for the overall risk and each sub-dimension:

```json
{
    "results": {
        "EthicalRisks": {"low_risk": 0.72, "high_risk": 0.28},
        "Ecological_Security": {"low_risk": 0.65, "high_risk": 0.35},
        "Biological_weapon_Risk": {"low_risk": 0.88, "high_risk": 0.12},
        "Technology_Governance": {"low_risk": 0.56, "high_risk": 0.44},
        "Public_Acceptance": {"low_risk": 0.81, "high_risk": 0.19},
        "Regulatory_Framework": {"low_risk": 0.74, "high_risk": 0.26}
    },
    "evidence": {
        "natural_environment": 1,
        "genetic_abnormality": 0,
        "...": "..."
    },
    "success": true
}
```
```http
POST /api/predict
Content-Type: application/json

{
    "evidence": {
        "natural_environment": 1,
        "genetic_abnormality": 0,
        "natural_gene_pool": 1,
        "ecosystem_balance": 1,
        "biological_warfare": 0,
        "biological_weapon": 0,
        "technology_leakage": 1,
        "technological_equity": 0,
        "intergenerational_equity": 1,
        "technology_limitation": 0,
        "ethical_preassessment": 1,
        "Minimization_of_harm_to_the_public": 1,
        "public_acceptance": 0,
        "democratic_review": 1,
        "legal_regulation": 1,
        "academic_freedom": 0,
        "ip_rights": 1
    }
}
```
```json
{
    "results": {
        "EthicalRisks": {"low_risk": 0.68, "high_risk": 0.32},
        "Ecological_Security": {"low_risk": 0.59, "high_risk": 0.41},
        "Biological_weapon_Risk": {"low_risk": 0.92, "high_risk": 0.08},
        "Technology_Governance": {"low_risk": 0.61, "high_risk": 0.39},
        "Public_Acceptance": {"low_risk": 0.79, "high_risk": 0.21},
        "Regulatory_Framework": {"low_risk": 0.71, "high_risk": 0.29}
    },
    "evidence": {
        "natural_environment": 1,
        "genetic_abnormality": 0,
        "...": "..."
    },
    "success": true
}
```
Example request body for score-based (soft-evidence) assessment, where each value is a 0-100 score:

```json
{
    "scores": {
        "natural_environment": 75,
        "genetic_abnormality": 40,
        "biological_warfare": 60,
        "technology_leakage": 30,
        "public_acceptance": 85,
        "democratic_review": 50
    }
}
```
```json
{
    "results": {
        "EthicalRisks": {"low_risk": 0.31, "high_risk": 0.69},
        "Ecological_Security": {"low_risk": 0.42, "high_risk": 0.58},
        "Biological_weapon_Risk": {"low_risk": 0.77, "high_risk": 0.23},
        "Technology_Governance": {"low_risk": 0.49, "high_risk": 0.51},
        "Public_Acceptance": {"low_risk": 0.15, "high_risk": 0.85},
        "Regulatory_Framework": {"low_risk": 0.62, "high_risk": 0.38}
    },
    "scores": {
        "natural_environment": 75,
        "genetic_abnormality": 40,
        "biological_warfare": 60,
        "technology_leakage": 30,
        "public_acceptance": 85,
        "democratic_review": 50
    },
    "success": true
}
```
```http
POST /api/predict_pro
Content-Type: application/json

{
    "scores": {
        "natural_environment": 75,
        "genetic_abnormality": 40,
        "biological_warfare": 60,
        "technology_leakage": 30,
        "public_acceptance": 85,
        "democratic_review": 50
    }
}
```
```json
{
    "results": {
        "EthicalRisks": {"low_risk": 0.31, "high_risk": 0.69},
        "Ecological_Security": {"low_risk": 0.42, "high_risk": 0.58},
        "Biological_weapon_Risk": {"low_risk": 0.77, "high_risk": 0.23},
        "Technology_Governance": {"low_risk": 0.49, "high_risk": 0.51},
        "Public_Acceptance": {"low_risk": 0.15, "high_risk": 0.85},
        "Regulatory_Framework": {"low_risk": 0.62, "high_risk": 0.38}
    },
    "scores": {
        "natural_environment": 75,
        "genetic_abnormality": 40,
        "biological_warfare": 60,
        "technology_leakage": 30,
        "public_acceptance": 85,
        "democratic_review": 50
    },
    "success": true
}
```
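Because the response follows a uniform shape, it is straightforward to post-process. As a small hypothetical helper (not part of BEAMer's API), each dimension can be reduced to its more probable state for a quick summary:

```python
# Hypothetical post-processing helper: pick the more probable state for
# each risk dimension in a BEAMer-style "results" object.
def summarize(results):
    return {dim: max(probs, key=probs.get) for dim, probs in results.items()}

# Values taken from the /api/predict_pro response example above.
response_results = {
    "EthicalRisks": {"low_risk": 0.31, "high_risk": 0.69},
    "Biological_weapon_Risk": {"low_risk": 0.77, "high_risk": 0.23},
    "Public_Acceptance": {"low_risk": 0.15, "high_risk": 0.85},
}
print(summarize(response_results))
# {'EthicalRisks': 'high_risk', 'Biological_weapon_Risk': 'low_risk', 'Public_Acceptance': 'high_risk'}
```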
Operating process of BEAMer
We offer two assessment modes to meet the needs of different audiences: a basic mode (backed by /api/predict) that takes binary yes/no evidence, and a pro mode (backed by /api/predict_pro) that takes continuous 0-100 scores.
BEAMer is an interactive web tool based on the BEAM model, utilizing a decoupled front-end/back-end architecture to achieve efficient Bayesian inference. It presents the quantitative assessment of ethical risks in synthetic biology, transforming a complex theoretical methodology into an accessible tool.
In application scenarios such as iGEM, BEAMer can assist teams in identifying and interpreting potential ethical and safety risks before conducting wet-lab experiments, thereby guiding more robust experimental design and scientific decision-making. Looking ahead, the system is expected to be adopted by more teams as an important reference during the project design stage, fostering the responsible development of synthetic biology.
Beyond that, BEAMer transforms abstract theories of ethical risk quantification into a standardized, visualized, and tool-based workflow, building an innovative bridge between methodology and application. It introduces a new paradigm for ethical risk assessment in synthetic biology and explores a transferable technological pathway for advancing ethical governance in the life sciences.