Engineering
Synthetic biology is fundamentally an engineering discipline - it applies the same design principles that, for example, an electrical engineer would use, but within a biological context. The goal is to design and construct biological systems that perform specific, predictable functions. Like any other field of engineering, synthetic biology follows a series of iterative steps known as the Engineering Cycle. This cycle repeats until the system meets the desired specifications - in this case, until a biological system capable of achieving the intended function is identified and optimized. The four steps are Design - Build - Test - Learn.
In the design phase, scientists define the target biological system and its intended function. Literature research and modeling tools are often used to accelerate the design process by learning from previous results and simulating different constructs, saving time and resources in the lab. During the build phase, these designs are implemented in a target organism, most commonly a strain of bacteria or yeast. The build phase is followed by the test phase, where the system is evaluated to ensure it performs the desired function. Finally, the learn phase deals with the analysis of the results: Does the performance align with the expected outcomes? What can be changed in the next iteration to improve the outcome? Insights gained during this phase are then used to refine the initial design, and the cycle begins again.
To structure our work, we divided the project into two fields: Drylab (hardware and modeling) and Wetlab (a kaempferol branch and a vanillin branch, each comprising a sensor and a pathway). Each of these components underwent its own engineering cycles. The hardware group designed and tested our system for cultivation and measurement. The modeling group created an optimization framework and predictive simulations, which themselves required iterative refinement. Within the kaempferol branch, we focused both on engineering a biosensor for detection and on troubleshooting the biosynthetic pathway. Similarly, the vanillin branch combined regulatory circuit design with pathway construction, each requiring several engineering rounds.
Wetlab
Vanillin Pathway Engineering Cycle
To investigate the dynamic control of vanillin biosynthesis, the initial design of the vanillin pathway plasmid comprised two enzymes of the simplified vanillin biosynthesis pathway, each regulated by an artificial regulatory circuit (Liu et al., 2023). Literature research identified feruloyl-CoA synthetase (FCS) and enoyl-CoA hydratase/aldolase (ECH), which catalyze the conversion of ferulic acid into vanillin/vanillic acid (Chen et al., 2022). Additionally, the promoters and transcription factors from the E. coli 10β Marionette strain were chosen for their relatively low leakiness and reliability (Meyer et al., 2019).

Figure 1: Initial design of regulatory circuit.
Our first construct paired FCS with the placI promoter. However, repeated failures in sequence verification pointed to assembly issues that prevented success. We therefore replaced placI with the promoter–inducer fusion pL-lacO-1, which enabled both successful assembly and sequence verification. Similar sequence verification issues arose with the ECH construct paired with the Ptet promoter. To verify the integrity of the Level 0 ECH and FCS constructs, restriction digestion analysis was performed, confirming that both constructs were correct. The sequencing issue was ultimately resolved by re-preparing the Ptet promoter from a freshly streaked plate.

Figure 2: Results from restriction digestion of FCS and ECH L0 constructs
The transcriptional repressors regulating the inducible promoters—LacI (later omitted after the switch in promoters) and TetR—were originally expressed under the strong constitutive promoter J23100. However, high metabolic burden from this configuration prevented successful plasmid transformation. To reduce stress, we replaced J23100 with the weaker constitutive promoter J23114. This adjustment enabled the successful transformation of the Level 1 transcription factor plasmids, all of which were subsequently sequence verified. As an additional selection feature, a gentamicin resistance marker was introduced, providing double-antibiotic selection.

Figure 3: Final design of regulatory circuit
References
Chen, Qi Hang et al. “Developing efficient vanillin biosynthesis system by regulating feruloyl-CoA synthetase and enoyl-CoA hydratase enzymes.” Applied microbiology and biotechnology vol. 106,1 (2022): 247-259. doi:10.1007/s00253-021-11709-w
Liu, Y., Sun, L., Huo, YX. et al. Strategies for improving the production of bio-based vanillin. Microb Cell Fact 22, 147 (2023).
Meyer, A.J., Segall-Shapiro, T.H., Glassey, E. et al. Escherichia coli “Marionette” strains with 12 highly optimized small-molecule sensors. Nat Chem Biol 15, 196–204 (2019).
Kaempferol Pathway Engineering Cycle
Design: The pathway engineering process began with the design of the final Level 2 (L2) construct in SnapGene, along with the corresponding Level 0 (L0) and Level 1 (L1) plasmid maps. Based on previous results, we found that the initial L1 plasmids were incompatible with the intended L2 assembly due to linker incompatibility. Therefore, we decided to reconstruct the L0 plasmids from scratch.
Promoter selection played a key role in the design phase. The initial constructs included combinations such as AB-plac-B0034–cufls-B0015 and CD-tetR–B0034-cisf3h-B0015. Alternative promoters, including the inducible pL-lacO-1 and the constitutive J23100, were later incorporated to evaluate the effect of promoter type on construct stability and expression.

Figure 4 and 5: Plasmid maps of AB-plac-B0034–cufls-B0015 (top) and CD-tetR–B0034-cisf3h-B0015 (bottom). The map was generated using SnapGene (Version 7.1; GSL Biotech LLC, 2025).
Build: Since no backup strains were available, the cloning process started from the ground up. We first performed colony PCR on the existing L1 plasmids containing cufls and cisf3h to verify their presence. Agarose gel electrophoresis, however, did not show the expected bands, indicating unsuccessful amplification or potential plasmid incompatibility.
To overcome this, we conducted Q5 high-fidelity PCR using DNA templates provided by Integrated DNA Technologies (IDT) to re-amplify cufls and cisf3h. Q5 polymerase was selected for its high fidelity (low error rate) and strong performance with GC-rich templates, minimizing the risk of mutations during amplification (Sittivicharpinyo et al., 2018). Following PCR product purification, we successfully constructed new L0 plasmids from the Q5-amplified fragments.
The cloning workflow presented several technical difficulties. Initial primer selections were incorrect for some extractions, and occasional miscalculations occurred when preparing plasmid dilutions (e.g., errors in diluting to 75 ng/μL). Additionally, promoter or primer mismatches were identified during early L0 assembly attempts. With the assistance of PIs and fellow wet lab members, and through reference to online protocols, we refined the workflow and achieved correct L0 assembly (Figure 6).
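To reduce the risk of such dilution errors, the C1·V1 = C2·V2 arithmetic can be scripted once and reused. The snippet below is a minimal sketch of that calculation; the stock concentration and final volume in the example are hypothetical, not values from our protocol.

```python
def dilution_volumes(stock_ng_per_ul: float,
                     target_ng_per_ul: float,
                     final_volume_ul: float) -> tuple[float, float]:
    """Return (stock volume, diluent volume) in µL using C1*V1 = C2*V2."""
    if stock_ng_per_ul < target_ng_per_ul:
        raise ValueError("Stock is already more dilute than the target.")
    stock_ul = target_ng_per_ul * final_volume_ul / stock_ng_per_ul
    return stock_ul, final_volume_ul - stock_ul

# Example with a hypothetical 310 ng/µL miniprep diluted to 75 ng/µL in 20 µL:
stock_ul, water_ul = dilution_volumes(310, 75, 20)
print(f"Mix {stock_ul:.2f} µL plasmid with {water_ul:.2f} µL water or elution buffer.")
```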

Figure 6: Plates of sequence-verified Level 0 plasmid constructs containing cisf3h (top) and cufls (bottom)
Test: During the testing phase, we conducted multiple transformations with various promoter constructs to assess compatibility and functionality. The first L1 construct containing cufls under the plac promoter yielded no colonies, suggesting promoter incompatibility.
A subsequent attempt with an alternative lac promoter variant also resulted in no colony formation.
We then redesigned the construct as AB–plac–B0034–cufls–B0015, but repeated transformations (three independent attempts) produced no colonies.
Following consultation with our PI, we replaced the plac promoter with the pL-lacO-1 promoter–inducer fusion (see parts list), inspired by its success in the vanillin pathway subgroup. This change finally resulted in colony growth. Plasmid minipreps were performed and sent for Sanger sequencing, but the results revealed incorrect plasmid sequences, suggesting background plasmid contamination or misassembly. To further investigate promoter influence, we substituted pL-lacO-1 with the constitutive J23100 promoter. Both pL-lacO-1–cufls and J23100–cufls constructs produced colonies, with more robust growth on the J23100 plates (Figure 7). However, sequencing again showed incorrect plasmid sequences.

Figure 7: Plates of Level 1 cufls plasmid constructs containing the promoters pL-lacO-1 (top) and J23100 (bottom)
Upon further troubleshooting, we identified several potential causes, including errors in linker sequences, incompatibility between backbone and inserts, and a mismatch in antibiotic resistance. Sequencing traces revealed chloramphenicol resistance in the backbone, despite selection on kanamycin plates, indicating possible mislabeling. To verify this, we prepared new agar plates with kanamycin and repeated the transformations, but sequencing continued to show no correct plasmid assembly.
Learn: Overall, repeated assembly and transformation attempts did not yield fully verified plasmid constructs, despite promoter substitutions, antibiotic validation, and sequence verification. These difficulties likely arose from a combination of promoter incompatibility, linker sequence errors, and antibiotic resistance mismatches, highlighting the need for backbone verification and construct redesign before proceeding to higher-level plasmid assembly. Nevertheless, the process demonstrated several key successes, including reconstruction of L0 plasmids and one L1 plasmid, resolution of amplification challenges, and optimization of promoter choice to enable colony formation. These iterative refinements provided valuable insights into plasmid compatibility, promoter functionality, and assembly workflows, laying the groundwork for more efficient and reliable designs in future construct development.
References
Sittivicharpinyo, T., Tang, W., Kiatpathomchai, W., Somboonwiwat, K., Charoenwongpaisan, S., & Tawatsin, A. (2018). Efficiency comparison of four high-fidelity DNA polymerases for dengue virus detection and genotype identification in field-caught mosquitoes. Heliyon, 4(9), e00705. https://doi.org/10.1016/j.anres.2018.05.012
Vanillin Sensor Engineering Cycles
Design: The initial design of the vanillin sensor was based on the J23100-B0034-vanR-B0015-GH construct. The encoded transcription factor represses the PVan promoter that drives the sensing fluorophore. Once vanillic acid is present, it binds the transcription factor, causing it to release the promoter and thereby allowing GFP expression in proportion to the amount of product (Meyer et al., 2019).
Figure 8: Initial vanR construct
Build: The construct was built with the parts described above, yielding the construct shown in Figure 8.
Test: We tested the construct by transforming it into E. coli. No colonies were observed 22 hours after transformation.
Learn: The lack of colonies suggests toxicity potentially linked to high-level expression of the VanR transcription factor from the strong (relative strength: 1) J23100 promoter on a high-copy plasmid (Anderson promoter collection).
To mitigate toxicity, we redesigned the construct, now using the substantially weaker J23114 promoter in place of J23100. We again built the construct (see Figure 9) and transformed it into E. coli. The test showed that substituting in the weaker J23114 promoter (relative strength: 0.10) (Anderson promoter collection) significantly improved the transformation results, yielding about 200 colonies per plate. Sanger sequencing confirmed that these clones carried the desired plasmid. We learned that although the original reference employed J23100 with chromosomal integration (Meyer et al., 2019), the high plasmid copy number in this context likely led to higher transcription factor synthesis (Rouches et al., 2022), which might have toxic effects.
Figure 9: Modified vanR construct
References
Anderson promoter collection: parts.igem.org/Promoters/Catalog/Anderson
Meyer AJ, Segall-Shapiro TH, Glassey E, Zhang J, Voigt CA. Escherichia coli "Marionette" strains with 12 highly optimized small-molecule sensors. Nat Chem Biol. 2019 Feb;15(2):196-204. doi: 10.1038/s41589-018-0168-3. Epub 2018 Nov 26. PMID: 30478458.
Rouches, M.V., Xu, Y., Cortes, L.B.G. et al. A plasmid system with tunable copy number. Nat Commun 13, 3908 (2022). https://doi.org/10.1038/s41467-022-31422-0
Kaempferol Sensor Engineering Cycle
QdoR Promoter Testing and Engineering Success:
While constructing the Level 1 plasmid containing our repressor qdoR, we faced several difficulties. Our first assembly iteration focused on constructing the J23100-qdoR combination. We selected J23100 as our initial promoter because of its well-characterized properties as a standard high-strength promoter (Anderson promoter collection), expecting it to give us high expression and effective repression of pqdoI. However, despite multiple assembly attempts under consistent conditions, we were unable to obtain any viable colonies following transformation. This complete absence of colony growth suggested that high-level expression of qdoR may exert cytotoxic effects on the host cells; however, we found no sources to support this hypothesis.

Figure 10: QdoR construct
To differentiate between promoter-specific toxicity and general assembly problems, we tried promoters with alternative strengths, while maintaining all other assembly parameters constant. We designed equivalent constructs incorporating two additional promoters with reduced expression strength relative to J23100. This strategy allowed us to evaluate whether modulating QdoR expression levels could mitigate the observed cytotoxicity while simultaneously assessing the robustness of our assembly methodology.
We used J23108, a medium-strength promoter with 0.24 RPU (relative promoter units) relative to J23100, and J23114, a weak promoter at 0.1 RPU (see Anderson promoter collection), to narrow down the issue. Assembly attempts with J23108 yielded results consistent with our J23100 experiments, producing no viable colonies despite multiple attempts. This outcome suggested that even at reduced expression levels, qdoR maintains sufficient cytotoxic effects to prevent successful transformation and colony establishment.
In contrast, the J23114 promoter proved successful. This construct produced viable colonies that were subsequently verified through sequence confirmation, establishing J23114 as the only promoter in our test series capable of supporting functional QdoR expression within acceptable toxicity parameters.
References
Anderson promoter collection: parts.igem.org/Promoters/Catalog/Anderson
Vanillin and Kaempferol Level 2 Engineering
Level 2 Assembly Optimization: The transition to Level 2 assembly presented substantial technical challenges that required systematic optimization of our assembly protocols. For our final L2 construct, we designed a 2-kilobase backbone capable of accepting ten fragments to generate a 4.7-kilobase plasmid. Our selection strategy incorporated two complementary markers: an inserted spectinomycin resistance cassette and a constitutively expressed mCherry fluorescent reporter.

Figure 11: KS level 2 constructs
Our initial approach employed the standardized mass-ratio methodology [1] commonly used in Golden Gate assembly protocols. This approach typically employs a 2:1 or 3:1 molar excess of insert fragments relative to the destination vector, with all insert fragments standardized to 75 nanograms of plasmid DNA per reaction. Following our first assembly attempt, we observed colony formation after transformation, which initially suggested successful ligation events. However, when purple colonies were selected for colony PCR verification, the results indicated incomplete assembly. Subsequent analysis revealed that we had achieved only partial assemblies or, more problematically, religation products consisting solely of our Level 1 spectinomycin resistance cassette, with the Level 1 mCherry module being co-transformed. These findings suggested that the conventional mass-based approach failed to promote uniform representation of all assembly components throughout the reaction, instead allowing preferential religation of compatible insert fragments while excluding other critical elements of the construct.
When this conventional approach failed to yield successful assemblies, we recognized that our construct's inherent complexity, especially a 4.7-kilobase final product assembled from ten inserts, required further optimization of our strategy.
We subsequently implemented an equimolar stoichiometry approach, normalizing the amount of all fragments to 40 femtomoles per reaction. This strategy ensures balanced representation of all assembly components regardless of fragment size, thereby promoting more efficient ligation and reducing the formation of partial or incorrect assemblies that can occur when larger fragments are present in molar excess.
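The conversion from a molar target to the DNA mass to pipette follows from the approximate molar mass of a double-stranded base pair (~650 g/mol). The sketch below illustrates this calculation for a 40 fmol target; the fragment lengths are placeholders, not our actual part sizes.

```python
AVG_BP_MASS = 650  # approx. g/mol per double-stranded base pair

def ng_for_fmol(length_bp: int, target_fmol: float = 40.0) -> float:
    """DNA mass in ng corresponding to target_fmol of a double-stranded fragment."""
    grams = target_fmol * 1e-15 * length_bp * AVG_BP_MASS
    return grams * 1e9  # convert g to ng

# Hypothetical fragment lengths in bp (placeholders, not our actual parts):
for name, bp in {"destination backbone": 2000,
                 "small insert": 450,
                 "large insert": 1200}.items():
    print(f"{name:20s} ({bp:5d} bp): {ng_for_fmol(bp):6.1f} ng per 40 fmol")
```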
In our next assembly reaction, we diluted all fragments to the target amount of 40 femtomoles. As an additional optimization measure, we reduced the spectinomycin resistance cassette to 30 femtomoles while maintaining a 3:1 molar excess of insert fragments relative to the destination vector, with the intention of achieving high-efficiency assembly. Following transformation, we again observed colony formation, including colonies displaying the expected purple color indicative of mCherry expression.
However, subsequent verification through colony PCR and restriction digestion analysis revealed persistent challenges. Despite our optimization efforts, we continued to observe only co-transformations of the Level 1 mCherry module and the Level 1 spectinomycin resistance cassette, rather than the intended complete ten-fragment assembly. This recurring pattern suggested that our assembly strategy required more substantial modifications. Given the persistent challenges encountered with both approaches, we identified two alternative strategies that could be implemented in the future to circumvent the observed religation and incomplete assembly issues.
The first approach involves incorporating a ccdB negative selection cassette into the Spectinomycin Level 1 construct. The ccdB system encodes a toxin that is lethal to standard Escherichia coli strains (Hartley et al., 2000), thereby preventing the recovery of colonies containing religation products. This modification would ensure that only successful multi-fragment assemblies, which replace the ccdB cassette with the intended insert fragments, would yield viable colonies following transformation. The second alternative approach involves separating the digestion and ligation steps, combined with gel purification of individual fragments. In this methodology, each fragment would be digested separately from its Level 1 vector, followed by separation on an agarose gel. The desired fragments would then be extracted and purified to remove residual vector backbone and any incomplete digestion products. Subsequently, these purified fragments would be combined in a separate ligation reaction under optimized stoichiometric conditions. While this approach requires additional hands-on time and introduces potential for fragment loss during purification steps, it offers the advantage of ensuring that only properly digested, linear fragments are present in the final ligation reaction, thereby eliminating the possibility of undigested or partially digested plasmids contributing to background colonies.
References
[1] Golden Gate Assembly Protocol for Using NEBridge® Golden Gate Assembly Kit (NEB #E1601) https://www.neb.com/en/protocols/2018/10/02/golden-gate-assembly-protocol-for-using-neb-golden-gate-assembly-mix-e1601?srsltid=AfmBOooNO4TDzOGa7PyPVOBBToY0IAf-99W4BQYy4jrGCi4MtXhlNeEt (last accessed 10.04.2025 15:32)
[2] Hartley JL, Temple GF, Brasch MA. DNA cloning using in vitro site-specific recombination. Genome Res. 2000 Nov;10(11):1788-1795.
Drylab
Hardware Engineering Cycles
Glassware
Design: We wanted to design our own chemostat glassware because commercial chemostat glassware is very expensive, and lower-end options do not always include all the features we wanted, such as sufficient space for ports. We also wanted a comparatively small working volume.
Figure 12: First true-to-scale sketch of the glassware
Build: We created a first, true-to-scale drawing (see Figure 12).
Test: This design was reviewed in consultation with our university’s glassblower, Tanja Noch, who provided practical feedback on manufacturability and usability.
Learn: From these discussions, several key design improvements were identified:
- Redesign the system as a two-part assembly for easier construction and handling.
- Adopt a conical vessel shape with a working volume of ~150 mL to ensure a higher liquid column.
- Optimize port angles to allow proper immersion of the dissolved oxygen (DO) sensor, which in the original design remained above the liquid.
- Add a PTFE (Teflon) insert for the stirrer rod to ensure stability and safe stirring.
- Simplify the lid design by incorporating slanted extrusions to accommodate all required ports and leave space for future expansion.
- Increase the overall vessel height so sensors could reliably reach the liquid while still being manufacturable in glass.
- Evaluate three options for autoclavable silicone dividers for the top section.
- Integrate a custom-designed, autoclavable 3D-printed clamp mechanism.
Iterate: We met multiple times over several weeks to refine the design, address emerging issues, and integrate these modifications into the final glassware.
Motor attachment to stirrer
Design: This iteration's goal was to attach the motor to the stirrer.
Build: Using cloth-reinforced adhesive tape and heat-shrink tubing, we connected the stirrer to the motor shaft.
Test: The setup was tested in a beaker with water at varying RPMs. While the connection was sufficiently stable at very low speeds, it lost stability over time as rotational speed increased.
Learn: A more stable connection was needed. Therefore, we designed a 3D-printable adapter fastened with four x mm screws, ensuring a mechanically stable and repeatable connection between motor and stirrer.
Optimization Solvers Engineering Cycle
How to get the best profit for minimal input?
Running simulations of the model for different parameters quickly led to the realization that adding more inducer will always increase the final product yield. So while one could theoretically just increase the inducer amount to a maximum, this is not feasible in practice. The inducers used are often expensive chemicals that should not be treated as a catch-all solution. This raises a central question: How can we maximize the yields with minimal inducer input?
The goal is to maximize yields without requiring unrealistic amounts of inducer, which is expensive, sometimes hard to obtain, and can stress the producing cells by increasing metabolic burden. For more details, see the Model page.
Our initial design was based on using Differential Evolution (DE) as a solver. Differential Evolution is a population-based stochastic optimization method, often used for global optimization problems (Storn & Price, 1997). We picked this as our initial solver because it is easy to implement and does not require deep mathematical derivations, so the "building" part was free of unnecessary complications. When testing, however, Differential Evolution resulted in long computation times and did not always converge to the optimal solution (see Figure 14).
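To illustrate how such a problem can be set up, the sketch below runs SciPy's differential_evolution on a stand-in objective. The real objective is evaluated by our kinetic model (see the Model page), so the toy yield/cost function, the number of spikes, and the dosage bounds used here are assumptions for demonstration only and do not reproduce the convergence issues we saw on the full model.

```python
import numpy as np
from scipy.optimize import differential_evolution

N_SPIKES, MAX_DOSE = 4, 1.0  # placeholder bounds: number of spikes, max dosage per spike

def negative_profit(doses: np.ndarray) -> float:
    """Toy stand-in for the model: yield saturates with total inducer,
    while a cost term penalizes large total doses."""
    total = doses.sum()
    product = 1.0 - np.exp(-2.0 * total)   # diminishing returns on inducer
    cost = 0.15 * total                    # inducer expense / metabolic burden
    return -(product - cost)               # SciPy minimizes, so negate the profit

bounds = [(0.0, MAX_DOSE)] * N_SPIKES
result = differential_evolution(negative_profit, bounds,
                                maxiter=200, popsize=20, tol=1e-8, seed=1)
print("best doses:", np.round(result.x, 3), "objective:", -result.fun)
```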
This is why we set out to find a different solver. To make an informed choice, we chose to repeat the engineering cycle with two solvers and then select the best-performing one.
Particle Swarm Optimization (PSO) is a stochastic optimizer inspired by the social behavior of bird flocks and fish schools. It simulates a swarm of "particles", each representing a candidate solution. Particles move through the search space, adjusting their positions based on their own best results and the best results found by the swarm. Over time, they converge toward the best solution (Kennedy & Eberhart, 1995). Even though comparisons have shown DE-based solvers to outperform PSO in some settings, PSO is still commonly used in practice (Piotrowski et al., 2023). We therefore chose PSO for our tests, evaluating its performance in our specific problem context by changing the solver while keeping the rest of the parameters the same. When testing, our simulations showed that PSO evaluates the problem much faster than DE for the same bounds in spike dosage and total number of spikes. Furthermore, it always converged to an optimal solution (for our parameter space). We learned that PSO presents a better option than Differential Evolution for our problem.
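For comparison, here is a minimal from-scratch global-best PSO on the same kind of toy objective (a library such as pyswarms could equally be used); the swarm size, inertia, and acceleration coefficients are illustrative defaults, not our tuned settings.

```python
import numpy as np

rng = np.random.default_rng(1)
N_SPIKES, MAX_DOSE = 4, 1.0           # placeholder bounds
N_PARTICLES, N_ITERS = 30, 200
W, C1, C2 = 0.7, 1.5, 1.5             # inertia, cognitive, and social weights

def profit(doses: np.ndarray) -> float:
    total = doses.sum()
    return (1.0 - np.exp(-2.0 * total)) - 0.15 * total  # same toy objective as above

pos = rng.uniform(0.0, MAX_DOSE, size=(N_PARTICLES, N_SPIKES))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([profit(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(N_ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, MAX_DOSE)          # stay within the dosage bounds
    vals = np.array([profit(p) for p in pos])
    improved = vals > pbest_val                      # update personal bests
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()         # update swarm best

print("best doses:", np.round(gbest, 3), "objective:", round(profit(gbest), 4))
```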
Bayesian Optimization (BO) is commonly applied in hyperparameter tuning of machine learning models. It builds a probabilistic model of the objective function by first evaluating a few parameter sets and then constructing a map based on these evaluations. The map is explored by balancing the testing of uncertain regions that might reveal new optima (exploration) against refining conditions that already appear promising (exploitation) (Akiba et al., 2019). It was relevant for testing, as BO is specifically designed for problems with expensive function evaluations, potentially requiring fewer total evaluations than population-based methods by constructing a surrogate model that approximates the original expensive objective function (Bliek et al., 2023). We again applied the engineering cycle by first designing and then building our problem around this solver while keeping the other parameters constant, followed by testing it in various simulations for comparable maximal spike times and dosages. The simulations showed very fast convergence, leading to the learning outcome that BO is the fastest solver for our problem.
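A corresponding sketch using Optuna (the framework cited above), whose default TPE sampler is one flavor of surrogate-model-based optimization; as before, the toy objective, spike count, and bounds stand in for our actual model.

```python
import numpy as np
import optuna

N_SPIKES, MAX_DOSE = 4, 1.0   # placeholder bounds

def objective(trial) -> float:
    # One suggested dose per spike, bounded by the maximal dosage.
    doses = np.array([trial.suggest_float(f"dose_{i}", 0.0, MAX_DOSE)
                      for i in range(N_SPIKES)])
    total = doses.sum()
    return (1.0 - np.exp(-2.0 * total)) - 0.15 * total  # same toy objective

optuna.logging.set_verbosity(optuna.logging.WARNING)    # silence per-trial logs
study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=1))
study.optimize(objective, n_trials=100)
print("best doses:", study.best_params, "objective:", round(study.best_value, 4))
```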
The results of our test cycles are visualized in Figure 13 and Figure 14. Note that for BO and PSO, the following graphs were generated using the average optimized productivity over five simulations, while for DE, due to the high runtime, only one simulation is shown.
Figure 13: Total productivity for different simulations by solver. The x-axis shows the bounds of the problem: a defined maximal number of spikes and maximal dosage per spike.
Figure 14: Computation time for different simulations by solver. Note that the right plot has a y-axis limit of 10, to highlight the low computation time of the Bayesian Optimizer. The x-axis shows the bounds of the problem: a defined maximal number of spikes and maximal dosage per spike.
Figure 13 presents the total productivity, integrated over the simulation time, for the different solvers. The error bars represent the standard deviations. While DE might appear to be a promising candidate, it can be excluded because it did not converge to an optimal solution. Therefore, only BO and PSO are considered for a reliable comparison. Although the total productivity does not differ significantly between BO and PSO, the computation times vary considerably (see Figure 14). BO is notably faster, consistently converging to the optimal solution in less than two seconds.
Our engineering cycles have shown that, for our optimization problem, Bayesian Optimization stands out as the fastest solver while still achieving good productivity values across all these evaluations. We learned that this makes BO the recommended method for our problem, helping us answer the question of high yield versus low cost in practical time with valuable, consistent, and robust results.
References
Akiba T, Sano S, Yanase T, Ohta T, Koyama M. Optuna: A Next-generation Hyperparameter Optimization Framework. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. KDD ’19. Association for Computing Machinery; 2019:2623-2631. doi:10.1145/3292500.3330701
Bliek L, Guijt A, Karlsson R, Verwer S, de Weerdt M. Benchmarking surrogate-based optimisation algorithms on expensive black-box functions. Applied Soft Computing. 2023;147:110744. doi:10.1016/j.asoc.2023.110744
Bayesian Optimization Algorithm - MATLAB & Simulink. Accessed September 28, 2025. https://www.mathworks.com/help/stats/bayesian-optimization-algorithm.html
J. Kennedy and R. Eberhart, "Particle swarm optimization," Proceedings of ICNN'95 - International Conference on Neural Networks, Perth, WA, Australia, 1995, pp. 1942-1948 vol.4, doi: 10.1109/ICNN.1995.488968.
Piotrowski AP, Napiorkowski JJ, Piotrowska AE. Particle Swarm Optimization or Differential Evolution—A comparison. Engineering Applications of Artificial Intelligence. 2023;121:106008. doi:10.1016/j.engappai.2023.106008
Storn R, Price K. Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization. 1997;11:341–359.
Overall Hardware and Software Engineering Cycle
Creating a functional, reliable, and accessible chemostat was never going to be a straightforward project. From the very beginning, we approached it as an iterative engineering challenge, learning and rethinking our plans as the project progressed. Each component (glassware, sensors, pumps, and software) needed to work not just in isolation, but as part of a tightly integrated system. Our journey was defined by repeated cycles of trial, error, and improvement. Initial designs quickly revealed practical limitations: sensor signals clashed, pumps misbehaved, and code failed to keep up with the complexity of real-time control. Each failure became a learning opportunity, shaping the next iteration of the system and providing us with new knowledge and skills to push our project further.
Design
We imagined a system where a glass vessel, fitted with a cap, would house the culture. Integrated sensors would monitor pH, temperature, dissolved oxygen, CO₂, and liquid and gas flow, while four peristaltic pumps would regulate nutrient supply and waste removal. A stainless-steel stirrer, driven by a motor, would maintain homogeneous mixing, and the entire system would be controlled by an Arduino Mega 2560 with an ESP8266 providing wireless communication. Our initial design went through several stages until we settled on a final vision. This process involved talking to experts in the field about what would be feasible and most suitable for our project. Some big dreams, such as AI support, fell by the wayside, while some small ideas, such as the software, grew to take center stage.
Build
Once the design was set, construction revealed just how complex integration could be. Our initial vision of the ESP communicating over a common Wi-Fi network shared with the computer running the software was faced with an opponent many of us had battled before: eduroam. Eduroam's credential-based authentication and network restrictions prevented us from connecting the ESP to it. We ultimately decided to use the ability of the ESP8266 to serve as a hotspot, to which we connected the computer.

Each sensor worked individually, but combining them exposed interference, erratic readings, and voltage mismatches. Shared ground loops caused sporadic signal corruption, and the ESP8266 Wi-Fi module frequently failed to connect. Solving these issues required multiple iterations of wiring reorganization, capacitor placement, and the introduction of level shifters to ensure proper voltage translation. It took many poorly soldered level converters and I2C buses before we reached stable connectivity and readouts, and we came out of this fight with a few lessons in cable management. Pumps were integrated gradually, and the stirrer was refined from a large propeller to a smaller, pitch-bladed design that improved mixing and prevented dead zones (see meeting with Prof. Wierckx).

But connecting parts with each other wasn't just down to wiring: as components came together, our accompanying code also grew more complex. We learned the Arduino programming language and its many caveats as we went, often pulling our hair out over code that wasn't working, with many instances boiling down to having chosen the wrong baud rate or I2C address. As the complexity of our system increased, we realized that keeping track of individual components meant to work as one was not so easy, and we saw the need for software that would solve this problem. After having spent countless hours staring at serial readouts and manually adjusting everything, we came up with the idea of designing a graphical user interface (GUI) that would provide a sleek and simple overview of our data, while being easy to use and taking over some of the burden of manual monitoring and adjustment. At first this was just several Jupyter Notebook files running independently, first logging numerical data into CSV files and later providing graphs. Facing challenges with maintaining connectivity gave birth to the idea of the program reestablishing the connection by itself, instead of us having to intervene every time the connection was lost. Next, proportional-integral (PI) control was introduced. Thus, the STREAM Chemostat Controller was born.
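In spirit, that earliest Jupyter-based version boiled down to a loop like the one sketched below: read lines from the serial link, append them to a CSV file, and reconnect automatically when the link drops. This is a simplified, hypothetical reconstruction using pyserial; the port name, baud rate, and line format are assumptions rather than our actual configuration.

```python
import csv
import time

import serial  # pyserial

PORT, BAUD = "COM5", 115200          # hypothetical port and baud rate
LOGFILE = "chemostat_log.csv"

def open_port() -> serial.Serial:
    """Keep retrying until the serial connection is (re)established."""
    while True:
        try:
            return serial.Serial(PORT, BAUD, timeout=2)
        except serial.SerialException:
            print("Connection lost - retrying in 5 s ...")
            time.sleep(5)

ser = open_port()
with open(LOGFILE, "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "raw_line"])        # assumed minimal schema
    while True:
        try:
            line = ser.readline().decode(errors="ignore").strip()
            if line:                                   # skip empty reads on timeout
                writer.writerow([time.time(), line])
                f.flush()
        except serial.SerialException:                 # e.g. USB unplugged / ESP reset
            try:
                ser.close()
            except serial.SerialException:
                pass
            ser = open_port()
```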
This phase was a months-long exercise in trial, error, and adaptation, where each small success (stable sensor readings, synchronized pump operation, or a reliable wireless connection) felt like a major milestone.
Figures 15 and 16: Impressions from the Dry Lab working phase - breadboard setup and one of the first successful communication tests.
Test
Testing the integrated system required patience and a careful, step-by-step approach. We reintroduced sensors one at a time, observing the serial outputs to ensure each device was reporting accurate and stable data. The pumps were tested under operational conditions, meaning we ran them at different speeds and directions while connected to tubing and fluid, simulating real experimental flow to confirm that they could reliably deliver and remove liquids without failures or leaks.

Once the chemostat hardware and software were integrated and stabilized, we moved on to testing its real-world functionality through a proof-of-concept experiment. The goal was to confirm that our system could maintain continuous bacterial growth, and for this purpose we used the Escherichia coli DH10B Marionette strain carrying an inducible sfGFP plasmid. The system maintained a continuous culture at 37°C with controlled inflow and outflow, while sensors tracked temperature, pH, dissolved oxygen, CO₂, and optical density in real time (for information about the run, see here). Despite some experimental shortcomings, the chemostat reliably sustained flow and maintained stable environmental conditions, and the sensor readings, despite minor fluctuations, provided consistent feedback, validating the integration of our components. This test confirmed that the system could support continuous bioproduction, a major milestone for us. We also developed and verified sterilization protocols to ensure all components in contact with the culture (glassware, tubing, and sensors) were safe for lab use. Even after months of refinement, subtle issues like transient dissolved oxygen (DO) signal drift or Wi-Fi dropouts reminded us that careful testing and observation were essential.
Turning this vision into reality was far from straightforward. At first, each sensor worked individually, but combining them into a single system quickly exposed a web of challenges. Signals interfered, readings fluctuated unpredictably, and the ESP8266 often failed to connect. Shared ground loops and voltage mismatches caused erratic sensor behavior, forcing us to rethink the wiring multiple times. Weeks were spent adding capacitors and level shifters and reorganizing circuits, often discovering new problems with every adjustment.

The software presented its own hurdles. Blocking sensor reads prevented pumps from operating correctly, and streaming real-time data to the PC caused lag and inconsistencies. The PI controller for pH often oscillated, requiring careful calibration over repeated test runs. Debugging this system became a months-long process of trial, error, and careful observation, where each small victory, such as stable Wi-Fi communication, synchronized readings, or smooth pump control, felt like a major breakthrough.

Testing required patience and methodical iteration. We reintroduced sensors one by one, monitored serial outputs, ran pumps under load, and continuously compared software readings to physical measurements. Demo Mode became essential, allowing us to simulate culture conditions without risking live cells, while sterilization tests ensured that glassware, tubing, and probes could safely be used in a lab environment.

Through this prolonged process, we learned valuable lessons. Hardware and software could not be developed in isolation; small changes in wiring often required code adjustments. Signal integrity and circuit layout were critical to reliable measurements, and asynchronous, non-blocking code was necessary to maintain smooth operation. Perhaps most importantly, we realized that a user-friendly interface with live plots and intuitive controls was essential for effective experimentation.

With each cycle, the system grew more reliable. Wiring was reorganized for clarity, code was rewritten for robustness, the PI controller was tuned, and the GUI was enhanced for live monitoring and control. Months of iterative refinement transformed a fragile prototype into a fully integrated, open-source chemostat capable of stable, continuous cultures. Today, our chemostat combines affordable, standardized hardware with intuitive software, demonstrating that high-quality continuous bioproduction does not require expensive commercial systems. Beyond the technical achievement, it embodies a philosophy of accessibility, collaboration, and open science. By sharing our designs, code, and assembly guides, we hope to inspire other iGEM teams and small labs to explore continuous culture experiments, proving that sophisticated biotechnology can be democratized, modular, and within reach for all.
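To make the tuning discussion concrete, the sketch below shows a discrete PI step with a clamped integral term, of the kind used for pH control through a hypothetical correction pump; the gains, setpoint, and output limits are illustrative placeholders, not our calibrated values.

```python
from dataclasses import dataclass

@dataclass
class PIController:
    kp: float               # proportional gain (placeholder value in the example)
    ki: float               # integral gain
    out_min: float = 0.0    # pump output limits, e.g. 0-100 % duty cycle
    out_max: float = 100.0
    integral: float = 0.0

    def step(self, setpoint: float, measurement: float, dt: float) -> float:
        error = setpoint - measurement
        self.integral += error * dt
        # Clamp the integral term to limit windup when the pump saturates.
        limit = self.out_max / max(self.ki, 1e-9)
        self.integral = max(-limit, min(self.integral, limit))
        output = self.kp * error + self.ki * self.integral
        return max(self.out_min, min(output, self.out_max))

# Hypothetical usage: adjust a correction pump from a pH reading every 5 seconds.
controller = PIController(kp=20.0, ki=0.5)
pump_duty = controller.step(setpoint=7.0, measurement=6.8, dt=5.0)
print(f"correction pump duty cycle: {pump_duty:.1f} %")
```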
Learn
Every challenge we encountered during design, build, and testing became a learning opportunity. We realized early on that hardware and software development cannot happen in isolation: changes to wiring often required corresponding code adjustments, and voltage mismatches or shared ground loops could cause seemingly inexplicable sensor errors. Incremental debugging taught us the importance of signal integrity, proper circuit layout, and careful cable management. One of the more interesting learning curves was soldering, which started off as unshapely masses of metal on our components, failing to even fulfill their purpose, and grew into a sort of art as our skills improved. Learning proper soldering led us not only to better results, but also to the occasional unconventional solution. Using our software helped us realize that user-friendly interfaces are not a luxury - they are essential: the GUI we developed transformed complex serial outputs into intuitive, live-updating plots that allowed us to monitor and control the system efficiently. Putting it into practice also provided a framework for future optimization, pointing us to ways to improve it further, such as adding additional control algorithms.
Testing with the proof-of-concept E. coli run reinforced these lessons. We confirmed that even lower-cost sensors could provide consistent and actionable feedback, that pump timing and flow rates needed precise coordination, and that sterilization protocols were critical for safe, reproducible experiments. Each observation informed the next design iteration, guiding improvements in wiring, software stability, sensor integration, and operational reliability.
Through this process, we learned that a functional, accessible chemostat is more than just the sum of its parts. It requires careful coordination between hardware, software, and biological testing, continuous observation, and a willingness to adapt based on what the system teaches us. These lessons now underpin the reliability, usability, and modularity of our final open-source chemostat, setting the stage for future cycles of improvement and innovation.
Final Reflection
Looking back across all four parts of our project, the engineering cycle heavily influenced our work. Each subgroup experienced its own version of iteration: the hardware team refined prototypes, the modeling team adjusted solvers, and the wet lab teams redesigned pathways and sensors after repeated challenges.
For the kaempferol pathway, promoter incompatibility and backbone mismatches highlighted the importance of systematic verification. The kaempferol sensor cycle showed us how regulatory design requires persistence and fine-tuning. On the vanillin side, the pathway work revealed the metabolic burden of strong promoters, while the vanillin sensor subgroup demonstrated how carefully chosen circuits can reduce leakiness and improve reliability. The modeling group on the other hand showed us that even computational tools require cycles of trial, error, and refinement before producing useful results.
These engineering cycles made clear that failures were not endpoints but turning points. Each unsuccessful transformation, misassembled plasmid, or faulty prototype became a lesson that strengthened the next design. Through this process, STREAM matured scientifically, and our team developed resilience, adaptability, and problem-solving skills.
In conclusion, the real value of our project lies not only in the constructs we built, but in the engineering mindset we developed. By embracing iteration across hardware, modeling, and both the kaempferol and vanillin branches, we prepared ourselves to carry these lessons beyond iGEM towards building synthetic biology that is robust, predictable, and impactful.