Engineering Success

Our Engineering Success page lays out the Design-Build-Test-Learn (DBTL) cycles that each subteam iterated through for their main deliverables this season.

What defines Engineering Success?

In iGEM, Engineering Success highlights how teams apply the iterative Design-Build-Test-Learn (DBTL) cycle to transform project goals into functional workflows. This is a core process of synthetic biology in which each cycle helps us refine our designs, troubleshoot challenges, and integrate new insights until our solution works as intended.

Throughout meduCA, we demonstrate Engineering Success across many deliverables. From early concept sketches to validated prototypes and biocementation attempts, our DBTL cycles helped push our project forward in a controlled manner. On this page, we showcase each of our DBTL cycles across deliverables to demonstrate how thoughtful iteration and data-driven design guided our progress towards a sustainable, carbon-negative future.

Biocementing Bacteria

Cyanobacteria Cloning

Our project aims to develop a modular surface display system for Synechococcus elongatus UTEX 2973 to enable expression of proteins, such as carbonic anhydrases, for biocementation applications on Mars. By combining cyanobacterial expertise, rational construct design, and iterative testing, we engineered and validated a recombination-based vector, pRepv3, optimized for stable protein display and counterselection in UTEX. This work establishes a foundation for future cyanobacterial surface engineering and carbon capture strategies in space biotechnology.

Cycle 1: Display Fusion Protein Design and Validation

Design

We designed a construct to surface-display a carbonic anhydrase (CA) fusion protein in Synechococcus elongatus UTEX 2973 (UTEX). The fusion protein sandwiches a CA peptide between the N- and C-terminal domains of the UTEX S-layer protein, VCBS.

Build

We assembled the fusion protein sequences in silico.

Test

We used AlphaFold to model the fusion proteins and used a structural alignment algorithm to compare the domains’ structures against their wild-type counterparts. The scores could inform us whether the fusion is likely to be misfolded, which could compromise display, enzymatic activity, and/or secretion efficiency.
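As an illustrative sketch of this comparison step (not our exact PyMOL workflow), superposition RMSD between a predicted domain and its wild-type counterpart can be computed with the Kabsch algorithm; `kabsch_rmsd` is a hypothetical helper name:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate arrays after optimal superposition."""
    P = P - P.mean(axis=0)          # center both point clouds
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                     # covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation (no reflection)
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P)))
```

A low RMSD after superposition suggests the domain retains its wild-type fold in the fusion context; a high RMSD flags possible misfolding.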

Learn

From these in silico results, we expect the SazCA, BtCAII, and HpCA fusions to have a good likelihood of success, whereas the BhCA fusion scored poorly and may have low activity or no secretion.

Cycle 2: Refining Plasmid Design for Cloning

Design

We sought to introduce the surface display system via either homologous recombination or a shuttle vector. Using Golden Gate Assembly, we designed modular vectors that allow insertion of different CA sequences with SapI fusion sites.

Build

We constructed these plasmids using Golden Gate cloning, assembling the required parts (homology arms, resistance markers, CA coding sequences, and tags) into destination backbones.
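One routine sanity check in this kind of SapI-based Golden Gate design is screening candidate inserts for internal SapI sites (recognition sequence GCTCTTC), which would cut the insert itself during assembly. The helper names below are illustrative, not part of our actual tooling:

```python
# SapI recognizes the asymmetric site GCTCTTC(1/4); an insert destined for
# Golden Gate assembly must not contain the site internally on either strand.
SAPI_SITE = "GCTCTTC"

def revcomp(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

def sapi_sites(seq):
    """Return 0-based positions of SapI recognition sites on either strand."""
    seq = seq.upper()
    hits = []
    for motif, strand in ((SAPI_SITE, "+"), (revcomp(SAPI_SITE), "-")):
        i = seq.find(motif)
        while i != -1:
            hits.append((i, strand))
            i = seq.find(motif, i + 1)
    return hits
```

An insert for which `sapi_sites` returns an empty list is compatible with SapI assembly; otherwise the internal site must be removed, e.g. by synonymous codon changes.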

Test

We transformed the resulting plasmids into E. coli for propagation and validation. The empty vectors were sequenced, and constructs derived from them were checked by restriction digest and PCR to confirm CA insertion.

Learn

Sequencing revealed occasional deletions or sequence errors within the constructs, highlighting the importance of screening multiple colonies and optimizing cloning strategies, and suggesting that the sepT2 gene may be problematic. As such, we cloned in a lacI cassette to better suppress sepT2 expression, resulting in our v3 plasmids.

Cycle 3: Adapting the electroporation protocol

Design

We needed to evaluate the UTEX electroporation protocol from Mühlenhoff and Chauvat, 1996, since we were able to find little documentation on its effectiveness, and the version last used by iGEM HK_SSC 2021 lacked specifics.

Build

With input from Kalen Dofher, we designed a screening experiment to test the effect of different factors, such as electroporation voltage, number of washes, wash buffer, amount of DNA, cell density, recovery time, and antibiotic strength.

Test

We did not end up running the screening experiment, but instead performed some trial runs.

Learn

Although we did not get to optimizing the electroporation parameters as we had hoped, we were able to specify steps such as the centrifugation speed during washing, performing the final resuspension in 10% glycerol, and using a low antibiotic concentration for selection.

Visit our page to learn more!

Caulobacter Cloning

Our project aims to engineer Caulobacter crescentus as a living platform for enzymatic surface display to support sustainable mine tailing remediation. By leveraging its crystalline S-layer system and resilience in harsh environments, we developed tools for displaying carbonic anhydrase (CA) and other functional proteins to enable calcium carbonate precipitation and biocementation in polluted sites.

Cycle 1: Surface Display Vector Assembly

Design

To surface-display a carbonic anhydrase (CA) fusion protein in Caulobacter crescentus CB2A JS4038, we sought to modify an existing cloning vector for Caulobacter surface display to fuse the CA to the S-layer protein, RsaA.

Build

We recreated the vector in SnapGene, and modified the multiple cloning site to allow modular insertion of our CA into the middle of the RsaA sequence, flanked by SapI recognition sites for Golden Gate cloning. As recommended by our advisors, we added extra base pairs at the end of the SapI recognition sites for stabilized enzyme binding.

Test

We assembled the recombinant backbone via Gibson assembly. Then we transformed the plasmids into E. coli to propagate the plasmid. We validated clones through colony PCR and restriction digest, before sending them for Nanopore sequencing.

Learn

After sequencing, we found that the surface display sample contained the original cloning vector instead of the recombinant vector. We reamplified the parts for assembly, then added an additional fragment purification step, which led to successful assembly.

Cycle 2: RsaA-CA Fusion Protein Design

Design

The original cloning vector included a myc tag in the MCS, which was lost when inserting CA. To simplify protein detection, we modified the plasmid so that any foreign protein could be detected without having to design a new tag for every CA inserted.

Build

In our construct design, we moved the myc tag out of the MCS such that if the entire fusion was expressed, the tag would be retained in the middle of RsaA. Additionally, we codon optimized CAs for expression in Caulobacter and E. coli.

Test

We modelled the fusion proteins in AlphaFold to compare the structures of CA and RsaA against the fused RsaA-CA protein. We also electroporated both codon-optimized versions of CA into Caulobacter.

Learn

The in silico results predicted that the CAs would have a significantly different structure from the reference when fused to RsaA. Experimentally, we were only able to validate a colony harbouring the display construct with BtCAII (codon optimized for E. coli) in CB2A. We expected that the protein structure may change due to insertion in the middle of the protein, and plan to compare protein expression and activity in future experiments.

Cycle 3: Electrotransformation of CB2A

Design

We adapted a protocol provided by our advisor, Beth, incorporating additional information from the literature on electroporation parameters and competent cell preparation. In addition, we developed a procedure for plasmid extraction from Caulobacter cells.

Build

We prepared competent cells and designed electroporation experiments to test recovery time, antibiotic concentration, and plating volume to optimize transformation efficiency.

Test

We carried out several electroporation trials, in which we monitored cell yield, morphology, and time to colony formation. We then performed colony PCR and minipreps to validate successful uptake of the surface display, secretion, and intracellular CA expression vectors.

Learn

The secretion and intracellular strains yielded colonies at the expected rate of 2-3 days post-electroporation, whereas the surface display strain was slow to recover on both solid media and in liquid culture. We could not extract a sufficient concentration of our construct from the surface display strain for sequencing. However, we found that lowering the chloramphenicol concentration improved culturing time for all strains.

Visit our page to learn more!

Modelling

To assist Wet Lab in selecting candidates for surface display, AlphaFold was used to generate 3D models of the fusion proteins. Structural alignment in PyMOL allowed for the investigation of conformational changes. Finally, the stability of the proteins was analyzed through molecular dynamics simulations with GROMACS.

Cycle 1: Preliminary AlphaFold Run

Design

We wanted to perform a preliminary run of AlphaFold and examine the quality of the predicted structures.

Build

We used Google DeepMind’s AlphaFold web server to perform structural prediction.

Test

Several fusion proteins and CA sequences were tested; the results were then imported into PyMOL to examine the quality of folding.

Learn

The predicted CA structures mostly had high confidence (pLDDT scores) and agreed with the experimentally solved CA structures, whereas the fusion protein structures tended to have regions with lower confidence.
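One way to quantify "high confidence" is to average the per-residue pLDDT, which AlphaFold writes into the B-factor column of its output PDB files. This minimal parser is an illustrative sketch, not part of our pipeline:

```python
def mean_plddt(pdb_text):
    """Mean pLDDT of an AlphaFold model, read from the B-factor column
    (columns 61-66 of ATOM records), where AlphaFold stores pLDDT."""
    vals = [float(line[60:66]) for line in pdb_text.splitlines()
            if line.startswith("ATOM")]
    return sum(vals) / len(vals)
```

A common rule of thumb is that regions above ~70 pLDDT are modelled with reasonable backbone confidence, while lower-scoring regions (often linkers in fusions) should be interpreted cautiously.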

Cycle 2: Batch AlphaFold Structure Prediction Pipeline

Design

We wanted to build an AlphaFold pipeline that allows batch protein structure prediction.

Build

We created a Nextflow pipeline that allows batch AlphaFold2 runs.

Test

We performed batch AlphaFold prediction of the fusion protein sequences with the pipeline, and adjusted the computing resources requested based on the protein sequence lengths and number of inputs, ensuring that sufficient memory was allocated.

Learn

We took the predicted structure with the highest confidence for each input and organized them to analyze the conformational changes in downstream analysis.

Cycle 3: Structural Alignment Pipeline

Design

We wanted to build a pipeline that allows batch structural alignment using PyMOL.

Build

We created a Nextflow workflow that takes the predicted structures and reference structures and writes the alignment output (RMSD values) to text files.

Test

The AlphaFold predicted fusion protein structures were imported into the pipeline to examine their conformational changes.

Learn

The RMSD values from the output were cross-compared to select surface display candidates.

Cycle 4: maestro integration

Design

We wanted to move our workflows from Nextflow onto maestro.

Build

We created a maestro workflow that has two modes: “predict” and “align”. It can run either workflow, configured at runtime to execute an AlphaFold or PyMOL analysis.

Test

This workflow was used to generate structural predictions of secretory fusion proteins and align them against reference structures (CA/surface protein).

Learn

The RMSD values from the output were cross-compared to select candidates for lab testing.

Cycle 5: Molecular Dynamics Simulation Pipeline

Design

We wanted to build a pipeline that can automate protonating the AlphaFold-predicted structures at a specified pH and performing MD simulations.

Build

We chose H++ to protonate the protein residues, and used tleap and ParmEd to convert the topology and coordinate files into GROMACS-compatible formats for MD simulation.

Test

The AlphaFold-predicted structures were subjected to MD simulations under different pH conditions to evaluate their structural stability.

Learn

We analyzed the pipeline outputs to assess protein stability across different pH conditions, providing insights into the range of pH that supports enzymatic activity.

Cycle 6: maestro integration

Design

We wanted to move our raw bash scripts onto maestro.

Build

We created a maestro workflow that allows users to provide a protonated PDB file as input, and execute a complete molecular dynamics pipeline (tleap, ParmEd, GROMACS) in multiple environments.

Test

The AlphaFold-predicted structures were subjected to MD simulations under different pH conditions to evaluate their structural stability.

Learn

We analyzed the pipeline outputs to assess protein stability across different pH conditions, providing insights into the range of pH that supports enzymatic activity.

Visit our Modelling page to learn more!

Bioreactors

An iGEM classic, bioreactors are crucial to synthetic biology projects, supporting and optimizing bacterial growth to increase wet lab stock and decrease costs. This year, we built two bioreactors tailored to our two chassis, C. crescentus and S. elongatus. Our research questions for these two bioreactors explore the characteristics of each organism and how they affect our bioreactors' conditions.

CB2A Bioreactor

Cycle 1: Initial Prototyping

Design

Starting with extensive background research on the cellular properties of Caulobacter and its growing conditions, we focused on its adhesive and biofilm-producing properties. Existing literature guided our research, with the goal of exploring different agitation methods and their effects on bacterial growth. Weighted decision matrices, concept sketching, and concept evaluation helped determine which design best suited our timeline, resources, and needs. iHP contacts and advisors were involved during this process, ensuring our design covered any potential issues.

Build

Using SolidWorks and AutoCAD, we built 3D models for virtual prototypes that were then 3D-printed in polylactic acid. These parts were manufactured and assembled with simple Arduino circuits to test functionality and sensor outputs. iHP contacts and advisors contributed here, providing manufacturing solutions.

Test

During the test phase, growth curve experiments were done with the bioreactor, measuring optical density as a record of biomass over time. To evaluate the reactor's performance, it was compared to traditional wet lab techniques.
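A growth curve of this kind can be reduced to a doubling time with a log-linear fit over the exponential phase. A minimal sketch (function name illustrative), assuming OD600 is proportional to biomass:

```python
import numpy as np

def doubling_time(hours, od600):
    """Doubling time (h) from a log-linear fit of exponential-phase OD600 data."""
    mu, _ = np.polyfit(np.asarray(hours, float),
                       np.log(np.asarray(od600, float)), 1)  # ln(OD) = mu*t + c
    return float(np.log(2) / mu)
```

Comparing the fitted doubling times from the bioreactor and from shake-flask cultures gives a single number for the performance comparison described above.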

Learn

After concluding the test phase, we consulted with our iHP contacts and advisors to discuss the cause of any issues and how the experiment and design could be improved or scaled. We then iterated through the DBTL cycle again to further enhance our products.

Cycle 2: Motor upgrades

Design

The next cycle of DBTL involved using a NEMA 17 stepper motor instead of a low-voltage stepper motor. The main design changes were accommodating the larger motor with a larger housing design and a new CAD model to house the DRV8825 motor driver.

Build

The build process consisted of implementing a new circuit and 3D printing the custom parts mentioned during the design phase. Here we needed a taller grid interlock to let the tubing escape, since the previous motor was not as large.

Test

The test phase was once again running a new growth curve with the new motor and testing the circuit for functionality. Here the circuit ran into overheating issues, particularly in the 30 °C room.

Learn

In the learn phase, we found that overheating would weaken the PLA parts, causing stepper motor failure and loss of agitation during the experiment. The stepper motor would lose grip on the PLA impeller and spin in place while the impeller remained still.

Cycle 3: Software, UI upgrades, and remote capabilities

UTEX 2973 Bioreactor

Cycle 1: Bioreactor Testing

Design

We conducted a comprehensive literature review on the cellular properties of cyanobacteria, analyzing experiments that culture this specific strain. We mainly focused on the influence of light, agitation, and aeration on cyanobacterial biomass production. Using the takeaways from the literature, we then established our design needs, using weighted scorings and collaborative sketches to find an optimal design.

Build

Computer-aided designs were generated and 3D-printed during our build phase. We developed a few comprehensive virtual prototypes before moving on to physical prototyping. The parts were assembled and circuits built during this phase.

Test

The bioreactor was put to use during the test phase: UTEX 2973 cells were cultured in the bioreactor, and growth curves were created to evaluate its performance.

Learn

Based on our wet lab validation results and iHP feedback, we reflected on what we did well and what still needed improvement. We then iterated through our DBTL cycle to further refine our design.

Cycle 2: Bioreactor Media Optimization

Design

In the design phase, we define the strategy to optimize UTEX 2973 growth by selecting key metabolic and environmental variables, such as CO₂ concentration, O₂ restriction, light input, and nutrient composition, for simulation and analysis.

Build

The build phase implements these designs computationally, constructing flux balance models, single reaction knockouts, and flux variability analyses to capture how different conditions impact growth.
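Under the hood, flux balance analysis is a linear program: maximize a biomass flux subject to steady-state mass balance (S·v = 0) and flux bounds. In practice a package such as COBRApy with a genome-scale model would be used; this toy three-reaction network is only a hedged sketch of the underlying LP:

```python
import numpy as np
from scipy.optimize import linprog

def max_growth():
    """FBA on a toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass).
    Steady state requires S @ v = 0; growth is the flux through R3."""
    S = np.array([[1.0, -1.0,  0.0],    # metabolite A balance
                  [0.0,  1.0, -1.0]])   # metabolite B balance
    c = np.array([0.0, 0.0, -1.0])      # linprog minimizes, so negate biomass
    bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capped at 10
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    return -res.fun
```

In this toy model, growth is limited by the uptake bound, mirroring how capping CO₂ or nutrient exchange fluxes constrains predicted growth in the full model.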

Test

In the test phase, predictions from computational models are validated in the wet lab by culturing UTEX 2973 under the designed media and environmental conditions, comparing observed growth rates with simulations.

Learn

Finally, the learn phase interprets results to identify the most critical nutrients and fluxes, guiding refinements of the model and informing future experimental designs for media optimization.

Cycle 3: Hardware Upgrades and Design Changes

Design

Given that the first prototype did not have a very efficient lighting mechanism, we designed a new enclosure for the photobioreactor. The new design allowed for consistent and secure lighting using an interwoven LED strip. The carbon chamber also got a new drip-mechanism design using a sand column, which helps extend the evaporation time of the dry ice in the container.

Build

Computer-aided designs were once again generated and 3D-printed during our build phase. The added LED strip required an additional wall plug and adapter. A sand column was added, with water dripping steadily at a desired rate of 1 drop per second. A 3D-printed tray was added to the styrofoam vessel to reduce carbon dioxide escaping from the top.

Test

The bioreactor was put to use during the test phase, and UTEX 2973 cells were cultured in it. Growth curves were created to evaluate the bioreactor's performance, showing the difference the LED lights made. The CO2 chamber was qualitatively assessed and found to have an extended evaporation time compared to pouring water in all at once during OD measurements.

Learn

In this learn phase, the LED strips and temperature were shown to have a profound effect on growth. Additionally, the servo motor also ran into overheating issues, and the carbon chamber provided a more even distribution of carbon dioxide via the drip mechanism.

Low Gravity Bioreactor

Cycle 1: Circuit

Visit our Bioreactor Overview page to learn more and navigate to each bioreactor's design, test, and validation pages!

Bioprinter

To print our bricks, we modified a 3D clay printer into a bioprinter compatible with our novel bioink. We created a biomaterial composition that can incorporate microbes to serve as a bioink for bioprinting 3D living building materials for Mars.

Earth Sand Bioink Composition testing

Cycle 1

Design

Before finalizing the formulation of our bioink end product, we first aimed to determine the ideal bioink composition (i.e., for structural integrity and stability) before adding our transformed bacteria.

Build

We achieved this by creating a protocol (listed in our methodology) that served as the baseline for our future bioink formulations. Here we tested various factors, such as particle sizes of 850, 425, and 150 µm and sand contents of 30-70 wt%.

Test

We carried the protocol out with Earth Sand first, noting how easy it was to extrude the sand from the syringe and whether the scaffold could maintain its structure with crosslinking.

Learn

We learned which steps of the protocol needed to be optimized for better gel formation and what could be concluded about particle size and wt% for Earth Sand.

Martian Regolith (MGS-1) Composition testing

Cycle 1

Design

We then wanted to repeat the same experiment using MGS-1, given what was learned using Earth Sand.

Build

The same protocol was used, investigating the optimal weight percentage (wt%) of MGS-1 across a range of wt% and two sieve sizes (150 µm, 425 µm).

Test

When first testing the MGS-1 alginate gel, we noted any differences in crosslinking and extrusion of MGS-1 gels compared to Earth Sand gels.

Learn

We learned that the MGS-1 alginate gel was not crosslinking readily in the calcium chloride solution as expected.

Cycle 2

Design

To troubleshoot the crosslinking issues with the MGS-1 alginate gel, we brainstormed a variety of possible causes, such as pH, MGS-1 components, and interactions with calcium chloride or CMC.

Build

For each issue, we devised a protocol to confirm and investigate whether it was interfering with the MGS-1 alginate gel crosslinking.

Test

We carried out a variety of experiments to investigate the issues, mainly comparing the gel scaffolds to the Earth Sand alginate gels and qualitatively observing whether solidification and crosslinking had improved.

Learn

We iterated our protocols following each troubleshooting experiment, which led us to a final range of operable compositions that can form a crosslinked MGS-1 alginate gel.

Cycle 3: MGS-1

Design

Now that we had a range of operable bioink components using MGS-1, we wanted to determine which crosslinking method was best for incorporating the bioink into the bioprinter.

Build

We implemented a protocol for introducing calcium chloride by spraying or submerging, as well as omitting submersion entirely to see whether the calcium oxide in MGS-1 is sufficient for crosslinking.

Test

We tested the various crosslinking methods on hollow, solid, and grid shapes, noting how many layers could be stacked and whether the scaffolds could be picked up with a spatula.

Learn

We learned that not submerging the MGS-1 alginate gels in calcium chloride at all was qualitatively sufficient for crosslinking and allowed more stacking compared to the other methods.

Cycle 4: MGS-1

Design

Since EDTA is commonly used as a chelating agent to dissolve crosslinked alginate gels, we hypothesized that it could affect our bioink structures, since the BG-11 growth medium for UTEX 2973 contains EDTA.

Build

To address this concern, we devised protocols to submerge Earth Sand, MGS-1, and pure alginate gels in various growth media, as well as to submerge the gels in BG-11 media containing varying EDTA concentrations.

Test

We carried out the experiments, noting whether the gels dissolved or collapsed across the varying EDTA concentrations and growth media.

Learn

We learned that EDTA indeed had an effect on all of the alginate gels, as completely removing this component from BG-11 was the only solution to the MGS-1 alginate gels dissolving.

Model Validation

Cycle 1: Earth Sand

Design

Through literature review, we decided on the factors and levels for a potential factorial design of experiments. The aim was to optimize the Earth Sand-based bioink composition for UTEX 2973 viability and production of calcium carbonate from engineered UTEX 2973.

Build

We used Response Surface Modelling to model experimental factors and estimate the optimal factors and conditions to test. To test this model, we created a standardized protocol to incorporate UTEX 2973.
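Response surface modelling fits a second-order polynomial to the measured responses so that optimal factor settings can be estimated. A minimal least-squares sketch for two coded factors (helper name illustrative; dedicated DoE software offers the same fit with diagnostics):

```python
import numpy as np

def fit_rsm(x1, x2, y):
    """Least-squares fit of a second-order response surface:
    y ~ b0 + b1*x1 + b2*x2 + b3*x1**2 + b4*x2**2 + b5*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

Setting the fitted surface's gradient to zero then gives the estimated optimum, which is the candidate condition carried forward into wet lab testing.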

Test

We tested various methods of sterilizing the components of the bioink and carried out the protocol of incorporating UTEX 2973 into the Earth Sand alginate gel.

Learn

We learned that mixing the UTEX 2973 into the biomaterial homogeneously, at such a high seeding density, proved to be a challenge. We also needed a quantifiable measure of viability in order to proceed with screening the DoE model.

Cycle 2: MGS-1

Design

Based on the issues we ran into with finalizing the MGS-1 alginate gel composition, we decided on a different set of factors and levels for optimizing the MGS-1 bioink composition for UTEX 2973 viability and production of calcium carbonate.

Build

Again, we used Response Surface Modelling to model experimental factors and estimate the optimal factors and conditions to test. To test this model, we revised our protocol to incorporate UTEX 2973. To demonstrate that the MGS-1 alginate gels can form bricks, we also devised a protocol to incorporate carbonic anhydrase powder in the MGS-1 alginate gels to simulate the enzymatic process.

Test

We tested our in-house carbon dioxide chamber with MGS-1 alginate gels that contain carbonic anhydrase powder to investigate its ability to form calcium carbonate crystals.

Learn

Qualitatively, we demonstrated that the alginate gels containing carbonic anhydrase powder were stiffer and more opaque than gels without any carbonic anhydrase. We will need a quantifiable measure to validate this, as well as to evaluate the viability of UTEX 2973 in the MGS-1.

Calcium diffusion modelling

Cycle 1: Microscope-based Diffusion + Crosslinking Modelling

Design

We designed an experimental protocol to measure diffusion and crosslinking of our alginate-based bioinks by microscope.

Build

We built microscope experiments to view cross-linking for our alginate-only and 30 wt% earth sand alginate gels.

Test

We tested a mathematical framework which uses images of the cross-linking front to model diffusion in 3D shapes.

Learn

We learned that we could not view cross-linking for our sand-alginate bioink formulations under a microscope, so we needed a new approach.

Cycle 2: Volumetric-based Diffusion + Crosslinking Modelling

Design

We designed another experimental protocol which measures absorbed volume changes in alginate-based bioinks in vials.

Build

We built experiments to view cross-linking for our alginate-only, 30 wt% Earth Sand, and 20 wt% MGS-1 alginate gels.

Test

We tested a one-dimensional model that uses coupled differential equations to simulate calcium diffusion and crosslinking in our bioink.
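The coupled system can be sketched with an explicit finite-difference scheme: calcium diffuses in from the bath at x = 0 and is consumed by binding to free alginate sites (rate k·C·A). All parameter values below are placeholders, not our calibrated values:

```python
import numpy as np

def simulate_front(L=1.0, nx=100, D=1e-3, k=5.0, A0=1.0, C_bath=1.0,
                   dt=1e-3, steps=5000):
    """1D explicit finite-difference sketch of calcium diffusion (C) coupled
    to consumption by uncrosslinked alginate sites (A) via rate k*C*A."""
    dx = L / nx
    C = np.zeros(nx)          # free calcium concentration
    A = np.full(nx, A0)       # remaining uncrosslinked alginate sites
    for _ in range(steps):
        C[0] = C_bath                      # fixed bath boundary at x = 0
        lap = np.zeros(nx)
        lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2
        lap[-1] = (C[-2] - C[-1]) / dx**2  # no-flux at the far end
        r = k * C * A                      # crosslinking consumption
        C = C + dt * (D * lap - r)
        A = A - dt * r
    return C, A
```

The depletion of A tracks the advancing crosslinking front; calibrating D and k against the measured absorbed-volume data is what ties the model to experiment.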

Learn

We learned that we could successfully implement our computational model, and calibrate it to experimental data from our sand-alginate bioink formulations.

Learn more about our bioprinter and bioink development at the Bioprinter Overview deliverable page and its corresponding sub-deliverables!

Software

In silico tools are vital to synthetic biology. They enable researchers to design, model, optimize, and analyze systems in a rapid and cost-effective manner, and meduCA is no exception. The intersection of bioinformatics and modelling laid the foundation for our carbonic anhydrase selection process before we ever needed to pick up a pipette, while firmware enabled custom-built hardware to optimize our culture conditions. The groundwork for our software stack was laid by dagger, a package for intelligently parallelizing processes based on the flow of data. Then, building off this foundation, we designed miso, a framework for creating intuitive user interfaces to control hardware, and maestro, a novel workflow executor focused on making bioinformatics more accessible and reliable. While miso drives our hardware, and maestro powers our computational analyses, both frameworks were built with future iGEM teams in mind: extensible, open-source, and ready to accelerate the next generation of synthetic biology research.

dagger

Cycle 1 (syntax): Macro syntax V1

Design

The syntax will enable linking tasks to identifiers and defining task inputs. It is only permissive toward an identifier followed by a function call.

Build

Implementation is done via a procedural macro that builds nodes out of (Identifier, Identifier, Punctuated<Identifier, Comma>) where the first Identifier is the node name, second Identifier is the called function, and Punctuated<Identifier, Comma> is the called function’s arguments (where each Identifier in this list is the name of another node). Implementation is in this commit.

Test

We are able to parse node definitions and build a DAG out of dependency relationships, serving as an initial proof of concept of the dagger methodology.

Learn

In future, node definitions should be more permissive toward complex Rust expressions rather than simply a function call and arguments.

Cycle 2 (syntax): Macro syntax V2

Design

The syntax will recursively parse Rust expressions; for each node, all identifiers which correspond to other nodes are collected. As such, an identifier followed by an arbitrary Rust expression is permitted.

Build

Implementation is done via recursive parsing of the expression, and DAG construction from the resultant relationships. First implementation was done in this commit.

Test

We are able to parse complex Rust expressions and identify nodal relationships out of the noise.

Learn

The primary advantage of the V2 syntax is that existing Rust code can be easily converted into dagger-compatible syntax. For instance, we had existing bioreactor code which was running sequentially; conversion to dagger syntax took less than 5 minutes, and made our code run in parallel.

Cycle 1 (algorithm): Thread scheduling

Design

This initial implementation will run every node in a different thread. One thread will be spawned per node, and threads will wait until all “parent” nodes complete before executing the node’s function.
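The scheme can be sketched in Python (the actual dagger implementation is in Rust, synchronizing on atomics): one thread per node, with each thread blocking on its parents' completion events before running its task. All names here are illustrative:

```python
import threading

def run_dag(tasks, parents):
    """Run each task on its own thread; a task blocks until all of its
    parents have finished, then consumes their results as arguments."""
    done = {name: threading.Event() for name in tasks}
    results = {}

    def worker(name):
        deps = parents.get(name, [])
        for p in deps:
            done[p].wait()          # park until every parent has finished
        results[name] = tasks[name](*[results[p] for p in deps])
        done[name].set()            # wake any children waiting on this node

    threads = [threading.Thread(target=worker, args=(n,)) for n in tasks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Independent nodes run concurrently, while a multi-parent node simply waits on all of its parents' events, which is the behavior the thread-per-node design above describes.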

build block icon build block icon

Initial implementation is in this commit. Threads are synchronized via waiting on atomics via platform-specific APIs (e.g., futex on Linux).

test block icon test block icon

Initially, various bugs from the platform-specific implementations and use of unsafe code arose (for instance, an ownership bug which broke Rust’s invariants caused double-free and use-after-free bugs). These have been resolved, but the implementation still feels very fragile.

learn block icon learn block icon

In future, greater reliance on std APIs over hand-rolled, platform-specific APIs would be desirable.

Cycle 2 (algorithm): Optimized thread sharing

design block icon

Thread sharing will be optimized via a greedy local search algorithm. Broadly, each “orphan” node (in the above example, this is ‘a’) becomes a starting point for the search, and the graph is traversed top-to-bottom, with “free threads” (freed at multi-parent nodes) tracked for later distribution.

build block icon build block icon

The algorithm is implemented in a testing workspace, enabling algorithmic validation on manually constructed DAGs.

test block icon test block icon

The algorithm successfully resolves complex DAG structures in an efficient manner. However, it is still suboptimal since it cannot leverage runtime information (e.g., in the above figure, if c terminates before f, it can be re-used, enabling the process to only run with 2 threads).

learn block icon learn block icon

DAG execution can only be fully optimized by leveraging both compile-time and runtime information. At compile-time, relationships between processes must be determined from the source code tokens; then, thread scheduling should be decided at runtime.

Cycle 3 (algorithm): Supervised thread pool

design block icon

The runtime scheduler will run on a supervisory thread, remaining parked for most of the DAG’s execution, waking sporadically when processes terminate to check if their children can be run. If so, it will schedule child tasks on a previously-spawned thread that is now free; if no such thread exists, a new thread is spawned.

build block icon build block icon

This was built by designing a novel Scheduler structure, along with data structures to hold task/thread information. Implementation details are available in the library internals section.

test block icon test block icon

This implementation has been tested extensively and shown to be robust, including by leveraging dynamic analysis tools.

learn block icon learn block icon

Leveraging runtime introspection is critical when designing parallelization systems, as it improves thread reuse efficiency beyond what can be reasoned at compile-time.

maestro

Cycle 1: finalflow

design block icon

We sketched the first proof-of-concept design of a workflow executor, designed for basic local execution and piping, with inbuilt parallelism primitives.

build block icon build block icon

finalflow was implemented as a single-crate Rust library. The completed version of this DBTL is available at this Git commit.

test block icon test block icon

finalflow is able to execute simple (e.g., 2-step) processes consistently, but is riddled with small bugs and lacks proper error handling and stdout/stderr piping.

learn block icon learn block icon

For future versions, parallelism should be outsourced to a dedicated framework (i.e., [dagger](./dagger)) and support for additional execution environments should be expanded.

Cycle 2: Atomicity and Extensibility

design block icon

finalflow relied heavily on mutable global state, which made it difficult to parallelize because safe concurrent access required pervasive locking. Additionally, it was not built to be extended to other execution platforms. This version is designed to make each process atomic, stateless, and configurable to a specific executor.

build block icon build block icon

finalflow was rewritten to improve its atomicity, making it solely reliant on a one-time session directory setup. Furthermore, execution was offloaded to an Executor trait, allowing extensibility for future execution platforms. This DBTL is related to this Git commit.

test block icon test block icon

This version of maestro is able to more reliably execute local processes, and all parallelization primitives have been stripped in favour of designing with [dagger integration](#dagger-integration) in mind. Execution is done by passing a process definition to a struct that implements Executor.

learn block icon learn block icon

For future versions, execution should be extended to support additional platforms beyond direct execution.

Cycle 3: SLURM support

design block icon

maestro was initially only able to execute scripts directly. This DBTL cycle aimed to add support for SLURM execution and configuration.

build block icon build block icon

Slurm support is implemented via a novel SlurmExecutor struct, for which the Executor trait is implemented. The SlurmExecutor struct contains a slurm_config field whose fields wrap sbatch directives. Slurm support was added in this Git commit.

test block icon test block icon

This version of maestro was tested to successfully schedule and execute scripts via Slurm on the University of British Columbia’s high performance compute cluster, ARC Sockeye.

learn block icon learn block icon

For future versions, execution should be dynamic based on user configuration rather than hardcoded. Additionally, a framework needs to be developed for passing arguments into workflows.

Cycle 4: Reflection and runtime configuration

design block icon

Instead of hardcoding executor configurations, they will be configured at runtime via a Maestro.toml file that contains tables to define executors and input arguments.
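A hypothetical Maestro.toml illustrating the table-based layout (all table and field names here are our invention; the actual schema may differ):

```toml
# Hypothetical example -- names are illustrative, not the real schema.
[executors.hpc]
type = "slurm"
time = "00:30:00"
mem = "4G"

[executors.local]
type = "local"

[args]
input_file = "reads.fastq"
threads = 8
```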

build block icon build block icon

All executor fields are made deserializable via serde-derive, and TOML parsing is done via the toml crate. The process! API is updated to leverage custom, user-defined executors, and arg! is provided to access user-provided arguments.

test block icon test block icon

Maestro.toml support was added in this Git commit. Various configurations were tested to ensure all deserialization is parsed as expected, including nested fields and tagged enumerations.

learn block icon learn block icon

The runtime-configuration format provides far more transparent user configuration than hardcoded configurations. The most important lesson from this DBTL cycle is that all configuration must be validated at program startup, ensuring that execution does not fail unexpectedly midway through a run.

miso

Cycle 1: LED switch

design block icon

We planned a tech stack fully powered by Rust, with hardware, server, database, and frontend layers.

build block icon build block icon

We designed a simple proof-of-concept that switched an LED on and off.

test block icon test block icon

We were able to successfully build this first proof of concept. However, we ran into issues building the RPPAL library on non-Linux platforms.

learn block icon learn block icon

We decided to modularize the codebase into multiple self-contained “crates” to alleviate compilation issues. Furthermore, we decided to persist data in plaintext as a CSV for simplicity.

Cycle 2: Sensor polling algorithm

design block icon

We designed a framework that enables sensor polling, configured to specific hardware at runtime.

build block icon build block icon

We implemented a custom sensor definition and deserialization framework using a procedural macro, and a dynamic graphing frontend in Dioxus.

test block icon test block icon

We were able to successfully poll a sensor and record its data into a CSV file; however, the graphing frontend was unable to successfully build a graph using third-party charting libraries.

learn block icon learn block icon

Macro-based definition is too restrictive; traits should instead be used to define shared behaviour. Furthermore, Dioxus leads to complications when interacting with third-party libraries, so a custom native-web framework should be used instead.

Cycle 3: miso

design block icon

We redesigned all subsystems significantly. A trait-based sensor and motor implementation framework will be set up, and a native web frontend will be used.

build block icon build block icon

We built jumpdrive, a custom library for serving static files and managing WebSocket connections. Motor and Sensor traits were implemented, and a standard library of motors was implemented.

test block icon test block icon

The finished web frontend was tested with both simulated and real data. We were able to successfully drive multi-motor and multi-sensor systems.

learn block icon learn block icon

Prioritizing extensibility and flexibility is typically beneficial when designing modular systems; as well, leveraging simple native APIs typically creates fewer headaches than using complex libraries.

Learn more about our software tools and cycles at the Software deliverable page and its corresponding sub-deliverables!

Inclusive Design Space

We ask all those who work or have worked in a wet lab: what would your life look like if something happened to your hands? Our Inclusive Design Space project followed an inclusive design framework to make wet labs more inclusive for individuals with musculoskeletal disorders. As per the inclusive design framework, we worked with a user to develop a tool that specifically targets her needs, leading to the development of multiple marks that can be found in our GitLab. To learn more, check out our Inclusivity page!

MSK Tool

DBTL 1 - Creating the Weighted Decision Matrix (WDM)

design block icon

Using our literature review, we determined our user's likely needs.

build block icon build block icon

We built the WDM by determining design requirements and assigning weights to the literature statements based on which needs were found most often.

test block icon test block icon

We had our WDM evaluated by our co-PI.

learn block icon learn block icon

We learnt from our co-PI that the next decision framework must go into further detail regarding quantitative, testable requirements. We also finally met with our stakeholder, who taught us her specific needs for the design.

DBTL 2 - Creating C-sketches and WDM

design block icon

We created C-sketches of different possible designs that targeted the needs we learnt about from the literature review and from our discussion with our user.

build block icon build block icon

We created a combined weighted decision matrix (WDM) by using the advice from our co-PI, Dr. Jenna Usprech, and incorporating our user's needs alongside the literature review.

test block icon test block icon

We tested the C-sketches by evaluating them against the WDM we had built. All our designs failed, so we then performed another screening of new C-sketches.

learn block icon learn block icon

We learnt what aspects of the designs work and don't work, which will inform our next cycle of C-sketches. As well, we had 3 iHP conversations to help inform the following steps of the design.

DBTL 3 - Improving C-Sketches and WDM

design block icon

Since our requirements must also ensure that the tool does not interfere with pipetting, we documented which aspects of the pipette must not be obstructed. We also performed the second round of C-sketches.

build block icon build block icon

With our user, we made modifications to our WDM and created satisfaction curves alongside her that would then be used to evaluate our designs. A conversation with our co-PI clarified our WDM.

test block icon test block icon

Using our new WDM, we screened our C-sketches to determine which one we should print.

learn block icon learn block icon

We learnt which C-sketch to move forward with when designing our CADs.

DBTL 4 - Prototyping

design block icon

With encouragement from Dr. Kudzia, we created a preliminary CAD of Design E from the Round 2 C-sketches.

build block icon build block icon

We had another meeting with our user to determine the measurements of the Round 2 Design E CAD. We used clay to find the optimal size for our user, and used the dried clay to create a prototype of the aforementioned design.

test block icon test block icon

We measured the dry clay prototype. We also measured the pipette to determine how the tool will attach to the pipette.

learn block icon learn block icon

We learned what the exact measurements of the CAD need to be.

DBTL 5 - Mark 1

design block icon

We incorporated the previous measurements into the CAD design.

build block icon build block icon

The first CAD was printed in PLA to save money, making Mark 1.

test block icon test block icon

We attempted to put the tool onto the pipette, which failed. We were also told that the tool was excessively big relative to the hand of the person testing it.

learn block icon learn block icon

We learnt we need to print a smaller version, as well as fix the internal shape of the tool so that it better fits the pipette.

DBTL 6 - Mark 2

design block icon

We created a smaller CAD design with an updated internal curvature.

build block icon build block icon

The second CAD was printed, making Mark 2.

test block icon test block icon

The tool again did not fit onto the pipette.

learn block icon learn block icon

We learnt we need to further modify the opening by flattening the end of the opening and making the beginning of the opening wider. As well, we decided to go back to the thicker design because the thin design didn’t feel helpful for our testers.

DBTL 7 - Mark 3

design block icon

We incorporated the previously determined measurements into the CAD design.

build block icon build block icon

The third CAD, Mark 3, was printed.

test block icon test block icon

The print failed a third time; it did not account for the width of the pipette not being constant.

learn block icon learn block icon

We learnt that we needed to have a larger entrance for the opening and a smaller end of the opening.

DBTL 8 - Mark 4

design block icon

We incorporated the previously determined changes to the measurements into the CAD design. To save material, we went back to the thinner print.

build block icon build block icon

The fourth CAD was printed, making Mark 4.

test block icon test block icon

This time, we were able to insert the tool onto the pipette; however, it was on the wrong side of the pipette. When we tested the correct side, it didn't actually fit the pipette, meaning modifications to the dimensions were still needed.

learn block icon learn block icon

We learnt we need to make more modifications to the internal shape. We decided to try and make it “click” onto the pipette, rather than being a direct replica of the shape of the pipette’s body.

DBTL 9 - Mark 5

design block icon

We designed Mark 5, where the entrance of the internal opening is smaller than the end of the opening, in the hope that it “clicks” into place.

build block icon build block icon

We printed Mark 5.

test block icon test block icon

We attached it to the pipette, where it clicked onto the pipette, making the print a success. We tested the prints and had our user test the prints (details are found in the next page).

learn block icon learn block icon

On the next page, we describe what we learnt from our tests.

DBTL 9 - Continuation

design block icon

On the previous page, we designed Mark 5.

build block icon build block icon

On the previous page, we printed Mark 5, and it was a success!

test block icon test block icon

Our user tested the prints and determined what she liked. We also performed EMG electrode measurements to obtain quantitative measurements of the tool's effect.

learn block icon learn block icon

We learnt that our user would like a smaller print with additional grip.

DBTL 10 - Mark 6

design block icon

We designed a smaller print with a grip.

build block icon build block icon

We printed twice; one print was successful, the other failed.

test block icon test block icon

However, while performing an additional literature review, we found a study that established new design parameters.

learn block icon learn block icon

We learnt we needed to retry a smaller print, but with a specific diameter.

DBTL 11 - Mark 7

design block icon

We modified Mark 6 to have a handle diameter of exactly 51 mm.

build block icon build block icon

We successfully printed the design, making a Mark 7.

test block icon test block icon

We gathered qualitative opinions comparing Mark 5 to Mark 7.

learn block icon learn block icon

We learnt we need to include a ribbed version of Mark 5. We shared our CADs on GitLab for other teams to use.

To learn more about the iterations of the MSK tool in detail, check out our Inclusivity pages!

Entrepreneurship

To consider the real-world applications of our project, we created an entrepreneurship plan which encompasses a product proposal, business pitch, and long-term growth strategies.

Business Development

Elevator Pitch DBTL

design block icon

Design: Defining the project’s core context by identifying critical problems and user needs both on Earth and in space helped lay the groundwork for a clear and compelling narrative.

build block icon build block icon

Build: We crafted and refined our pitch by mapping stakeholders, testing messaging, and positioning our solution within the competitive landscape.

test block icon test block icon

Test: We conducted iterative reviews and benchmarking of the pitch with peers and advisors to assess clarity, impact, market fit, and technical feasibility.

learn block icon learn block icon

Learn: Feedback informed continuous improvements: sharpening the emotional appeal, emphasizing dual-use value, and validating competitive advantages.

Stakeholder Determination DBTL

design block icon

Design: Identified and categorized potential stakeholders across Earth and space sectors, focusing on roles and motivations to anticipate concerns and opportunities.

build block icon build block icon

Build: Insights were translated into segmentation tables and value propositions, highlighting how MeduCA meets stakeholder-specific needs in both terrestrial and extraterrestrial markets.

test block icon test block icon

Test: Stakeholder assumptions were validated through competitor analysis, policy reviews, and iterative refinement to prioritize key decision-makers and adopters.

learn block icon learn block icon

Learn: We learned that stakeholders prioritize solutions that address economic and environmental challenges simultaneously, confirming the strength of our dual-purpose approach.

To learn more about meduCA’s entrepreneurship plan in detail, check out our entrepreneurship pages, starting at Needs Finding.

Overall, these DBTL cycles were integrated throughout meduCA’s project workflow and guided our work all season. These iterations were instrumental in ensuring that our deliverables proceeded in a logical fashion and demonstrated true Engineering Success across every subteam. To read about these cycles in context, we encourage you to navigate to their home pages, which are linked below each section.