Engineering Success

Engineering in iGEM is not just about having an idea --- it's about making that idea work in the lab through iteration. iGEM encourages teams to show how they applied the Design--Build--Test--Learn (DBTL) cycle: identifying problems, designing possible solutions, building and testing them, and learning from the results. Our project followed this principle closely. Each time we encountered a challenge, we treated it not as a failure, but as feedback. This way, we could keep moving forward, refining our system step by step.

Figure 1: The Design--Build--Test--Learn cycle we applied to our project. Each engineering challenge followed this iterative framework.

At the start, our designs were simple: clone the gene, express the protein, and test its function. But reality is never that straightforward. Protein yields were lower than expected, expression sometimes stressed the bacteria, and assembling functional MHC complexes required more than just following textbook protocols. Each time, we went back to the DBTL cycle --- redesigning parts of the workflow, changing experimental conditions, or rethinking our approach entirely.

We want to highlight two important things about our engineering journey:

  1. Iteration matters more than perfection. Our first attempts rarely worked as planned, but each attempt gave us data to improve the next step.
  2. Decisions were guided by measurable outcomes. Whether it was SDS-PAGE bands, BCA assay readouts, or fluorescence signals, we always tested our assumptions against real results.

This page documents how we turned setbacks into progress. From rescuing low protein expression by testing multiple colonies, to improving antigen detection by building MHC tetramers instead of relying on monomers, each engineering step reflects the DBTL cycle in action. Our experience shows that, with systematic problem-solving, even a high school team can engineer complex biological systems in ways that align with professional research practices.

Challenge 1: Low Protein Expression in E. coli

When we first tried expressing the HLA-A*02:01 heavy chain in E. coli, the outcome was disappointing. Colonies that carried our plasmid grew more slowly than controls, and protein yields were very low. On SDS-PAGE, the heavy chain band appeared weak and irregular, instead of the clear bands we expected. This suggested two possibilities: either the protein was not being expressed efficiently, or the expression burden was toxic to the bacteria (Rosano & Ceccarelli, 2014).

This became our first major engineering obstacle. Without enough protein, purification, refolding, and binding assays would not be possible.

Iterative Solution: Splitting Colonies and Selecting Survivors

Instead of repeating the same culture from a single colony, we used the Design--Build--Test--Learn (DBTL) cycle to approach the problem systematically.

  • Design: We hypothesized that different colonies from the same plate might vary in how well they tolerated heavy chain expression.
  • Build: We inoculated multiple flasks with different colonies, rather than relying on just one.
  • Test: Growth curves (OD₆₀₀) were measured for each flask. Some showed slow growth and high stress, while others grew more steadily.
  • Learn: Colonies differed significantly in their ability to express the protein. By scaling up the healthier colonies, we obtained higher protein yields.

This adjustment turned a stalled experiment into usable protein production. It was our first clear success in applying engineering principles to a real biological problem.
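To make the selection step concrete, here is a minimal Python sketch of how colonies could be ranked by their apparent growth rate from OD₆₀₀ time courses. The colony names and readings are hypothetical placeholders, not our actual measurements:

    # Minimal sketch: rank colonies by exponential-phase growth rate estimated
    # from OD600 time courses. All colony names and readings are hypothetical.
    import numpy as np

    # Hourly OD600 readings for each inoculated flask (placeholder values).
    od600 = {
        "colony_A": [0.05, 0.11, 0.25, 0.52, 0.98, 1.60],
        "colony_B": [0.05, 0.07, 0.10, 0.14, 0.19, 0.25],  # slow, stressed growth
        "colony_C": [0.05, 0.10, 0.22, 0.47, 0.90, 1.45],
    }
    hours = np.arange(len(next(iter(od600.values()))))

    def growth_rate(readings):
        """Slope of ln(OD600) vs. time: the apparent exponential growth rate."""
        slope, _ = np.polyfit(hours, np.log(readings), 1)
        return slope

    rates = {name: growth_rate(vals) for name, vals in od600.items()}
    for name, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        print(f"{name}: mu = {rate:.3f} per hour")

    # Scale up only colonies growing near the best observed rate.
    selected = [n for n, r in rates.items() if r >= 0.9 * max(rates.values())]
    print("Selected for scale-up:", selected)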

Figure 2: By iterating the engineering cycle, healthier colonies were selected to enhance protein yield.

Challenge 2: From Monomers to Tetramers

Once we successfully purified the heavy chain and β2-microglobulin, we faced a decision. Should we test antigen binding with monomeric MHC--peptide complexes, or go further and form tetramers?

We chose the more ambitious option: tetramer formation. This required extra steps, but the benefits outweighed the effort.

  • Higher Sensitivity: Monomers bind T-cell receptors very weakly and transiently. Tetramers, made by linking four biotinylated MHC molecules to streptavidin, have much stronger avidity (Altman et al., 1996). This makes binding more detectable and reliable.
  • Established Method in Immunology: MHC tetramers are widely used in immunology labs to identify antigen-specific T cells (Davis & Bjorkman, 1989). By following this path, our project aligned with professional standards.
  • Better Testing of Engineering Impact: Since our goal was to improve antigen delivery, we needed a system sensitive enough to reveal differences between wild-type and engineered MHC molecules. Tetramers gave us that level of resolution, while monomers might not have.

Choosing tetramers was not the easiest route, but it represented a deliberate engineering choice: building a more advanced testing system so our results could be meaningful and trusted.
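The tetramerization step itself reduces to simple stoichiometry: streptavidin has four biotin-binding sites, so biotinylated monomer is typically combined with streptavidin at roughly a 4:1 molar ratio, often in small stepwise additions so that free streptavidin stays limiting. A minimal Python sketch of that arithmetic follows; the molecular weights are approximations and the input mass is illustrative, not our exact reagent values:

    # Minimal sketch: streptavidin needed for a ~4:1 monomer:streptavidin
    # molar ratio. Molecular weights and amounts are approximate/illustrative.
    MHC_MONOMER_MW = 48_000    # g/mol, heavy chain + beta-2 microglobulin + peptide
    STREPTAVIDIN_MW = 53_000   # g/mol

    def streptavidin_mass_ug(monomer_ug, ratio=4.0):
        """Mass of streptavidin (ug) to tetramerize a given mass of monomer."""
        monomer_mol = monomer_ug / MHC_MONOMER_MW
        return (monomer_mol / ratio) * STREPTAVIDIN_MW

    total_sa = streptavidin_mass_ug(100.0)  # for 100 ug of biotinylated monomer
    # Stepwise addition (e.g., 10 aliquots) keeps free streptavidin low,
    # favoring fully occupied tetramers over partially filled complexes.
    print(f"Add {total_sa:.1f} ug streptavidin in 10 aliquots of {total_sa / 10:.1f} ug")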

Challenge 3: Measuring Binding Affinity

The next step was to confirm whether our engineered MHC molecules showed improved peptide presentation compared to wild-type. For this, we needed a quantitative and reproducible assay.

We selected a fluorescence-based plate assay, similar in principle to ELISA (Crowther, 2000). In our setup:

  • MHC tetramers carrying tumor-specific peptides were immobilized on a nickel-coated plate.
  • Detection reagents, such as streptavidin--fluorophore conjugates, were used to measure binding strength.
  • Fluorescence intensity was measured using a microplate reader, with stronger signals corresponding to stronger or more stable peptide binding.

This method was ideal for two reasons:

  1. It provided a direct quantitative measurement to compare wild-type vs. engineered variants.
  2. It was accessible and reproducible for a high school team, while still reflecting how professional labs validate protein interactions.

Integrating this assay into our DBTL cycle ensured that our engineering decisions were backed by measurable outcomes, not just assumptions.
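To illustrate how such plate data can be processed, the Python sketch below subtracts the blank-well background and summarizes replicate wells for each variant. All well values and variant names are hypothetical, not our measured data:

    # Minimal sketch: background-subtract and summarize replicate wells from a
    # fluorescence plate read. All values below are hypothetical placeholders.
    import statistics

    raw_rfu = {                        # relative fluorescence units per replicate well
        "blank":     [210, 198, 205],  # no-tetramer control wells
        "wild_type": [1450, 1520, 1480],
        "mutant_1":  [2310, 2255, 2390],
        "mutant_2":  [1980, 1410, 2520],
    }

    background = statistics.mean(raw_rfu["blank"])

    for variant in ("wild_type", "mutant_1", "mutant_2"):
        values = [v - background for v in raw_rfu[variant]]
        mean = statistics.mean(values)
        sd = statistics.stdev(values)
        print(f"{variant}: {mean:.0f} +/- {sd:.0f} RFU (n={len(values)})")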

Figure 3: Binding affinity analysis set-up.

Integration of Data and Iteration

At the heart of our engineering workflow was a continuous data-driven feedback loop that connected every stage of the Design–Build–Test–Learn cycle. Each experimental outcome—whether a gel band, absorbance value, or fluorescence signal—was treated as both a result and a guidepost for the next iteration.

Our approach began with SDS-PAGE analysis, which revealed irregular band patterns and inconsistent expression levels across colonies. Rather than dismissing this as experimental noise, we treated the heterogeneity as meaningful data. The observation that some colonies produced sharper, more defined bands prompted us to investigate colony-level expression variability—a decision that ultimately doubled our usable protein yield. This exemplified the first layer of iteration: moving from observational data to informed selection.

Next, the BCA protein quantification assay (Smith et al., 1985) transformed these qualitative improvements into quantitative insight. By establishing a standard curve and measuring absorbance at 562 nm, we could normalize protein concentrations across different preparations. This normalization was critical for downstream comparability. Without equalized inputs, interpreting differences in peptide-binding strength between wild-type and mutant HLA-A*02:01 would have been unreliable. Quantification ensured that every subsequent measurement reflected genuine biochemical behavior, not uneven sample loading.
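A minimal Python sketch of this normalization step is shown below. The BSA standard series follows a typical BCA kit layout, but the absorbance readings and sample names are hypothetical:

    # Minimal sketch: fit a BCA standard curve (BSA standards, A562) and
    # interpolate unknown sample concentrations. Readings are hypothetical.
    import numpy as np

    bsa_ug_per_ml = np.array([0, 125, 250, 500, 1000, 2000])
    a562_standards = np.array([0.05, 0.14, 0.23, 0.42, 0.80, 1.55])

    # Fit A562 = slope * concentration + intercept over the linear working range.
    slope, intercept = np.polyfit(bsa_ug_per_ml, a562_standards, 1)

    def concentration(a562):
        """Invert the standard curve to estimate protein concentration (ug/mL)."""
        return (a562 - intercept) / slope

    samples = {"prep_1": 0.61, "prep_2": 0.95}
    concs = {name: concentration(a) for name, a in samples.items()}
    target = min(concs.values())  # normalize every prep to the most dilute one
    for name, c in concs.items():
        print(f"{name}: {c:.0f} ug/mL -> dilute {c / target:.2f}-fold")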

In the binding affinity experiments, fluorescence-based assays provided the most direct evidence that our engineering was effective. The data revealed that Mutant 1 (W167A) maintained a stable binding signal over time, while Mutant 2 (Y7A/Y99A/Y159A/Y171A) exhibited broader signal variability—suggesting that the second set of mutations increased flexibility at the expense of stability. These results, though subtle, were vital in confirming that our computationally suggested modifications were biochemically relevant.
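One simple way to express this difference numerically is the coefficient of variation (CV) of each mutant's signal across the time course, where a lower CV indicates a more stable signal. A brief Python sketch with illustrative, not actual, readings:

    # Minimal sketch: quantify signal stability over a fluorescence time course
    # using the coefficient of variation (CV). Readings are hypothetical.
    import statistics

    time_course_rfu = {
        "Mutant 1 (W167A)": [2310, 2290, 2330, 2305, 2295],                # stable
        "Mutant 2 (Y7A/Y99A/Y159A/Y171A)": [2020, 1750, 2280, 1600, 2150],  # variable
    }

    for mutant, readings in time_course_rfu.items():
        mean = statistics.mean(readings)
        cv = statistics.stdev(readings) / mean * 100
        print(f"{mutant}: mean {mean:.0f} RFU, CV {cv:.1f}%")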

Crucially, these quantitative datasets were not analyzed in isolation. They were systematically compared across cycles to guide subsequent design decisions. Each “Learn” phase produced a distinct hypothesis, which then re-entered the “Design” phase as an improved construct or refined assay condition. For example, the misfolding identified in SDS-PAGE prompted a redesign of expression conditions; the protein quantification results informed how much sample to use per binding test; and the fluorescence data indicated which mutant should be prioritized for extended peptide docking simulations.

Through this cyclical integration of data and learning, our project advanced from uncertain early-stage expression to reproducible, quantitative measurement of engineered antigen presentation. This tight coupling between measurement and design exemplifies what iGEM defines as Engineering Success: reproducible, data-informed iteration that transforms uncertainty into reliable performance. It also provided a foundation for future reproducibility—any lab following our documented workflow could repeat, validate, and extend our findings without ambiguity.

Reflection and Future Directions

The iterative engineering process not only shaped our technical results but also reshaped our team’s understanding of what it means to “engineer biology.” Initially, we approached the project with a conventional research mindset—testing hypotheses and seeking positive results. Over time, however, we began to see the deeper value of iteration: every unexpected result was a form of communication from the biological system itself.

We learned that failure is feedback. When our early protein yields were low, it wasn’t merely a setback—it was an indication that our design assumptions about bacterial tolerance were incomplete. By systematically mapping OD₆₀₀ growth rates against expression patterns, we learned to recognize subtle correlations between stress response and yield optimization. This insight has since become a reusable design rule within our lab: variability across colonies can be leveraged as a natural selection tool for protein production.

Moreover, we recognized that measurement is inseparable from design. Every assay we performed—SDS-PAGE, BCA, fluorescence—wasn’t just a test but an extension of the engineering process itself. Each method introduced constraints that shaped how we thought about our system. For example, the resolution limit of the SDS-PAGE gel informed our choice of molecular weight markers and buffer composition, while the sensitivity of the fluorescence reader determined our minimum measurable peptide-binding threshold. This alignment between data and decision-making refined our ability to design with quantitative foresight rather than reactive troubleshooting.

From a broader perspective, the experience cultivated a new level of systems thinking. Engineering MHC molecules is inherently complex because it sits at the intersection of structural biology, immunology, and synthetic design. By combining AI-based mutation prediction (via DiffDock modeling) with iterative wet-lab validation, we began to bridge computational inference with experimental reality. This hybrid workflow not only accelerated our learning but also demonstrated how computational biology can guide empirical engineering in education-focused research environments like iGEM.

Looking forward, we aim to extend this iterative philosophy beyond the current project. Our immediate next step is to integrate computational modeling feedback directly into the wet-lab cycle—for instance, using predictive structural analysis to suggest new stabilizing mutations before synthesis. This would create a closed-loop engineering system where data from fluorescence assays automatically inform new simulation parameters, enabling real-time design optimization.

Additionally, we plan to transform our experimental pipeline into an open-source educational module. By adapting our DBTL documentation into a digital teaching format (complete with annotated protocols, data templates, and visualization guides), we hope to empower future iGEM teams—particularly high school teams—to replicate and iterate upon our work. This aligns directly with the iGEM Foundation’s mission of knowledge dissemination and community reproducibility.

In the longer term, our modular framework could be expanded for broader immunological applications, such as antigen discovery in infectious diseases or autoimmunity studies. The workflow of iterative design, measurement-driven feedback, and AI-supported prediction is not limited to MHC engineering; it represents a scalable, generalizable approach to rational protein design.

Ultimately, this journey redefined how we perceive “engineering success.” It is not the end product that defines success but the resilience of the process—the willingness to redesign, retest, and relearn. By combining data integration, computational reasoning, and educational outreach, we turned a challenging biological question into a repeatable, teachable model of scientific iteration.

References

  1. Altman, J. D., Moss, P. A. H., Goulder, P. J. R., Barouch, D. H., McHeyzer-Williams, M. G., Bell, J. I., McMichael, A. J., & Davis, M. M. (1996). Phenotypic analysis of antigen-specific T lymphocytes. Science, 274(5284), 94–96.
  2. Ando, D., Garcia Martin, H., & Keasling, J. D. (2016). Engineering cycles in synthetic biology: from DNA assembly to models. Current Opinion in Biotechnology, 39, 139–144.
  3. Crowther, J. R. (2000). The ELISA Guidebook. Humana Press, Totowa, NJ.
  4. Rosano, G. L., & Ceccarelli, E. A. (2014). Recombinant protein expression in Escherichia coli: advances and challenges. Frontiers in Microbiology, 5, 172.
  5. Smith, P. K., Krohn, R. I., Hermanson, G. T., Mallia, A. K., Gartner, F. H., Provenzano, M. D., Fujimoto, E. K., Goeke, N. M., Olson, B. J., & Klenk, D. C. (1985). Measurement of protein using bicinchoninic acid. Analytical Biochemistry, 150(1), 76–85.