In synthetic biology, design and building are only as good as the measurements that confirm whether the system works. Measurement is how we move from hypotheses to evidence. For our project, measurement was not just an afterthought — it was the core of our progress. Every stage of our workflow depended on reliable data to show whether we were moving closer to our goal of enhancing MHC Class I antigen presentation.
To confirm the production of our engineered proteins, we used SDS-PAGE (sodium dodecyl sulfate–polyacrylamide gel electrophoresis). This technique separates proteins according to molecular weight and is an essential method in protein biochemistry to validate expression and purity. We expressed four constructs—wild-type HLA-A*02:01, Mutant #1 (W167A), Mutant #2 (Y7A/Y99A/Y159A/Y171A), and β2-microglobulin (β2M)—in E. coli BL21(DE3) under IPTG induction, then subjected lysates and purified fractions to SDS-PAGE using the Laemmli system (1).
Initially, our gels revealed dense, irregularly shaped bands around the expected molecular weights of 44 kDa and 12 kDa. While this indicated high expression, it also suggested the presence of partially misfolded aggregates, a common problem in bacterial overexpression of eukaryotic proteins. We recognized that band intensity alone could not be taken as evidence of success. This led us to optimize the folding and solubility of our proteins by employing a colony-splitting strategy, selecting clones that demonstrated consistent expression without aggregation artifacts. The difference was immediately visible: after optimization, our SDS-PAGE results displayed sharp, well-defined bands with minimal smearing, indicating reduced aggregation and degradation and confirming that the recombinant proteins were suitable for downstream use.
This improvement was not only a technical milestone—it was a validation of our iterative engineering process. SDS-PAGE provided immediate, visual evidence that our system was functioning and gave us confidence to proceed to functional testing. These results established the first quantitative checkpoint in our workflow: that expression and solubility could be objectively measured, compared, and improved.
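For teams that want to push this checkpoint further toward objectivity, apparent molecular weights can also be read off a gel numerically rather than by eye. The Python sketch below is a minimal example of the standard ladder-calibration approach, fitting log10(MW) of the ladder bands against their relative migration (Rf) and interpolating sample bands; the ladder sizes and migration distances shown are hypothetical placeholders, not measurements from our gels.

```python
# Minimal sketch of apparent-MW estimation from an SDS-PAGE gel.
# All ladder sizes and migration distances below are hypothetical placeholders.
import numpy as np

# Hypothetical protein ladder: molecular weight (kDa) and migration distance (mm)
ladder_kda = np.array([100, 70, 50, 35, 25, 15, 10])
ladder_mm  = np.array([12,  18, 25, 33, 41, 52, 60])
dye_front_mm = 65.0  # distance travelled by the dye front

rf = ladder_mm / dye_front_mm                                 # relative migration
slope, intercept = np.polyfit(rf, np.log10(ladder_kda), 1)    # log10(MW) vs Rf

def apparent_mw(band_mm: float) -> float:
    """Estimate apparent molecular weight (kDa) of a band from its migration distance."""
    return 10 ** (slope * (band_mm / dye_front_mm) + intercept)

# Example: hypothetical bands near the expected sizes of the heavy chain (~44 kDa) and β2M (~12 kDa)
for mm in (28.0, 57.0):
    print(f"band at {mm:.0f} mm ≈ {apparent_mw(mm):.1f} kDa")
```

Because migration is only approximately log-linear in molecular weight, estimates from such a fit are treated as approximate rather than exact, but they make the comparison between expected and observed sizes explicit.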
Figure 1. SDS-PAGE gel images showing clear bands that indicate successful expression of the MHC components, including HLA-A*02:01 and β2M.
After confirming expression, our next objective was to quantify the amount of protein produced. This step was critical for maintaining experimental consistency, as differences in input concentration can lead to misleading functional comparisons. We used the bicinchoninic acid (BCA) assay (2), a colorimetric method in which peptide bonds reduce Cu²⁺ to Cu⁺ under alkaline conditions; BCA then chelates the Cu⁺ to form a purple complex whose absorbance is proportional to total protein content.
Each sample was mixed with the BCA reagent, incubated at 37°C, and measured spectrophotometrically at 562 nm. A standard curve using bovine serum albumin (BSA) as a reference allowed us to interpolate concentrations with high precision. We observed concentration variations of up to 15% among different purification batches, highlighting how seemingly minor differences in expression yield could confound binding data if not normalized. After adjusting all samples to the same total protein concentration, we were able to conduct subsequent binding assays on an equal footing.
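To illustrate the normalization logic, the Python sketch below fits a linear BSA standard curve, interpolates sample concentrations from their A562 readings, and computes the dilution needed to bring every sample to a common working concentration. All absorbance values, the 0.5 mg/mL target, and the per-sample readings are hypothetical; a real BCA response is also slightly nonlinear, so a quadratic or point-to-point fit may be preferable in practice.

```python
# Minimal sketch of BCA standard-curve fitting and normalization (hypothetical numbers).
import numpy as np

# Hypothetical BSA standards (mg/mL) and background-corrected A562 values
bsa_conc = np.array([0.0, 0.125, 0.25, 0.5, 1.0, 2.0])
bsa_a562 = np.array([0.00, 0.09, 0.17, 0.33, 0.62, 1.20])

slope, intercept = np.polyfit(bsa_conc, bsa_a562, 1)   # A562 = slope * conc + intercept

def conc_from_a562(a562: float) -> float:
    """Interpolate total protein concentration (mg/mL) from a corrected A562 reading."""
    return (a562 - intercept) / slope

# Hypothetical readings for the four purified constructs
samples = {"WT HLA-A*02:01": 0.55, "Mutant #1": 0.48, "Mutant #2": 0.60, "β2M": 0.41}
target = 0.5  # mg/mL working concentration used for all binding assays

for name, a562 in samples.items():
    c = conc_from_a562(a562)
    dilution = c / target if c > target else 1.0   # only dilute, never "concentrate"
    print(f"{name}: {c:.2f} mg/mL -> dilute {dilution:.2f}x to reach {target} mg/mL")
```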
The BCA assay not only standardized our data but also illustrated an important lesson in reproducibility. Measurement in synthetic biology is not simply about confirming that something exists—it’s about quantifying it in a way that can be replicated by others. By incorporating this normalization step, we ensured that our comparisons between wild-type and mutant HLAs were based on true biological differences, not experimental noise.
Figure 2. Standard curve from BCA assay, with sample absorbance values mapped to concentrations.
In our workflow, we aimed not just to express monomeric MHC Class I proteins but to assemble them into tetramers. This step was critical because tetramers provide a much stronger binding signal to T-cell receptors compared to monomers, which bind only weakly and transiently (3,4). Tetramerization is achieved by attaching four biotinylated MHC–peptide complexes to a streptavidin backbone, creating a multivalent structure.
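The 4:1 stoichiometry is worth making explicit, because it determines how much streptavidin to add for a given amount of biotinylated monomer. The back-of-the-envelope Python sketch below illustrates the calculation; the molecular weights, the 100 µg input, and the ten-aliquot addition scheme are assumptions for illustration, not our exact protocol.

```python
# Back-of-the-envelope tetramerization stoichiometry: four biotinylated
# MHC–peptide monomers per streptavidin. Values are approximate/hypothetical.
MONOMER_MW = 45_000       # g/mol, biotinylated MHC–peptide monomer (approximate)
STREPTAVIDIN_MW = 53_000  # g/mol, streptavidin tetramer (approximate)

monomer_mass_ug = 100.0                                 # hypothetical amount to tetramerize
monomer_nmol = monomer_mass_ug / (MONOMER_MW / 1000)    # µg divided by kDa gives nmol

streptavidin_nmol = monomer_nmol / 4.0                  # 4 monomers per streptavidin
streptavidin_ug = streptavidin_nmol * (STREPTAVIDIN_MW / 1000)

# Streptavidin is often added in several small aliquots so each molecule
# can fill all four biotin-binding sites before more is introduced.
n_aliquots = 10
print(f"monomer: {monomer_nmol:.2f} nmol")
print(f"streptavidin needed: {streptavidin_nmol:.2f} nmol ({streptavidin_ug:.1f} µg), "
      f"added as {n_aliquots} aliquots of {streptavidin_ug / n_aliquots:.1f} µg")
```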
While some groups perform separate assays to confirm tetramer assembly, our team chose a more integrated approach: instead of measuring tetramers directly, we validated their proper formation through the binding assays themselves. If tetramers were not forming, we would not expect to see specific peptide binding signals in the ELISA. In contrast, clear positive binding signals provided strong indirect evidence that tetramers had formed correctly.
This was a deliberate engineering decision. By combining the quality check (tetramer formation) and the functional test (binding affinity) into a single experiment, we streamlined our workflow and reduced the number of separate measurements we needed to perform, an important consideration for a high school team working with limited time and resources.
Figure 3. Refolding and tetramerization were performed in a single-step process, together with fluorescence measurement.
The ELISA (Enzyme-Linked Immunosorbent Assay) was the heart of our measurement strategy. It allowed us to quantify how well wild-type and engineered MHC tetramers interacted with tumor-specific peptides.
In practice, we coated 96-well plates with peptide-loaded MHC tetramers and used enzyme-linked detection reagents (e.g., streptavidin–HRP) to measure binding. The enzymatic reaction produced a color change, and the intensity of this color was measured at 450 nm (5). Stronger absorbance indicated stronger or more stable binding.
By comparing the absorbance curves of wild-type HLA-A*02:01, Mutant #1 (W167A), and Mutant #2 (Y7A/Y99A/Y159A/Y171A), we could evaluate whether our engineered designs improved antigen presentation. This gave us a quantitative and reproducible way to judge engineering success.
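To show how such a comparison can be reduced to numbers, the short Python sketch below averages triplicate A450 readings, subtracts a blank, and expresses each variant's signal relative to wild type; all readings shown are hypothetical and are not our experimental data.

```python
# Minimal sketch of ELISA readout comparison (hypothetical A450 triplicates).
import statistics

a450 = {
    "WT HLA-A*02:01": [0.82, 0.79, 0.85],
    "Mutant #1 (W167A)": [0.65, 0.61, 0.68],
    "Mutant #2 (Y7A/Y99A/Y159A/Y171A)": [0.95, 0.99, 0.92],
    "blank": [0.08, 0.07, 0.09],
}

blank = statistics.mean(a450["blank"])
corrected = {k: statistics.mean(v) - blank for k, v in a450.items() if k != "blank"}
wt = corrected["WT HLA-A*02:01"]

for name, signal in corrected.items():
    print(f"{name}: A450 = {signal:.2f} ({signal / wt * 100:.0f}% of wild type)")
```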
Figure 4. Fluorescence measurement to analyze the peptide binding affinity of each HLA type.
Our journey through measurement revealed that precision in synthetic biology is not only about obtaining data—it is about understanding what that data truly means. From our experience, several key insights emerged that reshaped how we think about both engineering and biology.
First, we learned that not all expression is good expression. The initial SDS-PAGE results taught us that a strong signal does not necessarily equate to functional protein. Misfolded or aggregated proteins can stain just as intensely as correctly folded ones, but they lack the biological activity that makes them useful. This distinction highlighted the importance of combining qualitative and quantitative measurement approaches—looking at both what is present and how it behaves. We came to realize that the ultimate goal of measurement is not to generate numbers or images, but to interpret what those outputs signify about molecular function. In future iterations, this understanding will guide us to incorporate folding assays, solubility tests, or circular dichroism spectroscopy to complement our electrophoretic analyses.
Second, we discovered that measurement can be both efficient and multifunctional. Early in the project, we often separated tasks for the sake of simplicity: one assay for purification, another for tetramerization, another for binding. However, as our workflow matured, we realized that integrated measurement—designing assays that fulfill multiple objectives—was not only more efficient but also more reliable. By using the same functional assay to verify tetramer assembly and to measure peptide binding, we reduced the number of independent variables and sources of error. This principle of “measurement synergy” became one of the defining philosophies of our team. It demonstrated that robust engineering in biology does not always require more equipment or more steps, but rather smarter experimental design.
Third, we learned that quantitative validation is universal and irreplaceable. The BCA assay and ELISA transformed our project from descriptive to analytical science. They allowed us to move beyond subjective observations—like “the bands look clearer”—to objective, numerical comparisons supported by replicable standards. Quantitative data enabled us to make meaningful statements about relative performance, reproducibility, and engineering success. This experience showed us how numbers can unify biology with the rigor of physics and engineering. It also reinforced that measurement is not a one-time event but a continuous process—each dataset feeds the next cycle of design, build, and test. For future teams, we hope our measurement framework serves as a template for how quantitative reasoning can be embedded into every stage of a synthetic biology project.
Finally, we realized that measurement connects every discipline in iGEM. It bridges wet-lab experimentation with modeling, hardware, human practices, and even education. Each gel image, absorbance curve, or standardization protocol we developed had implications beyond our immediate goals—it provided resources for reproducibility, accessibility, and open scientific communication. In many ways, measurement became the common language of our team, integrating insights from biology, computation, and ethics into a coherent story of evidence-driven discovery. This mindset, we believe, embodies the true spirit of iGEM: building not just biological systems, but systems of understanding.
These lessons not only advanced our project but also gave us first-hand experience of how measurement drives iteration in synthetic biology.