Engineering

Overview

Since its inception, our project has undergone continuous refinement across multiple domains, including wet lab work, modeling, hardware development, human practices, and education. Despite encountering challenges, we consistently adhered to the Design-Build-Test-Learn (DBTL) cycle, which ultimately led to our success. The following sections detail our iterative development process in the wet lab, modeling, and hardware components.

Wet Lab

Cycle 1: Chemical Immobilization Attempt & Pivot

Design

Advances in synthetic biology have expanded the use of genetically modified microorganisms in medicine, particularly in living biomaterials. A prevalent strategy in engineered yeast systems involves chemically inducible suicide switches to prevent cellular escape. However, the required inducers are often not fully biocompatible and necessitate manual intervention, thus lacking autonomy. As our dressing contacts blood directly, preventing live cells from escaping the carrier into the bloodstream was critical. Initially, we explored cell surface modification to immobilize yeast cells stably within the hydrogel matrix.

Build

We employed sodium periodate oxidation to introduce aldehyde groups onto the yeast cell surface, aiming to enable stable covalent cross-linking with the hydroxyl groups present in the L-DOPA modified hydroxybutyl chitosan (L-HBC) hydrogel.

Test

After following the established protocol, we observed a significant loss of viability or complete cell death in the modified yeast. No colony formation was observed on YPD plates after a three-day incubation.

Learn

Consultation with our principal investigators, Professor Guanglei Liu and Professor Ya Liu, revealed that this oxidative modification is standard for Pichia pastoris but unsuitable for our chassis, Yarrowia lipolytica, due to its more fragile cell wall and lower tolerance to such chemical treatments. Consequently, we pivoted to exploring gentler cell encapsulation strategies.


Cycle 2: Hydrogen Bond-Mediated Gentle Encapsulation

Design

Conventional cell encapsulation often employs harsh reagents that can significantly compromise cell viability. To address this, we devised a strategy leveraging hydrogen bonding between the surface-displayed antimicrobial peptide Pexiganan, expressed from a new plasmid constructed on the pINA1317 backbone (BBa_258PCOD1), and the L-DOPA modified hydroxybutyl chitosan (L-HBC) hydrogel matrix. This approach enabled the encapsulation of the engineered yeast within the biocompatible L-HBC hydrogel.

Build

To achieve microbial surface display, we cloned the Pexiganan sequence into the pINA1317-ylcwp110 plasmid (BBa_25VB2NC8) and transformed it into Yarrowia lipolytica Po1h.

Test

Successful surface display of functional Pexiganan was confirmed by an Oxford cup assay, which demonstrated a clear zone of inhibition against target bacteria. We then conducted encapsulation experiments to quantitatively assess yeast leakage over time.

Img.1 Time-course evaluation of engineered yeast encapsulation and leakage within the L-HBC hydrogel. A time-course assay was conducted to quantify the leakage dynamics of engineered yeast encapsulated in the L-HBC hydrogel. (a) At 24 hours, no yeast colonies were observed, indicating excellent integrity of the encapsulation system initially. (b) At 48 hours, the emergence of 4 single colonies indicated the onset of minimal leakage. (c) By 72 hours, the colony count increased to 26, demonstrating a significant, time-dependent increase in leakage. These results confirm that our encapsulation strategy provides complete containment within the first 24 hours and effectively suppresses leakage up to 48 hours. Furthermore, the clear time-dependent leakage trend offers crucial experimental evidence and a defined time window for future optimization of the encapsulation efficacy.

Learn

Leakage assay results (Figure 1) confirmed that our non-covalent encapsulation strategy fully contained the yeast for the first 24 hours and limited escape to minimal leakage through 48 hours, demonstrating its reproducibility and initial efficacy. This approach garnered positive feedback from microbiology experts and interdisciplinary team members during human practices engagements, who suggested computational modeling could further elucidate the underlying interactions. Through this DBTL cycle, we significantly mitigated the biosafety risk of cell escape and established a gentle encapsulation method for living biomaterials, while also identifying a clear pathway for future optimization.

Dry Lab

Hardware

Cycle 1: More Reliable "Wound Recognition + Spatial Positioning"

Design

Problem Discovery: In wound visual recognition and spatial positioning, the initial version, which used RGB threshold segmentation, missed boundaries under dark skin and strong reflection. For 3D spatial positioning, ToF/radar solutions were too costly for low-cost adoption.

Goal: Under low computing power and low-cost constraints, achieve robust 2D contour segmentation and 3D relative spatial position/scale estimation.

Build

1. Data & Model: Collected wound images of feet/hands with different skin colors, then trained FCN/YOLO/SAM models.

2. Geometric Calibration: Integrated camera intrinsic/extrinsic calibration; micro-step sizes (Bx, By) were precisely given by the robotic arm.

3. Model Evaluation: Benchmarked the YOLO, SAM, and FCN models for semantic segmentation of wounds, using mean IoU (mIoU) as the evaluation metric.

4. Depth/Scale: Used the micro-baseline disparity method: the camera on the robotic arm performed known small horizontal/vertical micro-shifts, and disparity was calculated via pinhole imaging to obtain the relative height difference ΔZ and the pixel physical scale (m/px).

5. Edge Hardware: Low-distortion visible-light camera + ESP/STM32 control.
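As a rough illustration of the micro-baseline geometry in step 4, here is a minimal pure-Python sketch of the pinhole relations Z = f·B/d and scale = Z/f. The function names and numeric values are illustrative assumptions, not taken from our hardware.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole model: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

def pixel_scale_m_per_px(f_px, depth_m):
    """Physical size of one pixel at the given depth (m/px)."""
    return depth_m / f_px

# Illustrative numbers: focal length f = 800 px, micro-baseline B = 2 mm.
f_px, B = 800.0, 0.002
z_a = depth_from_disparity(f_px, B, disparity_px=4.0)
z_b = depth_from_disparity(f_px, B, disparity_px=5.0)
dz = z_a - z_b                    # relative height difference ΔZ between points
scale = pixel_scale_m_per_px(f_px, z_a)
```

The error-propagation analysis mentioned below follows from the same relation: since Z is inversely proportional to d, a larger baseline yields a larger disparity and thus a smaller relative depth error for the same pixel-level disparity noise.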

Test

Recognition Robustness: Compared the RGB threshold method against FCN/YOLO/SAM under dark skin, strong reflection, and fluid-covered conditions, using mIoU and boundary F1. FCN achieved the best accuracy and was chosen as the main recognition model for future applications.

Depth Accuracy: Verified ΔZ errors under multiple baselines using standard step blocks, and applied error propagation to derive a recommended micro-baseline that meets the target precision δZ. Dimensional accuracy was also tested.
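For reference, the mIoU metric used in this comparison can be sketched in a few lines of plain Python. Binary masks are flattened to 0/1 lists here for brevity; a real evaluation would operate on full-resolution arrays.

```python
def iou(pred, gt):
    """IoU of two binary masks given as flat 0/1 lists."""
    inter = sum(p & g for p, g in zip(pred, gt))
    union = sum(p | g for p, g in zip(pred, gt))
    return inter / union if union else 1.0   # two empty masks agree perfectly

def mean_iou(preds, gts):
    """Average IoU over a set of (prediction, ground-truth) mask pairs."""
    return sum(iou(p, g) for p, g in zip(preds, gts)) / len(preds)

score = iou([1, 1, 0, 0], [1, 0, 1, 0])   # 1 overlapping pixel, 3 in the union
```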

Learn

FCN was the most stable in boundary and dark-skin samples. YOLO had low latency but coarse boundaries. SAM required interactive annotation, suitable for guided labeling of difficult cases.

Future plan: adopt FCN as the backbone recognizer, YOLOv8-seg for ROI proposals/light backup, and SAM for semi-automatic annotation of difficult samples.

Precision and inference latency vary with camera baseline; in real applications, multiple trials are needed to reach an engineering trade-off.


Cycle 2: More Stable "Robotic Arm Control + Anti-Jitter"

Design

Problem Discovery: ESP32 built-in IK and linear interpolation caused jitter and end-effector Z drift during multi-joint coordination, leading to occasional nozzle contact with the wound. Even after PID fine-tuning, residual error remained.

Goal: Improve end-effector trajectory precision and Z-axis stability, enabling repeatable and traceable drug application on low-cost robotic arms (RoArm-M3-S, 5–6 DOF).

Build

1. Developed parameterized IK (using DH parameters to generate numerical/analytical hybrid solutions), solved and time-parameterized on PC, then sent joint trajectories to the robot.

2. Polar Strip Path: Within one strip, primarily moved the base joint, while shoulder/elbow were locked or minimally adjusted, physically suppressing Z-axis disturbance.

3. Trajectory Shaping: Applied S-curve acceleration/deceleration and joint-level synchronization constraints; if necessary, added lead compensation/feedforward for key joints.
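The S-curve time parameterization in step 3 can be illustrated with a quintic blend, which has zero velocity and zero acceleration at both endpoints (the property that suppresses jerk-induced jitter). This is a minimal sketch under that assumption, not our actual controller code.

```python
def s_curve(t):
    """Quintic blend: s(0)=0, s(1)=1, with zero velocity and zero
    acceleration at both endpoints (a jerk-limited 'S' profile)."""
    t = min(max(t, 0.0), 1.0)
    return 10*t**3 - 15*t**4 + 6*t**5

def joint_trajectory(q0, q1, duration, steps):
    """Time-parameterize a move from joint angle q0 to q1 (degrees)."""
    return [(i * duration / steps, q0 + (q1 - q0) * s_curve(i / steps))
            for i in range(steps + 1)]

traj = joint_trajectory(0.0, 90.0, duration=2.0, steps=100)
```

In the polar strip scheme, a profile like this would drive the dominant base joint while the locked shoulder/elbow joints hold their setpoints.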

Test

Ran comparative tests between the old and new control algorithms. Measured by both the camera and the raw serial output from the end effector, jitter was significantly reduced and Z-axis disturbance was eliminated, ensuring safety and accuracy in drug application.

Learn

Conclusion: Moving IK and trajectory calculation to the PC side and adopting polar strip paths significantly reduced Z jitter. Transitioning from “multi-joint synchronous straight-line” to “few-joint dominant” control proved to be the most cost-effective stabilization method.

Next Step: Add online error shaping at the trajectory level (via end-force/torque or visual feedback loop), and open API + parameterized IK generator to facilitate secondary development and reproducibility.

Model

Cycle 1: From Basic Retrieval to an Intelligent Information Assistant

Design

In the initial phase of the project, the core problem we faced was how to efficiently and accurately find candidate peptides that meet our wet lab requirements from a vast amount of unstructured antimicrobial peptide (AMP) literature and databases. Traditional keyword search or SQL query methods struggle to handle the complex semantics of natural language, leading to information omission or low retrieval efficiency.

To solve this problem, we initially designed a knowledge-based question-answering system based on Retrieval-Augmented Generation (RAG). The core idea is to combine the powerful natural language understanding capabilities of a Large Language Model (LLM) with the precision of an external knowledge base. The system first understands the user's natural language query, then retrieves the most relevant text fragments (Chunks) from our constructed AMP literature database, and finally provides these fragments as context to the LLM to generate an answer that is both accurate and comprehensive. We hoped this approach would avoid the "hallucination" problem of general-purpose large models while addressing the limitations of traditional retrieval methods.

Img.2 Diagram of the RAG Q&A System's Composition and Workflow


Build

We used PyPDF to process the collected AMP literature, splitting it into text chunks. Next, we utilized the Sentence-Transformers model to convert the text chunks into vectors and stored them in a FAISS vector database for fast retrieval. For the backend, we chose DeepSeek-R1 as the core language model and built a web service using the Flask framework. The front end was implemented with HTML and JavaScript for the interactive interface. We successfully built a prototype of the RAG system capable of answering questions about a single uploaded document.
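The retrieve-then-generate flow can be sketched as follows. This stand-in replaces the Sentence-Transformers embedding and the FAISS index with a toy bag-of-words vector and a brute-force cosine search, purely to illustrate the pipeline's shape; the chunk texts are illustrative.

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a Sentence-Transformers embedding:
    a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank stored chunks by similarity to the query (the FAISS step)."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Pexiganan disrupts bacterial membranes",
    "Hydrogels swell in aqueous environments",
    "Nisin inhibits Staphylococcus aureus by pore formation",
]
context = retrieve("mechanism of nisin against Staphylococcus aureus", chunks)
# `context` would then be passed to the LLM alongside the question.
```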

Test

We designed two sets of tests to evaluate the system's performance.

1. Test Group One (Information Extraction Capability): We provided the system with queries targeting a single document, such as "Find me antimicrobial peptides that have inhibitory effects on Staphylococcus aureus." The results showed that the system could accurately identify keywords and, in conjunction with the context, summarize effective information, including the mechanism of action (Figure.3).

2. Test Group Two (Retrieval Accuracy): We conducted a quantitative retrieval quality assessment in a database containing multiple documents. We designed a specific query ("What is the mechanism of action of nisin against Staphylococcus aureus?") and manually annotated the relevant "gold standard" text chunks beforehand.

The test results revealed a serious problem: for this specific query, the system's hit rate, Mean Reciprocal Rank (MRR), and Normalized Discounted Cumulative Gain (nDCG) were all 0. The retriever returned irrelevant text chunks that contained the keyword "nisin." The reason was the high frequency of the keyword in the literature and a deviation in semantic matching, which led the retriever astray.
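For clarity, the retrieval metrics reported above (MRR and, for binary relevance, nDCG) can be computed as follows; the document IDs are hypothetical.

```python
import math

def mrr(ranked, relevant):
    """Reciprocal rank of the first relevant item (0 if none retrieved)."""
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1.0 / i
    return 0.0

def ndcg(ranked, relevant, k=None):
    """Binary-relevance nDCG: DCG of this ranking over the ideal DCG."""
    k = k or len(ranked)
    dcg = sum(1.0 / math.log2(i + 1)
              for i, doc in enumerate(ranked[:k], start=1) if doc in relevant)
    ideal = sum(1.0 / math.log2(i + 1)
                for i in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal else 0.0

# The failure mode we observed: no returned chunk was actually relevant.
bad_run = (mrr(["c7", "c9", "c4"], {"c1", "c2"}),
           ndcg(["c7", "c9", "c4"], {"c1", "c2"}))
```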

Learn

This failure gave us a profound understanding that the fixed, linear "Query -> Retrieve -> Generate" workflow of a standard RAG system has fundamental flaws. When faced with complex scientific questions, it lacks the ability to self-correct and iteratively optimize. If the retrieval fails, the entire process fails with it, unlike a human researcher who can dynamically adjust their retrieval strategy.

Based on this lesson, we decided to upgrade the system to an Agentic RAG paradigm. The core of the new design is a cyclical "Think-Act" framework. After retrieval, the Agent first evaluates the quality of the results. If the results are poor, it can autonomously reformulate the query, adjust its strategy, and retrieve again, forming a dynamic optimization loop. We named this more powerful and robust agent the AMP Research Agent. It successfully solved the retrieval problems encountered in the first phase and found the key candidate antimicrobial peptide, Pexiganan, for our wet lab experiments.

Img.3 Comparison of the mechanisms between a standard RAG system and an Agentic RAG system


Cycle 2: From "Black-Box" Generation to Interpretable Sequence Priority Reranking

Design

As our research deepened, we anticipated the need to design new antimicrobial peptides from scratch to meet more stringent therapeutic requirements. However, traditional machine learning or deep learning generation models have two major pain points: "low hit rate" (often below 10%) and "lack of interpretability." The latter makes the model a "black box," creating a huge cognitive gap between wet and dry lab researchers and hindering collaborative optimization.

Our design philosophy was to avoid "black-box" generation and shift to "white-box" ranking. Instead of having the model directly create new sequences, we developed an agent capable of intelligently reranking a set of candidate sequences. The core idea is to leverage the powerful reasoning ability of an LLM to learn and summarize explainable "design experiences" in natural language by comparing a large number of known AMP sequences and their properties. Then, using these accumulated experiences, it evaluates and ranks the candidate sequences, recommending the most promising ones for wet lab validation, thereby significantly reducing validation costs. This method combines the stability of machine learning with the "insightful" convergence advantage of LLM reasoning.

Img.4 Diagram showing the convergence effect of combining machine learning methods with experience-driven LLM reasoning methods


Build

We built the AMP Rerank Agent, which consists of three core modules:

Memory Module: Inspired by the memory-agent concept, we dynamically store and update the experiences learned by the Agent in the form of a knowledge graph.

Training Module: We processed the grampa.csv dataset, calculating and adding physicochemical properties. We innovatively used cosine similarity to cluster sequences (Table 4), ensuring that the LLM could learn the most effective experiences from comparing highly similar sequences. Through careful prompt engineering (Figure.7), we guided the LLM to autonomously learn and accumulate 1948 design experiences.

Reranking Module: We designed a rigorous "Sequence Analysis -> Experience Retrieval -> Final Ranking" workflow (Img.5). The Agent first independently analyzes the physicochemical properties of each sequence, then retrieves relevant experiences from its memory, and finally provides a final ranking with explanations based on all the information.

Img.5 Diagram of the Reranking Module's workflow
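The cosine-similarity comparison used in the Training Module can be illustrated with a heavily simplified descriptor (length, net charge, hydrophobic fraction); the real pipeline computes a much richer physicochemical feature set from grampa.csv, so this is only a sketch of the idea.

```python
import math

HYDROPHOBIC = set("AVILMFWYC")
CHARGE = {"K": 1, "R": 1, "D": -1, "E": -1}

def properties(seq):
    """Heavily simplified descriptor: length, net charge, hydrophobic
    fraction. The real pipeline uses many more features."""
    return [len(seq),
            sum(CHARGE.get(a, 0) for a in seq),
            sum(a in HYDROPHOBIC for a in seq) / len(seq)]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

# Pexiganan vs. its reversed sequence: identical composition, so this
# toy descriptor gives similarity 1.0 (highly similar sequences cluster).
pex = "GIGKFLKKAKKFGKAFVKILKK"
sim = cosine(properties(pex), properties(pex[::-1]))
```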


Test

We selected 9 antimicrobial peptide sequences as a "gold standard" test set and had the AMP Rerank Agent and a baseline machine learning model (based on ProtBERT) predict the ranking of these sequences according to their MIC values. We used the nDCG@9 metric to evaluate the quality of the ranking results.

The test results showed that the machine learning model achieved an nDCG@9 score of 0.916, while our AMP Rerank Agent scored 0.871 (Tab.1).

Tab.1. Table showing nDCG@9 results for the Machine Learning Method and the AMP Rerank Agent

Learn

Although slightly lower in quantitative score than the pure machine learning model, this test validated the core value of our design. The greatest success of the AMP Rerank Agent lies in its unparalleled interpretability. For example, it can provide clear decision rationales like "strong antimicrobial activity with good safety; stable amphipathic structure, easy to interact with membranes," completely breaking down the barrier of "black-box" models and bridging the cognitive gap between wet and dry labs.

We realized that the current performance is limited by the small amount of training data (only 134 groups). The future optimization path is very clear: through larger-scale training, we will allow the Agent's knowledge network to accumulate enough experience to eventually achieve a "knowledge phase transition," surpassing traditional models in both performance and interpretability.


Cycle 3: From Tedious Programming to Automated Wet Lab Data Insights

Design

During our wet lab experiments, we encountered common data analysis challenges. For example, rheometer data contained missing values due to instrument noise and physically impossible outliers (such as negative moduli). Handling this data required programming knowledge, which was not only time-consuming but also diverted researchers' valuable energy from scientific thinking.

To address this pain point, we designed an automated Data Analysis Agent. The goal of this Agent is to free scientists from tedious data processing and programming. Its core design is to be able to:

1. Understand high-level analysis requirements described by the user in natural language.

2. Autonomously break down the requirements into specific, executable data processing steps.

3. Automatically generate and execute Python code to perform data cleaning, statistical analysis, and model fitting.

4. Summarize the results into a structured analysis report.

Build

We used the LangGraph framework to build the workflow of the Data Analysis Agent. LangGraph's structure of nodes and edges is naturally suited for orchestrating a fixed workflow. The process was designed as: Planner -> Code Generator -> Code Executor -> Report Generator. When a user inputs a task, the Agent sequentially invokes these nodes, forming a complete automated pipeline from understanding the task to generating the final report. We integrated powerful Python data science libraries such as pandas, SciPy, and matplotlib, enabling it to handle complex analysis tasks.

Img.6 Diagram of the Data Analysis Agent's workflow
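A plain-Python analogue of the Planner -> Code Generator -> Code Executor -> Report Generator pipeline is sketched below. The LLM-backed nodes are replaced by stubs, and the task and data are made up; the actual build wires these stages together as LangGraph nodes and edges.

```python
def planner(task):
    """Stub for the LLM planner node: decompose the task into steps."""
    return ["load data", "compute statistic", "summarize"]

def code_generator(steps):
    """Stub for the LLM code-generation node (returns placeholder code)."""
    return "result = sum(x * y for x, y in zip(xs, ys))"

def code_executor(code, env):
    """Run generated code; the real agent sandboxes this step."""
    exec(code, env)
    return env["result"]

def report_generator(task, steps, result):
    return f"Task: {task}\nSteps: {steps}\nResult: {result}"

def run_agent(task, env):
    """Fixed linear pipeline, mirroring the graph's edge ordering."""
    steps = planner(task)
    code = code_generator(steps)
    result = code_executor(code, env)
    return report_generator(task, steps, result)

report = run_agent("correlate charge with MIC", {"xs": [1, 2], "ys": [3, 4]})
```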


Test

We conducted two tests to verify the Agent's effectiveness:

1. General Task Test: We provided the Agent with data on the charge and MIC values of a set of antimicrobial peptides and asked it to analyze the correlation between them. The Agent successfully generated code, calculated the Pearson correlation coefficient, and output a complete report containing statistical analysis and visualizations.

2. Real-World Task Test: We gave the Agent a real challenge we faced in the wet lab—processing anomalous rheology data for the gelation temperature of L-HBC. We described the problem (negative modulus values) and asked the Agent to recommend a processing strategy. The Agent accurately identified the problem and suggested using Spline Interpolation, providing a reasonable explanation for choosing this method, as it best preserves the inherent patterns in data from continuous physical processes.
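The spline-interpolation repair the Agent recommended can be sketched with a pure-Python natural cubic spline (a production pipeline would typically call scipy.interpolate instead). The rheology numbers below are illustrative, not our measured data.

```python
import bisect

def natural_cubic_spline(xs, ys):
    """Natural cubic spline through (xs, ys): interior second derivatives
    solved with the Thomas algorithm, zero curvature at both ends."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    a = [0.0] * (n + 1); b = [1.0] * (n + 1)
    c = [0.0] * (n + 1); d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i], b[i], c[i] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        d[i] = 6 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n + 1):                   # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)                         # second derivatives at knots
    for i in range(n - 1, 0, -1):               # back substitution
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def f(x):
        i = min(max(bisect.bisect_right(xs, x) - 1, 0), n - 1)
        t = x - xs[i]
        slope = (ys[i + 1] - ys[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6
        return (ys[i] + slope * t + M[i] * t**2 / 2
                + (M[i + 1] - M[i]) * t**3 / (6 * h[i]))
    return f

# Replace a physically impossible negative modulus using valid neighbors.
temps  = [25.0, 27.0, 29.0, 31.0, 33.0]
moduli = [10.0, 14.0, -3.0, 30.0, 44.0]      # -3.0 is an instrument artifact
valid = [(t, m) for t, m in zip(temps, moduli) if m > 0]
spline = natural_cubic_spline([t for t, _ in valid], [m for _, m in valid])
moduli[2] = spline(29.0)
```

The spline passes exactly through the retained data points and fills the gap with a smooth curve, which is why it preserves the patterns of continuous physical processes better than, say, dropping the point or nearest-neighbor substitution.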

Learn

The test results demonstrated that the Data Analysis Agent can successfully translate high-level analysis requirements into specific, correct execution steps and code, greatly improving the efficiency and standardization of data processing. We learned that this type of Agent, which combines the planning capabilities of an LLM with the precise execution of code, is a powerful tool for solving repetitive technical tasks in scientific research. It not only solved our immediate data processing problems but also laid a solid technical foundation for our more ambitious future goals—for example, providing real-time online data analysis and decision support for our intelligent robotic arm. This significantly accelerates the conversion of raw data into scientific insights, allowing researchers to focus more on innovation and scientific discovery itself.


Cycle 4: Investigating the Microscopic Driving Forces of L-HBC's Thermo-sensitive Properties

Design

The starting point for this engineering cycle was to find a solid microscopic mechanism for the macroscopic thermo-sensitive behavior of L-HBC hydrogels observed in our wet experiments. The core question was: how exactly does a change in temperature drive the material, at the molecular level, from a low-temperature liquid state to a body-temperature gel state? We proposed a core scientific hypothesis: this phase transition is the result of a temperature-dominated dynamic competition between polymer-water interactions (enthalpy-driven) and polymer-polymer interactions (entropy-driven).

To test this hypothesis, we chose molecular dynamics simulation as our core research tool, because it provides an objective view of dynamic behavior at the atomic scale. At the outset of the design, we defined the approximations and simplifications of our system: a system composed of 6 L-HBC chains, each with a degree of polymerization of 100, would be treated as a "minimal functional unit" capable of representing macroscopic behavior, and the nanosecond time scale would be sufficient to capture the key molecular events needed to validate our hypothesis, such as the rapid reorganization of the hydrogen bond network and the initial collapse trend of the chain conformation.

Build

The construction of the model was a precise, multi-step process involving several tools. We first used ChemDraw and Materials Studio to build a long L-HBC chain with the correct chemical structure and three-dimensional conformation. Then, using the Sobtop program, we automatically generated topology files for this complex, non-standard polymer system based on the GAFF general force field and MMFF94 atomic charges. This decision struck a key balance between internal force-field consistency and computational efficiency. To obtain a relaxed initial conformation, we thoroughly pre-equilibrated a single chain.

Img.7 Pymol visualization of a single L-HBC chain structure after pre-equilibration


Finally, using Packmol software, we efficiently assembled 6 pre-equilibrated single chains into an initially uniform multi-chain aggregate, solvated it, and performed a pre-equilibration with position restraints. Ultimately, we used GROMACS to run production simulations of the six-chain glycopolymer at 4°C and 37°C (8 ns each; only the 37°C trajectory is shown in the video):

Video.1 Dynamic Simulation Demonstration at 37 degrees Celsius for 8ns

Test

We placed the constructed systems at two different temperatures, 4°C (277.15 K) and 37°C (310.15 K), and ran production simulations for 8 nanoseconds each. We used the system's Root Mean Square Deviation (RMSD) and Radius of Gyration (Rg) as quantifiable success criteria to evaluate its conformational changes. The test results clearly showed that at 4°C, the system's RMSD reached a plateau after about 5 nanoseconds, and the Rg value remained stable. This demonstrated that the polymer chains formed a stable, extended conformation that fully interacted with water molecules. In stark contrast, at 37°C, the system's RMSD did not converge throughout the entire 8 nanoseconds, while the Rg showed a significant and continuous downward trend. This directly proved that the polymer chains were undergoing continuous dehydration and collapse.
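The radius of gyration used as our collapse criterion is the mass-weighted RMS distance of atoms from the center of mass (the quantity GROMACS reports via gmx gyrate); a minimal sketch with made-up coordinates:

```python
import math

def radius_of_gyration(coords, masses=None):
    """Mass-weighted radius of gyration of a set of 3D coordinates."""
    n = len(coords)
    masses = masses or [1.0] * n
    mtot = sum(masses)
    com = [sum(m * c[k] for m, c in zip(masses, coords)) / mtot
           for k in range(3)]
    s = sum(m * sum((c[k] - com[k])**2 for k in range(3))
            for m, c in zip(masses, coords))
    return math.sqrt(s / mtot)

# A chain contracting toward its center of mass shows a decreasing Rg,
# the signature of dehydration and collapse we observed at 37 °C.
extended  = [(float(i), 0.0, 0.0) for i in range(10)]
collapsed = [(0.5 * i, 0.0, 0.0) for i in range(10)]
```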

Learn

The test results of this engineering cycle strongly validated our initial scientific hypothesis. We learned that the thermo-sensitive phase transition of L-HBC is indeed a dehydration and collapse process driven primarily by an increase in entropy initiated by the temperature change. This profound understanding from the atomic scale not only provides a solid theoretical explanation for the phenomena observed in the wet lab, but more importantly, it offers practical guidance for our project. The model's results clearly indicate that to ensure optimal efficiency and uniformity during the encapsulation of drugs or cells, all loading operations should be performed at low temperatures (e.g., 4°C), as the molecular network is most open and extended at this temperature, providing the best fluidity and loading space.


Cycle 5: Deconstructing the Affinity Mechanism between L-HBC and Biomolecules

Design

After successfully elucidating the material science foundation of L-HBC, we entered the second engineering cycle, aiming to solve a more advanced problem: What is the molecular basis for L-HBC's "gentle immobilization" of bioactive molecules when used as a cell encapsulation matrix? We proposed a new scientific hypothesis: this high affinity originates from the formation of a stable hydrogen bond network between L-HBC and peptide chains through an interfacial desolvation process, which is favorable in terms of both energy and entropy. In our design, we chose the antimicrobial peptide Pexiganan as a model for cell surface proteins. A key engineering decision was to continue using the GAFF force field to describe the entire composite system to ensure a physically consistent description of the polymer-peptide chain heterogeneous interface, thereby avoiding computational artifacts that could arise from mixing different force fields.

Build

The model construction in this phase reflected the continuity and reusability of our work. We first used AlphaFold3 to predict the three-dimensional structure of Pexiganan and again used Sobtop to generate a GAFF topology file compatible with our system. Subsequently, we took the equilibrated 37°C L-HBC six-chain glycopolymer from the first engineering cycle as the initial conformation and strategically placed 10 Pexiganan peptide chains around it using Packmol software to construct the final composite system. This approach not only efficiently reused existing results but also ensured that the starting point of the study was a well-relaxed and physically more reasonable polymer conformation.

Img.8 Diagram of the system with the six-chain glycopolymer and 10 antimicrobial peptides


Test

We conducted a 4-nanosecond production simulation on the constructed composite system. The core of the test was to precisely quantify the dynamics of the hydrogen bond network between L-HBC and Pexiganan. The analysis revealed key information on two levels: first, a macroscopically extremely stable hydrogen bond network of about 610 bonds was rapidly formed and maintained between them, which directly proves the existence of strong and extensive interactions.

Img.9 Total number of hydrogen bonds formed between polysaccharide and peptide chains over time


Second, by comparing the changes in the number of "solute-solute" hydrogen bonds versus "solute-water" hydrogen bonds, we observed a clear negative correlation: when polysaccharide-peptide hydrogen bonds were formed, the number of hydrogen bonds each formed with water molecules decreased significantly and simultaneously.

Img.10 Competitive relationship between 'solute-solute' and 'solute-water' hydrogen bonds
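Hydrogen-bond counts like those in Img.9 and Img.10 are typically obtained from a geometric criterion (donor-acceptor distance within about 0.35 nm plus an angle cutoff, as in gmx hbond). A distance-only sketch with made-up coordinates, omitting the angle test for brevity:

```python
import math

def count_hbonds(donors, acceptors, cutoff=0.35):
    """Count donor-acceptor pairs within a distance cutoff (nm).
    The angle criterion of a full analysis is omitted here."""
    def dist(p, q):
        return math.sqrt(sum((a - b)**2 for a, b in zip(p, q)))
    return sum(1 for d in donors for a in acceptors if dist(d, a) <= cutoff)

donors    = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
acceptors = [(0.30, 0.0, 0.0), (5.0, 0.0, 0.0)]
n_bonds = count_hbonds(donors, acceptors)
```

Applying such a count separately to polysaccharide-peptide, polysaccharide-water, and peptide-water pairs over the trajectory yields the competing "solute-solute" versus "solute-water" curves described above.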


Learn

This test perfectly validated our desolvation hypothesis. We learned that the core thermodynamic driving force for the strong affinity between L-HBC and Pexiganan comes not only from the energy advantage of their direct interaction but also, to a large extent, from the favorable entropy increase resulting from the release of ordered water molecules from their surfaces. This profound molecular mechanism insight provides a solid theoretical foundation for our project's design of "encapsulating engineered bacteria in an L-HBC hydrogel and synergistically treating wounds with antimicrobial peptides." It clearly communicates the intrinsic mechanism of our solution at the atomic scale, proving that our chosen material system is not only macroscopically feasible but also possesses efficient and stable bio-affinity at the microscopic level.


Cycle 6: Spatiotemporal Pharmacodynamic Prediction of a "Living Therapy" Based on a Reaction-Diffusion Model

Design

This engineering cycle began with a clear clinical challenge: Can our designed engineered yeast-hydrogel system rapidly and effectively deliver the therapeutic factors IL-4 and VEGF in the complex pathological microenvironment of a chronic diabetic wound? To avoid costly and time-consuming animal experiments upfront, we decided to first construct a high-fidelity computational model, a "digital twin" of the wound microenvironment. Our core design choice was to use reaction-diffusion partial differential equations (PDEs), as they are the gold standard for describing the spatiotemporal dynamics of substances in biological tissues. To transform a complex biological problem into a mathematically solvable model, we systematically established a series of key assumptions. These included simplifying the three-dimensional wound into a one-dimensional depth profile to capture the main concentration gradients, adopting a continuum assumption to describe collective molecular behavior, and using the quasi-steady-state approximation (QSSA) to simplify molecular binding kinetics. This approach maintained the core physical meaning while ensuring the computational feasibility of the model.

Build

The model construction process involved precisely translating biological concepts into mathematical language. We first divided the wound into two regions: a hydrogel domain (Ωg) and a tissue domain (Ωt), and established mass balance equations for four key species (IL-4, VEGF, MMPs, and Albumin). In the hydrogel domain, we meticulously derived the effective PDE, considering yeast production, diffusion, MMP degradation, and the dual buffering effects of chitosan and albumin. In the tissue domain, we established a general transport equation describing diffusion and cellular consumption. To ensure the model reflected real-world physicochemical properties, we assigned values to dozens of parameters (including diffusion coefficients, reaction rate constants, partition coefficients, etc.) based on extensive literature review and biophysical principles (such as the Stokes-Einstein equation), forming a "baseline scenario." Finally, we implemented this complex system of coupled, non-linear PDEs numerically in Python using the Finite Difference Method. We employed a Forward-Time Central-Space (FTCS) scheme, converting the model into a set of discrete algebraic equations that could be solved by time-stepping, thus completing the construction from theory to executable code.
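The FTCS discretization can be sketched for a single species with diffusion, first-order degradation, and a localized source. The grid sizes, rates, and domain split below are illustrative assumptions, not the model's calibrated parameters.

```python
def ftcs_step(c, D, k_deg, source, dx, dt):
    """One FTCS update of dc/dt = D*d2c/dx2 - k_deg*c + source on a 1D
    grid with zero-flux boundaries. Stable only if D*dt/dx**2 <= 0.5."""
    assert D * dt / dx**2 <= 0.5, "FTCS stability condition violated"
    new = c[:]
    for i in range(len(c)):
        left = c[i - 1] if i > 0 else c[i]            # zero-flux at x = 0
        right = c[i + 1] if i < len(c) - 1 else c[i]  # zero-flux at x = L
        lap = (left - 2 * c[i] + right) / dx**2
        new[i] = c[i] + dt * (D * lap - k_deg * c[i] + source[i])
    return new

# Toy scenario: production confined to the 'hydrogel' end of the domain,
# diffusing into the 'tissue' region while being degraded.
nx, dx, dt = 50, 0.02, 1e-4
conc = [0.0] * nx
src = [1.0 if i < 10 else 0.0 for i in range(nx)]
for _ in range(2000):
    conc = ftcs_step(conc, D=1.0, k_deg=0.1, source=src, dx=dx, dt=dt)
# A concentration gradient now extends from the gel into the tissue.
```

Time-stepping the full four-species, two-domain system works the same way, with one such update per species per step and coupling terms added to the reaction part.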

Test

We ran a 36-hour numerical simulation of the constructed model under the defined "baseline scenario" to test the pharmacodynamics of our "living therapy" system. We established a clear and quantifiable success criterion: the "time to target concentration at the interface," defined as the moment when the concentrations of IL-4 and VEGF first reach their therapeutic thresholds (1.5 nM and 1.0 nM, respectively) at a depth of 20 micrometers below the wound surface (x=0.02 mm). The test results were clearly presented through spatiotemporal concentration heatmaps and time-series curves at the monitoring point. The results showed that VEGF first reached its target concentration in about 0.16 hours (9.6 minutes), while IL-4 reached its target in about 0.32 hours (19.2 minutes).

Img.11 Concentration change of IL-4 and VEGF at 0.02mm


Learn

This simulation test provided us with profound insights and learning outcomes. First, it strongly demonstrated the great potential of our designed yeast-hydrogel system for rapid onset of action, capable of establishing an effective dual-factor therapeutic concentration locally at the wound site within half an hour. This provides strong theoretical support for the feasibility of the therapy.

Second, we learned a seemingly counter-intuitive yet crucial mechanism: although the VEGF molecule is larger, its onset of action is faster than that of IL-4. This is because IL-4 is subject to a stronger buffering effect from the gel matrix, which significantly reduces its effective diffusion rate. This finding revealed that the gel's buffering effect is a key bottleneck parameter controlling the drug release kinetics.

Finally, this validated model provides clear guidance for subsequent wet lab experiments: it not only gives precise recommendations for sampling time points in animal studies (should be early, e.g., within 0.5-2 hours) but also points the way for formulation optimization (e.g., by modifying the chemical properties of chitosan to reduce its buffering effect on IL-4). Through this engineering cycle, we successfully transformed a complex biomedical problem into a predictive, quantitative tool, achieving efficient, simulation-guided experimental research and development.
