We focused on the core application scenario of "chronic wound care for diabetes" and developed an end-to-end integrated solution that encompasses material innovation, robotic automation, and multi-dimensional monitoring. This approach directly addresses key challenges in synthetic biology: non-standardized experimental operations, the adaptability of functional materials to clinical applications, and the absence of a closed-loop path for technology transfer. It also follows the principles of human-centered design, safety and reliability, low cost, and fully open access. Our work not only addresses the bottlenecks of standardization and clinical translation in synthetic biology experiments, but also lowers the barrier to wound care for people in remote areas and diabetic patients living alone. At the same time, it reduces the workload of medical staff and embodies two-way humanistic care between patients and providers. We will continue to pursue technological equity, emphasizing cost-effectiveness, usability, and open-source development. Our project also emphasizes an engineered safety loop: simulation-based verification, safety self-locking (interlock) design, and aseptic, contamination-free operation to prevent secondary harm to patients, embodying iGEM's safety philosophy.
Our team conducted field visits and surveys and learned that elderly people with diabetes living in remote areas or alone lack access to wound-care medical resources. Owing to the hyperglycemic wound microenvironment, advanced glycation end-products (AGEs) accumulate, triggering inflammation and impeding healing, which can progress to tissue necrosis, gangrene, and even amputation. The resource shortage is reflected mainly in two aspects: on the one hand, traditional therapies require considerable time and manpower to assist with topical treatment; in severe cases, patients must visit qualified medical institutions (hospitals or clinics) daily to undergo debridement, medication, and dressing changes at substantial cost. On the other hand, elderly individuals living alone or patients with disabilities are unable to change dressings independently and often endure psychological stress when relatives or caregivers observe visually unappealing wounds, leading to anxiety and low self-esteem—factors detrimental to mental health and wound healing[1].
We propose an integrated, end-to-end approach grounded in human-centered care, technological equity, low cost, and open-source principles: from optimizing wound dressings to a low-cost, easy-to-use robotic-arm–based system for wound recognition and topical medication application; a sensor suite for care monitoring and experimental data acquisition; and a materials-to-mechanics pipeline that enables full-process wound identification, classification, treatment, monitoring, and nursing.
This lowers the barrier to accessing wound care for people in remote regions and older adults with diabetes who live alone, while reducing the workload of healthcare providers and embodying two-way humanistic care for both patients and clinicians. We remain committed to affordability, usability, and open-source development to realize technological equity.
Our primary requirement on the hardware–materials side is for the "scaffold-function" hydrogel to both conform well to irregular wounds and efficiently transmit external induction signals to the yeast to trigger gene expression. Guided by this need, we selected, refined, and tested candidate hydrogel materials.
Taking into account the delivery modes of induction signals for cold-, heat-, sugar-, and light-induced promoters, the need for excellent wound conformity, induction methods that are harmless to both the wound and the hydrogel, the expandability of the robotic arm, and the needs of the wet-lab team, the hardware group proposed using a well-studied thermosensitive hydrogel—hydroxybutyl chitosan (HBC)—as the base dressing[3]. This material is liquid at 4 °C and gradually gels at room temperature, satisfying the requirement of close conformity to wound geometry. Considering wound exudate and to ensure robust yeast growth, the wet-lab team chose heat and sugar induction. Testing showed that under infrared irradiation HBC warms relatively slowly, requiring about 30 min to reach 39 °C, the temperature needed by typical heat-shock promoters. To avoid process and compatibility risks from switching systems and to save time, we considered two optimization strategies:
Our team carefully evaluated the cost and feasibility of the two strategies and sought guidance from Professor Ya Liu, an expert in the field. Professor Liu noted that, in this context, modifying HBC could potentially resolve the problem. She recommended not replacing or redesigning the hydrogel but instead directly functionalizing HBC to endow it with higher photothermal conversion efficiency—thereby saving project time—and, because the yeast and the hydrogel are not covalently crosslinked, improving the hydrogel's adhesiveness. Building on Professor Liu's suggestions, the hardware team pursued improvements to HBC's photothermal conversion and adhesion. Through in-depth discussions and literature review with Professor Liu, we found that enhancing both stickiness and heating rate via HBC modification was feasible and identified a concrete direction: leveraging dopamine polymerization to boost heating efficiency.
We ultimately selected strategy 2 for the hydrogel scaffold in the hardware module: design-oriented modification and optimization of the existing HBC material[4].
Because filling the wound and mixing yeast with the hydrogel both take time, we did not want the material's photothermal conversion to spike immediately, as it does with direct DOPA modification, before medication is applied. Following expert advice, we therefore adopted L-DOPA modification of HBC to simultaneously enhance photothermal conversion and adhesion: in the wound's reactive-oxygen-species (ROS)-rich microenvironment, L-DOPA can be oxidized to DOPA and further polymerize, generating catechol groups that impart wet adhesion and photothermal responsiveness[5]. From a process standpoint, we referenced DOPA-type routes and used EDC/NHS amidation to graft L-DOPA onto amine sites of HBC[6]. Compared with direct DOPA, the precursor nature of L-DOPA yields a gentler heating ramp, allowing the robotic arm to complete "filling–mixing–positioning" before triggering. This approach was endorsed by Professor Liu and subsequently validated by our experimental results.
Img.1 HBC with a polymerization degree of 10
Img.2 The modified and optimized L-HBC, with L-DOPA randomly grafted onto 30% of the HBC sites
Img.3 The gelation temperature of L-HBC is 20.3 °C; the total heating time from 4 °C to 50 °C is 557.314 s.
Due to the limited time allotted for the competition, we were unable to fully complete our experiments. We consulted our PI regarding biosafety and cytotoxicity characterization. Professor Ya Liu indicated that the literature we referenced already contains biosafety and cytotoxicity data[7]. Given that our preparation method is essentially consistent and that, after ROS-mediated conversion, we are very likely to obtain a similar modification outcome, further compositing chitosan onto HBC would not introduce additional effects on biosafety or cytotoxicity; those data can therefore be cited as supporting evidence.
Accordingly, on the hardware–materials side we have completed the optimization of the "scaffold-function gel": using thermosensitive hydroxybutyl chitosan (HBC) as the matrix, maintaining a process window in which the material remains liquid at low temperature during application and gradually gels near body temperature after conformal contact; implementing L-DOPA modification, per expert advice, to enhance photothermal conversion efficiency and wet adhesion; and, in our induction strategy, prioritizing tissue-friendly heat/sugar cues to achieve stable transmission of "external signal → gel medium → engineered yeast expression," while also ensuring conformal adhesion and payload stability in exudative wound environments.
To accommodate medication-application operations tailored to the properties of the hydrogel material and to address the background issues described above, our team designed an automated system for wound recognition and topical medication application.
In the early concept phase of device design, based on the characteristics of the hydrogel material, we devised two medication-application schemes:
Img.4 Schematic diagram of a controllable valve array that uses a large number of valves (e.g., a 100×100 matrix) to directly match the wound shape
After discussions with experts, we determined that Scheme 2—the valve-array dot matrix—could, in principle, exploit the gel's phase behavior (liquid at low temperature, solidifying at body temperature): resistive heaters could locally switch solidification on/off to "draw" the wound shape. However, this approach requires continuous low-temperature cooling (the gel is liquid at 4 °C), and small-aperture valves tend to retain significant amounts of gel. For sterility reasons, such an array would be difficult to reuse, which would drastically increase per-use cost.
We therefore chose Scheme 1: move the gel extrusion nozzle with a robotic arm along the wound pattern so the gel covers the wound.
Extruding along the contour path enables close shape conformity.
Facilitates aseptic workflow design and maintenance.
Achieves a better balance among cost, adaptability, and scalability.
By running the gel pump in reverse to draw disinfectant through the tubing and clean the pipeline, we do our best to maintain sterility.
Accordingly, we finalized the design direction as an integrated system centered on a six-degree-of-freedom robotic arm.
To reduce practical risk and avoid consumable waste from suboptimal procedures, and after consulting industry practitioner Mr. Zaikun Xu, we chose to first validate the feasibility and safety of the scheme—carrying out an engineering workflow that progresses from simulation to real-world operation. Before deploying a physical robotic arm, we set up a ROS environment on Linux and established stable communication with the Gazebo simulator using the gazebo_ros_control plugin and gazebo_ros_pkgs[8], allowing us to iterate in a safe, controllable, and reproducible space. The code for this simulation project is available at lensentropy/Robot-Operating-System-ROS-simulation-of-the-robotic-arm.
Img.5 The drawn Blender model of a human body wound
To complete the wound-recognition and robotic medication-application workflow, we first perform mask segmentation on foot/hand wound images, generate a point cloud, and derive a polygonal edge–fitted path. After hand–eye calibration, we determine the base coordinate frame and output 3D coordinates of the relative path. Combining the arm's "Approach–Apply–Depart" phases, we export a path.txt file containing the 3D waypoint sequence.
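The sketch below illustrates this path-generation step as a minimal Python/OpenCV example, not our full ROS pipeline: it assumes a single planar wound at a known depth and an already-computed hand–eye transform. The intrinsics `K`, depth `Z`, transform `T_base_cam`, and the approach height are placeholder values.

```python
import cv2
import numpy as np

def export_path(mask, T_base_cam, K, Z, approach_h=0.05, out="path.txt"):
    """Fit a polygon to the wound mask and write an Approach-Apply-Depart
    waypoint sequence (base-frame XYZ, metres) to a text file.
    mask: uint8 binary wound mask; K: 3x3 camera intrinsics;
    Z: camera-to-wound depth; T_base_cam: 4x4 hand-eye transform."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)                     # largest region
    poly = cv2.approxPolyDP(contour, 0.01 * cv2.arcLength(contour, True), True)

    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts_cam = []
    for (u, v) in poly.reshape(-1, 2):                               # back-project pixels
        pts_cam.append([(u - cx) * Z / fx, (v - cy) * Z / fy, Z, 1.0])
    pts_base = (T_base_cam @ np.array(pts_cam).T).T[:, :3]           # camera -> base frame

    first, last = pts_base[0], pts_base[-1]
    # Approach above the first point, Apply along the contour, Depart above the last.
    waypoints = [first + [0, 0, approach_h]] + list(pts_base) + [last + [0, 0, approach_h]]
    with open(out, "w") as f:
        for x, y, z in waypoints:
            f.write(f"{x:.4f} {y:.4f} {z:.4f}\n")

# Hypothetical usage: synthetic circular mask, identity hand-eye transform.
mask = np.zeros((480, 640), np.uint8); cv2.circle(mask, (320, 240), 60, 255, -1)
K = np.array([[600., 0, 320.], [0, 600., 240.], [0, 0, 1.]])
export_path(mask, np.eye(4), K, Z=0.30)
```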
Video 1. Robotic Arm Medication Delivery Simulation
In sum, this simulation closed loop transforms high-risk, high-uncertainty manual operations into programmatic procedures that can be rapidly iterated in a digital-twin environment. When migrating to the physical system, we only need to replace the Gazebo control interface with the real robot controller, keep the same TF calibration and path-generation pipeline, and substitute link_attacher with physical tooling and safety interlocks.
In terms of kinematics and workspace, the arm has a horizontal reachable diameter of 1,120 mm (the base provides 360° all-around coverage) and a maximum vertical working stroke of 798 mm. Under rated operating conditions, the effective payload is 0.2 kg at 0.5 m from the base, with a repeatability of approximately 5 mm. The mechanical ranges of the joints are: BASE 360°, SHOULDER 180°, ELBOW 225°, and HAND 135°/270°. Actuation is via TTL serial-bus servos in direct drive; each joint integrates a 12-bit, 360° magnetic encoder, yielding an angle-feedback resolution up to 0.088°. With no load and torque limiting disabled, the servo speed is 40 rpm. The default end-effector is a direct-drive gripper, and the gripping force can be precisely set in software to handle the grasping and placement of fragile materials[9]. For electrical and control hardware, a "General Driver for Robots" expansion board is used, featuring an ESP32-WROOM-32 as the main controller. The board provides Wi-Fi, Bluetooth, and ESP-NOW wireless communication, as well as wired USB and UART links. Onboard resources include a 9-axis IMU (for attitude and heading), an INA219 power monitor, a TF (microSD) card slot, an OLED display and LED indicators, serial-bus servo ports, and DC-motor interfaces (with and without encoder channels). Expansion headers are also available for radar modules, I²C peripherals, and more, laying the groundwork for future design extensions and system-level integration.
Video 2. Exploded View of Our Robotic Arm
Wound contour recognition is performed using a low-distortion visible-light camera mounted at the end effector of the robotic arm. In our initial implementation, we employed a computationally efficient RGB color-space thresholding and mask-based segmentation algorithm to identify and delineate wound regions. While this approach performed adequately on light skin tones (exhibiting white or yellow backgrounds), it demonstrated insufficient discrimination of red wound tissue against darker skin tones. This limitation likely arises from the fact that red hues occupy overlapping bands in RGB color encoding, reducing the discriminative power of simple thresholding when applied to darker or melanin-rich skin backgrounds.
To uphold the principles of diversity and equity central to the iGEM mission—ensuring that patients of all skin tones receive equal quality of care—we transitioned to machine-learning-based segmentation methods. We systematically evaluated multiple state-of-the-art architectures, including Fully Convolutional Networks (FCN), You Only Look Once version 8 segmentation (YOLOv8-seg), and the Segment Anything Model (SAM), quantifying their performance using mean Intersection over Union (mIoU) as the primary evaluation metric.
Img.6 Overall mIoU comparison of the wound image recognition models
Img.7 Per-skin-tone mIoU comparison of the wound image recognition models
Img.8 & Img.9 The wound recognition effect of the YOLO model
Through experimentation and study, we found that for segmenting wounds on feet/hands across different skin tones, FCNs, with their encoder–decoder architecture and pixel-level semantic modeling, are inherently robust for binary/multiclass segmentation. When paired with hard-example–oriented loss functions such as Dice+CE (cross-entropy) or Tversky/Focal, they markedly reduce boundary misses and under-segmentation on darker-skin samples while maintaining robustness to illumination changes and specular highlights. Moreover, after re-sampling or re-weighting the training set to balance the skin-tone distribution, FCNs typically achieve the best between-group consistency[10].
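As an illustration of the loss pairing mentioned above, here is a minimal PyTorch sketch of a combined Dice + cross-entropy loss for binary wound masks; the weighting `alpha` and the smoothing constant are assumptions, not our trained configuration.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, alpha=0.5, eps=1e-6):
    """Combined soft-Dice + cross-entropy loss for binary segmentation.
    logits: (N, 1, H, W) raw scores; target: (N, 1, H, W) in {0, 1}."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (union + eps)            # soft Dice per sample
    ce = F.binary_cross_entropy_with_logits(logits, target.float(), reduction="none")
    ce = ce.mean(dim=(1, 2, 3))                                  # pixel-wise CE per sample
    return (alpha * dice + (1.0 - alpha) * ce).mean()

# Hypothetical shapes: batch of 2, 256x256 masks.
logits = torch.randn(2, 1, 256, 256)
target = (torch.rand(2, 1, 256, 256) > 0.5).float()
loss = dice_ce_loss(logits, target)
```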
The advantages of YOLO-based segmentation lie in the high throughput and low latency of its single-stage architecture; the prototype mask + coefficient representation shortens the inference pipeline and benefits from a mature deployment ecosystem, making it well-suited for on-device, real-time, multi-instance scenarios. Its limitations arise from the mask representation and resolution: the spatial detail of prototype masks is constrained by input size and the number of prototypes, and strong specular boundaries are more prone to "stair-stepping" artifacts or small gaps. Moreover, segmentation quality depends heavily on detection quality—once the detector's recall drops for dark-skin or highly reflective wound samples, segmentation degrades accordingly. As a result, its accuracy profile is generally solid overall, with somewhat coarse boundaries[11].
Img.10 The wound recognition effect of the SAM model
As a promptable, general-purpose segmentation foundation model, SAM excels in zero-shot generalization and fine boundary delineation, showing clear advantages in interactive annotation and with complex long-tail objects. However, in fully automated pipelines, the stability of prompt generation and its relatively high inference overhead become core constraints, and it achieves its best performance under strong interactive guidance. Without high-quality automatic prompts or lightweight distillation, its overall precision and accuracy tend to be slightly inferior to FCN- and YOLO-based segmenters that are optimized for a single semantic target[12].
Img.11 The wound recognition effect of the FCN model
In terms of recognition performance, FCN achieved the best results. Going forward, we will optimize the recognition module by selecting a machine-learning semantic segmentation algorithm that balances computational cost and accuracy. One possible approach is to use FCN as the primary backbone, employ YOLOv8-seg for ROI proposals and as a lightweight fallback, and leverage SAM for semi-automatic annotation on a small number of challenging samples.
To obtain the wound's actual spatial position, we initially planned to use a ToF camera or LiDAR to acquire depth for hand–eye calibration. However, both options are relatively expensive—the former typically costs USD 30–150, and the latter exceeds USD 150—which does not align with our goal of reducing costs and promoting equitable access to medical resources in underdeveloped regions. To solve the spatial localization problem at low cost, we drew inspiration from human vision: by capturing images before and after horizontal and vertical movements of the robotic arm, we compute image disparity of the wound to estimate the height difference to the wound and determine the physical distance represented by a single camera pixel.
Objective: Estimate depth and scale without using ToF/LiDAR;
Method: Apply small horizontal/vertical micro-movements of the robotic arm and use parallax to compute the height difference from the camera to the wound and the physical scale per pixel.
The following video demonstrates the hand-eye calibration process. The robotic arm moves the camera to multiple positions to capture images from different perspectives. This process is crucial for accurately calculating the transformation between the camera's coordinate system and the robot's, minimizing errors through repeated measurements.
Video 3. Demonstration of the hand-eye calibration process.
Mathematical Foundation: Micro-Baseline Parallax Method.
We derive the depth estimation framework based on the standard pinhole camera model with two-frame micro-translation. Consider a calibrated camera with intrinsic matrix[13]:
$$\mathbf{K} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$
For the first frame, we assume the camera is at the origin with identity rotation and zero translation: $\mathbf{R}_1 = \mathbf{I}, \mathbf{t}_1 = \mathbf{0}$. A three-dimensional point $\mathbf{X} = (X, Y, Z)^T$ in the scene projects onto the image plane according to the perspective projection equations:
$$u = \frac{f_x \cdot X}{Z} + c_x, \quad v =\frac{f_y \cdot Y}{Z} + c_y$$
In the second frame, we apply a known micro-translation $\mathbf{B} = (B_x, B_y, 0)^T$ to the camera position while maintaining the same orientation. The projection of the same three-dimensional point in this translated frame, expressed in the first-frame coordinate system, becomes:
$$u' = \frac{f_x \cdot (X - B_x)}{Z} + c_x, \quad v' = \frac{f_y \cdot (Y - B_y)}{Z} + c_y$$
Parallax-Depth Relationship
From the projection equations, we establish the fundamental relationship between observed image disparity and scene depth. For a purely horizontal micro-translation $\mathbf{B} = (B_x, 0, 0)$, the horizontal disparity $d_x = u - u'$ yields the depth relation $Z = f_x B_x / d_x$. Similarly, for vertical micro-translation $\mathbf{B} = (0, B_y, 0)$, we obtain $Z = f_y B_y / d_y$ from the vertical disparity $d_y = v - v'$.
When disparities in both directions are available, we fuse the two depth estimates using inverse-variance weighting to improve robustness. Assuming the disparity measurement noise is approximately homoscedastic with variance $\sigma_d^2$, standard error propagation analysis yields the variance for each depth estimate:
$$\text{Var}(Z_x) = \left(\frac{f_x B_x}{d_x^2}\right)^2 \sigma_d^2, \quad \text{Var}(Z_y) = \left(\frac{f_y B_y}{d_y^2}\right)^2 \sigma_d^2$$
The optimal fused depth estimate is then obtained by inverse-variance weighting:
$$\tilde{Z} = \frac{w_x Z_x + w_y Z_y}{w_x + w_y}, \quad \text{where} \quad w_x = \frac{d_x^4}{(f_x B_x)^2 \sigma_d^2}, \quad w_y = \frac{d_y^4}{(f_y B_y)^2 \sigma_d^2}$$
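For concreteness, a small Python sketch of this two-direction fusion; the intrinsics, baselines, and disparities below are placeholder values, not measured ones.

```python
import numpy as np

def fused_depth(fx, fy, Bx, By, dx, dy, sigma_d=0.2):
    """Depth from horizontal and vertical micro-baseline disparities,
    fused by inverse-variance weighting (Z = f*B/d for each direction)."""
    Zx, Zy = fx * Bx / dx, fy * By / dy
    wx = dx**4 / ((fx * Bx) ** 2 * sigma_d**2)   # weight = 1 / Var(Z_x)
    wy = dy**4 / ((fy * By) ** 2 * sigma_d**2)   # weight = 1 / Var(Z_y)
    return (wx * Zx + wy * Zy) / (wx + wy)

# Placeholder values: fx = fy = 600 px, 5 mm baselines, disparities in pixels.
Z = fused_depth(fx=600.0, fy=600.0, Bx=0.005, By=0.005, dx=10.0, dy=9.5)
print(f"fused depth ≈ {Z:.3f} m")   # ≈ 0.3 m for these numbers
```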
Application to Wound Topography: Relative Height Difference
To quantify wound depth characteristics, we compute the relative height difference between the wound surface and surrounding healthy tissue. Let $Z_{\text{skin}}$ denote the median depth of a nearby healthy-skin region and $Z_{\text{wound}}(u,v)$ represent the depth at a wound pixel located at image coordinates $(u,v)$. The relative height difference $h(u,v)$ is then defined as:
$$h(u,v) = Z_{\text{wound}}(u,v) - Z_{\text{skin}} = f_x B_x \left(\frac{1}{d_x^w} - \frac{1}{d_x^s}\right)$$
where $d_x^w$ and $d_x^s$ are the horizontal disparities measured at the wound pixel and healthy skin reference region, respectively. This formulation enables the construction of a relative topographic map that captures wound depression or elevation patterns relative to the surrounding healthy tissue baseline, without requiring absolute depth measurements.
Physical Pixel Scale Calibration
To convert pixel-based measurements into physical units, we derive the relationship between image-space pixel displacements and real-world distances. Starting from the projection equation $u = f_x X / Z + c_x$ and differentiating with respect to $X$ while treating depth $Z$ as approximately constant over a small surface patch, we obtain:
$$du \approx \frac{f_x}{Z} dX \quad \Rightarrow \quad s_x = \frac{dX}{du} \approx \frac{Z}{f_x}$$
By symmetry, the vertical pixel scale is $s_y = Z / f_y$. Thus, the physical area represented by a single pixel is $s_x s_y = Z^2 / (f_x f_y)$, which varies inversely with the square of depth.
Surface tilt correction: When the wound surface exhibits non-negligible tilt relative to the image plane, we refine the scale estimate via ray-plane intersection. Given the surface plane $\pi: \mathbf{n}^T \mathbf{X} + d = 0$ and the normalized viewing ray for pixel $(u,v)$:
$$\mathbf{r} = \mathbf{K}^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$
the intersection point $\mathbf{X}(u,v) = \lambda \mathbf{r}$ is determined by $\lambda = -d / (\mathbf{n}^T \mathbf{r})$. The physical pixel scales are then computed using numerical differences of neighboring intersection points:
$$s_x(u,v) \approx |\mathbf{X}(u+1,v) - \mathbf{X}(u,v)| \\ s_y(u,v) \approx |\mathbf{X}(u,v+1) - \mathbf{X}(u,v)|$$
Error Propagation and Micro-Baseline Selection
The choice of micro-baseline distance critically affects depth measurement precision. From the fundamental relation $Z = fB/d$, we analyze how disparity measurement uncertainty propagates to depth estimates. Differentiating with respect to disparity $d$ yields:
$$\frac{\partial Z}{\partial d} = -\frac{fB}{d^2} \quad \Rightarrow \quad \sigma_Z \approx \left|\frac{\partial Z}{\partial d}\right| \sigma_d = \frac{Z^2}{fB} \sigma_d$$
This result reveals that depth uncertainty grows quadratically with distance and is inversely proportional to the baseline. To achieve a target depth precision $\delta_Z$ given an achievable disparity precision $\sigma_d$ at working depth $Z$, the minimum required micro-baseline is:
$$B_{\text{min}} = \frac{Z^2}{f} \cdot \frac{\sigma_d}{\delta_Z}$$
This design equation guides the selection of the robotic arm's translation step size to balance measurement precision against mechanical constraints and processing time.
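As a worked example of this design equation with assumed numbers (not our measured parameters):

```python
# Minimum micro-baseline B_min = (Z^2 / f) * (sigma_d / delta_Z), placeholder values.
Z, f = 0.30, 600.0               # working depth [m], focal length [px]
sigma_d, delta_Z = 0.2, 0.002    # disparity precision [px], target depth precision [m]
B_min = (Z**2 / f) * (sigma_d / delta_Z)
print(f"B_min ≈ {B_min * 1000:.1f} mm")   # ≈ 15 mm translation step
```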
Img.12 Derivation example diagram
Inflamed wounds typically exhibit higher temperatures than healthy skin. We use a thermal array sensor to acquire per-pixel temperatures and analyze the degree of inflammation. The pixel-to-real-world scale is determined based on the sensor's field of view and its distance to the skin, allowing for the conversion of "hotspot pixel count" to a physical area. For a given wound ROI and a nearby reference ring of healthy tissue, we calculate the temperature difference and the area exceeding a certain threshold. Furthermore, we estimate the temperature gradient along the boundary of the wound using central differences to quantify the thermal transition. This is achieved using a 3D-printed quick-release bracket module mountable on the robotic arm, which holds a thermal array sensor (e.g., 32×24) and an ESP32 to generate a temperature field, assisting in determining the presence of inflammation and its spatial extent.
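A minimal NumPy sketch of this hotspot analysis, assuming a 32×24 temperature frame from the thermal array and placeholder masks for the wound ROI and the healthy-tissue reference ring; the threshold and per-pixel area are illustrative values, not calibrated ones.

```python
import numpy as np

def hotspot_metrics(temps, roi_mask, ring_mask, pixel_area_mm2, dT_thresh=1.5):
    """Compare a wound ROI against a healthy-skin reference ring.
    temps: (H, W) temperature frame [°C]; masks: boolean arrays of the same shape;
    pixel_area_mm2: physical area of one thermal pixel at the current distance."""
    t_ref = np.median(temps[ring_mask])                 # healthy-tissue baseline
    delta = temps - t_ref
    hot = roi_mask & (delta > dT_thresh)                # pixels exceeding the threshold
    hot_area = hot.sum() * pixel_area_mm2
    # Temperature gradient magnitude (central differences) on a crude ROI border.
    gy, gx = np.gradient(temps)
    border = roi_mask & ~np.roll(roi_mask, 1, axis=0)
    grad_edge = np.hypot(gx, gy)[border].mean() if border.any() else 0.0
    return float(delta[roi_mask].max()), float(hot_area), float(grad_edge)

# Placeholder 32x24 frame with a warm patch in the centre.
temps = np.full((24, 32), 33.0); temps[10:14, 14:18] += 2.5
roi = np.zeros_like(temps, bool); roi[9:15, 13:19] = True
ring = np.zeros_like(temps, bool); ring[6:18, 10:22] = True; ring[roi] = False
print(hotspot_metrics(temps, roi, ring, pixel_area_mm2=4.0))
```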
Img.13 Schematic diagram of thermal array heat map
Img.14 Our standard load-bearing bracket with the internal thermal array sensor, used for inflammation identification, analysis, and evaluation
Through our experiments and trials, we found that recognition using thermal array sensing is greatly affected by ambient temperature, leading to substantial errors in practical operation and a relatively high misclassification rate for inflammation. We therefore considered improving this sensing module by using alternative indicators, or a combination of indicators, for a comprehensive judgment. Inspired by discussions with experts Prof. Guanglei Liu and Prof. Ya Liu, we learned that reactive oxygen species (ROS) are closely related to inflammatory responses: the ROS level at the wound site is strongly correlated with the degree of inflammation, and high ROS levels often indicate that the wound is likely to exhibit more severe inflammatory reactions. Compared with temperature-based thermal sensing, this quantitative relationship is less susceptible to environmental influences. Consequently, we considered integrating ROS-sensor data into the module via a serial port. However, due to competition time constraints and the limitations of available materials and technology, we have not yet produced a ROS sensor with satisfactory performance; we have reserved the serial and code interfaces and will continue to explore and develop a suitable ROS sensor that can better support wound-inflammation assessment and ROS-related measurements in biological experiments. Finally, we designed an ESP8266[14] + temperature–humidity sensor + ROS sensor (reserved) integrated system for multi-dimensional data acquisition and IoT data transmission.
To accommodate different yeast strains, we implemented an automated feeding and mixing system. The robotic arm positions the nozzle into the selected yeast storage-solution tank and activates the gel pump in reverse to aspirate the desired yeast culture into the gel reservoir. The robotic arm's precise control enables seamless switching among multiple yeast tanks through pump reversal for aspiration, followed by thorough homogenization using a magnetic stirrer to ensure uniform distribution of yeast cells throughout the gel matrix. To facilitate system integration and ensure compatibility across components, we designed and 3D-printed standardized storage tanks with uniform capacity and port diameters for tubing interfaces, incorporating side-mounted sensor interfaces for future expansion.
Img.15 Standard storage tank 3D model diagram
Img.16 Standard storage tank in real form
Img.17 The storage tank positioned on the magnetic stirrer for homogenization.
Video 4. Video of the operation of the peristaltic pump
Video 5. Example of Using Magnetic Stirrer
Maintaining the hydrogel at 4 °C is critical to preserve its liquid state prior to application. We implemented a closed-loop thermal management system comprising a thermoelectric (Peltier) cooling module with heat sinking, a peristaltic pump for continuous circulation, and a DS18B20[15] temperature sensor for real-time monitoring. The peristaltic pump actively circulates the gel between the chiller and the storage tank, preventing premature solidification while ensuring uniform temperature distribution. All components are interfaced with an ESP32 microcontroller, which implements a negative-feedback control algorithm to maintain stable temperature regulation and enables real-time data logging for quality assurance.
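For illustration, the control logic can be sketched as a simple hysteresis (on/off) negative-feedback loop. The Python below is a host-side sketch of that logic, not our actual ESP32 firmware; the setpoint, hysteresis band, and the I/O functions (`read_temperature`, `set_peltier`, `set_pump`) are placeholders for the real drivers.

```python
import time

SETPOINT_C = 4.0        # target gel temperature
HYSTERESIS_C = 0.5      # switching band to avoid relay/PWM chatter

def read_temperature():        # placeholder for the DS18B20 driver
    return 4.3

def set_peltier(on):           # placeholder for the TEC relay / MOSFET channel
    print("TEC", "ON" if on else "OFF")

def set_pump(on):              # placeholder for the circulation pump channel
    print("pump", "ON" if on else "OFF")

def control_step(cooling_on):
    """One iteration of the hysteresis loop; returns the new TEC state."""
    t = read_temperature()
    if t > SETPOINT_C + HYSTERESIS_C:
        cooling_on = True              # too warm: enable cooling
    elif t < SETPOINT_C - HYSTERESIS_C:
        cooling_on = False             # cold enough: let it coast
    set_peltier(cooling_on)
    set_pump(True)                     # keep circulating for uniform temperature
    return cooling_on

if __name__ == "__main__":
    state = False
    for _ in range(3):                 # the firmware would loop indefinitely
        state = control_step(state)
        time.sleep(1.0)
```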
Img.18 DS18B20 temperature sensor 3D circuit diagram
Img.19 DS18B20 Temperature Sensor Circuit Diagram
Img.20 Thermoelectric (Peltier) cooling module with heat sink
Img.21 Photo of peristaltic pump
The peristaltic pump module is designed with bidirectional operation capability, enabling multiple functions beyond circulation. In forward mode, it synchronizes with the robotic arm to perform automated gel dispensing for wound treatment. In reverse mode, it facilitates pipetting, cleaning, and disinfection workflows, thereby supporting wet-lab automation and maintaining system sterility. To accommodate the voltage and current reversal required for bidirectional operation while providing sufficient suction power, we designed and implemented an H-bridge dual-motor driver based on the DRV8701 chip. This custom driver circuit provides robust forward and reverse control with precise speed regulation, enabling the peristaltic pump to fulfill its multifunctional role effectively.
Img.22 3D design drawing of the DRV8701-based dual-motor driver
Img.23 DRV8701 dual-motor driver board
Initial implementation and observed challenges. We initially employed the ESP32 controller's built-in inverse kinematics for the robotic arm, executing linear trajectories to spread the gel over the wound. However, this baseline implementation exhibited severe oscillations and poor positioning accuracy, as demonstrated in the following video. Although tuning the PID control parameters for each joint partially alleviated the oscillations, the fundamental positioning accuracy issues persisted.
Video 6. Severe oscillations observed with the initial control implementation
Root cause analysis and solution. Through systematic experimentation, we identified the ESP32 controller's limited computational capability as the primary bottleneck for real-time inverse-kinematics solving. To address this, we developed a custom inverse-kinematics solver based on the arm's kinematic parameters as modeled in SolidWorks, significantly improving coordinate-control accuracy. Furthermore, we implemented an independent supervisory control layer that provides per-servo override capability, enabling fine-tuned velocity and acceleration profiles. These improvements substantially enhanced the system's motion precision and stability.
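To illustrate the kind of solver we moved to the PC side, the sketch below shows a simplified two-link geometric inverse kinematics (base rotation plus shoulder/elbow in a vertical plane). The link lengths are example values, not the arm's actual SolidWorks parameters, and the real solver also handles the remaining joints and joint limits.

```python
import math

def planar_ik(x, z, l1=0.24, l2=0.22):
    """Geometric IK for a 2-link arm in the vertical plane.
    (x, z): target after removing base rotation; l1, l2: link lengths [m].
    Returns (shoulder, elbow) angles in radians (elbow-up branch)."""
    r2 = x * x + z * z
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(c2)
    shoulder = math.atan2(z, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def ik_with_base(x, y, z):
    """Full target: rotate the base toward (x, y), then solve in the plane."""
    base = math.atan2(y, x)
    shoulder, elbow = planar_ik(math.hypot(x, y), z)
    return base, shoulder, elbow

print([round(math.degrees(a), 1) for a in ik_with_base(0.30, 0.10, 0.05)])
```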
Video 7. Improved motion stability after implementing the optimized control algorithm
Path Planning Optimization
We introduced our prototype to the patients we had visited earlier and let them try it at their own discretion. They pointed out that the stability of the robotic arm was crucial to their confidence in using it: only a device that is as stable and vibration-free as possible lets them feel at ease, without worrying about secondary injuries or errors during operation. Guided by feedback from physicians and user-experience research, we identified Z-axis stability as a critical safety requirement. Excessive Z-axis jitter poses the risk of direct nozzle–wound contact, potentially causing secondary mechanical injury and increasing infection risk. To mitigate this hazard, we redesigned the motion-planning strategy, transitioning from linear Cartesian trajectories to polar-coordinate curved paths. In this refined scheme, the majority of the motion is accomplished through BASE joint rotation alone, while the ELBOW and SHOULDER joints—which have direct coupling to the Z-axis position—remain largely stationary. This architectural change substantially reduces Z-axis displacement variability and enhances patient safety.
Img.24 Illustration of polar coordinate path planning. By primarily using the base rotation, Z-axis movement is minimized, ensuring patient safety.
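A minimal sketch of the polar waypoint generation: waypoints are produced by sweeping the base angle at a nominally constant radius and height, so the shoulder and elbow hardly move. The radius, height, and arc range are placeholder values.

```python
import math

def polar_sweep(radius, height, theta_start_deg, theta_end_deg, n=20):
    """Waypoints along an arc of constant radius and height around the base axis.
    Most of the motion is carried by base rotation, keeping Z essentially constant."""
    waypoints = []
    for i in range(n):
        theta = math.radians(theta_start_deg +
                             (theta_end_deg - theta_start_deg) * i / (n - 1))
        waypoints.append((radius * math.cos(theta),   # x
                          radius * math.sin(theta),   # y
                          height))                    # z stays fixed
    return waypoints

# Placeholder sweep: 0.30 m radius, 0.05 m above the table, 30° arc.
for x, y, z in polar_sweep(0.30, 0.05, -15, 15, n=5):
    print(f"{x:.3f} {y:.3f} {z:.3f}")
```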
Iterative improvements summary. Our motion control development progressed through multiple iterations. The initial version employed the controller board's built-in inverse kinematics with straight-line trajectories, which resulted in significant jitter and positioning deviations. Through systematic refinement, we implemented an in-house IK solver based on precise structural parameters, substantially improving coordinate accuracy. We then added PC-side independent supervisory control for each joint servo to refine velocity and acceleration profiles. Finally, guided by physician feedback emphasizing the need for enhanced Z-axis stability, we redesigned the path planning to use polar-coordinate trajectories, ensuring that the BASE joint performs the majority of motion while the SHOULDER and ELBOW joints remain largely stationary, thereby significantly suppressing Z-axis jitter. Detailed code logic and implementation notes are available in our GitHub repository.
Img.25 presents a comparative analysis of path-error measurements (in millimeters) between the T104 configuration (full joint-angle control with the original encapsulated solver) and the T102 configuration (individual joint control with our custom PC-side solver based on structural parameters), demonstrating the substantial improvement in positioning accuracy achieved through our iterative approach.
Img.25 Comparison of path-error data measured under the old and new control methods
Video 8. Physical operation demonstration of the robotic arm (since the amount of gel was limited, we did not run the gel pump)
The robotic arm's end effector integrates a UV disinfection lamp and a spraying module, enabling a multi-functional workflow. Post-gel application, the system can execute pre-programmed disinfection routines, enhancing wound care efficacy and minimizing operator intervention. To reduce costs and simplify the control architecture, we used a single ESP32 to manage all subsystems, including the robotic arm, pumps, sensors, and peripherals, via GPIO-controlled relays and MOSFETs, eliminating the need for an expensive PLC.
Img.26 UV-C disinfection lamp with relay control system
System integration challenges and solutions. During initial operational testing, we identified several architectural deficiencies: the wiring was disorganized, the overall structure lacked clarity, and the distributed power management posed safety risks while hindering reproducibility. To address these shortcomings, we designed an integrated control board that consolidates all pre- and post-use disinfection functions and peripheral actuation onto a unified platform. The design adheres to rigorous electrical engineering principles: high/low-voltage partitioning, unified power distribution, single-point grounding, and hardware interlocks for fail-safe operation. The mains 220 VAC input passes through EMI filtering and fusing into an isolated AC–DC converter producing 24 V, which is then stepped down via synchronous buck converters to 12 V and 5 V rails. These supply the pumps, fans, thermoelectric cooler, and solenoids (24 V/12 V), as well as the microcontroller, logic circuits, and USB hub (5 V). This architecture enables the entire system to operate with a single AC power cable and one host-MCU serial connection, executing the complete operational sequence: power-on self-test, pre-disinfection, gel dispensing, alcohol flushing, and cooling.
Img.27 Initial wiring layout showing distributed components and unstructured cabling
Img.28 Integrated circuit board schematic diagram
Img.29 PCB diagram of the integrated circuit board
Img.30 3D design drawing of integrated circuit board
We designed an integrated control board with plug-and-play interfaces and a clear layout for easy user connection, while incorporating protective circuitry to proactively prevent hazards.
The UVC lamp uses a 24 V coil relay to switch 220 VAC directly on-board; its control path forms a hardware interlock via a switch, and no software command can turn on the UVC before the interlock is closed. Spraying and rinsing are handled by two 12 V DC pumps driven separately by the two H-bridges of the on-board L298N (later improved to a DRV8701 dual-motor driver board), with enable and PWM speed control; together with the robotic arm, this enables fully automatic gel dispensing along the path and reverse pumping of alcohol disinfectant. Fans, the thermoelectric cooler, and the electromagnet are on independent MOSFET channels, with a default 20–25 kHz PWM to reduce audible noise; the selected TEC supports current monitoring and current limiting.
In the electrical architecture, power ground and signal ground are star-connected at a single point; motor return and switching-edge paths are kept as short and straight as possible and cross sensitive signals orthogonally. Adequate creepage and clearance distances are maintained between high- and low-voltage regions, supplemented by copper keep-out slots and shielding parts, with the relay contact area fenced off and marked by silkscreen warnings. At the interfaces, keyed connectors and clear silkscreen labeling are provided for the pumps, fans, TEC, UVC, and magnetic door switch, to reduce assembly errors; an on-board MCU female header is compatible with an external ESP (or an STM32 with the same pinout), connecting to the host computer and sensors via UART/I²C, retaining rapid-integration capability while reserving headroom for future expansion.
The firmware sequence starts with a power-on self-test, sequentially confirming the states of the interlock (magnetic door switch), cover, and emergency stop, and then allowing entry into standby; in the pre-disinfection phase, the UVC lamp is turned on and timed to complete surface sterilization, followed by switching off the UVC, alcohol flushing, and an optional short post-disinfection before returning to standby. The overall design aims to achieve a robust power tree, clear drive channels, and standardized interfaces to make assembly and debugging predictable, while leaving upgrade paths in component selection and layout.
In real clinical settings, different wound conditions call for different treatment and care measures; at the same time, our thermosensitive gel dressing has requirements regarding environmental factors such as ambient temperature. Therefore, it is necessary to monitor and integrate parameters such as temperature, pH, humidity, and ROS. By combining a microcontroller/ESP with various sensors, we built an integrated wound-monitoring and environmental data-sensing system, and we use IoT technology for data collection and transmission. This system greatly assists pre-treatment assessment and post-treatment recovery monitoring in our project, and the same integrated, miniaturized "sensing + IoT" concept can also be applied to synthetic-biology laboratory assays and to environmental monitoring in industrial workflows.
For our project, the core purpose of wound-condition monitoring is to convert superficially "similar-looking" appearances into measurable, traceable micro-environmental signals, thereby identifying the wound's trajectory earlier and more objectively.
To gain an initial understanding of wound inflammation—and given the compact size of the thermal array sensor, which makes rigid mounting difficult—we designed and 3D-printed a multi-interface, detachable module to conveniently secure the ESP32 and the sensor. In our preliminary experiments, it performed well with stable fixation.
Img.31 Modular quick-disassembly 3D printed support + thermal array sensor + ESP32 physical diagram
This module is built around the ESP8266 (ESP-12F). It uses a DHT11[16] to acquire ambient/wound temperature and humidity, and reports via the ESP's Wi-Fi–based IoT stack. When "possible water contact" or abnormal temperature/exudate trends are detected, it triggers an alert. Two key on-board expansions are reserved: a serial interface for an ROS/hydrogen-peroxide electrochemical probe, and a photothermal-promoter heater interface. This reserves hardware and code paths to link future inflammation assessment with therapy, enabling closed-loop control. Although we have not yet built an ideal ROS sensor, based on literature review and discussions with our advisors, we have considered two possible technical routes: obtaining inflammation-related indicators via electrochemical biosensing; and obtaining indicators via optical fluorescence/colorimetry.
Electrochemical method (using H₂O₂ as a representative ROS marker)
This route targets wearable, low-consumable, continuous monitoring. A three-electrode system is constructed in the dressing (Prussian Blue (PB)-modified carbon working electrode + Ag/AgCl reference + carbon counter electrode). H₂O₂ is selectively reduced by low-potential chronoamperometry (0 to −0.05 V vs Ag/AgCl), effectively suppressing coexisting interferents such as glucose and uric acid. The analog front end (AFE) can be an integrated electrochemical AFE (LMP91000 / ADuCM355 / AD5940) or a discrete "three-op-amp" potentiostat: a driver amplifier maintains the WE–RE potential; a TIA converts 10 nA–5 μA currents to voltage with $R_f$ set to 100 kΩ / 1 MΩ / 10 MΩ (switched via ADG704), and a parallel $C_f$ limits the front-end bandwidth to 5–10 Hz to suppress skin and lead noise. The analog ground is islanded, and high-impedance nodes use guard rings; the I/V output feeds the MCU ADC. Firmware executes step chronoamperometry: after 1–2 s of polarization stabilization, an average is taken over a 0.5–1.0 s window and linearized to μM-level H₂O₂; a three-point calibration (10/100/1000 μM) is stored in Flash, with diffusion/kinetics compensation using the on-board temperature sensor. Power consumption is low, enabling 24/7 continuous acquisition; the only disposables are the microelectrode/dressing. Data are merged into the host module via the existing STM32 + I²C peripheral framework, outputting unified ROS_uM and a quality score q_ros, and DAC-C can be mapped to 0–3.0 V (e.g., 0–300 μM → 0–3.0 V) to provide an analog alarm[17][18][19].
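A hedged sketch of the firmware-side conversion from the averaged chronoamperometric current to an H₂O₂ concentration using a stored three-point calibration; the calibration currents, sampling rate, and window timings below are illustrative placeholders, not measured values.

```python
import numpy as np

# Placeholder three-point calibration: (concentration [µM], measured current [nA]).
CAL_POINTS = np.array([[10.0, 12.0], [100.0, 118.0], [1000.0, 1160.0]])

def current_to_uM(i_nA):
    """Linear least-squares fit of the calibration points, then invert it."""
    conc, cur = CAL_POINTS[:, 0], CAL_POINTS[:, 1]
    slope, intercept = np.polyfit(conc, cur, 1)       # i = slope * c + intercept
    return (i_nA - intercept) / slope

def chronoamperometry_reading(samples_nA, t_s, settle_s=1.5, window_s=0.8):
    """Average the current over a window after the polarization transient settles."""
    sel = (t_s >= settle_s) & (t_s <= settle_s + window_s)
    return current_to_uM(samples_nA[sel].mean())

# Simulated 3 s trace at 50 Hz: decaying transient on top of ~120 nA steady current.
t = np.arange(0, 3, 0.02)
i = 120.0 + 400.0 * np.exp(-t / 0.3) + np.random.normal(0, 2.0, t.size)
print(f"H2O2 ≈ {chronoamperometry_reading(i, t):.0f} µM")
```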
Optical method (single-use reagents convert ROS to fluorescence/colorimetry)
This route is easier to quantify and interference-resistant, suitable for point measurements during dressing changes or short-term continuous reads. A disposable microchannel/sponge pad is pre-coated with HRP + Amplex Red/UltraRed; upon reaction with H₂O₂ it yields the fluorescent product resorufin, with emission at 580–600 nm. The main board uses a green LED (≈530–560 nm) as a modulated source + a narrow bandpass filter + a photodiode; a timer modulates the LED at 1–2 kHz, the ADC samples synchronously, and lock-in/synchronous detection at the same frequency greatly suppresses ambient light and 50/60 Hz interference. The photodiode TIA employs a low-leakage RRIO op-amp (OPA379/OPA333), with $R_f$ = 50–500 kΩ depending on brightness, followed by a 2nd-order low-pass ($f_c$ ≈ 10 Hz). A read time of 1–2 minutes yields the concentration, which is interpreted alongside pH/temperature. If a non-fluorescent option is preferred, TMB colorimetry (λ ≈ 650 nm) can be used with dual-wavelength ratiometry to cancel substrate differences. Hardware connects via I²C to a photodiode front end with ADC, or directly as an analog input to the ADC; firmware keeps a unified data interface: it outputs ROS_uM / q_ros and supports time alignment with pH and T[17][18][19].
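The synchronous-detection step can be sketched as follows; the modulation frequency, sample rate, and signal amplitudes are placeholders, and a real implementation would run the demodulation on the MCU rather than in NumPy.

```python
import numpy as np

def lockin_amplitude(samples, fs, f_mod):
    """Demodulate at the LED modulation frequency: multiply by sin/cos references
    and low-pass by averaging, which rejects ambient light and 50/60 Hz pickup."""
    t = np.arange(samples.size) / fs
    i = np.mean(samples * np.sin(2 * np.pi * f_mod * t))   # in-phase component
    q = np.mean(samples * np.cos(2 * np.pi * f_mod * t))   # quadrature component
    return 2.0 * np.hypot(i, q)                             # recovered amplitude

# Placeholder trace: 1 kHz modulated fluorescence signal + ambient offset + 50 Hz hum.
fs, f_mod = 20000.0, 1000.0
t = np.arange(0, 1.0, 1 / fs)
sig = 0.05 * np.sin(2 * np.pi * f_mod * t)                  # fluorescence term
noise = 0.5 + 0.2 * np.sin(2 * np.pi * 50 * t)              # offset + mains hum
print(f"recovered amplitude ≈ {lockin_amplitude(sig + noise, fs, f_mod):.3f}")
```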
Img.32 Schematic diagram of temperature and humidity sensor
Img.33 3D design drawing of the temperature and humidity sensor
Img.34 ESP-8266 schematic diagram
Img.35 ESP-8266 3D Design Drawing
This module is a miniaturized wound-environment monitoring board that measures pH and temperature, converting the sensor signals into voltage or current outputs for easy integration with existing controllers/data loggers (e.g., STM32/ESP). The board amplifies and conditions the low-level electrode signals, and the MCU performs basic filtering and temperature compensation, so the host only needs to read a stable analog quantity to complete the measurement.
The pH channel supports common micro-electrodes and implements a high-impedance input buffer with zero-point/slope calibration. Its output can be configured as a voltage (e.g., 0–Vref mapped to the full pH range) or as a current (e.g., 4–20 mA mapped to pH 0–14, as an example). The temperature channel uses a skin-contact probe to provide a standalone temperature reading and to compensate the pH reading for temperature variations, keeping the result stable when skin temperature changes. Both channels are sampled and processed by the on-board MCU at low, continuous rates, and the outputs remain smooth voltage/current signals.
In use, the module is placed with the dressing/probe near the wound and begins recording after a brief calibration. Over the hours to days following medication, the host continuously reads the pH-related voltage/current and the temperature-related voltage/current and observes their trends: sustained alkaline pH together with elevated or more volatile temperature often indicates inflammation risk, while a declining, stable pH and temperature returning to physiological levels suggest good recovery. In this way, the device provides a quantitative basis for deciding whether to reapply medication, when to change the dressing, or whether additional intervention is needed, enabling continuous and objective post-application monitoring.
Img.36 Schematic diagram of the integrated, miniaturized pH and temperature monitoring module
Img.37 3D design drawing of the integrated, miniaturized pH and temperature monitoring module
In the care workflow, continuous monitoring informs exudate and dressing management—avoiding both overtreatment and undertreatment—while quantifying the effect of each intervention. It supports patient-specific thresholds and follow-up cadence, reduces unnecessary in-person visits and costs, and generates time-stamped trend curves and event logs for medical recordkeeping and quality control. Importantly, these data are decision-supporting rather than diagnostic; if red-flag symptoms arise—such as escalating pain, spreading erythema, malodorous drainage, or fever—patients should seek medical attention promptly in accordance with clinical protocols.
In parallel, we are integrating the existing monitoring board with an ESP module to transmit wound pH and temperature data securely and reliably to both clinicians and patients, enabling truly actionable remote follow-up. Because wound status can change abruptly and is time-sensitive, converting local readings into cloud-based continuous curves with timely alerts allows clinicians to detect anomalies early during post-discharge care or home monitoring, enabling timely adjustments to dressing change schedules or medication regimens. For patients, clear trend visualizations and concise action prompts reduce unnecessary clinic revisits and treatment delays. From an engineering perspective, leveraging the proven ESP platform accelerates development cycles, reduces BOM costs, and offloads connectivity and encryption tasks from the main controller, ensuring dependable and privacy-preserving data delivery.
By decoupling these functions from the STM32 host MCU, the microcontroller can focus on high-precision acquisition and signal interpretation, while the ESP module handles network connectivity, resumable transmission, and over-the-air firmware updates, resulting in enhanced overall system reliability and maintainability.
Advanced wound state classification. Future iterations will leverage continuous pH and temperature data streams to automatically classify wound states into distinct categories: "healing" (characterized by declining pH and stable or decreasing temperature), "exudate retention" (elevated pH without marked temperature increase), and "inflammation/infection" (concurrent pH and temperature elevation). This algorithmic classification transforms subjective visual observations into objective, quantifiable trends and discrete clinical events.
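A hedged sketch of such a rule-based classifier over pH and temperature trend deltas; the thresholds are illustrative placeholders, not validated clinical cut-offs.

```python
def classify_wound_state(d_pH, d_T):
    """Map trend deltas (current value minus moving baseline) to a coarse state.
    d_pH: change in pH; d_T: change in temperature [°C]. Thresholds are placeholders."""
    if d_pH <= -0.2 and d_T <= 0.2:
        return "healing"                 # pH declining, temperature stable or falling
    if d_pH >= 0.3 and d_T < 0.5:
        return "exudate retention"       # alkaline shift without marked warming
    if d_pH >= 0.3 and d_T >= 0.5:
        return "inflammation/infection"  # concurrent pH and temperature elevation
    return "indeterminate"

print(classify_wound_state(-0.4, 0.0))   # -> healing
print(classify_wound_state(0.5, 1.2))    # -> inflammation/infection
```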
Intelligent alert generation. We plan to implement adaptive signal processing using moving baseline estimation combined with low-pass and median filtering to reject noise and artifacts. The system will jointly analyze changes in pH (ΔpH), temperature (ΔT), and signal variability to generate context-aware nursing recommendations, such as "recommend dressing change," "schedule clinical recheck," or "consider specialist referral." These prompts are intended solely for clinician decision support and do not constitute medical diagnosis.
Secure IoT integration. The STM32 microcontroller and ESP module will communicate via UART using robust framing (COBS) and error detection (CRC-16). Cloud connectivity will employ encrypted protocols (MQTTS/HTTPS with TLS 1.2/1.3 and mutual TLS authentication), supplemented by offline data buffering, quality-of-service guarantees, last-will-and-testament messaging, network time synchronization, and cryptographically signed over-the-air firmware updates to ensure data integrity and system security.
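As a sketch of the planned framing, the Python below builds a COBS-encoded frame with an appended CRC-16 (CRC-16/CCITT-FALSE is assumed here; the final polynomial and payload layout are still open design choices).

```python
import struct

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE (poly 0x1021, init 0xFFFF) over the payload bytes."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def cobs_encode(data: bytes) -> bytes:
    """Consistent Overhead Byte Stuffing: removes 0x00 from the frame body so a
    single 0x00 byte can act as an unambiguous delimiter on the UART link."""
    out = bytearray([0])          # placeholder for the first code byte
    code_idx, code = 0, 1
    for byte in data:
        if byte == 0:
            out[code_idx] = code
            code_idx, code = len(out), 1
            out.append(0)
        else:
            out.append(byte)
            code += 1
            if code == 0xFF:      # maximum run length reached, start a new block
                out[code_idx] = code
                code_idx, code = len(out), 1
                out.append(0)
    out[code_idx] = code
    return bytes(out)

def build_frame(payload: bytes) -> bytes:
    """Payload + CRC-16, COBS-encoded and terminated with the 0x00 delimiter."""
    crc = crc16_ccitt(payload)
    return cobs_encode(payload + struct.pack(">H", crc)) + b"\x00"

# Hypothetical record: pH*100 and temperature*100 as big-endian 16-bit integers.
frame = build_frame(struct.pack(">HH", 742, 3685))   # pH 7.42, 36.85 °C
print(frame.hex(" "))
```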
Clinical data integration and visualization. The envisioned data flow follows a device-to-cloud-to-user architecture with time-series archiving and FHIR-compliant data mapping. The clinician-facing web portal will display 24-hour and 7-day trend curves with automatically generated alert markers, while the patient-facing mobile application will provide simplified trend summaries and actionable care recommendations tailored to lay users.
To ensure accessibility for end users, particularly elderly patients with limited technical proficiency, we developed an intuitive graphical user interface (GUI) that abstracts the underlying complexity of the robotic system. The interface design prioritizes simplicity and clarity, enabling complete operational workflows through minimal user interaction.
Img.38 Graphical user interface for hand-eye calibration and robotic arm positioning
The GUI provides several key features to facilitate user interaction: (1) a real-time camera feed for direct wound visualization and monitoring; (2) adjustable detection sensitivity controls to accommodate varying wound characteristics and imaging conditions; (3) manual positioning controls accessible via both on-screen buttons and keyboard shortcuts (WASDQE keys) for fine-tuned robotic arm placement; and (4) streamlined treatment initiation requiring only a single confirmation action once the wound is correctly identified. This design ensures an efficient, user-friendly workflow that minimizes operational complexity.
Img.39 System architecture: hierarchical organization of upstream sensing and downstream actuation modules
Img.40 Operational workflow: complete treatment cycle from wound recognition to gel application
To further reduce barriers to adoption, we have packaged the software as standalone executables, eliminating the need for complex installation procedures or dependency management. Additionally, based on extensive feedback from clinicians and end users, we have iteratively refined the user experience design and implemented comprehensive fault-tolerant messaging to gracefully handle error conditions and guide users through recovery procedures.
We conducted a comprehensive cost analysis to ensure our system remains accessible to resource-constrained healthcare facilities and research institutions. The following bill of materials (BOM) represents the minimum configuration required for a fully functional wound-care robotic system:
| Part Name | Qty | Unit Price (USD, approx.) | Total (USD, approx.) | Notes |
|---|---|---|---|---|
| ESP32 microcontroller | 1 | 25 | 25 | Main controller; choose a model with PSRAM for image handling. |
| Ro-ARM-M3 open-source robotic arm | 1 | 350 | 350 | 5-DOF robotic arm body. |
| L298N motor driver module | 2 | 15 | 30 | Drives the arm's joint motors. |
| 4-channel relay module | 1 | 15 | 15 | Switches higher-current loads (TEC, pumps, UV lamp, etc.). |
| GYMCU90640 thermal array sensor | 1 | 80 | 80 | Non-contact temperature sensing for wound thermal field. |
| USB camera | 1 | 50 | 50 | Vision-assisted positioning / wound imaging. |
| Brushed DC peristaltic pump | 1 | 40 | 40 | Precise gel delivery. |
| Brushed DC diaphragm pump | 1 | 35 | 35 | Suction or liquid transfer. |
| 40×40 mm TEC (Peltier) module | 1 | 20 | 20 | Maintains gel at ~4 °C with the cooling system. |
| Aluminum heatsink with fan | 1 | 25 | 25 | Heat dissipation for the TEC. |
| Aluminum radiator (cold plate) | 1 | 30 | 30 | Optional; improves cooling-loop efficiency. |
| Silicone tubing | Various | 2/m | 10 | For fluid transport (estimate based on several meters). |
| 3D-printed parts | Various | — | 50 | Magnetic mounts, sensor brackets, enclosure, etc. (material estimate). |
| 2020 aluminum extrusions & connectors | Various | — | 60 | Frame construction (optional but improves stability). |
| 12 V switching power supply | 1 | 40 | 40 | Size by total load (e.g., 10 A). |
Cost Summary: The hardware components total approximately USD 860. Adding PCB prototyping and assembly costs (estimated at USD 100 for first-run open-source boards in small batch quantities), the grand total is approximately USD 960. This represents an economical yet fully functional baseline configuration that balances performance, reliability, and accessibility for resource-constrained settings.
To promote accessibility, reproducibility, and community-driven innovation, we have designed our robotic system with comprehensive open-source support and extensibility in mind. Our implementation encompasses three key aspects: parameterized control algorithms, standardized hardware interfaces, and complete documentation for secondary development.
We have decoupled the link and kinematic parameters from the control algorithm, enabling universal applicability across different robotic arm configurations. Users only need to input their robot's body and kinematic parameters to generate appropriate control commands, eliminating the need for algorithm modification or recompilation. This design philosophy significantly lowers the barrier to adoption and facilitates rapid prototyping with alternative robotic platforms.
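As a sketch of what this decoupling can look like in practice, the example below drives a generic forward-kinematics routine purely from a configuration structure; the field names and numbers are illustrative, not our actual schema or the arm's real parameters.

```python
import math

# Illustrative kinematic description; a real file would be loaded with json.load().
ARM_CONFIG = {
    "links_m": [0.24, 0.22, 0.10],                       # example link lengths
    "joint_limits_deg": [[0, 360], [0, 180], [0, 225], [0, 270]],  # informational only
}

def forward_kinematics(base, joint_angles_rad, cfg=ARM_CONFIG):
    """Planar FK driven purely by the configuration: the same code serves any
    arm whose geometry is described by cfg['links_m']."""
    r, z, acc = 0.0, 0.0, 0.0
    for length, angle in zip(cfg["links_m"], joint_angles_rad):
        acc += angle
        r += length * math.cos(acc)
        z += length * math.sin(acc)
    return r * math.cos(base), r * math.sin(base), z     # x, y, z in the base frame

print(forward_kinematics(math.radians(30), [math.radians(a) for a in (45, -30, -15)]))
```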
To ensure modularity and extensibility, we have implemented standardized hardware interfaces throughout the system:
We provide comprehensive resources to support community reuse and derivative work:
Detailed Documentation: Complete hardware wiring diagrams, PCB schematics, firmware flashing tutorials, software API reference, and inverse-kinematics parameter configuration guides are provided to ensure users can fully understand and modify the system.
Example Code: Sample projects are offered for basic joint control, trajectory planning, sensor data acquisition, and data upload. Simulation source code and models are also provided to enable virtual testing before physical deployment.
Open-source 3D Models: All custom mechanical parts, including the magnetic mount, sensor brackets, and enclosures, are released as editable 3D model files (e.g., STEP format), allowing others to print, adapt, or redesign components to suit their specific requirements.
GitHub Repository: All resources—including wiring diagrams and schematics, firmware flashing tutorials, simulation source code, software API documentation, IK parameter configuration guides, example code (joint control, trajectory planning, sensor reading, data upload), and open-source 3D models—are centrally hosted on GitHub for convenient access by future iGEM teams, healthcare professionals, and related organizations. This comprehensive repository facilitates reuse, modification, and collaborative improvement of our platform.
We have uploaded complete source code, documentation, and 3D models to GitHub for the open source community to use:
ROS Robotic Arm Simulation System: Robot Operating System simulation of the robotic arm (visit repository).
Open Source Firmware Library: a collection of open-source firmware libraries for hardware development (visit repository).
Both repositories include complete source code, documentation, 3D models, and usage tutorials. Contributions and feedback are welcome!
The bidirectional peristaltic pump module represents a key innovation for wet-lab automation. In forward mode, synchronized with the robotic arm, it enables quantitative and reproducible coating of dressings or delivery of experimental reagents. In reverse mode, it facilitates waste back-aspiration, solvent transfer, cleaning, and disinfection. This dual-functionality establishes a comprehensive safety-oriented workflow for synthetic biology applications, directly addressing common wet-lab bottlenecks including high human contact requirements, elevated contamination risks, and poor inter-operator repeatability.
By combining Peltier thermoelectric cooling with DS18B20 temperature sensing, our system implements a negative-feedback thermal control loop that maintains precise temperature setpoints (e.g., 4 °C for hydrogel storage). Continuous gel circulation between the cooling element and storage reservoir via the peristaltic pump ensures uniform thermal distribution and prevents localized temperature gradients. This integrated sensing and actuation architecture provides genuine closed-loop thermal management for temperature-sensitive synthetic biology protocols, delivering stable and repeatable environmental conditions during pre-induction stages.
The robotic arm's ability to autonomously switch among multiple 3D-printed standardized yeast reservoirs, combined with automated aspiration and magnetic stirring for homogenization, enables efficient parallel experimentation. Standardized vessel dimensions and interface specifications provide a unified physical framework for modular expansion, including integrated plumbing and sensor connections. This transforms the traditionally manual and error-prone "construct—mix—dose/store" workflow into reproducible, automated, and standardized procedures that align with iGEM's emphasis on systematic experimental design, parallel testing, and rigorous control groups.
Our hardware platform integrates multiple sensor interfaces—including DS18B20 temperature probes and thermal imaging arrays (GYMCU90640)—to capture environmental parameters as structured, time-stamped operational data. These measurements provide quantitative feedback for optimizing induction strategies, assessing expression phenotypes, and refining dosing protocols, thereby closing the loop of evidence-based synthetic biology engineering. The open-source firmware implements precise temporal synchronization of all sensor streams, ensuring that each experimental batch's environmental conditions are immutably bound to execution traces and automatically archived. A web-based interface enables timeline-based review of culture conditions and operation logs, establishing standardized data provenance that enhances collaborative reproducibility and transparency in synthetic biology research.
With a total system cost of approximately USD 960, our platform demonstrates that sophisticated robotic automation for synthetic biology is achievable within modest budgets. We are committed to fully open-sourcing all critical components—including the parameterized inverse kinematics solver, hardware interface specifications, example code, mechanical 3D models, and comprehensive documentation. This commitment directly lowers the barrier to entry for teams seeking to build experimental platforms, fostering widespread reuse and community-driven secondary development within the iGEM ecosystem and beyond.