
The detection of pesticides in our environment and food supply is a vital challenge for both public health and ecological safety.
While traditional methods are accurate, they are often expensive, time-consuming, and require specialized laboratory equipment and trained personnel.
Our project aims to overcome these issues by developing an integrated biosensing system that is portable, rapid, and accessible.
The core of our system is an engineered Escherichia coli (E. coli) strain, designed to produce a quantifiable bioluminescent signal in the presence of specific pesticides.
However, our biosensor is only half of the solution. To make the system practical and useful, we also need a reliable method to read and quantify its output.
The small changes in light intensity produced by our biosensor are difficult for the human eye to quantify accurately.
Our software, the PestiGuard Biosensor Analysis Platform, bridges this critical gap.
Using just a smartphone, or any device with a camera, this sophisticated yet user-friendly web application empowers users to perform on-site pesticide analysis.
By simply capturing an image of the biosensor sample in our test kit, the software automatically performs image analysis using Computer Vision (CV) to translate the captured bioluminescence into an accurate pesticide concentration.
Figure 1.1: PestiGuard Biosensor Analysis Platform
Video 1.1: Software Introduction Video
To help users understand our software better, we have created this software documentation, which includes a user guide and technical documentation.
After viewing the software documentation, try our software by scanning the QR code below! (A test case can be downloaded in our app.)
Figure 2.1: QR code of our app
In our software development, we strictly followed the Software Development Life Cycle (SDLC) to create reliable, scientifically sound software.
In the planning phase, we defined a clear focus for our software.
Our goal was to create a practical, user-friendly tool to analyze images from our PestiGuard biosensor and accurately calculate pesticide levels.
Our scope was therefore focused: to build a self-contained web application that runs entirely on the user's device, such as a smartphone, without any complicated backend.
The app's core functions should include capturing or uploading an image, automatically analyzing the glowing test zones, and displaying a quantitative result.
Our main objectives focused on accuracy, accessibility, and usability. We aimed to create a reliable tool for rapid analysis that everyone can use easily. This means a simple workflow (Capture → (Align) → Analyze → Result), an app that works offline, a user-friendly interface, and different analysis modes for the diverse needs of users.
Figure 3.1: Our aim
In the analysis phase, we defined the specific functions required for the software and the constraints that would guide the design.
The core function is to transform an image of our biosensor into a quantitative pesticide measurement. This was broken down into a clear workflow:
1. Image Input: The user should be able to either capture a live photo with their device camera or upload a pre-existing image.
2. Automated Processing: The software should guide the user to align and crop the image, isolating the specific regions of interest (ROIs) where the luminescent E. coli samples are located.
3. Analysis Algorithm: An algorithm is needed to calculate the average brightness within the ROIs. Using our preset standard curves for different pesticides, the app converts this brightness value into an accurate pesticide concentration, supporting both a preset standard curve and a calibration strip (a sketch of this conversion is shown below).
4. Data Management: The final result should be shown clearly. The analysis result, including the image and data, needs to be saved in a local history for the user's future reference.
Figure 3.2: Software analysis aim
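Below is a minimal TypeScript sketch of how the brightness-to-concentration conversion in item 3 could work, assuming simple linear interpolation over a preset standard curve. The names and curve values (CurvePoint, standardCurve, brightnessToConcentration) are illustrative, not the exact ones in our codebase.

```typescript
// Illustrative sketch only: names and curve values are hypothetical,
// not the exact ones used in our codebase.
interface CurvePoint {
  brightness: number;     // mean ROI brightness (0-255)
  concentration: number;  // pesticide concentration (e.g. mg/L)
}

// A preset standard curve for one pesticide, sorted by brightness.
const standardCurve: CurvePoint[] = [
  { brightness: 20, concentration: 0 },
  { brightness: 80, concentration: 0.5 },
  { brightness: 140, concentration: 2.0 },
  { brightness: 200, concentration: 5.0 },
];

// Convert a measured ROI brightness into a concentration by linear
// interpolation between the two nearest points on the standard curve.
export function brightnessToConcentration(
  brightness: number,
  curve: CurvePoint[]
): number {
  if (brightness <= curve[0].brightness) return curve[0].concentration;
  const last = curve[curve.length - 1];
  if (brightness >= last.brightness) return last.concentration;

  for (let i = 1; i < curve.length; i++) {
    const lo = curve[i - 1];
    const hi = curve[i];
    if (brightness <= hi.brightness) {
      const t = (brightness - lo.brightness) / (hi.brightness - lo.brightness);
      return lo.concentration + t * (hi.concentration - lo.concentration);
    }
  }
  return last.concentration; // unreachable; keeps the return type total
}
```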
This design was shaped by several constraints. The most significant one was that the application needed to be fully client-side, meaning the entire data processing workflow occurs on the user's device. This required us to create efficient algorithms that run smoothly in a web browser, which also ensures the tool works offline and protects user privacy. Furthermore, the software had to be very user-friendly for non-technical users, which led us to create a simple, guided, step-by-step interface. Finally, as a web application, it needs to be platform-independent and function consistently across different browsers and devices.
Figure 3.3: Software design constraints
Our design focus was to create a clean, maintainable, and accurate app.
We therefore chose React, a widely used technical stack, and a straightforward software architecture to achieve the goals defined in Phase 1 and Phase 2.
Our app is built on a component-based architecture, where the user interface is assembled from small, reusable components.
This makes the code easier to manage and update for us developers. The entire application runs on the client-side (the user's browser), with no need for a backend server.
This approach provides several benefits: it protects user privacy since all data stays on the user's own device, it enables offline functionality, which is important for field use, and it reduces server costs and maintenance.
As a result, the platform is more scalable and easier to distribute to all users.
Below are our main technical design choices. They were chosen for speed, reliability, and a better user experience.
1. React: We used React as our core framework because its component-based model lets us build complex, interactive interfaces efficiently.
2. TypeScript: We wrote the code in TypeScript instead of plain JavaScript. This adds static typing, which helps us catch errors early and makes the code more reliable, a critical feature for a scientific tool.
3. Material-UI (MUI): For our app's visual design and UI components, we chose MUI. It provides a library of high-quality, pre-built components such as buttons and layouts, which allowed us to build a good-looking, responsive UI in a short time.
4. Zustand & localStorage: To manage the app's data (like the current image or user settings), we used Zustand, a simple and lightweight state manager. For long-term data storage, such as saving the analysis history, we used the browser's built-in localStorage. This combination is powerful yet simple, and fits our client-side-only approach well.
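As an illustration of this combination, here is a minimal sketch of a Zustand store persisted to localStorage via Zustand's persist middleware. The store name, record fields, and storage key are hypothetical, not our exact implementation.

```typescript
// Minimal sketch of a Zustand store persisted to localStorage.
// Store name, fields, and storage key are hypothetical.
import { create } from 'zustand';
import { persist } from 'zustand/middleware';

interface AnalysisRecord {
  id: string;
  name: string;
  concentration: number;
  capturedAt: string; // ISO date string
}

interface HistoryState {
  records: AnalysisRecord[];
  addRecord: (record: AnalysisRecord) => void;
  clearAll: () => void;
}

export const useHistoryStore = create<HistoryState>()(
  persist(
    (set) => ({
      records: [],
      addRecord: (record) =>
        set((state) => ({ records: [record, ...state.records] })),
      clearAll: () => set({ records: [] }),
    }),
    { name: 'pestiguard-history' } // key used in the browser's localStorage
  )
);
```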
The implementation phase is where we transformed our plans into a usable application.
Following our React component-based architecture, we developed the software in a structured way. We focused on building the core logic first, then the user interface.
First, we developed the core analysis engine (Fig. 3.4). This involved writing the TypeScript functions for all the complex calculations:
1. Image Processing: We created utilities for cropping, aligning, and detecting the test kit's regions of interest (ROIs).
2. Brightness Analysis: We implemented algorithms to accurately measure the average pixel brightness within each ROI (a sketch appears after Figure 3.4 below).
3. Concentration Calculation: We coded the mathematical models for calibration mode to convert brightness data into a final pesticide concentration.
Figure 3.4: Utilities function implementation
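For illustration, the following TypeScript sketch shows one way the ROI brightness measurement described above could be implemented, assuming the image is drawn to an off-screen canvas and the ROI is a rectangle in pixel coordinates. The Roi shape and the luminance weighting are assumptions rather than our exact code.

```typescript
// Sketch of measuring the mean brightness inside a rectangular ROI.
// The Roi shape and the luminance weighting are assumptions for illustration.
export interface Roi {
  x: number;
  y: number;
  width: number;
  height: number;
}

export function measureRoiBrightness(image: HTMLImageElement, roi: Roi): number {
  // Draw the image onto an off-screen canvas so we can read raw pixel data.
  const canvas = document.createElement('canvas');
  canvas.width = image.naturalWidth;
  canvas.height = image.naturalHeight;
  const ctx = canvas.getContext('2d');
  if (!ctx) throw new Error('Canvas 2D context unavailable');
  ctx.drawImage(image, 0, 0);

  // Read only the ROI and average the per-pixel luminance (RGBA layout).
  const { data } = ctx.getImageData(roi.x, roi.y, roi.width, roi.height);
  let sum = 0;
  for (let i = 0; i < data.length; i += 4) {
    sum += 0.2126 * data[i] + 0.7152 * data[i + 1] + 0.0722 * data[i + 2];
  }
  return sum / (data.length / 4); // mean brightness on a 0-255 scale
}
```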
With the analysis logic in place, we built the user interface (UI) using React and Material-UI.
We created reusable components, from simple buttons to complex screens.
This keeps our code organized and ensures a consistent look across the whole application.
Finally, we integrated the UI with the analysis engine using Zustand for state management.
This allowed user actions, like taking a picture, to trigger the analysis functions and display the results seamlessly. We also implemented the persistence layer, using localStorage to automatically save settings and analysis history. Throughout the process, we used Git for version control to track changes and collaborate effectively.
To ensure our app is reliable, accurate, and user-friendly, we conducted a comprehensive, multi-phase testing process. We focused on verifying analytical correctness and ensuring a seamless user experience.
a) Internal testing (Unit & Integration)
Before public release, our development team performed several rounds of internal testing, covering both individual utility functions and the integrated workflow.
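As an example of what such a unit test can look like, here is a sketch assuming Vitest as the test runner; the import path and the function under test refer to the hypothetical brightnessToConcentration utility sketched earlier.

```typescript
// Illustrative unit test, assuming Vitest as the test runner.
// The import path and the function under test are the hypothetical
// brightnessToConcentration utility sketched earlier.
import { describe, it, expect } from 'vitest';
import { brightnessToConcentration } from '../utils/standardCurve';

describe('brightnessToConcentration', () => {
  const curve = [
    { brightness: 20, concentration: 0 },
    { brightness: 200, concentration: 5.0 },
  ];

  it('clamps readings below the lowest calibration point', () => {
    expect(brightnessToConcentration(5, curve)).toBe(0);
  });

  it('interpolates linearly between calibration points', () => {
    // 110 is halfway between 20 and 200, so we expect half of 5.0.
    expect(brightnessToConcentration(110, curve)).toBeCloseTo(2.5);
  });
});
```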
b) User Acceptance Testing (UAT)
The final phase of testing was User Acceptance Testing (UAT), which is designed to evaluate our app's real-world usability.
We conducted a survey in which 319 users were tasked with completing the full analysis workflow.
Figure 3.5: User Acceptance Testing survey
The feedback from them was mostly positive:
Figure 3.6: Process intuitive - 4 or above
Figure 3.7: Easy to navigate - 4 or above
Figure 3.8: Obvious icons - 4 or above
Figure 3.9: Overall feedback - 4 or above
The results from our testing process show that our Biosensor Analysis Platform is not only functionally robust but also achieves our goal of being an accessible, easy-to-use scientific tool, and even a marketable product.
The maintenance phase is important for ensuring the app's long-term reliability and the ongoing improvement of our platform after its initial launch. We focused on providing continuous debugging support, adapting to new technology, and enhancing existing features based on user feedback.
Figure 3.10: Project GitHub repository
Capture workflow
Step 1: Initialize camera & request user permission
Camera capture begins when the "Use Camera" button is pressed, which triggers the app to initialize the user's camera.
The app then requests permission (Fig. 4.1a) to access the camera on the user's device.
If the user refuses, an error is shown and the capture process stops.
Step 2: Display the live camera feed
If permission is granted, the app displays the live camera feed with a control bar for the user to capture the test kit (a code sketch of this camera setup is shown after Figure 4.2 below).
Step 3: User actions (Capturing)
The app then waits for the user to make a choice. (Fig. 4.1b)
Figure 4.1: Camera permission and user action button
Figure 4.2: Camera capture workflow
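The following is a minimal TypeScript sketch of Steps 1 and 2, assuming the browser's standard getUserMedia API and a video element for the live preview; the startCamera function name and options are illustrative rather than our exact implementation.

```typescript
// Sketch of Steps 1-2: request camera access and attach the live stream
// to a <video> element. The function name startCamera is illustrative.
export async function startCamera(videoEl: HTMLVideoElement): Promise<MediaStream | null> {
  try {
    // Prefer the rear ("environment") camera on phones for capturing the test kit.
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: 'environment' },
    });
    videoEl.srcObject = stream;
    await videoEl.play();
    return stream; // keep a reference so the tracks can be stopped later
  } catch (err) {
    // Permission denied or no camera available: show an error and stop the workflow.
    console.error('Camera could not be started:', err);
    return null;
  }
}
```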
Analyze workflow
Step 1: Load and Validate Image
The workflow begins with the Start Analysis action (Fig. 4.3), which is triggered when a user submits an image (either captured via the camera or uploaded from the device).
The app then loads and validates the image.
If the image is invalid, an error is shown, and the process stops.
Step 2: Analyze image
For a valid image, the app's image processing engine is activated.
It attempts to Detect test areas & sample pixels, identifying the specific regions of interest (ROIs) on the biosensor test strip (Fig. 4.4a).
This involves locating the distinct control and test lines and the calibration strips if present.
If the detection fails, the process stops with an error.
Step 3: Display result
If the user previously selected preset mode on the settings page (Fig. 4.4b), our app follows the predefined standard curve to estimate the concentration.
If the user selected strip mode instead, the app calculates the concentration using the image's calibration strips (this branching is sketched below).
Figure 4.3: Analysis user action button
Figure 4.4: ROI Detection and analysis mode
Figure 4.5: Analysis workflow
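The decision flow described in Step 3 can be summarized in the following TypeScript sketch. Every helper and type it declares (detectRois, measureRoiBrightness, brightnessToConcentration, buildCurveFromStrips, presetCurve) is an assumption or a reference to the earlier hypothetical sketches; only the branching logic itself is shown.

```typescript
// High-level sketch of the Analyze workflow's decision flow only.
// Every helper and type declared below is an assumption (or one of the
// hypothetical utilities sketched earlier), not our actual API.
interface Roi { x: number; y: number; width: number; height: number; }
interface CurvePoint { brightness: number; concentration: number; }

declare function detectRois(image: HTMLImageElement): Roi[];
declare function measureRoiBrightness(image: HTMLImageElement, roi: Roi): number;
declare function brightnessToConcentration(brightness: number, curve: CurvePoint[]): number;
declare function buildCurveFromStrips(image: HTMLImageElement, rois: Roi[]): CurvePoint[];
declare const presetCurve: CurvePoint[];

type DetectionMode = 'preset' | 'strip';

function analyzeImage(image: HTMLImageElement, mode: DetectionMode): number {
  // Step 2: locate the test areas; abort with an error if detection fails.
  const rois = detectRois(image);
  if (rois.length === 0) throw new Error('Test area detection failed');

  const brightness = measureRoiBrightness(image, rois[0]);

  if (mode === 'preset') {
    // Preset mode: use the predefined standard curve chosen on the settings page.
    return brightnessToConcentration(brightness, presetCurve);
  }
  // Strip mode: derive the curve from the calibration strips in the same image.
  return brightnessToConcentration(brightness, buildCurveFromStrips(image, rois));
}
```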
Step 1: Save to history
The history workflow starts when an analysis is completed. The results are immediately and automatically saved to the history.
Step 2: View history
The user can then navigate to the history section to view past analyses.
The app's HistoryScreen.tsx component is responsible for this, displaying a list of analysis records in chronological order.
Step 3: Manage history
Users can perform various actions to manage their history data.
a) View details: Users can select a record to expand it and check the results. This provides detailed information about that single analysis, such as the calculated concentration, the date, and the original image.
b) Manage records:
• Delete record: Remove a single record.
• Edit record name: Double-click the result name to rename that analysis.
• Export/import all data: Allows users to back up their data (a sketch of the export step is shown below).
• Clear all data: Delete all saved history at once.
Figure 4.6: History management interface
Figure 4.7: History management workflow
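For illustration, here is a minimal TypeScript sketch of the "Export all data" action, serializing the saved records into a downloadable JSON file; the record shape and file name are assumptions, not our exact implementation.

```typescript
// Sketch of the "Export all data" action: serialize saved history records
// into a JSON file the user can download. Record shape and file name are assumptions.
interface AnalysisRecord {
  id: string;
  name: string;
  concentration: number;
  capturedAt: string;
}

export function exportHistory(records: AnalysisRecord[]): void {
  const json = JSON.stringify(records, null, 2);
  const blob = new Blob([json], { type: 'application/json' });
  const url = URL.createObjectURL(blob);

  // Trigger a download through a temporary anchor element.
  const link = document.createElement('a');
  link.href = url;
  link.download = 'pestiguard-history.json';
  link.click();

  URL.revokeObjectURL(url); // release the temporary object URL
}
```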
To manage the different parts of the app's data efficiently, separate Zustand stores are created to logically partition the data.
Here are the details of each store.
1. Theme store: Manages the application's theme, letting users switch between the Light and Dark themes.
2. Mode store: Handles the detection mode selection. It determines which method the analysis uses; the stored mode (Preset/Strip) is an important input to the analysis logic (a sketch of this store is shown below).
3. History store: Responsible for storing analysis records. It includes all previous analysis results, which are displayed and managed within the History screen.
4. Calibration store: Manages the user's calibration data, allowing users to define or change specific concentration curves for each pesticide.
5. Pesticide store: Includes the main pesticide data and predefined concentration curves. The curves are used for calibration analysis and provide default values for the calibration store.
Figure 4.8: State management architecture
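To illustrate this separation of concerns, here is a minimal sketch of what the Mode store could look like as its own small Zustand store, again assuming the persist middleware and a hypothetical storage key.

```typescript
// Minimal sketch of the Mode store as its own small Zustand store,
// persisted under a hypothetical storage key.
import { create } from 'zustand';
import { persist } from 'zustand/middleware';

type DetectionMode = 'preset' | 'strip';

interface ModeState {
  mode: DetectionMode;
  setMode: (mode: DetectionMode) => void;
}

export const useModeStore = create<ModeState>()(
  persist(
    (set) => ({
      mode: 'preset',            // default analysis mode
      setMode: (mode) => set({ mode }),
    }),
    { name: 'pestiguard-mode' }  // localStorage key
  )
);
```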
To provide codebase transparency and help developers understand our software's file structure and architecture, we present the complete code file structure of our PestiGuard Biosensor Analysis Platform. This structure follows React best practices and maintains a clear separation of concerns.
Main file components:
This comprehensive structure reflects professional software engineering practices, with clear separation of concerns, feature-based organization, and extensive modularity. The architecture also supports complex image processing workflows, sophisticated state management, and maintainable code organization that facilitates collaboration and future development.