SOFTWARE
Imagine trying to introduce students to EEG analysis while struggling with messy spreadsheets, mysterious error messages, or complicated installation steps. Our team designed EEG QuickLab precisely to prevent those frustrations. It is a compact, open-source platform aimed at making the first five minutes of EEG inspection smooth and productive.
At its core, EEG QuickLab recognizes that data acquisition hardware imposes non-negotiable limits: electrode–skin impedance fluctuates, reference leads pick up 50/60 Hz mains noise, and eye blinks or facial muscles contaminate signals with low- and high-frequency bursts. Instead of ignoring these constraints, we embed them into the default algorithms, turning hardware “truths” into software “defaults”.

Starting From Hardware and Literature
Every electrophysiologist knows that high impedance and slight grounding differences convert common-mode power-line noise into a stubborn spike in the EEG spectrum. Textbooks tell us that averaging around event markers (the ERP) boosts the signal-to-noise ratio, and that the brain’s cognitive rhythms live mostly between 1 and 40 Hz. These insights inspired our default signal processing pipeline: a linear-phase 1–40 Hz band-pass filter, biquad notch filters at 50/60 Hz (with an adaptive mode that searches ±1 Hz for the real peak), common average referencing (CAR) for multichannel recordings, and time–frequency analyses via short-time Fourier transforms and Morlet wavelets.
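The core of this default pipeline can be sketched in a few lines with SciPy. This is a minimal illustration of the filtering steps named above (linear-phase FIR band-pass, biquad notch, CAR), not the shipped dsp.py code; the 250 Hz sampling rate and filter parameters here are assumptions for the demo.

```python
import numpy as np
from scipy import signal

FS = 250.0  # assumed sampling rate in Hz

def bandpass_1_40(x, fs=FS, numtaps=401):
    """Linear-phase FIR band-pass (1-40 Hz), applied forward-backward."""
    taps = signal.firwin(numtaps, [1.0, 40.0], pass_zero=False, fs=fs)
    return signal.filtfilt(taps, [1.0], x)

def notch(x, fs=FS, f0=50.0, q=30.0):
    """Biquad IIR notch at the mains frequency."""
    b, a = signal.iirnotch(f0, q, fs=fs)
    return signal.filtfilt(b, a, x)

def car(data):
    """Common average reference: subtract the mean across channels."""
    return data - data.mean(axis=0, keepdims=True)

# synthetic example: 10 Hz alpha rhythm plus 50 Hz mains contamination
t = np.arange(0, 8.0, 1.0 / FS)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = notch(bandpass_1_40(x), f0=50.0)
```

After this pass, the 10 Hz component survives essentially intact while the 50 Hz spike is suppressed by both the band-pass roll-off and the notch.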
Why focus on these? Because in most teaching labs, these steps address 90% of the challenges: they reduce line noise, reveal the alpha and theta bands, flag blink and jaw-clenching artifacts, and make event-related potentials visible.



Designing for the “First Five Minutes”
A recurring theme in our design is “first success within five minutes.” We know that staring at a blank plot or a cryptic error kills curiosity. To combat that, EEG QuickLab requires only three actions to get a valid waveform: load the file, confirm column roles (Auto Guess → Apply Mapping), and pick a channel. If the data lack a sampling rate, a default fs override (250 Hz) can be typed directly into the GUI. If a column isn’t numeric or a sampling index is missing, the program doesn’t crash; instead, the plot area shows a human-readable message explaining what to correct. This literate-error approach not only educates newcomers, it shortens debugging time for seasoned researchers.



Code-Level Philosophy: Contracts and Pure Functions
At the code level, the guiding principle is a minimal data contract. Rather than supporting every proprietary export format, we standardize the input to a “long table” schema: each row must have a channel_name and value, plus one of fs, sample_index, or timestamp. The standardize.py module guesses these columns with keyword matching (e.g., “chan”, “electrode”, “rms”, “pp”), applies the mapping, and validates the result. This contract simplifies the rest of the pipeline: every processing function—whether notch filtering, PSD estimation, or ERP averaging—takes a standardized DataFrame and returns either a NumPy array or a Matplotlib Figure. We avoid hidden state or side effects; if an argument is missing, the function returns a placeholder with an explanatory message.
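The guess-then-validate flow can be illustrated as follows. This is a hypothetical re-implementation of the idea for clarity; the real keyword lists and validation rules live in standardize.py and may differ in detail.

```python
import pandas as pd

# Keyword lists are illustrative, modeled on the examples in the text.
KEYWORDS = {
    "channel_name": ("chan", "electrode"),
    "value": ("value", "rms", "pp", "amplitude"),
    "sample_index": ("sample", "index"),
    "timestamp": ("time", "stamp"),
}

def guess_mapping(columns):
    """Map raw column names onto the long-table contract by keyword match."""
    mapping = {}
    for col in columns:
        low = col.strip().lower()
        for role, keys in KEYWORDS.items():
            if role not in mapping.values() and any(k in low for k in keys):
                mapping[col] = role
                break
    return mapping

def validate(df):
    """Check the minimal contract: channel_name, value, and a time basis."""
    missing = {"channel_name", "value"} - set(df.columns)
    has_time = bool({"fs", "sample_index", "timestamp"} & set(df.columns))
    problems = [f"missing column: {m}" for m in sorted(missing)]
    if not has_time:
        problems.append("need one of fs, sample_index, or timestamp")
    return problems  # an empty list means the contract is satisfied

raw = pd.DataFrame({"Channel": ["Fp1"], "RMS value": [3.2], "Sample": [0]})
std = raw.rename(columns=guess_mapping(raw.columns))
```

Because validation returns a list of human-readable problems rather than raising, downstream code can surface them directly in the plot area, matching the literate-error philosophy described earlier.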
This pure function design affords three benefits. First, it makes unit testing trivial—each function’s output depends only on its inputs. Second, it allows the GUI and commandline interfaces to call the same logic, ensuring that classroom demonstrations and batch pipelines behave identically. Third, it eases integration: researchers can import eegtool.core.dsp in a Jupyter notebook and apply adaptive notch filtering or Welch PSD to their own arrays without touching the GUI.

Turning Constraints into Friendly Interfaces
A novice often doesn’t know whether a dataset is an event summary (mean RMS per trial) or a full-resolution waveform. EEG QuickLab automatically loads the first worksheet in an Excel file, guesses the roles of columns, and shows the guess in a dropdown. If the guess is wrong, you can fix it manually. Because channel names in real data sometimes appear as “01”, “Fp1” or “ fp1 ”, we canonicalize display names (trim spaces, uppercase, strip leading zeros) but always preserve the original name for indexing.
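The canonicalization rule just described is simple enough to sketch directly; this helper is an illustration of the stated rule (trim, uppercase, strip leading zeros, keep the original for indexing), not the project's actual function.

```python
def canonical_display(name):
    """Display form of a channel name: trim, uppercase, strip leading zeros."""
    cleaned = name.strip().upper()
    return cleaned.lstrip("0") or cleaned  # keep "000" from collapsing to ""

# the original string is preserved as the key, the display form as the value
channels = {raw: canonical_display(raw) for raw in ["01", "Fp1", " fp1 "]}
```

Keeping the raw name as the dictionary key means indexing into the source DataFrame never depends on the cleaned display string.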
User-oriented defaults are carefully chosen. We use a 401-tap Hamming window for FIR band-pass filtering to ensure linear phase; Welch PSD employs Hann windows with 50% overlap to smooth variance; the adaptive notch searches only a narrow band around 50 or 60 Hz to avoid removing genuine brain frequencies. All these choices came from reading primary literature and practical manuals, then distilling them into robust, reproducible code.
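Two of these defaults, the Welch estimator and the ±1 Hz adaptive-notch search, can be sketched together. This is a minimal illustration assuming SciPy; the segment length and test signal are arbitrary choices for the demo, not the shipped defaults.

```python
import numpy as np
from scipy import signal

FS = 250.0  # assumed sampling rate

def welch_psd(x, fs=FS, seg_sec=2.0):
    """Welch PSD with Hann windows and 50% overlap, as described above."""
    nperseg = int(seg_sec * fs)
    return signal.welch(x, fs=fs, window="hann", nperseg=nperseg,
                        noverlap=nperseg // 2)

def find_mains_peak(freqs, psd, nominal=50.0, half_width=1.0):
    """Adaptive-notch target: strongest bin within ±1 Hz of nominal mains."""
    mask = (freqs >= nominal - half_width) & (freqs <= nominal + half_width)
    return freqs[mask][np.argmax(psd[mask])]

# mains noise that has drifted slightly off 50 Hz, plus a 10 Hz alpha rhythm
t = np.arange(0, 8.0, 1.0 / FS)
x = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 50.3 * t)
freqs, psd = welch_psd(x)
peak = find_mains_peak(freqs, psd)
```

Searching only the narrow ±1 Hz band means a genuine gamma-band component near 48 Hz can never be mistaken for mains noise and notched out.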
Implementation Layers
Behind the scenes, the repository is split into clear modules:
1. io.py handles reading and writing; it always returns a DataFrame, never a dict, even for multi-sheet Excel files.
2. standardize.py defines the data contract, guesses mappings, applies them, and checks compliance.
3. dsp.py implements lightweight digital signal processing: notch filters, band-pass FIR design, Welch PSD, band-power integration, adaptive notch, CAR, ERP epoch extraction, artifact candidates using robust z-scores, and time–frequency transforms.
4. visualize.py wraps these computations into figures and uses an “empty text figure” to display helpful messages instead of crashing.
5. The ui folder contains a Tkinter app that orchestrates the workflow, leaving numerical logic to the core modules.
The top-level run_app.py merely instantiates the GUI and starts the Tk event loop. There’s no magic state—version numbers are stored in __init__.py, dependencies are minimal, and the license is MIT.
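One item in dsp.py deserves a closer look: artifact candidates via robust z-scores. The sketch below shows one way to realize that idea, scoring each window's peak-to-peak range against a median/MAD baseline; the shipped implementation may use different features, window sizes, or thresholds.

```python
import numpy as np

def artifact_candidates(x, fs=250.0, win_sec=0.5, thresh=5.0):
    """Flag windows whose peak-to-peak range is a robust-z outlier.

    Robust z-scores use median and MAD instead of mean and std, so a few
    huge blink artifacts cannot inflate the baseline they are judged against.
    """
    n = int(win_sec * fs)
    n_win = len(x) // n
    ranges = np.array([np.ptp(x[i * n:(i + 1) * n]) for i in range(n_win)])
    med = np.median(ranges)
    mad = np.median(np.abs(ranges - med)) or 1e-12  # guard against MAD == 0
    z = 0.6745 * (ranges - med) / mad               # 0.6745 rescales MAD to sigma
    return np.where(z > thresh)[0]                  # indices of suspect windows

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 2500)   # 10 s of background EEG-like noise at 250 Hz
x[1250:1300] += 40.0             # simulated blink-sized transient
bad = artifact_candidates(x)     # window 10 covers the injected transient
```

Because the function is pure (array in, indices out), it slots directly into the testing and GUI/CLI-sharing benefits described earlier.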


Value Beyond Our Team
Our aim was not to build the most sophisticated EEG platform, but to provide a consistent and approachable starting point for many projects. By embracing the electrical and physiological constraints early on, and by implementing all processing as pure functions, we created a tool that newcomers can learn from and that experts can extend. The same pipeline can feed into MNE-Python for independent component analysis or source modeling; the same standardized CSV or Parquet export can be analyzed in R. For iGEM and other open-science competitions, the repository structure—complete with README, license, tests, and a clear API—makes it straightforward for others to fork, improve, and contribute.
In future iterations, we plan to add batch scripts for large-scale QA, optional integrations with ICA libraries, and examples of building simple classifiers on top of the cleaned data. Yet, even now, EEG QuickLab demonstrates how careful attention to physical reality, software architecture, and human-friendly messaging can transform a jumble of raw recordings into a coherent, teachable, and reusable tool.
Overview
To help epilepsy patients receive early warning of seizure risk, our Dry Lab Open Software Team developed two software tools that build on our hardware. ES Detection, powered by model algorithms, analyzes data preprocessed by EEG QuickLab. Specifically, it helps epilepsy patients track their daily EEG readings accurately, thereby improving the accuracy of medication refills.
Background
Whether for predicting medication cycles or preventing illness, routine EEG monitoring and early warning are crucial. We therefore developed this software to calculate patients' seizure risk and warn them in time.
This software addresses the challenges of proactive prevention, as traditional EEG monitoring methods are inconvenient and often only provide real-time feedback. By receiving EEG data transmitted by the hardware, the software predicts the probability of illness at the current stage and issues an alarm when thresholds are exceeded, providing timely alerts to patients.
By making routine monitoring practical and proactive, the software supports patients' daily well-being.
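The threshold-alarm behavior described above reduces to a simple rule. The sketch below illustrates it; the 0.7 cutoff and the probability values are placeholders, since the actual model and threshold used by ES Detection are not specified here.

```python
ALARM_THRESHOLD = 0.7  # illustrative cutoff, not the project's actual value

def check_risk(probability, threshold=ALARM_THRESHOLD):
    """Return an alert message when predicted seizure risk crosses the cutoff."""
    if probability >= threshold:
        return f"ALERT: seizure risk {probability:.0%} exceeds threshold"
    return f"ok: seizure risk {probability:.0%}"

# three consecutive predictions from the model, low to high risk
messages = [check_risk(p) for p in (0.12, 0.55, 0.83)]
```

In the real system this check would run on each batch of hardware-transmitted EEG data, with the alert surfaced in the GUI and logged to the database.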
Introduction
Based on this background, we developed EEG QuickLab and ES:Detection. ES:Detection is local computing software built with MATLAB, Python, and MySQL. With its simple, intuitive interface and easy-to-use options, it saves users valuable time while providing accurate results.
Our software is not only user-friendly but also highly error-tolerant, ensuring a smooth user experience. Its visualization capabilities also enable patients to intuitively understand their current health status.
Furthermore, we adhere to open source principles, meaning anyone can contribute to and expand the software's functionality, enriching the ecosystem.
Methods
MATLAB Main System:
Data Import Module: Processes .xlsx files cleaned in EEG QuickLab
Predictive Analysis Module: Runs epileptic seizure prediction models
Visual Display Module: Generates waveforms and displays results
Log Management Module: Records operation logs and displays them on the interface
Data Processing Engine: Responsible for data preprocessing and format conversion
User Interface Control: Manages the GUI and user interaction flow
Python Database Interface Layer:
DB Connection Module: Manages connections to MySQL
Data Operation Module: Executes SQL queries and transactions
Result Storage Module: Specializes in storing test results
MySQL Database:
Test Result Table: Stores detailed test results
System Log Table: Records system operation logs
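The database interface layer's storage pattern can be sketched as follows. This example uses the standard-library sqlite3 module as a stand-in for MySQL so it runs anywhere, and the table and column names are illustrative, not the project's actual schema; the real layer would use a MySQL driver (with %s placeholders instead of ?).

```python
import sqlite3

# Illustrative schema for the "Test Result Table" role described above.
SCHEMA = """
CREATE TABLE IF NOT EXISTS test_result (
    id INTEGER PRIMARY KEY,
    patient_id TEXT NOT NULL,
    risk REAL NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
"""

def store_result(conn, patient_id, risk):
    """Parameterized insert, mirroring the result-storage module's role."""
    with conn:  # transaction: commits on success, rolls back on error
        conn.execute(
            "INSERT INTO test_result (patient_id, risk) VALUES (?, ?)",
            (patient_id, risk),
        )

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
store_result(conn, "P001", 0.83)
rows = conn.execute("SELECT patient_id, risk FROM test_result").fetchall()
```

Parameterized queries and per-call transactions are the two habits worth carrying over to the MySQL layer: they prevent SQL injection and keep partial writes out of the result table.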

Installation
0. Clone the repository
git clone https://gitlab.igem.org/2025/software-tools/hainanu-china.git
cd hainanu-china
cd ES:Detection
1. Run the install.bat file
This configures the local environment in one click.
2. Run ES_Detection.exe
After it starts successfully, you can begin using the software.
Note: when the software is no longer needed, run uninstall.bat to remove it in one click.
Usage
Click the 'Select Excel File' button to select the .xlsx file produced by data cleaning.
Click the 'Start Prediction' button to start risk detection.
(Optional) After detection completes, click 'Draw characteristic waveform' to generate a timing waveform of frequency-band anomalies.
Conclusion
Serving as a bridge between hardware, software, and models, ES Detection meets patients' daily detection and prevention needs while protecting their privacy and data.