Categories: 1R High School Outreach Program, Neuroscience

High School Student Develops Machine Learning–Assisted Brain-Computer Interface for Seizure Detection and Mobility Support

Editor’s Note

This article is written from a first-person perspective as part of the OneResearch (1R) High School Outreach Program. The content reflects the author’s individual research experiences and interpretations and is intended to showcase early-stage scientific exploration by student researchers.

Background

Amyotrophic lateral sclerosis (ALS) affects an estimated 1.9 to 6 of every 100,000 people worldwide and is characterized by progressive degeneration of motor neurons, ultimately resulting in full-body paralysis and loss of speech. ALS has a devastating personal and societal impact, with average survival after diagnosis of just 2 to 5 years. While assistive technologies like eye-tracking systems and speech synthesizers have helped patients maintain communication, full-body mobility tools such as exoskeletons and neural control systems remain largely inaccessible.

Epilepsy, on the other hand, affects over 50 million people globally, according to the World Health Organization. Around 80% of these individuals live in low- and middle-income countries. The unpredictability of seizures can pose life-threatening risks and severely impair quality of life. While treatment with antiepileptic drugs exists, nearly 30% of patients are drug-resistant, leaving them vulnerable to sudden seizure onset. Current seizure alert systems are often reactive rather than predictive: they rely on signs that appear once a seizure is already underway, such as convulsions or muscle contractions, rather than on the underlying neural precursors.

These two conditions, one progressive and one episodic, expose the same critical gap in biomedical support systems: the need for more intuitive, personalized, and cost-effective neurological interfaces.

Design & Implementation

Brain-Computer Interfaces (BCIs)

A brain-computer interface (BCI) is a system that allows direct communication between the brain, or associated electrical activity, and an external device, such as a computer or robotic system. BCIs are designed to bypass traditional neuromuscular pathways, enabling individuals to interact with technology using only brain signals or subtle bioelectrical inputs.

In my system, for example, the BCI identifies specific patterns related to eye blinks and muscle twitches. These movements produce distinct electrical signals that are detected by ECG electrodes placed near the forehead and temples. The raw signals are streamed to a microcontroller, then passed into a Python-based environment where a trained Support Vector Machine (SVM) classifier determines whether a blink has occurred. If a blink is detected, the BCI sends a command to activate a robotic arm, allowing the user to control movement through these bioelectrical signals alone.
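
A minimal Python sketch of this blink-to-actuation loop is shown below, under assumptions the article does not specify: the microcontroller streams one sample value per line over USB serial, the trained SVM has been saved to a file, and the robotic arm accepts simple serial commands. The helper extract_features(), the file name, and the port names are illustrative placeholders, not details of the actual system.

```python
# Sketch of a real-time blink-detection loop that drives a robotic arm.
# Port names, the saved model file, and extract_features() are assumptions.

import numpy as np
import serial                     # pyserial, for reading the streamed signal
from joblib import load           # to reload a previously trained SVM

WINDOW_SIZE = 250                 # samples per decision window (assumed rate)

def extract_features(window: np.ndarray) -> np.ndarray:
    """Simple time-domain features: peak amplitude and mean absolute change."""
    return np.array([np.max(np.abs(window)), np.mean(np.abs(np.diff(window)))])

clf = load("blink_svm.joblib")                     # hypothetical model file
sensor = serial.Serial("/dev/ttyUSB0", 115200)     # hypothetical sensor port
arm = serial.Serial("/dev/ttyUSB1", 9600)          # hypothetical servo link

buffer = []
while True:
    sample = float(sensor.readline().decode().strip())
    buffer.append(sample)
    if len(buffer) == WINDOW_SIZE:
        features = extract_features(np.array(buffer)).reshape(1, -1)
        if clf.predict(features)[0] == 1:          # 1 = blink detected
            arm.write(b"MOVE\n")                   # hypothetical arm command
        buffer.clear()
```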

Experimental Design

I began my research journey in 7th grade by testing the capabilities of a commercial EEG device. Initial experiments used training periods of 20 to 80 seconds to distinguish between “blink” and “no-blink” states from time-segmented neural input. These signals, specifically artifacts related to eye and slight facial-muscle movement, were processed by a trained classifier and translated into robotic arm motion, enabling basic control through voluntary blinks and facial twitches.

In 8th grade, I expanded this framework into a more robust, early-stage brain-computer interface (BCI) system for mobility and seizure detection. BCIs interpret brain activity and convert it into executable commands, offering new pathways for communication and control without traditional neuromuscular input. To prototype a BCI using accessible materials, I employed consumer-grade ECG sensors placed at frontal and temporal electrode positions (T9, T10, AF7, and AF8) to capture blink-associated electrical signals. Although these sensors are typically used for cardiac monitoring, they were repurposed to provide low-cost alternatives to clinical EEG systems and embedded into a 3D-printed head frame for consistent placement and reduced motion artifacts.

Signal acquisition was handled by a microcontroller and ADC, with data streamed to a Python-based processing environment. I developed a training dataset of blink vs. no-blink intervals, applying basic noise filtering and extracting features such as peak amplitude and time-domain signal changes. A Support Vector Machine (SVM) classifier was trained using an 80/20 train-test split, achieving classification accuracies exceeding 97% across trials. The model’s output activated a servo-driven robotic arm in real time, converting neural input into directional movement.
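
In code, this training step might look roughly like the sketch below, which assumes the labeled blink and no-blink windows have already been exported as NumPy arrays. The file names, the exact feature calculations, and the SVM kernel are assumptions; only the feature types, the 80/20 split, and the SVM classifier come from the description above.

```python
# Sketch of offline feature extraction and SVM training on labeled windows.
# File names and the kernel choice are assumptions, not project details.

import numpy as np
from joblib import dump
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

windows = np.load("blink_windows.npy")   # shape (n_windows, n_samples), hypothetical file
labels = np.load("blink_labels.npy")     # 1 = blink, 0 = no blink, hypothetical file

def time_domain_features(w: np.ndarray) -> np.ndarray:
    """Peak amplitude plus a simple measure of sample-to-sample change."""
    return np.array([np.max(np.abs(w)), np.mean(np.abs(np.diff(w)))])

X = np.vstack([time_domain_features(w) for w in windows])
y = labels

# 80/20 train-test split, as described in the text
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

clf = SVC(kernel="rbf")            # kernel choice is an assumption
clf.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))

dump(clf, "blink_svm.joblib")      # saved for use in a real-time loop
```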

Building on this system, I adapted the architecture to begin exploring seizure prediction using EEG-style data. By extending the time windows and incorporating frequency-based features, the model was trained to identify pre-ictal patterns associated with epilepsy. While this application remained in early-stage testing, it demonstrated the potential for low-cost, ML-assisted BCIs to enhance both mobility and early neurological event detection for underserved populations.
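
One common way to add frequency-based features is to compute per-band spectral power with Welch’s method; the sketch below illustrates that general approach as a stand-in. The specific bands, sampling rate, and estimator used in the actual project are not described here and are assumed values.

```python
# Illustrative frequency-band features (Welch band power) that could feed the
# same SVM pipeline for pre-ictal pattern detection. Bands, sampling rate,
# and window length are assumptions.

import numpy as np
from scipy.signal import welch

FS = 250                                   # assumed sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

def band_powers(window: np.ndarray, fs: int = FS) -> np.ndarray:
    """Average spectral power in each EEG band for one time window."""
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), fs * 2))
    return np.array([
        psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()
    ])

# Example: turn a longer window into a feature vector for classification.
window = np.random.randn(FS * 10)          # placeholder 10-second segment
print(dict(zip(BANDS.keys(), band_powers(window))))
```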

Broader Impact

Neurological disorders like ALS and epilepsy disproportionately affect individuals in low-resource settings, where access to advanced diagnostics and assistive technologies is limited or nonexistent. Clinical-grade EEG systems can cost upwards of $10,000, and mobility-support devices like robotic exoskeletons often exceed $50,000. This puts them far beyond reach for the average patient. As a result, millions of individuals are left without the tools needed to communicate, move, or receive real-time neurological care.

This project demonstrates the potential of low-cost, machine learning–assisted brain-computer interfaces to bridge that gap. By leveraging sub-$10 ECG sensors, open-source software, and 3D-printed hardware, the system offers a scalable alternative to traditional neurotechnology. Studies have shown that BCI systems can significantly improve functional independence in individuals with motor impairments, with some interventions leading to improvements in upper limb motor function and daily task performance of up to 70%. Additionally, seizure forecasting using wearable devices has demonstrated predictive accuracy with area under the curve (AUC) scores reaching 0.77, offering the potential to reduce seizure-related injuries for four out of six patients through timely risk awareness.

The dual functionality of this BCI prototype, enabling robotic movement via neural input and identifying pre-ictal EEG-like patterns, addresses critical needs around response time, safety, and day-to-day autonomy for individuals living with epilepsy or paralysis.

Looking ahead, this framework could be further scaled into wearable headsets for daily use, integrated with cloud-based monitoring platforms, or expanded to detect a wider range of neural events. With further development, this research could serve as the foundation for accessible, AI-powered neurotools that democratize healthcare access for millions worldwide.

Categories: 1R High School Outreach Program, COVID-19

How Telemedicine Impacts Healthcare Post-COVID

Introduction

Throughout the COVID-19 pandemic, our systems and institutions were tasked with quickly adapting their services to an increasingly virtual audience. This is particularly true of the medical world, where telemedicine rapidly became the go-to solution for socially distanced medical care.

Telemedicine, or telehealth, is the delivery of healthcare services through digital platforms, allowing patients to connect with their healthcare providers remotely. The ability to meet with physicians from the comfort of your home became a necessity during the COVID-19 pandemic, and since then, telemedicine has remained a key aspect of nationwide healthcare. According to Stephanie Watson at Harvard Health, “76 percent of hospitals in the U.S. connect doctors and patients remotely via telehealth, up from 35 percent a decade ago.”

While it promotes accessibility and convenience for patients and physicians alike, the drawbacks of telemedicine include quality concerns and potential technological barriers for certain demographics. This raises the question: Is telemedicine obsolete in a post-COVID world, or do the benefits of remote healthcare outweigh the costs?

Benefits

The primary benefit of telemedicine post-COVID is increased access to healthcare for populations in rural areas, those without reliable transportation, and immobile or busy patients. Patients who lack the time, transportation, or mobility to attend regular in-person medical visits are much better accommodated by a virtual model. This system also increases convenience for the vast majority of patients, whether or not they fall into one of these demographics.

During virtual visits, clinicians are also less likely to be exposed to infection or disease, further maximizing the care they are able to provide long-term. 

Additionally, telemedicine provides support for patients’ continuity of care, offering easier opportunities for follow-up appointments and check-ins for those with chronic conditions. Further, the implementation of telemedicine can reduce “medication misuse, unnecessary emergency department visits, and prolonged hospitalizations.” 

Drawbacks

Although telemedicine provides increased accessibility to healthcare, this doesn’t mean patients are taking full advantage of it. According to a Stanford study, “increased telemedicine access is associated with a modest, 3.5% increase in the utilization of primary care.” While a 3.5% increase translates to a large number of patients in absolute terms, it represents a smaller uptake than many expected.

One of the largest concerns regarding the wide implementation of telemedicine is the quality of care. The Institute of Medicine (US) Committee on Evaluating Clinical Applications of Telemedicine has identified three key quality issues: “overuse of care (e.g., unnecessary telemedicine consultations); underuse of care (e.g., failure to refer a patient for a necessary consultation); and poor technical or interpersonal performance (e.g., incorrect interpretation of pathology specimen or inattention to patient concerns).” 

Further, telehealth can widen the digital divide, creating particular difficulty for older and low-income demographics. According to a Mayo Clinic study, the concordance of diagnoses between in-person and virtual appointments was 86.9%, and “for every 10-year increase in the patients’ age, the odds of receiving a concordant diagnosis by video telemedicine decreased by 9%.”


Physician’s Perspective

Dr. Maryam Kashi, a gastroenterologist with AdventHealth in Central Florida, uses a hybrid model to provide patient care. Since 2020, she has run two days of in-person clinic and three days of virtual clinic each week.

According to her own experience, Dr. Kashi believes that quality of care is held to the same standard in both in-person and virtual visits. She says that her hybrid model allows her to ensure that all patients with issues requiring physical exams or other in-person needs are able to be accommodated. Meanwhile, patients who only need a brief post-op check-in are able to meet with Dr. Kashi virtually at their convenience. 

Dr. Kashi contends that her current hybrid model, which includes a majority of virtual visits, elicits appreciative and receptive responses from patients as they experience greater convenience and access to healthcare. 

Conclusion

Telemedicine offers an accessible and efficient alternative to in-person care. While there are valid concerns about quality of care and the digital divide, the service’s substantial benefits make a strong case for its continued use beyond COVID-era restrictions. Hybrid models, like Dr. Kashi’s, ensure that patients are able to receive the care they need, whether their constraints are physical or technological. Ultimately, adopting an inclusive system that includes telemedicine helps ensure that the greatest number of patients receive appropriate medical care.

References

Demaerschalk, Bart M. “Clinician Diagnostic Concordance with Video Telemedicine at Mayo Clinic from March to June 2020.” JAMA Network Open, 2 Sept. 2022, jamanetwork.com/journals/jamanetworkopen/fullarticle/2795871.

Gajarawala, Shilpa N, and Jessica N Pelkowski. “Telehealth Benefits and Barriers.” The Journal for Nurse Practitioners : JNP, U.S. National Library of Medicine, 17 Feb. 2021, www.ncbi.nlm.nih.gov/pmc/articles/PMC7577680/#bib3. 

Institute of Medicine (US) Committee on Evaluating Clinical Applications of Telemedicine. “Evaluating the Effects of Telemedicine on Quality, Access, and Cost.” Telemedicine: A Guide to Assessing Telecommunications in Health Care., U.S. National Library of Medicine, 1 Jan. 1996, www.ncbi.nlm.nih.gov/books/NBK45438/. 

Watson, Stephanie. “Telehealth: The Advantages and Disadvantages.” Harvard Health, 12 Oct. 2020, www.health.harvard.edu/staying-healthy/telehealth-the-advantages-and-disadvantages. 

Zeltzer, Dan, et al. “The Impact of Increased Access to Telemedicine.” Stanford, 2023, web.stanford.edu/~leinav/pubs/JEEA2018.pdf.