Senior Design Projects

ECS193 A/B Winter & Spring 2022

Build a human-in-the-loop psychophysics speech synthesis simulator for a brain-computer interface to restore speech

Email **********
Sergey Stavisky
Department of Neurological Surgery (affiliated with GGCS, BMEGG)

Project details

People who lose the ability to speak due to neurological conditions such as ALS or stroke have an urgent and unmet need for therapies that would restore their ability to speak. The UC Davis Neuroprosthetics Lab (PIs: Sergey Stavisky and David Brandman) and the Auditory Neuroengineering and Speech Recognition Lab (PI: Lee Miller) are developing a new way to help such patients using an implanted electronic medical device called a ‘brain-computer interface’ (BCI), which measures the person’s brain activity with very high precision as they try to talk and then outputs their intended words. A speech synthesis BCI will introduce a latency between a user’s desire to speak and a computer’s synthesized interpretation of their neural signals. This project focuses on establishing key design specifications regarding what latency can be tolerated.
The goal of this project is to quantify the effects of speech latency and errors on speech production. The project will begin by building a low-latency audio feedback system: a combination of a physical headset, microphone, masking speakers, and software which together allow a healthy subject to speak without hearing their own voice, and then hear what they just said digitally replayed under the experimenter’s control. This will entail identifying which components already exist and which need to be custom-built, then integrating them into a complete system. Next, the team will demonstrate that the platform can be used to measure people’s ability to speak under different latency and alteration conditions. A stretch goal would be to not only delay the auditory feedback, but also to introduce specific types of errors, ranging from simply adding noise to more targeted perturbations.
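At its core, the experimenter-controlled replay described above reduces to a delay line: each microphone sample is buffered and played back a fixed number of samples later. The following is a minimal Python sketch of that idea; the 16 kHz sample rate and 200 ms latency are illustrative assumptions, not project specifications, and a real system would feed this from a low-latency audio callback rather than a Python list.

```python
# Minimal sketch of a fixed-latency delay line, the core of a
# delayed auditory feedback loop. Sample rate and delay are
# illustrative assumptions only.

SAMPLE_RATE = 16_000  # Hz (assumed)

def make_delay_line(delay_samples):
    """Return a per-sample processor: each call returns the sample
    received delay_samples calls earlier (silence until the ring
    buffer fills)."""
    buffer = [0.0] * delay_samples
    index = 0

    def process(sample):
        nonlocal index
        out = buffer[index]        # sample from delay_samples ago
        buffer[index] = sample     # store the current sample
        index = (index + 1) % delay_samples
        return out

    return process

# Example: a 200 ms delay at 16 kHz is 3200 samples.
delay = make_delay_line(int(0.2 * SAMPLE_RATE))
signal = [float(n) for n in range(5000)]   # stand-in for mic samples
delayed = [delay(s) for s in signal]
# The first 3200 outputs are silence; output n then echoes input n - 3200.
```

In a real implementation the same buffer logic would run inside the audio driver's block callback (e.g., via PortAudio), where the achievable minimum latency is set by the hardware buffer size rather than by this software delay.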
• A working prototype of the speech synthesis Simulator.
• Several example datasets of the team using the Simulator (as if they are subjects) under different feedback latency and error conditions.
• Open-sourced code and clear documentation of how to assemble the components and run the system, so that future researchers can use the Simulator to perform more extensive experiments with latency and error parameter sweeps to optimize design specifications for a speech BCI.
• Someone on the team will need to be comfortable programming in a low-level language (e.g., C/C++), particularly for implementing a closed-loop experiment system.
• Someone on the team should have high-level programming experience for data visualization and analysis (e.g., Python or MATLAB).
• Helpful skills include: signal processing, audiology, and audio electronics (e.g., from music performance or music production hobbies).
**********
30-60 min weekly or more
Open source project
No
Team members N/A