Brain-computer interfaces (BCIs) decode user commands from brain signal recordings. In an ideal, intuitive system, the user would only need to think a command word, such as ‘on’ to switch on a TV, or ‘up’ to raise a robotic arm. These internal commands are known as speech imagery or imagined speech.
Speech imagery has already been decoded with impressive accuracy by systems whose sensors are placed in direct contact with the brain through surgery. However, these systems carry surgical risks and cause permanent changes to the body, which limits their practicality. An alternative recording approach is electroencephalography (EEG), which is non-invasive: the user wears a cap with sensors that record electrical activity on the scalp. Although easier to acquire, these signals are less clear than those recorded with invasive sensors, making speech imagery more challenging to decode. They are less clear because, as they travel from their source in the brain to the scalp, they mix together and accumulate irrelevant signal elements, which we call ‘noise’. The skull, being a poor conductor of electricity, further diminishes signal quality.
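To make this concrete, the toy simulation below illustrates the idea (it is not part of the project's pipeline, and all the gains, frequencies, and noise levels are assumed for illustration): two cortical sources mix on the way to the scalp, the skull attenuates them, and sensor noise is added, leaving only a fraction of the recorded power attributable to the signal of interest.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
fs = 250                      # sampling rate in Hz (a typical EEG rate)
t = np.arange(0, 2, 1 / fs)   # two seconds of data

# Two hypothetical cortical sources: one we care about, one unrelated.
signal_of_interest = np.sin(2 * np.pi * 10 * t)   # 10 Hz activity
unrelated_activity = np.sin(2 * np.pi * 22 * t)   # activity from elsewhere

# On the way to the scalp the sources mix together, the skull attenuates
# them (crudely modelled here as a 0.3 gain), and sensor noise is added.
attenuation = 0.3
sensor_noise = 0.5 * rng.standard_normal(t.size)
scalp = attenuation * (signal_of_interest + unrelated_activity) + sensor_noise

# How much of the recorded power comes from the signal we actually want?
wanted = attenuation * signal_of_interest
unwanted = scalp - wanted
snr_db = 10 * np.log10(np.mean(wanted**2) / np.mean(unwanted**2))
print(f"SNR of the signal of interest at the scalp: {snr_db:.1f} dB")
```

Running this gives a negative signal-to-noise ratio: most of the recorded power at the scalp comes from mixing and noise rather than from the activity we want to decode, which is exactly why scalp EEG is harder to work with than invasive recordings.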
[Figure: The EEG recording setup used in this project.]
EEG-based BCIs using other paradigms have achieved higher decoding accuracies than speech imagery. One popular alternative uses flickering lights, which produce distinct patterns in brain signals known as steady-state visually evoked potentials (SSVEPs): oscillations that match the flicker rate of the light the subject is looking at. SSVEP-based systems therefore typically present menus in which each option has a light flickering at a different rate. Although these systems decode accurately, they can be visually tiring for the user. Another popular alternative is motor imagery, which involves imagining limb movements and produces distinct changes in motor-associated brain signals. Some motor imagery commands are intuitive, such as imagining a left-hand movement to turn a wheelchair to the left, but others, such as imagined tongue and leg movements to change the wheelchair's speed, are highly counter-intuitive. Thus, there is growing interest in improving the decoding of speech imagery to produce the next generation of intuitive EEG-based BCIs.
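As an illustration of the SSVEP idea, the sketch below decodes which menu option a subject is looking at by comparing spectral power at the candidate flicker rates. The menu options, flicker frequencies, and simulated EEG are all hypothetical; real systems use more robust detectors, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
fs = 250                      # sampling rate in Hz
t = np.arange(0, 4, 1 / fs)   # four seconds of recording

# Hypothetical menu: each option has a light flickering at a different rate.
flicker_rates = {"TV on": 8.0, "TV off": 10.0, "volume up": 13.0}

# Simulate a subject looking at the 10 Hz option: the EEG oscillates at
# that rate, buried in broadband noise.
eeg = 0.8 * np.sin(2 * np.pi * 10.0 * t) + 2.0 * rng.standard_normal(t.size)

# Decode by measuring spectral power at each candidate flicker rate.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f_target):
    """Spectral power at the frequency bin closest to f_target."""
    return spectrum[np.argmin(np.abs(freqs - f_target))]

chosen = max(flicker_rates, key=lambda option: power_at(flicker_rates[option]))
print(f"Decoded selection: {chosen}")
```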
The SIDec project, ‘Enhancing Speech Imagery Decoding for EEG-based Brain-Computer Interface Systems’, focuses on exploring methods to improve the decoding of speech imagery from EEG data. The scope of the project is to use signal processing, machine learning, and deep learning to improve the classification of speech imagery EEG data. Primarily, it will investigate the following questions:
Which set of words is most discriminative in EEG? Which words can be distinguished most accurately from the resting state, that is, when the subject is not performing speech imagery? Why are these words distinguished more easily, and what patterns do they produce in the signal? (A minimal sketch of such a word-versus-rest classification pipeline is given after this list.)
Studies use a variety of techniques for decoding speech imagery, but how do the leading techniques compare?
Are there ways to improve the quality of the data used to train the classification algorithms, so that more accurate decoding can be achieved?
We know that the brain regions involved in generating speech imagery overlap with those for auditory processing, so what is the impact of background noise (e.g. music) on the quality of speech imagery? How can this impact be reduced?
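As a minimal sketch of what such a classification study involves (this is not the project's actual method; the data are simulated and the band-power feature and effect on one channel are assumptions made for illustration), the snippet below trains a linear discriminant classifier, a common baseline in BCI work, to separate speech imagery epochs from rest:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=0)
fs, n_epochs, n_channels, n_samples = 250, 80, 8, 500

# Simulated dataset: half the epochs are 'imagined word', half are rest.
# Assumption for illustration: imagery slightly boosts one channel's power.
X_raw = rng.standard_normal((n_epochs, n_channels, n_samples))
y = np.repeat([0, 1], n_epochs // 2)          # 0 = rest, 1 = speech imagery
X_raw[y == 1, 2, :] += 0.7 * np.sin(2 * np.pi * 15 * np.arange(n_samples) / fs)

# A classic EEG feature: log band power per channel.
features = np.log(np.mean(X_raw ** 2, axis=2))

# Estimate decoding accuracy with 5-fold cross-validation.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, features, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```

In practice the project's questions turn on exactly these ingredients: which words produce separable features, which features and classifiers separate them best, and how data quality affects the cross-validated accuracy.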
The findings from these sub-studies will then be combined to develop an online BCI that the user can interact with through speech imagery.
The SIDec project involves the Centre for Biomedical Cybernetics and the Department of Systems and Control Engineering at the University of Malta. It is a two-year collaborative project with the Computer Science Department at Hangzhou Dianzi University in China, running from January 2024 to December 2026. In Malta, the project is being led by Prof. Ing. Kenneth Camilleri, with co-investigator Prof. Ing. Tracey Camilleri, together with Dr Natasha Padfield as Research Support Officer. In China, the project is being led by Prof. Yong Peng, with assistance from Dr Yuhang Ming and other academics and researchers. Project SIDec received funding from Xjenza Malta and the Ministry for Science and Technology of the People’s Republic of China (MOST), through the SINO-MALTA Fund 2023 Call (Science and Technology Cooperation).
In July 2024, Prof. Yong Peng and Dr Yuhang Ming from Hangzhou Dianzi University visited the University of Malta for a three-day consortium meeting. The meeting covered the progress each team had made on SIDec, as well as adjacent research areas, to facilitate possible future collaborations.