Background and activities

Project proposals for ITK 2021-2022 (Molinas)

All the projects offered are based on a new concept of low-density EEG, FlexEEG, for source localization and EEG signal classification, described below:

David and Goliath - FlexEEG: a new concept of reduced channel EEG system with brain imaging capabilities

This project is the foundation for all the projects offered by my team. It is a multidisciplinary project in collaboration with the Department of Electronic Systems and the Developmental Neuroscience Laboratory of NTNU, and the Human Sleep Lab of the University of Tsukuba in Japan.

All projects listed require two students.

Traditionally, EEG signals are recorded with several wet electrodes placed in a regular pattern on the scalp. To ensure stable and reliable readings, a conductive gel is usually applied to the scalp. The resulting procedure is time-consuming, expensive, and uncomfortable for the patient.

As an alternative to the traditional method, FlexEEG proposes reduced-channel EEG with dry, wireless electrodes. This simplified approach requires competencies in advanced electronics, signal processing, inverse problem solving and system identification. Our team is currently working towards such solutions, with several Master and PhD students with backgrounds in electronics, signal processing, machine learning, artificial intelligence and embedded technologies.

Here is an artist's impression of the FlexEEG concept.

Currently, one PhD student is developing an “in-house” EEG headset with flexible, wireless dry electrodes that can move across the scalp.

The main supervisor of these projects is Marta Molinas, marta.molinas@ntnu.no

The co-supervisors at NTNU are:

Lars Lundheim: NTNU IES

Trond Ytterdal: NTNU IES

Audrey Van Der Meer: NTNU Developmental Neuroscience Lab

List of Projects

1. FlexEEG in Human Sleep Research: within this project, 3 different topics are offered:

1.1 Rapid Eye Movement (REM) onset detection during REM sleep.

1.2 Automatic sleep stage classification based on minimally invasive EEG.

1.3 Automatic emotion recognition based on minimally invasive EEG.

2. FlexEEG for Brain-Ventilator interface: within this project, 2 different topics are offered:

2.1 EEG based onset detection of spontaneous breathing during mechanical ventilation

2.2 Extubation under guidance of activated respiratory-associated region detected by EEG

3. FlexEEG headset prototype development: Within this project, 2 different topics are offered:

3.1 Design and control of a robotic system for EEG measurements

3.2 EEG signal quality analysis with moving electrodes

4. Design of an EEG based communication system for patients with Locked-in Syndrome.

5. Flying a Drone with your Mind: FlexEEG motor imagery

6. FlexEEG based BCI system for ADHD Neurofeedback

7. FlexEEG based Lie Detector

8. FlexEEG based Biometric System for Subject Identification

9. The Augmented Human: Development of BCI for RGB colour-based automation

10. FlexEEG based Alcohol Detector

11. Automatic detection of attentional states based on Webcam eye-tracking. 

Description of Projects

1. FlexEEG in Human Sleep Research: within this project, 3 different topics are offered:

1.1 Rapid Eye Movement (REM) onset detection during REM sleep.

REM and NREM sleep studies have revealed much about the physiology of sleep and its disorders, and about the pathophysiology of mental illness. Electrooculography (EOG) is used in polysomnography (PSG) studies to capture the electrical activity generated by the human eye, which can be regarded as an electric dipole with its positive and negative poles at the cornea and retina, respectively. Phasic REM sleep parameters include the number and incidence of rapid eye movements. This project will use EOG signals from PSG studies to detect the exact number and timing of rapid eye movements during REM sleep. The algorithm will be implemented by modelling the peak-gradient relationship and duration of the movements in EOG signals, starting with the parameters proposed in [1]. In general, human physiological signals are qualitatively similar but not quantitatively identical. For example, the sawtooth wave (STW) is one of the characteristic EEG patterns of REM sleep, but the density, duration, and frequency of STWs vary between individuals. The algorithm will therefore address interpersonal EOG variability in order to make detection sensitive to these differences.
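As a first illustration of this kind of slope-based detection, the sketch below flags steep, large EOG deflections and merges nearby flags into single movements. All values (slope threshold, merge gap, signal amplitudes) are illustrative placeholders, not the calibrated parameters from [1]:

```python
import numpy as np

def detect_rem_events(eog, fs, slope_thresh=1000.0, merge_gap_s=0.5):
    """Flag samples where the EOG slope (in uV/s) exceeds a threshold
    and merge nearby flags into (start, end) events in seconds."""
    slope = np.abs(np.gradient(eog) * fs)        # approximate slope, uV/s
    events = []
    for idx in np.flatnonzero(slope > slope_thresh):
        t = idx / fs
        if events and t - events[-1][1] <= merge_gap_s:
            events[-1][1] = t                    # extend the current event
        else:
            events.append([t, t])                # open a new event
    return [tuple(e) for e in events]

# Synthetic demo: quiet EOG with two 200 uV saccade-like deflections.
fs = 100
eog = np.zeros(10 * fs)
for onset in (2.0, 6.0):
    i = int(onset * fs)
    eog[i:i + 10] += np.linspace(0.0, 200.0, 10)       # 0.1 s rise
    eog[i + 10:i + 30] += 200.0                        # brief plateau
    eog[i + 30:i + 40] += np.linspace(200.0, 0.0, 10)  # 0.1 s return
events = detect_rem_events(eog, fs)
```

A real detector would additionally model movement duration and per-subject parameters, which is precisely where this project starts from [1].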

This project will be in collaboration with the IIIS Human Sleep Lab of the University of Tsukuba, from where the sleep data will be obtained. The EEG ITK team is currently collaborating with the Human Sleep Lab on several other projects related to sleep, and sleep data is already available to initiate the project. The student in this topic will collaborate with the student in Topic 3.

[1] Kazumi Takahashi and Yoshikata Atsumi, Sleep and Sleep States, Precise Measurement of Individual Rapid Eye Movements REM Sleep of Humans. Sleep, 20(9):743-752, 1997 American Sleep Disorders Association and Sleep Research Society

Main supervisor: Marta Molinas, marta.molinas@ntnu.no 

Co-supervisor: Andres Felipe Soler Guevara, andres.f.soler.guevara@ntnu.no

Contact person in Japan: Takashi Abe, IIIS- University of Tsukuba

1.2 Automatic sleep stage classification based on minimally invasive EEG.

Sleep staging is a process typically performed by sleep experts and is a tedious and lengthy task, during which labels are manually assigned to polysomnographic (PSG) recording epochs. In order to reduce the time and effort required to accomplish this, a large number of automatic sleep staging methodologies have been developed. This is typically achieved through a series of steps, which in general include signal pre-processing, feature extraction and classification. The methods used for each of these steps vary greatly among the proposed approaches, with the final result providing varying degrees of accuracy. In this project, the students will develop a pipeline for sleep staging using data obtained from the IIIS Human Sleep Lab of the University of Tsukuba.

The project will be focused on the use of a single type of signal (EEG) and a reduced electrode count, as these can potentially provide minimally invasive sleep monitoring solutions. The sleep stage classification will be conducted following widely accepted rules, i.e., the American Academy of Sleep Medicine (AASM) and Rechtschaffen & Kales (R&K) rules.
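The pre-processing / feature-extraction / classification chain described above can be sketched as follows, using band-power features and a random forest on synthetic stand-in epochs rather than real PSG data; the band edges and the classifier choice are illustrative assumptions, not this project's final design:

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def bandpower_features(epoch, fs):
    """Relative power in the classic delta/theta/alpha/beta bands
    for one 30 s EEG epoch (textbook band edges in Hz)."""
    freqs, psd = welch(epoch, fs, nperseg=fs * 2)
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]
    total = psd.sum()
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum() / total
                     for lo, hi in bands])

# Synthetic stand-in epochs: "deep-sleep-like" (2 Hz dominant) vs
# "wake-like" (10 Hz alpha dominant), plus a little noise.
rng = np.random.default_rng(0)
fs = 100
t = np.arange(30 * fs) / fs
X, y = [], []
for k in range(40):
    f = 2.0 if k % 2 == 0 else 10.0
    epoch = np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
    X.append(bandpower_features(epoch, fs))
    y.append(k % 2)          # 0 = deep-sleep-like, 1 = wake-like
X, y = np.array(X), np.array(y)

# Train on the first 30 epochs, evaluate on the remaining 10.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:30], y[:30])
accuracy = clf.score(X[30:], y[30:])
```

On real recordings the classes overlap heavily and epochs are labelled with the five AASM stages, so the feature set and model would need to be considerably richer.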

This project will be in collaboration with the IIIS Human Sleep Lab of the University of Tsukuba, from where the sleep data will be obtained. The EEG ITK team is currently collaborating with the Human Sleep Lab on several other projects related to sleep, and sleep data is already available to initiate the project.

Main supervisor: Marta Molinas

Co-supervisor: Luis Alfredo Moctezuma, luis.a.moctezuma@ntnu.no 

Contact person in Japan: Takashi Abe, IIIS- University of Tsukuba

1.3 Automatic emotion recognition based on minimally invasive EEG.

Every day, our emotional states influence our behavior, decisions, relationships and health. Affective states play an essential role in decision-making: they can facilitate or hinder problem-solving. Emotion self-awareness can help people manage their mental health and optimize their work performance. Automatic detection of emotion dimensions can increase our understanding of emotions and promote effective communication among individuals and in human-to-machine information exchanges. In addition, automatic emotion recognition will play an essential role in emotion monitoring in health-care facilities (the WHO estimates that depression, as an emotional disorder, will soon be the second leading cause of the global burden of disease), in gaming and entertainment, in teaching-learning scenarios, in optimizing performance in the workplace, and in artificial intelligence entities designed for human interaction.

Decoding EEG signals and relating them to specific emotions is a complex problem. Affective states do not map simply onto specific brain structures, because different emotions activate the same brain locations and, conversely, a single emotion can activate several brain structures. A neural model of human emotions would be beneficial for building an emotion recognition system and for developing applications in emotion understanding and management.

The objective of this project is to develop neural models that can decode human emotions through EEG signal analysis by learning from training data from experiments designed for emotion elicitation. The process will involve signal analysis, pre-processing, feature extraction and selection, design of classification algorithms and performance evaluation.
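One feature commonly used in EEG emotion research is frontal alpha asymmetry; the sketch below computes it from a left/right channel pair. The channel roles, sampling rate, and band edges are illustrative assumptions, not the protocol of this project:

```python
import numpy as np
from scipy.signal import welch

def frontal_alpha_asymmetry(left, right, fs):
    """ln(alpha power, right) - ln(alpha power, left), e.g. for an
    F3/F4 electrode pair; alpha band taken as 8-13 Hz."""
    def alpha_power(x):
        freqs, psd = welch(x, fs, nperseg=fs * 2)
        return psd[(freqs >= 8) & (freqs < 13)].sum()
    return np.log(alpha_power(right)) - np.log(alpha_power(left))

# Synthetic demo: the right channel carries stronger 10 Hz alpha.
rng = np.random.default_rng(1)
fs = 128
t = np.arange(10 * fs) / fs
left = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
right = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
fai = frontal_alpha_asymmetry(left, right, fs)
```

A single scalar like this is only one input among many; the project's neural models would learn far richer representations from the elicitation data.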

This project will be in collaboration with the Innovation Medical Research Institute of the University of Tsukuba (Japan), from where the dataset for the study will be obtained.

Main Supervisor: Marta Molinas

Co-supervisor: Luis Alfredo Moctezuma, luis.a.moctezuma@ntnu.no

Contact person in Japan: Takashi Abe, IIIS, University of Tsukuba

2. FlexEEG for Brain-Ventilator interface: within this project, 2 different topics are offered:

2.1 EEG based onset detection of spontaneous breathing during mechanical ventilation

Mechanical ventilation (MV) is used in the treatment of acute respiratory distress syndrome (ARDS) to secure ventilation and oxygenation, and is based on positive-pressure delivery of air into the lungs. Although life-sustaining, mechanical ventilation can also cause damage in the form of ventilator-induced lung injury. Inadequate and asynchronous ventilation with superimposed spontaneous breathing has the potential to aggravate lung injury. Synchronizing spontaneous breathing to mechanical ventilation reduces its invasiveness, can enable earlier discharge from the ICU, and has been shown to lead to higher survival rates. Well-timed synchronization of spontaneous breathing with ventilator parameters requires detecting the onset of spontaneous breathing, which today is achieved by detecting the onset of diaphragm movement (invasive and prone to inaccuracies) or by respiratory flow signal analysis (non-invasive but so far offline). Neither method is accurate enough in detecting the exact onset timing, and a certain delay is introduced when synchronizing with the ventilator. The high time resolution of EEG can be exploited to instantly detect the onset of spontaneous breathing at its roots in the brain. Unconscious and conscious breathing activate different parts of the brain: unconscious breathing is controlled by the medulla oblongata in the brain stem, while conscious breathing originates in the more evolved areas of the brain in the cerebral cortex.

The students will develop algorithms to detect the onset of spontaneous breathing in EEG data recorded during the wake-up stage of anesthetized patients (conscious-unconscious-conscious states) and from healthy subjects during sleep (conscious-unconscious-conscious states). The brain source activation at the onset of spontaneous breathing in the wake-up stage will be detected and compared between the two cases. In addition, exhalation flow detection will be used to verify spontaneous breathing activation in those subjects. This finding may give the opportunity to optimize the synchronization between the ventilator and the patient's breathing, and can be an important factor for improved survival in the ICU.

The project will be carried out in collaboration with the 175 Military Hospital in Ho Chi Minh City (Vietnam), which will provide the EEG data, and the company METRAN (Japan), which manufactures mechanical ventilators.

The project will require two students working in collaboration, one on the EEG sleep data and the other on the EEG data from anesthetized patients. An engineer at METRAN will work together with the students on this project.

Main Supervisor: Marta Molinas, marta.molinas@ntnu.no

Contact person at METRAN: Shinichi Nakane, nakane@metran.co.jp

Contact person at 175 Military Hospital: Van Ho, hovan@metran.co.jp

2.2 Extubation under guidance of activated respiratory-associated region detected by EEG

Mechanical ventilation prolongs ICU stay and is associated with a high mortality rate. A shortened period of mechanical ventilation and the right timing of extubation are key to improving sequential invasive ventilation and reducing the mortality rate.

This project will work on signalling to the ventilator the exact timing of the detected activation of the respiratory-associated brain region, via EEG and the Glasgow Coma Scale, to guide the attending physician on extubation and respiratory management.

This topic depends on the effective outcome of topic 2.1.

3. FlexEEG headset prototype development: Within this project, 2 different topics will be offered:

3.1 Design and control of a robotic system for EEG measurements

The purpose of this project is to design a robotic system for the EEG electrodes. The prototype will be realized with a 3D printer. 

3.2 EEG signal quality analysis of the new prototype

In this project the objective is to analyze the EEG signal and compare the quality of the measurements with standard EEG equipment.

Supervisor: Marta Molinas, marta.molinas@ntnu.no

Co-supervisor: Andres Soler, PhD candidate, andres.f.soler.guevara@ntnu.no

4. Design of an EEG based communication system for patients with Locked-in Syndrome.

Locked-in syndrome (LIS) is a state of complete paralysis except for ocular movements in a conscious individual, normally resulting from brainstem lesions. These patients are conscious and aware of their environment but physically disabled. Some solutions are available today to help them communicate, but their downside is the required physical training, which can be both time-consuming and expensive. The main objective of this project is to develop an electroencephalogram (EEG)-based communication system that helps these patients communicate with their caretakers and engage more effectively in their daily life.

The project design will include hardware and software. The software will follow a client-server architecture with the necessary preprocessing and classification algorithms for EEG signal processing. The server will include endpoints to manage eye-movement classification, primary-colour exposure, and mental imagery tasks (motor imagery). The classification models and the EEG data will be stored in a database.

The hardware/client side will use the OpenBCI EEG headset and a Raspberry Pi to present the possible options, collect the EEG data, and send it to the server. The OpenBCI headset will continuously read the patient's brainwaves from the scalp while the Raspberry Pi in front of them displays six basic needs, namely food, water, washroom, help, sleep and entertainment.

The system (client-server) will act as a two-way communication link between patients and their caregivers, who will receive a notification via SMS or a basic Android application installed on their phones.
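The last step of that chain, turning a classifier's predicted class into the caregiver notification text, could look like the sketch below. The need labels follow the list above, while the function name, patient identifier, and message format are hypothetical assumptions, not the existing GitHub code:

```python
# The six needs displayed on the Raspberry Pi screen, in display order.
NEEDS = ["food", "water", "washroom", "help", "sleep", "entertainment"]

def needs_message(class_index, patient_id="patient-01"):
    """Turn a classifier's predicted class index into the notification
    text sent to the caregiver via SMS or the Android app."""
    if not 0 <= class_index < len(NEEDS):
        raise ValueError(f"unknown class index {class_index}")
    return f"{patient_id} requests: {NEEDS[class_index]}"
```

Keeping this mapping on the server side means the client only ever ships raw EEG and a screen index, which simplifies the Raspberry Pi software.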

Once the system is implemented and tested, the second phase of the project will consist of reducing the number of EEG channels required, thereby shrinking the headset and increasing its portability.

The project will start using code already developed by previous students (available on GitHub). It is suitable for two students who can work in collaboration.

The project will be in collaboration with Sunnaasstiftelsen, which will convey the users' and patients' needs and wishes and establish contact between specialists in various related disciplines.

Supervisor: Marta Molinas, marta.molinas@ntnu.no

Co-supervisors: Anders Fougner, anders.fougner@ntnu.no, Luis A. Moctezuma, luis.a.moctezuma@ntnu.no 

Contact person: Anne Wie-Groenhof, anne.wie-groenhof@sunnaasstiftelsen.no

5. Flying a Drone with your Mind: FlexEEG motor imagery

This project is about experimenting with actuating unmanned vehicles directly with signals from the brain. This will be done using an open-source electroencephalography (EEG) headset that records brain activity and translates it into commands for actuating devices in real time.

The task will consist of developing a Brain-Computer Interface (BCI) that can give flying/landing commands directly from the brain to the drone. The OpenBCI EEG headset (http://www.openbci.com/) will be used for recording the brain signals, which will be processed into commands sent wirelessly to actuate the drone. Motor imagery will be used as the command to fly the drone. Using this command, the project will fly a drone and develop trajectory control directly from motor imagery, without any manual actuation system in between.
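A common safeguard in BCIs of this kind is to act on a motor-imagery classification only when the classifier is confident; the sketch below maps class probabilities to drone commands under that rule. The command set, threshold, and fail-safe behaviour are illustrative assumptions, not the interface built by previous students:

```python
import numpy as np

# Hypothetical discrete command set for the drone.
COMMANDS = {0: "takeoff", 1: "land", 2: "forward"}

def decode_command(class_probs, threshold=0.7):
    """Map motor-imagery class probabilities to a drone command,
    acting only when the classifier is confident enough."""
    class_probs = np.asarray(class_probs, dtype=float)
    best = int(np.argmax(class_probs))
    if class_probs[best] < threshold:
        return "hover"             # fail safe: do nothing when uncertain
    return COMMANDS[best]
```

Hovering on low confidence trades responsiveness for safety, which matters when the actuated device is a flying vehicle.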

The students in this project will have access to the open-source software, the EEG headset, and the BCI already developed within this task by students in previous years.

This project is suitable for two students to work in a team.

Supervisor: Marta Molinas, marta.molinas@ntnu.no 

Co-supervisor: Luis Alfredo Moctezuma, PhD candidate, luis.a.moctezuma@ntnu.no

6. FlexEEG based BCI system for ADHD Neurofeedback

Applying machine learning, brain mapping and EEG recording techniques with an open-source Brain-Computer Interface (BCI) system, the master student will develop a system for classifying brain activity during imagined motor tasks, with the objective that a user can navigate a 3D virtual maze with his or her brain activity, as a treatment for Attention-Deficit Hyperactivity Disorder (ADHD).

Supervisor: Marta Molinas, marta.molinas@ntnu.no

Co-supervisor: Andres Soler, PhD candidate, andres.f.soler.guevara@ntnu.no

7. FlexEEG based Lie Detector

The aim of this project is to identify when a user is providing deceptive information, using EEG signals. The master student will develop a lie-detection system based on the analysis of EEG measurements, combining knowledge of event-related potentials, signal processing and machine learning, using an open-source Brain-Computer Interface (BCI) system that will be provided for the project by the company Mentalab.

This project is in collaboration with Kirklareli University, Turkey, and Mentalab.

Supervisor: Marta Molinas, marta.molinas@ntnu.no

Co-supervisor: Talha Burak Alakus, Kirklareli University, Turkey, talhaburakalakus@klu.edu.tr

Contact person at Mentalab: Eduard Deneke, eduard.deneke@mentalab.com 

8. FlexEEG based Biometric System for Subject Identification

The aim of this project is to implement a real-life subject identification system working in real time, using brain signals collected from 100 recruited volunteers in Norway. The first prototype has already been developed and will be the starting point for this larger implementation. The system consists of a client-server architecture using Python and Django. The server side uses feature extraction and machine learning techniques to add new persons to the system and save them in a database. The final implementation will be tested in a real environment, where it will be able to recognize persons already in the system and reject intruders.
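The core open-set requirement, recognizing enrolled subjects while rejecting intruders, can be sketched as template matching with a distance threshold. The feature space, metric, and threshold below are illustrative, not the actual method of the existing Django prototype:

```python
import numpy as np

class EEGIdentifier:
    """Open-set subject identification by template matching: each
    enrolled subject is stored as a mean feature vector, and queries
    farther than a threshold from every template are rejected."""

    def __init__(self, threshold=1.0):
        self.templates = {}
        self.threshold = threshold

    def enroll(self, subject_id, feature_vectors):
        # Store the mean feature vector as the subject's template.
        self.templates[subject_id] = np.mean(feature_vectors, axis=0)

    def identify(self, features):
        # Return the closest enrolled subject, or None for an intruder.
        if not self.templates:
            return None
        dists = {sid: float(np.linalg.norm(features - tpl))
                 for sid, tpl in self.templates.items()}
        best = min(dists, key=dists.get)
        return best if dists[best] <= self.threshold else None

# Toy 2-D feature vectors standing in for real EEG features.
ident = EEGIdentifier(threshold=1.0)
ident.enroll("alice", np.array([[0.0, 0.0], [0.2, 0.0]]))
ident.enroll("bob", np.array([[5.0, 5.0], [5.2, 5.0]]))
```

The rejection threshold is the key tuning knob: too tight and genuine users are refused, too loose and intruders slip through.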

This project is carried out in collaboration with the Norwegian Defence Research Establishment (FFI).

Supervisor: Marta Molinas, marta.molinas@ntnu.no

Contact person FFI: Øyvind Albert Voie, Oyvind-Albert.Voie@ffi.no 

Co-supervisor: Luis Alfredo Moctezuma, PhD candidate, luis.a.moctezuma@ntnu.no

9. The Augmented Human: Development of BCI for RGB colour-based automation

Controlling devices using EEG signals is one of the main applications of brain-computer interfaces. Nevertheless, two of the most common control paradigms (SSVEP and P300) are based on an external flickering stimulator. This project assesses the feasibility of using only the EEG responses to primary-colour exposure. The project has two scenarios (offline and online), which imply applying (or even proposing) algorithms for artifact removal, signal processing and machine learning, and optimizing their parameters. In the offline analysis, we will design a recording protocol, and the main focus will be on assessing whether a machine learning algorithm can effectively distinguish between colours. In the online scenario, the main task will be to develop a framework for real-time control of devices, for example an automated door.

This project is suitable for two students to work in a team.

The project is in collaboration with the Department of Neuroscience and Biomedical Engineering of Aalto University, Finland, with Dr Veikko Jousmaki.

Main Supervisor: Marta Molinas, marta.molinas@ntnu.no

Co-supervisor: Andres Felipe Soler, andres.f.soler.guevara@ntnu.no

10. FlexEEG: EEG-based Alcohol Detector

The measurement of alcohol in the human body is a task of public-health relevance for avoiding accidents and deaths. Currently, this is done with devices that analyse the alcohol content in breath (breathalysers). However, it is not clear whether a machine learning algorithm could give a better estimation of the effect of alcohol by using EEG signals directly. Previous works have only approached the automatic distinction (with machine learning) between EEG signals from alcoholic vs. non-alcoholic subjects, with relatively good outcomes. This project aims to assess the feasibility of using EEG signals to detect when a person has drunk alcohol, i.e., an initial design towards an EEG-based alcohol detector (Alcotest device). The research will involve designing an EEG recording protocol for several subjects during and after the administration of alcohol, with the main focus on assessing whether a machine learning algorithm can effectively distinguish between the two states. Furthermore, it will involve designing a scheme with the following stages: signal preprocessing, artifact removal, feature extraction and classification.

Supervisors: Marta Molinas, marta.molinas@ntnu.no

Luis Alfredo Moctezuma, PhD candidate, luis.a.moctezuma@gmail.com

11. Automatic detection of attentional states based on Webcam eye-tracking. 

Professions like truck drivers, air traffic controllers, health professionals and researchers rely on the ability to maintain constant attention throughout long periods of time. These professions could greatly benefit from a real-time alerting system that calls subjects back to the task even before attention lapses occur, or shortly after they happen. Attention levels have been shown to relate to the properties of eye movements. Visual signs of reduced alertness are, among others, longer blink duration, slow eyelid movement, a small degree of eye opening, decreased speed of eye movement, and the disappearance of microsaccades. These have previously been captured by state-of-the-art vision-based approaches.

State-of-the-art eye tracking uses high-performance industrial cameras that detect near-infrared light and create a highly accurate eye-tracking system. Webcams, or consumer cameras, only detect light in the visible spectrum, and because of that they pose some limitations for eye tracking. However, a webcam eye-tracker is a technology that virtually all devices with embedded cameras could support: personal computers as well as mobile devices. Using webcams, it is possible to conduct remote eye-tracking studies and enjoy the benefits of remote testing, such as low cost, fast data collection, and global reach. One severe limitation of webcams is their low frame rate compared to infrared cameras, somewhere between 5 and 30 Hz. Eyelid movement has been detected with a frame rate of 10 Hz [1,2], which makes it feasible to attempt with a webcam. Fixations and saccades can be challenging but are possible to explore, and they will be used as additional information to enhance accuracy.

This project will develop computer-vision-based algorithms to detect fluctuating levels of attention based on webcam eye-tracking. Eye movement analysis will focus on the basic eye movement types, such as fixations, saccades, blinks, eyelid movement, microsaccades, and degree of eye opening. Other approaches, such as the ones reported in [3], will also be explored.
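As one example of why modest frame rates suffice, blinks can be counted from a per-frame eye-opening measure such as the eye aspect ratio (EAR), even at webcam rates around 10 Hz. The thresholds below are illustrative; a real system would calibrate them per subject:

```python
import numpy as np

def count_blinks(ear_series, fs, ear_thresh=0.2, min_dur_s=0.1):
    """Count blinks in a per-frame eye-aspect-ratio series: a blink is
    a run of frames below the threshold lasting at least min_dur_s."""
    below = np.asarray(ear_series) < ear_thresh
    blinks, run = 0, 0
    for closed in below:
        if closed:
            run += 1
        else:
            if run >= min_dur_s * fs:
                blinks += 1
            run = 0
    if run >= min_dur_s * fs:      # run touching the end of the series
        blinks += 1
    return blinks

# Synthetic 10 Hz EAR trace: eyes open (~0.3) with two 0.2 s closures.
fs = 10
ear = [0.3] * 20
ear[5:7] = [0.10, 0.10]
ear[15:17] = [0.05, 0.08]
n_blinks = count_blinks(ear, fs)
```

Blink duration and degree of eye opening fall out of the same representation, so one EAR pipeline feeds several of the alertness cues listed above.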

[1] Takashi Abe, Kazuo Mishima, Shingo Kitamura, Akiko Hida, Yuichi Inoue, Koh Mizuno, Kosuke Kaida, Kyoko Nakazaki, Yuki Motomura, Kazushi Maruo, Toshiko Ohta, Satoshi Furukawa, David F Dinges, Katsuhiko Ogata, Tracking intermediate performance of vigilant attention using multiple eye metrics, Sleep, Volume 43, Issue 3, March 2020, zsz219

[2] Abe, T., Furukawa, S., Ogata, K., Mishima, K., Kitamura, S. (2017.11.22). System, method, and program for predicting diminished attentiveness state, and storage medium on which program is stored. PCT/JP2017/042081

[3] C. Meng and X. Zhao, "Webcam-Based Eye Movement Analysis Using CNN," in IEEE Access, vol. 5, pp. 19581-19587, 2017, doi: 10.1109/ACCESS.2017.2754299.

The project will be in collaboration with the International Institute of Integrative Sleep Medicine of the University of Tsukuba.

Main supervisor: Marta Molinas

Co-supervisor: Annette Stahl, annette.stahl@ntnu.no 

Contact person in Japan: Takashi Abe

