
Can we still use existing sleep stage classification models in label-scarce scenarios?

Overview

We aim to provide a realistic assessment of sleep stage classification (SSC) models in real-world label-scarce scenarios. Most proposed deep learning-based models assume access to vast amounts of labeled data, which is usually not attainable in practice.

What was the approach to solving the problem?

We explored the efficacy of self-supervised learning (SSL) algorithms in scenarios with limited labeled data. Specifically, we examined existing sleep stage classification models, evaluated their performance in few-labeled data settings, and investigated whether different SSL algorithms (pretext and contrastive methods) can improve their performance, as sketched below.
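To make this evaluation protocol concrete, here is a minimal, illustrative PyTorch sketch of contrastive self-supervised pretraining of an EEG encoder followed by fine-tuning on a small labeled subset. It is not the implementation from the paper; the encoder architecture, augmentations, loss, and hyperparameters are simplified assumptions.

    # Minimal sketch (not the paper's implementation): contrastive SSL pretraining of an
    # EEG encoder, then fine-tuning a classifier on a small labeled subset.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Encoder(nn.Module):
        """Small 1D-CNN feature extractor for single-channel 30-second EEG epochs."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=25, stride=3), nn.BatchNorm1d(32), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.BatchNorm1d(64), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.proj = nn.Linear(64, feat_dim)

        def forward(self, x):                  # x: (batch, 1, samples)
            h = self.net(x).squeeze(-1)        # (batch, 64)
            return self.proj(h)                # (batch, feat_dim)

    def augment(x):
        """Simple stochastic augmentation: random amplitude scaling plus jitter."""
        return x * (1 + 0.1 * torch.randn(x.size(0), 1, 1)) + 0.01 * torch.randn_like(x)

    def nt_xent(z1, z2, temperature=0.2):
        """SimCLR-style NT-Xent contrastive loss over two augmented views."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, d)
        sim = z @ z.t() / temperature                        # pairwise similarities
        n = z1.size(0)
        sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    def pretrain(encoder, unlabeled_loader, epochs=10, lr=1e-3):
        """Self-supervised pretraining: no sleep stage labels are used."""
        opt = torch.optim.Adam(encoder.parameters(), lr=lr)
        for _ in range(epochs):
            for x in unlabeled_loader:                       # x: (batch, 1, samples)
                loss = nt_xent(encoder(augment(x)), encoder(augment(x)))
                opt.zero_grad()
                loss.backward()
                opt.step()

    def finetune(encoder, labeled_loader, n_classes=5, epochs=10, lr=1e-4):
        """Fine-tune the pretrained encoder plus a linear head on a small labeled subset."""
        clf = nn.Linear(128, n_classes)
        opt = torch.optim.Adam(list(encoder.parameters()) + list(clf.parameters()), lr=lr)
        for _ in range(epochs):
            for x, y in labeled_loader:                      # e.g., only 5-10% of the labels
                loss = F.cross_entropy(clf(encoder(x)), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return clf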

What NSRR data were used?

We used the Sleep Heart Health Study, randomly choosing 20 subjects from the first visit (SHHS-1 dataset). We selected a single EEG channel, C4-A1, with a sampling rate of 125 Hz.
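As an illustration only, the following sketch loads one SHHS-1 recording with MNE and segments the selected EEG channel into 30-second epochs. The file name and the channel label are assumptions; the C4-A1 derivation may appear under a different label in the EDF header, so inspect raw.ch_names first.

    # Illustrative sketch: extract one EEG channel from an SHHS-1 EDF and cut it into epochs.
    import mne

    edf_path = "shhs1-200001.edf"              # hypothetical SHHS-1 file name
    raw = mne.io.read_raw_edf(edf_path, preload=True, verbose=False)
    print(raw.ch_names)                        # inspect the available channel labels first

    raw.pick(["EEG"])                          # assumed label for the C4-A1 derivation
    if raw.info["sfreq"] != 125:
        raw.resample(125)                      # keep the 125 Hz sampling rate used in the study

    signal = raw.get_data()[0]                 # 1D array of samples
    epoch_len = 30 * 125                       # 30-second epochs at 125 Hz
    n_epochs = len(signal) // epoch_len
    epochs = signal[: n_epochs * epoch_len].reshape(n_epochs, 1, epoch_len)
    print(epochs.shape)                        # (n_epochs, 1, 3750)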

What were the results?

The results suggest that the performance of existing SSC models degrades in the few-labels regime. However, self-supervised pretraining with contrastive methods yields improved performance over supervised training under the same settings. In addition, SSL algorithms improved the models' ability to learn temporal information in EEG data. Notably, fine-tuning the SSL-pretrained models with only 5% or 10% of the labels achieves performance very close to supervised training with 100% of the labels. Finally, contrastive SSL algorithms are more robust to dataset imbalance and transfer better when a domain shift exists.
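For readers who want to reproduce a few-labels setting, the snippet below shows one simple way to draw a class-stratified 5% (or 10%) labeled subset with scikit-learn; the exact subsampling protocol used in the paper may differ.

    # Simple sketch of selecting a small, class-stratified labeled subset for fine-tuning.
    import numpy as np
    from sklearn.model_selection import train_test_split

    def sample_label_fraction(X, y, fraction=0.05, seed=0):
        """Return (X_small, y_small) holding `fraction` of the data, stratified by sleep stage."""
        X_small, _, y_small, _ = train_test_split(
            X, y, train_size=fraction, stratify=y, random_state=seed
        )
        return X_small, y_small

    # Usage with hypothetical arrays of 30-second epochs and stage labels (0-4):
    X = np.random.randn(1000, 1, 3750).astype("float32")   # (n_epochs, 1, samples)
    y = np.random.randint(0, 5, size=1000)                  # (n_epochs,)
    X_5, y_5 = sample_label_fraction(X, y, fraction=0.05)
    print(X_5.shape, np.bincount(y_5))   # ~50 epochs with roughly preserved class ratios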

Are there any tools available?

The code for this evaluation is available at https://github.com/emadeldeen24/eval_ssl_ssc. It is also generic and can easily be adapted to other sleep stage classification models.

Author

Dr. Emadeldeen Eldele, Nanyang Technological University and Centre of Frontier AI Research, A*STAR

Resources

Eldele, E., Ragab, M., Chen, Z., Wu, M., Kwoh, C., & Li, X. (2023). Self-Supervised Learning for Label Efficient Sleep Stage Classification: A Comprehensive Evaluation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31, 1333–1342. https://doi.org/10.1109/TNSRE.2023.3245285

By szhivotovsky on July 17, 2023 in Guest Blogger