A pre-trained machine learning model tuned for sleep stage classification using data originating from wearable devices.
Service Description

The Sleep Stage Classifier is a deep learning model that assists in detecting the various sleep stages occurring during a night's sleep. The service is provided as a REST API endpoint and can be integrated directly into multiple applications. It is recommended to pair it with software that accumulates metrics from biometric sensors (e.g., heart rate and SpO2), but it can also be operated as is.

 

Note: using API services may require some degree of technical background.

Background information

Sleep is a key factor affecting health, cognitive function, and psychological well-being. On the one hand, sleep greatly impacts quality of life; on the other hand, poor health and/or psychological conditions often deteriorate the quality of sleep. Sleep assessment via wearable sensors in the individual's own home environment is therefore of profound importance for personalized and continuous monitoring, and wearables combined with AI can provide an efficient solution towards this aim. The provided service employs data from wearables for minimally invasive, continuous sleep quality assessment, and it also returns the sleep patterns of each sleep session provided as input. These patterns can further be exploited for personalized sleep pattern recognition when monitoring an individual's sleep behavior.

 

The proposed solution is a service that assesses sleep patterns from wearable sensor data, producing outputs that reflect the corresponding hypnogram along with several metrics (sleep quality, sleep stage classification and quantification, etc.). It is based on a CNN deep learning architecture trained on a large, unbiased cohort from the NSRR database (https://sleepdata.org/). The outcome of the service can be used in various health-related research domains, since sleep and its accompanying metrics are considered variables affecting not only pathological conditions but also a person's quality of life, while also promoting the progression of personalized monitoring, reporting, and treatment of individuals.
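To illustrate how such metrics can be derived from a returned hypnogram, the following minimal Python sketch computes stage quantification and a simple sleep efficiency value from a list of per-epoch stage predictions. The stage labels, the 30-second epoch length, and the output schema are illustrative assumptions; they are not the documented output format of the service.

```python
from collections import Counter

EPOCH_SECONDS = 30  # assumed epoch length; hypnograms are typically scored in 30 s epochs

def summarize_hypnogram(hypnogram):
    """Derive simple sleep metrics from a per-epoch hypnogram.

    `hypnogram` is a list of stage labels, one per epoch, e.g.
    ["Wake", "N1", "N2", "N2", "N3", "REM", ...] (labels are illustrative).
    """
    counts = Counter(hypnogram)
    total_epochs = len(hypnogram)
    # Minutes spent in each stage
    minutes_per_stage = {stage: n * EPOCH_SECONDS / 60 for stage, n in counts.items()}
    # Fraction of the recording spent asleep (everything that is not "Wake")
    sleep_epochs = total_epochs - counts.get("Wake", 0)
    sleep_efficiency = sleep_epochs / total_epochs if total_epochs else 0.0
    return {"minutes_per_stage": minutes_per_stage, "sleep_efficiency": sleep_efficiency}

# Example usage with a toy hypnogram
print(summarize_hypnogram(["Wake", "N1", "N2", "N2", "N3", "N3", "REM", "Wake"]))
```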

 

The offered classifier is an extension (in preparation for submission) of our previous work [1] on sleep stage classification, which introduced a novel feature engineering approach that incorporates time-related information about sleep stage transitions via a Long Short-Term Memory (LSTM) encoding of accelerometer data from smartwatches. The extension is based on ResNet models, and its performance was evaluated using Cohen's kappa against the gold-standard sleep analysis approach, polysomnography. Apart from the sleep stages, several other sleep quality metrics have been explicitly assessed and evaluated, since they can be calculated from the hypnogram produced as the output of the sleep classifier.
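The evaluation against polysomnography described above amounts to comparing two per-epoch hypnograms. Below is a minimal sketch of such a comparison, assuming epoch-aligned label sequences and using scikit-learn's cohen_kappa_score; it does not reproduce the actual evaluation pipeline of [1], and the label values are illustrative.

```python
from sklearn.metrics import cohen_kappa_score

# Per-epoch stage labels from polysomnography (reference) and from the classifier.
# Both sequences must be aligned epoch by epoch; labels here are illustrative.
psg_stages   = ["Wake", "N1", "N2", "N2", "N3", "REM", "REM", "Wake"]
model_stages = ["Wake", "N2", "N2", "N2", "N3", "REM", "N2",  "Wake"]

kappa = cohen_kappa_score(psg_stages, model_stages)
print(f"Cohen's kappa vs. PSG: {kappa:.3f}")
```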

References:

[1] Zografakis, D., Tsakanikas, P., Roussaki, I., and Giannakopoulou, K. (2023). AI and IoT Enabled Sleep Stage Classification. In Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies - BIOSIGNALS, pages 155-161. DOI: 10.5220/0011631300003414.


 

Input & Output

The service expects as input a 5-minute window sampled every 30 seconds; therefore, a list of 10 elements is expected for each of the following variables:

 

SpO2 (List<float>): Oxygen Saturation, obtained from sensors.

HR (List<float>): Heart Rate, obtained from sensors.

HRV (List<float>): Heart Rate Variability, computed from successive RR intervals using the RMSSD algorithm (see the sketch after this list).

activity (List<float>): activity counts, calculated as in the MESA dataset.
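To make the expected input concrete, the sketch below assembles one 5-minute window and posts it to the service in Python. The endpoint URL, field names, and exact request schema are illustrative assumptions rather than the documented API contract; the rmssd helper shows one common way HRV is derived from RR intervals.

```python
import math
import requests  # third-party HTTP client, assumed available

SERVICE_URL = "https://example.org/sleep-stage-classifier/predict"  # hypothetical endpoint

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (in ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# One 5-minute window = 10 samples (one every 30 seconds) per variable.
window = {
    "SpO2":     [97.0, 96.5, 96.8, 97.1, 96.9, 97.0, 96.7, 96.8, 97.2, 97.0],
    "HR":       [62.0, 61.5, 60.8, 61.2, 60.5, 60.0, 59.8, 60.2, 60.4, 60.1],
    "HRV":      [rmssd([980, 1010, 995, 1005]) for _ in range(10)],  # toy RR intervals
    "activity": [0.0, 0.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0],
}

response = requests.post(SERVICE_URL, json=window, timeout=30)
response.raise_for_status()
print(response.json())  # e.g., the predicted sleep stage(s) for this window
```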

Datasets & Samples

The Multi-Ethnic Study of Atherosclerosis (MESA) [1-4], a comprehensive longitudinal data repository, was inaugurated in July 2000 under the auspices of the National Heart, Lung, and Blood Institute (NHLBI), a division of the U.S. National Institutes of Health (NIH). The primary aim of the MESA study is to explore and elucidate the factors that initiate and expedite the progression of cardiovascular disease within a diverse and multifaceted ethnic spectrum.

This dataset comprises an amalgamation of data collected from more than 6,000 individuals aged between 45 and 84, representing four distinct racial and ethnic groups: Caucasians, African Americans, Hispanics, and Chinese. Initially devoid of clinically evident cardiovascular disease, these individuals were selected from six disparate regions across the United States, allowing for an in-depth investigation into the factors contributing to the onset and advancement of this ailment.

The MESA dataset encompasses an extensive array of information, including participants' demographic profiles, lifestyle choices, dietary habits, medical backgrounds, findings from thorough physical examinations, outcomes of laboratory tests, and results from imaging studies. This comprehensive dataset facilitates in-depth inquiries into the associations and predictors of cardiovascular disease, offering invaluable insights that can inform the development of preventive strategies and interventions.

 

References: 

 

[1] The MESA dataset (Chen et al., 2015) is utilized to train a Residual Network (ResNet) variant model (He et al., 2016) using the Lion optimizer (Chen, Xiangning et al., 2023). Due to the parallelism of its convolution layers, the model is extremely fast and can be used on portable devices, minimizing inference time while providing robust predictions.

 

[2] Chen, X., Wang, R., Zee, P., Lutsey, P. L., Javaheri, S., Alcántara, C., Jackson, C. L., Williams, M. A., & Redline, S. (2015). Racial/ethnic differences in sleep disturbances: the Multi-Ethnic Study of Atherosclerosis (MESA). Sleep, 38(6), 877-888.

 

[3] K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image Recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 770-778, doi: 10.1109/CVPR.2016.90.

 

[4] Chen, Xiangning, et al. "Symbolic Discovery of Optimization Algorithms." arXiv preprint arXiv:2302.06675 (2023).

 
