New Machine Learning Method Helps Wearables Diagnose Sleep Disorders

Scott Freitas
3 min read · Nov 26, 2020


As many as 70 million Americans suffer from sleep disorders that affect their daily functioning, long-term health, and longevity. The long-term effects of sleep deprivation and sleep disorders include increased risk of hypertension, diabetes, obesity, depression, heart attack, and stroke. The cost of undiagnosed sleep apnea alone is estimated to exceed $100 billion in the US.

A central tool in identifying sleep disorders is the hypnogram, which documents the progression of sleep stages (REM, non-REM stages N1 to N3, and Wake) over an entire night (see Figure 1, top). The process of deriving a hypnogram from raw sensor data is called sleep staging, and it is the focus of this work.
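To make the data concrete, a hypnogram is just a sequence of stage labels, one per scored epoch. A minimal Python sketch (the 30-second epoch length is the standard clinical scoring convention; the toy labels below are invented for illustration):

```python
# Minimal sketch: a hypnogram as data, one stage label per scored epoch.
# 30-second epochs are the standard scoring convention; labels are toy data.
import matplotlib.pyplot as plt

STAGES = ["N3", "N2", "N1", "REM", "Wake"]   # y-axis order, deepest at bottom
EPOCH_SECONDS = 30

hypnogram = ["Wake", "N1", "N2", "N3", "N3", "N2", "REM", "REM", "N2", "Wake"]

hours = [i * EPOCH_SECONDS / 3600 for i in range(len(hypnogram))]
levels = [STAGES.index(stage) for stage in hypnogram]

plt.step(hours, levels, where="post")
plt.yticks(range(len(STAGES)), STAGES)
plt.xlabel("Time (hours)")
plt.ylabel("Sleep stage")
plt.show()
```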

Figure 1. Top: hypnograms generated for a patient in the SHHS test set. In the presence of Gaussian noise, our REST-generated hypnogram (blue) closely matches the contours of the expert-scored hypnogram (green), while the hypnogram generated by the state-of-the-art (SOTA) model of Sors et al. deviates considerably (gray). Bottom: energy consumed (in joules) and inference time (in seconds) on a smartphone to score one night of EEG recordings. REST is 9x more energy efficient and 6x faster than the SOTA model.

Traditionally, reliably obtaining a hypnogram requires the patient to undergo an overnight sleep study, called polysomnography (PSG), at a sleep lab while wearing bio-sensors that measure physiological signals: brain activity (electroencephalogram, EEG), eye movements (electrooculogram, EOG), muscle activity (electromyogram, EMG), and heart rhythm (electrocardiogram, ECG). Unfortunately, obtaining a PSG report is costly and invasive to patients, reducing their participation and ultimately leaving sleep disorders undiagnosed.

What is stopping better sleep disorder diagnosis?

One promising direction for reducing undiagnosed sleep disorders is to enable sleep monitoring at home using commercial wearables (e.g., Fitbit, Apple Watch, Emotiv). However, we must first solve two challenging problems:

  1. Robustness to Noise. Deep neural networks (DNNs) are highly susceptible to environmental noise (Figure 1, top). For wearables, noise is a serious concern, since bioelectrical sensors (e.g., electroencephalogram “EEG”, electrocardiogram “ECG”) are commonly susceptible to Gaussian and shot noise, which can be introduced by electrical interference (e.g., power lines) and user motion (e.g., muscle contraction, respiration); a simple simulation of both noise types is sketched after this list.
  2. Energy and Computational Efficiency. Mobile deep learning systems have traditionally offloaded compute-intensive inference to cloud servers, which requires transferring sensitive data and assumes Internet access is available. This uploading process is problematic in many healthcare scenarios for two reasons: (1) privacy: individuals are often reluctant to share health information, which they consider highly sensitive; and (2) accessibility: real-time home monitoring is most needed in resource-poor environments where high-speed Internet may not be reliably available. Deploying a neural network directly on a mobile phone bypasses these issues, but the constrained computation and energy budget of mobile devices means these models must be fast and frugal with energy.
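Here is a minimal NumPy sketch of the two noise corruptions mentioned in problem 1. The noise forms and magnitudes are illustrative assumptions, not the exact corruptions used in REST's experiments.

```python
# Illustrative sketch: corrupting a 1-D EEG-like signal with Gaussian and
# shot noise. Noise models and magnitudes are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(x, sigma=0.1):
    """Additive zero-mean Gaussian noise, e.g., from electrical interference."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def add_shot_noise(x, prob=0.01, amplitude=1.0):
    """Sparse random impulses, a simple stand-in for motion artifacts."""
    spikes = (rng.random(x.shape) < prob).astype(float)
    signs = rng.choice([-1.0, 1.0], size=x.shape)
    return x + amplitude * spikes * signs

t = np.linspace(0, 30, 3000)                 # one 30-second epoch at 100 Hz
clean = np.sin(2 * np.pi * 10 * t)           # toy 10 Hz oscillation
noisy = add_shot_noise(add_gaussian_noise(clean))
```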

How can we better diagnose sleep disorders?

We developed REST, the first framework for training noise-robust and efficient neural networks for home sleep monitoring (Figure 1).

Robust and Efficient Neural Networks for Sleep Monitoring: by integrating a novel combination of three training objectives, REST endows a model with noise robustness through (1) adversarial training and (2) spectral regularization, and promotes energy and computational efficiency by enabling compression through (3) sparsity regularization (Figure 2).
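As a rough sketch of how the three objectives can be combined into a single loss (in PyTorch), one might write something like the following. The FGSM perturbation, the penalty forms, and the coefficients are simplifying assumptions; the paper and code give the exact formulation.

```python
# Hedged sketch of a REST-style combined objective in PyTorch. The FGSM
# perturbation, penalty forms, and coefficients are simplifying assumptions.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.01):
    """One-step adversarial perturbation for adversarial training (1)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()

def rest_style_loss(model, x, y, lam_spec=1e-3, lam_sparse=1e-4):
    # (1) adversarial training: classify perturbed inputs correctly
    loss = F.cross_entropy(model(fgsm_example(model, x, y)), y)
    for w in model.parameters():
        if w.dim() > 1:
            # (2) spectral regularization: penalize the largest singular
            # value of each weight matrix (full SVD here; slow but simple)
            loss = loss + lam_spec * torch.linalg.matrix_norm(w.flatten(1), ord=2)
        # (3) sparsity regularization: L1 drives weights toward zero,
        # preparing the network for pruning
        loss = loss + lam_sparse * w.abs().sum()
    return loss
```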

Figure 2. REST overview: (from left) when a noisy EEG signal belonging to the REM (rapid eye movement) sleep stage enters a traditional neural network, which is vulnerable to noise, it is wrongly classified as the Wake stage. The same signal is correctly classified as REM by the REST model, which is both robust and sparse. (From right) REST is a three-step process: (1) training the model with adversarial training, spectral regularization, and sparsity regularization; (2) pruning the model; and (3) re-training the compact model.
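Steps (2) and (3) can likewise be sketched with PyTorch's built-in pruning utilities. The 90% sparsity target, the layer types, and the plain cross-entropy fine-tuning loss are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of steps (2) and (3): global magnitude pruning followed by
# fine-tuning. The sparsity target and fine-tuning loss are assumptions.
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def prune_and_retrain(model, train_loader, optimizer, amount=0.9, epochs=5):
    # (2) prune: zero out the globally smallest-magnitude weights, which
    # the L1 regularizer has already driven toward zero
    to_prune = [(m, "weight") for m in model.modules()
                if isinstance(m, (torch.nn.Linear, torch.nn.Conv1d))]
    prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured,
                              amount=amount)
    # (3) retrain: fine-tune the surviving weights to recover accuracy
    # (the paper would keep its full training objective here)
    for _ in range(epochs):
        for x, y in train_loader:
            optimizer.zero_grad()
            F.cross_entropy(model(x), y).backward()
            optimizer.step()
    return model
```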

We deploy a REST model on a Pixel 2 smartphone through an Android application that performs sleep staging. Our experiments reveal that REST achieves a 17x energy reduction and 9x faster inference on a smartphone, compared to traditional models.
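As one plausible deployment path, a trained model can be exported for on-device inference with TorchScript, roughly as follows. This is a minimal sketch: the stand-in model, the input shape (one 30-second single-channel epoch at 100 Hz), and the file name are placeholders, not our exact deployment pipeline.

```python
# Hedged sketch: exporting a model for on-device inference with TorchScript.
# The stand-in model, input shape, and file name are placeholders.
import torch

model = torch.nn.Sequential(                 # stand-in for a trained REST model
    torch.nn.Conv1d(1, 8, kernel_size=50, stride=6),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 5),                   # 5 sleep stages
)
model.eval()

example = torch.randn(1, 1, 3000)            # (batch, channels, samples)
scripted = torch.jit.trace(model, example)   # record the forward graph
scripted.save("rest_sleep_staging.pt")       # bundle loaded by the Android app
```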

In addition, we evaluate REST on two real-world sleep staging EEG datasets: Sleep-EDF from PhysioNet and the Sleep Heart Health Study (SHHS). We find that REST produces highly compact models that substantially outperform the original full-sized models in the presence of noise, achieving almost 2x better sleep scoring accuracy than the previous state-of-the-art model under Gaussian noise, with 19x smaller models and 15x less computational demand.
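A noise-robustness evaluation of this kind can be sketched as test accuracy measured under increasing Gaussian noise levels. The levels below are illustrative, and model and test_loader stand in for a trained model and test set.

```python
# Hedged sketch of a noise-robustness evaluation: accuracy as a function of
# Gaussian noise level. `model` and `test_loader` are placeholders.
import torch

@torch.no_grad()
def accuracy_under_noise(model, loader, sigma):
    correct = total = 0
    for x, y in loader:
        noisy = x + sigma * torch.randn_like(x)      # inject Gaussian noise
        correct += (model(noisy).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

for sigma in [0.0, 0.05, 0.1, 0.2]:
    print(f"sigma={sigma}: acc={accuracy_under_noise(model, test_loader, sigma):.3f}")
```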

Want to learn more?

REST was published at WWW 2020. You can read all the details on arXiv. We have also open-sourced the code for REST on GitHub.


Written by Scott Freitas

PhD student @ Georgia Tech. I work at the intersection of applied and theoretical machine learning.
