New Machine Learning Method Helps Wearables Diagnose Sleep Disorders
As many as 70 million Americans suffer from sleep disorders that affect their daily functioning, long-term health, and longevity. The long-term effects of sleep deprivation and sleep disorders include increased risk of hypertension, diabetes, obesity, depression, heart attack, and stroke. The cost of undiagnosed sleep apnea alone is estimated to exceed $100 billion in the US.
A central tool for identifying sleep disorders is the hypnogram, which documents the progression of sleep stages (REM, non-REM stages N1 to N3, and wake) over an entire night (see Figure 1, top). The process of acquiring a hypnogram from raw sensor data is called sleep staging, and it is the focus of this work.
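Concretely, a hypnogram can be represented as a sequence of stage labels, one per fixed-length window of the recording (30-second epochs are the clinical convention). Below is a minimal sketch of this representation; the sampling rate and label set shown are illustrative, not taken from the REST paper:

```python
import numpy as np

# Illustrative constants (not from the REST paper):
SAMPLE_RATE_HZ = 100   # EEG sampling rate
EPOCH_SECONDS = 30     # clinical scoring convention
STAGES = ["Wake", "N1", "N2", "N3", "REM"]

def segment_into_epochs(signal: np.ndarray) -> np.ndarray:
    """Split a full-night 1-D EEG recording into 30-second epochs."""
    samples_per_epoch = SAMPLE_RATE_HZ * EPOCH_SECONDS
    n_epochs = len(signal) // samples_per_epoch
    return signal[: n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)

# Sleep staging then amounts to mapping each epoch to one stage label;
# the predicted label sequence over the whole night is the hypnogram.
```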
Traditionally, to reliably obtain a hypnogram the patient has to undergo an overnight sleep study, called polysomnography (PSG), at a sleep lab while wearing bio-sensors that measure physiological signals: brain activity (electroencephalogram, EEG), eye movements (electrooculogram, EOG), skeletal muscle activation (electromyogram, EMG), and heart rhythm (electrocardiogram, ECG). Unfortunately, obtaining a PSG report is costly and invasive to patients, reducing their participation and ultimately leading to undiagnosed sleep disorders.
What is stopping better sleep disorder diagnosis?
One promising direction for reducing undiagnosed sleep disorders is to enable sleep monitoring at home using commercial wearables (e.g., Fitbit, Apple Watch, Emotiv). However, we must first solve two challenging problems:
- Robustness to Noise. Deep neural networks (DNNs) are highly susceptible to environmental noise (Figure 1, top). In the case of wearables, noise is a serious consideration since bioelectrical sensors (e.g., EEG, ECG) are commonly affected by Gaussian and shot noise, which can be introduced by electrical interference (e.g., power-line) and user motion (e.g., muscle contraction, respiration); see the noise sketch after this list.
- Energy and Computational Efficiency. Mobile deep learning systems have traditionally offloaded compute-intensive inference to cloud servers, requiring the transfer of sensitive data and assuming that Internet access is available. This data uploading process is problematic for many healthcare scenarios because of (1) privacy: individuals are often reluctant to share health information, which they consider highly sensitive; and (2) accessibility: real-time home monitoring is most needed in resource-poor environments where high-speed Internet may not be reliably available. Deploying a neural network directly on a mobile phone bypasses these issues. However, given the constrained computation and energy budgets of mobile devices, such models must be fast and parsimonious with their energy consumption.
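To make the noise problem concrete, here is a minimal sketch of how Gaussian and shot noise can be injected into a clean signal to stress-test a model. The noise parameters are illustrative, not the levels used in the REST experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(signal: np.ndarray, std: float = 0.1) -> np.ndarray:
    """Additive Gaussian noise, e.g., broadband electrical interference."""
    return signal + rng.normal(0.0, std, size=signal.shape)

def add_shot_noise(signal: np.ndarray, rate: float = 0.01,
                   amplitude: float = 1.0) -> np.ndarray:
    """Sparse impulsive (shot) noise, e.g., motion or muscle artifacts."""
    spikes = rng.random(signal.shape) < rate
    return signal + amplitude * spikes * rng.choice([-1.0, 1.0], size=signal.shape)

# Evaluating a trained model on noisy copies of the test set shows how
# quickly its accuracy degrades as the noise intensity grows.
```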
How can we better diagnose sleep disorders?
We developed REST, the first framework for building noise-robust and efficient neural networks for home sleep monitoring (Figure 1).
Robust and Efficient Neural Networks for Sleep Monitoring: By integrating a novel combination of three training objectives, REST endows a model with noise robustness through (1) adversarial training and (2) spectral regularization, and promotes energy and computational efficiency by enabling compression through (3) sparsity regularization (Figure 2). A sketch of the combined objective follows below.
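The following PyTorch-style sketch shows one way these three objectives can be combined into a single training loss. The FGSM adversarial step, the spectral-norm penalty on weight matrices, and the L1 sparsity penalty are common instantiations of each idea, and the hyperparameters are illustrative; this is not the exact REST formulation:

```python
import torch
import torch.nn.functional as F

def rest_style_loss(model, x, y, eps=0.01, lam_spec=1e-4, lam_sparse=1e-5):
    """Task loss with (1) adversarial, (2) spectral, and (3) sparsity terms."""
    # (1) Adversarial training: craft an FGSM perturbation of the input
    # along the sign of the loss gradient, then train on the perturbed copy.
    x_adv = x.detach().clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
    loss_adv = F.cross_entropy(model(x + eps * grad.sign()), y)

    # (2) Spectral regularization: penalize the largest singular value of
    # each weight matrix, bounding each layer's sensitivity to perturbations.
    spec = sum(torch.linalg.matrix_norm(p.flatten(1), ord=2)
               for p in model.parameters() if p.dim() >= 2)

    # (3) Sparsity regularization: an L1 penalty drives weights toward zero
    # so the trained network can later be pruned into a compact model.
    sparse = sum(p.abs().sum() for p in model.parameters())

    return loss_adv + lam_spec * spec + lam_sparse * sparse
```

In a training loop this combined loss replaces plain cross-entropy; after training, near-zero weights can be pruned away, which is what makes the resulting model small enough for on-device inference.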
We deploy a REST model onto a Pixel 2 smartphone through an Android application that performs sleep staging. Our experiments reveal that REST achieves a 17x energy reduction and 9x faster inference on a smartphone, compared to traditional models.
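The post does not detail the export toolchain used for the Android app; as one plausible route, a trained and pruned PyTorch model can be packaged for on-device inference via TorchScript. A minimal sketch, with a hypothetical stand-in network and an illustrative input shape:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained, pruned sleep-staging network.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=50, stride=6), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 5),
)
model.eval()

# One 30-second EEG epoch at 100 Hz: (batch, channels, samples).
example = torch.randn(1, 1, 3000)

# TorchScript produces a self-contained artifact that an Android app can
# load with the PyTorch Mobile runtime, so inference runs entirely
# on-device and no health data leaves the phone.
scripted = torch.jit.trace(model, example)
scripted.save("rest_sleep_staging.pt")
```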
In addition, we evaluate REST on two real-world sleep staging EEG datasets: Sleep-EDF from PhysioNet and the Sleep Heart Health Study (SHHS). We find that REST produces highly compact models that substantially outperform the original full-sized models in the presence of noise, achieving almost 2x better sleep scoring accuracy than the previous state-of-the-art model under Gaussian noise, with 19x smaller models and 15x less computational demand.