New Machine Learning Method Helps Wearables Diagnose Sleep Disorders

Figure 1. Top: we generate hypnograms for a patient in the SHHS test set. In the presence of Gaussian noise, our REST-generated hypnogram (blue) closely matches the contours of the expert-scored hypnogram (green), while the hypnogram generated by a state-of-the-art (SOTA) model (Sors et al.) is considerably worse (gray). Bottom: we measure the energy (in joules) and inference time (in seconds) required on a smartphone to score one night of EEG recordings. REST is 9X more energy-efficient and 6X faster than the SOTA model.

What is stopping better sleep disorder diagnosis?

  1. Robustness to Noise. Deep neural networks (DNNs) are highly susceptible to environmental noise (Figure 1, top). For wearables, noise is a serious concern because bioelectrical signal sensors (e.g., electroencephalogram “EEG”, electrocardiogram “ECG”) are prone to Gaussian and shot noise, which can be introduced by electrical interference (e.g., power lines) and user motion (e.g., muscle contraction, respiration).
  2. Energy and Computational Efficiency. Mobile deep learning systems have traditionally offloaded compute-intensive inference to cloud servers, which requires transferring sensitive data and assumes a reliable Internet connection. This uploading process is problematic in many healthcare scenarios because of (1) privacy: individuals are often reluctant to share health information, which they consider highly sensitive; and (2) accessibility: real-time home monitoring is most needed in resource-poor environments where high-speed Internet may not be reliably available. Deploying a neural network directly on a mobile phone bypasses these issues. However, given the constrained computation and energy budget of mobile devices, such models need to be fast and parsimonious in their energy consumption.
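To make the noise problem concrete, here is a minimal sketch of how Gaussian and shot noise might be simulated on a clean EEG epoch. This is illustrative only (not from the REST paper); the function name, noise parameters, and the synthetic 10 Hz signal are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_sensor_noise(eeg, gaussian_sigma=0.1, shot_rate=0.01, shot_amp=2.0):
    """Corrupt a 1-D EEG signal with additive Gaussian noise (e.g., power-line
    interference) and sparse shot noise (e.g., sudden muscle contractions)."""
    noisy = eeg + rng.normal(0.0, gaussian_sigma, size=eeg.shape)
    # Shot noise: a small fraction of samples receive a large +/- spike.
    spikes = rng.random(eeg.shape) < shot_rate
    noisy[spikes] += shot_amp * rng.choice([-1.0, 1.0], size=int(spikes.sum()))
    return noisy

# Example: one 30-second epoch sampled at 125 Hz (typical for SHHS-style EEG).
t = np.linspace(0.0, 30.0, 30 * 125)
clean = np.sin(2 * np.pi * 10 * t)  # stand-in for a 10 Hz alpha rhythm
noisy = add_sensor_noise(clean)
```

A model that is robust in the sense of Figure 1 should produce nearly the same hypnogram whether it is fed `clean` or `noisy`.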

How can we better diagnose sleep disorders?

Figure 2. REST Overview: (from left) When a noisy EEG signal belonging to the REM (rapid eye movement) sleep stage enters a traditional neural network, which is vulnerable to noise, it is wrongly classified as the Wake sleep stage. In contrast, the same signal is correctly classified as the REM sleep stage by the REST model, which is both robust and sparse. (From right) REST is a three-step process: (1) training the model with adversarial training, spectral regularization, and sparsity regularization; (2) pruning the model; and (3) re-training the compact model.
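As a rough illustration of step (2), pruning typically zeroes out the smallest-magnitude weights so the compact model can run cheaply on a phone. The sketch below is a hypothetical NumPy version of magnitude-based pruning, not the authors' implementation; the function name and the 90% sparsity target are assumptions:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of a weight matrix.

    After sparsity regularization in step (1) pushes many weights toward
    zero, this step removes them outright, shrinking compute and energy cost.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64))          # a dense layer's weights
W_sparse = magnitude_prune(W, 0.9)     # roughly 90% of entries are now zero
```

In the full REST pipeline, step (3) then re-trains the pruned network so the surviving weights compensate for the removed ones.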

Want to learn more?


Scott Freitas

PhD student @ Georgia Tech. I work at the intersection of applied and theoretical machine learning.
