Automatic Sleep Stage Classification with Cross-modal Self-supervised Features from Deep Brain Signals

The detection of human sleep stages is widely used in the diagnosis and
intervention of neurological and psychiatric diseases. Patients with an implanted deep brain stimulator can have their neural activity recorded directly from deep brain structures. Sleep stage classification based on deep brain recordings has great
potential to provide more precise treatment for patients. The accuracy and
generalizability of existing sleep stage classifiers based on local field
potentials are still limited. We proposed a practical cross-modal transfer learning method for sleep stage classification with implanted devices. This end-to-end deep learning framework comprises cross-modal self-supervised feature representation, a self-attention module, and a classification head. We tested the
model with deep brain recording data from 12 patients with Parkinson's disease.
The best overall accuracy reached 83.2% for sleep stage classification. The results showed that speech self-supervised features effectively capture the transition patterns of sleep stages. We provide a new method for transfer learning from acoustic signals to local field potentials, which offers an effective solution to the insufficient scale of clinical data. This sleep stage classification model could be adapted for chronic, continuous sleep monitoring of Parkinson's patients in daily life, and potentially utilized for more precise treatment in deep brain-machine interfaces, such as closed-loop deep brain stimulation.
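
For concreteness, the pipeline described above can be illustrated with a minimal, hypothetical sketch: frame-level embeddings produced by a frozen speech self-supervised encoder (not shown here, standing in for the paper's feature extractor, which the abstract does not specify) are passed through a self-attention layer and a linear classifier to predict a sleep stage per epoch. The feature dimension, number of attention heads, pooling choice, and five-stage label set are assumptions for illustration, not values from the paper.

    # Hypothetical sketch only: speech SSL features -> self-attention -> classifier.
    # All hyperparameters below are assumptions, not the authors' settings.
    import torch
    import torch.nn as nn

    NUM_STAGES = 5     # assumption: e.g., Wake, N1, N2, N3, REM
    FEATURE_DIM = 768  # assumption: typical speech SSL embedding size

    class SleepStageClassifier(nn.Module):
        def __init__(self, feature_dim=FEATURE_DIM, num_stages=NUM_STAGES):
            super().__init__()
            # Self-attention over the sequence of per-frame embeddings
            self.attention = nn.TransformerEncoderLayer(
                d_model=feature_dim, nhead=8, batch_first=True
            )
            self.classifier = nn.Linear(feature_dim, num_stages)

        def forward(self, ssl_features):
            # ssl_features: (batch, frames, feature_dim), produced by a frozen
            # speech self-supervised encoder applied to LFP epochs
            # (the cross-modal transfer step)
            x = self.attention(ssl_features)
            x = x.mean(dim=1)              # pool over frames
            return self.classifier(x)      # per-epoch sleep-stage logits

    # Usage with dummy features standing in for encoder output on four epochs
    model = SleepStageClassifier()
    logits = model(torch.randn(4, 150, FEATURE_DIM))
    print(logits.shape)  # torch.Size([4, 5])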