19th AIAI 2023, 14-17 June 2023, León, Spain

Fusion of Learned Representations for Multimodal Sensor Data Classification

Lee B. Hinkle, Gentry Atkinson, Vangelis Metsis

Abstract:

  Time-series data collected using body-worn sensors can be used to recognize activities of interest in various medical applications, such as sleep studies. Recent advances in other domains, such as image recognition and natural language processing, have shown that unlabeled data can still be useful when self-supervised techniques such as contrastive learning are used to generate meaningful feature-space representations. Labeling data for Human Activity Recognition (HAR) and sleep disorder diagnosis (polysomnography) is difficult and requires trained professionals. In this work, we apply learned feature representation techniques to multimodal time-series data. By using signal-specific representations based on self-supervised and supervised learning, individual channels can be evaluated to determine whether they are likely to contribute to correct classification. The learned representation embeddings are then used to process each channel into a new feature space that serves as input to a neural network. This results in a better understanding of the importance of each signal modality, as well as the potential applicability of newer self-supervised techniques to time-series data.
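As a concrete illustration of the general recipe described in the abstract, the sketch below shows a minimal PyTorch version: a per-channel encoder is pretrained on unlabeled windows with a contrastive (NT-Xent / SimCLR-style) objective, and the resulting channel embeddings are concatenated to form the feature space fed into a supervised classifier. This is not the authors' implementation; the encoder architecture, the jitter augmentation, the embedding size, and the channel/class counts are all illustrative assumptions.

# Minimal sketch (not the paper's implementation): per-channel contrastive
# pretraining followed by embedding fusion for classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelEncoder(nn.Module):
    """1D-conv encoder mapping one sensor channel (B, 1, T) to an embedding (B, D)."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, emb_dim)

    def forward(self, x):
        h = self.net(x).squeeze(-1)      # (B, 64)
        return self.proj(h)              # (B, emb_dim)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D)
    sim = z @ z.t() / tau                                # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def jitter(x, sigma=0.05):
    """One common time-series augmentation: additive Gaussian noise."""
    return x + sigma * torch.randn_like(x)

# Self-supervised pretraining of one channel encoder (no labels used).
enc = ChannelEncoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
x = torch.randn(32, 1, 256)              # batch of one sensor channel, length 256
loss = nt_xent(enc(jitter(x)), enc(jitter(x)))
loss.backward(); opt.step()

# Fusion: concatenate frozen per-channel embeddings, train a supervised head.
encoders = [ChannelEncoder() for _ in range(3)]          # e.g. 3 sensor channels
multichannel = torch.randn(32, 3, 256)
with torch.no_grad():                                    # embeddings held fixed
    feats = torch.cat([encoders[c](multichannel[:, c:c+1]) for c in range(3)], dim=1)
classifier = nn.Linear(feats.size(1), 6)                 # e.g. 6 activity classes
logits = classifier(feats)

Freezing the pretrained encoders and training only the fusion head is one way to evaluate how much each channel's learned representation contributes to classification, in the spirit of the per-channel evaluation the abstract describes.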
