Deploying deep learning (DL) models onto low-power devices for Human Activity Recognition (HAR) is gaining momentum because of the pervasive adoption of wearable sensor devices. However, the outcome of such deployments needs exploration, not only because the topic is still in its infancy, but also because of the wide range of combinations of low-power devices, deep models, and available deployment strategies. We have investigated the application of three compression techniques, namely lite conversion, dynamic quantization, and full-integer quantization, that allow the deployment of deep models on low-power devices. This paper describes how these three compression techniques impact accuracy and energy consumption on an ESP32 device. In terms of accuracy, full-integer quantization incurs an accuracy drop between 2% and 3%, whereas dynamic quantization and lite conversion result in a negligible accuracy drop. In terms of power efficiency, dynamic and full-integer quantization save almost 30% of energy. We recommend adopting one of these two quantization techniques to obtain an executable network model, and we advise dynamic quantization given its negligible accuracy drop.