With the growing number of deep learning applications in recent years, the demand for model training, and the corresponding energy consumption, has grown as well. This work investigates reducing the energy consumed during the training of Multilayer Perceptron (MLP) models with Fully Connected (FC) layers. To this end, three methods with novel implementations are used to reduce model size: matrix factorization, matrix dimensionality reduction, and Structured Pruning (SP). The methods were tested on various cases and datasets. The results show that all three methods reduce the model size and its energy consumption during training with little or no effect on final accuracy. The SP method performs best on small datasets, which require fewer epochs and less energy to reach the desired accuracy, while the matrix dimensionality reduction-based method performs better on large datasets. However, the efficiency of all methods converges as the number of training epochs grows.
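To illustrate the size-reduction idea behind one of the listed methods, below is a minimal sketch of compressing an FC layer's weight matrix via low-rank matrix factorization. The dimensions, rank, and truncated-SVD approach are illustrative assumptions, not the paper's specific implementation.

```python
import numpy as np

# Hypothetical FC layer dimensions and factorization rank (not from the paper).
out_dim, in_dim, r = 256, 512, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((out_dim, in_dim))  # stand-in for a trained weight matrix

# Truncated SVD yields the best rank-r approximation of W, so the
# single large matrix W is replaced by two smaller factors U and V.
U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * s[:r]   # shape (out_dim, r)
V = Vt[:r, :]               # shape (r, in_dim)

# Fewer parameters means fewer multiply-accumulate operations per
# forward/backward pass, which is the lever for energy savings.
params_full = W.size                 # 256 * 512 = 131072
params_factored = U.size + V.size    # 256*16 + 16*512 = 12288
print(params_full, params_factored)
```

In a network, the layer `x @ W.T` would be replaced by `x @ (U @ V).T`, i.e. two thin matrix products, trading a small approximation error for a roughly 10x parameter reduction at this rank.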