The goal of this work is to combine existing convolutional layers (CLs) to design a computationally efficient Convolutional Neural Network (CNN) for image classification tasks. The current limitations of CNNs in terms of memory requirements and computational cost have driven the demand for simplifying their architecture. This work investigates the use of two consecutive CLs with 1-D filters to replace one layer with a full-rank set of 2-D filters. First, we provide the mathematical formalism, derive the properties of the equivalent tensor, and calculate the rank of the tensor's slices in closed form. We apply this architecture, with several parameterizations, to the well-known AlexNet without transfer learning, and evaluate it on three different image classification tasks against the original architecture. The results show that, for most parameterizations, the achieved reduction in dimensionality, which yields lower computational complexity and cost, maintains equivalent or even marginally better classification accuracy.
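The core idea of replacing a 2-D filter with two consecutive 1-D filters can be illustrated with a minimal numerical sketch. The snippet below (an illustration, not the paper's implementation; the filter values and the naive `conv2d_valid` helper are hypothetical) shows that a rank-1 2-D kernel, formed as the outer product of a vertical and a horizontal 1-D filter, produces the same output whether applied as one 2-D convolution or as two consecutive 1-D convolutions:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid'-mode 2-D cross-correlation (hypothetical helper)."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# Rank-1 (separable) 2-D kernel: outer product of two 1-D filters.
v = np.array([1.0, 2.0, 1.0])    # vertical 1-D filter (example values)
h = np.array([-1.0, 0.0, 1.0])   # horizontal 1-D filter (example values)
k2d = np.outer(v, h)             # equivalent full 2-D kernel, rank 1

x = np.random.default_rng(0).standard_normal((8, 8))

# One layer with a 2-D filter: kh*kw multiply-adds per output pixel.
full = conv2d_valid(x, k2d)

# Two consecutive 1-D convolutions: kh + kw multiply-adds per output pixel.
step1 = conv2d_valid(x, v[:, None])      # vertical pass
sep = conv2d_valid(step1, h[None, :])    # horizontal pass

assert np.allclose(full, sep)  # identical outputs at lower cost
```

For a k x k kernel this reduces the per-pixel cost from k^2 to 2k multiply-adds, which is the source of the dimensionality and complexity reduction the abstract refers to; a general (full-rank) 2-D filter bank is only approximated by such pairs, which is why the paper analyzes the rank of the equivalent tensor's slices.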