Machine learning, especially deep learning, has become essential in many application domains. However, building and using deep neural networks often faces resource-related limitations: data is often proprietary, model training can be costly, and deploying trained models may be constrained by limited computation or storage. Transfer learning offers a way to overcome these constraints by "transferring" a model from a source domain to a target domain, potentially in a different context. Such transfers take various forms: a model can be adapted with minor structural changes (e.g., "fine-tuning"), reduced in size (e.g., "knowledge distillation"), or retrained on modified training and testing datasets (e.g., "domain adaptation"). This paper first motivates, through a literature review, the need for a generic definitional framework and implementation support for transfer learning. We then introduce Generic Transfer Learning (GTL), our proposal for such a framework. GTL supports the declarative definition of transfers through network transformations and dataset manipulations, and includes corresponding Python implementation support. Finally, we present a case study in the health domain demonstrating how to define and implement a transfer using GTL.
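To make the "fine-tuning" form of transfer mentioned above concrete, here is a minimal sketch in plain PyTorch/torchvision. It is an illustration only, not GTL's actual API, which the paper defines; the model choice and `num_target_classes` are placeholder assumptions.

```python
# Fine-tuning as a minor structural change: a pretrained source model is
# adapted to a target task by replacing its final layer and retraining
# only the new parameters. Illustrative sketch, not GTL's API.
import torch
import torch.nn as nn
from torchvision import models

# Source model: weights learned on the source domain (here, ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred parameters so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Swap the classifier head for the target task; num_target_classes is a
# placeholder for the target domain's label count.
num_target_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Knowledge distillation and domain adaptation follow the same pattern at a higher level: instead of editing one layer, they respectively train a smaller student network against the source model's outputs, or retrain the model on datasets drawn from the target domain.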