Incorporating incremental learning into a model's training process is crucial for ensuring reliable performance in real-world conditions, especially when the model was trained on a data distribution different from the one it encounters after deployment. In this paper, we address the domain adaptation challenge in deep learning models for olive disease detection. Specifically, we focus on adapting a deployed model to real-world conditions that differ from the original training environment. However, the lack of available and realistic datasets poses a significant challenge. To address this gap, we create a synthetic out-of-domain dataset that closely mimics data collected in real-world scenarios, enabling us to assess a model's adaptation capability. We then leverage fine-tuning for incremental learning and conduct an experimental evaluation of three fine-tuning strategies applied to our incremental learning approach. After identifying the most suitable strategy, we evaluate the performance of the deep learning models involved at each stage of the incremental learning scheme. Our experiments demonstrate the impact of the identified fine-tuning strategy at each stage of the incremental learning process for every model involved.
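The abstract does not name the three fine-tuning strategies that were compared. As a hedged illustration only, the sketch below shows how an incremental-learning stage could apply three common variants (classifier-head only, partial unfreezing, and full fine-tuning) to a pretrained backbone in PyTorch; the ResNet-18 backbone, layer names, and hyperparameters are assumptions for this example, not the authors' actual setup.

```python
# Hypothetical sketch: one incremental-learning stage that fine-tunes a
# previously deployed classifier on newly collected (out-of-domain) data.
# The strategy names and model choice are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models


def build_model(num_classes: int) -> nn.Module:
    # A pretrained backbone stands in for the originally deployed model.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


def configure_finetuning(model: nn.Module, strategy: str) -> list:
    """Freeze/unfreeze parameters according to the chosen strategy."""
    if strategy == "head_only":
        # Variant 1: train only the classification head.
        for p in model.parameters():
            p.requires_grad = False
        for p in model.fc.parameters():
            p.requires_grad = True
    elif strategy == "partial":
        # Variant 2: unfreeze the last residual block and the head.
        for p in model.parameters():
            p.requires_grad = False
        for p in model.layer4.parameters():
            p.requires_grad = True
        for p in model.fc.parameters():
            p.requires_grad = True
    elif strategy == "full":
        # Variant 3: update all weights on the new-domain data.
        for p in model.parameters():
            p.requires_grad = True
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return [p for p in model.parameters() if p.requires_grad]


def incremental_step(model, loader, strategy, epochs=3, lr=1e-4, device="cpu"):
    """Fine-tune the deployed model on one batch of newly gathered data."""
    params = configure_finetuning(model, strategy)
    optimizer = torch.optim.Adam(params, lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```

In such a scheme, each stage of the incremental learning process would call `incremental_step` with the strategy found to perform best, so the deployed model is progressively adapted as new real-world data becomes available.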