Employee satisfaction surveys are crucial tools for assessing the well-being and engagement of a company's employees. In large organizations, however, analyzing survey responses is time-consuming, costly, and requires domain expertise. Large language models (LLMs) have demonstrated strong performance on numerous natural language processing tasks, making them a promising solution for automating employee feedback analysis. In this paper, we investigate the potential of LLMs for emotion analysis in employee satisfaction surveys. Our approach leverages various in-context learning methods to adapt LLMs to this task. In-context learning provides a cost-effective alternative for LLM adaptation, as it requires fewer computational and data resources than training or fine-tuning. Our experiments encompass ten LLMs, several in-context learning techniques, and four datasets for emotion analysis. Our results demonstrate that while in-context learning does not outperform fine-tuning, it offers an efficient and practical solution for organizations lacking annotated data.
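To make the in-context learning setup concrete, the sketch below assembles a few-shot classification prompt for a survey response. This is a minimal illustration only: the emotion labels, demonstration examples, and prompt layout are assumptions for exposition, not the paper's actual configuration, and the resulting prompt would be passed to an LLM of choice.

```python
# Hypothetical few-shot (in-context learning) prompt for emotion analysis
# of employee survey responses. Labels and examples are illustrative.

FEW_SHOT_EXAMPLES = [
    ("I feel valued and supported by my manager.", "joy"),
    ("The constant reorganizations make me anxious.", "fear"),
    ("Nothing ever changes despite our feedback.", "anger"),
]

def build_emotion_prompt(response: str, examples=FEW_SHOT_EXAMPLES) -> str:
    """Assemble a few-shot emotion-classification prompt for one response."""
    parts = ["Classify the emotion expressed in each employee survey response."]
    for text, label in examples:
        parts.append(f"Response: {text}\nEmotion: {label}")
    # The unlabeled query goes last; the LLM completes the final "Emotion:" slot.
    parts.append(f"Response: {response}\nEmotion:")
    return "\n\n".join(parts)
```

Because the demonstrations are supplied at inference time, no model weights are updated, which is what makes the approach attractive when annotated training data is scarce.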