Online platforms such as Reddit have become significant spaces for public discussions on mental health, offering valuable insights into psychological distress and support-seeking behaviors. Large Language Models (LLMs) have emerged as powerful tools for analyzing these discussions, enabling the identification of mental health trends, crisis signals, and potential interventions. This work develops an LLM-based topic modeling framework tailored to domain-specific mental health discourse, uncovering latent themes within user-generated content. Additionally, an interactive and interpretable visualization system is designed to allow users to explore the data at various levels of granularity, enhancing the understanding of mental health narratives. This approach aims to bridge the gap between large-scale AI analysis and human-centered interpretability, contributing to more effective and responsible mental health insights on social media.