References on AI in Mental Health
- Chen, Y. et al. SoulChat: Improving LLMs’ empathy, listening, and comfort abilities through fine-tuning with multi-turn empathy conversations. In Findings of the Association for Computational Linguistics: EMNLP 2023 (eds Bouamor, H. et al.) 1170–1183 (Association for Computational Linguistics, 2023).
- Lawrence, H. R. et al. The opportunities and risks of large language models in mental health. JMIR Ment. Health 11, 59479 (2024).
- Li, H., Zhang, R., Lee, Y.-C., Kraut, R. E. & Mohr, D. C. Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digit. Med. 6, 236 (2023).
- He, Y. et al. Conversational agent interventions for mental health problems: systematic review and meta-analysis of randomized controlled trials. J. Med. Internet Res. 25, 43862 (2023).
- Yang, Y., Viranda, T., Van Meter, A. R., Choudhury, T. & Adler, D. A. Exploring opportunities to augment psychotherapy with language models. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, CHI EA ’24 (Association for Computing Machinery, 2024).
- Li, H. & Zhang, R. Finding love in algorithms: deciphering the emotional contexts of close encounters with AI chatbots. J. Comp. Mediat. Commun. 29, 015 (2024).
- Schäfer, L. M., Krause, T. & Köhler, S. Exploring user characteristics, motives and expectations and therapeutic alliance in the mental health conversational AI Clare®: a baseline study. Front. Digital Health 7, 1576135 (2025).
- Tanaka, H., Negoro, H., Iwasaka, H. & Nakamura, S. Embodied conversational agents for multimodal automated social skills training in people with autism spectrum disorders. PLoS ONE 12, 0182151 (2017).
- Rathnayaka, P. et al. A mental health chatbot with cognitive skills for personalised behavioural activation and remote health monitoring. Sensors 22, 3653 (2022).
- Fitzpatrick, K. K., Darcy, A. & Vierhile, M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment. Health 4, 7785 (2017).
Exploring the Role of Language Models in Mental Health Support
In recent years, large language models (LLMs) have emerged as powerful tools across many domains, including mental health. A growing body of research has explored how these models can be fine-tuned and deployed to offer empathetic, effective support. This article surveys key findings in this area, highlighting their implications for mental health practice and the ethical questions they raise.
Enhancing Empathy through Fine-tuning
Chen et al. (2023) show that an LLM’s empathy, listening, and comfort abilities can be improved by fine-tuning on multi-turn empathy conversations. Their system, SoulChat, underscores the importance of empathetic dialogue in therapeutic settings and suggests that tailored training can sharpen an AI’s responsiveness and emotional attunement. By incorporating real-world empathetic interactions into the training data, an LLM can learn to respond in ways that feel more supportive and understanding to users seeking mental health assistance.
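Concretely, this kind of training data is usually rendered into the chat format the base model expects before supervised fine-tuning. The sketch below is a minimal illustration of that data-preparation step, not SoulChat’s actual pipeline; the base model name and the example dialogue are placeholders.

```python
# Minimal sketch (not SoulChat's actual pipeline): render one multi-turn
# empathy conversation into a chat model's expected text format, so it can
# serve as a single supervised fine-tuning example.
from transformers import AutoTokenizer

# Placeholder base model; any chat-tuned checkpoint with a chat template works.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Illustrative multi-turn dialogue: alternating help-seeker / supporter turns.
conversation = [
    {"role": "user", "content": "I've been feeling overwhelmed at work lately."},
    {"role": "assistant", "content": "That sounds exhausting. What part of it weighs on you most?"},
    {"role": "user", "content": "Mostly the sense that nothing I do is enough."},
    {"role": "assistant", "content": "It makes sense you'd feel drained when your effort seems to go unseen."},
]

# Apply the model's chat template; the resulting string (with the loss usually
# masked on user turns) becomes one training example for fine-tuning.
text = tokenizer.apply_chat_template(conversation, tokenize=False)
print(text)
```

Fine-tuning itself would then run a standard supervised trainer over many such rendered conversations.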
Opportunities and Risks in Mental Health
A comprehensive review by Lawrence et al. (2024) weighs both sides of LLMs in mental health, recognizing their potential benefits alongside their inherent risks. The authors argue that while these technologies can make mental health support more accessible, they also raise concerns such as the accuracy of the information provided and the risk of users becoming dependent on AI for emotional support.
AI-Based Conversational Agents: A Meta-Analysis
Li et al. (2023) conduct a systematic review and meta-analysis of AI-based conversational agents aimed at promoting mental health and well-being. Their findings support the potential of these tools to deliver tailored support and behavioral nudges, but they also point to the need for rigorous evaluation of therapeutic efficacy. He et al. (2023) add evidence from a systematic review and meta-analysis of randomized controlled trials, strengthening the case for the clinical applicability of conversational agents.
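To make the meta-analytic machinery concrete, the snippet below pools a handful of made-up standardized mean differences with a DerSimonian–Laird random-effects model. The numbers are purely illustrative and are not drawn from Li et al. (2023) or He et al. (2023).

```python
# Illustrative random-effects meta-analysis (DerSimonian-Laird pooling).
# Effect sizes and variances below are invented for demonstration only.
import numpy as np

effects = np.array([0.30, 0.45, 0.10, 0.60])    # per-study standardized mean differences
variances = np.array([0.02, 0.05, 0.01, 0.08])  # per-study sampling variances

w = 1.0 / variances                              # fixed-effect (inverse-variance) weights
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q heterogeneity statistic
df = len(effects) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance

w_re = 1.0 / (variances + tau2)                  # random-effects weights
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled g = {pooled:.2f}, 95% CI = [{pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f}]")
```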
The Augmentation of Traditional Psychotherapy
Shifts in therapy practice may be on the horizon: Yang et al. (2024) explore opportunities to augment traditional psychotherapy with LLMs. Their work discusses how AI could be integrated into therapeutic settings to extend therapist capabilities and enable more personalized treatment strategies.
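One frequently discussed augmentation pattern keeps the clinician in the loop: the model drafts routine artifacts, such as a session summary, which the therapist reviews and edits before anything reaches a client. The sketch below is a hypothetical illustration of that pattern, not a method from Yang et al. (2024); the model name, prompt, and notes are all placeholders.

```python
# Hypothetical therapist-in-the-loop sketch: an LLM drafts a session summary
# for clinician review. Not a pipeline described by Yang et al. (2024).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

session_notes = (
    "Client reported improved sleep; practiced thought records twice; "
    "avoided one feared social event."
)

draft = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": "You draft concise CBT session summaries "
                                      "for a licensed therapist to review. Do not give advice."},
        {"role": "user", "content": f"Summarize these de-identified session notes "
                                    f"in three bullet points:\n{session_notes}"},
    ],
)
print(draft.choices[0].message.content)  # clinician reviews and edits before any use
```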
Understanding AI in Mental Health Contexts
As AI technologies become more deeply integrated into mental health care, measuring user experience, as highlighted by Schäfer et al. (2025), becomes crucial. Understanding users’ motives, expectations, and therapeutic alliance with a system can help designers build AI agents that foster stronger connections with the people they support.
Relatedly, Li and Zhang (2024) investigate the emotional contexts of engagements with AI chatbots, probing how these interactions influence users’ emotional states and their perceptions of technological companionship.
Ethical Considerations
The ethical implications of utilizing AI in mental health support cannot be overlooked. As highlighted by Mohr and Woodhouse (2024), direct-to-consumer AI applications raise questions surrounding accountability, data protection, and informed consent. The balance between innovative support and ethical integrity remains a pivotal concern for developers and mental health professionals alike.
Conclusion
As large language models continue to evolve, their application in mental health offers promising avenues for support and intervention. Embracing this technology, however, requires careful consideration of its ethical implications and a commitment to user welfare. Continued research, including rigorous trials and meta-analyses, can guide responsible practices that prioritize well-being while leveraging AI’s potential.
The future of mental health support may be intricately tied to advancements in AI, but the human touch, empathy, and ethical grounding will always remain essential components of effective care.