
Cite: Balcombe, L. Digital Mental Health Post COVID-19: The Era of AI Chatbots. Encyclopedia. Available online: https://encyclopedia.pub/entry/59565 (accessed on 28 February 2026).
Peer Reviewed
Digital Mental Health Post COVID-19: The Era of AI Chatbots

Digital mental health resources have expanded rapidly in the wake of the COVID-19 pandemic, offering new opportunities to improve access to mental healthcare through technologies such as AI chatbots, mobile apps, and online platforms. Despite this growth, significant challenges persist, including low user retention, limited digital literacy, unclear privacy regulations, and insufficient evidence of clinical effectiveness and safety. AI chatbots, which act as virtual therapists or companions, provide counseling and personalized support, but raise concerns about user dependence, emotional outcomes, privacy, ethical risks, and bias. User experiences are mixed: while some report enhanced social health and reduced loneliness, others question the safety, crisis response, and overall reliability of these tools, particularly in unregulated settings. Vulnerable and underserved populations may face heightened risks, highlighting the need for engagement with individuals with lived experience to define safe and supportive interactions. This review critically examines the empirical and grey literature on AI chatbot use in mental healthcare, evaluating their benefits and limitations in terms of access, user engagement, risk management, and clinical integration. Key findings indicate that AI chatbots can complement traditional care and bridge service gaps. However, current evidence is constrained by short-term studies and a lack of diverse, long-term outcome data. The review underscores the importance of transparent operations, ethical governance, and hybrid care models combining technological and human oversight. Recommendations include stakeholder-driven deployment approaches, rigorous evaluation standards, and ongoing real-world validation to ensure equitable, safe, and effective use of AI chatbots in mental healthcare.

Keywords: mental health; suicide prevention; emotionally intelligent AI chatbots; challenges; solutions; risk; regulation

Context

The mental health treatment gap has worsened since the COVID-19 pandemic, which triggered a digital transformation of mental healthcare due to changing social dynamics, the widespread use of smartphones, and the proliferation of digital mental health tools [1][2][3][4][5]. This technological shift has fundamentally altered how support is delivered by broadening accessibility and expanding reach for those seeking personalized mental health support.

Proliferation

Digital mental health tools—particularly those underpinned by Artificial Intelligence (AI) chatbots—are attractive because of their low cost, accessibility, and anonymity. However, the growing interest in AI chatbots has not yet translated into clinical benefits and improved outcomes for users (patients) with anxiety and depression [6]. Questions remain regarding their safe, engaging, and effective integration into existing models of care [7][8][9][10][11].
Despite there being more than 10,000 digital mental health resources globally, user retention remains persistently low, compounded by limited digital literacy, unclear privacy guidelines, unproven clinical efficacy and integration, and a lack of in-app human support [5][6]. AI chatbots operate in a largely unregulated environment, which exposes vulnerabilities, especially in underserved populations.

Mediators

Consequently, there are renewed calls for trained digital navigators to assist in the safe integration of technology into mental healthcare settings, driving engagement and supporting both the needs of the clinician and the patient [5][12]. Digital navigators are professionals who support the integration and effective use of technology in mental healthcare by improving digital literacy and ensuring that these tools are integrated into clinical practice. A recent trial found that changes in the level of support provided by digital navigators directly affect how effective schizophrenia apps are, meaning that standardized training is essential to reliably evaluate these tools [13].

Advanced Platforms

Generative AI (GenAI)-powered platforms, especially those using advanced Large Language Models (LLMs), are increasingly sought out for consulting about mental healthcare. However, these platforms often lack ongoing engagement and fall short of emotional intelligence and trauma-informed objectives in practice [14][15][16][17][18]. LLMs like GPT-4 have shown that they can generate coherent text, maintain conversational context, and perform advisory or counseling tasks, making them suitable for various mental health applications [19].
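The ability of LLMs to maintain conversational context typically rests on a simple mechanism: the accumulated message history is resent with every turn. The sketch below illustrates that pattern only; `generate_reply` is a stand-in for a real LLM backend, not an actual API, and a deployed mental health chatbot would additionally require safety filtering and crisis escalation.

```python
# Illustrative sketch: conversational context maintained as a growing
# message list. `generate_reply` is a placeholder, not a real LLM call.

def generate_reply(messages: list[dict]) -> str:
    """Stand-in for an LLM backend: reports how much context it received."""
    last_user = next(m["content"] for m in reversed(messages) if m["role"] == "user")
    return f"(model reply to {last_user!r}, with {len(messages)} messages of context)"

class ChatSession:
    def __init__(self, system_prompt: str):
        # The system prompt frames the assistant's role on every turn.
        self.messages = [{"role": "system", "content": system_prompt}]

    def send(self, user_text: str) -> str:
        self.messages.append({"role": "user", "content": user_text})
        reply = generate_reply(self.messages)  # full history passed each call
        self.messages.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession("You are a supportive, non-clinical wellbeing assistant.")
session.send("I've been feeling anxious before work.")
session.send("It started after I changed teams.")
# The second call sees the first exchange, so replies can stay in context.
```

Because the whole history is resent, context is bounded by the model's context window, which is one reason extended conversations are a distinct evaluation concern for these systems.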
Increasingly, AI is being deployed as “agents” and “assistants” through “therapist” and “companion” types via mobile apps, web platforms and social robots [20][21]. AI chatbots for mental healthcare generally come in rule-based, machine learning, and/or LLM systems. Functioning as autonomous agents, they assist with screening, prevention, monitoring, clinical assessment, treatment, emotional support and companionship.
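The rule-based end of this spectrum can be illustrated with a keyword-triage sketch. The keyword lists and response strings below are invented placeholders for illustration, not a validated screening instrument; real systems use clinically validated measures and human-reviewed crisis pathways.

```python
# Minimal rule-based triage sketch. All keywords and responses are
# invented placeholders, not clinical guidance.

CRISIS_TERMS = {"suicide", "kill myself", "end my life"}       # assumed examples
DISTRESS_TERMS = {"anxious", "depressed", "hopeless", "panic"}  # assumed examples

def triage(message: str) -> str:
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):
        # Rule-based systems hard-code escalation rather than generating it.
        return "crisis: escalate to human responder and share helpline details"
    if any(term in text for term in DISTRESS_TERMS):
        return "distress: offer coping resources and a screening questionnaire"
    return "general: continue supportive conversation"

print(triage("I feel anxious every morning"))
print(triage("Lately I just want to end my life"))
```

The contrast with LLM systems is that every branch here is explicit and auditable, which aids safety review but limits flexibility; ML and LLM approaches trade that transparency for broader conversational coverage.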

Challenges

Clinical evidence supporting AI-based therapy remains limited, and few firm conclusions can be drawn about its efficacy and safety in practice. For example, a narrative review of recent clinical studies on AI chatbots for anxiety and depression found them to be feasible and acceptable, but noted insufficient evidence of effectiveness, small and narrow samples, weak controls, and unexamined risks such as emotional dependence and parasocial relationships [6]. The first randomized controlled trial (RCT) of a GenAI therapy chatbot (Therabot) demonstrated moderate symptom improvement for major depressive disorder, generalized anxiety disorder, and eating disorders [22].
Public reactions to these technologies are mixed: while some appreciate the accessibility and affordability of AI mental health tools, others remain skeptical about their effectiveness, ethics, and safety [23]. This uncertainty echoes broader frustrations with current mental health systems as well as cautious optimism about the potential of AI chatbots as complementary resources. Complex issues require exploration, notably algorithmic bias and errors, privacy risks, and the challenge of integrating AI chatbots into existing care structures. Notably, some users perceive AI chatbots as exhibiting consciousness, a phenomenon referred to as "Seemingly Conscious AI" that is the subject of ongoing ethical debate [24]. This is particularly important to understand for sensitive settings like elder care, where user safety and meaningful, evidence-based support are critical [25][26].
While reviews and meta-analyses highlight the potential of technological innovation in mental health chatbots to improve outcomes across diverse settings, these tools are still largely untested in rigorous clinical efficacy trials [27][28][29][30][31][32][33]. The integration of AI chatbots into clinical practice remains inadequately studied, with limited evidence supporting their effectiveness, safety, and capacity to deliver nuanced, meaningful support. Most research to date relies on small samples and lacks rigorous evaluation of real-world risks. As a result, it is still unclear whether AI chatbots can reliably meet the complex needs of mental healthcare, especially in sensitive or high-risk scenarios.
The global prevalence of AI chatbot use for mental health support remains uncertain; however, data from Australian samples showed that 28% of community members and 43% of mental health professionals reported using AI for mental health purposes [34]. A 2025 survey of U.S. residents with ongoing mental health conditions found that nearly half used LLMs for psychological support in the past year—primarily for anxiety, personal advice, and depression—with most reporting improved mental health and high satisfaction [35]. Some users rated LLMs more beneficial than traditional therapy, though a minority experienced harmful responses.
Following input from 171 mental health experts, OpenAI shared findings that a significant number of ChatGPT-5 users displayed signs of psychosis, mania, or suicidal planning and intent [36]. The data also showed substantial improvements in the chatbot’s responses during crisis situations, with the most serious cases—such as suicide, psychosis, and over-reliance—being managed appropriately, and reliable support generally maintained in extended conversations. However, notable shortcomings remain in AI chatbot performance, underscoring the ongoing need to address user engagement, safety, and effectiveness when integrating these tools into healthcare and support systems.

Aim and Objectives

The aim of this review is to critically synthesize the current empirical and grey literature on AI chatbots in mental healthcare, evaluating their effectiveness, safety, and user engagement while identifying key challenges around clinical integration, ethical considerations, regulation, and the roles of digital navigators. By examining both the benefits and limitations of these technologies—including issues of access, digital literacy, and the management of potential risks—this review provides a foundation for understanding how AI chatbots can be responsibly developed and implemented to support diverse and vulnerable populations. In doing so, it clarifies the present landscape and outstanding questions, setting the stage for a focused discussion of the core problem underlying the use of AI chatbots in mental health.

References

  1. Balcombe, L.; De Leo, D. Digital Mental Health Amid COVID-19. Encyclopedia 2021, 1, 1047–1057.
  2. Lehtimaki, S.; Martic, J.; Wahl, B.; Foster, K.T.; Schwalbe, N. Evidence on Digital Mental Health Interventions for Adolescents and Young People: Systematic Overview. JMIR Ment. Health 2021, 8, e25847.
  3. Balcombe, L.; De Leo, D. The Potential Impact of Adjunct Digital Tools and Technology to Help Distressed and Suicidal Men: An Integrative Review. Front. Psychol. 2022, 12, 796371.
  4. Fischer-Grote, L.; Fössing, V.; Aigner, M.; Fehrmann, E.; Boeckle, M. Effectiveness of Online and Remote Interventions for Mental Health in Children, Adolescents, and Young Adults After the Onset of the COVID-19 Pandemic: Systematic Review and Meta-Analysis. JMIR Ment. Health 2024, 11, e46637.
  5. Choudhary, S.; Mehta, U.M.; Naslund, J.; Torous, J. Translating Digital Health into the Real World: The Evolving Role of Digital Navigators to Enhance Mental Health Access and Outcomes. J. Technol. Behav. Sci. 2025.
  6. Bodner, R.; Lim, K.; Schneider, R.; Torous, J. Efficacy and risks of artificial intelligence chatbots for anxiety and depression: A narrative review of recent clinical studies. Curr. Opin. Psychiatry 2026, 39, 19–25.
  7. Balcombe, L.; De Leo, D. Digital Mental Health Challenges and the Horizon Ahead for Solutions. JMIR Ment. Health 2021, 8, e26811.
  8. Denecke, K.; Abd-Alrazaq, A.; Househ, M. Artificial Intelligence for Chatbots in Mental Health: Opportunities and Challenges. In Multiple Perspectives on Artificial Intelligence in Healthcare; Springer: Berlin, Germany, 2021; pp. 115–128.
  9. Balcombe, L.; De Leo, D. Human-Computer Interaction in Digital Mental Health. Informatics 2022, 9, 14.
  10. Smith, K.A.; Blease, C.; Faurholt-Jepsen, M.; Firth, J.; Van Daele, T.; Moreno, C.; Carlbring, P.; Ebner-Priemer, U.W.; Koutsouleris, N.; Riper, H.; et al. Digital mental health: Challenges and next steps. BMJ Ment. Health 2023, 26, e300670.
  11. Siddals, S.; Torous, J.; Coxon, A. “It happened to be the perfect thing”: Experiences of generative AI chatbots for mental health. Npj Ment. Health Res. 2024, 3, 48.
  12. Wisniewski, H.; Gorrindo, T.; Rauseo-Ricupero, N.; Hilty, D.; Torous, J. The Role of Digital Navigators in Promoting Clinical Care and Technology Integration into Practice. Digit. Biomark. 2020, 4, 119–135.
  13. Ben-Zeev, D.; Tauscher, J.; Sandel-Fernandez, D.; Buck, B.; Kopelovich, S.; Lyon, A.R.; Chwastiak, L.; Marcus, S.C. Implementing mHealth for Schizophrenia in Community Mental Health Settings: Hybrid Type 3 Effectiveness-Implementation Trial. Psychiatr. Serv. 2025, 76, 1091–1098.
  14. Borghouts, J.; Pretorius, C.; Ayobi, A.; Abdullah, S.; Eikey, E.V. Editorial: Factors influencing user engagement with digital mental health interventions. Front. Digit. Health 2023, 5, 1197301.
  15. Boucher, E.M.; Raiker, J.S. Engagement and retention in digital mental health interventions: A narrative review. BMC Digit. Health 2024, 2, 52.
  16. Auf, H.; Svedberg, P.; Nygren, J.; Nair, M.; Lundgren, L.E. The Use of AI in Mental Health Services to Support Decision-Making: Scoping Review. J. Med. Internet Res. 2025, 27, e63548.
  17. Rahsepar Meadi, M.; Sillekens, T.; Metselaar, S.; van Balkom, A.; Bernstein, J.; Batelaan, N. Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review. JMIR Ment. Health 2025, 12, e60432.
  18. Yeh, P.-L.; Kuo, W.-C.; Tseng, B.-L.; Sung, Y.-H. Does the AI-driven Chatbot Work? Effectiveness of the Woebot app in reducing anxiety and depression in group counseling courses and student acceptance of technological aids. Curr. Psychol. 2025, 44, 8133–8145.
  19. Ni, Y.; Jia, F. A Scoping Review of AI-Driven Digital Interventions in Mental Health Care: Mapping Applications Across Screening, Support, Monitoring, Prevention, and Clinical Education. Healthcare 2025, 13, 1205.
  20. Balcombe, L. AI Chatbots in Digital Mental Health. Informatics 2023, 10, 82.
  21. Kabacińska, K.; Dosso, J.A.; Vu, K.; Prescott, T.J.; Robillard, J.M. Influence of User Personality Traits and Attitudes on Interactions With Social Robots: Systematic Review. Collabra Psychol. 2025, 11, 129175.
  22. Heinz, M.V.; Mackin, D.M.; Trudeau, B.M.; Bhattacharya, S.; Wang, Y.; Banta, H.A.; Jewett, A.D.; Salzhauer, A.J.; Griffin, T.Z.; Jacobson, N.C. Randomized Trial of a Generative AI Chatbot for Mental Health Treatment. NEJM AI 2025, 2, AIoa2400802.
  23. Khazanov, G.; Poupard, M.; Last, B.S. Public Responses to the First Randomized Controlled Trial of a Generative Artificial Intelligence Mental Health Chatbot. Psychiatr. Serv. 2025, 77, 84.
  24. Scammell, R. Microsoft AI CEO Says AI Models That Seem Conscious Are Coming. Here’s Why He’s Worried. Business Insider via MSN. 2025. Available online: https://www.msn.com/en-au/news/techandscience/microsoft-ai-ceo-says-ai-models-that-seem-conscious-are-coming-here-s-why-he-s-worried/ar-AA1KSzUs (accessed on 21 August 2025).
  25. De Freitas, J.; Uğuralp, A.K.; Oğuz-Uğuralp, Z.; Puntoni, S. Chatbots and mental health: Insights into the safety of generative AI. J. Consum. Psychol. 2023, 34, 481–491.
  26. Moylan, K.; Doherty, K. Expert and Interdisciplinary Analysis of AI-Driven Chatbots for Mental Health Support: Mixed Methods Study. J. Med. Internet Res. 2025, 27, e67114.
  27. Li, H.; Zhang, R.; Lee, Y.-C.; Kraut, R.E.; Mohr, D.C. Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. Npj Digit. Med. 2023, 6, 236.
  28. Casu, M.; Triscari, S.; Battiato, S.; Guarnera, L.; Caponnetto, P. AI Chatbots for Mental Health: A Scoping Review of Effectiveness, Feasibility, and Applications. Appl. Sci. 2024, 14, 5889.
  29. Guo, Z.; Lai, A.; Thygesen, J.H.; Farrington, J.; Keen, T.; Li, K. Large Language Models for Mental Health Applications: Systematic Review. JMIR Ment. Health 2024, 11, e57400.
  30. Olawade, D.B.; Wada, O.Z.; Odetayo, A.; David-Olawade, A.C.; Asaolu, F.; Eberhardt, J. Enhancing mental health with Artificial Intelligence: Current trends and future prospects. J. Med. Surg. Public Health 2024, 3, 100099.
  31. Hua, Y.; Siddals, S.; Ma, Z.; Galatzer-Levy, I.; Xia, W.; Hau, C.; Na, H.; Flathers, M.; Linardon, J.; Ayubcha, C.; et al. Charting the evolution of artificial intelligence mental health chatbots from rule-based systems to large language models: A systematic review. World Psychiatry Off. J. World Psychiatr. Assoc. (WPA) 2025, 24, 383–394.
  32. Tamrin, S.I.; Omar, N.F.; Ngah, R.; Bakhodirovich, G.S.; Absamatovna, K.G. The Applications of AI-Powered Chatbots in Delivering Mental Health Support: A Systematic Literature Review. In Conference on Internet of Things and Smart Spaces, International Conference on Next Generation Wired/Wireless Networking; Springer: Cham, Switzerland, 2026; Volume 15555.
  33. Mayor, E. Chatbots and mental health: A scoping review of reviews. Curr. Psychol. 2025, 44, 13619–13640.
  34. Cross, S.; Bell, I.; Nicholas, J.; Valentine, L.; Mangelsdorf, S.; Baker, S.; Titov, N.; Alvarez-Jimenez, M. Use of AI in Mental Health Care: Community and Mental Health Professionals Survey. JMIR Ment. Health 2024, 11, e60589.
  35. Rousmaniere, T.; Zhang, Y.; Li, X.; Shah, S. Large language models as mental health resources: Patterns of use in the United States. Pract. Innov. 2025.
  36. OpenAI. Strengthening ChatGPT’s Responses in Sensitive Conversations. 2025. Available online: https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/ (accessed on 2 December 2025).
Information

Subjects: Psychology
Contributor: Luke Balcombe
Online Date: 27 Feb 2026