A Modern Village: AI and Suicide Prevention
Dr. Jared Ng and Dr. Sharmili Roy
- Dr. Jared Ng is a senior consultant and Medical Director at Connections MindHealth. As the founding Chief of the Department of Emergency & Crisis Care at the Institute of Mental Health, he has deep expertise and experience in suicide risk assessment and suicide prevention.
- Dr. Sharmili Roy holds a PhD in AI and is the co-founder of Zoala, a mental health tech startup that develops adolescent-centred wellness solutions using AI, mobile, and digital technologies.
The role of artificial intelligence (AI) in mental health care, particularly in suicide prevention, has garnered significant attention in recent years. Various AI chat tools, such as Woebot, Wysa, and even ChatGPT, are available in the market, offering support and assistance to individuals in distress. These tools represent a new frontier in mental health care, leveraging technology to provide immediate, accessible support.
In the digital age, where communication via messaging is often more natural than verbal interaction, people, especially the younger generation, are likely to find AI-driven support more comfortable and accessible, both in terms of availability and cost. This shift towards “typed” communication aligns with a generation that has grown up in a digital world, seamlessly integrating technology into their daily lives.
The widespread adoption of chatbots further underscores this trend. Roughly 1.5 billion people are using chatbots, and about 88% of customers have had at least one conversation with a chatbot within the past year. The healthcare chatbots market is expected to reach $543 million by 2026, and healthcare chatbots are estimated to save as much as $3.6 billion globally. These figures indicate the growing reliance on and potential of AI-driven chat tools in healthcare, including mental health services.
While the growing market for healthcare chatbots is promising, this technological advancement is not without its challenges. The complexities of human emotions and the multi-faceted nature of suicide present unique obstacles. Recent incidents, such as the one where a Belgian man reportedly died by suicide after talking to an AI chatbot, highlight the delicate balance that must be maintained. While AI offers unprecedented accessibility and immediacy, it also raises critical questions about empathy, understanding, and ethical responsibility in mental health care.
In this paper, we envision what the future of technology for suicide prevention could look like and suggest that multi-disciplinary collaboration between mental health practitioners, AI practitioners, and regulators is key to harnessing the power of this technology to meet the rising demands of mental wellbeing, distress management, and suicide prevention.
The Curious Case of AI in Suicide Prevention
Suicide is a deeply complex issue, often rooted in a combination of psychological, social, and biological factors. The challenge of understanding and preventing suicide is not merely a matter of identifying symptoms but involves unravelling an intricate web of personal struggles, societal pressures, and mental health conditions.
There are multiple factors to consider when assessing suicidality. Examples include:
- Interactions of the Risk Factors and (Lack of) Protective Factors: Understanding how risk factors interact with each other and the absence of protective elements is crucial in assessing suicide risk.
- Stressors and Triggers: These can be long-term, short-term, or even random and unpredictable, adding complexity to the issue.
- Cultural Factors: Different cultures may have unique perspectives and stigmas related to mental health and suicide.
- Individual Vulnerability: Different individuals may have brief periods of heightened vulnerability, making universal assessment challenging.
- Availability of Means: Access to means of self-harm can significantly influence the risk.
Given these complexities, AI can offer a unique advantage through its ability to compute information from various sources over different periods of time. For instance, consider a hypothetical user, George, who has been using an AI mental health chatbot for six months. Over this period, the AI learns how George likes to express himself through empathic, generative-AI-driven conversations. Text analytics and natural language processing can track subtle changes in George’s language, tone, word choice, and the timing of his messages. The AI can detect that George’s messages have increasingly included words like “hopeless,” “trapped,” and “worthless,” especially during late-night hours. It can then correlate such changes with specific life events George has mentioned, such as losing a job or relationship difficulties.
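To make this concrete, the sketch below shows, in simplified form, the kind of signal tracking described above. It is purely illustrative: the keyword list, the “late-night” window, and the message structure are our own assumptions, not a description of any deployed system, and a real system would rely on clinically validated instruments and full language models rather than a keyword list.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical lexicon of risk-related terms; a deployed system would use
# clinically validated measures and NLP models, not a simple word list.
RISK_TERMS = {"hopeless", "trapped", "worthless", "burden"}
LATE_NIGHT_HOURS = range(0, 5)  # midnight to 4 a.m., an assumed window

@dataclass
class Message:
    text: str
    sent_at: datetime

def language_signals(message: Message) -> dict:
    """Extract simple risk-related signals from a single chat message."""
    words = {w.strip(".,!?").lower() for w in message.text.split()}
    return {
        "risk_terms": sorted(words & RISK_TERMS),
        "late_night": message.sent_at.hour in LATE_NIGHT_HOURS,
    }

# Example: one of George's late-night messages
msg = Message("I feel trapped and worthless again", datetime(2024, 3, 1, 2, 30))
print(language_signals(msg))
# {'risk_terms': ['trapped', 'worthless'], 'late_night': True}
```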
Imagine if George inquired about writing a will three months ago and asked for information on grief counselling six months ago. These inquiries, when combined with his recent language patterns and life events, create a multi-dimensional picture of George’s mental state. By synthesizing this data, the AI program could flag George as a high-risk individual for suicide, even before he explicitly states any suicidal thoughts. This pre-emptive identification could be invaluable, especially if it triggers a more immediate human intervention, like activating George’s safety net and connecting him to a mental health professional or a suicide helpline.
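Continuing the illustration, the synthesis step might combine such longitudinal signals into a single flag that hands the case to a human responder. The signal names, weights, and threshold below are entirely hypothetical; in practice they would be set with clinicians rather than hard-coded.

```python
# Hypothetical longitudinal signals drawn from George's six months of use.
signals = {
    "rising_risk_language": True,       # "hopeless", "trapped", "worthless" trending up
    "late_night_messaging": True,       # timing shift noted above
    "recent_life_stressor": True,       # job loss / relationship difficulties mentioned
    "asked_about_will": True,           # inquiry three months ago
    "asked_about_grief_support": True,  # inquiry six months ago
}

# Illustrative weights and threshold only; a real system would calibrate
# these with mental health professionals and validated risk research.
WEIGHTS = {
    "rising_risk_language": 3,
    "late_night_messaging": 1,
    "recent_life_stressor": 2,
    "asked_about_will": 3,
    "asked_about_grief_support": 1,
}
ESCALATION_THRESHOLD = 6

def assess(current_signals):
    score = sum(WEIGHTS[name] for name, present in current_signals.items() if present)
    if score >= ESCALATION_THRESHOLD:
        # The AI does not intervene on its own: it activates the safety net,
        # e.g. connecting the user to a professional or a suicide helpline.
        return "high_risk: escalate to human responder"
    return "continue supportive conversation and monitoring"

print(assess(signals))  # high_risk: escalate to human responder
```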
Some may argue that AI’s limited understanding of context and underlying emotions can lead to misinterpretations. For example, if George were to suddenly start using more positive language, the AI might incorrectly assume that his risk has decreased, not considering that some individuals express a sense of relief once they have made the decision to take their own lives. Machines may miss the nuances of such emotional expression, but these limitations can be overcome by grounding AI in the science of emotional cognizance, which can only be achieved through close collaboration between AI and mental health practitioners.
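One way this clinical insight could be encoded, again purely as an illustration under assumed window sizes and thresholds, is a guard rule that treats an abrupt swing from sustained negative to positive language as a reason for human review rather than automatic de-escalation:

```python
def review_needed(sentiment_history, latest):
    """Flag an abrupt negative-to-positive shift for human review.

    sentiment_history: recent per-message sentiment scores in [-1, 1].
    latest: sentiment of the newest message.
    A sustained low baseline followed by a sudden jump is NOT treated as
    reduced risk; a clinician-informed rule routes it to a human instead.
    """
    if len(sentiment_history) < 5:
        return False  # not enough context to judge a shift
    baseline = sum(sentiment_history[-5:]) / 5
    return baseline < -0.4 and latest > 0.5

# George's last messages were strongly negative, then suddenly upbeat:
history = [-0.7, -0.6, -0.8, -0.5, -0.6]
print(review_needed(history, 0.8))  # True -> do not lower risk automatically
```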
Collaborative Intelligence: The Future for AI in Suicide Prevention
Using George’s story as a lens, we can already see the opportunities and challenges facing the use of AI as a tool for suicide prevention. On one hand, AI offers unparalleled accessibility, data-driven insights, and scalability, serving as a lifeline for those in acute emotional distress and filling critical gaps in traditional mental health services. It can analyse extensive datasets over long periods, capturing trends and risk factors that may elude human therapists. On the other hand, AI’s limitations in emotional intelligence and contextual understanding can lead to potentially dangerous misinterpretations. Ethical concerns around data privacy and the potential for over-reliance on technology further complicate its role. Therefore, while AI holds significant promise, its effective and ethical integration into mental health care requires a collaborative approach that leverages the strengths of both machine and human expertise.
Hence, we believe that a human-AI collaboration approach is critical to harnessing the scale that AI brings while respecting the high-risk nature of the mental health setting. Below, we envision the key dimensions of this collaborative intelligence framework:
Central to this collaborative approach is a strong ethical foundation. We explore how ethics play a pivotal role in shaping the interaction between AI and human expertise.
1. Ethics at the core:
While ethics provide the framework, the practical application of AI in providing ‘always-on’ mental health care is equally crucial. We examine how this 24/7 availability can be a game-changer in the realm of suicide prevention and mental health care.
2. Always-on care:
Beyond availability, the true power of AI lies in its ability to analyse vast amounts of data for personalised, evidence-based therapy. We delve into how this data-driven approach can revolutionise mental health care.
3. Data-driven Therapy:
While personalised, data-driven therapy is transformative, it is also essential to address the economic aspects of mental health care. We believe that AI can make mental health care more cost-efficient without compromising quality.
4. Cost efficiency of care:
Conclusion: A Village Reimagined
As cliché as it might sound, it takes a village to prevent suicides. In the intricate and sensitive landscape of mental health and suicide prevention, it is increasingly evident that a multi-pronged, interdisciplinary approach is not just beneficial—it is essential. The traditional “village” of caregivers, mental health professionals, and support networks now has a new, albeit digital, inhabitant: artificial intelligence. This technological entity doesn’t supplant the human elements of empathy, understanding, and clinical expertise; rather, it complements them, offering a layer of immediacy and accessibility that can be lifesaving.
The imperative now is to integrate this AI component responsibly into our existing healthcare frameworks. This means tech companies must move beyond token collaborations with mental health professionals, engaging them in meaningful partnerships that shape the development and deployment of these AI tools. Ethical considerations, particularly around data privacy and user consent, must be more than just checkboxes; they should be foundational elements of any AI-driven mental health initiative. And when setbacks occur, as they inevitably will in any evolving field, our response must be measured and constructive, aimed at refining and improving the technology rather than hastily dismissing it.
In this modern “village”, every member has a role to play. From the AI developers and data scientists to the mental health professionals; from policymakers to each one of us as potential users, we all contribute to the efficacy and ethicality of this emerging landscape. The goal remains unchanged: to create a more effective, compassionate, and comprehensive mental health care system. But our toolkit is expanding, and it now includes algorithms alongside empathy, data alongside compassion.
Let us proceed with both caution and optimism, recognizing that our village for suicide prevention has grown. It is a village that now includes not just human caregivers but also their digital counterparts, each with a unique but interconnected role in safeguarding our collective mental well-being.
References:
- The Future of Chatbots: 80+ Chatbot Statistics for 2023 (https://www.tidio.com/blog/chatbot-statistics/)
- Healthcare Chatbots Market Size Worth USD 543.65 Million by 2026 at 19.5% CAGR: Report by Market Research Future (MRFR) (https://www.globenewswire.com/en/news-release/2021/07/21/2266698/0/en/Healthcare-Chatbots-Market-%20Size-Worth-USD-543-65-Million-by-2026-at-19-5-CAGR-Report-by-Market-Research-Future-MRFR.html)
- ‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says (https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says)
- Lejeune A, Le Glaz A, Perron PA, Sebti J, Baca-Garcia E, Walter M, et al. Artificial intelligence and suicide prevention: a systematic review. Eur Psychiatry. 2022;65(1):1-22.
- McKernan LC, Clayton EW, Walsh CG. Protecting Life While Preserving Liberty: Ethical Recommendations for Suicide Prevention With Artificial Intelligence. Front Psychiatry. 2018;9:650.