
A Modern Village: AI and Suicide Prevention

Dr. Jared Ng and Dr. Sharmili Roy

  • Dr. Jared Ng is a senior consultant and Medical Director at Connections MindHealth. As the founding Chief of the Department of Emergency & Crisis Care during his time at the Institute of Mental Health, he has deep expertise and experience in suicide risk assessment and suicide prevention.
 
  • Dr. Sharmili Roy holds a PhD in AI and is a co-founder of Zoala, a mental health tech startup that develops wellness solutions for adolescents using AI, mobile, and digital technologies.

The role of artificial intelligence (AI) in mental health care, particularly in suicide prevention, has garnered significant attention in recent years. Various AI chat tools, such as Woebot, Wysa, and even ChatGPT, are available in the market, offering support and assistance to individuals in distress. These tools represent a new frontier in mental health care, leveraging technology to provide immediate, accessible support.

In the digital age, where communication via messaging is often more natural than verbal interaction, people, especially the younger generation, are likely to find AI-driven support more comfortable and accessible, both in terms of availability and cost. This shift towards “typed” communication aligns with a generation that has grown up in a digital world, seamlessly integrating technology into their daily lives.

The widespread adoption of chatbots further underscores this trend. Roughly 1.5 billion people are using chatbots, and about 88% of customers have had at least one conversation with a chatbot within the past year. The healthcare chatbots market is expected to reach $543 million by 2026, and healthcare chatbots are estimated to deliver as much as $3.6 billion in cost savings globally. These figures indicate the growing reliance on and potential of AI-driven chat tools in healthcare, including mental health services.

While the growing market for healthcare chatbots is promising, this technological advancement is not without its challenges. The complexities of human emotions and the multi-faceted nature of suicide present unique obstacles. Recent incidents, such as the one where a Belgian man reportedly died by suicide after talking to an AI chatbot, highlight the delicate balance that must be maintained. While AI offers unprecedented accessibility and immediacy, it also raises critical questions about empathy, understanding, and ethical responsibility in mental health care. 

In this paper, we envision what the future of technology for suicide prevention looks like, and we suggest that multi-disciplinary collaboration between mental health practitioners, AI practitioners, and regulators is key to harnessing the power of this technology to meet the rising demands of mental wellbeing, distress management, and suicide prevention.

The Curious Case of AI in Suicide Prevention

Suicide is a deeply complex issue, often rooted in a combination of psychological, social, and biological factors. The challenge of understanding and preventing suicide is not merely a matter of identifying symptoms but involves unravelling an intricate web of personal struggles, societal pressures, and mental health conditions.

There are multiple factors to consider when assessing suicidality. Examples include:

  • Interactions of Risk Factors and (Lack of) Protective Factors: Understanding how risk factors interact with one another, and with the absence of protective elements, is crucial in assessing suicide risk.

  • Stressors and Triggers: These can be long-term or short-term, and possibly random and unpredictable, adding complexity to the issue.

  • Cultural Factors: Different cultures may have unique perspectives and stigmas related to mental health and suicide.

  • Individual Vulnerability: Different individuals may have brief periods of heightened vulnerability, making universal assessment challenging.

  • Availability of Means: Access to means of self-harm can significantly influence the risk.

Given these complexities, AI can offer a unique advantage through its ability to integrate information from various sources over different periods of time. For instance, consider a hypothetical user, George, who has been using an AI mental health chatbot for six months. Over this period, the generative AI behind the chatbot learns how George likes to express himself through its empathic conversations with him. Text analytics and natural language processing can then track subtle changes in George’s language, tone, word choice, and the timing of his messages. The AI can detect that George’s messages have increasingly included words like “hopeless,” “trapped,” and “worthless,” especially during late-night hours, and can correlate such changes with specific life events George has mentioned, such as losing a job or relationship difficulties.
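To make this concrete, the short sketch below shows one way such longitudinal monitoring could be approximated. It is a simplified illustration rather than a clinical tool: it assumes a plain list of timestamped messages, uses a small hard-coded watchlist of phrases in place of the validated lexicons and trained language models a real system would rely on, and simply compares a recent window of messages against the user’s own baseline.

```python
from datetime import datetime, timedelta

# Hypothetical watchlist for illustration only; a production system would use
# validated clinical lexicons and trained language models, not a hard-coded set.
CONCERN_TERMS = {"hopeless", "trapped", "worthless", "burden"}

def language_shift_report(messages, window_days=14):
    """Compare recent use of concerning terms and late-night activity
    against the user's earlier baseline.

    `messages` is a list of (datetime, text) tuples from the chat history.
    """
    cutoff = max(ts for ts, _ in messages) - timedelta(days=window_days)
    recent = [(ts, txt) for ts, txt in messages if ts >= cutoff]
    baseline = [(ts, txt) for ts, txt in messages if ts < cutoff]

    def concern_rate(msgs):
        if not msgs:
            return 0.0
        hits = sum(any(term in txt.lower() for term in CONCERN_TERMS) for _, txt in msgs)
        return hits / len(msgs)

    late_night = sum(ts.hour >= 23 or ts.hour < 5 for ts, _ in recent)

    return {
        "baseline_concern_rate": concern_rate(baseline),
        "recent_concern_rate": concern_rate(recent),
        "recent_late_night_messages": late_night,
    }

# A sharp rise in concerning language, clustered late at night, surfaces here
# and could be passed on to a risk model or, more importantly, a clinician.
history = [
    (datetime(2024, 1, 3, 20, 15), "Work was fine today."),
    (datetime(2024, 3, 2, 1, 40), "I feel trapped and worthless."),
    (datetime(2024, 3, 5, 2, 5), "Everything seems hopeless lately."),
]
print(language_shift_report(history))
```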

Imagine that George inquired about writing a will three months ago and asked for information on grief counselling six months ago. These inquiries, when combined with his recent language patterns and life events, create a multi-dimensional picture of George’s mental state. By synthesizing this data, the AI program could flag George as a high-risk individual for suicide, even before he explicitly states any suicidal thoughts. This pre-emptive identification could be invaluable, especially if it triggers a more immediate human intervention, like activating George’s safety net and connecting him to a mental health professional or a suicide helpline.
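One way to picture this kind of synthesis is as a rule-based aggregation of heterogeneous signals into a coarse risk tier, as sketched below. The `UserSignals` fields, weights, and thresholds are invented for the example; in practice such logic would be designed and validated with clinicians, and any "high" flag would route to a human rather than trigger automated action.

```python
from dataclasses import dataclass, field

@dataclass
class UserSignals:
    """Longitudinal signals a chatbot might accumulate for one user (illustrative)."""
    recent_concern_rate: float    # share of recent messages containing concerning language
    baseline_concern_rate: float  # the user's own historical baseline
    late_night_messages: int      # recent messages sent between 23:00 and 05:00
    sensitive_inquiries: list = field(default_factory=list)  # e.g. ["writing a will"]
    recent_stressors: list = field(default_factory=list)     # e.g. ["job loss"]

def assess_risk(signals: UserSignals) -> str:
    """Return a coarse risk tier. Weights and thresholds are placeholders, not clinical guidance."""
    score = 0
    if signals.recent_concern_rate > signals.baseline_concern_rate + 0.2:
        score += 2                                  # marked shift relative to the user's own baseline
    score += min(signals.late_night_messages, 3)    # cap the contribution of timing
    score += 2 * len(signals.sensitive_inquiries)   # e.g. asking about writing a will
    score += len(signals.recent_stressors)
    if score >= 6:
        return "high"      # would activate the safety net: human outreach, helpline hand-off
    if score >= 3:
        return "elevated"
    return "routine"

george = UserSignals(
    recent_concern_rate=0.6,
    baseline_concern_rate=0.1,
    late_night_messages=4,
    sensitive_inquiries=["writing a will"],
    recent_stressors=["job loss"],
)
print(assess_risk(george))  # -> "high", prompting escalation to a clinician or helpline
```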

Some may argue that AI’s limited understanding of context and underlying emotions can lead to misinterpretations. For example, if George were to suddenly start using more positive language, the AI might incorrectly assume that his risk has decreased, not recognising that some individuals express a sense of relief once they have made the decision to take their own lives. Machines may miss the nuances of such emotional expression, but these limitations can be addressed by grounding AI in the science of emotional cognizance, which can only be achieved through close collaboration between AI practitioners and mental health practitioners.
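That clinical insight can itself be encoded as a guardrail. The fragment below illustrates the idea in the simplest terms: rather than automatically lowering a user’s risk tier when their language suddenly brightens, an abrupt drop after a sustained high-risk period is held for human review. The rule and its three-assessment look-back are assumptions made for illustration, not validated practice.

```python
def review_after_abrupt_shift(previous_tiers, current_tier):
    """Decide whether an apparent drop in risk should be reviewed by a clinician.

    `previous_tiers` is the recent history of automated risk tiers and
    `current_tier` is the latest one. A sudden move from sustained "high" to
    "routine" is surfaced to a human rather than silently accepted, because
    apparent calm can follow a decision to act on suicidal thoughts.
    """
    sustained_high = len(previous_tiers) >= 3 and all(t == "high" for t in previous_tiers[-3:])
    if sustained_high and current_tier == "routine":
        return "hold_tier_and_request_clinician_review"
    return "accept_automated_tier"

print(review_after_abrupt_shift(["high", "high", "high"], "routine"))
```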

Collaborative Intelligence: The Future for AI in Suicide Prevention

Using George’s story as a lens, we can already see the opportunities and challenges facing the use of AI as a tool for suicide prevention. On one hand, AI offers unparalleled accessibility, data-driven insights, and scalability, serving as a lifeline for those in acute emotional distress and filling critical gaps in traditional mental health services. It can analyse extensive datasets over long periods, capturing trends and risk factors that may elude human therapists. On the other hand, AI’s limitations in emotional intelligence and contextual understanding can lead to potentially dangerous misinterpretations. Ethical concerns around data privacy and the potential for over-reliance on technology further complicate its role. Therefore, while AI holds significant promise, its effective and ethical integration into mental health care requires a collaborative approach that leverages the strengths of both machine and human expertise.

Hence, we believe that a human-AI collaboration approach is critical to harnessing the power of scale that AI brings while respecting the criticality of the high-risk setting of mental health care. Below, we envision the key dimensions of this collaborative intelligence framework:

Central to this collaborative approach is a strong ethical foundation. We explore how ethics play a pivotal role in shaping the interaction between AI and human expertise.

1. Ethics at the core:
Fostering ethical use of technology for mental health care

AI’s role:
  • Digital Caregiver: AI can act as a “first responder”, offering immediate emotional support and basic guidance, thus serving as a digital extension of the mental health care system.

  • Equitable Care: AI should be designed to be free from biases related to race, religion, gender, or socio-economic status, ensuring fair and equitable care for all users.

  • Data Privacy: AI systems should be built with robust encryption methods to protect user data from unauthorised access or breaches (see the sketch after this list).

  • Informed Consent: Users should be clearly informed about how their data will be used and stored, ensuring transparency and accountability.
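The Data Privacy point above can be made concrete with a minimal sketch of encrypting a chat transcript at rest, here using the third-party Python `cryptography` package. This illustrates the principle only; a real deployment would also need managed key storage, strict access controls, and audit logging.

```python
# Illustrative only: encrypting a chat transcript at rest with the `cryptography`
# package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in practice, held in a key-management service
fernet = Fernet(key)

transcript = "2024-03-05 02:05  user: Everything seems hopeless lately."
token = fernet.encrypt(transcript.encode("utf-8"))  # ciphertext is what gets stored
restored = fernet.decrypt(token).decode("utf-8")    # decrypted only for authorised access
assert restored == transcript
```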

Human’s role:
  • Ethical Guidelines: Human oversight is essential for setting and enforcing ethical standards for data collection, storage, and analysis.

  • Ongoing Oversight: Regular audits and reviews should be conducted to ensure that AI is serving the best interests of the users.

  • Sensible Response: Humans should manage any adverse events or setbacks in a responsible manner, without sensationalising or trivialising the issues.

While ethics provide the framework, the practical application of AI in providing ‘always-on’ mental health care is equally crucial. We examine how this 24/7 availability can be a game-changer in the realm of suicide prevention and mental health care.

2. Always-on care:
Providing care that is accessible and always available  

AI’s role:
  • 24×7 Availability: AI can fill the gap when human professionals are unavailable, providing a constant support system.

  • Appeal to Younger Generations: The text-based interaction can be more comfortable for younger people who are accustomed to digital communication.

  • Safety Net Activation: AI can flag high-risk behaviours or language, activating a more immediate human intervention.

  • Resource Direction: AI can guide users to appropriate resources such as articles, helplines, or self-help interventions based on their specific needs (see the sketch below).
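The Safety Net Activation and Resource Direction points above could be wired together along the lines of the sketch below. The helpline, resource names, and routing rules are placeholders invented for the example; a deployed system would use locally appropriate services and clinician-approved escalation protocols.

```python
# Placeholder escalation targets and self-help resources, for illustration only.
HELPLINE_PLACEHOLDER = "local 24/7 suicide prevention helpline"

RESOURCES = {
    "sleep": "self-help module on sleep and low mood",
    "stress": "guided breathing and grounding exercises",
    "grief": "psychoeducation article on coping with loss",
}

def respond_to_assessment(risk_tier: str, topics: list) -> dict:
    """Route a user based on an automated risk tier and the topics detected in their messages."""
    if risk_tier == "high":
        # Flag for immediate human intervention rather than letting the chatbot carry on alone.
        return {
            "action": "escalate_to_human",
            "notify": ["on-call clinician", HELPLINE_PLACEHOLDER],
        }
    suggestions = [RESOURCES[t] for t in topics if t in RESOURCES]
    return {"action": "share_resources", "suggestions": suggestions or ["general wellbeing toolkit"]}

print(respond_to_assessment("high", ["sleep"]))
print(respond_to_assessment("routine", ["stress", "grief"]))
```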

     

Human’s role:
  • Navigating Emotional Complexity: Humans bring an irreplaceable depth of emotional understanding that AI currently cannot replicate, making them crucial in high-risk situations.

  • Crisis Intervention: While AI can flag high-risk behaviours or language, human responders remain essential for immediate, potentially life-saving crisis intervention and support.

  • Family and Support Network Link-up: Humans can engage with the patient’s family or support network in a sensitive and effective manner, something that AI may not be equipped to do.

  • Follow-Up Care: After immediate risks have been managed, human professionals can provide ongoing care, including follow-up appointments, medication management (if necessary), and long-term treatment planning.

Beyond availability, the true power of AI lies in its ability to analyse vast amounts of data for personalised, evidence-based therapy. We delve into how this data-driven approach can revolutionise mental health care.

3. Data-driven Therapy:
Enabling patients to benefit from evidence-based personalised therapy

AI’s role:
  • Long-Term Analysis: AI can analyse data over extended periods, identifying long-term trends or changes in behaviour that might be missed otherwise.

  • Predictive Analytics: AI can use machine learning algorithms to predict future mental health crises based on historical data and current user interactions (see the sketch after this list).

  • Complex Diagnostics: Advanced algorithms can help in diagnosing complicated cases by analysing multiple data points.

  • Actionable Insights: AI can turn raw data into practical advice or treatment suggestions, which can be particularly useful for evidence-based therapies.
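As a hedged illustration of the Predictive Analytics point above, the sketch below fits a simple logistic regression with scikit-learn on synthetic toy features. Real models would be trained on properly consented, de-identified clinical data, validated with clinicians, and their outputs surfaced to humans rather than acted on automatically.

```python
# Toy example only: synthetic features and labels, not clinical data.
from sklearn.linear_model import LogisticRegression

# Each row: [recent_concern_rate, late_night_messages, recent_stressors]
X_train = [
    [0.05, 0, 0],
    [0.10, 1, 1],
    [0.55, 4, 2],
    [0.70, 5, 1],
]
y_train = [0, 0, 1, 1]  # 1 = a crisis followed within the observation window (toy labels)

model = LogisticRegression()
model.fit(X_train, y_train)

george_features = [[0.60, 4, 1]]
probability = model.predict_proba(george_features)[0][1]
print(f"Estimated crisis probability: {probability:.2f}")  # shown to a clinician, not acted on alone
```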

     

Human’s role:
  • Emotional Intelligence: Humans can train AI systems to better understand emotional cues and contexts, improving their diagnostic capabilities.

  • Algorithm Oversight: Human experts should have the final say in the design and functioning of the AI algorithms, ensuring they meet medical and ethical standards.

  • Therapeutic Relationships: Humans build and maintain the therapist-patient relationship, offering emotional support, trust, and a human touch that AI cannot provide.

While personalised, data-driven therapy is transformative, it is also essential to address the economic aspects of mental health care. We believe that AI can make mental health care more cost-efficient without compromising quality.

4. Cost efficiency of care:
Enhancing affordability and reducing the economic burden of mental health care

AI’s role:
  • Scalability: AI can handle a large number of users simultaneously, making it a cost-effective solution for basic mental health care needs.

  • Interim Solutions: For those on waiting lists for professional care, AI can provide immediate, although limited, support.

  • Task Automation: Routine administrative tasks like appointment scheduling, billing, and record-keeping can be automated, freeing human professionals to focus on more complex cases (see the sketch below).
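As one small illustration of the Task Automation point above, the sketch below books the next free appointment slot from an in-memory calendar. The clinic hours and data structures are invented for the example; a real system would integrate with practice-management software and keep a full audit trail.

```python
from datetime import datetime, timedelta

# Already-booked slots, standing in for a practice-management calendar.
booked = {datetime(2024, 7, 1, 9, 0), datetime(2024, 7, 1, 10, 0)}

def next_free_slot(start: datetime, days: int = 5):
    """Scan weekday clinic hours (09:00 to 17:00, hourly) for the first open slot."""
    for day in range(days):
        date = (start + timedelta(days=day)).replace(minute=0, second=0, microsecond=0)
        if date.weekday() >= 5:  # skip weekends
            continue
        for hour in range(9, 17):
            slot = date.replace(hour=hour)
            if slot not in booked:
                return slot
    return None

slot = next_free_slot(datetime(2024, 7, 1, 8, 30))
booked.add(slot)  # "book" it, sparing staff the manual back-and-forth
print(f"Booked follow-up for {slot:%A %d %B, %H:%M}")
```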

     

Human’s role:
  • Quality Assurance: While AI can handle the quantity, humans ensure the quality of care, especially for complex or high-risk cases.

  • Informed Decision-Making: Human professionals can use AI-generated data to make better-informed decisions, potentially reducing the cost and duration of treatment.

  • Ethical Spending: Ensure that cost-saving measures do not compromise ethical standards or the quality of mental health care, making adjustments as needed.

Conclusion: A Village Reimagined

As cliché as it might sound, it takes a village to prevent suicides. In the intricate and sensitive landscape of mental health and suicide prevention, it is increasingly evident that a multi-pronged, interdisciplinary approach is not just beneficial—it is essential. The traditional “village” of caregivers, mental health professionals, and support networks now has a new, albeit digital, inhabitant: artificial intelligence. This technological entity doesn’t supplant the human elements of empathy, understanding, and clinical expertise; rather, it complements them, offering a layer of immediacy and accessibility that can be lifesaving.

The imperative now is to integrate this AI component responsibly into our existing healthcare frameworks. This means tech companies must move beyond token collaborations with mental health professionals, engaging them in meaningful partnerships that shape the development and deployment of these AI tools. Ethical considerations, particularly around data privacy and user consent, must be more than just checkboxes; they should be foundational elements of any AI-driven mental health initiative. And when setbacks occur—as they inevitably will in any evolving field—our response must be measured and constructive, aimed at refining and improving the technology, rather than hastily dismissing it.

In this modern “village”, every member has a role to play. From the AI developers and data scientists to the mental health professionals; from policymakers to each one of us as potential users, we all contribute to the efficacy and ethicality of this emerging landscape. The goal remains unchanged: to create a more effective, compassionate, and comprehensive mental health care system. But our toolkit is expanding, and it now includes algorithms alongside empathy, data alongside compassion.

Let us proceed with both caution and optimism, recognizing that our village for suicide prevention has grown. It is a village that now includes not just human caregivers but also their digital counterparts, each with a unique but interconnected role in safeguarding our collective mental well-being.
