
Leveraging AI for Suicide Risk Prediction

Frontiers in AI & Mental Health: Research & Clinical Considerations

A recurring series exploring cutting-edge research and clinical applications of artificial intelligence in mental health treatment

by Christopher Campbell, MD Candidate (Medical University of South Carolina) (with assistance from ChatGPT4o/Scholar AI in summarizing the research studies)

Suicide is among the most profound challenges in mental health care. Patients with suicidal thoughts often experience overwhelming emotional pain and distress. The impact of suicide extends far beyond the individual, leaving families and communities grieving and creating ripples of loss that can affect generations.

Tragically, suicide remains one of the leading causes of death in the United States, and trends have continued to worsen, with a 36% increase in the age-adjusted suicide rate between 2000 and 2018 (Harmer et al., 2024; Kochanek et al., 2017). In the aftermath of a patient’s suicide, clinicians may face challenging emotions of guilt, shame, and fear, along with feelings of incompetence (Downey & Alfonso, 2023).

Finding effective strategies for suicide prediction and prevention is of paramount importance. Traditional methods of assessing suicide risk rely heavily on clinician judgment, which, while vital, can sometimes fall short; there is room for improvement and innovation. Artificial intelligence has recently demonstrated promising advances in suicide prevention: AI systems can analyze patient data to identify patterns and other nuanced suicide risk factors. This emerging research points toward more precise and earlier intervention strategies, with encouraging clinical implications.

The following studies illustrate the diverse array of data sets that can be successfully analyzed to aid in the identification of suicidal ideation. Each source, from historical patterns in electronic health records to real-time sentiment in social media posts and therapy transcripts, contributes unique insights into suicide risk. These studies are just a few examples of an incredibly rich and expansive area of research in AI-assisted detection of suicide risk.

Automatic Quantification of the Veracity of Suicidal Ideation in Counseling Transcripts

This study aimed to detect suicidal ideation in counseling transcripts using sentiment analysis and machine learning.

  • Methods: The study used a collection of 745 therapy session transcripts from the Alexander Street database, already categorized by topics such as suicidal ideation, suicidal behavior, and self-harm. Researchers analyzed the text for psychological and emotional patterns in language, including words and phrases strongly linked to suicidal thoughts or behaviors, and then trained machine learning models to predict whether a given transcript belonged to one of the suicide-related categories, with the aim of improving the accuracy of identifying suicidal ideation in therapy transcripts (a minimal sketch of this kind of text-classification pipeline follows this list).
  • Key Findings: The models achieved 80-89% accuracy in identifying suicide-related transcripts, demonstrating the promise of AI in detecting suicidal ideation in counseling sessions.
  • Clinical Implications: AI-based tools for identifying suicidal ideation in therapy transcripts may improve suicide prevention efforts, enabling mental health professionals to intervene earlier and more accurately in crisis situations.
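
As a loose illustration of the approach above, here is a minimal sketch of a transcript classifier in Python with scikit-learn. The transcript snippets, labels, and model choices are invented stand-ins; the study’s actual features, models, and the Alexander Street data are not reproduced here.

```python
# A minimal sketch of text classification for suicide-related transcripts.
# All transcripts below are hypothetical snippets, not real clinical data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

transcripts = [
    "I keep thinking about ending my life; nothing feels worth it anymore",
    "I bought pills last week and wrote a note to my sister",
    "some nights I wish I would not wake up in the morning",
    "work has been stressful but running and therapy are helping me cope",
    "we talked about my divorce and how to co-parent more calmly",
    "I am sleeping better since we adjusted my evening routine",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = suicide-related category, 0 = other topic

X_train, X_test, y_train, y_test = train_test_split(
    transcripts, labels, test_size=2, stratify=labels, random_state=42
)

# Weighted word/phrase counts (unigrams and bigrams) feed a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

On a real corpus, the reported 80-89% accuracy would be estimated on held-out transcripts like the test split above, not on the training data.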

Suicide Ideation Detection on Social Networks: Short Literature Review

This study explores suicide ideation detection through social networking platforms by analyzing posted content and applying machine learning techniques.

  • Methods: The review surveyed machine learning (ML) and deep learning (DL) approaches to detecting suicidal thoughts in social media posts from platforms such as Twitter and Reddit. First, posts were collected and labeled by researchers as suicidal or non-suicidal based on keywords and patterns. The text was then cleaned and converted into numerical data for analysis, and ML and DL models were trained to identify patterns linked to suicidal ideation (a brief sketch of this collect, label, clean, and vectorize pipeline follows this list).
  • Key Findings: Language patterns in social media posts correlate strongly with suicide risk. The review found that machine learning and deep learning techniques can effectively identify suicidal ideation in social media posts, with certain models achieving over 90% accuracy.
  • Clinical Implications: Implementing AI-based systems for analyzing social media posts can aid in early, real-time identification of individuals at risk for suicide. This technology could enable timely intervention and offer scalable mental health support to large at-risk populations.
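
Below is a hedged sketch of the collect, label, clean, and vectorize pipeline this kind of study describes. The keyword list, regular expressions, and example posts are illustrative assumptions, not the review’s actual method.

```python
# Keyword-based first-pass labeling and text cleaning for social media posts.
# Keywords and posts are hypothetical; real studies use far larger corpora
# and more careful annotation than simple keyword matching.
import re
from sklearn.feature_extraction.text import CountVectorizer

RISK_KEYWORDS = {"suicide", "end it all", "want to die"}  # assumed seed list

def weak_label(post: str) -> int:
    """Return 1 if the post contains a candidate suicidal-ideation keyword."""
    text = post.lower()
    return int(any(kw in text for kw in RISK_KEYWORDS))

def clean(post: str) -> str:
    """Strip URLs, @mentions, and hashtags typical of Twitter/Reddit text."""
    post = re.sub(r"https?://\S+", " ", post)
    post = re.sub(r"[@#]\w+", " ", post)
    return re.sub(r"\s+", " ", post).strip().lower()

posts = [
    "Some days I just want to end it all @friend https://t.co/x",
    "Great hike today! #outdoors",
]
labels = [weak_label(p) for p in posts]   # -> [1, 0]
cleaned = [clean(p) for p in posts]

# Convert the cleaned text into numerical features for an ML or DL model.
X = CountVectorizer().fit_transform(cleaned)
print(X.shape, labels)
```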

Prediction of Suicide Attempts Using Clinician Assessment, Patient Self-report, and Electronic Health Records

This study explores prediction of suicide attempts within 1 and 6 months of presentation at an emergency department (ED) using a combination of clinician assessment, patient self-report, and electronic health records (EHR).

  • Methods: This prognostic study evaluated the 1-month and 6-month risk of suicide attempts among 1818 patients presenting to the Massachusetts General Hospital emergency department (ED). Data sources included clinician evaluations, a patient self-report questionnaire, and machine learning models applied to EHR data. Ensemble machine learning methods (methods that combine the predictions of multiple individual models) were used to predict suicide risk; a hedged sketch of this kind of ensemble appears after the AUC footnote below.
  • Key Findings: Machine learning models that combined patient self-report data with EHR data to predict 1-month and 6-month risk of suicide attempts were significantly more accurate than clinician assessments alone, achieving an area under the curve (AUC)* of 0.77 for predicting suicide attempts within 1 month and 0.79 within 6 months, compared with clinician AUCs of 0.67 and 0.60, respectively.
  • Clinical Implications: Machine learning algorithms that predict suicide attempts from patient self-report and EHR data may significantly enhance clinicians’ ability to identify high-risk individuals presenting to the ED. Such enhanced predictive value offers potential for closer monitoring of high-risk patients and earlier intervention to prevent suicide attempts.

*An area under the curve (AUC) value of 1 implies perfect prediction, a value of 0.5 implies random chance, and a value below 0.5 implies worse than random chance.
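
For illustration, here is a minimal sketch of an ensemble classifier evaluated by AUC, in the spirit of the study’s approach. The features are random stand-ins for self-report and EHR predictors; the study’s actual models and variables are not reproduced, and nothing clinical is implied by this toy data.

```python
# A soft-voting ensemble: several base models are trained and their predicted
# probabilities averaged, then performance is summarized by AUC.
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1818, 20))  # 1818 patients, 20 mock predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1818) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across models
)
ensemble.fit(X_tr, y_tr)

# AUC: 1.0 = perfect ranking of risk, 0.5 = chance (see footnote above).
probs = ensemble.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, probs), 3))
```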

Conclusion

So what do these studies mean for clinical practice? While clinician expertise is invaluable, AI may provide additional tools and data points that enhance predictive accuracy and support clinicians in preventing devastating outcomes such as suicide, helping to mitigate the limitations inherent in human predictive capacity. As AI-based suicide prediction develops, what might its clinical implementation look like?

Upon beginning clinical work with a new patient, the clinician offers the patient the opportunity to consent to AI-facilitated analysis of various data sets, including social media history, therapy transcripts, electronic health records, and even biometric health data (such as that supplied by smart watches). Much as vital signs are monitored in a critically ill patient, these data would be monitored regularly by the mental health clinician. Monitoring might also involve continuously updated suicide-risk estimates and corresponding alerts when particularly concerning risk thresholds are met, helping to enable real-time suicide prevention interventions (a hypothetical sketch follows this paragraph).
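
As a purely hypothetical illustration of that alerting idea, the sketch below maps a continuously updated risk estimate to alert levels. The thresholds, patient IDs, and messages are invented for illustration; real cutoffs would require clinical validation and oversight.

```python
# Hypothetical threshold-based alerting on a continuously updated risk score.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskAlertMonitor:
    warn_threshold: float = 0.60    # assumed cutoffs; actual thresholds would
    urgent_threshold: float = 0.85  # need clinical validation, not hard-coding

    def check(self, patient_id: str, risk_score: float) -> Optional[str]:
        """Map the latest risk estimate to an alert level, if any."""
        if risk_score >= self.urgent_threshold:
            return f"URGENT: {patient_id} at {risk_score:.0%} risk; notify clinician now"
        if risk_score >= self.warn_threshold:
            return f"ELEVATED: {patient_id} at {risk_score:.0%} risk; schedule a check-in"
        return None

monitor = RiskAlertMonitor()
for patient_id, score in [("pt-001", 0.42), ("pt-002", 0.71), ("pt-003", 0.91)]:
    alert = monitor.check(patient_id, score)
    if alert:
        print(alert)
```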

Important ethical issues will inherently arise as such technology is developed and adopted. Prediction failures may produce a false positive for suicide risk, resulting in the unnecessary, involuntary hospitalization of a non-suicidal patient. Conversely, overreliance on technology could create a false sense of security, causing a clinician to overlook a patient who actually harbors suicidal ideation or intent. Furthermore, issues such as the privacy of a patient’s data and informed consent to use that data will be crucial for protecting the safety and trust of patients. Lastly, we must ensure that technology-assisted screening for suicide risk is paired with timely, preventive action overseen by trained, qualified mental health professionals.

The goal is not to remove or replace clinical responsibility, but to assist and augment clinical knowledge and experience with technological tools. By combining AI’s analytical power with clinicians’ judgment and interpersonal skills, this hybrid approach may create a more comprehensive and effective system for suicide prevention.

References:

Downey, J. I., & Alfonso, C. A. (2023). The Impact of Patient Suicide on Clinicians. Psychodynamic Psychiatry, 51(4), 381–385. https://doi.org/10.1521/pdps.2023.51.4.381

Harmer, B., Lee, S., Duong, T. V. H., & Saadabadi, A. (2024). Suicidal ideation. In StatPearls. StatPearls Publishing. https://pubmed.ncbi.nlm.nih.gov/33351435/

IBM. (2021, September 22). Machine learning. IBM. https://www.ibm.com/think/topics/machine-learning

Kochanek, K. D., Murphy, S. L., Xu, J., & Arias, E. (2017). Mortality in the United States, 2016 (NCHS Data Brief No. 293). National Center for Health Statistics. https://www.cdc.gov/nchs/data/databriefs/db293.pdf

Lasri, S., Nfaoui, E. H., & El haoussi, F. (2022). Suicide Ideation Detection on Social Networks: Short Literature Review. Procedia Computer Science, 215, 713–721. https://doi.org/10.1016/j.procs.2022.12.073

Nock, M. K., Millner, A. J., Ross, E. L., Kennedy, C. J., Al-Suwaidi, M., Barak-Corren, Y., Castro, V. M., Castro-Ramirez, F., Lauricella, T., Murman, N., Petukhova, M., Bird, S. A., Reis, B., Smoller, J. W., & Kessler, R. C. (2022). Prediction of Suicide Attempts Using Clinician Assessment, Patient Self-report, and Electronic Health Records. JAMA Network Open, 5(1), e2144373. https://doi.org/10.1001/jamanetworkopen.2021.44373

Oseguera, O., Rinaldi, A., Tuazon, J., & Cruz, A. C. (2017). Automatic Quantification of the Veracity of Suicidal Ideation in Counseling Transcripts. Communications in Computer and Information Science, 473–479. https://doi.org/10.1007/978-3-319-58750-9_66
