Treatment in Psychiatry

Suicide Prediction With Machine Learning

Case Vignette

“Mr. A” is a 35-year-old Caucasian veteran who has just completed a tour of duty in Iraq and returned stateside. He presents to the emergency department at the Veterans Affairs (VA) Hospital complaining of persistent nightmares, an inability to go out at all, and an inability to be normal around his children. He says he wants to die. He has no family history of suicide, and his wife has given away his guns. Deemed a safety risk to himself, he is admitted to the inpatient service. His symptoms are controlled on an optimized medication regimen, but a week after he leaves the hospital he discontinues his medications because they make him feel dull. At his outpatient follow-up appointment 2 weeks later, he reports feeling okay but endorses transient thoughts of dying, which become increasingly severe over the next few weeks. He describes being at the end of his rope and wanting to die. He spends most of his time thinking about ways to kill himself, and one day, while his wife is away at work, he calls the crisis line.

For anyone who has worked at the Department of Veterans Affairs (VA), the situation described in the above case vignette may seem familiar. Despite all the psychotropic medications at our disposal, and despite our best efforts, there is no way to accurately predict what this veteran will do next, as any mental health provider who has lost a patient to suicide knows all too well.

Suicide is the 10th leading cause of death in the United States. A total of 41,149 people died by suicide in 2015. Suicide costs the health care industry $51 billion annually (1). Firearms are responsible for more than 50% of suicides, and middle-aged white men have the highest rates (1). Given these statistics combined with the numerous stressors associated with deployment and reintegration, it is perhaps unsurprising that suicide prevention is a top priority for the U.S. military.

Multiple studies have investigated factors correlated with completed suicide; a previous suicide attempt is a well-known strong predictor. A 2016 meta-analysis of longitudinal studies pointed out discrepancies among studies that have examined the influence of previous suicide attempts on subsequent suicidal behavior (2): individual studies report increases in risk ranging from nonsignificant (3) to 40-fold (4) to 70-fold (5).

Clinical prediction rules are increasingly used to facilitate evidence-based decision making regarding diagnosis and treatment; in essence, a clinical prediction tool helps a clinician weigh the odds and arrive at an average predicted risk (6). Mental health prediction rules have been slower to develop than other clinical prediction rules, such as the Wells Criteria (7) (to help assess risk of pulmonary embolism) or the CHADS2 score (8) (to help assess risk of stroke with atrial fibrillation). Efforts to validate specific scales to predict suicide risk have been undertaken, with studies evaluating the Columbia Suicide Severity Rating Scale, the Suicide Trigger Scale, and the Barwon Health Suicide Risk Assessment (9–11). Of these, the Columbia Suicide Severity Rating Scale is standardized for use in populations ranging from children to adults and veterans, and it has reasonably good data for validity and reliability (12, 13).

Compounding the complexity of predicting suicide risk is the fact that 60% of deaths from suicide occur on the first suicide attempt, and a complex relationship exists between previous suicide attempts, current suicidal ideation, and lifetime suicide risk (2, 13, 14). Add to this the high degree of variability in the illnesses associated with suicidal ideation (major depressive disorder, posttraumatic stress disorder [PTSD], borderline personality disorder), and mental health clinicians currently have no evidence-based or systematic way of combining all of the individual risk factors into a composite risk score for each patient (14). To date, the identification of a biomarker or biomarkers to predict suicide risk has remained elusive, and there is no blood test to predict suicide risk (15). In summary, the ability of a mental health provider to predict an individual patient's suicide risk with any certainty is limited by the lack of clinical prediction rules, a problem compounded by individual diagnostic, psychosocial, and medical comorbidity.

It is at this point that machine learning can enter the landscape of psychiatric care. Over the last 10 years, machine learning has made its way into medical practice and biomedical applications and has facilitated the development of well-accepted clinical prediction rules (6). A machine-learning algorithm is a statistical technique that analyzes large data sets to identify factors or variables that influence outcomes. Using variables identified as significantly predictive, a software interface can be designed so that a provider in a hospital setting can ascertain the variables relevant to the patient being examined, enter them into the system, and receive output in the form of a risk calculated from the available data. When set up properly, the information entered by providers can be harnessed to expand the data set and improve the accuracy of the clinical prediction rule.
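In its simplest deployed form, such a trained model reduces to a function that maps a patient's variables to a risk estimate. The sketch below illustrates this with a logistic model; the feature names, weights, and intercept are entirely hypothetical and are not drawn from any cited study:

```python
import math

# Hypothetical coefficients such as a fitted model might produce;
# illustrative only, not a validated instrument.
WEIGHTS = {
    "prior_attempt": 1.2,
    "psychosis_history": 0.8,
    "substance_use": 0.6,
    "recent_discharge": 0.9,
}
INTERCEPT = -3.0

def predicted_risk(patient):
    """Logistic model: weighted sum of binary risk factors -> probability."""
    z = INTERCEPT + sum(w * patient.get(k, 0) for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

# A provider enters the variables relevant to the patient being examined:
patient = {"prior_attempt": 1, "recent_discharge": 1}
risk = predicted_risk(patient)  # probability between 0 and 1
```

The clinical prediction rule itself lives in the learned weights; as new, properly labeled cases accumulate, refitting those weights is what "improves the accuracy" of the rule over time.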

Machine learning was initially used for engineering feats such as building faster search engines like Google and improving signal detection. A recent article in JAMA highlights how machine learning is instrumental for health care in the 21st century (16). Studies have already shown the utility of machine learning for risk stratification and outcome prediction in multiple medical and surgical specialties. It follows that machine learning may be a useful adjunct to the clinical assessment of suicide risk (14).

A few studies have already used machine learning to predict suicide risk in clinical and nonclinical settings. Querying PubMed with the MeSH terms “machine learning” and “suicide” and selecting studies that used clinical populations for assessment of suicide risk, we chose two studies. Both applied machine learning to retrospective data sets encompassing clinical and demographic details of patients. The first study applied machine learning to predict suicide risk in a sample of outpatients with mood disorders; it achieved a sensitivity of 70% and a specificity of 70% and found previous hospitalization for major depressive disorder (per DSM-IV), a history of psychosis, cocaine dependence, and comorbid PTSD to be the strongest predictors of completed suicide (17).

The second study comes from the STARRS [Study to Assess Risk and Resilience in Servicemembers] project. This study used machine learning to predict suicide risk among 53,769 previously deployed soldiers and veterans after discharge from inpatient psychiatric hospitalization between 2004 and 2009 (18). Variables considered included demographics, diagnoses (as distinguished by the appropriate ICD-9 codes), assessment tool results, pharmacotherapy, psychotherapy (if any), and information about the hospital course. The STARRS model predicted suicide risk with sensitivity and specificity each approaching 70% and identified male sex, late age at enlistment, criminal offenses, and previous suicidal ideation as the strongest predictors of completed suicide.
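Sensitivity and specificity, the metrics both studies report, come directly from tabulating a model's predictions against known outcomes. A minimal worked example with made-up labels (not data from either study):

```python
def sensitivity_specificity(actual, predicted):
    """Compute both metrics from paired binary labels (1 = positive outcome)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: 20 cases; the model flags 7 of 10 true positives
# and wrongly flags 3 of 10 true negatives.
actual    = [1] * 10 + [0] * 10
predicted = [1] * 7 + [0] * 3 + [0] * 7 + [1] * 3
sens, spec = sensitivity_specificity(actual, predicted)  # (0.7, 0.7)
```

A sensitivity of 70% means the model misses roughly 3 in 10 of those who go on to die by suicide, which is why both studies frame the models as triage aids rather than replacements for clinical judgment.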

Generation of a composite risk score for an individual patient using machine learning relies on a computational process based on patterns seen in previously analyzed data sets. Both the score and the clinical prediction rule used to generate it need adjunctive clinical interpretation before relevance is assigned to the score. A trained mental health provider knows that the presence of comorbid substance use disorders and psychotic symptoms (identified as predictive of completed suicide in the study by Passos et al. [17]) increases suicide risk in patients with major depressive disorder or PTSD. A machine-learning approach permits the validation and strengthening of clinical prediction rules as the number of inputs rises while, at the same time, facilitating more accurate triage of patients and more reliable assessment of suicide risk in cases in which the clinical situation seems ambiguous, as with the patient in the above clinical vignette. As with any computer application, the technology is only as good as the information and programming that go into it, and misclassification or wrongful assignment of risk is possible. For this reason, adjunctive clinical assessment and ongoing modifications are necessary to optimize the utility of the strategy.

Another avenue for strengthening risk prediction is applying machine learning to biomarker data in conjunction with clinical assessment data. Numerous candidate biomarkers have been postulated for suicide (15, 19), including neurotransmitter systems (dopamine, norepinephrine, serotonin, GABA), cytokine levels, imaging biomarkers (e.g., PET, diffusion tensor imaging), and cortisol/HPA axis measures. An optimal biomarker should be unique to suicide and have good validity and reliability; it is possible that the lack of such a biomarker speaks to the complex neurobiology of suicide. An ideal research goal would be to apply machine learning to databases comprising clinical data as well as candidate biomarker data. This would make it possible to select the clinical and biomarker variables with the greatest capability for suicide risk prediction and to generate a composite score for patients seen in the emergency department or inpatient/outpatient settings, irrespective of diagnosis. These endeavors highlight a future direction in psychiatry that could help reduce our margin of error in suicide prediction.

Key Points/Clinical Pearls

  • Suicide is a complex neurobiological phenomenon, and there is great variability in the validity of clinical assessment tools for predicting suicide risk.

  • Machine learning is a statistical technique that can pinpoint variables predictive of suicide risk, whether clinical or demographic; with extrapolation, these could also include investigational results such as cytokine levels or brain imaging parameters (e.g., neurotransmitter binding on PET, white matter integrity, or cortical thickness).

  • A composite score calculated from the highlighted variables could help stratify suicide risk for patients seen in various settings, much as the CHADS2 score does for risk of stroke in atrial fibrillation.

  • The technique is not without limitations and would need to be used in conjunction with clinical assessment to decrease the margin of error in suicide risk prediction.
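The CHADS2 analogy in the points above is structural: such rules are additive tallies of weighted risk factors, with point values derived from fitted model coefficients. As a sketch of that form only, using entirely hypothetical items and points rather than any validated suicide instrument:

```python
# Hypothetical point assignments, analogous in form to CHADS2;
# illustrative only, not a validated clinical tool.
POINTS = {
    "prior_attempt": 2,
    "psychiatric_hospitalization": 1,
    "substance_use_disorder": 1,
    "access_to_firearms": 1,
}

def composite_score(patient):
    """Sum the points for each risk factor present in the patient record."""
    return sum(pts for item, pts in POINTS.items() if patient.get(item))

def triage(score):
    """Map the additive score to an assumed three-tier risk stratum."""
    return "high" if score >= 3 else "moderate" if score >= 1 else "low"

stratum = triage(composite_score({"prior_attempt": True,
                                  "access_to_firearms": True}))  # "high"
```

The appeal of this form is that a provider can apply it at the bedside without software, while the underlying machine-learning pipeline periodically revalidates the items and thresholds as data accumulate.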

Dr. Rakesh is a third-year resident in the Department of Psychiatry and Behavioral Sciences, Duke University Health System, Durham, N.C., as well as an Associate Editor of the Residents' Journal and Guest Editor for this issue.

The author thanks Jane Gagliardi, M.D., M.H.S., Associate Professor of Psychiatry and Internal Medicine, Duke University Health, for her suggestions and mentoring.

References

1. Centers for Disease Control and Prevention: https://www.cdc.gov/

2. Ribeiro JD, Franklin JC, Fox KR, et al.: Self-injurious thoughts and behaviors as risk factors for future suicide ideation, attempts, and death: a meta-analysis of longitudinal studies. Psychol Med 2016; 46(2):225–236

3. Tejedor MC, Diaz A, Castillon JJ, et al.: Attempted suicide: repetition and survival: findings of a follow-up study. Acta Psychiatr Scand 1999; 100(3):205–211

4. Harris EC, Barraclough B: Suicide as an outcome for mental disorders: a meta-analysis. Br J Psychiatry 1997; 170:205–228

5. Sanchez-Gistau V, Baeza I, Arango C, et al.: Predictors of suicide attempt in early-onset, first-episode psychoses: a longitudinal 24-month follow-up study. J Clin Psychiatry 2013; 74(1):59–66

6. Adams ST, Leveson SH: Clinical prediction rules. BMJ 2012; 344:d8312

7. Wolf SJ, McCubbin TR, Feldhaus KM, et al.: Prospective validation of Wells Criteria in the evaluation of patients with suspected pulmonary embolism. Ann Emerg Med 2004; 44(5):503–510

8. Keogh C, Wallace E, Dillon C, et al.: Validation of the CHADS2 clinical prediction rule to predict ischaemic stroke: a systematic review and meta-analysis. Thromb Haemost 2011; 106(3):528–538

9. Large M, Kaneson M, Myles N, et al.: Meta-analysis of longitudinal cohort studies of suicide risk assessment among psychiatric patients: heterogeneity in results and lack of improvement over time. PLoS One 2016; 11(6):e0156322

10. Nelson HD, Denneson L, Low A, et al.: Systematic Review of Suicide Prevention in Veterans: VA Evidence-Based Synthesis Program Reports. Washington, DC, Department of Veterans Affairs, 2015

11. Chan MK, Bhatti H, Meader N, et al.: Predicting suicide following self-harm: systematic review of risk factors and risk scales. Br J Psychiatry 2016; 209(4):277–283

12. Chappell P, Feltner DE, Makumi C: Initial validity and reliability data on the Columbia-Suicide Severity Rating Scale. Am J Psychiatry 2012; 169(6):662–663

13. Posner K, Brown GK, Stanley B, et al.: The Columbia-Suicide Severity Rating Scale: initial validity and internal consistency findings from three multisite studies with adolescents and adults. Am J Psychiatry 2011; 168(12):1266–1277

14. Ribeiro JD, Franklin JC, Fox KR, et al.: Suicide as a complex classification problem: machine learning and related techniques can advance suicide prediction: a reply to Roaldset (2016). Psychol Med 2016; 46(9):2009–2010

15. Oquendo MA, Sullivan GM, Sudol K, et al.: Toward a biosignature for suicide. Am J Psychiatry 2014; 171(12):1259–1277

16. Darcy AM, Louie AK, Roberts LW: Machine learning and the profession of medicine. JAMA 2016; 315(6):551–552

17. Passos IC, Mwangi B, Cao B, et al.: Identifying a clinical signature of suicidality among patients with mood disorders: a pilot study using a machine learning approach. J Affect Disord 2016; 193:109–116

18. Kessler RC, Warner CH, Ivany C, et al.: Predicting suicides after psychiatric hospitalization in US Army soldiers: the Army Study To Assess Risk and Resilience in Servicemembers (Army STARRS). JAMA Psychiatry 2015; 72(1):49–57

19. Olvet DM, Peruzzo D, Thapa-Chhetry B, et al.: A diffusion tensor imaging study of suicide attempters. J Psychiatr Res 2014; 51:60–67