
Machine learning algorithms for suicide risk: a premature arms race?
Jack C Lennon
Department of Psychology, Adler University, Chicago, Illinois, USA
Correspondence to Jack C Lennon; jlennon{at}adler.edu


Introduction

Machine learning (ML) techniques1 are becoming a major area of study in psychiatry, as ML possesses the theoretical capacity to draw conclusions from a broad range of data once the system has been trained through a process of trial and error. Specifically, and arguably most importantly, ML can do so more quickly and accurately than clinicians.2 Several studies have demonstrated predictive accuracy with ML across various populations and geographic locations, further perpetuating the perceived need to incorporate ML into ongoing studies. However, there are many considerations that must be accounted for when discussing ML in the context of data collection and clinical implementation, and to date these have done little to thwart the ongoing pursuit of this type of research.

Given the promises and overall potential of ML techniques, suicide is a global epidemic that could benefit greatly from their use, given how often prevention efforts have failed.3 First, however, how one defines and views suicide will determine one's perception of ML's readiness to be used globally at present. Second, the limitations of ML in the context of suicide are critical to understanding its utility in terms of both operation and timeliness. The use of ML techniques in suicide research is potentially premature, and pursuing it now risks allocating funding to a solution to psychiatric translational issues before the field is ready. Torous and Walker4 reported that ML can serve as a practical tool to augment current knowledge and assist in overcoming translational issues. However, there are several considerations specific to suicide that are neither discussed nor given commensurate attention. While ML need not be touted as a panacea for its current application to be questioned, the underlying concern is that ML holds greater potential as a secondary measure to large-scale prospective studies than it does as a current psychiatric tool.

Defining suicide for machine learning

Views on suicide require a substantial paradigm shift,5 much as depression is in desperate need of reconsideration due to its biological and clinical heterogeneity.6 Under current diagnostic criteria, suicide is not viewed as a distinct disorder or trajectory, despite a vast literature supporting differences between those who are depressed, those who ideate and those who die by suicide.7 Instead, suicide is viewed as a cause of death: the result of brain dysfunction that may or may not have included depression. If this is the initial premise of one's syllogism, initial and ongoing conditions will hold less value than determining an ultimate outcome through ML; thus, ML could be as effective as it is in other disciplines, such as modelling immune response or tumour growth. If a different initial premise is presented, such that suicide is its own discrete trajectory,5 rather than a cause of death, differing from all disorders that either share its risk factors or serve as risk factors for it, then one can understand why ML requires significantly more data: the system must learn about initial conditions before it can speculate about conclusions. Studies that have developed risk profiles through ML continue to report that we face understudied risk factors8 but, more importantly, we lack the large, prospective datasets needed to use the risk factors of which we are already aware.

Relevant limitations of machine learning

The limitations of ML include, but are not limited to, external validity: generalisation to populations outside the specific dataset on which the algorithm was developed. Increased heterogeneity serves as a barrier to accuracy in ML, even within a given population, requiring large sample sizes.9 While it could be argued that each hospital system across the globe could develop its own ML algorithm to accurately predict who is at clinically significant risk of suicide, this would require a substantial amount of funding and time and, further, would be bound to the conditions of the period in which the study was conducted. For example, ML algorithms developed during the coronavirus disease 2019 (COVID-19) pandemic will likely reflect only that period and become obsolete over time. During such a period, grief and bereavement may appear to carry significantly more predictive weight than they ordinarily do, given what we know about suicide decedents. Risk factors will likely remain the same during COVID-19, but the rapid and successive nature of personal and vicarious traumas will produce overleveraged data that are limited by more than the population: time itself will limit the utility of these procedures.
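To make the external validity concern concrete, the toy sketch below (in Python, using scikit-learn on fully synthetic data) trains a simple risk model on one hypothetical hospital's cohort and evaluates it on another whose case mix and feature-outcome associations differ. The feature names, effect sizes and site differences are illustrative assumptions, not findings from the suicide literature.

```python
# A minimal, illustrative sketch of site-level external validation on
# synthetic data. Feature names and effect sizes are assumptions for
# demonstration only; nothing here is derived from clinical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n, prior_depression, coef_depression, coef_attempt, intercept):
    """Simulate one site's cohort: two crude binary features and an outcome."""
    depression = rng.binomial(1, prior_depression, n)
    prior_attempt = rng.binomial(1, 0.10, n)
    # The true feature-outcome relationship differs by site, standing in for
    # unmeasured differences in case mix, coding practice and time period.
    logit = intercept + coef_depression * depression + coef_attempt * prior_attempt
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
    return np.column_stack([depression, prior_attempt]), y

# "Hospital A" is the development sample; "Hospital B" has a different case
# mix and weaker feature-outcome associations.
X_a, y_a = make_site(20000, 0.30, coef_depression=1.2, coef_attempt=2.0, intercept=-3.0)
X_b, y_b = make_site(20000, 0.60, coef_depression=0.3, coef_attempt=0.6, intercept=-2.0)

model = LogisticRegression().fit(X_a, y_a)

print("internal AUC (Hospital A):", round(roc_auc_score(y_a, model.predict_proba(X_a)[:, 1]), 3))
print("external AUC (Hospital B):", round(roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]), 3))
```

The point is not the particular numbers but that an algorithm's reported accuracy is a property of the dataset, population and period in which it was developed, and cannot be assumed to transfer elsewhere.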

Prematurity in machine learning research for suicide

To call ML research premature is not to suggest that research on suicide is premature; suicide is an overwhelming global health concern in desperate need of novel research and solutions. This is potentially what makes ML so convincing and appealing to researchers across disciplines. ML ostensibly allows for accuracy in determining who is at risk of suicide, and how could that be a problem? It is not. The concern is that we are embarking on what may well be the future of psychiatric decision-making with insufficient data. In doing so, we are devoting ever more funding and hours to developing ML algorithms that will become outdated. They will become outdated because current systems are not being trained on sufficient data points, including many of the suicide risk factors we already know. We lack a dataset that incorporates all the components of suicide necessary to yield consistently viable ML algorithms. We also continue to grapple with the ethical considerations of metadata, as well as the contents of said metadata, whether housed within electronic health records or elsewhere. These types of data and studies are necessary to address one of the oft-forgotten and undervalued factors in suicide risk assessment: temporality. ML, including deep learning methods, must incorporate the timing of events, including the order of risk and protective factors in a given patient's life; the same applies to predicting recidivism risk or making any other prospective claim.10 The simple but extensive combination of events will not yield the same predictive results as algorithms that account for temporality, an area that remains understudied.11 Thus, we simply lack the information necessary for adequate learning to occur within an artificial intelligence network.
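To illustrate the temporality point, the short Python sketch below contrasts an order-insensitive "bag of events" representation with a crude recency-weighted one. The event names, patient histories and decay parameter are hypothetical, chosen only to show that two histories containing the same events in a different order are indistinguishable to the former but not to the latter.

```python
# A minimal sketch of why temporality matters: two hypothetical patients share
# the same set of life events, but in a different order. An order-insensitive
# "bag of events" representation cannot tell them apart; even a crude
# recency-weighted encoding can. Event names are illustrative assumptions.
from collections import Counter

EVENTS = ["depression_dx", "inpatient_discharge", "bereavement"]

# Patient 1: bereavement occurred long ago; discharge is the most recent event.
patient_1 = ["bereavement", "depression_dx", "inpatient_discharge"]
# Patient 2: same events, but the bereavement is the most recent.
patient_2 = ["inpatient_discharge", "depression_dx", "bereavement"]

def bag_of_events(history):
    """Order-insensitive counts: identical for both patients."""
    counts = Counter(history)
    return [counts[e] for e in EVENTS]

def recency_weighted(history, decay=0.5):
    """Order-aware encoding: later events receive exponentially more weight."""
    weights = {e: 0.0 for e in EVENTS}
    n = len(history)
    for position, event in enumerate(history):
        weights[event] += decay ** (n - 1 - position)
    return [round(weights[e], 3) for e in EVENTS]

print(bag_of_events(patient_1) == bag_of_events(patient_2))  # True: indistinguishable
print(recency_weighted(patient_1))  # discharge weighted highest
print(recency_weighted(patient_2))  # bereavement weighted highest
```

Real sequence models (eg, recurrent networks or transformers over coded event streams) encode order far more richly than this toy weighting, but the underlying requirement is the same: the data must record when events occurred, not merely that they occurred.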

Conclusion

ML techniques are likely the future of psychiatry and psychology, particularly in acute settings where difficult and time-sensitive decisions must be made. While efforts to develop ML algorithms that focus on specific populations within specific settings are both impressive and laudable, ML holds greater potential than these localised efforts suggest. It appears that the mere sight of a light at the end of the tunnel, promising substantial reductions in suicide deaths, is blinding us to the rough terrain that must be traversed in the interim. Not unlike an internet search, there must be data through which to search. In the case of suicide, whether the data originate from registries, electronic health records or other databases, current sources invariably lack what is necessary to be generalisable and sufficiently comprehensive in terms of relevant variables. Until we gather these data and make them available for inclusion in these critical algorithms, ML will make predictions but never reach its full potential. Even more worrisome is the potential for researchers to misuse resources through premature efforts or, worse, to move past ML as a viable suicide prevention strategy simply because it was never given the opportunity to do what it is meant to do.

Jack C Lennon received his BA in Psychology from Kansas Wesleyan University in 2013 and his MA in Psychology from the Illinois School of Professional Psychology in 2018. He is now a fifth-year PsyD candidate in Clinical Psychology and Neuropsychology at Adler University and a neuropsychology extern at Rush University Medical Center. He serves as an ad hoc reviewer for several peer-reviewed journals in psychiatry and neuroscience and is an Associate Editor for the Journal of Alzheimer's Disease. He also serves as the co-chair of the American Psychological Association's (APA) Division 56 (Trauma Psychology) Student Publications Committee, the American Psychological Association of Graduate Students (APAGS) liaison to the Board of Educational Affairs, and the Student Volunteer Coordinator for the National Academy of Neuropsychology. Clinically, he is interested in the spectrum of tau and alpha-synuclein pathologies in the context of complex medical histories. His research background is in translational neurobiology and the behavioural and psychological symptoms of dementia. Through these experiences, he has become particularly interested in the clinical utility of neuropsychological assessment in the early detection and prediction of disease onset, including the extension of neuropsychology and deep learning approaches to suicide risk assessment in neurological and neuropsychiatric populations.



Footnotes

  • Twitter @JackCLennon

  • Contributors JCL is responsible for the concept, drafting and revisions of this original submission.

  • Funding JCL is supported by the Alfred Adler Scholarship.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.