Calibrating violence risk assessments for uncertainty
Michael H Connors1,2,3 and Matthew M Large2

1 Centre for Healthy Brain Ageing, University of New South Wales, Sydney, New South Wales, Australia
2 Discipline of Psychiatry and Mental Health, University of New South Wales, Sydney, New South Wales, Australia
3 Department of Psychiatry, University of Melbourne, Melbourne, Victoria, Australia

Correspondence to Dr Michael H Connors; m.connors@unsw.edu.au

Abstract

Psychiatrists and other mental health clinicians are often tasked with assessing patients’ risk of violence. Approaches to this vary and include both unstructured (based on individual clinicians’ judgement) and structured methods (based on formalised scoring and algorithms with varying scope for clinicians’ judgement). The end result is usually a categorisation of risk, which may, in turn, reference a probability estimate of violence over a certain time period. Research over recent decades has made considerable improvements in refining structured approaches and categorising patients’ risk at a group level. The ability, however, to apply these findings clinically to predict the outcomes of individual patients remains contested. In this article, we review methods of assessing violence risk and empirical findings on their predictive validity. We note, in particular, limitations in calibration (accuracy at predicting absolute risk) as distinct from discrimination (accuracy at separating patients by outcome). We also consider clinical applications of these findings, including challenges applying statistics to individual patients, and broader conceptual issues in distinguishing risk and uncertainty. Based on this, we argue that there remain significant limits to assessing violence risk for individuals and that this requires careful consideration in clinical and legal contexts.

  • risk assessment

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

Introduction

Risk of violence is an important consideration when treating mental illness.1–3 In clinical settings, such risk can be used to justify involuntary detention, coercive treatment, and breaching patient confidentiality. In legal settings, such risk can influence decisions about sentencing and release for those charged with a crime. In both contexts, decision makers face significant challenges, including uncertainty about a given individual’s future behaviour; pressure from limited, overburdened health resources; and tension balancing the individual’s interests with community safety. Given such challenges, decision makers often turn to some appraisal of future risk.1–3 The ability of psychiatrists to assess this risk, however, is controversial.4 Although researchers have made considerable improvements in identifying risk factors at a group level over recent decades, the ability to apply these findings clinically to individual patients remains contested. In this paper, we review methods of assessing violence risk and empirical findings on their predictive validity. We also discuss the clinical applications of these findings and broader concepts of risk and uncertainty. Based on this, we argue that there remain significant limits to accurately predicting violence risk at an individual level and that this requires careful consideration in clinical and legal contexts.

Risk assessment

Risk assessments usually involve a process of identifying factors present in an individual that predict future outcomes.1–3 Such factors are typically derived from research and/or theoretical models of behaviour. By convention, they are usually expressed in terms of whether they are associated with either greater risk (‘risk factors’) or lower risk (‘protective factors’).1–3 Both types are often further categorised into whether they are constantly present (‘static’; eg, demographics, past behaviours) or changeable (‘dynamic’; eg, specific psychiatric symptoms, intoxication).1–3 The process of identifying relevant factors itself can vary in terms of whether it is unstructured—following an individual clinician’s personal choices and methods—or structured—following formalised procedures to identify factors and, based on these, categorise the level of risk.1–3

Structured risk approaches vary further in terms of how they are scored. Some are scored algorithmically to produce a risk estimate with little or no clinician involvement (‘actuarial’ approach).1–3 Others are designed to assist clinicians to make broader risk classifications incorporating elements of their own judgement (‘structured professional judgement’).1–3 The latter approach, for example, may provide a list of factors to assess for, allow evaluators discretion to judge which factors are relevant to the individual, and guide evaluators in formulating an overall risk profile considering situational factors. A third type of structured approach retrospectively examines violent incidents to identify potential contributory factors in the individual and situation (‘anamnestic’ approach).1–3 This last method is used to identify idiosyncratic factors, particularly of a dynamic nature, for a given individual to inform management strategies, but typically does not yield longer-term estimates of violence risk.

The usual result of a risk assessment, whether actuarial or structured professional judgement, is a categorisation of risk. Patients are given a score or assigned to a group—such as high or low risk of violence—based on their identified risk factors. These categories, in turn, may reference a probability estimate of future risk based on previously aggregated data.1–3 Using this information, clinicians might also seek to produce a formulation of an individual’s risk, incorporating identified risk factors and possible future situational variables, with a goal to inform management.1–3 A proportion of risk factors, for example, may be modifiable through intervention (eg, mental illness, substance use), while others can help anticipate overall risk (eg, past violence).

Evaluating predictive validity

Predictive validity assesses the ability of risk assessments to predict future outcomes. It involves two core components: discrimination and calibration.5–10 Discrimination refers to how well an instrument separates patients by outcome—that is, correctly distinguishes between those who do and do not commit violence in its risk categorisations.5–10 By contrast, calibration refers to how well an instrument’s predictions correspond to actual outcomes—that is, provides accurate absolute risk estimates of violence.5–10 Both components are crucial to risk assessment4 10 11 and are evaluated by distinct statistical indices.5–10

Discrimination

Indices of discrimination measure the accuracy of risk assessment tools at separating patients according to the outcome of interest—in this case, violence.5–10 These indices assess the extent to which a tool’s categorisations of risk correctly distinguish between those who commit violence and those who do not.5–10 They are calculated retrospectively based on the outcome. After following a group of patients for a period of time and determining which patients were violent, researchers can calculate the proportion that was correctly classified by the risk assessment tool beforehand.10 If those who committed violence all had higher predicted risks than those who did not, the tool has perfect discrimination, even if the predicted risks did not match the actual rates of violence.7 Measures of discrimination depend on the distribution of patient characteristics within the population assessed:5 a more heterogeneous distribution of characteristics used to predict violence (in which patients differ considerably) may facilitate more accurate discrimination than a homogeneous distribution (in which all patients are similar).

Common statistical indices of discrimination include sensitivity (the proportion of those who committed violence judged to be high risk); specificity (the proportion of those who did not commit violence judged to be low risk); point-biserial correlation coefficient (the correlation between risk classification score and violence outcome); diagnostic odds ratio (the ratio of the odds of a high risk classification in those who committed violence to the odds of a high risk classification in those who did not); logistic odds ratio (the ratio of the odds of a lower risk classification in those who did not commit violence to the odds of a higher risk classification in those who did); and the area under the curve (AUC) of a receiver operating characteristic curve that plots true positive rate against false positive rate (the probability that a randomly selected individual who was violent received a higher risk classification than a randomly selected individual who was not violent).10 These measures do not explicitly factor in base rates of violence in their calculations.10 Nevertheless, base rates can still indirectly influence many of these measures in empirical studies by (i) reducing the precision of estimates when rates of violence are low and (ii) affecting researchers’ decisions about the cut-off thresholds used on continuous scales and definitions of what constitutes violence, particularly at extremes of low and high base rates.12 13 In the case of the point-biserial correlation coefficient, divergences from base rates of 50% also restrict the range of correlations, constraining possible correlation values independent of the risk assessment tool used.10
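
To make these indices concrete, the brief sketch below calculates sensitivity, specificity and the diagnostic odds ratio from a single hypothetical 2×2 table; the counts are invented for illustration and are not drawn from any study.

# Illustrative sketch only: hypothetical counts, not data from any study.
# A 2x2 table cross-classifying risk categorisation against observed violence.
violent_high_risk = 30       # committed violence, categorised high risk (true positives)
violent_low_risk = 10        # committed violence, categorised low risk (false negatives)
nonviolent_high_risk = 160   # no violence, categorised high risk (false positives)
nonviolent_low_risk = 300    # no violence, categorised low risk (true negatives)

sensitivity = violent_high_risk / (violent_high_risk + violent_low_risk)          # 0.75
specificity = nonviolent_low_risk / (nonviolent_low_risk + nonviolent_high_risk)  # ~0.65

# Diagnostic odds ratio: odds of a high-risk rating among the violent
# divided by the odds of a high-risk rating among the non-violent.
diagnostic_odds_ratio = (violent_high_risk / violent_low_risk) / (
    nonviolent_high_risk / nonviolent_low_risk)                                   # ~5.6

print(sensitivity, specificity, diagnostic_odds_ratio)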

Of these various measures, the AUC has the advantages of assessing discrimination independent of a specific cut-off threshold and minimising the influence of the base rate of violence. As a result, the AUC has emerged as the most commonly used—and sometimes the only reported—measure of predictive validity in risk assessment research.14 It provides values between 0.00 and 1.00, where chance discrimination is 0.50 and perfect discrimination is 1.00. The AUC can also be compared across studies or converted to an effect size for meta-analysis. Of note, however, the AUC and other measures of discrimination usually assume dichotomous distinctions. Assessment tools that use tripartite distinctions (‘low’, ‘medium’, ‘high’) or more numerous divisions need to be converted to dichotomous categorisations for these measures to be applied. The measures also do not reflect inter-rater reliability, which represents a separate concern for assessing the accuracy of classification.
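
The rank-based interpretation of the AUC can be illustrated with a short sketch using invented risk scores: across every pairing of a violent with a non-violent individual, it simply counts how often the violent individual received the higher score, with ties counted as half.

# Illustrative sketch of the rank interpretation of the AUC, using
# hypothetical risk scores rather than output from any real instrument.
violent_scores = [7, 5, 9, 6, 8]        # scores of those who later committed violence
nonviolent_scores = [3, 6, 2, 7, 4, 5]  # scores of those who did not

wins = ties = total = 0
for v in violent_scores:
    for n in nonviolent_scores:
        total += 1
        if v > n:
            wins += 1
        elif v == n:
            ties += 1

# AUC = probability that a violent case outranks a non-violent one (ties count half).
auc = (wins + 0.5 * ties) / total
print(round(auc, 2))  # 0.85 for these made-up scores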

Using indices of discrimination, research has shown that structured risk assessments are superior to unstructured approaches.15 16 Unstructured approaches appear to be somewhat unreliable given their informal nature, potential for bias, and variability between clinicians.17 Estimates of AUC values for unstructured approaches typically fall between 0.55 and 0.66,18 though the meaningfulness of these estimates is limited by the variability already noted. By contrast, structured risk assessments typically yield AUC values between 0.66 and 0.78.19 This means that, for 66%–78% of comparisons using a structured instrument, a randomly selected perpetrator of violence had a higher score or risk categorisation than a randomly selected non-perpetrator (cf. only 55%–66% using unstructured approaches).20 These values are much lower than most medical screening tests, though comparable to other tools used in criminal justice to predict recidivism.21

Of the many structured risk assessment tools available, meta-analytic comparisons show little difference between them. The nine most commonly used instruments all have a moderate level of predictive accuracy and appear to be essentially interchangeable22 or, when calibration is also considered, to differ only slightly.19 Instruments using structured professional judgement also seem to perform similarly to actuarial approaches.23 This similarity in performance is perhaps not surprising given that instruments assess similar risk factors.2 These risk factors appear to have four overlapping dimensions: criminal history; persistent antisocial lifestyle; psychopathic personality; and mental health and substance use issues.24 Consistent with this, novel instruments created by randomly selecting risk factors from established instruments perform very similarly to the established instruments.24 As such, using structured approaches, clinicians appear able to distinguish patients by violence risk consistently above chance at a group level.

Calibration

Indices of calibration measure the accuracy of risk assessment tools at predicting absolute risk.5–10 These indices evaluate the match between predicted and observed outcomes.5–10 They are calculated prospectively to determine the relationship between risk categorisation and subsequent violence.10 As such, they represent the practical performance of risk assessment tools within particular populations and thus closely depend on base rates of violence within those populations. Measures of calibration can be calculated from some measures of discrimination (those assessing accuracy at separating patients by outcomes) by explicitly incorporating base rates of outcomes. Nevertheless, measures of calibration remain conceptually distinct given their focus on assessing the fit between predicted and observed outcomes for those classified in different risk strata.5–10 Risk assessment tools can have poor calibration despite good discrimination: a tool could accurately rank patients in terms of their likelihood of committing violence (good discrimination), but still be misleading if it predicts that their absolute risk of violence is much higher or lower than it actually is (poor calibration).5
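
A minimal numerical sketch, using invented predicted probabilities, may help to illustrate this distinction: the hypothetical tool below ranks patients perfectly yet overstates their absolute risk.

# Invented predicted probabilities of violence for six hypothetical patients,
# paired with whether violence occurred (1) or not (0).
predictions = [0.90, 0.80, 0.70, 0.40, 0.30, 0.20]
outcomes = [1, 1, 0, 0, 0, 0]

# Discrimination is perfect here: both patients who were violent received
# higher predicted probabilities than every patient who was not (AUC = 1.0).

# Calibration compares predicted risk with the observed rate of violence.
mean_predicted = sum(predictions) / len(predictions)  # 0.55
observed_rate = sum(outcomes) / len(outcomes)         # ~0.33
print(mean_predicted, observed_rate)
# The tool ranks patients correctly yet overstates their absolute risk.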

Given this focus on practical performance, measures of calibration should arguably be of greater relevance to clinical practice than those of discrimination: clinicians usually seek to assess an individual patient’s actual risk of violence rather than merely compare and rank different patients relative to one another.10 Measures of calibration, however, are often overlooked.4 11 One practical reason for this may be the measures’ reliance on base rates. Base rates vary considerably depending on a range of factors, including clinical setting, recruitment and sampling, the time period being considered, and the outcome of interest. Such variation makes it very difficult to generalise or compare across research studies or across different clinical settings and locations, with comparisons inevitably confounded by potential differences in samples.20 25 26

Common statistical indices of calibration include positive predictive value (the proportion of those judged to be at high risk who later commit violence) and negative predictive value (the proportion of those judged to be low risk who do not later commit violence).10 Further indices include the number needed to detain (the number of individuals judged to be high risk who would need to be detained to prevent a single violent act) and the number safely discharged (the number of individuals judged to be at low risk who could be discharged before a single violent act).10 All four indices reflect the base rate of events within the population they are applied to. They also typically require a single cut-off threshold, which can limit their applicability for risk assessment tools with more than two categories. Other measures, such as the likelihood ratio and Brier index, reflect both discrimination and calibration,9 10 though they are conceptually more complicated and have not been widely used in violence risk assessment research.14
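
As an illustration, the sketch below derives these four indices from an assumed base rate, sensitivity and specificity; the figures are hypothetical, and the number needed to detain additionally assumes that detention would in fact have prevented each violent act.

# Hypothetical inputs chosen only for illustration.
base_rate = 0.05     # proportion of the assessed population who commit violence
sensitivity = 0.80   # proportion of violent individuals categorised high risk
specificity = 0.70   # proportion of non-violent individuals categorised low risk

true_positives = base_rate * sensitivity
false_negatives = base_rate * (1 - sensitivity)
true_negatives = (1 - base_rate) * specificity
false_positives = (1 - base_rate) * (1 - specificity)

ppv = true_positives / (true_positives + false_positives)   # high risk who are violent, ~0.12
npv = true_negatives / (true_negatives + false_negatives)   # low risk who are not violent, ~0.99

number_needed_to_detain = 1 / ppv          # high-risk detentions per violent act prevented, ~8
number_safely_discharged = 1 / (1 - npv)   # low-risk discharges before one violent act, ~68

print(ppv, npv, number_needed_to_detain, number_safely_discharged)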

Perhaps as a result of these methodological issues, empirical research has tended to emphasise measures of discrimination while neglecting calibration, despite the latter’s greater clinical relevance.4 11 Clinicians have also tended to overlook calibration, perhaps due to challenges in calculating the base rates of events within specific clinical practices, the lack of consensus around the probability thresholds required to justify particular interventions,27 and general cognitive biases that prioritise salient details over base rates.28 Despite such neglect, research has revealed significant limitations in risk assessment with respect to calibration. Across studies, patients deemed high risk in structured risk assessments vary considerably in their rates of violence both within and between instruments used (analogous to positive predictive value).29 Individual instruments themselves vary across studies, with annualised rates of violence for those deemed high risk ranging from 0% to 100%.29 As such, the practical value of high-risk categorisation appears highly questionable. It also does not appear to be possible to reliably assign a numerical probability to the potential for an individual to act violently, at least using existing tools.

Other research has revealed that the impact of the relative inaccuracy in discrimination of risk assessment tools is disproportionately large compared with the base rates of violence in most populations.30 As a result, a large proportion of people categorised as high risk will not commit violence, while some categorised as ‘low risk’ will. In the case of patients with schizophrenia, for example, assuming typical psychometric properties of risk assessment tools and base rates of violence, approximately 2 500 patients categorised as ‘high risk’ would need to be detained for 1 year to prevent one homicide of any victim (including family members and associates), while 35 000 would need to be detained to prevent one homicide of a stranger.30 These large numbers would not prevent false negatives. Assuming a sensitivity of 80% for detecting violence generally, around one-fifth of patients who commit homicide would likely have been categorised as low risk.30 Such calculations indicate considerable practical limitations of risk assessments.
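
The general form of such arithmetic can be reconstructed as follows; the inputs (an annual homicide rate of roughly 1 in 10 000, a stranger-homicide rate of roughly 1 in 140 000, 80% sensitivity, and 20% of patients rated high risk) are assumptions chosen for illustration and need not match those used in the cited analysis.

# Assumed inputs for illustration only; not necessarily the values used in the cited study.
annual_homicide_rate = 1 / 10_000            # any homicide by patients with schizophrenia
annual_stranger_homicide_rate = 1 / 140_000  # homicide of a stranger
sensitivity = 0.80                           # homicides preceded by a 'high risk' rating
proportion_rated_high_risk = 0.20            # patients categorised high risk

def detentions_per_homicide_prevented(event_rate):
    # Number of high-risk patients detained for a year per homicide averted,
    # assuming detention would in fact have prevented the event.
    rate_among_high_risk = event_rate * sensitivity / proportion_rated_high_risk
    return 1 / rate_among_high_risk

print(detentions_per_homicide_prevented(annual_homicide_rate))           # 2500.0
print(detentions_per_homicide_prevented(annual_stranger_homicide_rate))  # 35000.0

# False negatives persist regardless: with 80% sensitivity, about one in five
# patients who go on to commit homicide would have been rated low risk.
print(1 - sensitivity)  # ~0.2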

Clinical applications

Independent of statistical indices, an additional challenge in clinical practice involves applying aggregated group data to individual patients. In most scenarios, there is likely to be some uncertainty about whether an individual is representative of the samples recruited in research studies and how to consider variability within such groups. This issue is particularly relevant given the large heterogeneity in predictive validity both within and across studies;23 the restricted set of outcomes and longer time periods considered in most studies compared with clinical practice; imperfect inter-rater reliability; and many potential research biases (including authorship conflict-of-interest, whereby studies conducted by instrument designers find higher predictive values than other studies).31 In addition, some authors have argued that probabilistic group estimates from risk assessments are too imprecise (ie, have overly wide confidence intervals) to be meaningful for an individual.32 While these latter claims have been disputed,33 they continue to generate controversy and highlight challenges describing one’s confidence in an estimate for an individual based on previously collected group data.20

A related practical concern is how risk is conceptualised and communicated. Numerical probability estimates do not appear to be reliable due to large variability within and across samples.29 Categorical estimates, the alternative, are also problematic because (i) there is no consensus on the risk associated with particular categories (eg, high risk);27 (ii) they obscure decisions about cut-off thresholds, which involve decisions about the benefits and harms of accurate and inaccurate classification;27 and (iii) they can be misleading if, as previous research has shown, the majority classed as high risk do not perpetrate violence.30 Independent of format, framing of outcomes appears to distort perceived risk. Describing a risk estimate as the probability of violence, for example, increases perceived risk compared with describing it as the probability of violence not occurring.34 Likewise, describing risk in a frequency format (eg, 10 out of 100) increases perceived risk compared with doing so as a probability (eg, 10%).35 Such issues highlight practical challenges in conveying risk beyond concerns of accuracy.36

A final indicator of psychiatrists’ ability to predict violence risk is whether risk assessments reduce violence. There is, however, no clear evidence of this. Findings from cluster randomised studies are mixed, while the vast majority of pre-post studies show no benefit.37 Even in the few studies that do show reduced violence, effects do not appear related to prediction accuracy and instead seem to arise from other factors (eg, greater staff vigilance, regression to the mean). Clinicians’ attitudes and behaviour also suggest scepticism about accuracy and value in daily practice. Clinicians vary considerably in terms of whether they consider risk assessments useful for designing management plans and, if they do, whether they actually apply them in management efforts.37

When applied to management, a limitation of current instruments is that the risk factors they identify are relatively crude and only a small proportion are modifiable. Typical modifiable risk factors include, for example, mental illness, substance use problems, poor engagement with treatment, and lack of social supports.1–3 As a result, they appear to have relatively limited value for guiding treatment beyond what would be obvious to most clinicians (ie, treating mental illness and substance use, fostering therapeutic engagement, increasing social supports, arranging stable accommodation, and so on). In a similar way, the practical value of dynamic risk factors is limited by low base rates of violence—rates lowered further when adjusting for the short time period for which the risk factor applies. As a result, dynamic risk factors need to be associated with very large increases in risk to be clinically meaningful for a given individual, again leaving only a narrow set of relatively obvious variables (eg, intoxication, clear plan and intent). When considering the significant time required to complete structured assessments38—around 15 hours on average39—and the associated opportunity costs, the overall utility of instruments for informing management appears questionable.
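
A simplified calculation, assuming a purely multiplicative effect on a hypothetical base rate, illustrates why a dynamic factor must confer a very large relative increase in risk before it meaningfully alters an individual’s absolute risk over the short period it applies.

# Hypothetical numbers; assumes the dynamic factor multiplies a constant baseline rate.
annual_base_rate = 0.02   # assumed annual rate of serious violence in the population assessed
window_days = 7           # period over which the dynamic factor applies
window_base_rate = annual_base_rate * window_days / 365

for relative_risk in (2, 5, 20):
    absolute_risk = window_base_rate * relative_risk
    print(f"relative risk {relative_risk:>2}: absolute risk ~{absolute_risk:.4f} "
          f"(about 1 in {round(1 / absolute_risk)}) over {window_days} days")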

Risk and uncertainty

Altogether, these challenges point to more fundamental issues about the nature of risk. In an influential account, Knight40 distinguished risk from uncertainty based on the extent to which probabilities of future events could be quantified beforehand. Whereas risk involves probabilities that can be clearly quantified (eg, coin toss, rolling dice), uncertainty involves probabilities that cannot be quantified, either due to lack of knowledge (‘epistemic uncertainty’) or randomness in the processes involved (‘aleatory uncertainty’).40 41 While others have used different definitions and terminologies,20 41–43 key distinctions remain between (i) whether or not probabilities can be quantified and (ii) whether the source of uncertainty is ignorance (unknown but potentially knowable outcomes) or the stochastic nature of the relevant processes (unknown and unknowable outcomes).

Using this framework, attempts to predict adverse events for individual patients can be understood as involving aleatory uncertainty. It is not possible to quantify the probabilities involved because the full range of possible scenarios and outcomes is unknown, events and environmental factors are subject to randomness, and there is intrinsic unpredictability in the outcomes for a given individual (cf. at a group level or with repeated trials). Aspects of epistemic uncertainty might also apply where there is a lack of confidence in information derived from a particular clinical assessment that could, in principle, be verified with further investigation.41 For a given individual, however, the overriding uncertainty remains aleatory given the stochastic processes involved, regardless of what is known about that individual. This underscores the limits of what risk assessments can achieve.20 44–46

Faced with these limitations, proponents appear to have redefined their goals and proposed that risk assessments do not seek to predict actual outcomes but instead help to identify and manage potential risks.47 48 Putting aside instrument developers’ conflicts of interest,31 such claims remain problematic. The redefined goals do not address the aleatory nature of the processes involved—it is not possible to quantify the probability of supposed potential risks, leaving them somewhat nebulous and unfalsifiable. Just as significantly, instruments leave potential losses undefined.49 Violence is highly heterogeneous, varying in both its severity and impact. Acts of violence, for example, can range from verbal aggression when provoked to premeditated serial homicide. A particular act of violence can also vary in the physical, psychological, social and financial harms it inflicts on victims, families and clinicians.49 Such variation is so broad as to render the notion of a single category of violence almost meaningless while still neglecting overall impact and loss.49

Risk, as ordinarily conceptualised, involves two key elements: (i) the chance or possibility of harm (ie, the probability) and (ii) the nature and extent of the harm or injury (ie, the loss).46 49 Both elements remain ill-defined in this context, leaving the overall construct of risk similarly vague and difficult to sustain. Beyond such conceptual difficulties, this ambiguity poses practical challenges for risk assessment. In particular, current instruments are designed for a single outcome, varying somewhat in how this is operationalised, but do not encompass the full range of potential violent acts. Different acts, however, can be associated with distinct risk factors. Mania, for example, is associated with aggression and minor violence, but usually not severe violence or homicide,50 51 while sexual violence is associated with different variables than physical violence.52 As a result, different potential outcomes require completion of distinct instruments, each with its own investment of time. This challenge of attempting to anticipate a wide range of potential harms is complicated further by the different base rates of outcomes, often specific to particular populations; difficulty applying historical group data to given individuals; and the varying impact that any discrete act of violence can have, sometimes simply due to chance. In sum, the combination of aleatory uncertainty and the difficulty establishing calibration for even simple outcomes indicates the likely futility of the project: that is, in seeking to know—or manage—the unknowable.

Risk assessment in psychiatry—whether conceptualised as prediction or evaluating a more nebulous notion of potential—thus falls within broader social trends in response to anxiety and limited control over future events. Faced with uncertainty about the future and public and legal intolerance for adverse outcomes, institutions have sought to quantify risk; implement procedures and regulations designed to mitigate it; and, where possible, displace blame onto individual decision makers.53 54 As a result, decision makers face their own reputational risks from adverse outcomes (‘secondary risk’) and strive to minimise these.53 54 In this context, risk assessments can be viewed as a form of defensive proceduralism,53 54 a process undertaken to minimise blame, independent of actual predictive utility. Such social forces may contribute to the ongoing popularity of risk assessments, despite poor predictive validity at an individual level.44 46 55

Conclusion

Some concept of risk is likely to remain a part of psychiatric assessments. Clinicians are inevitably tasked with making decisions under conditions of uncertainty and need to weigh the anticipated outcomes associated with particular interventions and contingencies. Clinicians are also faced with additional pressures from resource allocation, the need to justify involuntary treatment, and assumed responsibility over adverse events. In the case of anticipating violence, however, research indicates fundamental limitations in the ability of clinicians to predict future events. Indices of calibration indicate that risk categorisations, including designations of ‘high risk’, involve highly variable rates of outcomes in actual practice to the point where such categorisations become almost meaningless. Challenges in applying risk assessment tools to individuals and the fact that violence involves aleatory uncertainty, rather than quantifiable risk per se, likewise mean that clinicians simply cannot predict or avoid adverse events.

This predicament points to a need for a universal standard of care wherever possible. Patients should ideally be admitted to mental health hospitals on the basis of current symptoms and clinical need, rather than anticipated future risk, and discharged with ongoing follow-up, regardless of their supposed risk categorisation. For patients with a previous history of violence or coming from a population with higher rates of violence, such follow-up should ideally be on an assertive and enforceable basis, regardless of their score on a risk assessment tool. Resource limitations, however, mean that clinicians and services often need to select and prioritise certain patients. In so doing, false positives and false negatives will inevitably occur, regardless of the methods and criteria used.

Insofar as a formal estimation of risk is required, structured approaches appear superior to unstructured approaches. It is not always clear, however, whether these increases in accuracy, still highly limited for individual patients, are worth the clinical resources and large amounts of time required to complete structured tools—resources and time that could otherwise be devoted to clinical interventions. Briefer—or perhaps in future, computer-automated—structured risk assessments might have a role at an institutional level when allocating scarce resources to identify groups most likely to benefit from interventions. Structured assessments may also help substantiate longer-term management plans for patients in settings with high baseline rates of violence.56 Given their limited accuracy for individuals, however, formalised assessments still arguably perform more of a bureaucratic function than a clinical one in these settings by confirming that obvious modifiable risk factors have not been missed, satisfying managerial demands for transparency, and attempting to avert concerns about liability.

Risk assessments remain fundamentally limited in their accuracy for individual patients. As such, they foster unrealistic expectations about clinicians’ ability to anticipate adverse events and inevitably misclassify a proportion of patients. When used to justify resource allocation, they also unavoidably result in significant harms. These include the unnecessary treatment, detention and stigma for those mistakenly deemed high risk (false positives); the failure to anticipate violence arising from those mistakenly deemed low risk (false negatives); and the deprivation of more benign interventions for those deemed low risk more generally.57 58 Such issues indicate the need to recognise the inherent limitations of risk assessment. They also point to a need to acknowledge our own discomfort at the prospect of facing unpredictable and uncontrollable adverse events.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

Michael Connors is a conjoint senior lecturer in the Centre for Healthy Brain Ageing and the Discipline of Psychiatry and Mental Health at the University of New South Wales in Sydney, Australia. He is also a psychiatry registrar at Prince of Wales Hospital in Sydney and currently finishing a Master of Psychiatry at the University of Melbourne. Dr Connors completed a Bachelor of Science (Honours) in psychology, a Bachelor of Arts in philosophy and religion, and a Doctor of Medicine at the University of Sydney. He also completed a Doctor of Philosophy on the cognitive neuropsychiatry of delusions at Macquarie University. His research focuses on neuropsychiatric symptoms and the clinical epidemiology of mental illness, ageing, and neurodegenerative disease.


Matthew Large is a conjoint professor in the Discipline of Psychiatry and Mental Health at the University of New South Wales and is the Clinical Director of the Eastern Suburbs Mental Health Service at Prince of Wales Hospital in Sydney, Australia. He completed Bachelor of Medicine, Bachelor of Surgery, and Bachelor of Science degrees at the University of Sydney; a Doctor of Medical Science at the University of New South Wales; and a fellowship with the Royal Australian and New Zealand College of Psychiatrists. Professor Large has longstanding research interests in mortality associated with mental illness and is internationally recognised as an expert in mental health risk assessment, suicide, and homicide.


Footnotes

  • Contributors MHC conceptualised the manuscript and wrote the original draft. MML helped to conceptualise the manuscript and revised it critically for important content. Both authors read and approved the final version.

  • Funding The authors received no financial support for the research, authorship and/or publication of this article.

  • Competing interests MML regularly provides expert opinion to courts on matters related to suicide and homicide.

  • Provenance and peer review Not commissioned; externally peer reviewed.