Suicide Risk Prediction Tools Fail Racialized Patients

Current models used to predict suicide risk fall short for racialized populations including Black, Indigenous, and People of Color (BIPOC), new research shows.

Investigators developed two suicide prediction models to examine whether these types of tools predict risk accurately across patient populations or whether they fall short for particular groups.

They found both prediction models failed to identify high-risk BIPOC individuals. The first model flagged nearly half (46.8%) of outpatient visits followed by suicide among White patients, versus only 7% of such visits among Black and American Indian/Alaskan Native patients. The second model had a sensitivity of 41% for White patients, but just 3% for Black patients and 7% for American Indian/Alaskan Native patients.

“You don’t know whether a prediction model will be useful or harmful until it’s evaluated. The take-home message of our study is this: You have to look,” lead author Yates Coley, PhD, assistant investigator, Kaiser Permanente Washington Health Research Institute, Seattle, told Medscape Medical News.

The study was published online April 28 in JAMA Psychiatry.

Racial Inequities

Suicide risk prediction models have been “developed and validated in several settings” and are now in regular use at the Veterans Health Administration, HealthPartners, and Kaiser Permanente, the authors write.

But the performance of suicide risk prediction models, while accurate in the overall population, “remains unexamined” in particular subpopulations, they note.

“Health records data reflect existing racial and ethnic inequities in healthcare access, quality, and outcomes; and prediction models using health records data may perpetuate these disparities by presuming that past healthcare patterns accurately reflect actual needs,” Coley said.

Coley and her team “wanted to make sure that any suicide prediction model we implemented in clinical care reduced health disparities rather than exacerbated them.”

To investigate, researchers examined all outpatient mental health visits to seven large integrated healthcare systems by patients 13 years and older (n = 13,980,570 visits by 1,422,534 patients; 64% female; mean [SD] age, 42 [18] years). The study spanned January 1, 2009, to September 30, 2017, with follow-up through December 31, 2017.

In particular, researchers looked at suicides that took place within 90 days following the outpatient visit.

Researchers used two prediction models: logistic regression with LASSO (Least Absolute Shrinkage and Selection Operator) variable selection, and a random forest, a “tree-based method that explores interactions between predictors (including those with race and ethnicity) in estimating probability of an outcome.”

The models considered prespecified interactions between race and ethnicity and other predictors, including prior diagnoses, suicide attempts, and responses to the PHQ-9 [Patient Health Questionnaire-9].
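
As a rough illustration of how such models are built, the sketch below fits both model types with scikit-learn on synthetic data. The features, sample size, and hyperparameters are assumptions for demonstration, not the study's actual specification.

```python
# A minimal sketch of the two model types, assuming scikit-learn; the
# synthetic data, features, and hyperparameters are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
# Hypothetical predictors: prior diagnosis, prior suicide attempt,
# PHQ-9 item-9 response (0-3), and a coded race/ethnicity group.
X = np.column_stack([
    rng.integers(0, 2, n),    # prior mental health diagnosis (0/1)
    rng.integers(0, 2, n),    # prior suicide attempt (0/1)
    rng.integers(0, 4, n),    # PHQ-9 item 9 response
    rng.integers(0, 6, n),    # race/ethnicity code
])
y = rng.binomial(1, 0.02, n)  # rare outcome: suicide within 90 days of visit

# Model 1: logistic regression with an L1 (LASSO) penalty, which performs
# variable selection by shrinking uninformative coefficients to zero.
lasso_lr = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso_lr.fit(X, y)

# Model 2: a random forest, whose tree splits can capture interactions
# between predictors (including race/ethnicity) automatically.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)

risk_lr = lasso_lr.predict_proba(X)[:, 1]  # predicted 90-day suicide risk
risk_rf = rf.predict_proba(X)[:, 1]
```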

Researchers evaluated performance of the prediction models in the overall validation set and within subgroups defined by race/ethnicity.

The area under the curve (AUC) measured model discrimination, and sensitivity was estimated for global and race/ethnicity-specific thresholds.
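
Continuing the sketch above, the snippet below shows one hedged way to carry out that kind of subgroup evaluation: AUC is computed within each race/ethnicity group, and sensitivity is the share of true cases captured when visits above a single global risk-score cutoff are flagged (the 95th percentile here is an assumed threshold).

```python
# Continuing the sketch above: per-group AUC and sensitivity at a single
# global threshold (the 95th risk percentile here is an assumption).
from sklearn.metrics import roc_auc_score

def subgroup_performance(y_true, risk, group, percentile=95):
    threshold = np.percentile(risk, percentile)  # one global cutoff for everyone
    results = {}
    for g in np.unique(group):
        mask = group == g
        yg, rg = y_true[mask], risk[mask]
        if yg.min() == yg.max():  # AUC is undefined without both outcome classes
            continue
        flagged = rg >= threshold
        results[g] = {
            "auc": roc_auc_score(yg, rg),
            "sensitivity": flagged[yg == 1].mean(),  # share of true cases flagged
        }
    return results

group = X[:, 3]  # the race/ethnicity code column from the sketch above
print(subgroup_performance(y, risk_lr, group))
```

Reporting these metrics per group, rather than only for the overall population, is what surfaces the disparities described below.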

Unacceptable Scenario

Within the total population, there were 768 deaths by suicide within 90 days of 3143 visits. Suicide rates were highest for visits by patients with no recorded race/ethnicity, followed by visits by Asian, White, American Indian/Alaskan Native, Hispanic, and Black patients (Table 1).

Table 1. Suicide Rates by Race/Ethnicity

Race/Ethnicity                    Visits Followed by Suicide Within 90 Days, n    Suicide Rate per 10,000 Visits
Not recorded                      313                                             5.71
Asian                             187                                             2.99
White                             2134                                            2.65
American Indian/Alaskan Native    21                                              2.18
Hispanic                          392                                             1.18
Black                             65                                              0.56

Both models showed “high” AUC and sensitivity for visits by White, Hispanic, and Asian patients, but “poor” performance, notably low sensitivity, for visits by Black and American Indian/Alaskan Native patients and those without recorded race/ethnicity, the authors report (Table 2).

Table 2. Area Under the Curve (AUC) and Sensitivity for Prediction Models

Race/Ethnicity                    Model                  AUC     Sensitivity
Entire validation set             Logistic regression    0.822   41.1%
                                  Random forest          0.816   38.0%
White                             Logistic regression    0.828   46.8%
                                  Random forest          0.812   40.6%
Hispanic                          Logistic regression    0.855   36.8%
                                  Random forest          0.831   38.2%
Black                             Logistic regression    0.775   6.7%
                                  Random forest          0.786   3.3%
Asian                             Logistic regression    0.834   31.8%
                                  Random forest          0.882   60.0%
American Indian/Alaskan Native    Logistic regression    0.599   6.7%
                                  Random forest          0.642   6.7%
Not recorded                      Logistic regression    0.640   23.4%
                                  Random forest          0.676   20.4%

Sensitivity is reported at the ≥95th percentile risk-score threshold.


“Implementation of prediction models has to be considered in the broader context of unmet healthcare needs,” said Coley.

“In our specific example of suicide prediction, BIPOC populations already face substantial barriers in accessing quality mental healthcare and, as a result, have poorer outcomes, and using either of the suicide prediction models examined in our study will provide less benefit to already underserved populations and widen existing care gaps” — a scenario Coley said is “unacceptable.”

“We must insist that new technologies and methods be used to reduce racial and ethnic inequities in care, not exacerbate them,” she added.

Biased Algorithms

Commenting on the study for Medscape Medical News, Jonathan Singer, PhD, LCSW, associate professor, School of Social Work, Loyola University, Chicago, Illinois, described it as an “important contribution because it points to a systemic problem and also to the fact that the algorithms we create are biased, created by humans, and humans are biased.”

Although the study focused on the healthcare system, Singer believes the findings have implications for individual clinicians.

“If clinicians may be biased against identifying suicide risk in Black and Native American patients, they may attribute suicidal risk to something else. For example, we know that in Black Americans, expressions of intense emotions are oftentimes interpreted as aggression or being threatening, as opposed to indicators of sadness or fear,” noted Singer, who is also president of the American Association of Suicidology and was not involved with the study.

“Clinicians who misinterpret these intense emotions are less likely to identify a Black client or patient who is suicidal,” Singer said.

The research was supported by the Mental Health Research Network from the National Institute of Mental Health. Coley has reported receiving support through a grant from the Agency for Healthcare Research and Quality. Disclosures for the other authors are listed in the article.
Singer has reported no relevant financial relationships.

JAMA Psychiatry. Published online April 28, 2021.
