
Machine-Learning Algorithms Can Predict Suicide Risk More Readily Than Clinicians, Study Finds

03/10/17
By finding useful patterns among dozens or hundreds of risk factors, machine learning algorithms could be better at predicting suicides than humans. Richard Wareham Fotografie/Getty

Each year in the United States, more than 40,000 people die by suicide, and from 1999 to 2014, the suicide rate increased 24 percent. You might think that after generations of theories and data, we would be close to understanding how to prevent self-harm, or at least predict it. But a new study concludes that the science of suicide prediction is dismal, and the established warning signs about as accurate as tea leaves.

There is, however, some hope. New research shows that machine-learning algorithms can dramatically improve our ability to predict suicide. In a review in the February issue of Psychological Bulletin, researchers examined 365 studies from the past 50 years, covering 3,428 different measurements of risk factors such as genes, mental illness and abuse. After a meta-analysis, a statistical synthesis of the results in those published studies, they found that no single risk factor had clinical significance in predicting suicidal ideation, attempts or completion.

That may seem surprising. Surely depressed people are more likely than other people to kill themselves. That may be true, but there are a couple of things to keep in mind: First, these were predictive studies, each spanning almost 10 years on average, so the question is whether having depression now means you’re more likely to kill yourself over the next decade.

Second, clinical significance is not the same as statistical significance. In other words, the correlations are mathematically reliable but too weak to act on. In a given year, 13 in every 100,000 Americans will die by suicide. Even if those who attempt suicide are twice as likely as others to later die by suicide, their probability is still only about 26 in 100,000. So if I guessed you were going to die by suicide this year based on a prior attempt, I’d still be wrong more than 99.9 percent of the time. “Knowing that someone has made a prior attempt is helpful in the same way that buying two lottery tickets is helpful,” says Joseph Franklin, a psychologist at Florida State University (FSU) and the lead author of the paper. The odds are improved, but you wouldn’t bet your house—or a costly intervention—on it.
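The base-rate arithmetic above can be sanity-checked in a few lines of Python. (The doubling of risk for prior attempters is the article's own simplifying assumption, not a measured figure.)

```python
# Back-of-envelope check of the base-rate argument.
# U.S. annual suicide rate: about 13 per 100,000.
# Assume prior attempters are twice as likely as others to die by suicide.

base_rate = 13 / 100_000          # annual probability for an average American
attempter_rate = 2 * base_rate    # about 26 in 100,000

# If we predict "will die by suicide this year" for every prior attempter,
# the prediction is wrong whenever that person does not die by suicide:
error_rate = 1 - attempter_rate

print(f"attempter risk: {attempter_rate:.5f}")   # 0.00026
print(f"error rate:     {error_rate:.2%}")       # 99.97%
```

Doubling a tiny base rate still leaves a tiny rate, which is why the prediction is wrong more than 99.9 percent of the time.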

To increase the average American’s risk of suicide in a given year to even 10 percent (from 0.013 percent), something would have to multiply that risk by a factor of roughly 770. But in the meta-analysis, no single risk factor increased the odds of dying by suicide by a factor of more than 3.6. For suicide attempts, the strongest risk factor raised the odds by a factor of 4.2, and for suicidal ideation, by 3.6. And these may be overestimates, given that weak findings likely weren’t published and thus couldn’t be included in the meta-analysis. (To be clear, a risk factor is not necessarily a cause. Sleep problems, for example, might influence suicidal behavior, or they might merely predict it by indicating a deeper issue that does influence it.)
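A quick computation makes the gap concrete: the multiplier needed to reach a clinically alarming risk is two orders of magnitude larger than the strongest single risk factors reported. A sketch, using the figures above:

```python
# How big a risk multiplier would matter clinically, vs. what single factors deliver.
base_rate = 13 / 100_000   # 0.013 percent: average American's annual risk
target = 0.10              # a hypothetical 10 percent annual risk

needed_factor = target / base_rate
print(round(needed_factor))          # 769

# Strongest single-factor odds increases reported in the meta-analysis:
strongest = {"suicide death": 3.6, "suicide attempt": 4.2, "ideation": 3.6}
print(max(strongest.values()))       # 4.2 -- nowhere near ~770
```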

Clinicians tend to be most interested in predicting suicidal thoughts and behaviors in the short term—say, within a week—but there’s no research addressing that question. You would need a very large pool of people who are at very high risk, such that you can measure a potential risk factor—say, job loss—and then within a week have enough of them attempt suicide to see if attempters are more likely than non-attempters to have lost their job that week. The problem (for research, not for society) is that very few of us attempt suicide in a given week. “Many people in the field—ourselves included—didn’t recognize that there weren’t studies like that, until we did the meta-analysis,” says Jessica Ribeiro, another psychologist at FSU and a co-author of the paper.

The authors also found that the ability of researchers to find factors that predict suicidal thoughts and behaviors did not improve over the 50 years they surveyed, and that some of the most popular factors to study—including mood disorders, substance abuse and demographics—are some of the weakest predictors.

Looking at a single factor at a time (such as depression), while ignoring short-term factors, has impeded the field. Suicide is complex, with many interacting variables. “Few would expect hopelessness measured as an isolated trait-like factor to accurately predict suicide death over the course of a decade,” the researchers write. “But many might expect that, among older males who own a gun and have a prior history of self-injury and very little social support, a rapid elevation in hopelessness after the unexpected death of a spouse would greatly increase suicide death risk for a few hours or days. Yet, most of the existing literature has tested the former hypothesis rather than the latter.”

The researchers recommend developing machine learning algorithms to find useful patterns among dozens or hundreds of risk factors. And a paper recently accepted for publication by Clinical Psychological Science shows the potential of doing just that.

Colin Walsh, an internist and data scientist at Vanderbilt University Medical Center, along with FSU’s Franklin and Ribeiro, looked at millions of anonymized health records and compared 3,250 clear cases of nonfatal suicide attempts with a random group of patients. To make their prediction method widely scalable, they restricted themselves to factors that would be documented in routine clinical encounters, such as demographics, medications, prior diagnoses and body mass index. Then they let a computer churn through the data and find patterns that would predict suicide attempts within various time frames, from a week to two years.

The accuracy score for each algorithm could range from 0.5 to 1, with 0.5 being no better than chance and 1 being perfect prediction. For comparison, the single factors from the meta-analysis achieved scores of about 0.58, little better than flipping a coin. The computer, however, achieved scores ranging from 0.86, when predicting whether someone would attempt suicide within two years, to 0.92, when looking ahead one week.
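A score with exactly these properties, running from 0.5 for chance to 1 for perfect prediction, is the area under the ROC curve (AUC): the probability that a randomly chosen attempter is ranked as higher-risk than a randomly chosen non-attempter. A minimal sketch with invented scores (not data from the study) shows how it is computed:

```python
# Minimal AUC computation (the 0.5-to-1 score described above), on toy data.
# AUC = fraction of (attempter, non-attempter) pairs the model ranks correctly.

def auc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs ranked correctly; ties count half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model risk scores for attempters vs. a comparison group:
attempters     = [0.9, 0.8, 0.75, 0.4]
non_attempters = [0.7, 0.5, 0.2, 0.1]

print(auc(attempters, non_attempters))   # 0.875
print(auc([1, 1], [0, 0]))               # 1.0 (perfect ranking)
print(auc([0.5], [0.5]))                 # 0.5 (chance)
```

On this toy data, 14 of the 16 attempter/non-attempter pairs are ranked correctly, giving 0.875, in the same neighborhood as the scores the Vanderbilt models achieved.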

The researchers see a number of ways the algorithms could become even more accurate. For one, the models don’t include life events such as job loss or breakups, or abrupt changes in mood or behavior. Ribeiro is collecting data to weigh the usefulness of these factors in short-term prediction, which will help forecast not only who will attempt suicide, but when. (She’s using online forums to collect sufficient pools of subjects.) The researchers also imagine algorithms integrating social media behavior. In a recent study at the University of Pennsylvania, 71 percent of Facebook and Twitter users allowed medical researchers to access their online feeds.

Thomas Joiner, co-director of the Military Suicide Research Consortium, has high hopes for machine learning. He’s a colleague of Franklin and Ribeiro at FSU. “So I’m biased,” he says, “but I really do view them as the future.”

The technology is the easy part, researchers say. The hard part is deploying it. They’re talking with clinicians, administrators, and patients at Vanderbilt University Medical Center to decide how machine learning might be implemented in patient care. How should medical data be shared? What’s the threshold for intervention? Who should be notified when a crisis is identified? Could there be unintended consequences?

Convincing clinicians to trust a computer over their instincts might be challenging, despite work going back decades showing that, due to our many biases, simple statistical models can match or beat humans at predicting job performance, academic success and psychiatric illness. “Clinical predictions are really bad,” Ribeiro, a clinician, says. “We know that, but that doesn’t mean we actually accept that our own predictions are bad.” Walsh sees a “hybrid” approach in which clinicians factor computer recommendations into their judgment. Joiner sees the reverse, in which human judgment becomes just another input for the computer.

In any case, the researchers expect that caretakers will ultimately do what’s best for their patients. “What clinicians do, day in and day out, is extremely difficult,” Franklin says. “I think that they’re extremely brave and extremely hard-working. And I at the same time recognize that the rates of psychopathology in general and suicide in particular are not going down. The shift from where we are to where I think we need to go is going to be a bit of a bumpy ride.”

Joiner notes that neither the difficulty nor the importance of suicide prediction can be overemphasized. “These are massive tragedies that just devastate people, devastate families—sometimes for generations,” he says. “The message from the meta-analysis and the message from the climbing rates is, ‘We gotta do better.’”
