Although it’s not possible to prevent every suicide, there are a lot of things that can help lower the risk. And some of that help is as close as your smartphone.
Health systems, tech companies, and research institutions are exploring how they can help with suicide prevention. They’re looking to harness technology in general – and artificial intelligence (AI) in particular – to catch subtle signs of suicide risk and alert a human to intervene.
“Technology, while it’s not without its challenges, offers incredible opportunities,” says Rebecca Bernert, PhD, director and founder of the Suicide Prevention Research Laboratory at Stanford University School of Medicine in Palo Alto, CA.
For instance, Bernert says that if AI can flag at-risk patients based on their health records, their primary care doctors could be better prepared to help them. While mental health care professionals are specially trained to assess suicide risk, studies show that among people who die by suicide, about 45% see their primary care doctor in their last month of life. Only 20% see a mental health professional.
Here are some of the tech advances that are in development or are already happening.
Clues From Your Voice
Researchers at Worcester Polytechnic Institute in Worcester, MA, are building an AI-based program called EMU (Early Mental Health Uncovering) that mines data from a smartphone to evaluate the suicide risk of the phone’s user.
This technology is still in development. It might have the potential to become part of a health app that you could download to your phone – perhaps at the suggestion of your health care provider.
After you grant all the required permissions, the app would use AI to monitor your suicide risk through your phone. Among its features is the option to speak into the app’s voice analyzer, either by reading a provided script or by authorizing the app to record segments of your phone calls. The app can detect subtle features in the voice that may indicate depression or suicidal thoughts.
“There are known voice characteristics that human beings can’t detect but that AI can detect because it’s been trained to do it on large data sets,” says psychologist Edwin Boudreaux, PhD. He’s the vice chair of research in the Department of Emergency Medicine at UMass Chan Medical School.
“It can take the voice and all these other data sources and combine them to make a robust prediction as to whether your mood is depressed and whether you’ve had suicidal ideations,” says Boudreaux, who has no financial stake in the company making this app. “It’s like a phone biopsy.”
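To picture how this might work under the hood, here is a deliberately simplified sketch in Python. It is not EMU’s actual code – the feature names, the toy training data, and the logistic-regression model are all illustrative assumptions – but it shows the general idea of combining voice-derived measurements with other phone signals into a single risk score.

```python
# Hypothetical sketch only -- not the EMU app's real pipeline.
# Assumes numeric features already extracted from voice recordings
# (e.g., pitch variability, speech rate) and phone usage (e.g., late-night
# activity), plus labels from a validated screening questionnaire.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: rows = people, columns = [pitch_variability,
# speech_rate, late_night_activity]; label 1 = elevated risk on the screen.
X_train = np.array([
    [0.2, 1.1, 0.1],
    [0.8, 0.6, 0.9],
    [0.3, 1.0, 0.2],
    [0.9, 0.5, 0.8],
])
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# A new user's features, combined into one probability-like score.
new_user = np.array([[0.7, 0.7, 0.6]])
risk_score = model.predict_proba(new_user)[0, 1]
print(f"Estimated risk score: {risk_score:.2f}")  # higher = flag for human review
```

In a real system, the output would not be acted on by the phone alone; as the researchers describe, the point is to alert a human – the user, a clinician, or both – to follow up.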
Smartphone data, with the user’s permission, could be used to send alerts to phone users themselves. This could prompt them to seek help or review their safety plan. Or perhaps it could alert the person’s health care provider.
Apps currently aren’t required to get government approval to back up their claims, so if you’re using any app related to suicide prevention, talk it over with your therapist, psychiatrist, or doctor.
Sharing Expertise
Google works to give people at risk of suicide resources such as the National Suicide Prevention Lifeline. It’s also shared its AI expertise with The Trevor Project, an LGBTQ suicide hotline, to help the organization identify callers at highest risk and get them help faster.
When someone in crisis contacts The Trevor Project by text, chat, or phone, they answer three intake questions before being connected with crisis support. Fellows from Google.org, the company’s charitable arm, helped The Trevor Project use computers to identify words in answers to the intake questions that were linked to the highest, most imminent risk.
When people in crisis use some of these keywords in answering The Trevor Project’s intake questions, their call moves to the front of the queue for support.
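Conceptually, that triage step can be pictured with a short sketch like the one below. The keywords, the scoring rule, and the queue are hypothetical stand-ins for illustration only, not The Trevor Project’s or Google’s actual system.

```python
# Hypothetical sketch of keyword-based triage -- not the real system.
from collections import deque

HIGH_RISK_KEYWORDS = {"tonight", "pills", "goodbye"}  # placeholder terms

def is_high_risk(intake_answers):
    """Return True if any intake answer contains a flagged keyword."""
    text = " ".join(intake_answers).lower()
    return any(word in text for word in HIGH_RISK_KEYWORDS)

queue = deque()  # contacts waiting for a crisis counselor

def add_contact(contact_id, intake_answers):
    # High-risk contacts move to the front of the queue; others wait in order.
    if is_high_risk(intake_answers):
        queue.appendleft(contact_id)
    else:
        queue.append(contact_id)

add_contact("caller_1", ["I've been feeling down lately"])
add_contact("caller_2", ["I have pills and a plan for tonight"])
print(queue[0])  # caller_2 is connected to support first
```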
A Culture of Toughness
You might already know that suicides are a particular risk among military professionals and police officers. And you’ve no doubt heard about the suicides among health care professionals during the pandemic.
But there’s another field with a high rate of suicide: construction.
Construction workers are twice as likely to die by suicide as people in other professions and 5 times as likely to die by suicide as from a work-related injury, according to the CDC. High rates of physical injury, chronic pain, job instability, and social isolation due to traveling long distances for jobs may all play a part.
JobSiteCare, a telehealth company designed for construction workers, is piloting a high-tech response to suicide in the industry. The company offers telehealth care to construction workers injured on job sites through tablets stored in a locker in the medical trailer on site. It’s now expanding that care to include mental health care and crisis response.
Workers can get help in seconds through the tablet in the trailer. They also have access to a 24/7 hotline and ongoing mental health care through telehealth.
“Tele-mental-health has been one of the big success stories in telemedicine,” says Dan Carlin, MD, founder and CEO of JobSiteCare. “In construction, where your job’s taking you from place to place, telemedicine will follow you wherever you go.”
Suicide Safety Plan App
Jaspr Health’s app aims to help people after a suicide attempt, starting when they are still in the hospital. Here’s how it works.
A health care provider starts to use the app with the patient in the hospital. Together, they come up with a safety plan – a document the provider develops with the patient to help them handle a future mental health crisis and the stressors that typically trigger their suicidal thinking.
The patient downloads Jaspr’s home companion app. They can access their safety plan, tools for handling a crisis based on preferences outlined in their safety plan, resources for help during a crisis, and encouraging videos from real people who survived a suicide attempt or lost a loved one to suicide.
What if AI Gets It Wrong?
There’s always a chance that AI will misjudge who’s at risk of suicide. It’s only as good as the data that fuels its algorithm.
A “false positive” means that someone is identified as being at risk when they aren’t – in this case, incorrectly flagging someone as being at risk of suicide.
With a “false negative,” someone who’s at risk isn’t flagged.
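The toy example below, with entirely made-up data, shows how the two kinds of errors are counted when a screening model’s output is compared with what actually happened.

```python
# Toy illustration of false positives and false negatives (made-up data).
# "actual" is what really happened; "flagged" is what a screening model predicted.
actual  = [1, 0, 0, 1, 0, 1]  # 1 = at risk, 0 = not at risk
flagged = [1, 1, 0, 0, 0, 1]  # the model's output for the same six people

false_positives = sum(1 for a, f in zip(actual, flagged) if a == 0 and f == 1)
false_negatives = sum(1 for a, f in zip(actual, flagged) if a == 1 and f == 0)

print(f"False positives (flagged but not at risk): {false_positives}")  # 1
print(f"False negatives (at risk but missed): {false_negatives}")       # 1
```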
The risk of harm from both false negatives and false positives is too great to use AI to identify suicide risk before researchers are sure it works, says Boudreaux.
He notes that Facebook has used AI to identify users who might be at imminent risk of suicide.
Meta, Facebook’s parent company, didn’t respond to WebMD’s request for comment on its use of AI to identify and address suicide risk among its users.
According to its website, Facebook allows users to report concerning posts, including Facebook Live videos, that may indicate a person is in a suicide-related crisis. AI also scans posts and, when deemed appropriate, makes the option for users to report the post more prominent. Whether or not users report a post, AI can flag posts and live videos on its own. Facebook staff members review posts and videos flagged by users or by AI and decide how to handle them.
They may contact the person who created the post with advice to reach out to a friend or a crisis helpline, such as the National Suicide Prevention Lifeline, which launched its three-digit 988 number in July 2022. Users can contact crisis lines directly through Facebook Messenger.
In some cases when a post indicates an urgent risk, Facebook may contact the police department near the Facebook user in potential crisis. A police officer is then dispatched to the user’s house for a wellness check.
Social media platform TikTok, whose representatives also declined to be interviewed for this article but provided background information via email, follows similar protocols. These include connecting users with crisis hotlines and reporting urgent posts to law enforcement. TikTok also provides hotline numbers and other crisis resources in response to suicide-related searches on the platform.
Privacy Concerns
The possibility of social media platforms contacting the police has drawn criticism from privacy experts as well as mental health experts like Boudreaux.
“This is a terrible idea,” he says. “Sending a police officer might only aggravate the situation, particularly if you are a minority. Besides being embarrassing or potentially traumatizing, it discourages people from sharing because bad things happen when you share.”
Privacy concerns are why the algorithm that could send Facebook posts to law enforcement is banned in the European Union, according to the Journal of Law and the Biosciences.
The consequences for people falsely identified as high risk, Boudreaux explains, depend on how the organization engages with the supposedly at-risk person. A potentially unneeded call from a health care professional may not do the same harm that an unnecessary visit from the police could do.
If you or someone you know is thinking of suicide, you can contact the National Suicide Prevention Lifeline. In the U.S., you can call, text, or chat 988 as of July 16, 2022, or call the Lifeline on its original number, 800-273-8255. Help is available 24/7 in English and Spanish.