For years, Facebook has been investing in artificial intelligence fields like machine learning and deep neural nets to build its core business—selling you things better than anyone else in the world. But earlier this month, the company began turning some of those AI tools to a more noble goal: stopping people from taking their own lives. Admittedly, this isn’t entirely altruistic. Having people broadcast their suicides from Facebook Live isn’t good for the brand.
But it’s not just tech giants like Facebook, Instagram, and China’s up-and-coming video platform Live.me that are devoting R&D to flagging self-harm. Doctors at research hospitals and even the US Department of Veterans Affairs are piloting new, AI-driven suicide-prevention platforms that capture more data than ever before. The goal: build predictive models to tailor interventions earlier. Because preventative medicine is the best medicine, especially when it comes to mental health.
If you’re hearing more about suicide lately, it’s not just because of social media. Suicide rates surged to a 30-year high in 2014, the last year for which the Centers for Disease Control and Prevention has data. Prevention measures have historically focused on reducing people’s access to things like guns and pills, or educating doctors to better recognize the risks. The problem is, for more than 50 years doctors have relied on correlating suicide risk with depression and drug abuse. And the research says they’re only slightly better at predicting an attempt than a coin flip.
But artificial intelligence offers the possibility of identifying suicide-prone people more accurately, creating opportunities to intervene long before thoughts turn to action. A study slated for publication later this month used machine learning to predict with 80 to 90 percent accuracy whether someone will attempt suicide, as much as two years in the future. Using anonymized electronic health records from 2 million patients in Tennessee, researchers at Florida State University trained algorithms to learn which combination of factors, from pain medication prescriptions to feelings of aggravation, best predicted an attempt on one’s own life.
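To make the idea concrete, here is a minimal sketch of the kind of supervised learning the study describes: a classifier fit to patient feature vectors with known outcomes, then used to score new patients. Everything here is invented for illustration — the feature names, the toy data, and the model choice (plain logistic regression via gradient descent); the FSU researchers worked with real EHR data and more sophisticated algorithms.

```python
import math

# Hypothetical, synthetic patient records for illustration only.
# Each row: [pain_med_prescriptions, prior_er_visits, depression_dx_flag]
X = [[3, 2, 1], [0, 0, 0], [5, 4, 1], [1, 0, 0], [4, 3, 1], [0, 1, 0]]
y = [1, 0, 1, 0, 1, 0]  # 1 = later suicide attempt (synthetic labels)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit logistic-regression weights by stochastic gradient descent on log-loss.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
        err = p - yi  # gradient of log-loss w.r.t. the pre-activation
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
        b -= lr * err

def predict_risk(features):
    """Return a probability-like risk score for a new feature vector."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, features)) + b)

print(predict_risk([4, 3, 1]) > 0.5)  # profile resembling the positive cases
print(predict_risk([0, 0, 0]) > 0.5)  # profile resembling the negative cases
```

The point of the real study is the feature search, not the model: with millions of records, the algorithm can surface which combinations of signals carry predictive weight, rather than relying on the two factors (depression, drug abuse) doctors have leaned on for decades.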
Their technique is similar to the text mining Facebook is using on its wall posts. The social network already lets users report posts that suggest someone is at risk of self-harm. Using those reports as labeled examples, Facebook trained an algorithm to recognize similar posts, which it’s now testing in the US. Once the algorithm flags a post, Facebook makes the option to report the post for “suicide or self injury” more prominent on the display. In a personal post, Mark Zuckerberg described how the company is integrating the pilot with other suicide prevention measures, like the ability to reach out to someone during a live video stream.
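The training loop described above — use past user reports as labels, then score new posts by similarity — can be sketched with a miniature naive-Bayes text classifier. Facebook’s actual model and training data are not public; the example posts and the word-frequency approach below are invented stand-ins, just to show how labeled reports become an automatic flagger.

```python
import math
from collections import Counter

# Synthetic training data standing in for user-reported vs. ordinary posts.
reported = ["i cant go on anymore", "nobody would miss me", "i want to end it all"]
ordinary = ["great game last night", "look at my new puppy", "lunch with friends today"]

def word_counts(posts):
    c = Counter()
    for p in posts:
        c.update(p.split())
    return c

pos, neg = word_counts(reported), word_counts(ordinary)
vocab = set(pos) | set(neg)

def log_prob(post, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in post.split())

def flag_for_review(post):
    """True if the post scores closer to past-reported posts than ordinary ones."""
    return log_prob(post, pos) > log_prob(post, neg)

print(flag_for_review("i want it to end"))   # resembles reported posts
print(flag_for_review("puppy at the game"))  # resembles ordinary posts
```

Note that, as in Facebook’s pilot, a flag here wouldn’t trigger an automatic intervention — it would only surface the post for human review, which is the sensible design when false positives are cheap and false negatives are costly.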
The next step would be to use AI to analyze video, audio, and text comments simultaneously. But that’s a much trickier engineering feat. Researchers do have a good handle on the kinds of words people use when they’re talking about their own pain and emotional states.