Facebook spots suicidal users with artificial intelligence

3 March 2017
The company has developed algorithms to recognise worrisome posts and comments. When a human review team decides that a member is truly at risk, Facebook will contact them and suggest ways to seek help. To that end, Facebook has partnered with several U.S. mental health organisations, which members can contact through the Messenger tool.

Algorithm recognises patterns

Facebook already supports at-risk users by letting friends press a report button when they are concerned. The new algorithm should speed up this process and ensure that receiving help doesn't depend on other members. The algorithms are trained to recognise patterns, such as posts in which a user talks about sadness or pain. Comments like 'Are you okay?' and 'I'm concerned' are also recognised by the algorithm.

A human review team then verifies that the messages are truly alarming.

Contacting family and friends

Eventually, Facebook wants to contact people who could support the at-risk member. This is, however, a privacy-sensitive matter: Facebook cannot truly know the personal dynamics within a friend group. The company is searching for a fitting solution to this problem, since having a support network is often a motivation to seek help.

Advice during a livestream

The tool should enable Facebook to intervene before a user makes a fatal decision. Concerned viewers can now get advice on ways to support the broadcaster, while the broadcaster, for their part, will receive a message from the Facebook team. Facebook will not cut off the livestream, since it is important for family and friends to have the opportunity to express concern and offer help, says Jennifer Guadagno, Facebook's lead researcher on the project.

The function is currently being tested in the U.S., and the new system should eventually be rolled out worldwide. Facebook is searching for suitable partners to make it possible to contact crisis counsellors in other countries.