How can researchers and tech conglomerates fight the growing problem of suicide using AI algorithms? It begins with mining social media for warning signs in people's online language patterns. Researchers believe these language patterns can clearly indicate when someone is at risk of suicide.
One company working on this is Mindstrong, which has already begun experimenting with machine learning algorithms to decode the language people use and how it relates to their behavior, down to details as subtle as how fast they scroll on their smartphones. Can a person's scrolling speed really indicate depression or other mental health issues?
Mindstrong will expand its research this year to focus on behavior linked to suicidal thoughts. The hope is that this technology will give healthcare providers a tool to detect and treat patients before they decide to take their own lives.
Tech companies are taking notice and want to be involved. Facebook announced that it will start rolling out its own automated suicide prevention tools around the world, and Apple and Google have joined the effort. You can tell a lot about a person from their social media language, so this venture is long overdue. If technology can help save a life, it is money well spent in my opinion.
Attempted suicide is on the rise, and healthcare workers are hopeful that these AI tools will help reduce suicide attempts. Suicide is the second leading cause of death in the United States for people ages 15 to 34.
The question we must ask is: do digital interventions even work? Is there enough evidence to risk exposing our privacy to the biggest tech companies in the world? Some are skeptical. Will this project cause more harm than good? People deserve to know how their private information is being used and to what extent.
Intervention Using Machines
Determining someone's risk of suicide is extremely difficult. Matthew Nock, a psychologist at Harvard University in Cambridge, Massachusetts, says that because suicide risk is so hard to identify, preventing suicide will prove difficult. The emotional mind is a dark and mysterious place; who can really know it?
Professor Nock explains that most suicidal people lie about their suicide attempts when talking to mental health professionals. Who can blame them? No one likes being labeled unstable and stripped of their rights. What I gather is that using social media as a science lab, so to speak, to enter a person's mind without them even realizing it can prove useful. According to Professor Nock, social media is like a window into a person's emotions, but in real time. If you're anything like me, posting overly personal information on social media is a no-no, but many people do it, and even more take it to extremes. Do you enjoy putting all your emotional drama on the front page of social media? If so, you may become a guinea pig for AI machine learning algorithms in suicide prevention. Just saying!
All jokes aside, machine learning algorithms can help scientists and healthcare administrators distinguish whether a social media post is a joke, normal angst, or a real suicide emergency. Machine learning can pick up patterns that a human might miss.
Bob Filbin, chief data scientist at the Crisis Text Line in New York City, explains that people who are seriously contemplating suicide rarely use the word "suicide" in their messages to the crisis line. Words like "ibuprofen" or "bridge" are far more common, and they are better indicators of a suicidal person.
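To make the idea concrete, here is a toy sketch of keyword-based risk scoring. Everything in it is illustrative: the term list, the weights, and the function name are my own invention, not the Crisis Text Line's actual model, which would be far more sophisticated than simple word matching.

```python
# Hypothetical indicator terms and weights -- illustrative only,
# not any real crisis line's vocabulary or scoring.
HIGH_RISK_TERMS = {"ibuprofen": 3, "bridge": 3, "pills": 2, "goodbye": 2}

def risk_score(message: str) -> int:
    """Sum the weights of any indicator terms found in the message."""
    words = message.lower().split()
    return sum(HIGH_RISK_TERMS.get(word, 0) for word in words)

print(risk_score("i took all the ibuprofen in the house"))  # prints 3
print(risk_score("nice weather today"))                      # prints 0
```

A real system would learn these indicators from labeled conversations rather than hand-coding them, which is exactly the pattern-finding advantage machine learning has over a human-maintained word list.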
Mindstrong's president, Thomas Insel, plans to collect passive data from a person's phone using an app, which he says will prove more effective than a questionnaire. A healthcare provider will install the app on a patient's phone, where it will run in the background and build a digital profile. Once the profile is developed, the app will be able to pick up subtle, worrisome changes in a person's behavior and notify their healthcare team of potential problems. Insel admits that this is more of a long-term approach to preventing suicide; a short-term crisis will be more difficult to remedy.
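The core mechanic Insel describes, a baseline profile plus alerts on deviation, can be sketched as simple anomaly detection. Assume the app logs one behavioral number per day (say, average scrolling speed); this is my simplification, not Mindstrong's actual method, which presumably draws on many richer signals.

```python
from statistics import mean, stdev

def flag_changes(daily_values, window=7, threshold=2.0):
    """Flag days whose value deviates sharply from the trailing baseline.

    daily_values: one behavioral measurement per day (e.g. scroll speed).
    window: how many prior days form the personal baseline.
    threshold: how many standard deviations count as 'worrisome'.
    """
    flagged = []
    for i in range(window, len(daily_values)):
        baseline = daily_values[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(daily_values[i] - mu) / sigma > threshold:
            flagged.append(i)  # day i departs from this person's own norm
    return flagged

# A stable week followed by a sudden spike gets flagged:
print(flag_changes([10, 11, 10, 11, 10, 11, 10, 20]))  # prints [7]
```

The point of the per-person baseline is that "worrisome" is relative: a scrolling speed that is normal for one patient could be a sharp change for another, which is why passive longitudinal data could beat a one-time questionnaire.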
When and Where?
False positives will be a real problem. What is the proper signal for intervention?
Nock explains that physicians or companies using machine learning to detect suicide risk will have to decide how confident the system must be before sending help.
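That decision is a classic threshold tradeoff: lower the bar and you catch more genuine crises but flood responders with false alarms; raise it and alerts become reliable but real cases slip through. A minimal sketch, using made-up scores and labels purely for illustration:

```python
def precision_recall(scored_cases, threshold):
    """Compute precision and recall for alerts fired at a given threshold.

    scored_cases: list of (risk_score, truly_at_risk) pairs -- made-up
    data here; a real system would need clinically validated labels.
    """
    tp = sum(1 for s, at_risk in scored_cases if s >= threshold and at_risk)
    fp = sum(1 for s, at_risk in scored_cases if s >= threshold and not at_risk)
    fn = sum(1 for s, at_risk in scored_cases if s < threshold and at_risk)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

cases = [(0.9, True), (0.8, False), (0.7, True), (0.4, False), (0.2, False)]
print(precision_recall(cases, 0.5))   # low bar: every true case caught, plus a false alarm
print(precision_recall(cases, 0.85))  # high bar: no false alarms, but one true case missed
```

Where to sit on that curve is not a purely technical call; sending emergency responders on a false alarm has costs, but so does missing someone in crisis.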
Another point worth considering: just as with the new AI interventions, there is little evidence that the already established suicide prevention hotlines are saving lives. One problem is that when someone reports another person's suicidal content on social media, the reported person tends to block the reporter from their pages. That also makes full disclosure less likely, because the suicidal person feels more vulnerable.
This is noteworthy because Facebook relies heavily on user reports and proprietary algorithms that find red flags in posts, then notify the user or alert a human moderator. The moderator decides whether to send a link to the crisis line or dispatch emergency responders.
Facebook has been very secretive about how its algorithms and moderators will do their work. Will Facebook double-check the algorithms by contacting users to determine the actual outcomes of an intervention? A Facebook representative stated that the tools were created "in collaboration with experts" and that users will not be given the option to opt out of the service. This is alarming, especially since these decisions should be based on solid evidence, and Facebook isn't willing to reveal the information researchers need to judge the program accurately.
I applaud Facebook and the other organizations interested in making the world a better place. This effort may work or it may fail, but doing nothing will surely produce no result at all. AI machine learning is coming onto the scene, and it's here to stay. Stay tuned and watch the AI revolution take on our problems and help make the world a little less horrible.