Behavioral health company Talkspace uses artificial intelligence to help therapists identify patients at risk of suicide. It says its approach is roughly 83 percent accurate. That's significant, given a rise in suicides that has driven rates to modern highs. CEO Jon Cohen, who is also a vascular surgeon, talked with Daniel about the new technology's possibilities and limits. This interview was edited for length and clarity.

How does your method work?

We have the ability through large language models — AI — to actually monitor, essentially in real time, the conversation that is going on privately between you and your therapist. We run that through the engine, and then our proprietary algorithm will actually tell the therapist, based on the model we built, that the person is at risk for either suicide or self-harm.

What does the therapist do then?

It's an alerting mechanism. We don't tell the therapist what to do. We don't tell them they have to use the information. We don't tell them how to treat. But we raise a red flag and say, "Listen, you need to know that, based on what's going on here, this patient is at risk."

Could this go beyond suicide prevention to help flag or diagnose other issues?

There are a bunch of other diagnoses that we believe, at some point, we will be able to help the therapist with as they move through the conversations. I'm calling it therapy-assist — so the answer is yes. There's no timeline, but we're leaning in on it.