Opinion | Robot Therapists? Not So Fast, Says Talkspace C.E.O.
[MUSIC PLAYING] (SINGING) When you walk in the room, do you have sway?
I’m Kara Swisher, and you’re listening to “Sway.” My guest today is Oren Frank. He’s the CEO of Talkspace, a therapy app that he co-founded with his wife, Roni, and that just went public last month via SPAC. Talkspace has a simple premise — swapping the Freudian couch for your phone. Instead of going into an office once a week, therapy sessions are all done remotely by text, phone call, and/or video. The company is part of a growing trend toward telemedicine, serving a booming mental-health business. But remote health raises all kinds of questions about quality control, accountability, and of course, privacy. All of that seems particularly important when it comes to letting a stranger inside of your head. Hi, Oren. Thanks for being here.
Thanks for inviting me.
So I have a lot to ask you. I haven’t really covered therapy apps that much, although this pandemic has really shown how telemedicine has really come to the fore. First, explain to people how it works. You sign up.
So you go to the website or you download the app and respond to a short questionnaire so we can verify that you are safe and have an indication as to why you’re coming over here. And then you will either talk to a matching agent, who is a licensed therapist himself or herself, or you’ll go through an automated process that will find a match for you. Typically, you will be matched within a few hours, with a max of 24 hours. And then you will start therapy. You can choose a plan that is based on messaging only, on a mix of messaging and live sessions, or mostly on live. So typically, video sessions, with messaging, I would say, augmenting that in between.
What would you compare your therapy apps to? Some people have compared it to Tinder. Other people say it’s just what we’ve just done over the past year during Covid. How do you look at it?
I actually think it’s way more traditional and boring than what people typically tend to think about it. We consider ourselves a health-care company, and we conduct ourselves as a health-care company. And we only use licensed clinicians, whether psychotherapists or psychiatrists. And therefore, it’s actually way more similar to what is fondly called traditional therapy or psychiatry, or the one that is being done face-to-face in an office setting. It’s the same people. It’s the same discipline. It’s the same, I would say, education and training. The delivery mechanism, if you will, is very different. I’m not fond of being called an app. We’re not. We’re a company. We’re a health-care company. We have hundreds of employees. Most of them are not involved in the front end of the messaging app. That’s really the tip of the iceberg. We’re a health-care company that communicates with some of its members or patients through an app.
But that’s what you’re known for, correct? The idea that you can be — that people can access therapists — not 24/7, but on a regular basis —
— using app therapy, essentially. Or, you don’t think that?
So if it’s OK with you, I’ll go back and do a very basic and reduced analysis of what’s keeping behavior health from being accessible and with good outcomes for everyone. If we were to reduce it really dramatically, it’s about access and quality. If we focus on the access part, messaging — and by the way, it’s not just that. It’s audio and video. And it’s extremely helpful in removing the access barriers and allowing many more people in need to actually connect with a therapist and create a very strong and deep engagement with them very quickly, because it’s so intense and frequent. So it is a very important part of our base product and our legacy. And of course, we offer live audio and live video. And so we have the entire gamut. But many people are so stigmatized or so hesitant to ever go and seek help that messaging seems to be particularly a good fit for them. It allows for the removal of some of the stigma. I can tell you, the first time I went to see a therapist face-to-face, it was extremely awkward. It feels judgmental. It feels strange to be talking to a stranger about something that’s very painful or important for you.
OK. So right now, you have 60,000 active users. Is that correct?
OK. But around 55 million people theoretically have access to Talkspace. You’ve been very active in working with schools, health-insurance plans, companies, et cetera. That’s a big difference. Why is it only 60,000 users?
Yeah, I think that’s a great point. And that’s exactly why we went public and why we want to scale, essentially — scale our vision, which — therapy for all is still essentially what we believe we should do. I think the number of 55 million covered lives is very misleading because those are covered lives mostly within the portfolio of large insurance plans that have added us as an in-network —
Yeah. Just so you know, that’s from your PR materials. But go ahead. So you wrote it, not me.
I know. Yeah. And the other thing to know is that this was added mostly in the last 18 months, as we started going into enterprise markets just about two or three years ago. And the beginning stages with B2B organizations are very slow. I’m sure you’re familiar with this.
It takes time. And Talkspace — our strategy is about ubiquity. We want to be available for everyone and anyone, according to their choice. We want to be available as part of your EAP offering, and we want to be available as a medical benefit from your college or perhaps your city or directly from your employer that understands —
So when you think about the incentives for people doing this, when you’re getting people this therapy-for-all idea, which — you were in subways. I remember it. And you did a lot of marketing. Your background is marketing, for example.
One of the things that one worries about when you think about health care, especially around therapy, is that companies and governments want to check the boxes and want to claim they’re providing therapy for their employees in the easiest way possible. Marketing is one thing, but getting people actual help is another.
I completely agree. And therefore, we’re so focused on utilization and on clinical outcomes because you have to prove that you can walk the talk. But then again, I wouldn’t underestimate the notion of getting into therapy as a way to remove stigma. I think that just speaking about stigma is a little bit of B.S., because it’s very easy to do the right thing and say, we shouldn’t have stigma. But the most efficient way, in our minds, to remove stigma is to actually go and speak with a professional, because we do believe that the mere fact of trying it for real is going to yield really good results for people, for organizations, and for all of us as a community and as a culture.
OK. So one of the things that’s interesting is that even though people have not done the uptake, there’s been more and more demand for telehealth. And there’s been more people who are depressed during the pandemic. Talkspace has around 3,000 active therapists and prescribers and about 60,000 active users. Could you serve all the potential clientele you’re generating through these contracts and this marketing? And more broadly speaking, the demand for mental-health services seems to be outpacing the supply of therapists.
Yeah, I think it’s a great point. I think the demand for mental health, or the need for mental health, was there all the time. The demand has been legitimized or made a little more kosher through Covid because face-to-face was really not available and because, unfortunately, Covid drove a lot of depression, anxiety, and other conditions. But the gap between what the market was providing and what the real need was and is existed way before Covid and was neglected for years. Only around 40% of people who have crossed the clinical threshold receive any kind of access. And more worryingly, of the people who do have access, only about 40% show clinical remission rates. So it is a very broken set of professions, in the way it’s being delivered in the United States, way, way before Covid. Now, contrary to popular belief, we don’t think there is a significant shortage of psychotherapists in the United States. The shortage really appears when you try to match them within a reasonable driving distance of where you reside.
Right now, it’s a state-by-state situation, correct?
Yeah, you have to match people with therapists who are licensed in their state of residence. The therapists themselves don’t have to reside there. Yeah.
So one of the things that’s been interesting here — you’ve described Talkspace in, like, six different ways. Do you think of it as a platform or a medical clinic? You described yourself as a health-care company, a software company. You’re obviously a little bit of a marketing company. What do you think it is?
So I think it’s a technology health-care company, which is a health-care company that uses technology or the writing of code in order to deliver clinical outcomes and improve clinical outcomes and access, as opposed to, I would say, the older generation of health-care companies, which are essentially a services company with a lot of people and an IT department. So I don’t want to pick any specific description. But I think the platform notion is correct because a platform allows you to learn from your own practices and improve care for the next cohorts of members and patients and do better over time. And for that, you have to be a technology and data company with a platform.
But you have a chief medical officer. Your wife is the head of clinical services. You offer psychiatric services. So it’s not just a benign platform or a matcher of people.
No, no, no. What you described as a platform, I would call a marketplace. We’re definitely not a marketplace, because our chief goal is the clinical outcome.
But what is your responsibility and liability? Let’s talk about user experience, because a lot of platforms say, we don’t have responsibility. But do you think —
Yeah, but —
— that your Talkspace platform has responsibility to its customers?
By being a health-care company and by offering medical services such as psychiatry and psychotherapy, we’ll always have a responsibility, even in the way this is structured. If, God forbid, something happens to a patient on our platform, we will get sued. You cannot sue Facebook, because of Section 230, as you know extremely well.
And maybe that will change in the next six months. Maybe not. But it is not even in the cards for you, because they have defined themselves as something that is nothing.
OK. So some people feel that telemedicine is necessarily reductive. It’s fine for the flu, but what about the patient with cancer? How do you think about acute psychological conditions or users with more severe symptoms?
How much responsibility should you have if something goes wrong?
Yeah. So first of all, I think that virtual care is a particularly good fit for behavioral health, far better than other, I would say, medical verticals, because there are no blood tests or X-rays or physical touch needed to come up with a diagnosis, prognosis, et cetera, or a treatment course. So it lends itself far better to a conversation like the one that we’re having now. And it’s a really good fit. Now, that does not mean two things. First of all, it is not here to replace face-to-face therapy. I actually love this. I think that the people who can afford the time and the money and perhaps have the cultural background to do that will keep on doing that. I’ve been doing it for many years, and we do not compete with that. We are here to open up an option for people who cannot do it or will not do it, which is unfortunately the vast majority of the population. Regarding conditions and acuity —
Like suicidal ideation.
Most therapists are always on guard for suicide, and even if it’s someone who’s just there for anxiety. That’s something they should be —
— paying attention to all the time.
So I believe that around 80% of the conditions and the acuities can be treated virtually within psychotherapy. Within psychiatry, I would say the number is probably lower. And people who come in with certain conditions, such as personality disorders, schizophrenia, et cetera — they’re definitely not a good fit for virtual care. We will refer them out. We will never treat someone who’s not a good fit for the platform. So we always try to do the right thing for the patient, considering the clinical condition and acuity.
So that’s where you’re aiming for. That’s what you’re aiming for, is the ones—
—that are better-suited. But how much responsibility should you have if something goes wrong?
Responsibility is divided across every stakeholder. The therapists and the psychiatrists have their own responsibility, which is built into the profession — so the duty to warn and to report. And we help them do that. And we, as a company, have a duty to measure the outcomes and make sure that we only work with credentialed people and not with people who are claiming to be who they’re not. So there’s a whole line of responsibilities that aggregate there, which is pretty similar to — I don’t know if you’re covered by Optum or by Aetna or by Cigna. So in that sense, it’s out there. We are a health-care company. I can tell you that — and I’m going to [KNOCKS ON HARD SURFACE] touch wood very strongly over here — we have never been sued. We have close to 2 million people who went through our platform. Never been sued. We’re an extremely responsible and well-managed organization that aims to provide a good enough service not to get to that point. So it’s not been tested.
How do users submit complaints?
Right. So when users have a clinical complaint — so a complaint against the level of service or the therapist or the psychiatrist, et cetera, et cetera, they will approach our customer service, and they will submit this complaint. We actually, as you said, have a chief medical officer and a quality and a complaint-management committee that meets regularly. This is how you do this in health care. It’s regulated. It’s prescribed. And they will review the complaint. They will either approach the therapist or psychiatrist to discuss with them, or they will look at the clinical records and ascertain whether the complaint was justified or not. It’s part of our quality-management policies, as you have to have when you’re HIPAA compliant. And I can tell you that there is a small subset of providers that are being essentially escorted off the Talkspace network because they either do not provide good enough outcomes —
So you’ve kicked therapists off the platform. That’s essentially —
Escorted off. That’s a very nice way of putting it.
For what reason?
For a few reasons — the rarer ones have to do with clinical quality. So just, I would say, unprofessional behavior. I can tell you that, with thousands of therapists, this is probably one of the best populations that you will ever get to work with. But from time to time, you do find a bad apple. And they will misbehave on the platform, they will treat people wrongly, or they will not show up, or any combination of those issues. Of course, I can tell you that we measure clinical outcomes for each patient so we can associate them with the therapist or psychiatrist who treated them, know how good they are at treating this condition or that condition, and therefore rank them. That allows us to manage the quality of the network. And the bottom 5% are sometimes retrained.
But you don’t think being virtual makes it more likely for bad apples like this to exist.
No, I actually think the other way around. And I’ll tell you why. Our onboarding and training process is extremely structured and long. And we will only allow a provider to start with a very small number of patients to begin with, let them run for a few weeks or months, see that everything is OK, have a clinical supervisor or the chief medical officer speak with them, see what they do, how the patients react, and only then open up the capacity. I’m not saying that we are better. I can tell you that what we are doing, I know what the outcomes are. [MUSIC PLAYING]
We’ll be back in a minute. If you like this interview and want to hear others, follow us on your favorite podcast app. You’ll be able to catch up on Sway episodes you may have missed, like my conversation with Esther Perel. And you’ll get new ones delivered directly to you.
More with Oren Frank after the break.
So let’s talk about data privacy. You wrote a piece arguing that data should be anonymized, aggregated, and shared for medical research. You wrote, quote, “the more anonymous data we collect, demographic and medical, the better we can identify causes, diagnose early, and develop better treatments.” A part of me agrees with you. At the same time, it sent a chill down my back reading that you might want my data, especially psychiatric data. Explain yourself here for people who might find the idea of sharing so much personal data to be chilling.
When I wrote that, I also felt a part of that chill that you mentioned, because I understand the implications. But I think we live in a world where the amount of data that we generate across all of our lifestyles and verticals, including health care, is just enormous. Now, that data is not going to go away. It’s not going to be reduced, despite the best of efforts from governments and from regulators. It’s not going to stop being produced in the future, because we will keep using technology in oh so many ways. And I wanted to ask the question, what is the upside of this data? And if you look at traditional health care, you will see that the data is priceless, because if you and I, God forbid, needed to go and see a heart surgeon, we would definitely want to know about the procedure. How many times was it done? What are the outcomes? What’s the danger of death? What are the side effects? And so on and so forth. And I personally would have also asked for the data of the particular surgeon who will do something for me, because I want to know if he did it two times or 2,000 times, and so on and so forth.
So you think, more data, the better, as long as it’s anonymized?
Yes. I think the key here is two things — and again, I reduce it dramatically. First is fool-proof anonymization, and second is massive enforcement against people who do wrong things with medical data. But large-scale regressions, also known as machine learning, are absolutely crucial for the improvement of outcomes, both in behavioral health and, to my very limited knowledge, in other areas of health care.
So particularly, what do you do now? Some former employees and therapists said Talkspace reviews and mines anonymized transcripts of conversations with users. So what type of conversation does your company collect?
So “reviews and mines” is wrong, despite what they say. And we do not use it that way. The only way we use conversations is actually for pattern recognition, or machine learning, and for clinical research. Just to be very clear, we never sold, and we don’t sell, any of our data. We never share it with anyone unless it is a very reduced set for clinical research. So I’ll give you an example. You talked about risk before. So we developed a machine-learning algorithm that predicts risk. And the way we did this is, we sent this model into anonymized rooms, only where therapists have launched a risk protocol. Let’s say that you’re my therapist, and you treat me, and you think that I am at risk because I spoke about wanting to harm myself, wanting to harm others, or other things. You have to initiate a protocol that’s called the risk protocol. Now, that model looked at all the rooms where therapists had flagged the patient as at risk, then looked at the anonymized language of those patients 60 to 90 days before the therapist announced a risk, and looked for combinations of words, sentences, et cetera, that were common across those people, and, out of that, created a predictive model. That predictive model goes into the anonymized rooms every 30 minutes. This is a working tool that the therapists are using today. And where the tool thinks there is a risk, it will send a message to the therapist and will tell them, room number 24, 45, et cetera, may have a risk. Please have a look. So this is how we, if you will, mine those data. Everything that we do is used for only one thing, which is to provide better clinical decision-support tools for the therapists.
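As a rough illustration of the pattern Frank describes — learn which language preceded therapist-initiated risk protocols, then periodically rescan anonymized rooms and alert the therapist — here is a minimal sketch in Python. Talkspace has not published its implementation; the naive-Bayes-style scoring, the example phrases, and every name below are hypothetical.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def train(labeled_rooms):
    """labeled_rooms: (transcript, risk_protocol_launched) pairs.
    Learns per-word log-odds with Laplace smoothing, standing in for
    the 60-to-90-day language window Frank describes."""
    risk, safe = Counter(), Counter()
    for text, flagged in labeled_rooms:
        (risk if flagged else safe).update(tokenize(text))
    vocab = set(risk) | set(safe)
    n_risk = sum(risk.values()) + len(vocab)
    n_safe = sum(safe.values()) + len(vocab)
    return {w: math.log((risk[w] + 1) / n_risk)
             - math.log((safe[w] + 1) / n_safe)
            for w in vocab}

def score(log_odds, text):
    """Sum of learned log-odds over a transcript's words; higher means
    the language looks more like rooms that preceded a risk protocol."""
    return sum(log_odds.get(w, 0.0) for w in tokenize(text))

def scan_rooms(log_odds, rooms, threshold=0.0):
    """The periodic sweep: return ids of rooms whose recent language
    scores above threshold, so a therapist can be asked to look."""
    return [room_id for room_id, text in rooms
            if score(log_odds, text) > threshold]

# Toy training data: rooms where a therapist launched a risk protocol
# (True) versus rooms where none was launched (False).
model = train([
    ("I want to hurt myself and I see no way out", True),
    ("Everything feels hopeless lately", True),
    ("Work was stressful but the weekend helped", False),
    ("We argued but we are talking it through", False),
])

flagged = scan_rooms(model, [
    ("room-24", "I keep thinking I want to hurt myself"),
    ("room-45", "the weekend trip helped and work is fine"),
])
```

With real data the model would be far richer than word log-odds, but the shape — label rooms by whether a protocol was later launched, learn from the preceding language, sweep on a schedule, ping the therapist — matches the workflow he outlines.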
OK. What about marketing campaigns?
Some former employees have alleged that.
None whatsoever. Never.
Not used to find common phrases to try to better —
— target people.
Now, you know some former employees have alleged that you do. Do you have any response to that?
Yes. Unfortunately, this is simply not true. We have never used this for marketing. Most of our marketing is very, very simple. It is about the availability of someone who can help you deal with your issues. So no, the answer is no. And we do not plan to use clinical — as long as I am in charge, we’re not going to use clinical information for anything other than clinical purposes.
So only to do risk profiles and what else?
And there are several other models that supply similar insights to the therapists themselves. So there’s one that will predict the diagnosis, right? It will look at the language and say, our analysis thinks that this person may have a primary diagnosis of depression and a secondary diagnosis of anxiety. It is up to the therapist to decide whether this is correct or not. And this is how the therapists actually train this model. If they accept the suggestion, they will have sent a positive signal to that model. And if they reject it and put something else, they will have trained it a little more. There are a couple more models around clinical approaches and interventions, which are purely clinical. None of them have anything to do —
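The accept-or-reject loop Frank describes is a standard human-in-the-loop pattern: every therapist decision becomes a new labeled example for a later retraining run. A toy sketch, with a crude keyword heuristic standing in for the real language model and all names invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class DiagnosisSuggester:
    """Suggests a primary diagnosis and records each therapist
    decision as a labeled example for later retraining."""
    training_examples: list = field(default_factory=list)

    def suggest(self, transcript):
        # Stand-in for the real model: a crude keyword heuristic.
        text = transcript.lower()
        if "hopeless" in text or "empty" in text:
            return "depression"
        if "panic" in text or "worry" in text:
            return "anxiety"
        return "unspecified"

    def record_decision(self, transcript, suggested, therapist_label):
        # Accepting sends a positive signal; rejecting supplies the
        # corrected label instead -- either way the model gains a
        # labeled example for the next training run.
        self.training_examples.append({
            "transcript": transcript,
            "suggested": suggested,
            "label": therapist_label,
            "accepted": suggested == therapist_label,
        })

suggester = DiagnosisSuggester()
note = "I feel hopeless and empty most days"
guess = suggester.suggest(note)
suggester.record_decision(note, guess, "depression")  # therapist accepts
```

The key design choice is that the suggestion never becomes the diagnosis on its own; the therapist's confirmation or correction is both the clinical decision and the training signal.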
So it’s only for clinical.
Nothing to marketing.
Nothing to build a chatbot. But are you building a chatbot?
You are not.
We have toyed with chatbots in the past. So personally, 40 years ago, I played with ELIZA. I’m pretty familiar with that area.
There’s one called Woebot, I think, an AI-powered service.
Yes. I know Woebot pretty well. My two cents — and that’s exactly what it’s worth — is that I think that it is not in my foreseeable future, the way I see technology in the future, that this will, in any way, shape, or form, replace a human. Everything that we build in terms of technology is aimed at assisting a human to make better decisions, not replacing them. I don’t like that at all, and I don’t think it’s doable. I think the self-service solutions, including robots and chatbots, are probably good for the subclinical cohort who we call the worried well. And I think they can be helpful there.
So an older person — are you lonely, would you like to say?
Or, I just had an argument with my boyfriend, girlfriend. I’m not depressed. I didn’t cross the clinical threshold, but I do want to interact with someone. And I do need to learn something and, perhaps, generate the time to think about what I’m going through. And I think those automated tools and the software tools and the content tools are a good fit for that.
So would you develop that? A chatbot for the worried well?
Not necessarily a chatbot, but I think the worried well need help as well. It is not as important or as urgent as helping people who are clinically ill, and therefore it’s a secondary priority for us. But I can tell you that we bought a company — a very, very nice platform called Lasting — in October or November last year, which is self-service, not a chatbot. It’s built on clinical practices of couples counseling, but it does not involve a human. And at the same time, we do offer couples counseling with a licensed, trained therapist. So I think that’s a good example of how to build the hierarchy of services according to the acuity of the patients.
Finishing up the idea of privacy, text therapy takes confidential information outside of the therapist’s notebook and leaves a digital footprint, no matter how you slice it —
— here. And obviously, we’ve been hit by ransomware issues recently, all kinds of data breaches. Are you concerned about Talkspace’s data being hacked?
Yeah, of course. This is priority number one for me. And number two and number three, the safety and the privacy of our information. I think being a technology company and being a virtual player actually helps in this. And I think our level of security is banking-grade. So your information is as safe as your money is. But it’s still something that is — as we go back to our discussion before, that data is being generated. And I think you have to bear in mind another thing, which is — I’m going back to our old favorite topic. I think Facebook knows far more about my mental condition than anyone else in my health-care environment, because they’re not bound by any of the regulations that we actually apply to ourselves. So that is, for me, much more worrying.
I would say Amazon does, actually, from your purchases.
That’s a good question, you know.
I think Amazon does. I bet they know a lot more. Or, they could glean things.
So you went public via SPAC last month — a special purpose acquisition company — by merging with Hudson Executive Investment Corporation. The deal valued your company at $1.4 billion and gave you around $250 million in growth capital. Why did you do a SPAC, and why now?
I think the — let’s put it like that. Through Covid, we have had multiple approaches from strategic potential buyers, from private equity —
Like Aetna? Like those kind of people?
I wouldn’t name names. But we had multiple approaches about what is the future of Talkspace, because unfortunately, Covid brought a lot of awareness to behavioral health. And the gap was dramatically increased and illuminated. Let’s put it like that. And the reason we chose a SPAC — and this particular SPAC — was twofold. First of all, the SPAC decision is about time to market. It reduces the workload and the complexity of going public, to a certain degree. But the real reason was the identity of the people behind this particular SPAC. It just felt very good in terms of the people we would be working with and are working with now — their experience, their deep know-how around health care and how to build health-care companies.
And what are you going to use the $250 million for? Are you looking to expand internationally, for example?
So we want to use that to do a few things. We want to expand our, I would say, efforts in getting more corporate or enterprise clients — and also, to be honest, there is a generational shift. I think the current workforce is made up of millennials and Gen Z’s. They are way smarter. They are far less stigmatized. And they are way more demanding in terms of access to behavioral health care and to wellness. And that, in turn, affects the major employers, and they look for better solutions. And therefore, this is a really strong opportunity for us to scale dramatically in commercial markets. So that is one priority. The second one, which connects to some of the questions you asked me before, is we want to work on our scope. We would love to add substance-use solutions, which address a huge and evolving problem in the United States, especially after Covid. And the cost of the alternative course of treatment for people with alcohol-use disorders is just unbearably high. Perhaps we can do something different there which is both better and cheaper, as I mentioned before. And perhaps add more of those services like Lasting for the worried well, which will encompass a larger population.
So another acquisition. That’s essentially what you’re saying.
Perhaps. We have not made the decision whether this is going to be an acquisition or partnership or building internally, which I personally usually favor. But all three are on the table.
What about an analog? Buying an analog clinic? Do you see that happening?
I can see that happening, not immediately. Again, as I mentioned before, we have nothing against the brick-and-mortar model. And personally, I have a lot for it. And perhaps one future will be a mix of those with the right people receiving the best care for them, which is always the priority. It is not in the plan for the next one to two years.
OK. So one of the things that’s been really interesting is, people are wary of Talkspace — not just Talkspace, but the whole space — in terms of privacy theft, mining information, for example. You’ve been subject to a lot of criticism. And you have a reputation for being, if not litigious, then at least combative — you sent a threatening email to The Verge a couple of years ago. You got in a fight with a guy named Ross on Twitter. You and I have had a back and forth on Twitter. That was about something else. But when you think about getting people to think of these areas as legitimate, one of the things you’ve talked about a lot is that, well, regular therapy isn’t very much scrutinized. Do you feel that it’s been right that you’ve been so pugnacious about it, or do you think that people need to say, wait a second. Let’s consider this a solution that’s important because it helps people in pain?
I would divide two things. One is my particular personality and my issues. And I can lose it from time to time. But that’s me. I’ve learned to accept myself after many years of therapy. [CHUCKLING] And it’s not necessarily always the right thing to do —
— of course. I acknowledge it.
Yeah, you shouldn’t have fought with Ross. That Ross fight was a mistake.
Exactly. And you and I — I think we can agree on something else, which is, the purpose of Twitter and Facebook and all those platforms are addiction. And they’re very good at that.
I actually think that the level of resistance Talkspace has faced is less than I would have anticipated. This is a very traditional set of professions that, in many ways, has not changed for close to 100 years. And I understand that what we do threatens people, mostly on the clinical side of the traditional industry, in a way that is almost existential. And you can see a really strong polarization — from people like Irv Yalom, who wrote an entire chapter about us in a book because his point was, it helps, therefore it’s great, to some others who have been very, very aggressive in protecting their turf. I can understand it. I can actually empathize and identify with that. And I think it’s part of the price that you have to pay. So I accept it with love.
Oren, thank you so much. I really appreciate it.
Thank you. I appreciate it. Bye-bye. [MUSIC PLAYING]
“Sway” is a production of New York Times Opinion. It’s produced by Nayeema Raza, Blakeney Schick, Matt Kwong, Daphne Chen, and Caitlin O’Keefe; edited by Nayeema Raza and Paula Szuchman with original music by Isaac Jones, mixing by Erick Gomez and Carole Sabouraud, and fact-checking by Kate Sinclair. Special thanks to Shannon Busta, Kristin Lin, and Liriel Higa.
If you’re in a podcast app already, you know how to get your podcasts, so follow this one. If you’re listening on The Times website and want to get each new episode of “Sway” delivered to you, along with couples counseling led by me, which means you are sure to get divorced, download any podcast app. Then search for “Sway” and follow the show. We release every Monday and Thursday. Thanks for listening.