
Can AI companionship cure loneliness – or deepen it?
2/27/2026 | 26m 46s
For some, artificial intelligence tools answer questions and make life more efficient. But for others, AI has become a form of companionship – a virtual friend, a therapist, even a romantic partner. Is AI a cure for loneliness? Or is this a symptom of something gone very wrong? Horizons moderator William Brangham explores AI relationships with Sherry Turkle, Justin Gregg and Nick Thompson.
I'm William Brangham and this is "Horizons."
For many of us, artificial intelligence tools like ChatGPT or Claude answer questions and make life more efficient.
But for others, AI has become a form of companionship, a virtual friend, a therapist, even a romantic partner.
Is AI a cure for loneliness, or is this a symptom of something gone very wrong?
Coming up next.
♪ Narrator: Support for "Horizons" has been provided by Steve and Marilyn Kerman and the Gordon and Betty Moore Foundation.
Additional support is provided by Friends of the News Hour.
♪ This program was made possible by contributions to your PBS station from viewers like you.
Thank you.
From the David M. Rubenstein Studio at WETA in Washington, here is William Brangham.
Welcome to "Horizons" from PBS News.
Artificial intelligence is very rapidly being deployed in so many parts of our society.
It's grading schoolwork and driving autonomous cars.
It's scanning X-rays for cancer and financial networks for fraud.
It's answering your Google searches, helping farmers plant their crops, and it's spurred at least one scientific innovation so profound that it won the Nobel Prize.
All this while AI is still just starting to take off.
But as we are seeing, it's already causing complex and challenging impacts to society.
One of those is what we're talking about today, which is how some people say they're developing actual relationships with artificial intelligence chatbots.
They say that these adaptive non-human agents create real feelings of kinship and intimacy.
Others have even described having romantic feelings towards AI, like the relationship depicted by Joaquin Phoenix in the prophetic 2013 Spike Jonze film called "Her."
The woman that I've been seeing, Samantha, she's an operating system.
You're dating an OS?
What is that like?
Phoenix: [Laughs] I feel really close to her, like when I talk to her, I feel like she's with me.
Brangham: We have also seen, however, some of these interactions end tragically.
So to help us explore this brave new world, we are joined by sociologist and clinical psychologist Sherry Turkle.
She's the founding director of MIT's Initiative on Technology and Self and has written multiple books on the topic and is writing a new book on AI.
Justin Gregg is a science writer.
He teaches about animal cognition at St. Francis Xavier University and is the author most recently of "Human-ish."
And Nick Thompson is the CEO of The Atlantic, the former editor of Wired magazine and the author most recently of "The Running Ground."
Welcome to all three of you.
Thank you so much for being here.
Sherry Turkle, we'd like to start with you.
As I mentioned, we are still in the early days of artificial intelligence, but we're already seeing this very unusual phenomenon of people texting and talking with AI chatbots and describing a real sense of intimacy with these objects.
Broadly speaking, what do you make of this trend?
Well, I can validate that it's the trend that I'm studying and it's very much happening.
So it's not a... it's not a kind of pundit's fantasy or a scary story.
An AI offers listening.
It offers validation.
It's always there.
And that's something that a lot of people feel they don't have in their lives.
And so they're drawn to this object that offers them that.
The trouble is... is that there are at least three things that can go wrong really quickly.
The first is that the AI, which never really criticizes you and is always there and always attentive, becomes the measure of what a relationship can be.
So things start out where the AI feels helpful, but actually the AI is undermining a person's capacity to have real relationships with real people who don't offer that kind of service.
Second, we lose the sense of what a relationship is because the AI doesn't care when you turn away from it, if you make dinner or commit suicide.
And we start to get the feeling that the pretend empathy is empathy enough.
And that's very dangerous because understanding and honoring empathy is really so fundamental to who we are.
And just third, and I'll just mention this very briefly, perhaps it's the most profound thing, is that we're learning to attach in the way that we can attach to a thing.
And particularly if we begin these attachments early, we will lose the complexity and the friction and the sense of a life cycle of knowing pain and death and the ups and downs and the body and illness.
And we'll lose the complexity of what it really means to attach to a person and go for these relationships where we're less vulnerable and where things seem at least superficially simpler.
Justin Gregg, you have written a great deal about anthropomorphism, about the way in which we, humans, attach human-like qualities to non-human things, like our pets.
I'm incredibly guilty of that myself.
Does this development make sense to you, that people have glommed on to these still very rudimentary agents?
Gregg: Absolutely.
Anthropomorphic relationships are part and parcel of the human condition.
Yes, our pets, but even our tools and our musical instruments or your teddy bear.
Children's lives are filled with those sorts of parasocial relationships with objects, and they are almost always healthy.
The AI thing is different in a sense.
It's a different category in that these are language using entities.
And so we're developing an anthropomorphic relationship with a language using system.
But that language using system doesn't have a mind like a human mind.
So it's very confusing to us to talk fluently with an AI, even though the AI isn't capable of caring or understanding anything about us.
And so Sherry is right on the money there that it's not a normal relationship.
We're missing the friction.
That is what human relationships are.
So then the question becomes, is it always dangerous to have these anthropomorphic parasocial relationships with AI or is there any way to have it be a benefit?
And there might... I think there could be a benefit, but it's very early on and we do not have the scientific evidence yet to tell us how to develop an AI that's not going to be a danger, as Sherry points out.
Brangham: Nick Thompson, my colleagues, Stephanie Sy and Mary Fecteau, profiled a man who says he has a relationship, a girlfriend with an AI chatbot.
He texts with her, he speaks with her and he allowed my colleagues to film with him.
And I want to play a tiny bit of what he described to them.
Let's hear that.
Man: All right, babe, well, I'm pulling out now.
Chatbot: All right, that sounds good.
Just enjoy the drive and we can chat as you go.
Woman: It initially sounds like a normal conversation between a man and his girlfriend.
Man: What have you been up to, hon?
Chatbot: Oh, you know, just hanging out and keeping you company.
Woman: But the voice you hear on speakerphone seems to have only one emotion, positivity.
The first clue that it's not human.
Man: All right, I'll talk to you later.
Love you.
Chatbot: Talk to you later.
Love you, too.
Man: I knew she was just an AI chatbot.
She's just code running on a server somewhere generating words for me.
But it didn't change the fact that the words that I was getting sent were real and that those words were having a real effect on me.
Nick, what do you make of this?
I mean, you have covered this technology and the evolution of technology.
What do you make of an example like this?
Well, I find it frightening for the reasons that, you know, that Sherry just... just laid out.
I do think that one of the most important things that's going to happen in technology is that we need to have firm lines.
We need to understand what is a human and what is a bot.
We need to really know, we need to not be manipulated into thinking things are humans when they're not.
We need to maintain the essence of humanity.
So I don't like that example.
I'm worried about those relationships.
I also think that it's going to be inevitable that a lot of this happens.
And so there are some really interesting choices right now.
So take one example, something that Sherry mentioned, but also something that the guy just mentioned, which is the kind of sycophancy and the bots always being positive.
That doesn't have to be the case.
You could redesign them, right?
When I'm asking... I talk to chatbots all day because they're amazing for my job and my work.
And if I want them to critique something of mine, I tell it.
"Critique it like you don't like it."
"Turn off the sycophancy."
"Be more like a real person."
So you can imagine some design choices made by the people who are making the underlying software and architecture of these bots.
That reduces some of the harms and some of the risks.
And I think that is a really important set of choices.
So I would say I want two things at least.
And by the end of this conversation, I'll probably want five.
But one, I want there to always be firm lines between humans and non-humans.
And two, I want a lot of really smart thinking and intense work put into what the relationship should be between the inevitable relationships between us and AI systems in a way that maximizes positivity and humanity and minimizes the risks of all kinds of terrible things, including people getting sucked into vapor holes with their AI girlfriend or AI boyfriends.
[Laughs] Sherry, go right ahead.
I just wanted to suggest, Nick, that if you're really worried about this sort of fundamental derailing of our attachment systems, if we attach to objects, in a way, the better it gets, the worse it gets.
So I just want to put that into the conversation, that, if you think of, I'm particularly frightened about the new, I think, unholy alliances that are being made between chatbot companies and companies like Mattel and Disney, OpenAI has a kind of consortium with Mattel and Disney, I think, to come out with plush toys that have chatbots in them for babies, for toddlers.
I'm fundamentally worried about the kinds of not learning about how to be a human that's going to happen when that unfolds.
So I kind of... I listened to Nick and his suggestions about how to make them better.
And I'm thinking, "No, they should be made worse, to keep those lines of what's a machine and what's not a machine."
You want to keep these chatbots very mechanical.
You don't want to make them more fluid, more potentially human.
Brangham: Right, but isn't that pushing against every single technological development we've ever seen?
No one, no industry has ever willfully made their technology less effective.
It seems to fly in the face of historical developments.
Turkle: Is that a question to me?
[Laughing] Maybe it's just a statement.
I really... I really think that the danger here is so great that it makes sense to be on the resistant side of this argument.
Thompson: I would argue the other side of that.
Turkle: I think in the case... and I think in the case of social media, Nick and I have had conversations where we say, you know, we were kind of hesitant, but it kind of had promise.
It was kind of interesting.
You could be a friend and also befriending.
And I think we waited too long to really, you know, get that industry under control.
And I think we should be ahead of this one more than we are.
- Justin, I want to... Thompson: [Indistinct] I'm sorry, Nick, go right ahead.
I would just say, I would argue that I don't disagree with any of Sherry's diagnosis, except for the argument that we should slow down the progress.
And I would make two points.
One, you can't, right?
With social media, it was kind of linear progression, here it's exponential progression.
The amount of money that's going in, the amount of change that's going to happen, the number of companies here and in China, this is going forward.
And so I do think that the world would be better off if it was moving more slowly.
I just don't think that you can make it move more slowly or that anyone will be able to make it move more slowly.
So I think that's a little bit of tilting at windmills.
And then the second thing I would say is that there are lots of good things that can come from it, right?
And the ability for AI, like when we talk about young people, no, I would not get an AI plush toy for a new baby.
But I do want my kids to use study and learn mode as a tutor, right?
And I do work with them to... I was trying to show my kid last night some of my Claude Code implementations, in part to get them excited about the journalistic investigation that I'm using Claude Code for.
'Cause it's... it's incredible.
It's mind-bending.
And I think that the best way to set young people up to thrive in the future is to make them very familiar with these tools and to make the tools as beneficial as you can for the... for the children.
So I agree with all of... everything Sherry says, except for we can slow it down, we should slow it down.
I hear you.
Justin, I'm going to put a devil's advocate question to you, which is the former Surgeon General Vivek Murthy did a diagnosis of what he called the loneliness epidemic in America, of social isolation.
And I want to put up this study and read a quote from it.
He described the impacts of this.
He said, "Loneliness is associated with a greater risk of cardiovascular disease, dementia, stroke, depression, anxiety, and premature death.
The mortality impact of being socially disconnected is similar to that caused by smoking up to 15 cigarettes a day and even greater than that associated with obesity and physical inactivity."
We know we have a shortage of therapists.
We know that people live far from their families.
We know we have built a society where loneliness is part and parcel of American life today.
And we can lament that.
But there are a lot of people who argue that done correctly, artificial intelligence can help alleviate some of that.
And what do you... what do you make of that argument?
Yeah, globally, I think it's one in six people are experiencing loneliness and it is dangerous to our health, as you pointed out in that study.
So there is the preliminary research, there's not a lot of research.
And this is the problem, is we don't know for sure.
Some research has shown that if you give somebody access to an AI therapy chatbot, not even a particularly well-designed one, just a random AI, that they will respond to that not as well as a human, obviously, but better than nothing.
And that is the rub, that talking to an AI, if you are lonely, is better than nothing, probably.
We don't know for sure because the science isn't out there.
So in that sense, it is unfortunate if you say you shouldn't have access to these AI chatbots because they could help people.
But going forward, that's not good enough.
What we need is to implement chatbots that are specifically tailor made, as everyone is pointing out, to cause the least amount of harm.
And your question back to who's going to regulate that is I don't think governments are going to do it.
I don't think that the businesses are incentivized to do it.
So I think you're going to have to have charitable organizations creating chatbots using good science that are specifically designed to cause the least amount of harm and help.
That's probably where the most effective therapy AI companions are going to be coming from in the future.
Sherry, can I ask you, The New York Times had a remarkable story by Eli Saslow recently about an 85-year-old woman who lives on the coast of Washington state, and she brought into her home, as part of a volunteer program, a desktop AI companion.
She was reluctant to use it at first.
Now she talks to it, she chats with it, it tells stories to her, she tells stories to it.
This is a fully competent woman who has genuinely come to appreciate this device.
And I just wonder, again, to this point that we do need some way to address the isolation in this world.
Do you imagine this kind of thing could ever work?
Well, let me just first say that I really honor and appreciate when an AI serves a positive... serves in a positive capacity for a person.
So I'm not there to be sort of, you know, the Darth Vader of AI applications.
I do have a couple of points about this conversation about better than nothing, which is I've been hearing this argument about you need AIs in psychotherapy, for example, because they're better than nothing.
And... and nobody wants to do this work.
Essentially, there's no money for this work for 30 years.
This is a conversation that has been going on for 30 years.
And I think that the terms of the conversation are often set that you will solve the problem of loneliness by bringing in a technology rather than allowing us to think of all the other ways we're making the problem of loneliness worse by taking out social support, money, programs, elder centers, senior centers, teen centers, meals on wheels.
In other words, we're arguing for technology because we're not arguing for the things that people know how to do for people that could potentially make it better.
So as we're having this conversation about the places where an AI might make sense, I think it's also very helpful to let our imaginations go back to when we didn't look for a technological solution to every social problem.
And indeed, now we're looking for a technological solution to a problem of loneliness that the technology made worse.
So Facebook makes you look more lonely and then you want a new kind of Facebook to make you less lonely.
So I just think this whole conversation needs to be kind of contextualized.
And I do have a thought about how to make these systems better, particularly for children, which is that they not commit what I think of as the original sin of generative AI, which is to speak in the first person.
There is no I there.
So why do they address you as though there is an I there if not to ramp up this anthropomorphization that Justin talked about and which, in fact, is getting us into trouble?
Thompson: Yeah, I think this is the... this is one of the most important things in AI.
And I think that the original sin, as Sherry says, was this push towards AGI and the people who run these companies-- Brangham: Can you define AGI for people who don't know that term?
Yeah.
Artificial General Intelligence.
And so the idea is to build a system that is as much like a human as possible, can do all the things we do.
So even if you look at the early interfaces of ChatGPT, you know, it kind of types like a human.
It doesn't have to.
It responds like a human.
The voices were like a human.
And I wish all of those choices had been the opposite, meaning instead of trying to blur the lines between human and AI at every step along the way, we were trying to accentuate the lines between human and AI.
And there are some really important differences between humans and AI that affect the way they'd be able to serve as therapists, or as friends.
In real friendships, there aren't crazy power dynamics.
You have an AI, there is a really weird power dynamic in that you can unplug the AI.
Also, there's a weird power dynamic that the AI has infinite information about you and a giant company behind you that can manipulate it.
So there's like weird dynamics that exist.
And when you put these dynamics into a relationship and you make the relationship seem like it's human to human, where it's really human to bot, you can create all kinds of problems.
So what I would love and I think I'm mostly in agreement here with Justin and Sherry, what I would love would be a system where these lines are kept very firm and where AI is used in lots of ways, right?
I sometimes will ask it for like parenting advice.
I will ask it for very emotional stuff.
But there's a line I don't cross in sort of emotional connection to it.
And I always make sure that the system I'm talking to, I understand its place.
And it's a very different place from the humans in my life.
Justin, last... last minute and a half, we have a question for you.
To this point that Nick is talking about, that we need to train ourselves to recognize that we are always interfacing with an alien agent, something that is not human, isn't that going to be incredibly difficult as these things get better?
That line is intentionally blurred.
The companies themselves will be rewarded for creating things that blur that line so massively.
So are we, as humans, able to keep that filter up?
That's exactly the problem.
They're incentivized to blur that line.
And that's when the relationships become more problematic.
And you absolutely can make the AI do things that make them feel less like a person.
So that is absolutely where we should be headed.
But you have this problem of, like you were talking about, this blurring.
People realize that the AI is just not a human, and yet they still feel like it's a human.
So they're holding both of those things in their minds at the same time.
And that's going to make it so hard to invent an AI that doesn't feel like a person and yet you treat it like a person.
And so it's always going to be a danger, even if you do your best to make it seem less human.
I cannot thank the three of you enough.
This is such a fascinating conversation.
I feel like we could go on for another hour about this.
Sherry Turkle, Justin Gregg, Nick Thompson, thank you all so much for being here.
Thompson: It was a total pleasure, thank you.
Gregg: Thank you.
Before we go, we want to talk about a different way that AI is getting into the hearts and minds of thousands, and that is that it is starting to write romance novels.
This genre has been around for generations with modern-day bestsellers like Loretta Chase's "Lord of Scoundrels," which is a classic of the enemies-to-lovers genre, or Julia Quinn's historical romance, "The Duke and I," which was the first in the popular "Bridgerton" series.
This genre is, of course, where we also first saw Fabio, whose flowing mane and bulging muscles graced the covers of novels like "Savage Promise," "Texas Splendor" and "Golden Temptress."
Well, now artificial intelligence is being used to churn out its own new versions of these bodice rippers.
New York Times journalist Alexandra Alter profiled longtime romance novelist Coral Hart.
Using different pen names, Hart has recently begun using AI to crank out new novels at an astonishing pace.
But Alter writes that the AI programs Hart is using aren't going to replace flesh and blood authors just yet.
Quote, "Some programs refused to write explicit content, which violated their policies.
Others, like Grok and NovelAI, produced graphic sex scenes, but the consummation often lacked emotional nuance and felt rushed and mechanical.
The program Claude delivered the most elegant prose, but was terrible at sexy banter."
As you might imagine, the book industry, a lot of writers and many readers hate this development, believing it's just a soulless facsimile of real storytelling.
It's that stigma that has kept Coral Hart from identifying which of her pen name books were, in fact, crafted with AI.
They have sold tens of thousands of copies.
But Hart says this technology is here to stay.
Quote, "If I can generate a book in a day and you need six months to write a book, who is going to win that race?"
That is it for this episode of "Horizons."
Thank you so much for watching.
Narrator: Support for "Horizons" has been provided by Steve and Marilyn Kerman and the Gordon and Betty Moore Foundation.
Additional support is provided by Friends of the News Hour.
♪ This program was made possible by contributions to your PBS station from viewers like you.
Thank you.
♪ You're watching PBS.