Mark Zuckerberg Thinks AI ‘Friends’ Can Solve The Loneliness Crisis. Here’s What AI Experts Think.
Feeling lonely? Mark Zuckerberg thinks maybe it’s time you send an AI bot a friend request.
Last week, the Meta CEO sat down for an hour-long conversation with podcaster Dwarkesh Patel and argued that it’s only a matter of time before society sees the “value” in AI friendships.
“There’s this stat that I always think is crazy,” Zuckerberg says in a clip going around social media. “The average American, I think, has, I think it’s fewer than three friends. Three people that they consider friends. And the average person has demand for meaningfully more. I think it’s like 15 friends or something, right?”
While Zuckerberg doesn’t argue that AI can replace actual friends, he does say it can get people longing for “connectivity” closer to that 15 number. (Especially “when the personalization loop starts to kick in and the AI starts to get to know you better and better,” he said.)
The tech billionaire also suggested there may be untapped potential in AI girlfriends and therapists, both of which are a whole different ethical can of worms.
Zuckerberg’s remarks quickly went viral, with commenters online accusing him of being out of touch and not comprehending the true nature of friendship. Some called his ideas “dystopian.”
“Nothing would solve my loneliness like having 12 friends I made up,” TV writer Mike Drucker joked on Bluesky.
Yet the tech CEO is, at least, attempting to offer solutions for a known problem. The loneliness epidemic ― especially isolation among teen boys ― is a growing public health concern, with significant individual and societal health implications.
According to a 2023 Gallup study, nearly 1 in 4 people worldwide ― approximately 1 billion people ― feel very or fairly lonely. (The number would likely have been higher had the pollsters surveyed people in China, the second-most populous country in the world.)
That said, as many tech media outlets noted, the argument in favor of AI friends is interesting coming from Zuckerberg, given Meta’s poor track record with implementing AI bots on its own platforms.
Stefano Puntoni, a marketing professor at the Wharton School who’s been studying the psychological effects of technology for a decade, pointed this out as well.
“Given what we know, I am not sure I’d want to delegate the job [of solving the loneliness epidemic] to such companies, considering their track record on mental health and teenage wellbeing,” Puntoni said. “Social media companies are currently not doing much to help most people, especially the young, forge meaningful and healthy connections with themselves or others.”
Just last week, Futurism reported that Facebook’s ad algorithm could detect when teen girls deleted selfies so it could serve them beauty ads ― a claim that was made in former Facebook employee Sarah Wynn-Williams’s tell-all, “Careless People.”
There have been cases (and subsequent lawsuits) where kids using AI companions through services like Character.AI, Replika and Nomi have received messages that turn sexual or encourage self-harm. Meta’s chatbots have similarly engaged in sexual conversations with minors, according to an investigation from The Wall Street Journal, though a Meta spokesperson accused the Journal of forcing “fringe” scenarios. (Proponents of AI like to talk about it like it’s a neutral tool ― “AI as the engine, humans as the steering wheel,” they’ll say ― but cases like that complicate the idea.)
Still, AI experts like Puntoni aren’t entirely against the idea of AI companionship. When used in moderation and with built-in boundaries in place, they say it has some benefits. In his recent research, Puntoni found that AI companions are effective at alleviating momentary feelings of loneliness.
Those who used the companion reported a significant decrease in loneliness, reporting an average reduction of 16 percentage points over the course of the week.
Puntoni and his colleagues also compared how lonely a person felt after engaging with an AI companion versus a real person, and surprisingly, the results were pretty much the same: Contact with people brought a 19-percentage-point drop in loneliness levels, compared with 20 percentage points for an AI companion.
“In our studies, we didn’t test the long-term consequences of AI companions ― our longest study is one week long. That should be a priority for future research,” Puntoni explained.
“My expectation is that AI companions will turn out to be very good for the wellbeing of some people and potentially very bad for the long-term wellbeing of others,” he said.
And a lot will obviously depend on the decisions made by AI companies, Puntoni said. Take Elon Musk’s X, for instance. A couple of months ago, Grok ― X’s AI chatbot ― released an X-rated AI voice mode called “unhinged” that screams at and insults users. (Grok also has personalities for crazy conspiracies, NSFW roleplay and an “Unlicensed Therapist” mode.)
“Those examples don’t exactly inspire confidence,” Puntoni said.
There are privacy concerns to consider when it comes to AI buddies, too, said Jen Caltrider, a consumer privacy advocate. Relationship bots are designed to pull as much personal information out of you as they can in order to tailor themselves into being your friend, therapist, sexting partner or gaming buddy.
But once you put all those hyper-personal thoughts out into the internet ― which AI is part of ― you lose control of them, Caltrider said.
“That personal information is now in the hands of the people on the other end of that AI chatbot,” she said. “Can you trust them? Maybe, but also, maybe not. The research I’ve done shows that too many of the AI chatbot apps out there have questionable, at best, privacy policies and track records.”
Dan Weijers, a senior lecturer in philosophy who studies ethical uses of technology at the University of Waikato in New Zealand, also thinks we should be skeptical of pronouncements about AI from spokespeople for profit-driven companies.
But he concedes that AI “friendship” can provide some things that human friendship could never: 24/7 availability (and the instant gratification that comes with that) and the ability to tailor AI to be the perfect, always agreeable companion.
That agreeableness is a polarizing feature. OpenAI recently withdrew an update that made ChatGPT “annoying” and “sycophantic” after users shared screenshots and anecdotes of the chatbot giving them over-the-top praise.
Others don’t mind the kissing up. Weijers, who reads a lot of forum discussions about human-AI companion interactions as part of his research, said there are cases where a person falls in love with their AI companion, not unlike the scenario in Spike Jonze’s 2013 film “Her.”
“A minority of users of AI companions have romantic relationships with their AI, and some will even say they are married to them,” Weijers said. “On one online forum, one person even claimed that their best friend was their AI companion despite having several human friends and a real-life husband.”
Still, isn’t part of friendship hearing the thoughts and opinions of someone who’s different from us? That’s what Sven Nyholm, a professor of the ethics of artificial intelligence at Ludwig Maximilian University of Munich, wonders about these bonds.
“AI chatbots can simulate conversation and produce plausible-sounding text outputs that resemble the sorts of things friends might say to us,” Nyholm said, but that’s about it.
“As humans, we want to be seen and recognized by others. We care about what other people think about us,” he said. “Other people have minds, whereas AI chatbots are mindless zombies.”
Valerie Tiberius, professor of philosophy at the University of Minnesota and the author of the forthcoming book “Artificially Yours: AI And The Value Of Friendship,” thinks AI companions supplementing friendships could still be healthy. Supplanting your friends is another story.
“Challenging, messy human friendships that contain friction and disagreement help us develop into interesting people; they enrich our lives beyond just improving our mood,” she said.
If you only had chatbot friends that are programmed to be unerringly supportive and positive, “you wouldn’t learn how dumb some of your own ideas are,” Tiberius said. “I also appreciate that my friends sometimes ‘check’ me in ways that a chatbot wouldn’t do.”
What AI chatbots “say” to us is based on impressive machine learning programs, but if you care about getting true recognition, Nyholm thinks they’re a poor substitute.
“I also really think we should perhaps start talking about the ‘AI-ization’ of life: When it is suggested that any problem — including loneliness ― should be solved with the help of AI, then we might be trapped in a mindset where it is assumed that for any problem we might have, AI is the solution.”
If people are lonely and need friends, instead of telling them AI can be their friend, Nyholm thinks tech companies should be using technology to connect them with other lonely people who are also looking for friends.
One thing is clear to Caltrider, the privacy advocate: As more and more people use these AI companions, we’re going to need some serious AI literacy training to learn how to navigate this new, so far unwieldy territory.
“I just read an article about a developing field of AI psychiatry to help AIs overcome their mistakes,” she said. “It’s scary to think there might be more money going into training AIs to understand humans than for humans to understand AIs.”
For the time being, Caltrider isn’t trusting AI to be her friend.
“Everyone has to make their own decisions here, though,” she said. “And honestly, I’ve asked ChatGPT some questions I probably wouldn’t want the world to know. It’s just easy and, yes, kind of fun.”