A Staggering Proportion of High Schoolers Say Talking to AI Is Better Than Real-Life Friends

A new survey found that over half of American teens are regular users of anthropomorphic AI companions like Character.AI and Replika.

That’s striking on its own, as an illustration of how embedded AI companions have become in mainstream teenage life. But even more startling were the 31 percent of surveyed teens who said their interactions with AI companions were either as satisfying or more satisfying than conversations with real-life friends — a finding that shows how profoundly AI is already changing the formative and tumultuous years of adolescence.

The survey, published today by the tech accountability and digital literacy nonprofit Common Sense Media, polled 1,060 teens aged 13 to 17 across the US. It found that around three in four kids have used AI companions, defined by Common Sense as emotive AI tools designed to take on a specific persona or character — as opposed to an assistive, general-use chatbot like ChatGPT — with over half of surveyed teens qualifying as regular users of AI companions, meaning they log on to talk to the bots at least a few times per month.

While about 46 percent of teens said they’ve mainly turned to these bots as tools, around 33 percent said they use companion bots for “social interaction and relationships, including conversation practice, emotional support, role-playing, friendship, or romantic interactions,” according to the report.

“The most striking finding for me was just how mainstream AI companions have already become among many teens,” said Dr. Michael Robb, Common Sense’s head of research, in an interview with Futurism. “And over half of them say that they use it multiple times a month, which is what I would qualify as regular usage. So just that alone was kind of eye-popping to me.”

AI companions have come under heavy scrutiny in the months following the filing of two separate lawsuits against Character.AI and its benefactor, the tech giant Google, over allegations that the company released a negligent, reckless technology that emotionally and sexually abused multiple minors, resulting in physical and psychological harm. One of the youths at the heart of these lawsuits, a 14-year-old in Florida named Sewell Setzer III, died by suicide after extensive interactions with bots on Character.AI, which included intimate and sexually explicit conversations.

In a separate safety assessment published earlier this year, researchers from Common Sense and Stanford University’s Brainstorm lab warned that no AI companion was safe for kids under the age of 18. But while that report focused deeply on content and safety pitfalls — interactive sexual or violent content easily generated by companion bots, the unreliability of the bots’ ability to provide accurate and helpful information, and the unknowns surrounding how access to agreeable, always-on social companions might impact kids’ developing minds — this latest study was aimed at understanding the breadth of use of companions among young people, and how integrated they’ve become in day-to-day teen life.

“Society is grappling with the integration of AI tools into many different aspects of people’s lives,” Robb said. “I think a lot of tools are being developed without children in mind, even though they are being accessed by users under 18 quite frequently… but there hasn’t, to date, been much research on what the AI companion environment is for children.”

The most widely reported use case was entertainment, while many others said they use AI companions as "tools or programs" rather than as friends, partners, or confidantes; around 80 percent of teen users also reported that they spend more time with real, human friends than they do with AI companions, and about half of teens expressed skepticism about the accuracy and trustworthiness of chatbot outputs. In other words, many teens do seem to be setting healthy boundaries around AI companions and their limits.

“I don’t think teens are just replacing human relationships wholesale with AI companions; I think a lot of teens are approaching them fairly pragmatically,” said Robb. “A lot of kids say that they’re using it for entertainment and to satisfy their curiosity, and the majority still spend a lot more time with real friends and say that they find human conversations more satisfying.”

“But at the same time,” he caveated, “you still see little inklings below the surface that could be problematic, especially the more ingrained these things get in kids’ lives.”

The most ominous group in the survey might be the teens who find human social interaction no more satisfying than interactions with AI companions. Twenty-one percent of teens, the survey noted, said their conversations with AI bots were just as good as human interactions, and 10 percent said they were better. About one-third of minors who reported AI companion use also said that they've chosen to discuss serious or sensitive issues with the bots instead of with human peers.

“There’s a good chunk of teen users who are choosing to discuss serious matters with AI instead of real people, or sharing personal information with platforms,” said Robb, adding that the findings “raise concerns about teens’ willingness to share their personal information with AI companies.”

“The terms of service that a lot of these platforms have grant them very extensive, often perpetual rights to the personal information kids share,” said the researcher. “Anything a teen shares — their personal information, their name, their location, photographs of themselves… and also, the very intimate thoughts that they’re putting in there that all becomes fodder for the companies to be able to use however they want.”

Though most mainstream companion platforms technically forbid minors — the most high-profile exception being Character.AI, which has always rated its platform as safe for teens 13 and over — these platforms are extremely easy for young people to access regardless; age verification is generally limited to providing a working email and self-reporting your birthday. The AI industry also effectively self-regulates, and there are virtually no rules dictating how generative AI products can be created, how they might be rolled out to the public, to whom they can be marketed, and who can access them.

“There should be higher accountability for the tech platforms,” said Robb, adding that “we should have more meaningful regulation to regulate how platforms can provide products to children.”

Indeed, when it comes to teen use of AI companions, the burden of the AI industry’s regulatory vacuum falls heavily on parents — many of whom are struggling to keep up with the new tech and what it might mean for their children.

“There’s not a perfect plan for parents because they’re up against giant corporations who are very invested in getting their kids on these products,” said Robb. “Many parents don’t even know that these platforms exist… have that conversation openly, without judgment, as a first step.”

More on AI and kids: Vast Numbers of Lonely Kids Are Using AI as Substitute Friends
