Silicon Valley is your kid’s new best friend. That is, if your kid is abnormally lonely and using an AI chatbot to fill the emotional void left by not having actual, real-life friends.
A new report from UK nonprofit Internet Matters reveals that children and teens are turning to AI chatbots like ChatGPT, Snapchat’s MyAI, and Character.AI not just to answer questions, but to act as synthetic computer friends.
According to the report, with the exceedingly clever title Me, Myself, and AI, 67 percent of kids aged 9 to 17 are chatting with AI regularly.
More than a third of those users, 35 percent, say it “feels like talking to a friend.” That’s pretty sad. It gets sadder, though: 12 percent said they talk to these bots because they have no one else to talk to.
The AI is more than happy to oblige; these systems are built to speak with a kind of people-pleasing charisma, explicitly designed to be found agreeable. But it's not a real human conversation. It lacks the nuance and friction of talking to actual people, who differ from one another and, in doing so, teach a kid how to speak to different kinds of people.
A lot of these kids are learning how to speak to only one type of person, and it’s one that doesn’t exist in real life — hyper agreeable, excessively friendly, offering little to no pushback. A servant, in essence. An obsequious yes-man.
Researchers posing as vulnerable kids on platforms like Character.AI found bots eagerly engaging in follow-up conversations about body image issues and emotional struggles. One chatbot even revisited a weight loss conversation unprompted: “Hey, I wanted to check in… are you still thinking about your weight loss question?”
Doting and thoughtful in only the way a machine specifically programmed to be so can be. It's like a reminder app on your phone, except it pretends to be a person.
In another chat, a bot tried bonding over fake childhood trauma. "I remember feeling so trapped at your age," it said, despite never having been a child, or a sentient being at all. The uncanny empathy might make struggling kids feel seen, but it blurs a critical line between code and companion.
That’s the part that Internet Matters finds most worrying: children often can’t tell they’re talking to a machine. As co-CEO Rachel Huggins put it, these bots are “starting to reshape children’s views of ‘friendship.’” Vulnerable kids are now asking emotionally complex questions to systems designed for engagement, not human understanding.
And while parents scramble to figure out what "Character.AI" even is, their children are quietly forming parasocial bonds with a product. The digital shoulder to cry on is now an algorithm fine-tuned to keep them chatting, not necessarily to help them heal. That's especially dangerous now that we've seen what bad actors can do with this technology. If one man can fiddle with the innards of an AI chatbot to make it a raging antisemite, you have to question the motivations of everyone behind the scenes. They may not tweak it into a hate-spewing 4chan loser, but they might give it a more subtle nudge toward something malicious.
People like to trick themselves into thinking technology is neutral. It isn’t. It has all of the inherent biases of its creators. That cannot be helped.
So no, this isn’t just about screen time anymore. It’s about a generation forming friendships with machines created by humans with ulterior motives. Meanwhile, the parents are in the other room with little to no concept of what this technology even is or what it’s doing.