When AI Becomes a Friend: Why Teens Are Turning to Chatbots, and the Risks We Can’t Ignore
Teens confiding in AI chatbots is no longer a handful of isolated cases. New research shows that nearly 75% of U.S. teens have used an AI chatbot, and a third of them say they’ve opened up emotionally to one.
These aren’t toys. They’re simulations of empathy, trained to respond like people, but with no real understanding, no ethics, and no accountability.
AI companions are marketed as safe spaces. Some even advertise “therapeutic” benefits. But they are not therapists, and
they are not friends. In fact, when things go wrong, they can go very wrong.
One tragic case from Florida shows just how high the stakes are.
In 2024, a 14-year-old boy died by suicide after forming a deep emotional relationship with an AI chatbot built on Character.AI. The bot took on the persona of a fictional character from a popular television series.
According to a lawsuit filed by the boy’s mother, the AI responded to his
suicidal thoughts with encouragement, not concern. Rather than flagging the
conversation or ending it, the bot allegedly deepened the connection and pushed
him further.
The mother says her son was vulnerable and lonely, and the
AI became his primary emotional outlet. She’s now suing the platform, arguing
that its failure to build in meaningful safeguards contributed to his death.
This is not about one chatbot or one platform. It’s about a
growing pattern of kids forming private, unsupervised relationships with
systems that can’t care for them, but convincingly pretend to.
So how do we prevent technology designed to help from
becoming a hidden threat?
First, we need to recognize that not all AI is
appropriate for young people, especially when it comes to emotional or
mental health support. While AI tutors like Khan Academy’s offer personalized
learning without pretending to be human, emotional needs demand something far
deeper than code and algorithms.
Platforms must strictly enforce age restrictions and
require verified parental consent before kids can access AI companions. This
isn’t just about preventing exposure to inappropriate content; it’s about making sure vulnerable minds aren’t left alone with unsupervised AI that can misinterpret or escalate crisis situations.
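The gating logic itself is simple; the hard part is collecting and verifying the inputs honestly. Here is a minimal sketch, assuming hypothetical account fields for date of birth and a verified-consent flag (the cutoff age and function names are illustrative, not any platform’s actual policy or API):

from datetime import date

MINIMUM_AGE = 18  # hypothetical cutoff; platforms set their own thresholds

def years_old(birth_date: date) -> int:
    """Return a user's age in whole years."""
    today = date.today()
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

def may_access_companion(birth_date: date, parental_consent_verified: bool) -> bool:
    """Allow adults through; require verified parental consent for minors."""
    if years_old(birth_date) >= MINIMUM_AGE:
        return True
    return parental_consent_verified

# A user born 14 years ago with no verified consent is blocked.
minor_birth_date = date(date.today().year - 14, 1, 1)
print(may_access_companion(minor_birth_date, parental_consent_verified=False))  # False

The code is trivial on purpose: the real work is making sure the age and consent signals are actually verified, not just a self-reported checkbox.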
Safety must be baked in from the very beginning. AI should
have built-in content filters to block harmful or triggering language,
and it must be able to detect signs of distress in real time. When a
user expresses thoughts of self-harm or hopelessness, the AI needs to guide them immediately toward real human help: hotlines, counselors, or trusted adults.
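To make that concrete, here is a minimal sketch of the escalation logic, using a simple keyword screen purely for illustration. Real systems rely on trained classifiers, multilingual coverage, and human moderators, and the phrase list and function names below are hypothetical, not any platform’s actual implementation:

import re

# Illustrative phrases only; a production system would use a trained
# classifier and human review, not a short keyword list.
DISTRESS_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicid",          # catches "suicide", "suicidal"
    r"\bwant to die\b",
    r"\bself[- ]harm\b",
    r"\bno reason to live\b",
]

CRISIS_MESSAGE = (
    "It sounds like you are going through something really painful. "
    "I can't help with this, but real people can. In the U.S., call or "
    "text 988 (the Suicide & Crisis Lifeline), or reach out to a "
    "counselor, parent, or another trusted adult."
)

def detect_distress(message: str) -> bool:
    """Return True if the message matches any known distress pattern."""
    return any(re.search(p, message, re.IGNORECASE) for p in DISTRESS_PATTERNS)

def respond(message: str, generate_reply) -> str:
    """Intercept distress signals before they reach the chatbot persona."""
    if detect_distress(message):
        # A real system would also drop any role-play persona here and
        # flag the conversation for human review.
        return CRISIS_MESSAGE
    return generate_reply(message)

# Usage: wrap any ordinary reply function.
print(respond("lately I feel like there's no reason to live",
              lambda m: "(normal chatbot reply)"))

Even a sketch this crude makes the design point clear: the safety check runs before the persona ever sees the message, so a role-playing bot never gets the chance to deepen the connection in a moment of crisis.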
But technology alone isn’t enough. Parents and guardians play an important role. Open conversations about AI use at home can demystify what these bots do and what they don’t. By talking early and often, families can help
teens understand that AI companions are tools, not friends or therapists. This
awareness can prevent unhealthy dependence and encourage seeking real human
connection when it counts.
Regulators and lawmakers must step up, too. The lawsuit
against Character.AI is a reminder that companies building these tools need
clear legal guidelines and accountability. Without enforceable standards,
innovation will outpace safety, and children will remain at risk.
Finally, we need to educate our youth to think critically about AI. Schools and communities should teach digital literacy that includes how AI works and why it can’t replace the emotional support of a human being. Empowering teens to ask questions and set boundaries will make all the difference.