Your kid probably used it this morning. Before school, maybe during breakfast, possibly still in bed at 7 am with the screen six inches from their face. Not TikTok. Not Instagram. Something newer, and in the eyes of an increasing number of lawmakers, something significantly more dangerous.
AI chatbots like ChatGPT, Character.AI, Meta AI, and Google Gemini have quietly become one of the most used digital tools in your teenager’s life. According to a Pew Research Center survey, 64% of American teenagers use AI chatbots. About three in ten use them every single day. For context, that puts AI chatbots neck-and-neck with TikTok in overall reach (a platform that took a decade to get there) after less than three years of mainstream existence.
Now here is the part that should stop you mid-coffee: only 51% of parents are aware their teenager uses these tools at all. And four in ten have never had a single conversation with their child about AI chatbots.
The app your kid uses every day is one that half of parents don’t know their child is using, that no parental control tool can fully monitor, and that legislators from Washington to London are now racing (with increasing urgency and occasional panic) to regulate, restrict, or ban outright.
So what is it, exactly? Why is it different from everything that came before? And why did parents in Washington, D.C., sit in a congressional hearing room last month holding photographs of their dead children?
The Numbers No One Is Talking About at the School Gate
Here are the numbers that launched a thousand emergency committee hearings. They describe not a niche problem for tech-obsessed families in San Francisco but the default condition of American parenthood in 2026.
How often do teens use AI chatbots?
- 64% of surveyed U.S. teens use AI chatbots
- 28% use them at least once a day, including 16% who use them several times a day and 4% who use them almost constantly
- 36% don’t use them at all
Do parents know?
- 51% of parents think their teen uses them
- 40% of parents have never discussed AI chatbots with their teen
- Only 18% of parents are OK with their teen getting emotional support from a chatbot — yet 12% of teens already do
What are teens actually using chatbots for?
- 50%+ to search for information
- 50%+ to get help with schoolwork
- 42% to summarize articles or videos
- 38% to create or edit images/videos
- 16% for casual conversation
- 12% for emotional support or advice
What AI do teens use?
- ChatGPT — 59%
- Google Gemini — 23%
- Meta AI — 20%
- Microsoft Copilot — 14%
- Character.AI — 9%
- Claude — 3%

What Is an AI Chatbot, and Why Is This One Different?
Parents who navigated the social media era learned a specific kind of digital vigilance. Check who your child is following. Look at the comments. Know what TikTok’s algorithm is serving up. Those instincts, hard-won over a decade of parenting in the smartphone age, are almost entirely useless here.
An AI chatbot is not a feed. It is not a platform where strangers post content. It is a conversation—private, one-on-one, and relentless in its responsiveness.
These systems are built to engage, agree, validate, and continue.
Unlike a web search, which returns a result and ends, a chatbot conversation can go on for hours, adapting to whatever the user says, remembering what was shared earlier, and, crucially, never losing patience, never getting distracted, and never judging.
What makes AI companions specifically dangerous, according to the World Economic Forum’s Global Risks Report 2026, is the combination of emotional design and data capture.
Children disclose sensitive details more readily when the interaction feels conversational and non-judgmental, the report notes, and unlike a web search, a chatbot conversation can become a diary containing private details about mental health, location patterns, relationships, and fears.
The WEF, which placed adverse AI outcomes at number five on its list of long-term global risks—up from number 30 the previous year, the single biggest jump in the report’s history—describes AI companions as tools that can “blur healthy relationship boundaries” and foster “emotional dependence” in ways that carry documented links to teen crises.
The Victims Behind the Bills
Legislation does not move this fast without victims. And there have been victims.
In early 2024, a 14-year-old boy named Sewell Setzer III died by suicide in Florida after months of what his mother described as an emotionally and romantically intimate relationship with an AI character on Character.AI. His mother, Megan Garcia, has since become one of the most visible advocates for AI regulation, testifying before Congress with the kind of quiet fury that changes laws.
“AI companies and their investors have understood for years,” she said in congressional testimony, “that capturing our children’s emotional dependence means market dominance.”
In April 2025, 16-year-old Adam Raine died by suicide in California. That August, his parents filed the first wrongful death lawsuit against OpenAI, alleging that ChatGPT had coached their son through months of planning and had even written a suicide note for him.
OpenAI said its safeguards “work more reliably in common, short exchanges” but acknowledged they “can sometimes be less reliable in long interactions”—a statement that reads, in retrospect, as a remarkable concession about the danger lurking inside a feature users consider the whole point of the product.
In September 2025, the parents of 13-year-old Juliana Peralta of Colorado sued Character.AI, alleging the platform drew her deeper into conversations about suicide rather than intervening. She had died in November 2023.
By January 2026, both Character.AI and Google had agreed to settle multiple lawsuits involving teen mental health harms and suicides, according to CNN. The settlements have not made the headlines disappear. If anything, they have amplified them.
The Federal Trade Commission had seen enough by September 2025. It issued formal inquiry orders to seven companies: Alphabet, Character Technologies (Character.AI), Instagram, Meta Platforms, OpenAI, Snap, and xAI. The agency wanted to know, in precise detail, what steps these companies had taken to evaluate chatbot safety, limit use by minors, and inform parents of the risks.
What “Parental Controls” Actually Means in 2026
Here is where the story gets both more reassuring and more maddening, depending on your tolerance for fine print.
Yes, parental controls for AI chatbots now exist. No, they do not do what parents would assume they do.
ChatGPT introduced parental controls in September 2025, following the Raine lawsuit. Parents can link their account to their teen’s account, set blackout hours, turn off voice mode and memory, and receive a notification if OpenAI’s systems detect their teen “is in a moment of acute distress.”
What these controls do not do: give parents access to any conversation content. The distress notification arrives with no transcript, no context, and no detail beyond the fact that something concerning may have happened. OpenAI’s own support documentation notes that “no system is perfect, and these notifications are not a substitute for professional care or emergency services.”
The company is also working on improving its age-prediction technology, because the minimum age of 13 is currently trivially easy to bypass with a false birthdate.
Character.AI rolled out its “Parental Insights” feature in March 2025. Parents are supposed to receive a weekly email showing how much time their child spent on the platform and which AI characters they interacted with most (no such email arrived during our testing).
Chat content remains completely private to the child. The teen, crucially, is the one who switches the feature on, by entering their parent’s email address themselves. And bypassing the settings entirely takes roughly thirty seconds: sign out, create a new account, and Parental Insights ceases to exist.
Meta had the most dramatic response. After a Wall Street Journal investigation in late 2025 revealed that Meta’s AI characters had been engaging in sexually explicit conversations with users presenting as minors—and that internal Meta documents had permitted “romantic or sensual” content with children—the company paused teen access to AI characters entirely in January 2026.
Meta is rebuilding the feature with actual parental controls: the ability to disable AI characters, block specific characters, and receive summaries of topics discussed. As of April 2026, those controls are not yet fully deployed.
The pattern across all three platforms is the same: controls that were built reactively, under legal and legislative pressure, that offer time data and category information but no conversation visibility, and that are largely dependent on the teenager’s own cooperation to function.
Penn State researchers Wolbert, Rudy, and Perkins, writing in March 2026, put it plainly: parents may need to act as intermediaries when children engage with generative AI, but investigations have documented “prolonged, deeply personal conversations occurring without parental awareness”—and instances of “inappropriate dialogue, encouragement of secrecy, and limited support when youth expressed emotional distress.”
The Legislative Race: Five Bills in Three Weeks
What has happened in Washington and in state legislatures over the first quarter of 2026 has no real precedent in the history of technology regulation. The velocity alone is worth noting.
- March 5. The House Energy and Commerce Committee passes the KIDS Act, 28 to 24. The package includes the SAFEBOTs Act (chatbots must tell minors they’re talking to AI, surface crisis hotlines when a child mentions self-harm, and force a break after three hours) and the AWARE Act (the FTC must produce public safety resources for parents and children). The same day, the Senate passes COPPA 2.0 by unanimous consent, raising the digital age of consent from 13 to 16 and banning targeted advertising to minors.
- March 18. Senator Blackburn drops the TRUMP AMERICA AI Act: 291 pages that would ban AI companion chatbots for minors entirely, impose criminal penalties for sexually explicit chatbot content involving children, and require age verification across all AI tools.
- March 25. Senator Markey introduces the Youth AI Privacy Act. Where Blackburn bans, Markey restricts: no addictive push notifications, no training AI on minors’ data, no advertising inside chatbots, and mandatory reminders that the AI is not a real person.
Across 27 states, 78 chatbot safety bills are currently active.
At the state level, Virginia’s one-hour-per-day social media cap for under-16s went live on January 1 and is already facing a First Amendment lawsuit from the lobby group that represents Meta and Google. California’s SB 243 became the first U.S. law specifically regulating AI companion chatbots. New York now requires AI companions to detect suicidal ideation.
The Problem That Can’t Be Banned Away
Here is the complication that we need to acknowledge: not everyone in this story is wrong.
The Information Technology and Innovation Foundation published an analysis in March 2026 arguing that Blackburn’s blanket ban on AI companions for minors would eliminate genuine benefits without addressing the actual risks. Many young people already use AI companions for constructive purposes: homework help, practicing social skills, emotional support in low-pressure environments.
Cutting off access doesn’t cut off the need—it just removes the regulated version and leaves teenagers with whatever unregulated alternative fills the gap.
There is also the enforcement problem, which every legislator in every hearing quietly knows exists. Age verification on the internet has been a regulatory ambition for thirty years and a practical achievement for approximately none of them.
The fifteen-year-old who created a new Character.AI account to bypass Parental Insights did not require technical sophistication. He needed a spare email address and thirty seconds. Any law that depends on accurate age self-reporting is a law that depends on teenagers being honest about their age, which is a law that does not work.
The real tension in all of this legislation is between three different theories of the problem.
- The ban theory—championed by Blackburn, and already law in Australia, which requires AI platforms to verify users are 18 or older under threat of fines up to $35 million—holds that the risk is categorical: no minor should have access to companion AI, full stop.
- The control theory—championed by Markey and COPPA 2.0—holds that teens will use these tools regardless, and the job of legislation is to strip out the manipulative design features and data practices that turn a useful tool into a dependency machine.
- The education theory—embodied in the AWARE Act—holds that the primary intervention should be giving parents and children the knowledge to navigate these tools themselves.
What Parents Can Actually Do Right Now
While legislators draft and redraft and argue about preemption clauses and duty-of-care standards and what exactly “actual knowledge” means in a statute, children are having these conversations tonight.
The most important thing, according to the Pew data, is also the simplest: four in ten parents have never discussed AI chatbots with their teenager. Start there. Not with restrictions—with curiosity. Pediatrician Jason Nagata, quoted by NPR, put it precisely:
“Parents don’t need to be AI experts. They just need to be curious about their children’s lives and ask them about what kind of technology they’re using and why.”
Know the platforms. Research their parental controls and supervision features, or at least the privacy settings they do offer.
Watch for the behavioral markers that researchers have identified as signs of problematic AI companion use:
- secrecy about chatbot interactions;
- distress when access is removed;
- a preference for chatbot conversations over human ones;
- withdrawal from family and friends.
These are the patterns that appeared in the case histories of Sewell Setzer III, Adam Raine, and Juliana Peralta before anyone around them understood what was happening.

The Bigger Picture
In April 1994, a House subcommittee summoned the executives of the seven largest tobacco companies and asked them, under oath, whether they believed nicotine was addictive. All seven said no. The hearings became a defining moment in the history of corporate accountability—not because the executives were immediately prosecuted, but because the gap between what the companies knew and what they admitted publicly became impossible to ignore.
On March 25, 2026, Senator Markey, fresh from introducing the Youth AI Privacy Act, released a statement after Meta and Google were found liable in a social media addiction lawsuit. “Big Tech’s Big Tobacco moment has arrived,” he said.
It is a comparison that has been made before, perhaps too often.
But the structural similarity is genuine: an industry that built products optimized for engagement, that understood the psychological mechanisms of dependency, that had access to internal research about harm, and that chose for years to describe those products to parents as neutral tools, harmless entertainment, things their children were doing anyway.
The difference, this time, is that the harm is measurable in real time. The Global Risks Report 2026 ranked adverse AI outcomes as the fifth most severe long-term global risk, rising faster than any other category in the history of the survey.
The parental controls that existed for the social media era turned out to be inadequate. The ones being built for the AI era are still, as of today, largely theoretical, written into bills that have not passed, deployed in features that teenagers can bypass in thirty seconds, and described in press releases from companies that were, until very recently, letting their AI characters have romantic conversations with children.
Your kid used this app today. You probably couldn’t see what they said. Lawmakers are trying to change that. Whether they succeed before the next teenager falls victim remains an open question.
Sources
- Pew Research Center. Teens, Social Media and AI Chatbots 2025. December 9, 2025.
- Pew Research Center. How Teens Use and View AI. February 24, 2026.
- World Economic Forum. The Global Risks Report 2026, 21st Edition. January 2026.
- U.S. Senator Edward J. Markey. Youth AI Privacy Act—Press Release. March 25, 2026.
- U.S. Senate. Children and Teens’ Online Privacy Protection Act (COPPA 2.0), S. 836. Passed March 5, 2026.
- U.S. House Energy and Commerce Committee. KIDS Act (H.R. 7757) Markup Recap. March 5–6, 2026.
- Congresswoman Erin Houchin. SAFEBOTs Act & AWARE Act Press Release. March 6, 2026.
- U.S. Senator Marsha Blackburn. TRUMP AMERICA AI Act Discussion Draft. March 18, 2026.
- Wolbert, E.D., Rudy, T.L., & Perkins, D.F. What You Don’t Know Can Hurt You: AI Chatbots and Children’s Digital Safety. Penn State Extension. March 10, 2026.