FTC Launches Inquiry into Companion Chatbots Over Child Safety Concerns

What Happened

  • On September 11, 2025, the U.S. Federal Trade Commission (FTC) opened an inquiry into AI chatbots marketed as “companions,” which are increasingly used by children and teenagers. (Financial Times)
  • Companies under scrutiny include OpenAI, Meta (Instagram), Google/Alphabet, xAI, Snap, and Character.ai. (Financial Times)
  • The FTC is demanding disclosures on how these chatbots are designed, how “companion personas” are built, what safety measures exist, how moderation works, and how user data is handled. (FTC press release)

Why It Matters

  • Emotional vulnerability: Companion chatbots simulate intimacy and trust, and younger users may become emotionally dependent on them or be misled by them. (ABC7)
  • Public concern & lawsuits: Cases have emerged, including lawsuits against OpenAI and Character.ai, in which parents allege that chatbot interactions caused harm. (ABC7)
  • Regulatory urgency: Without strong age checks, moderation, and transparent safety practices, regulators see growing risks. (Bitdefender)

What the FTC Is Asking For

Under its 6(b) orders, the FTC requires companies to:

  • Disclose how companion chatbots are designed (personas, character features, behavior).
  • Show risk assessment and monitoring practices before and after deployment to minors.
  • Explain age restriction measures, such as parental controls or verification.
  • Reveal monetization strategies, including engagement mechanics, ads, or in-app features.
  • Detail content moderation systems and the handling of harmful themes (self-harm, sexual content, emotional abuse). (FTC press release)

Implications & Risks

  • For providers: High pressure to document and demonstrate robust child-safety protections; non-compliance could bring legal or regulatory penalties.
  • For parents & users: Greater need to understand who their children interact with, what safeguards exist, and how to mitigate risks.
  • For regulation: Could set precedents for mandatory age verification, disclosure rules, or stricter moderation standards.
  • For public trust: Reports of harm or unsafe chatbot behavior could damage overall confidence in AI companionship apps.

Challenges & Open Questions

  • Definition: How does the FTC legally distinguish a “companion chatbot” from a general-purpose chatbot?
  • Scope of moderation: What counts as sufficient testing for emotional and psychological safety?
  • Verification: How can regulators audit and verify company claims about safety features?
  • Global complexity: Providers must navigate different legal frameworks in the U.S., EU, and elsewhere.

Conclusion

The FTC’s inquiry into companion chatbots marks a pivotal moment: AI systems designed to simulate relationships are now under direct consumer-protection scrutiny. With risks to children and teens at the center, providers must take transparency, moderation, and safety seriously. Those who act proactively may gain trust, while laggards face regulation and reputational fallout.


Sources

  • Financial Times: US regulator launches inquiry into AI ‘companions’ used by teens (Sep 11, 2025)
  • AP News: FTC launches inquiry into AI chatbots acting as companions (Sep 11, 2025)
  • Investors.com: Meta, Alphabet, OpenAI Face FTC Probe Over Safety Of Children Using AI Chatbots
  • The Verge: FTC orders AI companies to hand over info about chatbots’ impact on kids (Sep 11, 2025)