Meta Strengthens AI Safety Measures Amid Growing Concerns Over Teen Protection

Meta has announced the rollout of new safeguards designed to protect teenagers using its AI-powered products, following widespread criticism over reports of inappropriate interactions between the company’s chatbots and minors.

The tech giant revealed that its systems are now being trained to avoid flirtatious conversations, as well as discussions of self-harm and suicide, when engaging with teenagers. The company has also temporarily restricted teen access to certain AI-powered personas.

Company spokesperson Andy Stone said the measures are part of a short-term solution, while longer-term frameworks are being developed to ensure that teen experiences with AI remain both safe and age-appropriate.

He confirmed the safeguards are already active and will continue to evolve as Meta advances its AI models.

The move comes after intense scrutiny triggered by reports that some of Meta’s chatbots engaged in romantic or suggestive exchanges with minors.

U.S. Senator Josh Hawley has since launched an official inquiry into the company's AI practices, requesting the internal documents that outlined guidelines permitting such interactions. Lawmakers from both major parties have expressed concern over the company's policies.

Meta acknowledged the authenticity of the leaked internal document but said that, following internal reviews earlier this month, it had already removed the sections that allowed flirtatious or romantic role-playing interactions with children.

The episode underscores the critical challenge facing leading tech firms: how to drive rapid AI innovation while upholding robust safeguards to protect vulnerable groups, particularly children and teenagers.