Meta, the parent company of Instagram, has unveiled a set of new features aimed at improving teenagers’ safety on the platform. Amid escalating concern over harmful content’s impact on young people, the changes are a step toward mitigating those risks.
Innovative Protection for Direct Messages
On Thursday, Meta announced it is testing a feature that blurs direct messages containing nudity, aiming to shield teens from inappropriate content. The feature uses on-device machine learning to screen images before they are displayed, meaning the analysis happens on the user’s phone rather than on Meta’s servers.
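To illustrate the general idea, here is a minimal sketch of threshold-based screening: a hypothetical on-device classifier assigns an image a nudity score, and images above a threshold are blurred before display. The classifier, function names, and threshold here are illustrative assumptions, not Meta’s actual implementation.

```python
def box_blur(pixels, radius=2):
    """Apply a simple box blur to a 2-D grid of grayscale pixel values."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Average the neighborhood around (x, y), clamped to the image edges.
            window = [pixels[j][i]
                      for j in range(max(0, y - radius), min(h, y + radius + 1))
                      for i in range(max(0, x - radius), min(w, x + radius + 1))]
            row.append(sum(window) // len(window))
        out.append(row)
    return out

def screen_image(pixels, nudity_score, threshold=0.8):
    """Blur the image if a (hypothetical) on-device classifier's score
    exceeds the threshold. Returns (image, was_blurred)."""
    if nudity_score >= threshold:
        return box_blur(pixels), True
    return pixels, False
```

In a real system the score would come from a model running locally on the device, which is what allows this kind of screening to work even inside end-to-end encrypted chats.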
Automatic Activation for Youth
Nudity protection will be enabled by default for users under 18, and Meta will also encourage adults to turn it on. Because the analysis runs on the device, the feature works even in end-to-end encrypted chats, balancing privacy with added protection.
Combating Sextortion Scams
Meta is also developing technology to identify and flag accounts associated with sextortion scams, and it plans to show warning pop-ups to users who may be interacting with such accounts.
Continued Commitment to Content Moderation
Following its pledge in January to limit teenagers’ exposure to sensitive content on Facebook and Instagram, Meta continues to reduce the visibility of topics such as suicide, self-harm, and eating disorders.
Legal Pressures Prompt Action
Meta’s moves come amid increasing legal scrutiny in the U.S. and Europe. In October, attorneys general from 33 states, including California and New York, sued Meta, alleging the company misled the public about the risks of its platforms. The European Commission has also asked Meta for details on how it protects children from illegal and harmful content.