Barbara Ortutay, Associated Press
Instagram’s parent company Meta has introduced new safety features aimed at protecting teens who use the platform, including information about accounts that message them and the option to block and report accounts with a single tap.
The company also announced Wednesday that it had removed thousands of accounts that were leaving sexualized comments or requesting sexual images from adult-run accounts of children under 13. Of these, 135,000 accounts were leaving comments, while another 500,000 were “linked to accounts that interacted inappropriately,” Meta said in a blog post.
The heightened measures arrive as social media companies face increased scrutiny over how their platforms affect the mental health and well-being of younger users. This includes protecting children from predators who ask them for nude images and then extort them.
Meta said teenage users blocked more than 1 million accounts and reported an additional million after seeing a “safety notice” reminding them to be careful in private messages and to block and report anything offensive.
Earlier this year, Meta began testing the use of artificial intelligence to determine whether children are lying about their age on Instagram, which is technically allowed only for users 13 and older. If a user is found to be misrepresenting their age, the account automatically becomes a teen account. Teen accounts, which the company made private by default in 2024, also restrict private messages so teens can only receive them from people they follow or are already connected to.
Meta faces lawsuits from dozens of U.S. states accusing it of harming young people and contributing to the youth mental health crisis.
Originally published: July 23, 2025, 2:47pm EDT