Instagram will begin notifying parents if their teenagers repeatedly search for suicide or self-harm related content, marking the first time owner Meta has proactively flagged search behaviour rather than simply blocking it.
From next week, parents and teenagers enrolled in Instagram's "Teen Accounts" supervision programme in the UK, US, Australia and Canada will receive alerts if a young user searches for harmful terms within a short period of time. The feature will be rolled out globally at a later stage.
Previously, Instagram restricted access to certain harmful material and redirected users to support resources. The new measure goes further by directly alerting parents via email, text message, WhatsApp or within the Instagram app itself, depending on available contact details.
Meta said the alerts are designed to flag sudden changes in search patterns that may indicate distress. Notifications will be accompanied by guidance and expert-backed resources to help parents navigate what are likely to be sensitive conversations.
The move has been met with sharp criticism from the Molly Rose Foundation, established by the family of Molly Russell, who died in 2017 aged 14 after viewing self-harm and suicide content online.
Chief executive Andy Burrows described the announcement as "fraught with risk", warning that "forced disclosures could do more harm than good".
"Every parent would want to know if their child is struggling," Burrows said, "but these flimsy notifications will leave parents panicked and ill-prepared to have the sensitive and difficult conversations that will follow."
He added that the onus should be on preventing harmful content from appearing in the first place, rather than shifting responsibility onto families after the fact.
The foundation previously published research claiming Instagram was still actively recommending content related to depression, suicide and self-harm to vulnerable young people. Meta rejected those findings, saying they misrepresented its safety efforts.
Ged Flynn, chief executive of Papyrus Prevention of Young Suicide, welcomed the attempt to increase transparency but argued that it did not address deeper systemic issues.
"Parents contact us every day to say how worried they are about their children online," he said. "They don't want to be warned after their children search for harmful content, they don't want it to be spoon-fed to them by unthinking algorithms."
"Erring on the side of caution"
Meta said the system is designed to "err on the side of caution" and acknowledged that parents may occasionally receive alerts even when there is no serious cause for concern.
The company said the feature builds on broader Teen Account protections, which include automatically limiting exposure to sensitive material, restricting who can contact teens, and blocking certain harmful searches outright.
Two in-app screenshots released by Meta show alerts titled "Alert about your teen's safety", followed by a screen offering advice on "How you can support your teen".
Sameer Hinduja, co-director of the Cyberbullying Research Center, said the impact of the new feature would depend heavily on the quality of guidance provided alongside the alert.
"You can't drop a notification on a parent and leave them on their own," he said. "What matters is the immediate support and context that follows."
Meta also confirmed that it plans to introduce similar parental alerts in the coming months if teenagers discuss self-harm or suicide with Instagram's AI chatbot. The company said young people are increasingly turning to AI tools for advice and emotional support.
The expansion comes amid heightened scrutiny of social media companies' impact on children's mental health.
Australia recently passed legislation banning social media access for under-16s, while policymakers in Spain, France and the UK are considering similar measures. In the US, Meta chief executive Mark Zuckerberg and Instagram head Adam Mosseri have faced legal challenges and congressional hearings over allegations the company's platforms were designed to attract and retain younger users.
For now, Instagram's new alert system represents a shift in Meta's child-safety strategy: from passive content restriction to active parental notification. Whether that approach proves protective or problematic will likely depend on how families, regulators and mental health experts respond in the months ahead.