Meta – Trust & Safety

Redesigning the reporting experience for 1B+ users across Messenger and Instagram

Context

For two years I led one of Instagram and Messenger’s integrity teams, focused on high-severity harms affecting over 1 billion users.

As Senior Product Designer on Messenger’s integrity team, I owned the end-to-end redesign of the reporting flow – from research and strategy through to shipped product. The work spanned Messenger and Instagram, and was built in close collaboration with policy, legal, engineering, and child safety NGOs.

Only 2.6% of users who experience harm report it. The rest give up, or block and move on.

“I don’t think our reports are taken seriously, because no one from FB ever gets back to let us know if they’ve addressed the issue. So it’s a waste of time. I’d rather just block the person and get on with my life.” – FB Messenger user

Problem

We reframed the problem around trust – and distilled it into three things users actually needed:

  1. Voice
  2. Transparency
  3. Support

Solution

The solution centred on rebuilding trust at every stage of the reporting journey.

  1. Evidence selection: Giving users a way to highlight and contextualise the harm they experienced
  2. Post-reporting communications: A progress flow that closed the feedback loop and surfaced outcomes in real time
  3. Support tools: A prevent-control-care framework providing reachability settings, remediation options, and emotional support resources

Outcome

Harm enforcement increased 3x, and child exploitative imagery fell by 65%.

  • 3x increase in harm enforced (2.8B → 8.4B yearly reports)
  • 65% reduction in child exploitative imagery
  • 170% increase in reports from the general population
  • 60% increase in reports from minors