
Africa’s digital space is growing rapidly — and so is the need for stronger, context-aware content moderation systems.
Recent developments following Meta’s decision to end its content moderation contract with Sama in Kenya have raised important questions about how harmful content in African contexts will be identified, reviewed, and addressed at scale.
This is about more than technology. It is about user safety, trust, and effective platform accountability.
The ACHPR’s Resolution 630, adopted in March 2025, also raised concerns about declining human moderation and the increasing reliance on automated systems that may struggle to understand African contexts.
Our publication, “We Found No Violation: When Harm Speaks, and Platforms Don’t Understand,” highlights how gaps in moderation can weaken trust in reporting systems and leave harmful content unaddressed.
When harmful content is consistently overlooked:
📍 Victims may lose confidence in reporting systems
📍 Harmful behaviour can become normalized
📍 Trust in digital platforms and safety mechanisms weakens
As Africa’s digital ecosystem continues to grow, there is an opportunity for all stakeholders to strengthen collaborative approaches to online safety.
Key areas for action include:
✔️ Invest in African language moderation systems
✔️ Build local, culturally aware moderation teams
✔️ Ensure complex cases are reviewed by human experts — not just AI
✔️ Build responsive reporting and accountability mechanisms
A safer digital future for Africa requires systems that are not only scalable but also context-aware, inclusive, and responsive to local realities.
#OnlineSafety
#ContentModeration
#DigitalRights
#TrustAndSafety
#AfricaTech
#PlatformGovernance
#InformationIntegrity
#TechPolicy
#DigitalSafety
#Techsocietal