The Algorithm Decides Who Gets Heard

When Meta replaced its fact-checkers with Community Notes in January 2025, I assumed we would know fairly quickly whether it worked. A year on, what is clearer to me is that the moderation layer was never the real issue. The research shows that Community Notes reduce engagement once attached, but they arrive too slowly for viral content and can be gamed by coordinated users on polarised topics. What I missed was that platforms make money from engagement, which means the systems that decide what spreads are built for attention, not accuracy. Swapping the moderation layer does not change what it sits on. Looking back, that was just as true when the fact-checkers were still in place.

Conservatives called it a win for free speech. Critics said Meta was just walking away from its responsibilities. Both had a point, which is usually a sign that the real question is somewhere neither of them was looking.

So what is the real question? Not whether platforms should moderate content. They already do, and the scale of it is what often goes unnoticed. Platforms now make billions of moderation calls every day, far more than any editorial team could review. Those decisions are made by systems trained on criteria users cannot see or challenge, through processes that are not publicly visible. That is not a conspiracy; it is what operating at that scale requires. But when decisions that shape public discourse are made that way, it stops being a merely technical problem.

The EU's Digital Services Act (DSA), which became fully applicable in 2024, requires large platforms to be more transparent about how their moderation works and to give users more recourse when content is removed. That is a genuine attempt to put some accountability into a system that currently has very little.

A year of DSA enforcement has made clearer what rules like these can and cannot fix. Transparency requirements can show you the criteria a platform uses; they cannot change what those criteria are built to serve. As long as platforms profit from engagement, that incentive will shape what spreads, regardless of what the compliance documents say.

Whether it is a government issuing executive orders or an algorithm trained on engagement data, the question is the same: who is accountable for what gets heard, and to whom? A year ago I thought the answer would get clearer once we saw how Community Notes performed. It did not. But I have a clearer sense now of where the problem actually sits. And it is not in the layer anyone is arguing about.
