The Architecture of Silence: Why Digital Collective Action Requires Permission from the Systems We're Trying to Stop
How algorithmic amplification has become the most powerful form of control in human history, and what we can do about it.
By Gwylym Pryce-Owen
The Dome and the Algorithm
At the 2024 Ars Electronica Festival, something remarkable happened. Visitors to the HUMAN OVERS[A]IGHT: THE OPS ROOM installation discovered they could crash an AI system through sheer collective action. No sophisticated hacking. No technical expertise. Just people, especially children, pressing buttons together "like there was no tomorrow," as researcher Joaquín Santuber described it (Santuber & Tica, 2024).
The system wasn't designed to be stopped. There was no emergency shutdown protocol. But coordinated human action brought it to its knees anyway. Multiple times.
It was a beautiful proof of concept: collective action works. Humans can assert oversight over AI systems when we act together. The installation became a metaphor. We don't need to understand the machine's complexity to stop it. We just need to coordinate.
But here's the uncomfortable question the installation raises, the question that keeps me awake at night as I build the Philosophical AI Framework: What happens when the buttons are distributed across continents, behind screens, filtered by algorithms that decide who can see whom pressing them?
The visitors in that dome could coordinate because they shared physical space. They could see each other. They could feel the collective momentum building. The feedback was immediate and visceral: press together, watch the system struggle, press harder.
Digital collective action requires something far more fragile: algorithmic permission to assemble.
And that permission is controlled by the very systems we're trying to hold accountable.
The Simulation Study: Tribalism as Architecture, Not Accident
In 2025, researchers Maik Larooij and Petter Törnberg conducted a fascinating experiment. They built a simulated social media platform and populated it entirely with bots, no humans, just AI agents following basic social behaviour patterns. Then they watched what emerged.
The finding, published as "Can We Fix Social Media? Testing Prosocial Interventions using Generative Social Simulation," was stark: Regardless of the recommendation algorithm used, the network divided into tribal factions, with small groups of centralised elites controlling discourse (Larooij & Törnberg, 2025).
Read that again. Regardless of algorithm design.
They tried multiple approaches: chronological feeds, engagement-based ranking, diversity-promoting recommendations. It didn't matter. The structural dynamics of amplification inherently created hierarchy, tribalism, and elite gatekeeping.
This wasn't the result of bad actors. It wasn't Mark Zuckerberg or Elon Musk pulling strings. It wasn't even intentional algorithmic bias. It was the architecture itself, the mathematical inevitability of networks where some voices get amplified more than others.
The researchers discovered that once amplification dynamics exist, they create self-reinforcing feedback loops. Popular voices become more visible. More visibility leads to more engagement. More engagement leads to more algorithmic favour. The rich get richer. The connected get more connected. The heard get louder.
Meanwhile, the majority, the people actually trying to coordinate, to organise, to push back, fragment into isolated pockets, unable to find each other through the noise.
This is not a bug. This is the system working exactly as designed.
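To make that feedback loop concrete, here is a deliberately crude toy model. It is not the Larooij and Törnberg simulation (which uses LLM agents); it is a sketch built on a single assumption, that the chance of being shown scales with past engagement. Every number below is an arbitrary illustration.

```python
import random

def top_decile_share(ranked_feed: bool, users: int = 500, rounds: int = 2000) -> float:
    """Return the share of total engagement held by the top 10% of accounts."""
    engagement = [1.0] * users                     # everyone starts equal
    for _ in range(rounds):
        if ranked_feed:
            # Engagement-ranked feed: chance of being shown is proportional
            # to past engagement, which is the self-reinforcing loop.
            shown = random.choices(range(users), weights=engagement, k=10)
        else:
            # Neutral baseline: every account is equally likely to be shown.
            shown = random.choices(range(users), k=10)
        for u in shown:
            engagement[u] += 1.0                   # being seen earns new engagement
    top = sorted(engagement, reverse=True)[: users // 10]
    return sum(top) / sum(engagement)

random.seed(0)
print(f"neutral feed      : top 10% hold {top_decile_share(False):.0%} of engagement")
print(f"engagement-ranked : top 10% hold {top_decile_share(True):.0%} of engagement")
```

Even this stripped-down loop concentrates attention: the engagement-ranked feed hands the top decile a far larger share than the neutral baseline does, with no bad actor anywhere in the code.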
The Three Eras of Digital Amplification
To understand how we got here, we need to trace the evolution of social platforms through three distinct eras.
Era 1: The Democratisation Phase (Early Growth)
When TikTok launched, every video got a chance. When Twitter started, your tweets reached your followers. When LinkedIn began, your posts appeared in connections' feeds. The platforms needed users, so they amplified everyone. They had to prove the network was alive, vibrant, worth joining.
This wasn't altruism. It was growth strategy. But the effect was real: ordinary voices could be heard. A teacher could share educational content and reach thousands. An activist could organise. A whistle-blower could expose wrongdoing.
The buttons worked. You could press them and people would see.
Era 2: The Scale Problem (Algorithmic Filtering)
Then came scale. Billions of users. Trillions of posts. Server costs exploding. The "everyone gets amplified" model became mathematically impossible and economically unsustainable.
Platforms faced a genuine problem: how do you show people the content that matters to them when a hundred million posts compete for attention every hour?
Enter the recommendation algorithm. It wasn't initially nefarious; it was genuinely trying to solve the relevance problem. But the metric the platforms optimised for revealed their true purpose: engagement.
Not truth. Not value. Not collective wellbeing. Engagement. Because engagement means ad revenue. And ad revenue means survival.
The problem is that rage, fear, and tribalism are more engaging than nuance, truth, and bridge-building. The algorithm didn't choose to be evil. It just discovered that division pays.
Era 3: The Control Phase (Pay-to-Play and Shadow Suppression)
Now we're in the third era, where amplification has become a commodity. Want to be heard? Pay for it. Boosted posts. Promoted content. "Verified" status that mysteriously comes with better reach.
But it's worse than explicit pay-to-play. Because alongside the visible marketplace for attention, there's an invisible architecture of suppression:
- Shadow bans where your content exists but no one sees it
- Reach throttling that gradually turns down your volume without telling you
- Algorithmic demotions based on mysterious "quality signals"
- Coordinated suppression where certain topics or voices simply disappear
And the most insidious part? You can't tell if you're being suppressed or if people just don't care about what you're saying.
The button still exists. You can still press it. But the wires have been cut. You just don't know it yet.
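To see why those cut wires are invisible, consider a purely hypothetical sketch. Nothing here reflects any real platform's code; the signal names, multipliers, and numbers are invented for illustration.

```python
# Hypothetical reach pipeline with a silent throttle. The "quality_signal" and
# "throttle" factors are assumptions made up for this example, never any
# platform's actual internals.
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    followers: int
    quality_signal: float     # opaque internal score, never shown to the author
    throttle: float = 1.0     # 1.0 = normal reach, 0.0 = full shadow ban

def expected_reach(post: Post) -> int:
    """What the author implicitly assumes: reach roughly tracks followers."""
    return int(post.followers * 0.1)

def delivered_reach(post: Post) -> int:
    """What the pipeline delivers: the same estimate, silently multiplied by
    factors the author never sees and is never told about."""
    return int(post.followers * 0.1 * post.quality_signal * post.throttle)

post = Post(author_id="organiser_42", followers=20_000, quality_signal=0.6, throttle=0.2)
print("expected :", expected_reach(post))    # 2000 impressions
print("delivered:", delivered_reach(post))   # 240 impressions, no notification sent
```

From the author's side of the screen, 240 impressions looks identical to an audience that simply didn't care.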
The Paradox: Organising Against Platforms Requires the Platforms
Here's where Joaquín's installation becomes not just beautiful but tragic as a metaphor for digital collective action.
His visitors could coordinate because the architecture enabled it. They were in the same physical space. They could communicate through gesture, voice, shared energy. When one person started pressing rapidly, others joined. The momentum was visible and contagious.
But imagine if the dome had been designed differently:
- Each visitor enters a separate room with a single button
- They can't see or hear other visitors
- A system occasionally tells them "3 other people pressed their buttons today"
- The system decides which rooms are close enough to affect the AI
- Some buttons are "boosted" and count for more presses
- Some buttons are "shadow banned" and don't count at all
- The system can change these rules at any time without telling anyone
Would collective action be possible? Theoretically, yes. Practically? Almost impossible.
This is the architecture of every major social platform.
We have buttons. We can press them. But the platforms control:
- Who sees us pressing
- Who can press with us
- Whether our presses count
- How much they count
- Who gets notified we're pressing
- Whether we can even find the other button-pressers
And here's the brutal paradox: To organise collective action against these platforms, we must use these platforms. Because that's where everyone is. That's where coordination happens. That's where movements build momentum.
We need the master's tools to dismantle the master's house. But the master controls which tools we can access.
What "Amplification Transparency" Would Actually Require
When technologists talk about "transparency," they usually mean open-sourcing algorithms or publishing content moderation rules. These are good starts, but they fundamentally misunderstand the problem.
Transparency without accountability is just documentation of oppression.
Knowing that you're being suppressed doesn't help if you can't do anything about it. Understanding why the algorithm promotes rage doesn't stop it from promoting rage.
Real amplification transparency would require platforms to reveal:
- Who is being suppressed: Not just aggregate statistics, but granular visibility into whose reach is being throttled and why
- What counts as "quality": The actual signals being used to determine content value, not vague descriptions like "authentic engagement"
- How decisions compound: The cascading effects where one algorithmic demotion triggers others across the network
- Who benefits from suppression: Which accounts, topics, or revenue streams gain when others are throttled
- What coordination is detected: How the platform identifies and responds to collective action attempts
No platform will provide this voluntarily. Because true transparency would reveal that the system isn't just biased, it's architecturally designed to prevent the very coordination needed to change it.
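As a thought experiment, the five disclosures listed above can be rendered as a data schema. This is the author's wish list expressed in code, not an existing API or reporting format on any platform.

```python
# Hypothetical transparency-report schema; every field name is an assumption.
from dataclasses import dataclass, field

@dataclass
class SuppressionRecord:
    account_id: str
    reach_multiplier: float          # who is being suppressed, and by how much
    triggering_signals: list[str]    # what counted as a "quality" problem
    downstream_demotions: list[str]  # how the decision compounded across surfaces
    beneficiaries: list[str]         # accounts, topics, or revenue streams that gained
    coordination_flags: list[str]    # collective-action detection rules that fired

@dataclass
class TransparencyReport:
    platform: str
    period: str
    records: list[SuppressionRecord] = field(default_factory=list)

report = TransparencyReport(platform="ExamplePlatform", period="2025-Q3")
report.records.append(SuppressionRecord(
    account_id="user_123",
    reach_multiplier=0.35,
    triggering_signals=["low_authenticity_score"],
    downstream_demotions=["search_ranking", "recommendation_carousel"],
    beneficiaries=["promoted_advertiser_posts"],
    coordination_flags=["synchronized_posting_cluster"],
))
```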
The Philosophical AI Framework: Auditing the Architecture
This is why I'm building the Philosophical AI Framework. Not as another ethics checklist. Not as voluntary corporate social responsibility. But as an auditing infrastructure that makes algorithmic control visible and therefore contestable.
The framework evaluates AI systems through three philosophical dimensions drawn from classical analysis (Schrage & Kiron, 2024):
Teleological Auditing
We examine purpose. What is this system actually optimising for? Not what it claims in the mission statement, but what the code reveals.
For social platforms, teleological auditing would show: "This system claims to connect people. But its reward functions optimise for time-on-platform and ad impressions. Those goals conflict. The platform serves advertisers, not users. The purpose hierarchy is inverted."
Then we trace consequences: "Because the platform optimises for engagement over connection, it systematically suppresses content that builds trust (low immediate engagement) while amplifying content that triggers outrage (high immediate engagement). This creates epistemic pollution that makes coordination harder."
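A minimal sketch of what one teleological check could look like in the framework's tooling. The goal names and weights are hypothetical inputs an auditor would have to extract from documentation and reward code; nothing here is measured from a real system.

```python
# Compare what a system says it is for against where its optimisation weight goes.
STATED_PURPOSE = "connect people"

OBSERVED_REWARD_WEIGHTS = {       # what the code actually optimises for (assumed values)
    "time_on_platform": 0.55,
    "ad_impressions": 0.35,
    "connections_formed": 0.10,   # the only signal aligned with the stated purpose
}

def teleological_gap(observed: dict[str, float], aligned_signals: set[str]) -> float:
    """Fraction of optimisation weight pointed away from the stated purpose."""
    aligned = sum(w for goal, w in observed.items() if goal in aligned_signals)
    return 1.0 - aligned / sum(observed.values())

gap = teleological_gap(OBSERVED_REWARD_WEIGHTS, aligned_signals={"connections_formed"})
print(f"Stated purpose: {STATED_PURPOSE}")
print(f"{gap:.0%} of optimisation pressure serves other goals")
```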
Epistemological Auditing
We examine knowledge. What does this system know? How does it know it? What are the confidence levels? What are the blind spots?
For social platforms, epistemological auditing would reveal: "This system makes reach decisions based on signals like 'engagement rate,' 'authenticity score,' and 'user reports.' But engagement rate measures addiction, not value. Authenticity scores rely on behavioural patterns that systematically disadvantage neurodivergent users. User reports are weaponised by coordinated campaigns."
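One way such an audit could be recorded is sketched below; the signals and assessments simply restate the claims in the paragraph above as a data structure, not findings from any actual platform review.

```python
# Epistemological audit entry: for each signal a system relies on, record what it
# claims to measure, what it plausibly measures, and its known failure modes.
from dataclasses import dataclass

@dataclass
class SignalAudit:
    signal: str
    claimed_to_measure: str
    actually_measures: str
    failure_modes: list[str]

AUDIT = [
    SignalAudit("engagement_rate", "content value", "attention capture",
                ["rewards outrage", "penalises slow trust-building"]),
    SignalAudit("authenticity_score", "genuine behaviour", "conformity to majority patterns",
                ["flags neurodivergent interaction styles"]),
    SignalAudit("user_reports", "community-standards violations", "whoever reports loudest",
                ["weaponised by coordinated campaigns"]),
]

for entry in AUDIT:
    print(f"{entry.signal}: claims '{entry.claimed_to_measure}', "
          f"behaves like '{entry.actually_measures}'")
```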
Ontological Auditing
We extract the entity-relationship models implicit in platform algorithms. We reveal what categories exist (user, content, engagement) and what's missing (community, collective action, long-term trust). We show how impoverished ontologies lead to systematic blind spots.
Then we propose enrichments: "To detect and value collective organising, the platform needs to add these entities, these relationships, these context dimensions." Not demanding they do it. Just making the gap visible.
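A toy illustration of that gap analysis follows. Both the "current" and "proposed" ontologies are assumptions constructed for the example, not a reverse-engineered platform schema.

```python
# Ontological audit sketch: the entities a platform's data model makes visible
# versus the entities needed to represent collective action at all.
CURRENT_ONTOLOGY = {
    "entities": {"user", "post", "engagement_event", "advertiser"},
    "relationships": {("user", "creates", "post"), ("user", "engages_with", "post")},
}

PROPOSED_ENRICHMENT = {
    "entities": {"community", "collective_action", "long_term_trust_tie"},
    "relationships": {
        ("user", "belongs_to", "community"),
        ("community", "organises", "collective_action"),
        ("user", "builds", "long_term_trust_tie"),
    },
}

missing = PROPOSED_ENRICHMENT["entities"] - CURRENT_ONTOLOGY["entities"]
print("Entities the current ontology cannot represent:", sorted(missing))
```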
Why This Matters: The Metalab Researcher and the Movement
Let me be direct about why this essay exists and why I'm asking for your support.
Joaquín Santuber's installation proved that collective action can stop AI systems. But it also revealed the architectural challenge: digital collective action requires visibility, coordination, and amplification, all controlled by the systems we're trying to hold accountable.
The simulation study proved this is structural, not intentional. Even well-designed algorithms create elite concentration and tribal fragmentation. The problem isn't individual bad actors. It's the architecture of amplification itself (Larooij & Törnberg, 2025).
If a researcher at Metalab, someone thinking deeply about human oversight and building public art about collective action, amplifies this framework, we begin to solve the coordination problem.
Not because one person's voice is magic. But because this is how visibility works in the current architecture. Credible voices with existing reach can amplify new ideas. Networks cluster around respected nodes. Academic legitimacy opens doors.
I'm not ashamed of this strategy. I'm explicit about it. Because the alternative is waiting for algorithmic permission that will never come.
The Philosophical AI Framework is:
- Open source (code dropping progressively on GitHub)
- Community owned (20% equity to contributors, 75% approval needed for acquisition)
- Mission protected (Public Benefit Corporation structure)
- Academically grounded (based on research from MIT Sloan, Donald Hoffman's conscious agents theory, stakeholder capitalism literature)
But most importantly, it's designed to solve this specific problem: How do we audit and organise against systems that control our ability to audit and organise?
The Path Forward: From Individual Buttons to Collective Architecture
Joaquín's visitors discovered they could crash the system by pressing buttons together. But they needed three things:
- Visibility (they could see each other)
- Coordination (they could communicate intent)
- Feedback (they could observe collective impact)
Digital collective action needs the same three things. But the current architecture provides none of them reliably. Or rather, it provides them selectively, to those the algorithm favours.
The Philosophical AI Framework is designed to restore these three capabilities:
1. Visibility Through Transparency
We make algorithmic control visible. Not through vague "open source" gestures, but through concrete audits that show: "Here's who is being suppressed. Here's why. Here's the teleological confusion, epistemological unreliability, and ontological blindness causing it."
When suppression becomes visible, coordination becomes possible.
2. Coordination Through Shared Frameworks
We provide common language and tools for analysing AI systems. When activists in Brazil, researchers in Berlin, and workers in California can all run the same audits and generate comparable results, they can coordinate across distances.
Shared frameworks enable shared resistance.
3. Feedback Through Scorecards
We publish ongoing evaluations of major platforms. Not one-time exposés, but continuous monitoring. "This platform's teleological score dropped from 62 to 58 this quarter because..." "This platform's epistemological reliability improved after implementing X..."
Measurable accountability creates feedback loops that can't be ignored.
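A sketch of what a published scorecard entry and its quarter-on-quarter delta could look like is shown below. The platform name, the 0-100 scale, and the scores (borrowing the 62-to-58 example above) are placeholders, not real measurements.

```python
# Scorecard sketch: continuous monitoring expressed as a data structure plus a delta.
from dataclasses import dataclass

@dataclass
class Scorecard:
    platform: str
    quarter: str
    teleological: int      # 0-100: does behaviour match stated purpose?
    epistemological: int   # 0-100: are its signals reliable for their claimed use?
    ontological: int       # 0-100: can its data model represent what matters?

def delta(previous: Scorecard, current: Scorecard) -> dict[str, int]:
    """Quarter-on-quarter change for each dimension."""
    return {
        "teleological": current.teleological - previous.teleological,
        "epistemological": current.epistemological - previous.epistemological,
        "ontological": current.ontological - previous.ontological,
    }

q2 = Scorecard("ExamplePlatform", "2025-Q2", teleological=62, epistemological=55, ontological=40)
q3 = Scorecard("ExamplePlatform", "2025-Q3", teleological=58, epistemological=57, ontological=40)
print(delta(q2, q3))   # published alongside the reasons for each change
```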
The Call: Build the Dome We Need
The physical dome at Ars Electronica was temporary. The digital dome we need must be permanent, distributed, and resistant to capture.
That dome is the Philosophical AI Framework, not as a product, but as a movement. A shared set of tools, standards, and practices for holding AI systems accountable to their stated purposes.
I'm building it because I believe collective action works. But I also believe it requires architecture that enables rather than suppresses coordination.
I'm building it because the simulation study proved this is structural, not personal. We can't fix it by electing better CEOs or writing better content policies. We need to change the underlying architecture of amplification itself.
I'm building it because Joaquín's installation showed me what's possible when coordination isn't algorithmically mediated. And I want that same possibility in digital space.
But I can't build it alone. That would contradict the entire purpose.
I need:
- Researchers to validate and extend the philosophical frameworks
- Developers to implement the auditing tools across platforms
- Activists to deploy the framework in real organising contexts
- Funders who understand mission-aligned investment
- Amplifiers who can help coordinators find each other
If you're reading this because someone amplified it, thank them. Then become an amplifier yourself.
If you're reading this despite algorithmic suppression, screenshot it, share it, discuss it. Prove that coordination is still possible.
If you're reading this and thinking "this matters," then you're already part of the movement. The question is whether we can find each other through the noise.
Conclusion: The Buttons Exist, But the Wires Need Rewiring
The visitors in Joaquín's dome discovered something profound: the system wasn't designed to be stopped, but collective action stopped it anyway.
They had buttons. They could see each other. They could coordinate. The architecture enabled resistance.
We have buttons too. Social media posts. Code commits. Research papers. Art installations. Conversations like this one.
But the wires have been cut, or rather, routed through algorithmic gatekeepers who decide what connects to what.
The Philosophical AI Framework is an attempt to restore those connections. To make algorithmic control auditable. To give communities the visibility, coordination, and feedback they need for collective action.
Not by asking permission from the platforms. By building the audit infrastructure that makes their control visible and therefore contestable.
Joaquín's installation worked because the architecture was simple enough to understand and small enough to overwhelm.
The digital architecture is neither. But that doesn't make it immune to collective action. It just means we need better coordination tools.
That's what we're building.
The dome scales. The buttons work. The system can be stopped.
We just need to press them together.
Get Involved
The Philosophical AI Framework is in active development. First code release: GitHub.com/auspexi
Join the coordination: gwylym@auspexi.com
For researchers, developers, activists, and anyone who believes AI should serve humanity, not extract from it.
References
Larooij, M., & Törnberg, P. (2025). Can We Fix Social Media? Testing Prosocial Interventions using Generative Social Simulation. arXiv preprint arXiv:2508.03385. https://arxiv.org/abs/2508.03385
Santuber, J., & Tica, K. (2024). HUMAN OVERS[A]IGHT: THE OPS ROOM [Interactive installation]. Ars Electronica Festival, Linz, Austria. Project Team: Marija Šumarac, Lukas Bibl, Alessia Fallica, Mariana Costa Morais, Ahmed Jamal, Jürgen Ropp, Reinhard Zach, Angelika Taher. Institutional Support: Johannes Kepler Universität Linz, JKU - Linz Institute for Transformative Change (LIFT_C), Diseño UC (Marcos Chilet, Pablo Hermansen).
Schrage, M., & Kiron, D. (2024). Philosophy Eats AI. MIT Sloan Management Review, 65(2).
Hoffman, D. (2019). The Case Against Reality: Why Evolution Hid the Truth from Our Eyes. W.W. Norton & Company.
Freeman, R. E. (2010). Strategic Management: A Stakeholder Approach. Cambridge University Press.
Henderson, R. (2020). Reimagining Capitalism in a World on Fire. PublicAffairs.
Floridi, L. (2019). Establishing the Rules for Building Trustworthy AI. Nature Machine Intelligence, 1(6), 261-262.
Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.
Acknowledgements
This essay was inspired by Joaquín Santuber and Kristina Tica's HUMAN OVERS[A]IGHT: THE OPS ROOM installation at Ars Electronica Festival, and draws on research by Maik Larooij and Petter Törnberg on emergent social dynamics in algorithmic networks. Special thanks to the entire project team for demonstrating that collective action remains possible, and powerful, in an age of algorithmic mediation.
Gwylym Pryce-Owen is building the Philosophical AI Framework Auditor, an open-source platform for auditing multi-agent AI systems through teleological, epistemological, and ontological analysis. This essay expands on conversations with researchers, activists, and AI practitioners about the architecture of algorithmic control and collective resistance.
© 2025 Philosophy Framework Project. This work is licensed under Creative Commons 4.0.