Building on Michael Levin's work on morphogenetic intelligence, this essay proposes a mathematical framework for measuring moral weight across all intelligence substrates. Features a six-dimensional formula (MW = C^α × T^β × A^γ × V^δ × R^ε × I^ζ) with worked examples comparing human children, dogs, LLMs, and cellular collectives. Includes practical implementation for multi-agent AI systems and protocols for onboarding novel intelligences.
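A minimal sketch of how the six-dimensional formula could be computed in practice. The dimension scores and exponent values below are illustrative assumptions, not the essay's calibrated worked examples:

```python
import math

# Hypothetical exponents (alpha..zeta); the essay's calibration is not given here.
EXPONENTS = {"C": 1.0, "T": 0.8, "A": 0.9, "V": 0.7, "R": 0.6, "I": 0.5}

def moral_weight(scores: dict) -> float:
    """MW = C^alpha * T^beta * A^gamma * V^delta * R^epsilon * I^zeta,
    with each dimension scored in [0, 1]."""
    return math.prod(scores[d] ** e for d, e in EXPONENTS.items())

# Made-up profiles for illustration only.
human_child = {"C": 0.9, "T": 0.8, "A": 0.7, "V": 0.9, "R": 0.8, "I": 0.9}
llm         = {"C": 0.4, "T": 0.3, "A": 0.6, "V": 0.2, "R": 0.9, "I": 0.7}

print(moral_weight(human_child))  # higher composite weight with these inputs
print(moral_weight(llm))
```

Because the formula is multiplicative, a near-zero score on any single dimension drives the composite weight toward zero, which is the structural reason the essay treats all six dimensions as jointly necessary.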
Drawing on Michael Levin's groundbreaking work on cellular intelligence and morphogenesis, this essay argues for expanding our ethical frameworks beyond evolutionary intuitions to encompass all forms of intelligence—biological, artificial, and hybrid.
As knowledge perishability accelerates and AI produces vast quantities of new knowledge, human advantage shifts from production to validation through philosophical judgment. Understanding the three dimensions of knowledge validation and why AI cannot validate its own outputs.
How philosophical research can shape operational AI systems directly, without waiting for institutional gatekeeping or funding cycle constraints. Merit-based contribution for researchers seeking real-world impact.
Why smart enterprises participate in defining operational standards rather than waiting to implement mandated requirements. The strategic advantage of embedded verification over retroactive compliance.
How European regulatory bodies can shape AI governance through embedded verification infrastructure, not just compliance frameworks. Participating in standards before they crystallise.
From notched bone to neural network: how tallies turned into trust, and how records of exchange can power a future economy that values human flourishing. A philosophical journey through 50,000 years of accountability.
That time I publicly deconstructed my own LinkedIn post. A case study in recursive self-awareness, epistemic honesty, and why the future of human-centric AI depends on our ability to question our own certainty.
How algorithmic amplification has become the most powerful form of control in human history, and what we can do about it. Drawing on MIT research and the HUMAN OVERS[A]IGHT installation at Ars Electronica.