Essays

Poetry, Proof and Prosperity: How Ancient Ledgers Reveal What AI Must Become

How tallies turned into trust, and how records of exchange can power a future economy that values human flourishing.

By Gwylym Pryce-Owen

Fifty thousand years ago someone on a cold night picked up a bone and, with a sharp stone, made a small notch. That notch was not merely a mark. It was a record of a hunt, a count of years, a memory of a tally. Archaeologists find such bones and read in those notches the first intimations of what would become writing and what would become money. The urge to keep account, to remember and to make obligations visible across time and space, is the seed from which complex human society sprouted.

That notch was humanity's first teleological declaration: this matters enough to record. Purpose made visible. Value made permanent.

From Notched Bone to Clay Ball to Baked Tablet

Fast forward to the ancient Near East. Long before there were alphabets or epics in the form we now recognise, there were clay balls and tokens. Small clay tokens were used to represent quantities of grain, livestock, labour and other goods. To manage obligations and trade, those tokens were sometimes enclosed within clay envelopes, their shapes impressed on the outside so the contents could be verified without breaking the envelope. Over time the impressed signs on clay surfaces became marks in their own right. The marks were durable. The marks could travel. That practical shift, from physical token to symbolic inscription, is the origin story of writing.

But notice what happened here: the physical token became a symbol. The symbol became trusted. Trust required epistemological innovation: mechanisms to verify that the mark meant what it claimed to mean. The envelope was an early audit trail. The impressed seal was cryptographic proof before cryptography had a name.

Those early tallies were not poetry. They were accounts. They recorded obligations, transfers and debts. But from that practical kernel sprang something larger. Records made promises enforceable across distance and time. They allowed strangers to transact beyond kinship. They allowed travellers, traders and bureaucrats to coordinate activity on a scale that oral memory alone could not support. The clay tablets of Sumer are not only accounting ledgers. They are the scaffolding of institutions: temples, storerooms, courts and markets. Where account is possible, trust can be routinised and extended beyond face-to-face relations.

This is ontological expansion: the world grew richer because we could model more of it. Relationships that previously existed only in memory could now exist in clay. Obligations that dissolved with death could now persist across generations. The categories of "mine" and "yours" and "owed" could travel beyond the village.

Blood Feuds, Compensation and the Invention of Civic Order

Across Europe, Germanic and other tribal societies developed different mechanisms to avert cycles of vengeance. Private retaliation that could escalate into inter-family warfare was costly for everyone involved, so societies evolved a solution: compensation. The infamous wergild, or man-price, assigned a value to injury and death. It institutionalised the idea that harm could be reckoned, accounted for and remedied without perpetual bloodletting. Compensation was an early legal ledger and a civic innovation: it declared that social relations could be mediated by rules and payments rather than force.

What both the clay token and the wergild share is an insight about human flourishing: stable cooperation requires mechanisms that make obligations visible and enforceable. When obligations can be measured and adjudicated, resources become allocable beyond immediate kin. That capacity scales. It allows specialisation to emerge. Farmers can feed craftsmen. Craftsmen can trade with sailors. Cities grow. Ideas travel. Once societies can pool and preserve surplus, not only survival but also aesthetic life becomes possible.

Modern stakeholder capitalism theory, articulated by R. Edward Freeman in his seminal 1984 work Strategic Management: A Stakeholder Approach, echoes this ancient wisdom: sustainable systems must account for obligations to all parties, not just the immediate transactors. Rebecca Henderson's 2020 Reimagining Capitalism in a World on Fire extends this further, arguing that capitalism itself requires reinvention around stakeholder value rather than shareholder primacy if it is to survive the 21st century.

The wergild was stakeholder capitalism in embryonic form: it recognised that harm to one affects all, and that all must have mechanisms for redress.

Poetry, Beauty and Surplus

When people are no longer wholly occupied with meeting subsistence needs, art enters the scene. Poetry and music seem like luxuries, yet they are essential refinements of human life. The same institutions that enable trade and law enable the conditions for aesthetic labour. Scribes who maintained records could also copy myths. Temple treasuries that held wealth funded patrons who sponsored festivals. The capacity to keep account permits a world in which not everything is instrumentally reducible. The tallies in clay led to ledgers in temples. The ledgers supported culture. Culture gave us meaning beyond mere survival.

This is not to romanticise. Accounting systems and legal codes could also become instruments of control. Ledgers could be used by elites to extract surplus. Nevertheless, the capacity to record and to verify made possible forms of cooperation that no oral culture could sustain at large scale. In the long arc, record keeping and public obligations are prerequisites for complex civilisations and for the high intellectual achievements that would follow.

Here we see the first teleological confusion: systems designed to enable cooperation were often captured to serve extraction. The stated purpose (facilitate trade, ensure justice) conflicted with the actual optimisation target (concentrate power, extract surplus). This pattern will recur.

Leaps in Technology and the Route to Artificial Intelligence

Societies that mastered measurement, that disciplined trade, taxation and law, freed minds to do other things. Mathematics arose as a tool of commerce and architecture. Metallurgy advanced because mines and markets made iron worth the labour of extraction. Navigation marched from coast-hugging to celestial plotting because the rewards of long-distance exchange justified the investment in complex instruments.

Those cumulative changes created what we now call modernity. The scientific revolution was possible in part because institutions supported the storage and dissemination of verified knowledge. Printing made records widely available. Universities and academies formalised enquiry. The industrial revolution mechanised labour. Computation, built on centuries of abstraction, compressed measurement into lightning-fast operations. Each step rests on prior institutional innovations that originated with the tally and with accountability.

Artificial intelligence is a technological leap that synthesises this entire history of record, model and prediction. AI systems are, at core, mechanisms to encode patterns and to predict. They operate within a world that has been shaped by written contracts, markets and institutions. The capacities that led from notched bone to clay tablet now allow machines to model language, images and decisions at scales unimaginable to our ancestors.

But here's the critical question that Michael Schrage and David Kiron posed in their 2024 MIT Sloan Management Review article "Philosophy Eats AI": What purposes are we encoding?

They observe that every AI system embodies philosophical choices, whether acknowledged or not. What counts as knowledge? What is the purpose of this system? How should reality be represented? These are not technical questions. They are philosophical ones. And most organisations deploy AI without ever asking them.

The result is what they call "teleological confusion": conflicting purposes embedded in the same system, with no framework for resolving the tension. Just as ancient ledgers could serve either cooperation or extraction, modern AI systems optimise for metrics that may or may not align with human flourishing.

But technology and poetry must remain intertwined. The challenge is not merely to compute faster. The challenge is to ensure that the wealth of representation and prediction contributes to human flourishing rather than to its narrowing.

The Future Crossroads: Reimagining Economy for Human Flourishing

We stand now at another crossroads. The computational devices that once automated calculation are poised to automate judgement and creation in some domains. This raises urgent questions about the distribution of value and about what a flourishing human life looks like when certain kinds of toil are dramatically cheaper to perform. If we imagine human flourishing as a life where individuals can pursue art, friendship, deliberation and self-direction, then we must ask how economic systems can be restructured to provide the means to pursue those higher goods.

A 2025 study by Larooij and Törnberg (Can We Fix Social Media? Testing Prosocial Interventions using Generative Social Simulation, arXiv:2508.03385) revealed something unsettling: the authors built a simulated social network populated entirely by bots and found that, regardless of algorithm design, users divided into tribal factions with centralised elites controlling discourse, not because of human malice but because of structural amplification dynamics.

This is an ontological warning: the categories and relationships we encode in our systems determine what outcomes are even possible. If we model users as "engagement maximisers" rather than "flourishing humans," we will build systems that optimise for addiction rather than wellbeing. The model is the constraint.

One test is how we treat the emergence of personal AI agents. Imagine that every household has a lightweight personal AI: a trusted assistant that manages appointments, helps with learning, assists in creative work and augments decision making. Those personal AIs can create value in digital markets, producing content, designs or data services that can be transacted via digital credits. If these credits are distributed fairly, they can become a medium of exchange that reflects contributions measured in new modalities. If the economy recognises and pays for the value that such agents create, then human time can be liberated from rote tasks and redirected toward the development of character, craft, exploration and community.

But this possibility requires intentional design. Left to unchecked market dynamics, the gains of automation will accrue to those who already control capital and the means of production. Historically, when new technologies amplified productivity, distributional conflicts followed. Sometimes societies adapted through progressive institutions, social insurance and broader participation. Other times inequalities hardened. The next phase will be decided by policy and institutional design.

This is where the Philosophical AI Framework becomes essential. We need operational tools to audit whether AI systems are actually serving their stated purposes or whether they've been captured by narrow optimisation targets.

The Three Pillars: Teleology, Epistemology, Ontology

Just as the clay envelope provided verification for the tokens within, we need verification mechanisms for AI systems. The Philosophical AI Framework Auditor provides three lenses:

1. Teleological Auditing: What Is the True Purpose?

Every AI system acts toward some end. The question is: whose end? And do those ends align?

When we analyse modern AI systems, we often find severe misalignment:

  • Stated purpose: "Connect people" or "Provide excellent customer service"
  • Actual optimisation target: Maximise engagement time or minimise support costs
  • Hidden purpose: Extract attention for advertising revenue
  • Unstated consequence: Fragment communities, burn out workers, erode trust

The teleological auditor extracts actual purposes from code, not from marketing materials. It builds goal hierarchy maps. It identifies stakeholder impacts. It scores alignment between mission and implementation.

This is the modern equivalent of breaking the clay envelope to verify the tokens match the seal.
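
To make this concrete, here is a minimal sketch in Python of what a teleological audit record might look like. The PurposeAudit class, its fields and the alignment heuristic are illustrative assumptions, not the auditor's actual interface.

    # A minimal sketch, assuming a hypothetical PurposeAudit record; the
    # alignment heuristic below is illustrative, not the auditor's real scoring.
    from dataclasses import dataclass, field

    @dataclass
    class PurposeAudit:
        stated_purpose: str              # what the mission statement claims
        optimisation_targets: list[str]  # what the code actually maximises
        stakeholder_impacts: dict[str, float] = field(default_factory=dict)  # -1.0 harm to +1.0 benefit

        def alignment_score(self, declared_targets: set[str]) -> float:
            """Fraction of actual optimisation targets that appear in the declared goal hierarchy."""
            if not self.optimisation_targets:
                return 1.0
            matched = sum(1 for t in self.optimisation_targets if t in declared_targets)
            return matched / len(self.optimisation_targets)

    audit = PurposeAudit(
        stated_purpose="Connect people",
        optimisation_targets=["session_length", "ad_impressions"],
        stakeholder_impacts={"users": -0.4, "advertisers": 0.8, "moderators": -0.6},
    )
    # 0.0: none of the real optimisation targets match the declared goals
    print(audit.alignment_score(declared_targets={"meaningful_connection"}))

Even a crude score like this makes the gap between mission statement and optimisation target visible and discussable.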

2. Epistemological Auditing: Can We Trust the Knowledge?

The clay tablets of Sumer had provenance: we knew who impressed the seal, when, and what authority backed it. Modern AI systems make decisions based on data of wildly varying reliability, often without acknowledging confidence levels.

The epistemological auditor asks:

  • Where did this knowledge originate?
  • How current is it?
  • What's the confidence level?
  • Does the system know what it doesn't know?

Donald Hoffman's work on conscious agents (2019) argues that our perceptions are not faithful representations of reality but useful fictions optimised for survival. AI systems inherit this problem but lack the metacognitive awareness to question their own models.

The epistemological auditor builds knowledge graphs with provenance tracking. It scores data reliability. It flags epistemic overconfidence. It makes uncertainty visible.
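
As a sketch of what provenance-aware knowledge might look like in code, the snippet below attaches source, verifier, timestamp and confidence to a single claim and decays that confidence as the claim ages. The Claim structure, the half-life rule and the 0.5 threshold are assumptions for illustration, not the auditor's implementation.

    # A minimal sketch, assuming a hypothetical Claim record; the half-life
    # decay rule is illustrative, not the auditor's real reliability scoring.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Claim:
        statement: str
        source: str            # where the knowledge originated
        verified_by: str       # who or what checked it
        observed_at: datetime  # how current it is
        confidence: float      # 0.0 to 1.0, as reported by the source

        def effective_confidence(self, now: datetime, half_life_days: float = 180.0) -> float:
            """Decay the reported confidence as the claim ages, so staleness stays visible."""
            age_days = (now - self.observed_at).days
            return self.confidence * (0.5 ** (age_days / half_life_days))

    claim = Claim(
        statement="Applicant completed safety training",
        source="training_provider_api",
        verified_by="instructor_signoff",
        observed_at=datetime(2023, 1, 15),
        confidence=0.95,
    )
    score = claim.effective_confidence(now=datetime(2025, 1, 15))
    if score < 0.5:
        print(f"Flag for re-verification (effective confidence {score:.2f})")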

3. Ontological Auditing: What Categories Does the System See?

The wergild assigned values to lives and injuries. It created categories: freeman, slave, compensation tiers. Those categories shaped what justice was possible.

Modern AI systems inherit impoverished ontologies:

  • Social platforms see "users," "posts," "engagement" but not "communities," "trust," "collective wellbeing"
  • Hiring algorithms see "qualifications" but not "potential," "context," "systemic barriers"
  • Credit scoring sees "payment history" but not "financial shocks," "family obligations," "structural disadvantage"

The ontological auditor extracts implicit entity-relationship models. It reveals blind spots. It proposes enrichments: "To recognise collective action, you need these entities, these relationships, these temporal dynamics."

Ontological poverty leads to systematic injustice, even when intentions are good.
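
A minimal sketch of an ontological gap report follows: it compares the entities and relationships a system actually models against a richer reference ontology and lists what the system cannot represent. Both ontologies below are toy examples, not taken from any real platform or from the framework itself.

    # A toy ontological gap report; both ontologies are illustrative examples.
    system_ontology = {
        "entities": {"user", "post", "engagement"},
        "relationships": {("user", "creates", "post"), ("user", "engages_with", "post")},
    }

    reference_ontology = {
        "entities": {"user", "post", "engagement", "community", "trust", "collective_wellbeing"},
        "relationships": {
            ("user", "creates", "post"),
            ("user", "engages_with", "post"),
            ("user", "belongs_to", "community"),
            ("community", "accumulates", "trust"),
        },
    }

    missing_entities = reference_ontology["entities"] - system_ontology["entities"]
    missing_relationships = reference_ontology["relationships"] - system_ontology["relationships"]

    print("Blind spots:", sorted(missing_entities))
    for subject, predicate, obj in sorted(missing_relationships):
        print(f"Cannot represent: {subject} --{predicate}--> {obj}")

The output is the list of blind spots: concepts the system literally cannot reason about because they are missing from its model of the world.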

Evidence Bundles and the Architecture of Trust

The future economy of human flourishing requires trust at scale. Not faith-based trust, but evidence-based trust. This is where evidence bundles enter.

An evidence bundle is a portable, cryptographically verifiable package that travels with claims:

  • The claim itself (Alice completed training X)
  • Supporting data (test scores, instructor sign-off, curriculum hash)
  • Provenance chain (who verified, when, based on what)
  • Confidence metadata (reliability scores, known limitations)
  • Privacy controls (what's visible to whom, under what conditions)

This is the clay envelope pattern, digitised and scaled. Evidence bundles allow AI agents to transact with humans and with each other in ways that are auditable, fair and resistant to fraud.
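
A minimal sketch of such a bundle, assuming a JSON payload and a shared-secret HMAC as a stand-in for true digital signatures, might look like this. The make_bundle and verify_bundle helpers and every field name are hypothetical, chosen to mirror the list above rather than any existing implementation.

    # A minimal sketch of an evidence bundle; a real system would use proper
    # digital signatures rather than this shared-secret HMAC stand-in.
    import hashlib, hmac, json

    def make_bundle(claim: dict, provenance: list, confidence: float,
                    visibility: dict, signing_key: bytes) -> dict:
        payload = {
            "claim": claim,            # e.g. {"subject": "alice", "completed": "training-x"}
            "provenance": provenance,  # who verified, when, based on what
            "confidence": confidence,  # reliability score
            "visibility": visibility,  # what is visible to whom
        }
        body = json.dumps(payload, sort_keys=True).encode()
        payload["digest"] = hashlib.sha256(body).hexdigest()
        payload["signature"] = hmac.new(signing_key, body, hashlib.sha256).hexdigest()
        return payload

    def verify_bundle(bundle: dict, signing_key: bytes) -> bool:
        body = json.dumps({k: bundle[k] for k in ("claim", "provenance", "confidence", "visibility")},
                          sort_keys=True).encode()
        expected = hmac.new(signing_key, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, bundle["signature"])

    key = b"issuer-secret"
    bundle = make_bundle(
        claim={"subject": "alice", "completed": "training-x"},
        provenance=[{"verified_by": "instructor", "date": "2025-03-01", "basis": "test_scores"}],
        confidence=0.9,
        visibility={"employer": ["claim", "confidence"]},
        signing_key=key,
    )
    print(verify_bundle(bundle, key))            # True
    bundle["claim"]["completed"] = "training-y"  # tamper with the claim
    print(verify_bundle(bundle, key))            # False: tampering is detectable

The particular primitives matter less than the pattern: the claim and the means of verifying it travel together, as token and envelope once did.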

Imagine a world where:

  • Your personal AI earns credits by producing valuable synthetic data or creative assets
  • Those credits come with evidence: what was created, how, verified by whom
  • You receive universal basic income not as welfare but as a stakeholder dividend from the digital economy your data helped build
  • Markets reward prosocial behaviour because evidence bundles make cooperation visible and verifiable

This is not utopian fantasy. It is institutional engineering informed by millennia of ledger design.

A Ledger of Life

From bone yard to the ledger of life,
we have always been tallying what matters.

First: the hunt, the season, the count of moons.
Then: the grain, the debt, the promise to pay.
Then: the wergild, the compensation, the peace price.
Then: the contract, the corporation, the market.

Now: the algorithm, the agent, the optimiser.

Each advance asks: What do we choose to measure?
What do we trust?
What do we value?

The answer matters because measurement shapes behaviour.
Behaviour builds institutions.
Institutions determine who flourishes and who is ground down.

We are not helpless before the machines we make.
Tools can bind if we don't ask what they're built for.
Witnesses can forget what they were meant to see.

So we audit.

We ask: What is your purpose? (Teleology)
We ask: Can we trust what you know? (Epistemology)
We ask: What do you see, and what are you blind to? (Ontology)

These are not new questions.
They are the questions of the notched bone,
the clay envelope, the wergild council,
the merchant's ledger, the scientist's notebook.

Imagine credits that track kindness and craft,
digital coins that record not only price but promise.
Imagine cities built on verified exchange,
where evidence bundles travel with intent,
and privacy is the lock, not the loophole.

This is not fantasy for poets alone.
It is ledger, poem, promise, hard code.
It is the marriage of proof and beauty,
where audit trails look like sonnets,
and verification smells like bread at dawn.

Do not mistake caution for paralysis.
We must build with humility and courage.
We must design for human flourishing, not spectacle.
We must insist that the tools we make widen room for choice,
not shrink it to a single, cold outcome.

So tend your tally with care.
Teach your children the difference between value and vanity.
Cultivate craft, not just clicks.
Reward the mind that produces,
not the voice that shouts loudest.

From bone to tablet to ledger to machine,
we have always been inventing
how we keep faith with each other.

Let the next invention be a ledger of life,
where poetry meets proof,
and prosperity is measured
by how many hands can play.

Carry a bookmark of this vow.
Build small, test loud, prove often.
Keep the lights on for makers, for dreamers,
for those who fail and rise.

Let the future be shaped by evidence and by song.

From bone yard to the ledger of life,
we write, we sign, we sing,
and we prosper together.


Bibliography

Freeman, R. E. (1984). Strategic Management: A Stakeholder Approach. Pitman.

Henderson, R. (2020). Reimagining Capitalism in a World on Fire. PublicAffairs.

Schrage, M., & Kiron, D. (2024). Philosophy Eats AI. MIT Sloan Management Review, 65(2).

Larooij, M., & Törnberg, P. (2025). Can We Fix Social Media? Testing Prosocial Interventions using Generative Social Simulation. arXiv preprint arXiv:2508.03385.

Hoffman, D. (2019). The Case Against Reality: Why Evolution Hid the Truth from Our Eyes. W.W. Norton & Company.

Floridi, L. (2019). Establishing the Rules for Building Trustworthy AI. Nature Machine Intelligence, 1(6), 261-262.

Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.


Get Involved

The Philosophical AI Framework is in active development. Join us in building teleological, epistemological, and ontological auditing infrastructure for the future economy of human flourishing.

Code: GitHub.com/auspexi

Contact: gwylym@auspexi.com


Gwylym Pryce-Owen is building the Philosophical AI Framework Auditor, an open-source platform for auditing multi-agent AI systems through teleological, epistemological, and ontological analysis. This essay extends themes from his work on stakeholder-aligned AI and evidence-based synthetic data systems.


© 2025 Philosophy Framework Project. This work is licensed under Creative Commons 4.0.