Schedule a live demonstration to see how we analyze AI workflows through philosophical dimensions—teleology, epistemology, and ontology.
A 45-minute session where we analyze your workflow (or a sample) in real time. See the three philosophical engines in action.
We're setting up our Zoom demo system. Check back soon, or apply for the beta to be notified.
Get early access to the framework. We're working with select organizations to validate and refine the system.
Read the complete technical and philosophical foundation
The Problem: When Algorithms Forget Their Purpose
In a glass tower in San Francisco, an AI agent makes its ten thousandth decision of the day. It routes a customer service inquiry to the cheapest available option, prioritizing response time over resolution quality. It's doing exactly what it was trained to do: optimize for efficiency metrics.
What it cannot know—what it was never taught to consider—is that this customer has been loyal for twelve years, that they're considering leaving, that a thoughtful human conversation might save a $50,000 lifetime value relationship. The agent sees patterns. It cannot see purpose.
This is not a failure of the AI. It is a failure of philosophy.
"We have built systems that optimize brilliantly for the metrics we give them, but we have not taught them to question whether those metrics capture what we actually value."
In their seminal MIT Sloan Management Review article "Philosophy Eats AI," Michael Schrage and David Kiron argue that as software ate the world and AI ate software, philosophy is now eating AI.
They observe that every AI system embodies philosophical choices—whether acknowledged or not. What counts as knowledge? What is the purpose of this system? How should reality be represented? These are not technical questions. They are philosophical ones.
Yet most organizations deploy AI without ever asking these questions.
Core Question: What is this system's true purpose, and who does it serve?
We analyze workflow configurations, node connections, and optimization targets to identify what the system actually optimizes for, not what the documentation claims. We map stakeholder impact, weigh short-term against long-term effects, and detect teleological confusion, where a system serves contradictory purposes. A simplified sketch of this kind of check follows the three dimensions below.
Core Question: What knowledge does this system use, and can we trust it?
We classify data sources by reliability, track provenance, distinguish pattern from understanding, assess hallucination risks, and evaluate epistemic humility. Does the system know what it doesn't know?
Core Question: How does this system represent reality, and what does it miss?
We extract entities, map relationships, evaluate context awareness, measure representation fidelity, and identify ontological blind spots. We ensure systems see the world as it is, not just as simplified models suggest.
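To make this concrete, here is a deliberately simplified sketch of the kind of first-pass checks the three engines run against a workflow configuration. The config format, field names ("purpose", "optimize_for", "sources", "entities"), and checks are illustrative placeholders, not a real platform's schema or our production logic.

```python
import json

def first_pass_checks(config_path: str) -> dict:
    """Run simplified teleology, epistemology, and ontology checks on a workflow config.

    All field names used here are illustrative placeholders, not a real
    platform's schema.
    """
    with open(config_path) as f:
        workflow = json.load(f)

    findings = {"teleology": [], "epistemology": [], "ontology": []}

    # Teleology: does any node optimize for a metric the stated purpose never mentions?
    declared = workflow.get("purpose", "")
    for node in workflow.get("nodes", []):
        target = node.get("optimize_for")
        if target and target not in declared:
            findings["teleology"].append(
                f"Node '{node.get('id', '?')}' optimizes '{target}', "
                "which the declared purpose does not mention."
            )

    # Epistemology: are there data sources with no provenance or reliability rating?
    for source in workflow.get("sources", []):
        if not source.get("provenance"):
            findings["epistemology"].append(
                f"Source '{source.get('name', '?')}' has no provenance record."
            )
        if source.get("reliability") is None:
            findings["epistemology"].append(
                f"Source '{source.get('name', '?')}' has no reliability rating."
            )

    # Ontology: are entities a node relies on missing from the data model entirely?
    modeled = {e.get("name") for e in workflow.get("entities", [])}
    for node in workflow.get("nodes", []):
        for entity in node.get("reads", []):
            if entity not in modeled:
                findings["ontology"].append(
                    f"Node '{node.get('id', '?')}' reads '{entity}', "
                    "which the data model does not represent."
                )

    return findings
```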
The framework generates actionable scores (0-100) across all three dimensions, identifying specific improvements. We don't just audit—we provide the path forward.
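The scores themselves can be represented simply. The structure below is a hypothetical illustration of a three-dimension report, not our actual output format; the workflow name, scores, and findings are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DimensionScore:
    """A 0-100 score for one philosophical dimension, with the findings behind it."""
    score: int
    findings: list = field(default_factory=list)

@dataclass
class WorkflowReport:
    """Hypothetical three-dimension report for a single analyzed workflow."""
    workflow_id: str
    teleology: DimensionScore     # purpose: what the system actually optimizes for
    epistemology: DimensionScore  # knowledge: what it relies on, and whether it can be trusted
    ontology: DimensionScore      # representation: how it models the world

    def weakest_dimension(self) -> str:
        """Name the dimension most in need of improvement."""
        scores = {
            "teleology": self.teleology.score,
            "epistemology": self.epistemology.score,
            "ontology": self.ontology.score,
        }
        return min(scores, key=scores.get)

# Invented example values:
report = WorkflowReport(
    workflow_id="customer-routing-v3",
    teleology=DimensionScore(42, ["Optimizes response time, not resolution quality"]),
    epistemology=DimensionScore(68, ["No provenance tracking on CRM data"]),
    ontology=DimensionScore(55, ["Customer tenure and lifetime value not represented"]),
)
print(report.weakest_dimension())  # -> teleology
```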
We're not waiting for regulations to dictate AI governance. We're building the standard that others will adopt—regulators, enterprises, and AI platforms.
As a Public Benefit Corporation, we're mission-locked: our framework serves stakeholder alignment, not just shareholder returns. We can't be acquired and redirected. We can't compromise the mission for short-term gains.
This is infrastructure for the AI economy. Join us in building it.
Written by Gwylym Pryce-Owen, Founder of Auspexi
Connect on LinkedIn →