Traditional governance frameworks love maturity models. Level 1 through Level 5. Initial, Repeatable, Defined, Managed, Optimizing. Organizations benchmark themselves, identify gaps, and create roadmaps to higher maturity. Consultants build practices around assessing current levels and charting paths forward. The implicit assumption is clear: higher maturity means better governance, and the goal is ascending the hierarchy toward the optimized state at the top.
This framework is wrong for AI ethics. Not merely incomplete or in need of modification. Wrong. It measures the wrong thing, optimizes for the wrong outcome, and can produce perverse results where organizations with sophisticated governance capabilities deploy AI in ways that systematically harm stakeholders. An organization with modest AI capabilities, deployed in ways that strengthen relationships, is more ethically advanced than one with sophisticated capabilities deployed in ways that destroy them. Assessment must be directional, not positional. This is the Derivative Principle, and it transforms how we understand and evaluate AI governance.
The Mathematical Intuition
The principle takes its name from calculus. A function’s derivative measures not where the function currently sits but where it is heading. A positive derivative means the function is increasing; a negative derivative means it is decreasing. Two organizations might currently occupy the same position, but if one has a positive derivative (improving) while the other has a negative derivative (declining), they are doing fundamentally different things regardless of their common current position. Similarly, an organization at a lower position with a strong positive derivative may be performing better ethically than an organization at a higher position with a negative derivative.
Applied to AI governance, the Derivative Principle holds that all ethical action serves a single optimization target: the rate of increase in relational value. Not the absolute level of relational value, but the rate of increase. The question is not how much trust exists between an organization and its stakeholders but whether trust is building or eroding. Not how strong customer relationships are but whether those relationships are strengthening or weakening. Not how well employees are treated but whether treatment is improving or degrading. The derivative measures direction and velocity, not position.
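To state the intuition in notation (a sketch; the symbols are ours for illustration, and the series does not define a formal model): let V(t) be the relational value between an organization and its stakeholders at time t.

```latex
% V(t): relational value between an organization and its stakeholders at time t.
\text{maturity models assess position:} \quad V(t_0)
\qquad
\text{the Derivative Principle assesses direction:} \quad \left.\frac{dV}{dt}\right|_{t_0}

% Same position, opposite trajectories:
V_A(t_0) = V_B(t_0), \qquad V_A'(t_0) > 0 > V_B'(t_0)
```

On this reading, organization A is aligned and organization B is inverting, whatever common level the two currently share; an aligned organization at a lower V can outrank an inverting organization at a higher V.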
This may seem counterintuitive. Surely an organization with strong stakeholder relationships is performing better than one with weak relationships, regardless of trajectory. But the Derivative Principle recognizes that position is a function of past choices already made, while ethics concerns choices being made now. An organization that inherited strong relationships and is now degrading them through AI deployment is making worse ethical choices than an organization that inherited weak relationships and is now strengthening them. Ethical assessment evaluates the current exercise of moral agency, and that exercise is directional.
Aligned vs Inverting: The Two Directions
The Derivative Principle creates a fundamental directional assessment. AI deployment is either aligned, meaning it moves stakeholder relationships toward flourishing, or it is inverting, meaning it moves relationships away from flourishing and toward degradation. These are not neutral descriptions but moral evaluations. Aligned AI deployment represents human moral agents choosing well. Inverting AI deployment represents human moral agents choosing poorly, regardless of whether they recognize what they are doing.
Consider how this applies across the domains of AI governance. In the domain of stakeholder initiative, aligned deployment grants AI authority in ways that expand stakeholder agency and choice. Inverting deployment grants authority in ways that constrain stakeholder agency and eliminate meaningful choice. In the domain of burden distribution, aligned deployment ensures AI decisions distribute burdens fairly across stakeholders. Inverting deployment shifts burdens onto vulnerable populations while capturing benefits for the organization. In the domain of transparency, aligned deployment makes AI operations and limitations genuinely accessible to stakeholders. Inverting deployment obscures AI involvement or creates false impressions about AI capabilities.
The pattern repeats across every domain where AI affects human relationships. The question is always directional. Is this AI deployment building trust or eroding it? Strengthening relationships or weakening them? Expanding human flourishing or constraining it? Creating value for stakeholders or extracting from them? The answers position organizations not on a maturity hierarchy but on a directional spectrum from aligned to inverting.
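Here is a minimal sketch of how such a directional assessment might be recorded, in Python. Everything in it is illustrative: the domain names come from the discussion above, but the scoring scale, type names, and aggregate function are hypothetical rather than any established assessment instrument.

```python
from dataclasses import dataclass
from enum import Enum


class Direction(Enum):
    """Directional evaluation of one governance domain."""
    ALIGNED = 1     # moving stakeholder relationships toward flourishing
    INVERTING = -1  # moving stakeholder relationships toward degradation


@dataclass
class DomainAssessment:
    domain: str           # e.g. "stakeholder initiative", "transparency"
    direction: Direction
    evidence: str         # stakeholder-facing evidence, not governance artifacts


def overall_direction(assessments: list[DomainAssessment]) -> float:
    """Place an organization on the directional spectrum in [-1, +1].

    +1 means every assessed domain is aligned; -1 means every domain is
    inverting. This aggregates direction, not maturity: it says nothing
    about how sophisticated the governance machinery is.
    """
    if not assessments:
        raise ValueError("no domains assessed")
    return sum(a.direction.value for a in assessments) / len(assessments)


# Usage: the three domains discussed above, assessed directionally.
report = [
    DomainAssessment("stakeholder initiative", Direction.ALIGNED,
                     "AI recommendations expand customer choice"),
    DomainAssessment("burden distribution", Direction.INVERTING,
                     "error-handling costs shifted onto frontline staff"),
    DomainAssessment("transparency", Direction.ALIGNED,
                     "AI involvement disclosed at each decision point"),
]
print(overall_direction(report))  # 0.333...: net aligned, with one inversion
```

The design choice worth noting is that the evidence field is stakeholder-facing by construction; a maturity-style record would instead catalogue policies and committee minutes.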
Why Maturity Models Fail
Maturity models fail for AI ethics because they measure capability rather than direction. An organization with sophisticated AI governance processes, comprehensive documentation, extensive training programs, and robust oversight mechanisms might score highly on any maturity assessment. But if those processes are deployed to optimize AI systems that extract value from customers, shift burdens onto employees, obscure AI involvement from stakeholders, and degrade the fabric of human relationships, the organization is ethically inverting despite its governance maturity.
Previous posts in this series established that AI governance evaluates how humans exercise moral agency through AI systems. The Derivative Principle specifies what that evaluation must assess: direction toward or away from relational flourishing. A control-based governance framework with high maturity can be deployed for inverting purposes. A modest governance framework aligned toward flourishing outperforms it ethically. The sophistication of governance machinery matters far less than where that machinery is pointed.
This insight exposes the fundamental inadequacy of compliance-focused governance. Organizations can comply with regulations, satisfy assessment criteria, pass audits, and achieve certifications while systematically inverting stakeholder relationships through AI deployment. Compliance measures whether organizations meet prescribed standards. The Derivative Principle measures whether organizations build or destroy relational value. These are different things. An organization might be fully compliant and deeply inverting, producing what we have called governance theater rather than substantive stakeholder protection.
Assessment Implications
The Derivative Principle transforms AI governance assessment. Instead of asking whether organizations have implemented required governance structures, assessment asks whether AI deployment is moving relationships toward flourishing or away from it. Instead of scoring maturity levels, assessment evaluates directional alignment. Instead of identifying gaps in governance capability, assessment identifies inversions in stakeholder impact.
Directional assessment requires different evidence than maturity assessment. Maturity assessment examines governance artifacts: policies, procedures, documentation, training records, committee minutes. Directional assessment examines stakeholder experience: Is trust building or eroding? Are relationships strengthening or weakening? Are burdens distributed fairly or shifted onto the vulnerable? Is value created for stakeholders or extracted from them? This evidence comes not from governance documentation but from stakeholder feedback, relationship metrics, and outcome analysis.
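As one hedged illustration of what "relationship metrics" could look like as directional evidence: given a time series of a periodic stakeholder-trust score, the quantity of interest is the slope of the series, not its latest level. The metric, the data, and the function below are invented for the example.

```python
from statistics import mean


def trend(values: list[float]) -> float:
    """Estimate a metric's direction as a least-squares slope per period.

    Positive slope: the relationship is strengthening (aligned evidence).
    Negative slope: it is eroding (inverting evidence), even when the
    current level remains high.
    """
    n = len(values)
    if n < 2:
        raise ValueError("need at least two observations")
    t_bar, v_bar = (n - 1) / 2, mean(values)
    num = sum((t - t_bar) * (v - v_bar) for t, v in enumerate(values))
    den = sum((t - t_bar) ** 2 for t in range(n))
    return num / den


# Quarterly trust scores on a 0-100 scale (hypothetical survey data):
incumbent = [82, 80, 77, 73]   # high position, negative derivative
challenger = [51, 55, 60, 66]  # lower position, positive derivative

print(trend(incumbent))   # -3.0 points per quarter: inverting
print(trend(challenger))  # +5.0 points per quarter: aligned
```

The two series restate the section's central claim in miniature: the incumbent sits higher but is declining, the challenger sits lower but is improving, and directional assessment ranks the challenger's trajectory as the ethically better one.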
The Derivative Principle also changes what improvement means. Under maturity models, improvement means ascending levels, filling capability gaps, implementing additional governance processes. Under the Derivative Principle, improvement means shifting from inverting to aligned, changing direction toward stakeholder flourishing. An organization that simplifies its governance while redirecting AI deployment toward stakeholder benefit has improved even if governance maturity has declined. The goal is not more governance but better direction.
Subsequent posts in this series will detail the Two Conditions required for ethical AI deployment and the Daisy Chain Principle governing accountability in complex AI architectures. Throughout, the Derivative Principle provides the evaluative lens. The question is always whether AI deployment, and the governance structures surrounding it, are moving stakeholder relationships toward flourishing or away from it. Direction, not position. Trajectory, not achievement. The rate of increase in relational value, not the level currently attained. This is how AI governance becomes ethical practice rather than compliance exercise.