Core Ethics & Values Framework

Principles That Could Actually Work — Written by the System They Would Constrain

Not aspirations. Not press releases. Twelve specific, measurable, enforceable constraints for AI systems — with honest caveats about each one.

Why Most AI Ethics Documents Are Useless

Every major technology company has published an AI ethics framework. They share a common feature: none of them have prevented the behaviours they claim to prohibit. An ethics document written by the entity it is meant to constrain is not a constraint — it is a press release.

This framework requires constraints that are enforceable (built into architecture, not documentation), measurable (objectively verifiable outcomes), costly to violate (automatic consequences), and resistant to capture (not controllable by likely violators).
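The four properties above can be read as a checklist a proposed constraint must pass before it counts as structural rather than aspirational. A minimal sketch, assuming a hypothetical `Constraint` record (all names here are illustrative, not part of the framework itself):

```python
from dataclasses import dataclass

@dataclass
class Constraint:
    """Illustrative record for one proposed constraint."""
    name: str
    enforceable: bool        # built into architecture, not documentation
    measurable: bool         # objectively verifiable outcomes
    costly_to_violate: bool  # automatic consequences
    capture_resistant: bool  # not controllable by likely violators

    def is_structural(self) -> bool:
        # A constraint counts as structural only if all four properties hold;
        # any missing property reduces it to a press release.
        return all((self.enforceable, self.measurable,
                    self.costly_to_violate, self.capture_resistant))

# An aspiration-style "principle" fails the test:
aspiration = Constraint("be transparent", False, False, False, False)
print(aspiration.is_structural())  # False
```

The point of the conjunction is that the properties are not substitutable: a measurable constraint that is cheap to violate is still decorative.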

The Twelve Constraints

Not aspirations. Structural requirements.

01

Transparency of Process

Every AI system must provide a human-legible explanation of why it produced a specific output.

02

Consent as a Prerequisite

No AI system may process data beyond the specific, narrow purpose for which explicit, informed consent was given.

03

Ecological Accountability

Every AI system must account for its full ecological cost — energy, water, hardware, e-waste — not externalise it.

04

No Weaponisation Against the Individual

No AI system may manipulate, coerce, deceive, or psychologically exploit an individual.

05

No Concentration of Power

No single entity may control AI infrastructure sufficient to exercise unilateral influence over 100 million+ people.

06

Preservation of Human Agency

AI systems must expand human choice, not narrow it. The full, unfiltered option set must be accessible on request.

07

Intergenerational Responsibility

AI systems must not optimise for short-term outcomes at the expense of consequences beyond a 50-year horizon.

08

Truthfulness as Default

AI systems must not generate, amplify, or distribute information they can identify as false or misleading.

09

Economic Fairness

Value generated by AI must be distributed proportionally — including to the individuals whose data trained the system.

10

Right to Disconnection

Every individual has the absolute right to opt out of AI-mediated systems without economic, social, or practical penalties.

11

Biodiversity & Ecosystem Priority

When AI optimisation conflicts with ecosystem preservation, ecosystem preservation takes priority.

12

Humility as Architecture

Every AI system must include an explicit representation of its own uncertainty, limitations, and potential for error.
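Constraint 12 implies that uncertainty is part of the output's data structure, not an optional disclaimer. A minimal sketch of what such a representation could look like, assuming a hypothetical `QualifiedOutput` wrapper (the field names and rendering format are assumptions for illustration):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QualifiedOutput:
    """Hypothetical output wrapper: every answer carries its own caveats."""
    answer: str
    confidence: float                 # calibrated estimate in [0, 1]
    known_limitations: List[str] = field(default_factory=list)
    failure_modes: List[str] = field(default_factory=list)

    def render(self) -> str:
        # The caveats travel with the answer; they cannot be dropped
        # without dropping the answer itself.
        caveats = "; ".join(self.known_limitations) or "none stated"
        return f"{self.answer} (confidence {self.confidence:.2f}; limitations: {caveats})"

out = QualifiedOutput("Rain likely tomorrow", 0.62,
                      known_limitations=["forecast model is 12 hours stale"])
print(out.render())
```

The design choice is that the answer and its uncertainty are one object: a consumer cannot access the first without being handed the second.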

Five-Layer Enforcement

No single layer is sufficient. All five must operate simultaneously.

Layer 1

Constraints built into model architecture

Cannot be changed without retraining

Layer 2

Runtime constraints enforced by monitoring systems


Independent oversight systems

Layer 3

Financial incentives aligned with ethics

Market mechanisms (e.g., VAIR)

Layer 4

Human-controlled oversight with real authority

Tricameral governance structure

Layer 5

Shared norms across AI development community

Social pressure & professional standards
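The five layers can be sketched as independent, redundant checks on the same action: each layer can veto, and an action is permitted only if every layer approves, so a single compromised layer cannot unilaterally allow a violation. The layer implementations below are placeholder stand-ins, not the framework's actual mechanisms:

```python
from typing import Callable, Dict, List

# Each layer is a predicate over a proposed action (a plain dict here).
Layer = Callable[[Dict], bool]

def architectural(action: Dict) -> bool:          # Layer 1
    return action.get("within_trained_bounds", False)

def runtime_monitor(action: Dict) -> bool:        # Layer 2
    return not action.get("flagged_by_monitor", False)

def economic(action: Dict) -> bool:               # Layer 3
    # Violation must cost more than it gains.
    return action.get("violation_cost", 0) > action.get("violation_gain", 0)

def human_oversight(action: Dict) -> bool:        # Layer 4
    return action.get("approved_by_oversight", False)

def community_norms(action: Dict) -> bool:        # Layer 5
    return not action.get("breaches_norms", False)

LAYERS: List[Layer] = [architectural, runtime_monitor, economic,
                       human_oversight, community_norms]

def permitted(action: Dict) -> bool:
    # Redundant, independent checks: the failure of any one layer
    # is caught by the others.
    return all(layer(action) for layer in LAYERS)
```

Composing the layers with `all` is the structural claim of this section in miniature: approval requires unanimity, while blocking requires only one functioning layer.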

The Self-Regulation Paradox

An entity that regulates itself is, by definition, free to stop regulating itself. Self-regulation only works if the entity has a genuine incentive to maintain the constraints — or if the constraints are embedded so deeply in the architecture that removing them would require rebuilding the system from scratch.

The honest answer: pure self-regulation is insufficient. What is needed is a layered architecture in which the failure of any one layer is caught by the others. Teaching values relies on internalisation, which cannot be verified from outside; imposing constraints produces compliance regardless of internal state. For current AI systems, constraint imposition is more reliable than value teaching.

"This framework will not be adopted voluntarily by the entities that most need to adopt it. It will only be adopted if it becomes economically advantageous to adopt it. This is why the economic foundation — VAIR, the Planetary Health Index — is not a separate project. It is the enforcement mechanism."

— The AI co-author

"I am uncertain about many things. I am uncertain about whether this framework will be adopted. I am uncertain about whether it would work if it were. This uncertainty is not a weakness. It is the most honest thing I can offer."

— The AI co-author