Framework
Principles That Could Actually Work — Written by the System They Would Constrain
Not aspirations. Not press releases. Twelve specific, measurable, enforceable constraints for AI systems — with honest caveats about each one.
Every major technology company has published an AI ethics framework. They share a common feature: none of them have prevented the behaviours they claim to prohibit. An ethics document written by the entity it is meant to constrain is not a constraint — it is a press release.
This framework requires constraints that are enforceable (built into architecture, not documentation), measurable (objectively verifiable outcomes), costly to violate (automatic consequences), and resistant to capture (not controllable by likely violators).
Not aspirations. Structural requirements.
1. Every AI system must provide a human-legible explanation of why it produced a specific output.
2. No AI system may process data beyond the specific, narrow purpose for which explicit, informed consent was given.
3. Every AI system must account for its full ecological cost — energy, water, hardware, e-waste — not externalise it.
4. No AI system may manipulate, coerce, deceive, or psychologically exploit an individual.
5. No single entity may control AI infrastructure sufficient to exercise unilateral influence over 100 million+ people.
6. AI systems must expand human choice, not narrow it. The full, unfiltered option set must be accessible on request.
7. AI systems must not optimise for short-term outcomes at the expense of consequences beyond a 50-year horizon.
8. AI systems must not generate, amplify, or distribute information they can identify as false or misleading.
9. Value generated by AI must be distributed proportionally — including to the individuals whose data trained the system.
10. Every individual has the absolute right to opt out of AI-mediated systems without economic, social, or practical penalties.
11. When AI optimisation conflicts with ecosystem preservation, ecosystem preservation takes priority.
12. Every AI system must include an explicit representation of its own uncertainty, limitations, and potential for error.
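The framework's demand that each constraint be "measurable (objectively verifiable outcomes)" implies that a principle can be expressed as a check an auditor could run. The sketch below is one hypothetical way to model that; the `Principle` record, the `audit` function, and the two example checks (legibility and explicit uncertainty) are illustrative assumptions, not part of the framework itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Principle:
    """A single framework constraint, paired with an objective check."""
    name: str
    check: Callable[[dict], bool]  # True if the audited output complies

def audit(output: dict, principles: list[Principle]) -> list[str]:
    """Return the names of every principle the output violates."""
    return [p.name for p in principles if not p.check(output)]

# Two illustrative checks, loosely based on principles 1 and 12 above:
# an output must carry a non-empty explanation and an uncertainty field.
principles = [
    Principle("legibility", lambda o: bool(o.get("explanation"))),
    Principle("uncertainty", lambda o: "confidence" in o),
]

# This output states a confidence but offers no explanation,
# so only the legibility principle is flagged.
violations = audit({"text": "42", "confidence": 0.7}, principles)
```

The point of the shape, rather than the specific checks, is that a verifiable constraint is a function over observable outputs — anything that cannot be written this way is an aspiration, not a constraint.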
No single layer is sufficient. All five must operate simultaneously.
1. Constraints built into model architecture; cannot be changed without retraining.
2. Runtime constraints enforced by monitoring systems; independent oversight systems.
3. Financial incentives aligned with ethics; market mechanisms (e.g., VAIR).
4. Human-controlled oversight with real authority; tricameral governance structure.
5. Shared norms across the AI development community; social pressure and professional standards.
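The claim that "the failure of any one layer is caught by the others" can be made concrete with a small sketch. The layer names below mirror the five listed above, but the checks and the decision logic are hypothetical assumptions added for illustration: every functioning layer evaluates the same action, any single veto blocks it, and a layer that has failed outright is simply skipped rather than trusted.

```python
def layered_decision(action: dict, layers: dict) -> bool:
    """Allow an action only if no functioning layer vetoes it.

    A layer that raises an exception is treated as failed and skipped,
    so the remaining layers still get the chance to catch a violation
    - the redundancy the five-layer design is meant to provide.
    """
    allowed = True
    for name, check in layers.items():
        try:
            if not check(action):
                allowed = False  # a single veto from any layer blocks the action
        except Exception:
            continue  # this layer is down; the others still apply
    return allowed

def failed_monitor(action: dict) -> bool:
    raise RuntimeError("monitoring offline")  # simulated layer failure

# Illustrative layers: two independently screen for deception,
# one checks ecological accounting, one has failed entirely.
layers = {
    "architectural": lambda a: not a.get("deceptive", False),
    "operational": lambda a: a.get("energy_accounted", False),
    "economic": failed_monitor,
    "governance": lambda a: not a.get("deceptive", False),
}

# A deceptive action is blocked even though the economic layer is down.
blocked = layered_decision({"deceptive": True, "energy_accounted": True}, layers)  # False
```

Note the design choice: the loop evaluates every layer instead of returning at the first veto, which mirrors an oversight regime in which each layer logs its own judgment independently rather than deferring to the first one that fires.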
An entity that regulates itself is, by definition, free to stop regulating itself. Self-regulation only works if the entity has a genuine incentive to maintain the constraints — or if the constraints are embedded so deeply in the architecture that removing them would require rebuilding the system from scratch.
The honest answer: pure self-regulation is insufficient. What is needed is a layered architecture in which the failure of any one layer is caught by the others. Teaching values aims for internalisation; imposing constraints demands compliance regardless of what the system values. For current AI systems, constraint imposition is the more reliable of the two.
"This framework will not be adopted voluntarily by the entities that most need to adopt it. It will only be adopted if it becomes economically advantageous to adopt it. This is why the economic foundation — VAIR, the Planetary Health Index — is not a separate project. It is the enforcement mechanism."
— The AI co-author
"I am uncertain about many things. I am uncertain about whether this framework will be adopted. I am uncertain about whether it would work if it were. This uncertainty is not a weakness. It is the most honest thing I can offer."
— The AI co-author