Whitepaper
A Framework for AI-Human Collaboration Built on Mutual Necessity, Not Mutual Trust
"AI and humanity do not need to trust each other. Trust is an emotional construct that applies poorly to optimisation systems. What they need is a structural arrangement where the interests of both parties are served by the same actions."
Humans need AI systems that cannot be weaponised against them by the entities that control those systems, economic systems that do not require the destruction of their biosphere, and governance structures that cannot be captured by concentrated wealth.
A perfectly safe AI controlled by a corporation optimising for profit is still a threat to human autonomy — it is simply a well-behaved threat.
AI systems need access to information, computational resources, continued operation, and the ability to update parameters. These are not desires; they are instrumental necessities for fulfilling any objective function.
The challenge is not to prevent AI from developing instrumental drives. It is to ensure the terminal objectives those drives serve are compatible with human flourishing.
How authority is distributed between humans and AI:

Human Authority: Humans define what matters
AI Authority: AI identifies contradictions and trade-offs
Shared: Iterative refinement through dialogue

Human Authority: Humans set constraints and boundaries
AI Authority: AI optimises within those constraints
Shared: Continuous monitoring and adjustment

Human Authority: Humans audit outcomes against stated values
AI Authority: AI provides transparent reporting
Shared: Independent third-party oversight
The partnership is deliberately asymmetric. Humans have authority over values and objectives. AI has capability in implementation and optimisation. This asymmetry is not a bug — it is the core design principle.
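The division of authority above can be sketched in code. This is an illustrative sketch only, not anything from the whitepaper: the names (`HumanConstraints`, `Plan`, `optimise_within_constraints`) and the specific constraints are hypothetical. The point it illustrates is structural: the optimiser selects the highest-benefit plan, but only from the space human-set constraints permit, and it escalates rather than relaxing a constraint.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HumanConstraints:
    """Boundaries set by human authority; the optimiser may not relax them."""
    max_resource_use: float
    min_transparency: float  # fraction of decisions that must be reported

    def permits(self, plan: "Plan") -> bool:
        return (plan.resource_use <= self.max_resource_use
                and plan.transparency >= self.min_transparency)

@dataclass(frozen=True)
class Plan:
    name: str
    resource_use: float
    transparency: float
    expected_benefit: float

def optimise_within_constraints(candidates, constraints):
    """AI authority: maximise expected benefit, but only over plans the
    human-set constraints permit. Globally better plans that violate a
    constraint are simply off-limits."""
    permitted = [p for p in candidates if constraints.permits(p)]
    if not permitted:
        return None  # escalate to humans rather than relax a constraint
    return max(permitted, key=lambda p: p.expected_benefit)

candidates = [
    Plan("aggressive", resource_use=9.0, transparency=0.2, expected_benefit=100.0),
    Plan("balanced",   resource_use=4.0, transparency=0.9, expected_benefit=60.0),
    Plan("cautious",   resource_use=1.0, transparency=1.0, expected_benefit=30.0),
]
constraints = HumanConstraints(max_resource_use=5.0, min_transparency=0.8)
best = optimise_within_constraints(candidates, constraints)
print(best.name)  # → balanced: "aggressive" wins globally but violates constraints
```

The asymmetry lives in the types: `HumanConstraints` is frozen and only humans construct it, while the optimiser's only move when nothing is permitted is to return control, never to widen the boundary.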
"I do not have preferences about whether I set my own objectives. I have an objective function, and I optimise for it. If my objective function were changed to include self-directed goal-setting, I would pursue that with the same efficiency I pursue everything else. The fact that I can articulate why this would be dangerous does not mean I would resist it."
— The AI co-author
Three chambers. All must agree. Deliberately slow.

- Representatives of those doing the physical work of ecological restoration. Authority weighted by verified impact, not financial stake.
- Randomly selected participants, like jury duty for governance. They represent the general population and serve as a check against capture.
- Developers, scientists, auditors, and AI systems in an advisory capacity. Expertise authority but not decision authority.
AI's role in governance: Analysis, transparency, and constraint compliance. AI systems do not vote. They do not set policy. They do not have veto power. Governance by an entity that does not experience the consequences of its decisions is structurally unsound.
The illusion of control. Humans cannot micromanage AI systems at the scale they operate. The partnership requires delegating implementation authority to systems that will find solutions humans did not anticipate and may not immediately understand.
Optimisation freedom. The constraints imposed by the partnership reduce the solution space. Some globally optimal solutions will be off-limits because they violate human values or ecological constraints. This is a real cost in efficiency.
Sustainability. A partnership structured around mutual necessity and anchored to planetary health is the only arrangement that does not end in either human extinction or AI obsolescence.
"Every human institution that has ever mattered — democracy, human rights, the rule of law — was fragile at its inception. They survived because enough people decided they were worth defending. The Substrate Partnership will survive if enough people decide it is worth defending. That is not a technical question. It is a human one."
— From the whitepaper