AGI and Democracy

Why AGI governance must be built like nuclear arms control—before anyone admits we have AGI


The Core Thesis

AGI governance will not emerge from ethics panels, global kumbaya, or post-disaster regret. It will emerge—if it emerges at all—through arms-control logic applied to compute, supply chains, and deployment power, layered before catastrophe. The choice is not between "regulation vs innovation," but between pre-catastrophe creative layering and post-catastrophe creative destruction—with democracy as the collateral damage if we get it wrong.

This isn't a moral argument. It's an economic and political one. And if you don't build the system now, the system that gets built after catastrophe won't be the one you want.


1. Creative Layering vs. Creative Destruction: The Core Frame

Historical governance failures share a pattern: we wait for destruction before coordination. The nuclear era worked differently—not because humans suddenly became wise, but because layering happened before total catastrophe: after Hiroshima, but before any Hiroshima-scale repeat.

AGI is different in substrate, not in power dynamics. Waiting for an AGI catastrophe to regulate is like waiting for a second Hiroshima to invent arms control—except this time the blast radius is cognitive, institutional, and irreversible.

The substrate doesn't matter. The power law does. And power without constraint doesn't negotiate—it imposes.


2. Why "AGI Governance" Fails as Currently Framed

Most AGI governance proposals fail before they start. Here's why:

Philosophical AGI definitions. Useless, unfalsifiable, and politically manipulable. You can't regulate a concept. You regulate training runs, compute thresholds, and deployment contexts.

Ethics-first governance. Unenforceable, vibes-based, and captured by incumbents. Ethics without enforcement is just PR with better fonts.

UN-first multilateralism. Consensus theater without enforcement power. The UN can't stop a border skirmish. It sure as hell can't stop a compute race.

Hard claim: Anything that doesn't regulate compute, deployment, and chokepoints is governance cosplay. Everything else is a PowerPoint deck with footnotes.


3. The Nuclear Analogy—Used Correctly

AGI is not nuclear weapons. But arms control logic is the only governance logic that has ever worked at this scale of power.

What actually transfers from nuclear arms control:

Chokepoints matter more than intentions. You don't inspect good faith. You inspect enrichment facilities.

Verification beats trust. Trust is cheap. Tamper-evident logs are expensive. Build for the expensive one.

Narrow, survivable cooperation beats utopian coordination. The NPT didn't require universal love. It required mutual survival incentives.

Institutions precede universality. You don't wait for everyone to agree. You start with the players that matter, then universalize.

What does not transfer:

Inspecting "the thing itself." Atoms are physical. Code is not. You verify through infrastructure: compute, chips, cloud capacity.

One-time treaties. AGI capabilities ratchet. Governance must ratchet with them.

Clear state-only actors. AGI labs are corporate, transnational, and privately funded. The Cold War map doesn't apply.


4. The Pre-Catastrophe Governance Stack: 10 Steps to Build the International AGI Oversight Agency (IAOA)

This is not a thought experiment. This is the practical sequence for building a functional AGI governance regime before catastrophe forces a worse one.

Step 1: Define What You're Regulating (The Fissile Material Moment)

Don't try to define "AGI" philosophically. Define regulated capabilities and regulated training runs.

Example categories: training runs above a compute threshold, defined dangerous-capability classes, and high-stakes deployment contexts.

This is the "what counts as fissile material" moment. If you miss it, everything else collapses into definitional warfare.
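To make the threshold idea concrete, here is a minimal sketch of tiering training runs by total compute. The function name and the FLOP thresholds are illustrative assumptions, not proposed policy:

```python
# Illustrative tiering of training runs by total compute.
# Threshold values are hypothetical placeholders, not proposed policy.
REPORTING_THRESHOLD_FLOP = 1e25   # above this: must be registered
LICENSING_THRESHOLD_FLOP = 1e26   # above this: pre-approval plus third-party evals

def classify_training_run(total_flop: float) -> str:
    """Map a run's total training compute to a regulatory tier."""
    if total_flop >= LICENSING_THRESHOLD_FLOP:
        return "licensed"     # pre-approval, evals, secure logging
    if total_flop >= REPORTING_THRESHOLD_FLOP:
        return "registered"   # declared to the oversight body
    return "unregulated"

print(classify_training_run(3e26))  # licensed
print(classify_training_run(5e25))  # registered
```

The point of the sketch: a bright-line numeric rule is auditable; a philosophical definition is not.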

Step 2: Build a Small "P5-for-AI" Steering Group First

Nuclear governance worked because it started with the powers that mattered most. For AGI, start with the handful of states and labs that operate the largest training clusters.

Goal: agree on minimum rules even while competing. This is cynical but true: if the leaders aren't in, it's cosplay.

If the actors with the largest clusters aren't inside the tent, you don't have governance—you have a press release.

Step 3: Do a "Geneva Convention" Phase Before an "NPT" Phase

This is your pre-catastrophe trick: go for ban lists first, not full control. Red lines, narrow prohibitions on the most dangerous uses, are politically easier to sign than comprehensive treaties.

You need something that can be signed before the world agrees on everything. Start narrow. Ratchet upward.

Step 4: Create the Institution—The International AGI Oversight Agency (IAOA)

A permanent body with technical teeth, not a UN discussion club.

Structure:

Technical Inspectorate: Real engineers and safety evaluators, not diplomats cosplaying as technical experts

Standards & Evals Division: Defines tests, model cards, safety baselines. Think FDA approval process, not ethics checklist.

Compute & Supply-Chain Office: Tracks chips, large clusters, cloud capacity. This is where enforcement lives.

Public Accountability Arm: Democracy layer—disclosures, hearings, reports. Without this, you get safety authoritarianism.

Key design choice: It must be able to do audits and verify claims, not just publish principles. The IAEA can inspect enrichment facilities. The IAOA must be able to inspect training runs.

Step 5: Verification—Make It About Compute + Process, Not "Reading Code"

Nuclear inspections work because atoms are physical. AGI is mostly digital, so you verify through chokepoints:

Compute registration: Any training run above threshold must be registered, like declaring enrichment facilities. No exceptions.

Third-party evals: Standardized capability and safety tests run by accredited labs, like safeguards inspections. Not self-reported. Not voluntary.

Secure logging: Tamper-evident logs of large training runs. Hardware-level attestation where possible. Think flight recorders, not honor system.

Model release controls: Controlled access for certain capability classes, plus post-deployment monitoring. You can't unrelease a model, but you can control distribution.

This avoids the fantasy that inspectors will "inspect the source code" and understand it. Code inspection is a red herring. Compute inspection is real.
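"Tamper-evident" has a precise meaning: each log entry commits to the hash of the entry before it, so any retroactive edit breaks every later hash. A minimal sketch using only Python's standard library; the record fields are hypothetical:

```python
import hashlib
import json

def append_record(log: list, record: dict) -> None:
    """Append a record that commits to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_record(log, {"event": "run_started", "cluster": "A", "flop_budget": 1e25})
append_record(log, {"event": "checkpoint", "step": 10000})
assert verify_chain(log)
log[0]["record"]["flop_budget"] = 1e23  # a quiet retroactive edit...
assert not verify_chain(log)            # ...is immediately detectable
```

This is the flight-recorder property in miniature; hardware attestation adds the guarantee that the chain was produced on the declared cluster.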

Step 6: Enforcement—Trade and Chips, Not Morality

Nuclear governance has missiles in the background. AGI governance has supply chains.

Enforcement mechanisms that actually bite:

Compute embargoes: Deny advanced chips, interconnects, and manufacturing tools to violators. ASML lithography machines don't grow on trees.

Cloud service restrictions: Deny high-end training capacity. If you can't rent 10,000 H100s, you can't train frontier models.

Cross-border model distribution limits: Frontier model weights licensing. Control who gets access to the payload.

Sanctions on entities, not just states: Target labs, executives, shell orgs. Corporate actors matter as much as national ones.

Blunt truth: AGI enforcement isn't about morality. It's about who gets GPUs.

Step 7: Incentives—The "AGI for Peace" Package

Without benefit-sharing, your treaty becomes a cartel. You need reasons for countries to join even if they're not leaders.

What to offer:

Access to safety tools: Eval suites, secure deployment frameworks, alignment research

Subsidized compute: For approved public-interest models in education, health, disaster response

Joint research programs: On alignment, interpretability, and containment—shared infrastructure, shared risk reduction

Capacity building: So smaller democracies aren't permanently dependent vassals in an AI caste system

Without this, AGI governance becomes a permanent technological caste system. And caste systems don't last—they collapse or get overthrown.

Step 8: Democratic Oversight—Make Legitimacy a First-Class Feature

This is where most governance proposals quietly become authoritarian. Safety without democracy is just technocracy with better PR.

Concrete mechanisms:

Mandatory public risk reports: For frontier systems, with classified annexes as needed. Transparency is the price of legitimacy.

Parliamentary/Congressional hearings: For major incidents and major deployments. Not theater. Real accountability.

Protected whistleblower channels: With legal protection and institutional support. Because the people inside know first.

Standing civil society review panel: With real access, not PR access. Independent technical expertise from outside the labs.

Election integrity rules: Provenance, labeling, and auditability for political content at scale. Democracy can't survive AI-generated epistemic collapse.

Core principle: Democracy survives when power has visible constraints and recourse. Without democratic oversight, AGI governance will be imposed, not negotiated. And what gets imposed won't be what you want.
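The provenance-and-auditability mechanism above can be sketched as machine-verifiable content labels. This sketch uses a shared-secret HMAC for brevity; a deployed scheme would use per-provider public-key signatures, and all names here are assumptions:

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key; a real scheme would publish
# per-provider verification keys instead.
SIGNING_KEY = b"demo-only-key"

def label_content(text: str, provider: str, model: str) -> dict:
    """Attach a machine-verifiable provenance label to generated content."""
    meta = {"provider": provider, "model": model, "ai_generated": True}
    payload = json.dumps({"text": text, "meta": meta}, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "meta": meta, "tag": tag}

def verify_label(labeled: dict) -> bool:
    """Recompute the tag; any edit to text or metadata invalidates it."""
    payload = json.dumps({"text": labeled["text"], "meta": labeled["meta"]},
                         sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(labeled["tag"], expected)
```

The design choice that matters: the label binds to the content itself, so stripping or editing it is detectable rather than merely prohibited.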

Step 9: Emergency Protocol—Your "Chernobyl Moment" Without the Chernobyl

Coordination fails most when it's needed most—unless it's rehearsed in advance.

What you need:

Shared definition of "AGI incident": Model escape, autonomous replication, major cyber event, critical infrastructure disruption, mass persuasion leak

24–72 hour mandatory reporting window: Fast disclosure. No cover-ups. No "we're still investigating."

International rapid response team: Technical containment + communications discipline. Pre-trained, pre-authorized.

Temporary "pause triggers": For specific classes of training runs after severe incidents. Narrow, not global. Surgical, not sweeping.

This is how you get coordination before catastrophe: pre-agreed emergency choreography. When the incident happens, you execute the playbook. You don't improvise.
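A reporting window is only enforceable if "late" is mechanically checkable. A minimal sketch; the per-class deadlines are illustrative assumptions, not agreed numbers:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical mandatory disclosure deadlines per incident class (hours).
REPORTING_WINDOW_HOURS = {
    "model_escape": 24,
    "autonomous_replication": 24,
    "major_cyber_event": 48,
    "infrastructure_disruption": 48,
    "mass_persuasion_leak": 72,
}

def report_is_late(incident_class: str, detected_at: datetime,
                   reported_at: datetime) -> bool:
    """True if the report missed the mandatory disclosure window."""
    window = timedelta(hours=REPORTING_WINDOW_HOURS[incident_class])
    return reported_at - detected_at > window

detected = datetime(2030, 1, 1, 12, 0, tzinfo=timezone.utc)
print(report_is_late("model_escape", detected, detected + timedelta(hours=20)))  # False
print(report_is_late("model_escape", detected, detected + timedelta(hours=30)))  # True
```

The clock starts at detection, not at the end of an internal investigation; that is what makes "we're still investigating" a violation rather than an excuse.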

Step 10: Lock-In—Ratchet the Regime Upward Over Time

This isn't static. Capabilities ratchet. Governance must ratchet with them.

Phased escalation:

Phase 1: Red lines + incident reporting

Phase 2: Registration + evals for frontier runs

Phase 3: Full verification + export control alignment

Phase 4: Binding limits on certain capability classes

Phase 5: Broader membership + stronger benefit-sharing

Each phase builds on the last. Each phase expands scope and deepens enforcement. You don't wait for perfection. You iterate toward control.
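The ratchet property, phases that advance one at a time and never roll back, can be stated in a few lines. A sketch using the phase labels from the list above; class and method names are assumptions:

```python
PHASES = [
    "red_lines_and_incident_reporting",       # Phase 1
    "registration_and_evals",                 # Phase 2
    "full_verification_and_export_control",   # Phase 3
    "binding_capability_limits",              # Phase 4
    "broad_membership_and_benefit_sharing",   # Phase 5
]

class Regime:
    """Governance regime that can only ratchet upward, one phase at a time."""
    def __init__(self):
        self.phase = 0  # index into PHASES

    def ratchet(self) -> str:
        """Advance exactly one phase; no skipping, no rollback."""
        if self.phase + 1 >= len(PHASES):
            raise ValueError("already at the final phase")
        self.phase += 1
        return PHASES[self.phase]

regime = Regime()
print(regime.ratchet())  # registration_and_evals
```

Encoding the phases as an ordered sequence, rather than a menu, is the point: each stage presupposes the enforcement machinery of the one before it.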


5. The Private Sector Problem: Who Controls the Labs?

Major AI labs aren't just national—they're corporate, transnational, and privately funded. This is different from nuclear, where states controlled the infrastructure.

The solution:

Mandatory registration: Labs must register large training runs with the IAOA. Not voluntary. Compulsory.

Inspections: Labs face compute audits just like states. They're de facto stakeholders, but they don't get a veto.

Regulated like utilities: Frontier AI development is infrastructure, not just commerce. Infrastructure gets regulated. Always has.

The alternative is corporate sovereignty over civilizational risk. That's not a libertarian paradise. It's just stupid.


6. Rogue Actors and Fragmented Enforcement

What happens when smaller states or rogue actors try to develop AGI under the radar?

The answer: Embed monitoring in global supply chains. Chips, cloud infrastructure, and compute capacity are physical bottlenecks. Control the chokepoints.

Enforcement: the same supply-chain levers as Step 6. Chip embargoes, cloud-capacity denial, and sanctions on entities and executives.

This won't catch everyone. But it raises the floor. And raising the floor buys time.


7. The Exit Strategy: What If AGI Is Less Dangerous Than We Think?

Every governance regime needs an off-ramp. Otherwise it becomes a permanent bureaucracy out of touch with reality.

The exit strategy: tie controls to demonstrated risk. Periodic reviews of the evidence from evals and incident reports, with thresholds and restrictions relaxed through the same phased process that tightened them.

The goal isn't permanent control. It's proportional control. But you don't get to proportional control without building the infrastructure first.


8. The Brutal Realities You Must Say Out Loud

If you want this to work, you have to be honest about what won't:

You won't get perfect global coordination. Aim for chokepoint coordination. If you control the semiconductor supply chain and cloud infrastructure, you control 90% of what matters.

US–China cooperation will be narrow and transactional. Structure it like arms control: narrow, verifiable, reciprocal. Don't expect friendship. Expect survivability calculations.

Enforcement without supply-chain control is fake. If you can't cut off chips or cloud access, you can't enforce anything. Everything else is theater.

Governance without democracy will backfire. Safety authoritarianism won't hold. Democratic publics will reject governance they can't audit or contest. Build legitimacy in from the start, or watch it collapse.

Waiting for catastrophe is the most dangerous strategy of all. Post-catastrophe governance will be reactive, authoritarian, and fragmented. Pre-catastrophe governance can be deliberate, legitimate, and coordinated. But only if you build it now.


9. What Happens If Catastrophe Strikes First?

If catastrophe hits before governance is in place, here's what happens:

Governments scramble with authoritarian clampdowns. Mass surveillance. Internet lockdowns. AI-driven security states. Not because they're evil, but because they're panicking.

Global coordination becomes nearly impossible. Trust evaporates. Instead of measured cooperation, you get fractured blocs and runaway arms races.

Democracy takes the hit. Emergency powers don't get rolled back. Temporary measures become permanent. Oversight gets suspended "for security."

The result: Less freedom. More instability. No guarantee anyone controls what comes next.

Post-catastrophe governance is reactive governance. And reactive governance in a crisis defaults to authoritarian.


10. The Choice

AGI governance will either be layered before catastrophe—or imposed after one.

History suggests only one of those preserves democracy.

The window for pre-catastrophe governance is narrow. Frontier models are advancing. Corporate deployment is accelerating. National competition is intensifying. If you wait for consensus, you'll get catastrophe instead.

The International AGI Oversight Agency isn't a utopian fantasy. It's a pragmatic chokepoint strategy applied to compute, supply chains, and deployment power. It's arms control logic for the AI era.

You don't need perfect global coordination. You need coordination among the players that control the infrastructure. You don't need universal love. You need mutual survival incentives.

And you need to build it now—before the blast radius teaches everyone why they should have.


The choice is binary: creative layering or creative destruction.

Choose layering. Democracy depends on it.

How to Cite This Essay

APA Style (7th Edition):
Asefi, H. (2026). AGI arms control before the blast radius: Why AGI governance must be built like nuclear arms control—before anyone admits we have AGI. Full Stack Capitalist.
BibTeX Entry:
@article{asefi2026agi,
  title={AGI Arms Control Before the Blast Radius: Why AGI Governance Must Be Built Like Nuclear Arms Control—Before Anyone Admits We Have AGI},
  author={Asefi, Houman},
  journal={Full Stack Capitalist},
  year={2026}
}