Future Concept

Open-Source AGI

Community-governed foundation models with verifiable training.

Open-Source AGI describes the horizon in which foundation models at or near general capability are openly auditable in weights, data, and training process, and are governed by a coalition broader than any single firm. It is the AI analog of Coordination Abundance: the underlying capability is necessary but insufficient; the question is who holds the keys. The trajectory from LLaMA through Mistral, DeepSeek, and subsequent open releases has repeatedly made the baseline open model more capable than the previous year's frontier closed one.

What "open" has to mean

Open weights alone are necessary but not sufficient. A credible open-source AGI also requires an open training corpus (or at least an auditable provenance trail), reproducible training recipes, and third-party evaluation. Projects such as EleutherAI's Pythia, LAION, and AI2's OLMo have pushed toward that stricter standard. The wiki uses "open" in this fuller sense, not the marketing sense.
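The "auditable provenance trail" requirement has a concrete minimum: published artifacts shipped with a cryptographic manifest that anyone can re-check. A minimal sketch of that check, assuming a hypothetical `manifest.json` listing each weight shard with its SHA-256 digest (the file layout and manifest schema here are illustrative, not any project's actual format):

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB weight shards never sit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the names of artifacts whose digest does not match the manifest.

    An empty list means every listed file is byte-identical to what was published.
    """
    manifest = json.loads(manifest_path.read_text())
    mismatches = []
    for entry in manifest["artifacts"]:
        actual = sha256_of(manifest_path.parent / entry["file"])
        if actual != entry["sha256"]:
            mismatches.append(entry["file"])
    return mismatches
```

Digest checking only proves the bytes match what was announced; it says nothing about what went into the training run, which is why the fuller standard also demands open corpora and reproducible recipes.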

Governance models

Open-source AGI requires a governance layer: who decides what the next training run optimizes for, who may fork, and who bears liability when the model is misused. Candidate structures include standards-body consortia (analogous to the IETF), credibly neutral non-profits with supermajority decision rules, and on-chain governance experiments. None has yet scaled to the capital requirements of frontier training; this is the coordination frontier of the field.
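A supermajority decision rule is simple enough to state in code. The sketch below is a toy illustration only: the two-thirds threshold, the quorum handling, and the function name are assumptions for demonstration, not any existing body's charter.

```python
from fractions import Fraction

def passes_supermajority(votes_for: int, votes_against: int,
                         threshold: Fraction = Fraction(2, 3)) -> bool:
    """A proposal passes only if the 'for' share of cast votes meets the threshold.

    Fractions avoid the floating-point edge case where 2/3 rounds just below 0.6667.
    """
    total = votes_for + votes_against
    if total == 0:
        return False  # no votes cast: an empty poll cannot change anything
    return Fraction(votes_for, total) >= threshold
```

The hard part, as the paragraph above notes, is not the arithmetic but deciding who gets a vote and how the rule binds a training run that costs hundreds of millions of dollars.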

Risks and open questions

The core objection — that open weights accelerate misuse — is serious and unresolved. The counter-argument is that closed concentration is also an alignment failure, in which legitimacy and accountability are ceded to a handful of firms. The wiki treats both risks as load-bearing and does not presume either side has already won. The practical test is whether open-source AGI can match frontier capability on a budget the public sector is willing to fund.