
Information Warfare

When compute abundance becomes a weapon aimed at shared reality.

Information warfare is not new, but the 2020s combination of cheap generative models, global platform reach, and highly targetable audiences has changed its scale and character. Deepfaked audio and video, large-scale synthetic-persona campaigns, and attempts to corrupt training data or retrieval indices for widely used models all fall under this heading. The abundance story is ambivalent: the same drop in the cost of producing plausible media that empowers creators also empowers propagandists.

Mechanisms

Contemporary influence operations combine at least three elements: generative content production (text, image, audio, video), distribution through platforms and messaging apps, and feedback loops that tune messaging based on engagement. Model poisoning — deliberate injection of misleading or politically tilted material into data that will be scraped and trained on — is a less mature but actively discussed vector, relevant both to Open-Source AGI ecosystems and to closed-weight commercial models.
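The third element, engagement-driven tuning, can be understood as a bandit problem: a campaign allocates exposure across message variants and shifts toward whatever draws the most engagement. The sketch below is purely illustrative; `variants` and `get_engagement` are hypothetical stand-ins for a message pool and a platform engagement signal, and epsilon-greedy selection is one simple policy among many. Understanding the loop matters for defenders, since detection often keys on exactly this coordinated convergence.

```python
import random

def epsilon_greedy_tuner(variants, get_engagement, rounds=1000, epsilon=0.1, seed=0):
    """Illustrative epsilon-greedy loop: exposure drifts toward the
    message variant with the highest observed engagement."""
    rng = random.Random(seed)
    counts = {v: 0 for v in variants}   # times each variant was shown
    totals = {v: 0.0 for v in variants} # cumulative engagement per variant
    for _ in range(rounds):
        if rng.random() < epsilon:
            choice = rng.choice(variants)  # explore occasionally
        else:
            # exploit the best observed mean (unseen variants count as 0)
            choice = max(variants,
                         key=lambda v: totals[v] / counts[v] if counts[v] else 0.0)
        totals[choice] += get_engagement(choice)
        counts[choice] += 1
    return counts  # exposure skews heavily toward high-engagement variants
```

The same dynamic is why platform-side friction (rate limits, reduced distribution for borderline content) degrades these loops: it weakens the engagement signal the tuner depends on.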

Defensive stacks

Proposed defenses include cryptographic content provenance (e.g., C2PA-style signed capture and edit histories), platform-level detection and friction, media-literacy investment, and — most relevant to the wiki's framework — Verifiable Identity layers that let recipients distinguish known humans and institutions from anonymous or synthetic sources without collapsing into universal deanonymization. None is a silver bullet; most work best in combination.
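The provenance idea reduces to binding a content hash and an edit history to a verifiable signature. The toy sketch below illustrates the shape of that check; it is not C2PA (which uses public-key certificates and COSE-signed manifests), and the HMAC shared key here is a deliberate simplification standing in for real asymmetric signatures.

```python
import hashlib
import hmac
import json

def sign_manifest(media_bytes, edit_history, key):
    """Toy provenance manifest: binds a SHA-256 content hash and an
    edit history to a signature over the canonicalized manifest."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "edits": edit_history,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes, manifest, key):
    """Recompute the hash and signature: tampering with either the
    media bytes or the recorded edit history invalidates the manifest."""
    expected = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "edits": manifest["edits"],
    }
    payload = json.dumps(expected, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (manifest["content_sha256"] == expected["content_sha256"]
            and hmac.compare_digest(sig, manifest["signature"]))
```

Note what this scheme does and does not establish: a valid manifest proves the media is unchanged since signing by a known key holder, not that the content is true. That gap is why provenance pairs with, rather than replaces, identity layers and media literacy.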

Non-partisan framing

Influence operations are run by many governments and many private actors across the political spectrum. The wiki refrains from attributing specific contemporary campaigns to specific actors beyond what is established in public indictments or formally declassified reporting, and notes that accusations of information warfare are themselves a form of information warfare. Skepticism cuts in every direction.

Open questions

Whether abundant synthetic media degrades public trust permanently or prompts an adaptive immune response (more skeptical audiences, better provenance norms) is unresolved. Whether open-weight models meaningfully accelerate influence operations beyond what closed APIs already enable is actively debated among researchers. The honest answer in both cases is "we do not yet know."