Rationalism (LessWrong)
Bayesian self-improvement as a religious practice.
Kernel
The rationalist community is a self-conscious experiment in collective epistemic improvement, launched by Eliezer Yudkowsky's Sequences (2006–2009), written first on Overcoming Bias and then on LessWrong. It treats cognitive biases as something like sin, Bayesian reasoning as a sacrament, and the alignment of artificial superintelligence as the single most consequential project in the universe. Most of the modern AI-safety institutional architecture (MIRI, the intellectual formation of Anthropic's founding team, the EA movement's epistemic infrastructure) is downstream of it.
Origins
Yudkowsky writes the Sequences on Overcoming Bias and then LessWrong (2006–2009). The community develops a shared vocabulary ("map and territory," "updateless decision theory," "steelmanning") and a moral seriousness about the cost of being wrong. CFAR, founded in 2012, tries to turn the Sequences into a teachable craft.
Doctrine
Beliefs are predictions; predictions pay rent in anticipated experience. Update on evidence. Notice when you're flinching. Take ideas seriously. The fate of the long-term future may turn on the quality of your current reasoning. AGI alignment is the most important problem.
Lineage
LessWrong → MIRI → Future of Humanity Institute → OpenAI's safety team → Anthropic. The rationalist diaspora accounts for a disproportionate share of the AI safety field, much of EA, and a significant fraction of the senior staff at the frontier labs.
Conflicts
The 2010s saw a quiet break between rationalists who grew increasingly worried about AI doom (culminating in Yudkowsky's 2023 "shut it all down" op-ed in Time) and those who joined the labs to try to make AGI safe from the inside. e/acc emerged as an explicitly anti-rationalist movement, treating the entire "alignment problem is real" frame as a paralyzing fiction.
Trajectory
The 2024 polemics between e/acc and the safety camp damaged the community's external image but reinforced its internal coherence. As of 2026 rationalism is no longer culturally ascendant, but it remains the dominant intellectual tradition inside the AI labs that actually matter.