Mistakes

My anti-portfolio cataloging failure modes to avoid traversing the same dead ends twice

  • posted: 2010-05-17
  • updated: 2025-02-06
  • status: never-ending
  • confidence: moderate

It was Nietzsche who said, "He who would learn to fly one day must first learn to stand and walk and run and climb and dance; one cannot fly into flying."1 But perhaps more important than learning how to do something right is learning what doing it wrong looks like.

For the past 15+ years, I've maintained a private document called my "mistakes page." It catalogs errors I've made in reasoning, judgment, and action. Not spelling mistakes or forgetting to buy milk—though these can be costly in their own way—but systematic errors that reveal flaws in my mental operating system.

The idea isn't novel. Ray Dalio calls this practice "mistake-based learning" in his book Principles.2 What's peculiar is how few people do this deliberately, despite how fundamentally useful it is.

Venture capitalists sometimes maintain an "anti-portfolio"—investments they passed on that became wildly successful. Bessemer Venture Partners4 famously declined early investments in Apple, Google, and Facebook, and they showcase these misses prominently on their website. It's both humbling and instructive.

My mistakes page functions similarly.

Taxonomy of Errors

Most mistakes fall into recognizable patterns, which I've categorized roughly as:

  • Blindspots — unknown unknowns; things you don't even realize you're missing. Example: in 2019, spent six months building a product without talking to potential users, missing a regulatory constraint that made the approach unviable.
  • Overconfidence — known risks you dismiss. Example: estimated a project would take two weeks despite a colleague suggesting six; it took eight. The pattern repeated three times before I implemented a "multiply by π" heuristic.
  • Emotional interference — feelings override reasoning. Example: accepted unfavorable terms in a negotiation out of dislike of conflict and a desire to be seen as reasonable, when walking away would have been rational.
  • False equivalence — treating different things as the same. Example: rejected a promising business partnership because a previous, superficially similar partnership had failed, despite completely different underlying dynamics.

What's striking about these error types is that they occur regardless of intelligence. In fact, research suggests that higher IQ can sometimes make people better at rationalizing bad decisions rather than avoiding them.3

The practice serves three purposes:

  • It creates a searchable database for pattern recognition
  • It diminishes the ego's ability to distort memory
  • It transforms mistakes from sources of shame to sources of insight

When Naval Ravikant says, "The closer you want to get to the truth, the more you have to update your views,"5 he's describing the process of constant error correction. The mistakes page formalizes this process.

The social tax on admitting mistakes

Why don't more people do this? Beyond the psychological discomfort, there's a social tax on admitting mistakes. Humans evolved in small tribes where reputation was paramount. Admitting error risked status loss—a potentially fatal outcome in ancestral environments.6 Our minds developed sophisticated mechanisms to preserve self-image and social standing, even at the cost of accuracy.

These mechanisms manifest in familiar ways:

  • Shifting responsibility ("The data was flawed")
  • Minimizing impact ("It wasn't that important anyway")
  • Retroactive prediction ("I knew it was risky")
  • Attention diversion ("But look at what went right!")

Organizations amplify these tendencies. In most workplaces, mistakes are career limiters. The incentive structure actively discourages honest accounting of error.

This creates an interesting dynamic: the people and organizations who most need to learn from mistakes are least likely to acknowledge them. Meanwhile, high-performers often appear to make more mistakes because they're willing to recognize and document them.

If you're convinced, implementation is straightforward:

  1. Create a private document (privacy reduces the social cost)
  2. When you recognize a mistake, document it immediately
  3. Include context, contributing factors, and lessons
  4. Review periodically to identify patterns
  5. Use these patterns to develop personal heuristics
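The first four steps can be sketched as a tiny script. This is purely illustrative — the file name, the category tags, and the entry format are my assumptions, not anything the practice prescribes; a plain notes app works just as well.

```python
"""Minimal sketch of a plain-text mistakes log.

Assumptions (not prescribed by the essay): entries live in a single
file named mistakes.txt, and each entry carries a free-form category
tag used for periodic pattern review.
"""
from datetime import date
from pathlib import Path

LOG = Path("mistakes.txt")

def record(category: str, description: str, lesson: str) -> None:
    """Append a dated entry immediately, before memory starts to distort it."""
    entry = (
        f"## {date.today().isoformat()} [{category}]\n"
        f"{description}\n"
        f"Lesson: {lesson}\n\n"
    )
    with LOG.open("a", encoding="utf-8") as f:
        f.write(entry)

def review(category: str) -> list[str]:
    """Return all entries tagged with a category, for periodic pattern review."""
    if not LOG.exists():
        return []
    entries = LOG.read_text(encoding="utf-8").split("## ")
    return [e for e in entries if f"[{category}]" in e]

record("overconfidence", "Estimated two weeks; took eight.",
       "Multiply initial estimates by pi.")
print(len(review("overconfidence")))  # count of overconfidence entries so far
```

The point of the category tag is step 4: once entries accumulate, grouping them by tag is what turns isolated embarrassments into visible patterns.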

The key is immediacy. Wait too long, and your brain will begin its work of self-protection, distorting the memory to preserve self-image.

Charlie Munger argues that knowing the major cognitive biases gives you a significant advantage: "You're paying less for your mistakes."7 A mistakes page operationalizes this insight, creating a personalized map of your particular failure modes.

After years of this practice, I've noticed something unexpected: my fear of making mistakes has decreased even as my ability to avoid them has improved. Extracting value from each error has made errors less threatening.

Mistakes

Below are actual entries from my mistakes page (with sensitive details modified). Perhaps you'll recognize some of your own error patterns here. What's notable is how predictable these mistakes appear in retrospect, yet how invisible they were in the moment. The most dangerous errors don't announce themselves—they arise from the blind application of generally useful heuristics to inappropriate situations.

Here's the counterintuitive truth: those who appear to make the fewest mistakes often learn the least. The goal isn't to eliminate errors—that's impossible—but to avoid repeating them. As physicist Richard Feynman put it: "The first principle is that you must not fool yourself—and you are the easiest person to fool."8

A mistakes page is simply an acknowledgment of this reality, and a practical tool to counter it.

Test-taking ability

Argued passionately that IQ tests primarily measure "test-taking ability" and socioeconomic status rather than any meaningful cognitive capacity. Dismissed the predictive validity literature as confounded. After reading Stuart Ritchie's "Intelligence" and the Minnesota Study of Twins Reared Apart, realized my position was largely motivated reasoning.

Lesson: Beware political motivations in scientific assessment; implement steel-man process for politically charged topics.

Biology substrate

Insisted that consciousness likely requires a biological substrate and that various philosophical "zombie" thought experiments demonstrated why digital consciousness was implausible. Failed to notice I was making unfalsifiable claims and moving the goalposts whenever computational analogies were provided.

Lesson: Create explicit, falsifiable criteria before entering metaphysical discussions or recognize that I'm expressing aesthetic preferences rather than testable hypotheses.

Remote work

Predicted that remote work would remain a niche arrangement affecting less than 10% of knowledge workers by 2025, dismissing it as impractical for collaboration-heavy roles. Reality proved dramatically different.

Lesson: Underestimated how quickly social equilibria can shift when incentives align; a technological capability often sits underused until a catalyst triggers adoption.

Cryptocurrency

Confidently asserted cryptocurrency was primarily speculative with no significant real-world utility forthcoming, predicting total ecosystem collapse within 3-5 years. Failed to recognize genuine innovation in DeFi, smart contracts, and non-sovereign money use cases.

Lesson: Distinguish between current implementation limitations and fundamental design constraints; technological adoption follows exponential rather than linear patterns.

Political polarization

Dismissed concerns about increasing polarization as cyclical complaints that occur in every generation, citing historical examples of political violence. Data subsequently showed unprecedented declines in cross-partisan marriage, friendship, and residential integration.

Lesson: Qualitative historical analogies should not override quantitative trend data; beware dismissing societal changes as "nothing new under the sun."

Productivity systems

Maintained for years that explicit productivity systems were unnecessary complications for naturally organized people. After implementing a systematic GTD approach during a particularly complex project, discovered I had been regularly dropping 15-20% of commitments I'd made and failing to follow through on roughly one-third of my "someday/maybe" ideas.

Lesson: The absence of felt stress doesn't indicate optimal performance; measure actual results rather than perceived competence.

Minimum wage

Maintained for years that minimum wage increases consistently reduced employment levels for low-skilled workers, based on simple supply-demand models. After reviewing meta-analyses and natural experiments, discovered the relationship is far more complex, with many instances of minimal employment effects.

Lesson: Simplistic models from introductory textbooks often fail in complex systems; seek out natural experiments and empirical data over theoretical elegance.

Spaced repetition learning

Rejected the efficacy of spaced repetition learning systems as "over-engineered" compared to my own intuitive study methods. After implementing Anki for a critical professional exam and seeing a 43% improvement in retention with less study time, realized I had been dramatically overestimating my natural learning efficiency.

Lesson: Subjective impressions of learning are often uncorrelated with objective measures; test rather than theorize about cognitive optimization techniques.

COVID-19 masks

Insisted that wearing masks during the COVID-19 pandemic was ineffective for preventing transmission, based on early WHO guidance and mechanistic reasoning about particle sizes. As evidence accumulated showing significant reductions in transmission rates, failed to update my position for two crucial months due to identity attachment to my initial public statements.

Lesson: Scientific positions should never become identity badges; create pre-commitment to update views when evidence thresholds are crossed regardless of social costs.

Social media effects

Dismissed concerns about social media's effects on mental health as "moral panic," citing lack of experimental evidence. Failed to appreciate the methodological challenges of studying population-level effects and ignored consistent correlational findings across diverse populations. Later longitudinal studies showed stronger causal links than I had acknowledged.

Lesson: Absence of perfect evidence is not evidence of absence; methodological limitations should temper confidence rather than justify dismissal.

Replication crisis in psychology

Believed the replication crisis in psychology primarily affected a few specific subfields and methodologies. After tracking replication attempts across multiple disciplines, realized the issues were far more pervasive and structural than I had acknowledged, affecting fields from medicine to economics.

Lesson: Scientific credibility problems typically reflect institutional and incentive failures rather than individual misconduct; evaluate research ecosystems rather than isolated papers.


  1. Nietzsche, F. (1883-1885). Thus Spoke Zarathustra. 

  2. Dalio, R. (2017). Principles: Life and Work. Simon & Schuster. 

  3. Kahan, D.M., et al. (2017). "Science Curiosity and Political Information Processing." Political Psychology, 38: 179-199. 

  4. Bessemer Venture Partners. "Anti-Portfolio." 

  5. Ravikant, N. (2019). Joe Rogan Experience #1309. 

  6. Kurzban, R. (2012). Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Princeton University Press. 

  7. Munger, C. (2007). USC Law School Commencement Address. 

  8. Feynman, R.P. (1974). "Cargo Cult Science." Caltech Commencement Address.