Philosophy

Why Simple?

Every organization drowns in data. Dashboards multiply. Reports stack up. Models grow more complicated. Data lakes turn into data swamps. And yet decisions don't get better. They get slower, less confident, and harder to explain and defend.

And what happens when decisions produced by a complex process begin to fail? Tracing the root causes can take months, if it happens at all. Let the finger-pointing begin.

This isn't a failure of intelligence. It's a failure of design. The evidence is clear, and it comes from fields where the stakes couldn't be higher: when it matters most, simple decision systems consistently outperform complex ones. Not despite their simplicity, but because of it.

The Nobel Laureate Who Ignored His Own Formula

When the future is uncertain, the most dangerous thing one can do is take the past too seriously.

In 1990, Harry Markowitz received the Nobel Prize in Economics for proving that an optimal portfolio exists, one that maximizes return while minimizing risk. The math is elegant. It accounts for correlations between assets, expected returns, and variance. It is, by every academic standard, the right answer.

So when Markowitz invested his own retirement savings, he ignored it. Instead, he split his money equally across a handful of funds, a strategy so plain it barely qualifies as a strategy at all.

Was he being lazy? Researchers later put the question to a head-to-head test. They pitted twelve sophisticated optimization models (Bayesian, non-Bayesian, the full arsenal of modern portfolio theory) against Markowitz's simple equal-split rule across seven real-world allocation problems. None of the twelve could beat it.

The reason is counterintuitive but important. The complex models were exceptionally good at explaining the past. They fit historical data beautifully. But markets are noisy, and what those models were really doing was memorizing the noise: overfitting patterns that wouldn't repeat. The equal-split rule, because it estimates nothing, couldn't overfit anything. It was more robust precisely because it was less precise.

The researchers calculated that with 50 assets, the optimization models would need 500 years of data before they could reliably outperform the simple rule.
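The equal-split rule is simple enough to state in a few lines of code. This is a minimal sketch of the 1/N idea, not the study's methodology; the fund returns below are hypothetical numbers chosen for illustration.

```python
# Minimal sketch of the 1/N (equal-split) allocation rule.
# Note what's absent: no expected returns, no covariance matrix,
# no historical data to estimate -- and therefore nothing to overfit.

def equal_split_weights(n_assets):
    """The 1/N rule: every asset gets the same weight."""
    return [1.0 / n_assets] * n_assets

def portfolio_return(weights, returns):
    """Weighted sum of per-asset returns for a single period."""
    return sum(w * r for w, r in zip(weights, returns))

weights = equal_split_weights(4)            # four funds -> 0.25 each
period_returns = [0.04, -0.01, 0.02, 0.03]  # hypothetical one-period returns
print(round(portfolio_return(weights, period_returns), 4))  # prints 0.02
```

The contrast with mean-variance optimization is the point: the optimizer needs estimated inputs that are noisy in practice, while 1/N needs only a count of the assets.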

DeMiguel, V., Garlappi, L., & Uppal, R. (2009). Optimal versus naive diversification. Review of Financial Studies, 22(5), 1915–1953. As discussed in: Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Science, 3(1), 20–29.

Three Questions That Outperformed a Roomful of Doctors

A tool that people can understand and trust in the moment will always outperform a tool that's theoretically superior but practically ignored.

In a Michigan hospital, the coronary care unit had a crowding problem. Physicians, understandably cautious, were sending 90% of patients with chest pain to intensive care. The unit was overwhelmed, and the patients who truly needed critical attention were harder to reach.

The hospital had tried the sophisticated approach: a system called the Heart Disease Predictive Instrument, which gave physicians a chart of roughly 50 probabilities to check against patient symptoms, feeding results into a calculator. It was thorough. It was evidence-based. And it wasn't working, because no one under time pressure could use it reliably.

Researchers replaced it with a fast-and-frugal decision tree: three yes-or-no questions. Does the ECG show a specific anomaly? Is chest pain the chief complaint? Is any one of a short list of additional risk factors present? Each question either triggers an immediate decision or leads to the next question. A physician can walk through it in seconds.
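The tree's structure can be sketched in code. This is an illustration of the fast-and-frugal shape described above, not the clinical instrument itself; the question wording and parameter names are paraphrased.

```python
# Sketch of a fast-and-frugal tree in the style of Green & Mehr's
# three-question rule. Each question either decides immediately or
# falls through to the next one -- no probabilities, no calculator.

def send_to_coronary_care(ecg_anomaly: bool,
                          chest_pain_chief_complaint: bool,
                          any_other_risk_factor: bool) -> bool:
    if ecg_anomaly:                      # Q1: specific ECG anomaly -> admit
        return True
    if not chest_pain_chief_complaint:   # Q2: chest pain not the chief complaint -> regular bed
        return False
    return any_other_risk_factor         # Q3: admit only if another risk factor is present

# A physician can trace any patient through this in seconds:
print(send_to_coronary_care(False, True, False))  # prints False (regular bed)
```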

The result: the simple tree was more accurate at predicting actual heart attacks than both the complex probability model and the physicians' own clinical judgment. It cut the false-alarm rate nearly in half while catching more of the real cases. The physicians at that Michigan hospital adopted the tree, and years later, they were still using it.

Green, L.A., & Mehr, D.R. (1997). What alters physicians' decisions to admit to the coronary care unit? Journal of Family Practice, 45, 219–226. As discussed in: Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Science, 3(1), 20–29.

The Checklist That Cut Surgical Deaths by a Third

The right few things, done consistently, beat comprehensive knowledge applied inconsistently.

Surgeon and researcher Atul Gawande made a distinction that should haunt every organization: the difference between errors of ignorance and errors of ineptitude. Errors of ignorance are mistakes we make because we don't know enough. Errors of ineptitude are mistakes we make because we fail to use what we already know.

Modern medicine's biggest problem, Gawande argued, is the second kind. The knowledge exists. But with thousands of things to track, steps get skipped. Critical actions fall through the cracks, not because anyone is incompetent, but because the system doesn't enforce the basics.

His solution was almost absurdly simple: a surgical checklist. Ninety seconds long. Targeting the three biggest killers: infection, bleeding, and unsafe anesthesia. Confirm the patient's identity. Confirm the procedure. Confirm antibiotics were given on time. Confirm the team has discussed potential complications.

When the WHO tested this checklist across eight hospitals in eight countries, major complications dropped by over a third. The checklist didn't teach surgeons anything new. It simply ensured that what they already knew got done, every single time.
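The logic of a checklist is gating: nothing proceeds until every item is explicitly confirmed. A toy sketch of that idea, with items paraphrasing the ones named above:

```python
# Toy sketch of checklist gating. The checklist doesn't add knowledge;
# it makes skipping a known-critical step impossible to do silently.

SURGICAL_CHECKLIST = [
    "patient identity confirmed",
    "procedure confirmed",
    "antibiotics given on time",
    "team discussed potential complications",
]

def cleared_to_proceed(confirmed: set) -> bool:
    """True only when every checklist item has been checked off."""
    return all(item in confirmed for item in SURGICAL_CHECKLIST)

print(cleared_to_proceed({"patient identity confirmed"}))  # prints False
print(cleared_to_proceed(set(SURGICAL_CHECKLIST)))         # prints True
```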

Haynes, A.B., Weiser, T.G., Berry, W.R., et al. (2009). A surgical safety checklist to reduce morbidity and mortality in a global population. New England Journal of Medicine, 360(5), 491–499.

The Team That Bought a Pennant Race for a Third of the Price

Knowing what to ignore is as important as knowing what to measure.

In 2002, the Oakland Athletics had one of the lowest payrolls in Major League Baseball, at roughly $44 million, against teams spending three times that. By every conventional measure, they had no business competing.

General Manager Billy Beane's insight wasn't to find better data. It was to ignore the wrong data. Baseball's traditional evaluation system was rich with information: batting averages, scouting reports, a player's build, his swing mechanics, whether he "looked like" a ballplayer. Beane stripped all of it away and focused on two metrics the market systematically undervalued: on-base percentage and slugging percentage. These two numbers, more than any others, predicted a team's ability to score runs. Everything else was noise dressed up as expertise.
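Both metrics have standard, public definitions. A short sketch using those definitions; the stat line below is invented for illustration, not any real player's record.

```python
# The two numbers Beane's front office kept, in their standard forms.
# Everything the scouts argued about is simply absent from the inputs.

def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    """OBP = (H + BB + HBP) / (AB + BB + HBP + SF): how often a batter reaches base."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slugging_percentage(singles, doubles, triples, home_runs, at_bats):
    """SLG = total bases / AB: how far a batter advances per at-bat."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

obp = on_base_percentage(hits=150, walks=70, hbp=5, at_bats=500, sac_flies=5)
slg = slugging_percentage(singles=100, doubles=30, triples=5, home_runs=15, at_bats=500)
print(round(obp, 3), round(slg, 3))  # prints 0.388 0.47
```

Note what batting average (H / AB) discards relative to OBP: walks. That omission is exactly the inefficiency the A's exploited.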

Scouting reports were explicitly set aside. Batting average, the sport's most iconic statistic, was treated as misleading. The A's front office endured ridicule for drafting players that traditional scouts dismissed.

Oakland won 103 games that season, including a historic 20-game winning streak. They matched the performance of the New York Yankees, who spent $125 million to win 103 games of their own.

Lewis, M. (2003). Moneyball: The art of winning an unfair game. W. W. Norton & Company.

The Principle

"The more unpredictable a situation, the more information you need to ignore."

— Gerd Gigerenzer

These stories span finance, medicine, surgery, and sports. The environments are different. The stakes are different. But the pattern is the same, and Gigerenzer spent decades studying it.

This cuts against every instinct. When faced with uncertainty, the natural impulse is to gather more data, add more variables, build a more comprehensive model. But in uncertain environments (which is to say, nearly all real-world environments), additional complexity doesn't add signal. It adds noise. And noise, once it's embedded in a model, looks exactly like signal until it fails.

Simple doesn't mean simplistic. It means doing the hard analytical work to identify the few factors that actually drive outcomes, and then building systems that make those factors impossible to miss and easy to act on. It means having the discipline to ignore everything else.

The organizations that make the best decisions aren't the ones with the most data. They're the ones that have figured out which data matters.

Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Science, 3(1), 20–29.

This Is My Work

I build decision tools and processes that follow this principle.

I take complex analytical environments (places where teams are overwhelmed by data, models, and competing metrics) and replace the clutter with clear systems built around the few things that actually matter. The result is faster decisions, higher confidence, and tools that people actually trust and act on.