The Case Against Experts

Little, Brown & Co.

Poor Imhotep. Thanks to the movies, you know him as the Mummy, the ancient Egyptian sorcerer whose corpse lurches to life to wreak havoc on tomb raiders. In real life, Imhotep was a revered healer and a progenitor of modern medicine. You wouldn’t have wanted to follow all of his advice, given that salves of animal blood and dung were among his remedies. But he got a surprisingly large number of things right, including treating infections with mold and the surgical removal of tumors; there’s a 3,500-year-old book of medical treatments widely attributed to him that endures today at the New York Academy of Medicine. Not that the masses of Imhotep’s day got much benefit from his wisdom—it was dispensed only to royalty and others among the highly privileged.

Today, of course, supposed expert advice is fairly sprayed at all of us from every TV, newspaper, and Web page. But in terms of getting it right, it’s been largely downhill from Imhotep. Let’s not even get started on shoot-from-the-hip pop gurus like stock picker Jim Cramer, new-age healer Deepak Chopra, or reality-TV tycoon Donald Trump, who usually don’t even attempt to offer solid evidence for their claims. More disconcerting is that even our most credentialed, data-driven experts—scientists, economists, academics, military advisers, high-powered consultants—often end up dropping the ball or providing conflicting advice. Consider one frustrating example: after hearing for years that sunscreen is critical to lowering our cancer risk, we’ve more recently heard not only that high-SPF sunscreens end up providing insufficient protection for most people, but also that many popular brands of sunscreen may even promote skin cancer. More confusing still, another group of findings insists that the bigger worry should be not getting enough sunlight, because sunlight helps our bodies produce cancer-risk-lowering vitamin D. Hey, no problem, you can just take vitamin D pills—except studies conflict on whether those pills are likely to cause more harm than good.

But the problem is actually much worse than this sort of expert-advice debacle would suggest. In fact, medical, economic, and business-management researchers have studied the reliability of published research in their own fields and concluded that most of it is flawed, exaggerated, or just plain wrong. No wonder: scientists and other top-shelf experts are often highly biased, shockingly sloppy, and in a surprising number of cases outright frauds—and I’m relying on formal studies of these problems when I make these claims. Medical researchers, for example, have noted that about two thirds of the findings published in top medical journals end up being contradicted. Leading researchers such as John Ioannidis, a researcher at Harvard, Tufts, and the University of Ioannina in Greece, suspect that most of what doctors are taught is actually off base. The $95 billion we spend on medical research each year in the U.S. has barely boosted our average life spans since 1978, and if you subtract the improvement due to the drop in smoking rates, we’re generally living only months longer than we used to. It’s not that medicine is particularly troubled: when economists examined a range of papers published in major economics journals a while back, they estimated that the wrongness rate of the findings was essentially 100 percent. Why, with that sort of flawed expertise flowing to our leaders and decision makers, it’s a wonder we haven’t witnessed some sort of global economic near-collapse in recent years! Meanwhile, the usefulness of all the expert dieting advice with which we’re bombarded can be assessed with a glance at the population in almost any public place in America.

The heart of the problem is that published studies by scientists, economists, and other experts tend to falsely show that their theories are right. Surveys of these fields reveal that fraud, careerism, mismeasurement, suppression of data, lousy analysis, politics, poor self-policing, and many other serious shortcomings are fairly widespread even among the most respected researchers and institutions. According to one scientist’s highly conservative estimate, there are at least 1,500 cases of research fabrication every year in the U.S. alone—and there’s reason to think the real number may be many times larger—but only about 20 are actually identified and reported. About one third of medical researchers surveyed admitted to at least one act of misconduct in designing, conducting, interpreting, or reporting studies within the previous three years. And two thirds of published drug-trial findings fail to report all of the harmful symptoms that turned up in the test subjects.

Despite these problems, some minority of advice is good, and even critically important—we don’t want to conclude that experts don’t know what they’re talking about when they tell us to sock money away in a 401(k), to get our children vaccinated, or not to smoke. But it’s hard to pick the less obviously good advice out of the constant stream of flawed and conflicting findings: fat is bad for you, fat is good for you, the economy is recovering, the economy faces a double-dip recession.

Part of the problem is that experts don’t have much incentive to get things right. We reward them for pronouncements that are appealing, that seem trustworthy, and that are dressed up with solid-sounding numbers, especially if the resulting advice hands us a simple, unqualified, universal solution to our problem. Cut out carbs and you’ll lose weight. You can never go wrong buying real estate. Take a baby aspirin daily and you’ll lower your heart-attack risk. Unfortunately, we live in a complex world in which most advice will have only some chance of being partly helpful some of the time for some of us. But who wants to listen to that sort of wishy-washy advice?

We can do better in sifting through expert advice. For starters, we should be highly wary of the latest breakthrough findings—that stuff almost always turns out to be wrong. Instead, look for a consensus of study data that has been building for years, even if—especially if—the conclusions aren’t very exciting or are couched in qualifications. And we need to resist our proclivity for accepting expert advice we like, and instead look to advice that pushes our noses into the messy, uncertain truth. We can’t always trust our common sense, and we don’t always know good advice when we hear it, but if we work hard in a well-informed way, most of us can get pretty good at telling when an expert pronouncement is fishy—which, unfortunately, is usually the case.

Freedman is the author of Wrong: Why Experts Keep Failing Us—and How to Know When Not to Trust Them. He looks at medical findings in the news at
