It's not too soon to start thinking about New Year's resolutions, and here's mine, as a medical writer: I will not report on any amazing new treatments for anything, unless they were tested in large, randomized, placebo-controlled, double-blind clinical trials published in high-quality peer-reviewed medical journals. If that means not telling NEWSWEEK's readers about, say, a new magnetized-water cure for osteoporosis, cancer and autism—well, there are infomercials to fill that gap. The risk that I might overlook the next Lipitor is outweighed by the danger of hyping the next laetrile, the discredited 1970s-era miracle cancer drug made from apricot pits that failed to cure Steve McQueen.
Since there are more than 4,000 medical journals in the world, being able to ignore the great majority of them will save me a lot of work. But that's not why I'm doing it. I was shamed into it by a new book from R. Barker Bausell, a biostatistician at the University of Maryland. When a researcher tests a new drug or treatment, Bausell supplies the statistical analysis that journalists demand to see in a serious scientific study, even if they don't understand a word of it. From about 1999 to 2004, he was director of research for UM's center to study complementary and alternative medicine. This is a scientific term for "something you heard about from your hairdresser, who thinks she saw it on 'Oprah' "—a category that by Bausell's reckoning includes acupuncture, homeopathy, healing magnets and assorted herbs and supplements. Most of the treatments he encountered were for subjective conditions such as pain or depression. These are especially sensitive to the placebo effect, the tendency of some patients in clinical trials—typically about a third—to get better with fake magnets, inert pills or needles stuck in random places. Bausell thought the least you could ask of an actual treatment was that it work better than a fake one, but when he examined the studies critically, hardly any did. So he wrote "Snake Oil Science" to educate journalists and the public that "just because someone with a Ph.D. or M.D. performs a clinical trial doesn't mean that [it] possesses any credibility whatsoever … The vast majority are worse than worthless."
In fact, Bausell's book could give one the idea that the two most dangerous words in medicine are "studies show." Researchers, even those without a direct financial stake in the outcome of a trial, often have a psychological investment in what they're testing. Their papers get published because the editors of journals in fields like homeopathy start from the premise that the whole thing isn't a preposterous hoax, as Bausell and most mainstream doctors believe. If someone really does cure cancer—whether a drug company researcher or a Tibetan herbalist—The New England Journal of Medicine or The Journal of the American Medical Association will be happy to publish the news.
Even those who should know better can be fooled by personal experience. Joint pain typically waxes and wanes in cycles. People are most inclined to try a new remedy when the pain is worst, which is also just when it is about to start getting better on its own—but the magnet or the bracelet gets the credit. You should file such accounts under "anecdotal evidence," and then throw away the file. But journalists needing to liven up those dull statistics are notorious suckers for anecdotes—Bausell cites a respected New York Times writer who wrote, apropos of a large study that cast doubt on using glucosamine for arthritis, that she was sure it worked anyway, because it helped her dog. Also, her doctor told her "at least a third of his patients have benefited" from it. Does that figure sound familiar?
Well, you won't be hearing anything like that from me in 2008. Not that it will keep people from trying the amazing new miracle of oxygenated water. Studies show it treats depression, diabetes and impotence at least a third of the time. If you doubt it, just see how many ads pop up when you Google "laetrile."