Scientific misconduct is exceedingly rare and extremely serious. Charges have been brought alleging plagiarism, faked data or falsified results. The latest case, however, involves the manner in which a researcher strung together a set of equations in order to find a message hidden in a stack of raw data. To reach for a metaphor, this is like bringing a felony indictment for jaywalking.
Two psychologists, both of whom have testified for the lead industry and one of whom has received tens of thousands of dollars in research grants from the industry, have filed misconduct charges against the scientist who first linked "low" levels of lead to cognitive problems in children. They don't suspect that Herbert Needleman of the University of Pittsburgh stole, faked or fabricated data. Rather, they say, he selected the data and the statistical model (the equations for analyzing those data) that show lead in the worst possible light. That's a dispute usually aired in research journals. Now it's become a federal case, and even those scientists most diligent about pursuing misconduct are uneasy. "If it gets to which statistical model is appropriate," a leading government fraud-buster told NEWSWEEK, "it gets real hard to believe a misconduct charge."
The case began last year. Psychologists Sandra Scarr of the University of Virginia and Claire Ernhart of Case Western Reserve University filed charges of scientific misconduct against Needleman with the National Institutes of Health. The allegations center on a 1979 paper. It describes how Needleman and colleagues measured the lead in baby teeth, looking for a link between lead and intelligence. NIH told Pittsburgh to convene a panel of inquiry.
The panel's report, submitted in December and obtained by NEWSWEEK, found that Needleman didn't "fabricate, falsify or plagiarize." It did have problems with how he decided whether or not to include particular children in his analysis, but called this "a result of a lack of scientific rigor rather than the presence of scientific misconduct." The panel found Needleman's statistical model "questionable," though. On that basis, the university launched an investigation.
Scarr, Ernhart and the Pittsburgh panel all condemn Needleman for not using a different model, one that, say, factored in the age of each child. If he had, they say, lead would not have had an impact on IQ. But last year Environmental Protection Agency scientist (and recipient of a MacArthur Foundation "genius" award) Joel Schwartz reanalyzed Needleman's data. He factored in age explicitly. "I found essentially the identical results," he says.
Another criticism addresses whether Needleman ignored data he didn't like. Scarr alleges that he looked at the children's lead levels and IQ scores, and only then "decided in or out for each child." In fact, "the reasons for exclusion can be found in the protocol," says econometrician Hugh Pitcher of Battelle Memorial Institute, who analyzed the 1979 data when he was at EPA. They include such things as the child's having a head injury.
The selection, says Pitcher, was done before the researchers knew the kids' IQs.
This is not to say Needleman's work was perfect, just that any lapses did not change the outcome. Ernhart insists this is not good enough. "He doesn't feel it's necessary to do things the way you're supposed to," she says. "You have the sense that he was going to demonstrate the effects of lead no matter what."
How does this case affect lead policy? "We don't even use Needleman's study anymore," says EPA's Schwartz: it has been superseded by research showing effects of lead at even lower levels (NEWSWEEK, July 15, 1991). The politicization of misconduct may be just starting, though. Crying fraud, says an NIH scientist, "can be used to railroad people you don't like."