Monday, February 13, 2012

Is "Bad Science" an Oxymoron?

Bad Science
By Alfred Burdett

Having written on occasion about scientific fraud, scientific data manipulation and outright scientific nonsense, I was invited by its creators to comment on a poster entitled Bad Science, the psychology behind bad research, offered as a resource on the ClinicalPsychology.net website.

"Scientists," states an introduction to the poster, "are some of our most trusted members of society ... [but] many scientists are not as trustworthy as we would like to believe. By engaging in various kinds of scientific misconduct, such as falsifying or fabricating data, scientists are getting the results they want without the honesty and integrity that we expect of the scientific institution."

As a scientist of almost 50 years' standing, it's news to me that scientists are among the most trusted members of the community. Personally, I would trust a scientist no more and no less than I would trust a banker or a politician. And that is surely not being unduly cynical, for as everyone knows, when their work impinges on important economic or political questions, scientists can be remarkably responsive to the interests of those funding their work, whether it be the tobacco industry, the drug industry, the arms industry or a government with an agenda on climate change, HIV/AIDS, the psychiatric treatment of political dissidents, or the use of tactical nuclear weapons in populated areas. So it seems to me that the people at ClinicalPsychology.net are proceeding on the basis of a questionable assumption.

There is no question, however, that data falsification (are falsified data truly data at all in the scientific sense?) and data fabrication are not activities to be encouraged, so when ClinicalPsychology.net tells us to read their "infographic" to find out how to fix the problem, we are prepared to read on.

However, what we find is little in the way of the promised account of the "psychology behind bad science" or effective means to "fix the problem," but mainly a series of assertions about the prevalence of scientific fraud. "Shady scientific research is rampant," we are told, which sounds bad, but what does it mean? Well, for one thing, "One in three scientists admit to using questionable research practices," which include "dropping data points based on gut feeling" and "changing the results or design of a study due to pressure from a funding source."

So now we begin to have some idea what they are talking about, but it nevertheless remains vague. What, for example, does it mean to drop a data point "based on gut feeling"? Presumably it means that the scientist believes that they have a plausible justification for dropping the data point in question: "I noticed some crud in that tube when I was adding the reagents," or "the rat that died looked sick before we began feeding it GM corn." Adoption of such rationalizations for the selection of data is not considered acceptable practice, but it has a venerable history in science, and while few would condone it, the question of whether it constitutes "bad science" is less clear than many might suppose.

Scientific knowledge is not a collection of facts; it is a system of laws, principles and patterns which allow us to infer from a given set of facts another heretofore unknown set of facts, including facts about the past, present or future. Thus science as a process of discovery is concerned primarily not with any specific facts, but with ideas about the relationships among facts in the observable world. Because there is uncertainty about all particular observed facts, there is no overwhelming reason to reject a good idea because it is inconsistent with some observation that "gut feeling," i.e., some plausible argument, suggests is false.

So whether rejecting data based on "gut feeling" is "bad science" depends, I would say, on circumstances. A scientist with an interesting hypothesis who rejects a contradictory observation on plausible, or even implausible, grounds is doing no more than scientists of the greatest fame, Newton, Galileo and Einstein among them, did not hesitate to do.

Whether overlooking or concealing evidence that contradicts one's hypothesis is to be condemned depends on both the motive and the competence of the individual concerned.

Anyone who fakes data to produce apparent support for a proposition favorable to the entity funding the research is, in my view, not guilty of "bad science," because what they are doing is not science at all. It is merely part of a conspiracy to deceive the public for financial gain. It would be best to deal with such people like any other fraudsters, whether swindling financiers or vendors of quack cancer cures.

However, when a scientist says, and more importantly believes, that things must be this way rather than that whatever the experimental data may show, then they are following a grand tradition in science. Thus, for example, when Einstein was asked by a student what he would have done if Sir Arthur Eddington's famous 1919 eclipse observations of the bending of starlight, which confirmed general relativity, had instead disproved it, he replied, "Then I would have felt sorry for the dear Lord. The theory is correct."

And that example of scientific arrogance clarifies the importance of competence when it comes to judging data. If you are a Nobel prize winner, potential or actual, you may need a good deal of arrogance to force your ideas into the mainstream, and you may well be justified in ignoring data that in the light of present knowledge are seemingly incongruent with your ideas. But if you are a novice or a hack, you'd be very much wiser to stick with the data you have whatever they seem to show, and leave it to those more perceptive than yourself to rationalize away whatever fails to fit the divine plan, assuming that God got it right.

Various other supposed facts about the trickiness of scientists are offered, supported by a reference list comprising merely a set of unclickable URLs, with no description of the content to which they point and no means of examining that content other than laboriously typing each URL into a browser navigation window, which is something very few people will bother to do. So, for example, the first reference:

http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124,

is one heck of a line to type accurately, and it gives no idea that it leads to a paper by John P. A. Ioannidis in PLoS Medicine titled "Why Most Published Research Findings Are False."

It is not immediately apparent what to make of a scientific paper claiming that (virtually) all scientific papers are false. No doubt it would have amused Epimenides, the Cretan philosopher immortalized for the paradoxical claim that "all Cretans are liars."

What, on examination, Ioannidis appears to be saying is that most published scientists don't understand the statistical methods they are using, which may be correct but is largely irrelevant. Statistics are not as important as many people seem to think. Scientists such as Isaac Newton and Charles Darwin got along perfectly well without statistics, and most scientists today use statistics only because there is a rather pointless tradition that insists that they do so. In fact, common sense is often a better guide to the scientific significance of a set of data than statistics. That P < 0.05 does not mean that the result is of any scientific importance. Thus, as Yonatan Loewenstein comments on the Ioannidis paper, it is probably closer to the truth to say that most scientific papers are true but useless.
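The point that statistical significance is not scientific importance can be made concrete with a small sketch. The numbers here (group sizes, a 0.02-standard-deviation difference in means) are hypothetical, chosen purely for illustration: with enough observations, even a trivial difference sails far below the P < 0.05 threshold.

```python
import math

# Hypothetical summary statistics: two groups of 100,000 observations
# whose means differ by 0.02 standard deviations -- a difference that
# would be scientifically negligible on almost any scale.
n = 100_000   # observations per group
diff = 0.02   # observed difference in group means
sd = 1.0      # within-group standard deviation

# Two-sample z statistic for the difference in means.
se = sd * math.sqrt(2 / n)
z = diff / se

# Two-sided p-value from the standard normal distribution.
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"z = {z:.2f}")   # about 4.47
print(f"p = {p:.1e}")   # on the order of 1e-05: "significant" yet trivial
```

With n large enough, any nonzero difference will clear the P < 0.05 bar; the p-value says nothing about whether a difference of 0.02 units matters for anything.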

The other references seem to be of rather trivial significance or totally irrelevant, but having been compelled for the purpose of this review to look them up, I have listed them below with clickable links for the convenience of any who may wish to examine them.

Carey, B. 2011. Fraud Case Seen as a Red Flag for Psychology Research. NY Times.
Fanelli, D. 2009. How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE 4(5): e5738.
Roche, T. 2011. OMG ALIENS!!1!! Or is it Just More Fake Science News? Techyum.
Goldacre, B. 2011. I foresee that nobody will do anything about this problem. Bad Science Blog.
Pinto, N. 2011. Women's Funding Network sex trafficking study is junk science. CityPages.
Van Guyt, M. 2011. What to Do About Scientific Fraud in Psychology? Psychology Today.
Corbyn, Z. 2011. Researchers failing to make raw data public. Nature.
Petroc, P. et al. Riot control: How can we stop newspapers distorting science? Guardian.
Martin, B. 1992. Scientific fraud and the power structure of science. Prometheus 10(1): 83-98.
 
But what is it that we are given to understand will make scientists more trustworthy? Three recommendations are offered.

1. Make raw data available to other scientists.
2. Hold journalists accountable.
3. Introduce anonymous publication.

Making raw data available to other scientists would allow others to perform their own analyses of the data, but it won't stop people fiddling the so-called raw data. Furthermore, the demand for publication of raw data is opposed for some valid reasons. For example, a scientist might spend years gathering data that they will then analyze in a series of publications. But if they are required to release the raw data with their first publication, then it is open to anyone to make use of the data and thus take credit for what has been largely the work of another. So although publishing raw data is, I believe, highly desirable in some cases, e.g., global weather data, it does not seem reasonable to demand this in all cases.

In fact, many scientists already publish their raw data, and when scientists withhold raw data without adequate reason, others can draw their own conclusions.

Holding journalists accountable is an excellent idea, but what it has to do with science in particular I am not sure.

The idea of anonymous publication is a new one on me, but it looks like a non-starter. I cannot imagine any scientist bothering to publish anything if it were not to have their name on it.

Overall, I have to say that the ClinicalPsychology.net poster on "Bad Science, the psychology behind bad research" delivers less than it promises and unhelpfully promotes a poorly defined concept of "bad science" that obscures rather than clarifies thought about the scientific method. I would say that anything that does or might advance understanding of reality is science, and that anything that obscures understanding of reality is not "bad science" but something altogether different: careerism, perhaps, or politics, PR, and many other things of which we certainly have far too much. But to call such activities "bad science" is not only to coin an apparent oxymoron but to suggest that what is essentially detrimental to the pursuit of knowledge could with some tweaking be made useful and productive. This I do not believe.

Institutionalized science in the West is now deeply corrupt. We are, I believe, past peak science. As universities expand and fill their ranks with second-, third- and fourth-rate faculty engaging increasing numbers of graduate students of minimal talent in the scientific enterprise, the prospect for a great 21st century for Western science dims in proportion. Which is not to say that the Western nations do not still have first-rate scientists. But the environment is not favorable to creating new generations of committed scientists of genius. For one thing, that would be elitist and intellectualist and other things that we now apparently cannot tolerate.

Nevertheless, I believe ClinicalPsychology.net's poster will prompt any thoughtful person to speculation and reflection that can clarify their understanding of the scientific process and its relation to society and the political and economic forces that shape society. Perhaps these and other comments on the poster will prompt further reflection by its creators leading to a deeper examination of the interesting questions raised.

To those who may find my somewhat unorthodox view of the scientific process of interest, I recommend Paul Feyerabend's fascinating study Against Method.
