23 September 2007

The Fallibility of Research Findings--Another Reminder

Following closely on the publication of this caution, this NYTimes Magazine piece provides further reasons for careful scrutiny of research findings.

Most people do not have the training in epidemiology and statistics required to judge a research paper's findings. That seems particularly true of many postmodernist-minded journalists and academicians. The public should beware of uncritical acceptance of research findings in the media and on campuses. Recent epidemiological overreach in estimating civilian casualties in Iraq highlights the ludicrous extent to which the instruments of data analysis can be prostituted to the service of political journalism.

But the rare political whoring of epidemiology is just the tip of the iceberg. Given the inherent ambiguity of most research data, some disciplined parsimony is needed in transforming data into actionable information. Most scientists (outside of climate modeling) are able to admit the limitations of their findings and methods. Sadly, "science" journalists too often lack the subtlety and nuance that reporting on science requires. They succumb too easily to the temptation of the "breakthrough", the "blockbuster headline", the sensational patina laid over what is, in reality, likely to be ho-hum data.

This is not nearly so much a problem for scientists as it is for the public. But if you read the article, you will see that it is also a problem for science.


16 September 2007

Learn to Reserve Judgment Until the Facts are All In

A lot of people trust research findings implicitly. Experienced researchers and data analysts do not.

"There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research."

From "Why Most Published Research Findings Are False"
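
The quoted framework reduces to a simple calculation: the probability that a claimed finding is true (the positive predictive value, PPV) follows from the prior odds R of a true relationship, the study's power, the significance threshold, and a bias term. Here is a minimal Python sketch of that calculation; the function name and the parameter values are my own illustrative choices, not figures from the paper:

```python
# A minimal sketch of the positive-predictive-value framework quoted above:
#   PPV = ((1 - beta) * R + u * beta * R)
#         / ((1 - beta) * R + u * beta * R + alpha + u * (1 - alpha))
# where R is the prior odds of a true relationship, (1 - beta) is study power,
# alpha is the significance threshold, and u is a bias term (the fraction of
# otherwise-negative analyses reported as positive anyway).
# The parameter values below are illustrative assumptions, not from the paper.

def positive_predictive_value(R, power, alpha=0.05, bias=0.0):
    """Probability that a claimed (statistically significant) finding is true."""
    beta = 1.0 - power
    true_positives = (1.0 - beta) * R + bias * beta * R
    false_positives = alpha + bias * (1.0 - alpha)
    return true_positives / (true_positives + false_positives)

# A well-powered test of a plausible hypothesis: PPV ~ 0.94
print(positive_predictive_value(R=1.0, power=0.80))

# An exploratory field probing many unlikely relationships, with modest bias:
# PPV ~ 0.07, i.e. a claimed finding is far more likely false than true.
print(positive_predictive_value(R=0.05, power=0.20, bias=0.20))
```

Note how the PPV collapses once prior odds are low, power is weak, and bias creeps in, which is exactly the combination of conditions the abstract lists.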

Of course, since this conclusion rests on simulation--and we know from climate science how simulations and computer models can be biased by bad data--it deserves the same suspicion. And rightly so. There is no such thing as perfect data, and some algorithms used in models and simulations are recursive enough that small input errors compound into absurd results that diverge wildly from run to run.
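
As a toy illustration of that last point (my own example, not one from the article), consider the logistic map, a standard recursive update rule. In its chaotic regime, a one-part-in-a-billion error in the input data produces completely divergent runs within a few dozen iterations:

```python
# Toy illustration: the logistic map x_{n+1} = r * x_n * (1 - x_n) is chaotic
# at r = 4.0, so a tiny error in the input data grows exponentially and the
# two "runs" soon have nothing in common but the formula that generated them.

def iterate(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

clean = iterate(0.3)
perturbed = iterate(0.3 + 1e-9)  # "bad data": off by one part in a billion

for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: {clean[n]:.6f} vs {perturbed[n]:.6f}")
```

By around step 30 the two trajectories are unrelated, despite starting from inputs that any measurement instrument would call identical.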

So, if you are tempted to place too much faith in an area of science that is particularly subject to biasing and simulation error--such as climate science--it might be wise to reserve judgment until the picture can be clarified by better research.

Hat tip: FuturePundit.
