How can scientific claims be evaluated?

Although laudable, it is unrealistic to expect substantially increased political involvement from scientists. Another proposal is to expand the role of chief scientific advisers[1], increasing their number, availability and participation in political processes. Neither approach deals with the core problem of scientific ignorance among many who vote in parliaments. Perhaps we could teach science to politicians? It is an attractive idea, but which busy politician has sufficient time?

In practice, policy-makers almost never read scientific papers or books. The research relevant to the topic of the day — for example, mitochondrial replacement, bovine tuberculosis or nuclear-waste disposal — is interpreted for them by advisers or external advocates.

And there is rarely, if ever, a beautifully designed double-blind, randomized, replicated, controlled experiment with a large sample size and unambiguous conclusion that tackles the exact policy issue. In this context, we suggest that the immediate priority is to improve policy-makers' understanding of the imperfect nature of science.

The essential skills are to be able to intelligently interrogate experts and advisers, and to understand the quality, limitations and biases of evidence. We term these interpretive scientific skills. These skills are more accessible than those required to understand the fundamental science itself, and can form part of the broad skill set of most politicians. To this end, we suggest 20 concepts that should be part of the education of civil servants, politicians, policy advisers and journalists — and anyone else who may have to interact with science or scientists.

Politicians with a healthy scepticism of scientific advocates might simply prefer to arm themselves with this critical set of knowledge.

We are not so naive as to believe that improved policy decisions will automatically follow. We are fully aware that scientific judgement itself is value-laden, and that bias and context are integral to how data are collected and interpreted. What we offer is a simple list of ideas that could help decision-makers to parse how evidence can contribute to a decision, and potentially to avoid undue influence by those with vested interests. The harder part — the social acceptability of different policies — remains in the hands of politicians and the broader political process.

Of course, others will have slightly different lists. Our point is that a wider understanding of these 20 concepts by society would be a marked step forward.

Differences and chance cause variation. The real world varies unpredictably. Science is mostly about discovering what causes the patterns we see. Why is it hotter this decade than last? Why are there more birds in some areas than others? There are many explanations for such trends, so the main challenge of research is teasing apart the importance of the process of interest (for example, the effect of climate change on bird populations) from the innumerable other sources of variation (from widespread changes, such as agricultural intensification and the spread of invasive species, to local-scale processes, such as the chance events that determine births and deaths).

No measurement is exact. Practically all measurements have some error. If the measurement process were repeated, one might record a different result. In some cases, the measurement error might be large compared with real differences. Thus, if you are told that the economy grew by 0.13% last month, there is a moderate chance that it may actually have shrunk. Results should be presented with a precision that is appropriate for the associated error, to avoid implying an unjustified degree of accuracy.
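As a minimal sketch of this point, the snippet below (with made-up readings; none of these numbers come from the text) takes repeated measurements of the same quantity, estimates the error from their spread, and reports the mean only to the precision that error supports.

```python
# Hypothetical repeated readings of one quantity; their spread is the
# measurement error, so the reported precision should respect it.
import statistics

readings = [10.21, 9.87, 10.05, 10.33, 9.94, 10.12, 9.78, 10.26, 10.02, 9.91]

mean = statistics.mean(readings)
sem = statistics.stdev(readings) / len(readings) ** 0.5  # standard error of the mean

# Printing "10.049" would imply more accuracy than the data justify:
# with a standard error near 0.06, two decimal places are plenty.
print(f"measured value: {mean:.2f} +/- {sem:.2f}")
```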

Bias is rife. Experimental design or measuring devices may produce atypical results in a given direction. For example, determining voting behaviour by asking people on the street, at home or through the Internet will sample different proportions of the population, and all may give different results. Because studies that report 'statistically significant' results are more likely to be written up and published, the scientific literature tends to give an exaggerated picture of the magnitude of problems or the effectiveness of solutions.
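A toy simulation can make the publication-bias effect concrete. The sketch below assumes an invented true effect of 0.2 measured by many small, noisy studies; the numbers are illustrative only, not drawn from any study mentioned here.

```python
# Simulate many noisy studies of one true effect, then compare the average
# of all studies with the average of only the 'published' (significant) ones.
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.2  # assumed real effect size (arbitrary units)

all_estimates, published = [], []
for _ in range(1000):
    estimate = random.gauss(TRUE_EFFECT, 0.5)  # one study's noisy result
    all_estimates.append(estimate)
    if estimate > 0.98:  # roughly where p < 0.05 falls for this noise level
        published.append(estimate)  # only 'significant' results get written up

print("true effect:           ", TRUE_EFFECT)
print("mean over all studies: ", round(statistics.mean(all_estimates), 2))
print("mean over published:   ", round(statistics.mean(published), 2))  # ~1.2
```

The published subset overstates the effect several-fold, which is exactly the exaggerated picture described above.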

An experiment might be biased by expectations: participants provided with a treatment might assume that they will experience a difference and so might behave differently or report an effect.

Researchers collecting the results can be influenced by knowing who received the treatment. The ideal experiment is double-blind: neither the participants nor those collecting the data know who received what. This might be straightforward in drug trials, but it is impossible for many social studies. Confirmation bias arises when scientists find evidence for a favoured theory and then become insufficiently critical of their own results, or cease searching for contrary evidence.
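For flavour, here is a minimal sketch of how such blinding can be arranged; the participant codes and group names are invented for illustration.

```python
# Assign participants to groups at random, but keep the key sealed so that
# neither participants nor data collectors can tell who received what.
import random

random.seed(42)
participants = [f"P{i:03d}" for i in range(1, 21)]  # hypothetical IDs
random.shuffle(participants)

# The allocation key pairs each blind code with 'drug' or 'placebo'.
allocation = {pid: ("drug" if i % 2 == 0 else "placebo")
              for i, pid in enumerate(participants)}

# Outcome data are recorded against the codes alone...
print(sorted(allocation)[:3])  # e.g. ['P001', 'P002', 'P003']
# ...and the key is opened only after all measurements are locked in.
```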

Bigger is usually better for sample size. The average taken from a large number of observations will usually be more informative than the average taken from a smaller number of observations. That is, as we accumulate evidence, our knowledge improves.

This is especially important when studies are clouded by substantial amounts of natural variation and measurement error. Thus, the effectiveness of a drug treatment will vary naturally between subjects. Its average efficacy can be more reliably and accurately estimated from a trial with tens of thousands of participants than from one with hundreds.
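The point is easy to demonstrate numerically. The sketch below (with an invented 'true' efficacy of 0.3 and unit-variance noise; nothing here comes from a real trial) runs many simulated trials of two sizes and compares how much their estimates scatter.

```python
# Bigger samples give steadier averages: the scatter of trial estimates
# shrinks roughly as 1/sqrt(n).
import random
import statistics

random.seed(0)
TRUE_EFFICACY = 0.30  # assumed average treatment response

def trial_estimate(n):
    """Average response observed in one simulated trial of n subjects."""
    return statistics.mean(random.gauss(TRUE_EFFICACY, 1.0) for _ in range(n))

for n in (100, 10_000):
    estimates = [trial_estimate(n) for _ in range(200)]
    print(f"n={n:>6}: estimates scatter by about +/- "
          f"{statistics.stdev(estimates):.3f}")
# The larger trials cluster about ten times more tightly around 0.30.
```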

I don't think our prospects for evaluating scientific credibility are quite that bad. Credible scientists can lay out: here's my hypothesis; here's the next study we'd like to do to be even more sure. This suggests a couple more things we might ask credible scientists to display: here are the results of which we're aware, published and unpublished, that might undermine our findings.


Correlation does not imply causation. For example, "people who exercise have a lower risk of heart attack" is a statement of correlation, but "exercise lowers the risk of heart attack" is a statement of causation.
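A small simulation shows how such a correlation can appear without any causal link. In the toy model below, a third factor (age; all numbers invented) drives both exercise and heart-attack risk, producing a strong correlation even though exercise has no effect on risk in the model.

```python
# Correlation without causation: age lowers exercise and raises risk,
# so exercise and risk end up negatively correlated with no causal link.
import random

random.seed(7)

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

exercise, risk = [], []
for _ in range(10_000):
    age = random.uniform(20, 80)                     # the hidden common cause
    exercise.append(-age / 10 + random.gauss(0, 1))  # older -> less exercise
    risk.append(age / 10 + random.gauss(0, 1))       # older -> higher risk
print(round(corr(exercise, risk), 2))  # about -0.75, yet no causation
```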

It is very hard to prove causation, that A causes B. In order to do so, one needs to show that A must always be present for B to occur, and that B will always occur when A is present (that A is both a necessary and a sufficient cause of B). An example of how this can be done in science is the use of Koch's postulates for determining whether a microorganism causes a particular disease: the microorganism must be found in organisms suffering from the disease but not in healthy ones; it must be isolated from a diseased organism and grown in pure culture; the cultured microorganism must cause disease when introduced into a healthy organism; and it must be re-isolated from that newly diseased host and shown to be identical to the original. Because of limits on time, funding, or ethical considerations, often the best that can be done is to evaluate a relationship using logic and the laws of probability.

When looking for a cause of an illness, scientists would look for large differences between people who had and did not have exposure to a suspected cause. They would check to see that those differences are present between groups that would otherwise be at similar risk for developing an illness. Scientists would also check that a logical reason for a suspected relationship exists.
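In code, that comparison boils down to a relative risk; the counts below are hypothetical, chosen only to show the arithmetic.

```python
# Compare illness rates between exposed and otherwise-similar unexposed groups.
exposed = {"ill": 90, "healthy": 910}    # 1,000 people exposed to suspect cause
unexposed = {"ill": 30, "healthy": 970}  # 1,000 comparable unexposed people

risk_exposed = exposed["ill"] / sum(exposed.values())
risk_unexposed = unexposed["ill"] / sum(unexposed.values())

print(f"risk if exposed:   {risk_exposed:.1%}")    # 9.0%
print(f"risk if unexposed: {risk_unexposed:.1%}")  # 3.0%
print(f"relative risk:     {risk_exposed / risk_unexposed:.1f}x")  # 3.0x
# A large ratio between otherwise-comparable groups is the 'large
# difference' that points suspicion of cause at the exposure.
```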

Are new ideas or results viewed critically and with skepticism? Scientists should ideally presume a new idea wrong until it is well supported with evidence. Pseudoscientists are not skeptical of their own results, but are skeptical of the results of others.

Types of Arguments and Persuasive Devices

Certain techniques are commonly used to attempt to convince the reader of the validity of an argument.

Be aware of some of these techniques when you are evaluating a source.

Straw Man. An argument directed not at someone's actual position, but at a weaker version (the "straw man") created by the opponent.

This weaker version would seem, for example, illogical or irrelevant.

Ad Hominem ("to the man"). An argument directed at an individual rather than at the individual's position. The person themselves is attacked, rather than the evidence or the logic of their argument.

False Dilemma. Two choices are proposed, and one of these is more easily attacked.

This leaves the other choice as the only obvious possibility. However, in reality, there may be many other alternatives or complexities which are not addressed.

Begging the Question. This type of argument (also called "circular reasoning") assumes the truth of its conclusion as part of the reasoning leading up to that conclusion.

Slippery Slope. An argument that a small first step will inevitably trigger a chain of events ending in some significant (usually negative) outcome.

Who is making the claim, and do they stand to benefit from it? For example, an organization may promote a daily vitamin supplement for brain health, but if that same organization is funded by vitamin makers or sells the supplement, you have reason to doubt that claim.

Where is the claim published? We find and read scientific claims across all types of media, from TV to newspaper to TikTok. In general, one should be skeptical of scientific claims made on social media, unless those claims are backed by a reputable scientific organization or scientific consensus in the literature. Primary research is the best place to verify claims, but it can often be hard to read.

Instead, focus on outlets such as textbooks, scientific review articles, or popular science magazines for easy-to-read, reliable facts. Other places to look are trustworthy news agencies and government sites. In Biology Now , we include a handy chart of where to find reliable and accurate information.

Has the claim been peer-reviewed and published in a reputable scientific journal? If the claim comes from a single study, does that study follow the scientific method?


