Evaluating Scientific Evidence

Evaluating Evidence

"The important thing is not to stop questioning. Curiosity has its own reason for existing." - Albert Einstein

Not all research is created equal. Scientific studies often reach conclusions that contradict one another, and scientists express opposing viewpoints on the same subject. This can confuse consumers and heighten perceptions of risk and hazard. This brochure offers a clearer view of what can seem like a mysterious scientific process and provides context for critically evaluating the scientific literature.


Hierarchy of Scientific Evidence

When examining the strength of scientific evidence, a number of factors come into play. Among the most important, however, is study design. In the hierarchy of evidence, the strongest evidence comes from randomized controlled trials (RCTs) and intervention studies. By comparison, the weakest evidence comes from case reports and expert opinion.

Types of Scientific Journal Articles

Full-length research paper: the majority of articles published in scientific journals. A full-length paper presents the rationale for a specific study, the methods used to perform it, the results gathered, and a discussion of the conclusions drawn from them.

Industry interest piece: summarizes the work conducted in a study and discusses recommendations (resulting from the research) that may be helpful to a specific scientific industry. It does not include a detailed account of the study.

Hot topic: report on a study that is not complete at the time of publication. Because the study topic is so groundbreaking or important to the field, the journal allows publication of the preliminary results.

Technical note: report on a new or improved method in the specific scientific field.

Invited review: written by authors suggested by journal editors; usually summarizes a specific scientific topic or recent symposia.

Letter to the Editor: usually has a short word limit (~300 words) and reflects topics of concern for the readers. It may include corrections made to articles after publication or even rebuttals from disagreeing scientists.



IFIC Study Evaluation Checklist

Q1. Do the title and abstract reflect the study?
Yes / No. If no, view results skeptically.

Q2. Is the study useful, novel, and/or relevant to humans?
Yes / No. If no, view results skeptically.

Q3. Is the hypothesis clearly stated?
Yes / No. If no, view results skeptically.

Q4. Was the study methodology described in detail?
Yes / No.
If no: do the authors cite a paper for the methods?
Yes / No. If no, view results skeptically.

Q5. Are the methods valid, accurate, and reliable?
Yes / No. If no, view results skeptically.

Q6. Does the analysis of the results make sense?
Yes / No. If no, view results skeptically.

Q7. Are the conclusions supported by the data?
Yes / No. If no, view results skeptically.

Q8. Are there conflicts of interest (personal, academic, financial, or conflicts of commitment)?
Yes / No. If yes, compare findings to the totality of evidence.

Q9. Does the study fit into the totality of evidence?
Yes / No. If no, view results skeptically.


Study Design Cheat Sheet

Observational designs

Cohort: cohort studies follow a group of people who share common characteristics and assess whether exposure to a certain risk factor leads to a certain outcome.

Case-control: case-control studies compare people who have a certain outcome (cases) with similar people who do not (controls), looking back to assess differences in exposure to a risk factor.

Cross-sectional: cross-sectional studies examine associations at a single point in time to assess the prevalence of exposure to a risk factor or of a disease outcome.

Ecological: ecological (epidemiological) studies assess the rate of a disease outcome in relation to population-level factors.

Experimental designs

Preventive trial: preventive trials enroll healthy individuals to assess whether an intervention prevents illness.

Clinical trial: clinical trials enroll a particular type of person or group of people and follow a pre-defined intervention plan.

Diagnostic trial: diagnostic trials evaluate tests or procedures for detecting a particular disease or condition.


Glossary of Scientific Terms


Association

A relationship. In research studies, association means that two characteristics (sometimes also called variables or factors) are related so that if one changes, the other changes in a predictable way. An association does not necessarily mean that one variable causes the other.


Bias

Any factor, recognized or not, that distorts the findings of a study. In research studies, bias can influence the observations, results, and conclusions of the study and make them less accurate or believable.


Causation

Two variables are causally related if changes in the value of one cause the other to change. Two variables can be associated without having any causal relation, and even if two variables have a causal relation, their correlation can be small or zero.


Correlation

A measure of linear association between two (ordered) lists. Two variables can be strongly correlated without having any causal relationship, and two variables can have a causal relationship and yet be uncorrelated.
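To make "measure of linear association" concrete, the Pearson correlation coefficient is one common such measure. The sketch below is illustrative only (the function name and sample data are invented for this example, not drawn from any study):

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient: the covariance of xs and ys
    divided by the product of their standard deviations.
    Returns a value between -1 and +1."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A perfect linear relationship gives r = 1.0; reversing one list gives r = -1.0.
# A value of r near 0 indicates little linear association.
```

Note that a value of r near +1 or -1 only indicates a strong linear association; as the glossary entry says, it does not by itself establish causation.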

Confounding Variable

An unforeseen and unaccounted-for variable that jeopardizes the reliability and validity of an experiment's outcome (e.g., age, gender, smoking, income).


Reliability

The extent to which a measure, procedure, or instrument yields the same result on repeated trials.

Statistical Significance

The probability that an observed effect in a research study arose by chance, typically expressed as a P-value (e.g., P < 0.05).
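One hedged way to see where a P-value comes from is a permutation test: it estimates the P-value directly as the fraction of random chance relabelings that produce an effect at least as large as the one observed. The function name and data below are illustrative, not a standard from any particular study:

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means.
    The P-value is estimated as the fraction of random relabelings
    whose absolute mean difference is at least as extreme as the
    observed one."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel the pooled data at random
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm
```

If the two groups are very different, almost no random relabeling reproduces the observed gap, so the estimated P-value is small; if the groups are identical, every relabeling is at least as extreme and the P-value is 1.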


Validity

The degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. A method can be reliable, consistently measuring the same thing, but not valid.
