Stage 4: Appraising evidence

The quality of evidence is the confidence in the veracity of the information or data, and depends on the source, design and quality of each study or piece of information. In contrast with evidence-based medicine (EBM), where randomised controlled trials are ranked highest and observational studies lowest, in rapid risk assessment the evidence may be limited, so there may be greater reliance on observational studies, including case reports, and on specialist expert knowledge. For most infectious disease threats, only observational data are available.

Certain factors affect the quality of evidence. Factors that may increase quality include the method of generating data and the study design (analytical versus descriptive epidemiology), the strength of association, evidence of a dose–response relationship, and consistency with other studies or expert opinion. Factors that may decrease quality include reporting bias, inconsistency, and conflicting evidence or opinion.

Ideally, a rapid risk assessment should not rely on a single study or piece of evidence. Information should be interpreted cautiously if only one research group reports on an infection or disease association, even across multiple publications. Poor evidence or information should not be used for the rapid risk assessment unless it is the only data available; in that case, any uncertainties should be documented in the information table.

Triangulation is a technique widely used in qualitative research to address internal validity by using more than one data collection method to answer a research question. The body of evidence should be considered as a whole, and triangulation should confirm (or refute) the internal validity of the findings. Triangulation of evidence, including specialist expert knowledge, may be important for reaching a consensus. Ensure a minimum of two to three data sources and agreement between them (e.g. two experts, or an expert and the literature). The sources of evidence, and the agreement (or lack of agreement) between them, should be clearly stated in the information table. Based on the consistency, relevance and external validity of the available information, the quality of evidence is graded as good, satisfactory, or unsatisfactory (definitions and examples are given in Checklist 3).
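The triangulation rule above (at least two independent, agreeing sources) can be sketched as a simple check. This is purely illustrative, not part of the ECDC guidance; the function name, data shape, and `min_sources` parameter are assumptions made for the example.

```python
# Illustrative sketch only (an assumption, not ECDC-specified code):
# check whether a finding is triangulated by at least `min_sources`
# independent sources that agree on the same conclusion.
def is_triangulated(sources, min_sources=2):
    """sources: list of (source_name, conclusion) pairs, e.g.
    [("expert A", "foodborne"), ("literature", "foodborne")].
    Returns True when at least `min_sources` sources reach the
    same conclusion."""
    counts = {}
    for _name, conclusion in sources:
        counts[conclusion] = counts.get(conclusion, 0) + 1
    return any(n >= min_sources for n in counts.values())
```

For example, two experts reaching the same conclusion would satisfy the check, while an expert and the literature in disagreement would not; disagreement between sources should then be recorded in the information table.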

Checklist 3: Evaluating the quality of evidence (for information tables)

Quality of evidence is the confidence in the information: study design, quality, and other factors are assessed and judged on consistency, relevance, and validity. Each grade (good, satisfactory, or unsatisfactory) is defined below, with examples of the types of information/evidence.
Good - Further research is unlikely to change confidence in the information.
  • Peer-reviewed published studies where design and analysis reduce bias, e.g. systematic reviews, randomised controlled trials, outbreak reports using analytical epidemiology
  • Textbooks regarded as definitive sources
  • Expert group risk assessments, or specialized expert knowledge, or a consensus opinion of experts
Satisfactory - Further research is likely to have an impact on confidence in the information and may change the assessment.
  • Non-peer-reviewed published studies/reports
  • Observational studies/surveillance reports/outbreak reports
  • Individual (expert) opinion
Unsatisfactory - Further research is likely to have an impact on confidence in the information and is likely to change the assessment.
  • Individual case reports
  • Grey literature
  • Individual (non-expert) opinion
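As an illustrative sketch only, the evidence types in Checklist 3 could be encoded as a lookup from evidence type to grade. The groupings below are assumptions drawn from the examples in the checklist, not an official ECDC classification, and the names are hypothetical.

```python
# Illustrative sketch: map evidence types to the quality grades in
# Checklist 3. The groupings are assumptions based on the examples
# above, not an official ECDC classification.
EVIDENCE_GRADES = {
    "good": [
        "systematic review",
        "randomised controlled trial",
        "analytical outbreak report",
        "definitive textbook",
        "expert group risk assessment",
    ],
    "satisfactory": [
        "non-peer-reviewed study",
        "observational study",
        "surveillance report",
        "individual expert opinion",
    ],
    "unsatisfactory": [
        "individual case report",
        "grey literature",
        "individual non-expert opinion",
    ],
}

def grade_evidence(evidence_type: str) -> str:
    """Return the Checklist 3 grade for a given evidence type,
    or 'unknown' if the type is not listed."""
    for grade, types in EVIDENCE_GRADES.items():
        if evidence_type in types:
            return grade
    return "unknown"
```

In practice the grade is a judgement on the whole body of evidence, not a mechanical lookup; a sketch like this only captures the starting point suggested by the type of source, before consistency, relevance, and validity are weighed.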

References

Entire text copied from:

  • European Centre for Disease Prevention and Control. Operational guidance on rapid risk assessment methodology. Stockholm: ECDC; 2011. ISBN 978-92-9193-306-8. doi: 10.2900/57509.
