IMPORTANT NOTICE / 8 August 2020
The purpose of this toolkit
This toolkit is for anyone who is reading a document which purports to be a statement of what is and is not known about the environmental health effects of a chemical or group of chemicals.
The point of the toolkit is to help users appraise the credibility of a synthesis of evidence (such as a literature review or an expert opinion), so that they can come to a more informed opinion about how far they should believe its conclusions.
Any literature or evidence review in the peer-reviewed literature, or even a REACH or pesticides registration dossier, could be a suitable subject for the toolkit, so long as it is hypothesis-driven. (That is, the review should be attempting to answer a question which could in principle be answered by a single experiment, but which is being answered by review either because the answer may already be known or because the experiment is not practical to carry out.)
This toolkit will not tell you if a review is sound or its answer is correct. Rather, its purpose is to guide you through a structured appraisal of the methodological strengths and weaknesses of a review of evidence, to give you a clearer sense of how much confidence you can have in its findings.
This toolkit is therefore akin to the checklist a journal editor might ask a peer-reviewer to complete when appraising a review submitted for publication. We also present it in the hope of stimulating discussion of best practice in the conduct of reviews.
To our knowledge this is the first time anyone has published an appraisal toolkit for reviews of the toxicity of chemicals; we hope it is instructive.
Who this toolkit is for
This toolkit is aimed at users of synthesised evidence, such as scientific advisers, policy-makers, lobbyists, advocates, researchers and consultants.
All these groups, particularly those directly involved in framing chemicals policy, are heavy users of synthesised science and therefore often have to evaluate how that evidence is presented, be it in academic reviews, reports, expert opinions, platform presentations or other formats.
Up to this point, however, there has been little guidance on how to do this in an efficient and consistent manner. We hope this toolkit will help with that.
What this toolkit does not do
There is no attempt in this toolkit at developing an overall numerical or qualitative rating for the quality of a review. We do not believe it is valid or helpful to do so; while the strengths and weaknesses of individual reviews can be analysed, it is not normally possible to distil such an analysis into an overall score of a review’s credibility, and therefore not possible to compare the relative credibility of one review with another.
This toolkit therefore aids development of a structured critique of a review, broken down into domains which characterise the review process, but goes no further than that.
Source materials for the LRA toolkit
Ades, A.E.; Caldwell, D.M.; Reken, S.; Welton, N.J.; Sutton, A.J.; Dias, S. (2012): NICE DSU Technical Support Document 7: Evidence synthesis of treatment efficacy in decision making: a reviewer’s checklist. NICE.
CONSORT Group (2010): The CONSORT Statement.
Critical Appraisal Skills Programme (2013): Ten questions to help you make sense of a review.
Garg, Amit X.; Hackam, Dan; Tonelli, Marcello (2008): Systematic review and meta-analysis: when one study is just not enough. In Clin J Am Soc Nephrol 3 (1), pp. 253–260.
Gee, David (2013): Evaluating evidence: some quality criteria for critiques of expert assessments of the EDC literature. (Unpublished.)
Higgins, Julian P. T.; Green, Sally (Eds.) (2008): Cochrane handbook for systematic reviews of interventions. Chichester, England, Hoboken, NJ: Wiley-Blackwell.
Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D. G. (2009): Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. In PLoS Medicine 6, e1000097.
Mulrow, C. D. (1987): The medical review article: state of the science. In Ann. Intern. Med. 106 (3), pp. 485–488.
Oxman, A. D.; Cook, D. J.; Guyatt, G. H. (1994): Users’ guides to the medical literature. VI. How to use an overview. Evidence-Based Medicine Working Group. In JAMA 272 (17), pp. 1367–1371.
Shea, Beverley J.; Grimshaw, Jeremy M.; Wells, George A.; Boers, Maarten; Andersson, Neil; Hamel, Candyce et al. (2007): Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. In BMC Med Res Methodol 7, p. 10.
SIGN (2012): Methodology Checklist 1: Systematic Reviews and Meta-analyses.
SIGN (2012): Notes on Methodology Checklist 1: Systematic Reviews and Meta-analyses.
Planned development of toolkit
- Improved user interface
- PDF of instructions for download and print
- Improvement of elicitation questions and statements
- Further development of guidance for critiquing the synthesis domain