Ramsay Lewis

What is Meta-analysis? An Introduction to Systematic Reviews and Meta-analysis for Policy Makers



Many governments are moving towards evidence-based policy-making (EBPM). While this may represent an improvement in the policy-making process, policy-makers and analysts may struggle with the vast amounts of research available to them. Systematic reviews and meta-analyses could be useful for providing summaries of the research literature, but many policy-makers and analysts are unfamiliar with these methods.


This post provides an introduction. It answers the following questions: what are systematic reviews and meta-analyses? How can they be useful to policymakers? What are their advantages over other kinds of research? And, what should policy-makers look for when critically evaluating systematic reviews and meta-analyses? The paper concludes with final reflections and resources for policy-makers and analysts wishing to evaluate systematic reviews or conduct their own systematic reviews.


Note: A version of this post was originally published in Agenda Politica, a peer-reviewed academic publication in Brazil.

 

Policy-making is an immensely complex process. In considering a policy problem, policy-makers and policy analysts weigh several factors, including the ethical implications of the various policy options, their political ideology, their own values, public opinion, and fiscal considerations, among others. Further, policy decisions are influenced by countless contextual factors, such as culture and history, as well as bureaucratic, societal, and political structures (Davies, 2004; Miljan, 2012). In addition to these factors, scientific research and other forms of evidence often feature highly in a government’s policy decisions (Nutley, Walter, & Davies, 2007).


Evidence-based policy-making (EBPM) refers to a practice of making policy that is, at its core, based on the best available evidence. As defined by Davies (2004), EBPM is “an approach that helps people make well-informed decisions about policies, programs and projects by putting the best available evidence from research at the heart of policy development and implementation” (p. 3). This approach to policy-making does not exclude other considerations, but it emphasizes evidence as a basis for policy decisions rather than untested views of groups or individuals (Davies, 2004). While the idea of using evidence to inform policy-making is not new, EBPM is undergoing renewed popularity among many governments (Coburn, Honig, & Stein, 2009; Davies, 2004; Fox, 2005; Levin, 2013; Shaxson, 2005; Solesbury, 1999; Young, 2013). Many governments around the world, including the United States, the United Kingdom, Australia, Canada, and Northern European Countries, have recently expressed desires to move increasingly toward EBPM (Cabinet Office, 1999; Davies, 2004; Fox & Oxman, 2001; Fox, 2005).


It is often the task of public policy-makers and analysts to review research or other forms of evidence in order to make or provide advice for policy decisions. If policy decisions are based on evidence, it is often these actors who have made it so.


But they face some barriers to evidence-based policy-making. One barrier is that the amount of “evidence” on a given topic is too great for policy-makers or analysts to adequately process. The past few decades have seen exponential growth in scientific research. The overwhelming amount of research in medicine and health fields has been especially well described (Castillo & Abraham, 2008; Dawes & Sampson, 2003; Noone, Warren, & Britain, 1998; Tricco, Tetzlaff, & Moher, 2011), but the problem extends to a variety of policy areas, such as education (Levin, 2013; Slocum, Spencer, & Detrich, 2012; Spencer, Detrich, & Slocum, 2012), economics (Walker et al., 2012), and environmental science (Nursey-Bray et al., 2014; Pullin & Stewart, 2006), among others.


Because of the sheer volume of research in various domains, it has been difficult to integrate and combine studies into clear, usable conclusions (Dawes & Sampson, 2003; Ringquist & Anderson, 2013). Moreover, within a body of literature, research often conflicts: it is very common for some researchers to find an effect they deem “significant” and for others to find no such effect. Given that very large bodies of literature may be relevant to a policy decision, and that the research within that literature may present conflicting evidence, the prospect of realizing EBPM can be daunting for a policy-maker or analyst.


In response to quickly growing bodies of research, many have advocated the use of systematic reviews and meta-analysis to inform decisions (Murad et al., 2014; Ried, 2006). Systematic reviews of research literature have the potential to be useful in summarizing and integrating research on a given topic and may allow policy analysts to more easily evaluate evidence for a policy decision. They have several advantages over single research studies and traditional narrative research reviews that make them likely to become increasingly important to policy-making, especially as it moves further towards being evidence-based. However, systematic reviews and meta-analyses are likely under-utilized by policy-makers, possibly because of unfamiliarity with the method and limited ability to judge its quality (Laupacis & Straus, 2007).

This post provides an introduction to systematic reviews and meta-analyses so that policy-makers and analysts may become more familiar with these research methods and how they may be of value in informing policy decisions.


It aims to answer the following questions:

  • What are systematic reviews and meta-analyses?

  • How can they be useful to policy-makers?

  • What are their advantages over other kinds of research?

  • And, what should policy-makers look for when critically evaluating systematic reviews and meta-analyses?

The first few sections of this paper answer these questions in turn. The paper concludes with some final reflections and some further resources for conducting a systematic review and meta-analysis as well as evaluating their methodological quality.


 

Systematic Reviews and Meta-analyses

A systematic review is an integrated and comprehensive summary of a body of research literature on a given topic. Systematic reviews are different from traditional narrative research reviews because of the systematic way that they are conducted (Fox, 2005; Tricco et al., 2011). Whereas traditional narrative reviews present some research on a topic selected by the author, systematic reviews include all studies that meet a pre-specified set of criteria. Systematically reviewing the literature allows these reviews to provide more reliable findings and thus may provide better evidence than traditional narrative reviews (Antman, Lau, Kupelnick, Mosteller, & Chalmers, 1992; Fox, 2005; Oxman & Guyatt, 1993).


Characteristics of systematic reviews include:

  • a reproducible methodology,

  • a systematic search that is likely to include all studies that meet specified criteria,

  • and an assessment of the validity of the studies included.

Systematic reviews often, but not always, include meta-analyses.


Meta-analysis is a set of techniques for synthesizing the quantitative results from multiple empirical studies (Borenstein, Hedges, Higgins, & Rothstein, 2009; Glass, 1976). Meta-analyses usually combine effect sizes from primary studies. An effect size is an index of the magnitude of a relationship between two variables (e.g., a correlation coefficient, squared correlation coefficient, or standardized mean difference). Primary studies are the original, individual studies that produced the effect sizes to be combined in a meta-analysis (Borenstein et al., 2009).


The resulting combination is called the summary effect (or sometimes the summary coefficient). It is the weighted average of the effect sizes from all of the primary studies included in a meta-analysis (Borenstein et al., 2009). There are several methods of combining and weighting effect sizes from primary studies, and they differ between meta-analyses.
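To make the idea of a weighted average of effect sizes concrete, here is a minimal sketch (in Python, with invented effect sizes and standard errors) of one common weighting scheme, inverse-variance weighting, under a fixed-effect assumption. More precise studies carry more weight, and the standard error of the summary effect ends up smaller than that of any single study.

```python
import numpy as np

# Hypothetical effect sizes (e.g., standardized mean differences) and their
# standard errors from four primary studies -- illustrative numbers only.
effects = np.array([0.30, 0.45, 0.10, 0.25])
std_errors = np.array([0.15, 0.20, 0.10, 0.12])

# Inverse-variance weights: more precise studies carry more weight.
weights = 1.0 / std_errors**2

# The summary effect is the weighted average of the primary-study effect sizes.
summary_effect = np.sum(weights * effects) / np.sum(weights)

# The summary effect's standard error is smaller than any single study's.
summary_se = np.sqrt(1.0 / np.sum(weights))

print(f"Summary effect: {summary_effect:.3f} (SE = {summary_se:.3f})")
```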


Meta-analysis was first formalized by Gene Glass in the 1970s to synthesize education research (Glass, 1976). Since then, meta-analysis has grown in popularity, as evidenced by the number of meta-analyses currently being conducted, the acceptance it garners from academic journals, and the breadth of domains in which it is used (Glass, 2000; Hunt, 1997; Ringquist & Anderson, 2013).


Meta-analyses are often used as part of a systematic review, but they do not have to be; some meta-analyses are not intended to review bodies of literature (for an example in psychology, see Todtenkopf, Vincent, & Benes, 2005). Similarly, while many systematic reviews include a quantitative synthesis of statistical results (a meta-analysis), some do not. This paper focuses on introducing systematic reviews that include a meta-analysis because these may be especially useful to policy-makers and analysts.


 

How Policy-Makers and Analysts can use Systematic Reviews

In their guide to conducting meta-analyses in public policy, Ringquist and Anderson (2013) discuss four primary ways that policy-makers can use systematic reviews that include meta-analyses.


First, systematic reviews can aid in problem identification. Individual studies often measure the extent of policy problems in specific contexts; but by integrating these individual studies, systematic reviews are able to give a sense of the scope and magnitude of a particular problem across contexts.


Second, policy decisions often require accurate measurements of various quantities. While individual studies can provide estimates of these quantities, systematic reviews and meta-analyses can provide more precise and more robust estimates of a variety of quantities.


Third, systematic reviews can help evaluate the outcomes of policies and programs. While individual studies may be limited to a few outcomes or to particular geographical areas, systematic reviews can provide policy-makers with a summary of a policy's or program's effects on a number of outcomes across multiple areas.


Fourth, meta-analyses can help test hypotheses and build theory. For example, some theorize that decentralizing services by giving more responsibility to local or municipal governments results in more effective and responsive services for communities. Individual studies may examine the effects of decentralization in particular contexts; but systematic reviews that include meta-analysis may be able to give a sense of whether, in general, decentralization leads to improved service delivery, and what factors may influence when decentralization is more or less effective.


 

Advantages of Systematic Reviews

All research may be useful to policy-makers, including traditional narrative literature reviews of research. However, advocates of systematic reviews and meta-analysis argue that these studies have many advantages over traditional literature reviews (Glass, 1976; Laupacis & Straus, 2007; Ringquist & Anderson, 2013; Tricco et al., 2011). These advantages may make them especially useful to policy-makers and analysts.


Systematic Reviews Can Summarize Large Bodies of Literature

Advocates of systematic reviews argue that narrative reviews are incapable of summarizing large bodies of literature. For example, Glass (1976) asserts that if you want to make sense of 500 studies on the relationship between class size and educational outcomes, you cannot meaningfully summarize them all in a traditional narrative literature review. Because a traditional literature review cannot meaningfully analyze all of the studies that might bear on a given issue, the author must decide which ones to include (Glass, 1976). In contrast, systematic reviews that include meta-analyses are designed to find all studies relevant to a given research question (that meet certain criteria) and integrate their findings. These studies may therefore provide policy-makers and analysts with more complete summaries of a body of literature.


Systematic Reviews Can Resolve Conflicting Results

Traditional literature reviews are also ill-suited to reviewing bodies of literature with conflicting results (Hunt, 1997; Ringquist & Anderson, 2013). Narrative reviews typically consider whether studies find significant results, and then, because significant results will be found in some studies and not others, a common conclusion of these reviews is that more research must be conducted to clarify the literature (Glass, 1976; Hunt, 1997).


However, significant results may not be found in some studies for reasons other than the absence of a true effect; for example, the sample size may not have been large enough, or the effect under study may not be very strong. Meta-analysis, by virtue of working with effect size estimates, allows researchers to determine whether the non-significant results of some studies actually conflict with the studies that found significant results, or whether the effects are present but simply did not reach significance (Glass, 1976). In other words, meta-analysis weighs the size of effects rather than just whether they crossed a significance threshold.


For example, imagine that a study finds a small positive effect of an intervention on health outcomes in an experimental group. The difference between the experimental group and the control group, however, is not statistically significant because the sample size was small, so the study lacked the power to detect the small effect. A traditional narrative review on the effectiveness of the intervention would count this study as non-supporting evidence for the intervention's effectiveness because the results were not statistically significant.


In contrast, a meta-analysis would count this as contributing to an overall summary effect size estimate (the resulting effect from combining effect sizes from all of the primary studies). The contribution of this study might be small: studies in a meta-analysis are weighted by their sample size, and because this study has a small sample size, it would carry relatively little weight in the overall summary effect. However, any effect, regardless of its size, is included in a meta-analysis, and will contribute to the balance of evidence. This makes meta-analysis a better tool for reconciling differences among studies in a body of research literature.
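To illustrate, the sketch below uses invented numbers to contrast a significance-based "vote count" with precision-weighted pooling: the small, underpowered study is not statistically significant on its own, but it still contributes (with less weight) to the combined estimate rather than being treated as evidence of no effect.

```python
import math

def two_sided_p(effect, se):
    """Two-sided p-value for a normal test of effect/se against zero."""
    z = effect / se
    return math.erfc(abs(z) / math.sqrt(2))

# Invented numbers: a small, underpowered study and a larger one.
small_effect, small_se = 0.20, 0.18   # not significant on its own
large_effect, large_se = 0.25, 0.08   # significant on its own

print(f"Small study alone: p = {two_sided_p(small_effect, small_se):.2f}")
print(f"Large study alone: p = {two_sided_p(large_effect, large_se):.4f}")

# Pool by inverse-variance weighting: the small study still contributes,
# just with less weight, rather than being discarded as a "negative" result.
w_small, w_large = 1 / small_se**2, 1 / large_se**2
pooled = (w_small * small_effect + w_large * large_effect) / (w_small + w_large)
pooled_se = math.sqrt(1 / (w_small + w_large))

print(f"Pooled effect: {pooled:.3f} (SE = {pooled_se:.3f}), "
      f"p = {two_sided_p(pooled, pooled_se):.4f}")
```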


Systematic Reviews Provide more Precise Estimates than Individual Studies

The precision of an effect size estimate depends in part on the size of the sample on which it is based. Individual studies increase precision by obtaining as large a sample as possible (Garg, Hackam, & Tonelli, 2008). Meta-analyses combine data from the samples of multiple primary studies and therefore draw on larger sample sizes than any of the included studies. This allows them to provide more precise estimates of effect size than any single included study (Garg et al., 2008).
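As a simple illustration of this point, the following sketch (invented standard errors, fixed-effect inverse-variance pooling assumed) shows how the standard error of the combined estimate shrinks as studies are added.

```python
import numpy as np

# Invented standard errors from five primary studies of the same effect.
study_ses = np.array([0.25, 0.20, 0.18, 0.15, 0.12])

# Pool studies one at a time and watch the standard error of the combined
# estimate shrink as more evidence is added.
for k in range(1, len(study_ses) + 1):
    weights = 1.0 / study_ses[:k] ** 2
    pooled_se = np.sqrt(1.0 / weights.sum())
    print(f"After {k} studies: pooled SE = {pooled_se:.3f}")
```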


Systematic Reviews can be Less Biased than Traditional Narrative Reviews

Systematic reviews may be less biased than traditional reviews. In a traditional literature review, the researcher decides which studies to include, how to present the findings, and how to describe conflicting findings. Many argue that because of these features, traditional reviews are easily biased, even unintentionally (Chalmers, Hedges, & Cooper, 2002; Shapiro & Shapiro, 1983). In contrast, a systematic review is done using a systematic and replicable process, where the author chooses and justifies criteria for which studies are relevant and can be included. Then all studies that meet these criteria are included. Because the criteria are explicit, other authors can replicate the procedures or even argue that other criteria are better, and conduct the review again with different criteria. Because the author is not making decisions about selecting individual studies, systematic reviews are potentially less biased than traditional literature reviews.


Systematic Reviews are More Efficient for Policy-Makers and Analysts

Systematic reviews are conducted with the aim of synthesizing an entire research body in a given research domain. Rather than a policy-maker or analyst accessing each individual research study, systematic reviews allow a policy-maker or analyst to read a single review and still learn about the evidence from an entire body of literature. In this way, systematic reviews can save time for decision-makers.


Systematic Reviews Are Applicable to a Wide Variety of Policy Areas

Systematic reviews and meta-analyses can be useful for informing a range of policy issues. Policy-makers and analysts in public health may use meta-analyses showing that school-based diet and physical activity interventions are effective at preventing obesity in children (Wang et al., 2013). Policy-makers in North America have applied systematic reviews on drug costs and effectiveness to make policy decisions regarding which drugs are covered by insurance and Workers' Compensation coverage (Fox, 2005). With respect to addressing crime, systematic reviews can inform policy-makers on strategies to reduce corporate crime (Simpson et al., 2014), prevent sexual violence in young people (De La Rue, Polanin, Espelage, & Pigott, 2014), and reduce criminal recidivism (Villettaz, Gillieron, & Killias, 2015). Those in environmental policy may use systematic reviews to choose between different forest management practices (Samii, Lisiecki, Kulkarni, Paler, & Chavis, 2014) or to guide decisions around the creation of marine reserves for protecting fish species (Stewart et al., 2008). Systematic reviews can be useful to policy-makers in labour policy (Filges, Smedslund, Knudsen, & Jørgensen, 2015), foreign affairs and trade policy (Bruno & Campos, 2011; Ott & Montgomery, 2015); and transportation policies (Heath et al., 2006). The potential application of systematic reviews extends to virtually all policy areas.


 

Methodological Quality of Systematic Reviews that Include Meta-analyses

For all of the previously described reasons, some have argued that meta-analyses represent an important—and sometimes the best—source of research evidence (Guyatt et al., 2000; Murad et al., 2014), and may be especially useful for policy-makers and analysts trying to create evidence-based policy (Fox, 2005). While there is much potential for systematic reviews that include meta-analyses to inform policy, like any research, systematic reviews can be of high or low methodological quality depending on how well the methods match the research questions (Moher, Tetzlaff, Tricco, Sampson, & Altman, 2007; Moher et al., 1999; Schulze, 2007; B. J. Shea et al., 2007; B. Shea, Dubé, & Moher, 2001). There are a number of biases and issues that systematic reviewers need to address when conducting their research. This section presents a brief, non-technical description of what good systematic reviews should include and some ways they can be biased. It is an incomplete list, but it should be useful as a starting place for policy-makers and analysts who are new to systematic review and meta-analysis methodology.


Systematic Methodology and Complete Reporting

A primary strength of a systematic review and meta-analysis is that it is systematic and transparent, with an explicit methodology that allows it to be reproduced and verified (Ringquist & Anderson, 2013). A good systematic review should include clear criteria for which primary studies will be included or excluded, and these criteria should be determined before data collection begins. Establishing clear criteria beforehand reduces the likelihood that the author is biased in the selection of studies (Garg et al., 2008). Similarly, the way that data are extracted from the studies, coded, and combined into summary effects should be explained in detail. Detailed reporting of the decisions made throughout the conduct of a systematic review allows readers to critically evaluate the research and enables other researchers to reproduce it if necessary (Ringquist & Anderson, 2013).


Addressing Publication Bias

Publication bias refers to the tendency for research studies that find significant results to be published more frequently than those that do not find significant results. Consequently, the published literature may tend to have more significant results than the complete literature does. If a systematic review only includes published literature, it is likely to overestimate the size of an effect (Ringquist & Anderson, 2013).


There are several strategies for addressing publication bias, but an important one is for systematic reviews and meta-analyses to use a comprehensive search strategy (Ringquist & Anderson, 2013). A comprehensive search uses several methods to identify all relevant studies, covering published research as well as unpublished and grey literature such as theses, dissertations, conference presentations, think-tank research, and government white papers (Hopewell, Clarke, & Mallett, 2005; Ringquist & Anderson, 2013).


Another aspect of publication bias relates to the language in which primary studies are published. Many systematic reviews include only English-language articles (Gregoire, Derderian, & Le Lorier, 1995). Including only English-language articles can lead to a language bias: authors who find negative results may be less confident about publishing in a widely disseminated English-language journal and instead submit to a local journal (Egger et al., 1997; Gregoire et al., 1995). Similarly, English-language journals may be more competitive and therefore less likely to publish negative results. In both cases, the result is that English-language articles may report larger effect sizes than non-English articles (Egger et al., 1997). A systematic review of only English-language articles may therefore not be representative of the entire population of articles; it may be biased towards finding significant effects. Higher-quality systematic reviews will not restrict included studies by language and will actively search for articles published in other languages.
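Beyond a comprehensive, language-inclusive search, reviewers often also probe for publication bias statistically. This goes beyond the search strategies discussed above, but one common (and imperfect) check is to look for funnel-plot asymmetry, for example with an Egger-style regression of standardized effects on precision. The sketch below, with invented data, shows the basic calculation.

```python
import numpy as np

# Invented effect sizes and standard errors from hypothetical primary studies.
effects = np.array([0.42, 0.35, 0.28, 0.20, 0.15, 0.10])
ses     = np.array([0.20, 0.16, 0.12, 0.10, 0.08, 0.05])

# Egger-style regression: standardized effect (effect / SE) against precision (1 / SE).
# An intercept far from zero hints at funnel-plot asymmetry, one possible sign
# of publication bias (small studies reporting unusually large effects).
z = effects / ses
precision = 1.0 / ses
X = np.column_stack([np.ones_like(precision), precision])
intercept, slope = np.linalg.lstsq(X, z, rcond=None)[0]

print(f"Egger intercept: {intercept:.2f} (values far from 0 suggest asymmetry)")
print(f"Slope (rough effect estimate adjusted for small-study effects): {slope:.2f}")
```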


Ensuring Accuracy of Data Extraction

After deciding which studies are included in the systematic review, the reviewer must read and record the characteristics of those studies (Ringquist & Anderson, 2013; Sánchez-Meca & Botella, 2010). This includes the statistical results to be combined, but also other characteristics including who was included in the study sample, the location of the study, methodological variables of the study, and so on. It is important that the recording of study characteristics is done accurately, so higher quality systematic reviews will have two or more researchers read and code the studies (B. Shea et al., 2007). They will also report the degree of agreement between coders and how differences were resolved (Sánchez-Meca & Botella, 2010).
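One common way to summarize the degree of agreement between coders, though not the only one, is a chance-corrected index such as Cohen's kappa. The sketch below computes it for two hypothetical coders categorizing the same ten studies; the codes are invented for illustration.

```python
from collections import Counter

# Hypothetical codes assigned to ten primary studies by two independent coders
# (e.g., study design categorized as "RCT", "quasi", or "obs" for observational).
coder_a = ["RCT", "RCT", "quasi", "obs", "RCT", "quasi", "obs", "RCT", "quasi", "obs"]
coder_b = ["RCT", "RCT", "quasi", "obs", "quasi", "quasi", "obs", "RCT", "RCT", "obs"]

n = len(coder_a)
observed_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected agreement by chance, from each coder's marginal category frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected_agreement = sum(
    (freq_a[cat] / n) * (freq_b[cat] / n) for cat in set(coder_a) | set(coder_b)
)

kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
print(f"Observed agreement: {observed_agreement:.2f}, Cohen's kappa: {kappa:.2f}")
```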


Assessing Quality of Primary Studies

There has been much debate among methodologists about whether poor-quality original studies should be included in a meta-analysis (Glass, 2000).


Some methodologists argue that poor-quality studies should be excluded. They argue that “garbage in equals garbage out”; that is, including original studies that are low-quality can only result in a low-quality meta-analysis (Andersson, 1999; Garg et al., 2008; Mosteller & Colditz, 1996). Other methodologists argue that excluding original studies a priori on the basis of their quality can lead to a biased summary effect size and loss of information (Dickersin & Berlin, 1992; Fiske, 1983; Glass, 2000).


While there is some controversy around the inclusion of poor-quality primary studies, methodologists seem to agree that at the very least, a systematic review should include some evaluation of quality of primary studies (Glass, 2000; Jones, 1995; Shea et al., 2007). This allows the analyst to examine the extent to which the quality of original studies affects the summary effect size (Ringquist & Anderson, 2013).
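One common way to examine this, sketched below with invented numbers, is a simple sensitivity or subgroup analysis: compute the summary effect for all studies and again for only the higher-quality studies, then compare.

```python
import numpy as np

# Invented effects, standard errors, and a quality rating for each primary study.
effects = np.array([0.45, 0.40, 0.15, 0.20, 0.10, 0.50])
ses     = np.array([0.20, 0.18, 0.10, 0.09, 0.08, 0.22])
high_quality = np.array([False, False, True, True, True, False])

def pooled(e, s):
    """Fixed-effect inverse-variance weighted average of effect sizes."""
    w = 1.0 / s**2
    return np.sum(w * e) / np.sum(w)

# Comparing subgroup summaries shows whether study quality drives the result.
print(f"All studies:           {pooled(effects, ses):.3f}")
print(f"High-quality studies:  {pooled(effects[high_quality], ses[high_quality]):.3f}")
print(f"Lower-quality studies: {pooled(effects[~high_quality], ses[~high_quality]):.3f}")
```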


Appropriate Statistical Model

Meta-analyses that combine effect sizes can rest on several different statistical models: fixed-effect models, random-effects models, and mixed models (Borenstein et al., 2009; Hedges, 1992). These models differ mainly in what they presume the estimated effect (computed from the sample) represents in the population.


Within the fixed-effect model, the effect size reported in each primary study is taken to be an estimate of a single fixed population effect. Therefore, the summary effect estimate from a combination of those primary studies is also taken to be an estimate of a single population effect.


Within the random-effects model, each individual study's effect size estimate is presumed to come from a distribution of possible population effects; in other words, each study estimates the effect size for a unique population. The summary effect estimated in a given meta-analysis is, in this case, a (weighted) average of those population effects.


Mixed-effects models combine the two, modelling both fixed and random factors.


These different models warrant different kinds of conclusions: whereas using a fixed-effect model allows the reviewer to make inferences about the studies included, a random-effects model permits the meta-analyst to make inferences about a population of studies. In other words, the results of a meta-analysis that uses a random-effects model are more general.


There is debate about which of these models should be used in which contexts. It is often recommended that reviewers base their decision about which statistical model to use on an assessment of heterogeneity. Briefly, heterogeneity refers to how much the effect sizes in the primary studies vary from one another (Borenstein et al., 2009). A group of effect sizes is said to be homogeneous when the estimates are similar to each other; when the effect sizes are quite different from each other, they are said to be heterogeneous.


If the effect sizes to be combined in a meta-analysis are homogeneous, they are more likely to be estimating a single effect, so a fixed-effect model may be appropriate. If the effect sizes are quite heterogeneous, it is unlikely that they are estimating the same, fixed effect, and a random-effects model is more appropriate (Ringquist & Anderson, 2013). Different systematic reviews and meta-analyses will use different methods to assess heterogeneity and will make different decisions about statistical models.
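To make the heterogeneity assessment and the model choice concrete, the sketch below (again with invented numbers) computes Cochran's Q and the I-squared (I²) statistic, then estimates the between-study variance with the widely used DerSimonian-Laird method and contrasts the fixed-effect and random-effects summary estimates. Actual reviews may use other estimators; this only illustrates the kind of calculation involved.

```python
import numpy as np

# Invented effect sizes and standard errors from six hypothetical primary studies.
effects = np.array([0.10, 0.35, 0.55, 0.20, 0.40, 0.05])
ses     = np.array([0.12, 0.10, 0.15, 0.08, 0.20, 0.10])

# --- Fixed-effect model: inverse-variance weights ---
w = 1.0 / ses**2
fixed_mean = np.sum(w * effects) / np.sum(w)

# --- Heterogeneity: Cochran's Q and I-squared ---
Q = np.sum(w * (effects - fixed_mean) ** 2)
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100  # % of variation beyond what chance would produce

# --- Random-effects model: DerSimonian-Laird between-study variance (tau^2) ---
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)
w_re = 1.0 / (ses**2 + tau2)
random_mean = np.sum(w_re * effects) / np.sum(w_re)

print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.0f}%")
print(f"Fixed-effect summary:   {fixed_mean:.3f}")
print(f"Random-effects summary: {random_mean:.3f} (tau^2 = {tau2:.3f})")
```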


Properly evaluating whether that decision was a good one requires some expertise in systematic review methodology; however, policy-makers and policy analysts who do not have this expertise can still evaluate these decisions to some extent. At the very least, a systematic review that includes a meta-analysis should report an assessment of heterogeneity and state which statistical model was used, along with some justification of why that choice makes sense for the phenomenon under study (Shea et al., 2007). Further, if there is heterogeneity, the review should discuss what factors could be causing it; that is, why the primary studies may be estimating effect sizes of different magnitudes (Sánchez-Meca & Botella, 2010).


 

Conclusion

Policy-making is complex and policy-makers use multiple pieces of information to inform policy decisions. For governments and policy-makers working towards EBPM, systematic reviews and meta-analyses have the potential to be very useful—and in some cases, they may be one of the best sources of evidence for policy decisions.


However, research is most useful to policy-makers when it is of high quality; policy-makers and analysts will need to evaluate the quality of a systematic review that includes a meta-analysis before using it to inform a policy decision. This post has presented an introduction to systematic reviews and meta-analyses, along with information aimed at helping policy-makers read these studies critically and judge their quality.


The information presented in this post is useful as an introduction to systematic reviews and meta-analyses, but it is incomplete. Policy-makers and analysts wishing to learn more may find the following resources useful. Julio Sánchez-Meca and Juan Botella (2010) have produced a list of questions aimed at guiding a critical reading of systematic reviews and meta-analyses for clinical psychologists wishing to use them as the basis for clinical decisions (included in Appendix A). While this guide is not designed specifically for policy-makers or policy analysts, it still provides a useful and relatively thorough structure for evaluating the quality of systematic reviews for policy decisions (Sánchez-Meca & Botella, 2010). Beverley Shea and colleagues have also produced a tool for evaluating systematic reviews in health fields, called AMSTAR, which has been validated to some extent (Shea, Grimshaw, et al., 2007; Shea, Bouter, et al., 2007; Shea et al., 2009).


Another indication of the quality of a systematic review is an endorsement by a research organization dedicated to producing high-quality systematic reviews and meta-analyses, such as the Cochrane Collaboration, the Campbell Collaboration, or the Collaboration for Environmental Evidence.

These organizations have rigorous standards for their systematic reviews, and so their endorsement can help steer policy-makers and analysts towards higher-quality reviews (although this should not replace a critical reading by the policy-maker). Readers interested in conducting their own systematic review, or in learning more about the technical aspects of meta-analysis, may wish to consult Borenstein et al. (2009), Cooper (2010), Cooper, Hedges, and Valentine (2009), or Ringquist and Anderson (2013).



 


References

ANDERSSON, G. (1999). “The role of meta-analysis in the significance test controversy.” European Psychologist, 4(2), 75-82. doi:10.1027//1016-9040.4.2.75


ANTMAN, E. M., LAU, J., KUPELNICK, B., MOSTELLER, F., & CHALMERS, T. C. (1992). A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts: Treatments for myocardial infarction. JAMA, 268(2), 240-8.


BORENSTEIN, M., HEDGES, L. V., HIGGINS, J. P. T., & ROTHSTEIN, H. R. (2009). Introduction to meta-analysis. Chichester, UK: John Wiley & Sons.


BRUNO, R. L., & CAMPOS, N. F. (2011). Foreign direct investment and economic performance: A systematic review of the evidence uncovers a new paradox [online]. United Kingdom Department for International Development. Available from: http://r4d.dfid.gov.uk/PDF/Outputs/SystematicReviews/DFID_MRA_FDI_February_28_2011_Bruno_Campos.pdf. Accessed: October 30, 2015


CABINET OFFICE. (1999). Modernising government. London, UK: The Stationery Office. Available from https://www.wbginvestmentclimate.org/uploads/modgov.pdf. Accessed: September 14, 2014


CASTILLO, D. L., & ABRAHAM, N. S. (2008). “Knowledge management: How to keep up with the literature”. Clinical Gastroenterology and Hepatology, 6(12), 1294-1300.


CHALMERS, I., HEDGES, L. V., & COOPER, H. (2002). “A brief history of research synthesis”. Evaluation & the Health Professions, 25(1), 12-37. Available from http://ehp.sagepub.com/cgi/doi/10.1177/0163278702025001003. Accessed October 19, 2015


COBURN, C. E., HONIG, M. I., & STEIN, M. K. (2009). “What's the evidence on districts' use of evidence?” In BRANSFORD, J., GOMEZ, L., LAM, D., & VYE, N. (Eds.), Research and practice: Towards a reconciliation (pp. 67-87). Cambridge, MA: Harvard Education Press.


COOPER, H. (2010). Research synthesis and meta-analysis: A step-by-step approach (3rd ed.). Thousand Oaks, CA: Sage.


COOPER, H., HEDGES, L. V., & VALENTINE, J. C. (2009). The handbook of research synthesis and meta-analysis (2nd ed.). New York, NY: Russell Sage Foundation.


DAVIES, P. (2004). Is evidence-based government possible? Paper presented at the 4th Annual Campbell Collaboration Colloquium, Washington, DC.


DAWES, M., & SAMPSON, U. (2003). “Knowledge management in clinical practice: A systematic review of information seeking behavior in physicians”. International Journal of Medical Informatics, 71(1), 9-15. doi:10.1016/S1386-5056(03)00023-6


DE LA RUE, L., POLANIN, J., ESPELAGE, D., & PIGOTT, T. (2014). “School-based interventions to reduce dating and sexual violence: A systematic review”. Campbell Systematic Reviews, 10(7), 1-110.


DICKERSIN, K., & BERLIN, J. A. (1992). “Meta-analysis: State-of-the-science”. Epidemiology Review, 14, 154-176.


EGGER, M., ZELLWEGER-ZÄHNER, T., SCHNEIDER, M., JUNKER, C., LENGELER, C., & ANTES, G. (1997). “Language bias in randomised controlled trials published in English and German.” Lancet, 350(9074), 326-9.


FILGES, T., SMEDSLUND, G., KNUDSEN, A. D., & JØRGENSEN, A. K. (2015). “Active labour market programme participation for unemployment insurance recipients: A systematic review”. Campbell Systematic Reviews, 11(2), 1-342.


FISKE, D. W. (1983). “The meta-analysis revolution in outcome research”. Journal of Consulting and Clinical Psychology, 51, 65-70.


FOX, D. M. (2005). “Evidence of evidence-based health policy: The politics of systematic reviews in coverage decisions”. Health Affairs, 24(1), 114-122. doi:10.1377/hlthaff.24.1.114


FOX, D. M., & OXMAN, A. D. (2001). Informing judgment: Case studies of health policy and research in six countries. New York, NY: Milbank Memorial Fund.


GARG, A. X., HACKAM, D., & TONELLI, M. (2008). “Systematic review and meta-analysis: When one study is just not enough”. Clinical Journal of the American Society of Nephrology, 3(1), 253-260. doi:10.2215/CJN.01430307


GLASS, G. V. (1976). “Primary, secondary, and meta-analysis of research”. Educational Researcher, 5(10), 3-8. Available from: http://www.jstor.org/stable/1174772


GLASS, G. V. (2000). Meta-analysis at 25 [online]. Available from http://www.gvglass.info/papers/meta25.html. Accessed October 30, 2015


GREGOIRE, G., DERDERIAN, F., & LE LORIER, J. (1995). “Selecting the language of the publications included in a meta-analysis: Is there a tower of babel bias?” Journal of Clinical Epidemiology, 48(1), 159-163.


HEATH, G., BROWNSON, R., KRUGER, J., MILES, R., POWELL, K. E., RAMSEY, L. T. & the TASK FORCE ON COMMUNITY PREVENTIVE SERVICES (2006). “The effectiveness of urban design and land use and transport policies and practices to increase physical activity: A systematic review”. Journal of Physical Activity and Health, 3(supp 1), S55–S76.


HEDGES, L. V. (1992). “Meta-analysis”. Journal of Education Statistics, 17(4), 279-296.


HOPEWELL, S., CLARKE, M., & MALLETT, S. (2005). “Grey literature and systematic reviews”. In ROTHSTEIN, H. R., SUTTON, A. J., & BORENSTEIN, M. (Eds.), Publication bias in meta-analysis: Prevention, assessment, and adjustments (pp. 49-72). Chichester, UK: John Wiley and Sons.


HUNT, M. (1997). How science takes stock: The story of meta-analysis . New York, NY: The Russell Sage Foundation.


JONES, D. R. (1995). “Meta-analysis: Weighing the evidence”. Statistics in Medicine, 14(2), 137-149. doi:10.1002/sim.4780140206


LAUPACIS, A., & STRAUS, S. (2007). “Systematic reviews: Time to address clinical and policy relevance as well as methodological rigor”. Annals of Internal Medicine, 147(4), 273-274. doi:10.7326/0003-4819-147-4-200708210-00180


LEVIN, B. (2013). “The relationship between knowledge mobilization and research use”. In YOUNG S. P. (Ed.), Evidence-based policy-making in Canada (pp. 45-66). Don Mills, ON: Oxford University Press.


MILJAN, L. (2012). Public policy in Canada: An introduction (6th ed.). Don Mills, ON: Oxford University Press.


MOHER, D., TETZLAFF, J., TRICCO, A. C., SAMPSON, M., & ALTMAN, D. G. (2007). “Epidemiology and reporting characteristics of systematic reviews”. PLoS Medicine, 4(3) doi:10.1371/journal.pmed.0040078


MOHER, D., COOK, D. J., EASTWOOD, S., OLKIN, I., RENNIE, D., & STROUP, D. F. (1999). “Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement”. The Lancet, 354(9193), 1900. doi:10.1016/S0140-6736(99)04149-5


MOSTELLER, F., & COLDITZ, G. A. (1996). “Understanding research synthesis (meta-analysis)”. Annual Review of Public Health, 17, 1-23.


MURAD, M. H., MONTORI, V. M., IOANNIDIS, J. A., JAESCHKE, R., DEVEREAUX, P. J., PRASAD, K., & GUYATT, G. (2014). “How to read a systematic review and meta-analysis and apply the results to patient care: Users’ guides to the medical literature”. JAMA, 312(2), 171-179. doi:10.1001/jama.2014.5559


NOONE, J., WARREN, J., & BRITAIN, M. (1998). “Information overload: Opportunities and challenges for the GP's desktop”. Studies in Health Technology and Informatics, 52(2), 1287-1291. Retrieved from http://europepmc.org/abstract/MED/10384667


NURSEY-BRAY, M. J., VINCE, J., SCOTT, M., HAWARD, M., O’TOOLE, K., SMITH, T., & CLARKE, B. (2014). “Science into policy? Discourse, coastal management and knowledge”. Environmental Science & Policy, 38, 107-119. Available from: http://www.sciencedirect.com/science/article/pii/S1462901113002189. Accessed: October 14, 2015


NUTLEY, S. M., WALTER, I., & DAVIES, H. T. O. (2007). Using evidence: How research can inform public services. Bristol, UK: The Policy Press.


OTT, E., & MONTGOMERY, P. (2015). “Interventions to improve the economic self-sufficiency and well-being of resettled refugees: A systematic review”. Campbell Systematic Reviews, 10(4), 1-53.


OXMAN, A. D., & GUYATT, G. H. (1993). “The science of reviewing research”. Annals of the New York Academy of Science, 703, 125-134.


PULLIN, A. S., & STEWART, G. B. (2006). “Guidelines for systematic review in conservation and environmental management”. Conservation Biology, 20(6), 1647-1656. doi:10.1111/j.1523-1739.2006.00485.x


RIED, K. (2006). “Interpreting and understanding meta-analysis graphs--a practical guide”. Australian Family Physician, 35(8), 638.


RINGQUIST, E. J., & ANDERSON, M. R. (2013). Meta-analysis for public management and policy. San Francisco, CA: Jossey-Bass.


SAMII, C., LISIECKI, M., KULKARNI, P., PALER, L., & CHAVIS, L. (2014). “Effects of decentralized forest management (DFM) on deforestation and poverty in low and middle-income countries: A systematic review”. Campbell Systematic Reviews, 10(10), 1-88.


SÁNCHEZ-MECA, J., & BOTELLA, J. (2010). “Systematic reviews and meta-analyses: Tools for professional practice”. Papeles Del Psicologo, 31(1), 7-17.


SCHULZE, R. (2007). “Current methods for meta-analysis: Approaches, issues, and developments”. Zeitschrift für Psychologie, 215(2), 90-103.


SHAPIRO, D. A., & SHAPIRO, D. (1983). “Comparative therapy outcome research: Methodological implications of meta-analysis”. Journal of Consulting and Clinical Psychology, 51(1), 42-53. doi:10.1037/0022-006X.51.1.42


SHAXSON, L. (2005). “Is your evidence robust enough? questions for policymakers and practitioners”. Evidence and Policy, 1(1), 101-111.


SHEA, B. J., BOUTER, L. M., PETERSON, J., BOERS, M., ANDERSSON, N., ORTIZ, Z., & GRIMSHAW, J. M. (2007). “External validation of a measurement tool to assess systematic reviews (AMSTAR)”. PLoS ONE, 2(12). doi:10.1371/journal.pone.0001350


SHEA, B. J., HAMEL, C., WELLS, G. A., BOUTER, L. M., KRISTJANSSON, E., GRIMSHAW, J. M., & BOERS, M. (2009). “AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews”. Journal of Clinical Epidemiology, 62(10), 1020. doi:10.1016/j.jclinepi.2008.10.009


SHEA, B. J., DUBÉ, C., & MOHER, D. (2001). “Assessing the quality of reports of systematic reviews: The QUOROM statement compared to other tools”. In Egger, M., Smith, G. D., & Altman, D. G. (Eds.), Systematic reviews in health care: Meta-analysis in context (pp. 122–139). London: BMJ Publishing Group. doi:10.1002/9780470693926.ch7


SHEA, B. J., GRIMSHAW, J., WELLS, G., BOERS, M., ANDERSSON, N., HAMEL, C., & PORTER, A. (2007). “Development of AMSTAR: A measurement tool to assess the methodological quality of systematic reviews”. BMC Medical Research Methodology, 7(1), 10. Retrieved from http://amstar.ca/docs/Publication%20-%20Development%20of%20AMSTAR.pdf. Accessed: October 30, 2015


SIMPSON, S., RORIE, M., ALPER, M. E., SCHELL-BUSEY, N., LAUFER, W., & SMITH, N. C. (2014). “Corporate crime deterrence: A systematic review”. Campbell Systematic Reviews, 10(4), 1-105.


SLOCUM, T. A., SPENCER, T. D., & DETRICH, R. (2012). “Best available evidence: Three complementary approaches”. Education and Treatment of Children, 35(2), 153-181. Retrieved from http://muse.jhu.edu/content/crossref/journals/education_and_treatment_of_children/v035/35.2.slocum.html. Accessed: October 30, 2015


SOLESBURY, W. (1999). Evidence based policy: Whence it came and where it's going. London, UK: ESRC UK Centre for Evidence Based Policy and Practice. Retrieved from http://www.lgsp.uz/old/publications/option_paper_training/ebp_when_it_came_and_where_it_is_going_eng.pdf. Accessed: October 30, 2015


SPENCER, T. D., DETRICH, R., & SLOCUM, T. A. (2012). “Evidence-based practice: A framework for making effective decisions”. Education and Treatment of Children, 35(2), 127-151. Retrieved from http://muse.jhu.edu/content/crossref/journals/education_and_treatment_of_children/v035/35.2.spencer.html. Accessed: October 30, 2015


STEWART, G. B., CÔTÉ, I. M., KAISER, M. J., HALPERN, B. S., LESTER, S. E., BAYLISS, H. R., MENGERSEN, K., & PULLIN, A. S. (2008). Are marine protected areas effective tools for sustainable fisheries management? I. biodiversity impact of marine reserves in temperate zones. CEE review 06-002 (SR23). Collaboration for Environmental Evidence. Available from: www.environmentalevidence.org/SR23.html. Accessed: October 30, 2015.


TODTENKOPF, M. S., VINCENT, S. L., & BENES, F. M. (2005). “A cross-study meta-analysis and three-dimensional comparison of cell counting in the anterior cingulate cortex of schizophrenic and bipolar brain”. Schizophrenia Research, 73, 79-89. doi:10.1016/j.schres.2004.08.018


TRICCO, A. C., TETZLAFF, J., & MOHER, D. (2011). “The art and science of knowledge synthesis”. Journal of Clinical Epidemiology, 64(1), 20. Available from: http://www.sciencedirect.com/science/article/pii/S0895435609003618. Accessed: October 30, 2015.


VILLETTAZ, P., GILLIERON, G., & KILLIAS, M. (2015). “The effects on re-offending of custodial vs. non- custodial sanctions: An updated systematic review of the state of knowledge”. Campbell Systematic Reviews, 2015(1), 92. Available from: http://www.campbellcollaboration.org/lib/project/22/. Accessed: October 30, 2015


WALKER, D. G., WILSON, R. F., SHARMA, R., BRIDGES, J., NIESSEN, L., BASS, E. B., & FRICK, K. (2012). Best practices for conducting economic evaluations in health care: A systematic review of quality assessment tools. Rockville, MD: Agency for Healthcare Research and Quality.


WANG, Y., WU, Y., WILSON, R. F., BLEICH, S., CHESKIN, L., WESTON, C., & SEGAL, J. (2013). Childhood obesity prevention programs: Comparative effectiveness review and meta-analysis. Comparative Effectiveness Review No. 115. (Prepared by the Johns Hopkins University Evidence-based Practice Centre under Contract No. 290-2007-10061-I.) AHRQ Publication No. 13-EHC081-EF. Rockville, MD: Agency for Healthcare Research and Quality. Available from www.effectivehealthcare.ahrq.gov/reports/final.cfm. Accessed: October 30, 2015.


YOUNG, S. P. (2013). Evidence-based policy-making in Canada: A multidisciplinary look at how evidence and knowledge shape Canadian public policy. Don Mills, ON: Oxford University Press.
