In recent years, the number of studies examining the effect of mindfulness meditation on conditions such as depression, anxiety, addiction, and chronic pain has grown exponentially. Clinical practice has followed suit, with agencies such as the United Kingdom’s National Health Service now recommending meditation as a standard psychotherapeutic intervention. However, leading researchers have criticized the pace at which meditation has become accepted as a clinical intervention, warning that its benefits have not yet been adequately established and its potential harms have not been ruled out. They worry that much of the research on the benefits of meditation has been prone to experimenter and expectation biases. Experimenters are typically meditators themselves; their desire to demonstrate meditation’s effectiveness may skew how they instruct participants or interpret results. Participants know that the intervention is hypothesized to be effective and that researchers want to demonstrate it to be so. Meditation studies often do not include control conditions, and when they do, the controls are often inadequate for ruling out non-specific factors that might be responsible for the observed effects, such as an instructor’s enthusiasm or the mere act of undertaking a health-promoting practice. Further, varying definitions of “mindfulness,” styles of meditation, and experimental protocols make it difficult to compare findings and amalgamate them in meta-analyses. In short, these researchers argue, we do not have enough evidence of mindfulness’s safety and efficacy to warrant its wide prescription as an intervention in clinical contexts (Van Dam et al. 2018; Davidson and Dahl 2018; Davidson and Goleman 2017; Davidson and Kaszniak 2015).
In this paper, I argue that their abundance of caution is excessive. It stems from an undue reliance on the evidence-based medicine (EBM) hierarchy of evidence, particularly the idea that randomized controlled trials (RCTs) and meta-analyses are superior to all other forms of evidence. I argue that different forms of evidence are best suited to studying different kinds of interventions and effects, and that RCTs and meta-analyses are particularly ill-equipped for understanding meditation. First, the timeframe of an RCT is relatively short, whereas the benefits of meditation accrue slowly, over longer periods. Second, RCT methodology isolates a single, relatively simple causal factor that produces an easily measurable effect. Plausibly, the benefits of meditation practice are not due to a single “active ingredient” but instead to a complex interplay between the practice and the context in which it is embedded. The reductionism inherent in RCT methodology precludes determining whether this is indeed the case, limiting what we can learn about the benefits of meditation.
Thus, whereas some critics have argued that the EBM approach tends to overestimate effect sizes (Stegenga 2018), I show that it is prone to underestimating the effect of meditation. The short timeframe of an RCT will fail to deliver an effect if more time is in fact necessary for the effect to manifest. The search for “the” active ingredient of meditation encourages study designs that maximally simplify the practice; if enriched versions of the practice in their original contexts are more effective, this cannot readily be discovered through such methodology. Moreover, these problems are exacerbated when meta-analyses are considered the best possible evidence. When researchers argue that differences between definitions of mindfulness and between meditation protocols prevent comparison across studies, they are using an unduly strict criterion for cross-study comparison. They are also implicitly assuming that, without the ability to amalgamate evidence from different studies via meta-analysis, we cannot say what the evidence, on balance, tells us. Further, meta-analyses typically exclude anything but selected RCTs (e.g., Goyal et al. 2014). Thus, if RCTs are considered the gold standard of evidence but they underestimate effect sizes, then meta-analyses that exclude other forms of evidence showing the intervention to be effective will systematically underestimate them as well.
The underestimation of the effects of meditation is especially problematic because, unlike pharmaceutical or surgical interventions, meditation costs nothing and is accessible to anyone. To the extent that it may be effective, it ought to be widely adopted as an intervention. I propose a principle for weighing evidence in medicine based on inductive risk considerations. However we conceive of quality of evidence—even if we assume the EBM hierarchy—we should relax our standard if the prima facie risk of harm is low and the potential to benefit many people is high. Because I focus on lowering the evidentiary threshold to enable greater benefit to more people, I call this the inductive reward principle for evaluating evidence. I show that its application requires judging what constitutes background knowledge on the basis of other considerations. In line with other commentators (e.g., Worrall 2002, 2010; Cartwright 2007; Stegenga 2014), I conclude that there is no single hierarchy of evidence; nevertheless, weighing evidence can be a principled endeavor.
References:
Cartwright, Nancy. “Are RCTs the Gold Standard?” BioSocieties 2, no. 1 (2007): 11–20.
Davidson, Richard J., and Cortland J. Dahl. “Outstanding Challenges in Scientific Research on Mindfulness and Meditation.” Perspectives on Psychological Science 13, no. 1 (2018): 62–65.
Davidson, Richard J., and Daniel Goleman. Altered Traits: Science Reveals How Meditation Changes Your Mind, Brain, and Body. New York: Avery, 2017.
Davidson, Richard J., and Alfred W. Kaszniak. “Conceptual and Methodological Issues in Research on Mindfulness and Meditation.” American Psychologist 70, no. 7 (2015): 581–92.
Goyal, Madhav, Sonal Singh, Erica M. S. Sibinga, Neda F. Gould, Anastasia Rowland-Seymour, Ritu Sharma, Zackary Berger, Dana Sleicher, and Jennifer A. Haythornthwaite. “Meditation Programs for Psychological Stress and Well-Being: A Systematic Review and Meta-Analysis.” JAMA Internal Medicine 174, no. 3 (2014): 357–68.
Stegenga, Jacob. “Down with the Hierarchies.” Topoi 33 (2014): 313–22.
Stegenga, Jacob. Medical Nihilism. Oxford: Oxford University Press, 2018.
Van Dam, Nicholas T., Marieke K. van Vugt, David R. Vago, Laura Schmalzl, Willoughby B. Britton, Judson A. Brewer, Yi-Yuan Tang, et al. “Mind the Hype: A Critical Evaluation and Prescriptive Agenda for Research on Mindfulness and Meditation.” Perspectives on Psychological Science 13, no. 1 (2018): 36–61.
Worrall, John. “What Evidence in Evidence-Based Medicine?” Philosophy of Science 69, no. S3 (2002): S316–S330.
Worrall, John. “Evidence: Philosophy of Science Meets Medicine.” Journal of Evaluation in Clinical Practice 16, no. 2 (2010): 356–62.
Leading researchers have criticized the pace at which mindfulness meditation has been adopted as a clinical intervention, warning that its benefits have not been adequately established and its potential harms have not been ruled out (e.g., Van Dam et al. 2018). Their abundance of caution stems from an undue reliance on the evidence-based medicine (EBM) hierarchy of evidence, according to which randomized controlled trials (RCTs) and meta-analyses are superior to other forms of evidence. I argue that, plausibly, meditation is effective not because of a single “active ingredient” but because of its embeddedness in a rich context. Yet RCT methodology precludes discovering that this is the case, and meta-analyses typically exclude non-RCT evidence. I instead propose the inductive reward principle for weighing evidence: however we conceive of evidence quality, we should relax our standard if the prima facie risk of harm is low and the potential to benefit many people is high.