“Evidence-based” is perhaps one of the biggest buzzwords in the education policy field. Given the nearly $740 billion of taxpayer funds invested annually in the US preK-12 public education system (Hussar et al., 2020), ensuring the money is spent on programs, services, and technologies that support the success of all students should be a priority for policymakers and practitioners alike. Interest in evidence-based practices—those that well-designed and rigorous empirical studies identify as impactful—accompanied the rise of the education accountability movement that swept the country in the 1990s, culminating in the passage of the No Child Left Behind Act (NCLB) in 2001. NCLB included language encouraging—and in some cases mandating—the use of federal funds on programs and practices rooted in “scientifically based research” (Wilde, 2004). The Every Student Succeeds Act (ESSA) of 2015, which replaced NCLB, continues to prioritize using evidence to guide investment of federal money, outlining four levels of evidence that correspond to the strength of a program’s research base that leaders should consider in their decision-making processes.

Given that expertise in evaluation research tends to fall outside the job description of most education practitioners, it is perhaps not surprising that many struggle to identify quality research as well as to gather their own data to support their decision-making (Booher, Nadelson, & Nadelson, 2020). So how can we ensure that education leaders have the technical knowledge necessary to implement and evaluate evidence-based practices? With their book Common-Sense Evidence: The Education Leader’s Guide to Using Data and Research, Nora Gordon and Carrie Conaway seek to remedy this issue by enhancing the research skills of educators. Their practitioner-oriented guide is a comprehensive resource for evaluating research evidence. It empowers educators—whether they be district leaders, principals, or teachers—to draw their own conclusions about the relevance and validity of different sources of information rather than relying solely on the judgment of researchers and other “experts” who may be far removed from education’s front lines.

The book’s seven chapters walk readers through each step of the research process—from identifying a research question and conducting a literature review to evaluating the rigor of extant studies, collecting and analyzing data, and communicating findings. Anchoring the narrative are three case studies of educators who are novices in evaluating research evidence: a district superintendent tasked with reducing chronic absenteeism rates, a math department chair trying to improve student performance, and a chief state school officer addressing a teacher shortage in rural districts. Each chapter draws on at least one of these case studies to illustrate key points. For example, the first chapter describes how each protagonist segments a broad question of interest (e.g., What are promising policies for improving rural teacher recruitment?) into narrow ones that can be examined using data (e.g., Would state-funded tuition forgiveness incentives be effective?). Each chapter concludes with “Key Takeaways” summarizing the main concepts as well as an “Apply Your Learning” section posing questions intended to spark reflection and to help readers apply what they have learned to their own contexts.

Common-Sense Evidence really shines in how successfully it conveys complex statistical terms and methodological concepts in nontechnical language that is accessible to the lay reader. This is especially true in chapter 3, which outlines how to evaluate the relevance and rigor of a research study. Gordon and Conaway do an excellent job explaining a common pitfall among novice consumers of research—confusing correlation with causation. They expertly describe why many observational impact studies fall short of being able to make causal claims about the effect of a program or policy, explaining how a lack of randomization of participants into a treatment or control group can prohibit the identification of a treatment effect. Unlike the typical practitioner-oriented book, this text does not shy away from using statistical terminology when necessary, incorporating and explaining commonly encountered terms in the research literature like selection bias, randomization, quasi-experimental, and effect size. It is essential that educators understand these statistical terms if they are to fully engage with research, identifying the strength of a study’s design and the validity of its findings.

What makes Common-Sense Evidence stand out is how it tackles head-on false notions within the research and policymaker communities that large-scale quantitative impact studies are the only valid sources of evidence. Gordon and Conaway lament the limitations of focusing on such studies, arguing that their purported rigor is meaningless if the study design and findings are not relevant to a practitioner’s own context:

Much of the evidence often viewed as most rigorous isn’t as helpful as it could be, because it is divorced from the actual needs of the field. This disconnect arises from the academic community’s focus on a narrow, methodologically based definition of quality, which overvalues the technical aspects of research and undervalues relevance. (p. 3)

The authors instead argue in chapter 4 for a more expansive definition of what constitutes credible evidence, encompassing findings from basic research and qualitative research studies that may be more relevant to understanding a particular problem and identifying potential solutions for addressing it—including results from descriptive analyses that education leaders undertake themselves. Indeed, Gordon and Conaway explain that educators can and should engage in collecting and analyzing their own data to examine particular policies and practices, offering strategies for exploring data and effectively sharing findings in chapters 5 and 6. As the authors note, just because the data a principal or teacher gathers are not from a randomized controlled trial does not mean they are not valuable.

While Gordon and Conaway clearly wrote the book with education leaders in mind, I wonder why they did not elaborate on who constitutes this audience. The case studies all focus on individuals who are in visible leadership positions and address school- or districtwide issues as part of their daily work. Though some of the vignettes draw in secondary characters, like a school counselor tasked with helping a principal understand chronic absentee data, educators like classroom teachers and district staff are not as consistently mentioned. While not necessarily holding formal leadership positions, these educators are leaders whose decision-making has a direct impact on the success of students. The book would have benefited not only from more strongly emphasizing how the information and strategies it contains are relevant for any educator looking to increase their research skills, but also from explicitly articulating the importance of cultivating data fluency in all educators. For example, while chapter 7 explores how leaders can build and sustain an evidence-based culture in their organization, notably absent from the list of recommendations is discussion of strengthening data fluency among education stakeholders at large.

Common-Sense Evidence is a must-read for education practitioners who aim to be more fluent, critical consumers of research—who seek to move from being “passive recipients of wisdom from the ‘experts’ . . . to key players in creating that wisdom” (p. 1). It would be especially useful to incorporate this text into school- or districtwide training about using evidence to drive improvement. Chock-full of helpful resources, the book presents complex concepts in an approachable, easy-to-read manner without oversimplifying the messiness of working with data. Given that interest in using data to support decision-making is only increasing, ensuring that all education practitioners have the skills to understand evidence is more important now than ever before. Common-Sense Evidence is a resource for doing just that.

Booher, L., Nadelson, L. S., & Nadelson, S. G. (2020). What about research and evidence? Teachers’ perceptions and uses of education research to inform STEM teaching. Journal of Educational Research, 113(3), 213–225. doi: 10.1080/00220671.2020.1782811

Hussar, B., Zhang, J., Hein, S., Wang, K., Roberts, A., Cui, J., . . . Purcell, S. (2020). The condition of education 2020 (NCES 2020-144). Washington, DC: National Center for Education Statistics. Retrieved from https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2020144

Wilde, J. (2004). Definitions for the No Child Left Behind Act of 2001: Scientifically-based research. Bethesda, MD: National Clearinghouse for English Language Acquisition and Language Instruction Educational Programs. Retrieved from https://ncela.ed.gov/files/rcd/BE021264/Definitions_of_the_NCLB_Act.pdf