President's Corner

Statistics and Statisticians: Essential to Evidence-Based Decisionmaking

1 October 2009
Sally Morton

The Obama Administration is committed to policy decisions driven by evidence, as emphasized by Peter Orszag, director of the Office of Management and Budget. The time is now to promote the use of statistics and the role of statisticians in making good decisions based on objective evidence.

Evidence-based decisionmaking, including both a desire to be guided by evidence and to generate new evidence, is broadly applicable, as noted in a 2009 National Science Foundation (NSF) report titled “Social, Behavioral, and Economic Research in the Federal Context.” Indeed, the National Academies Division of Behavioral and Social Sciences and Education established the Committee on Social Science Evidence for Use. Its charge is to address how best to strengthen the quality, use, and utility of social science research and to lay a solid foundation for the continuous improvement of both the conduct of that research and its application to policy.

The use of evidence-based decisionmaking ranges from education to the environment to criminal justice to national security. In education, for example, the What Works Clearinghouse was established in 2002 by the U.S. Department of Education’s Institute of Education Sciences to provide educators with evidence about the effectiveness of educational interventions and practices.

A key tool to improving health care decisionmaking is evidence-based medicine, a term that appeared in the early 1990s. Evidence-based medicine is defined by David Sackett and co-authors in their BMJ editorial titled “Evidence Based Medicine: What It Is and What It Isn’t” as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical experience with the best available external clinical evidence from systematic research.” Statistics is central to evidence-based medicine.

A variety of opportunities exist for the interested statistician to learn more about and participate in evidence-based medicine. For example, the Cochrane Collaboration, founded in 1993, was named after the epidemiologist Archie Cochrane. It is an international not-for-profit independent organization dedicated to providing information about health care treatments available worldwide. The collaboration produces and disseminates systematic reviews and promotes the search for evidence in the form of clinical trials and other studies. In particular, the Cochrane Handbook discusses statistical issues and provides guidance on how to prepare a review.

The Campbell Collaboration, named for social scientist Donald Campbell, is a sister organization to the Cochrane Collaboration that was begun in 2000. It focuses on preparing, maintaining, and disseminating systematic reviews in the social sciences (e.g., crime, education, and social welfare).

The Agency for Healthcare Research and Quality (AHRQ) Evidence-Based Practice Centers (EPCs) are an important source of systematic reviews and methodological guidance. EPCs are located in the United States and Canada. In particular, EPCs have produced a methods reference guide that addresses statistical methods in systematic reviews.

Considerable attention in the health care reform debate has been focused on comparative effectiveness research, which might be called a subset of evidence-based medicine. The Institute of Medicine has estimated that less than half of all health care treatments delivered today are supported by evidence. As defined by Carolyn Clancy and Jean Slutsky of AHRQ, the central question of comparative effectiveness research is which health care treatment “works best, for whom, and under what circumstances.” While research comparing the effectiveness of interventions has been conducted for decades, the term “comparative effectiveness research,” or CER, is relatively new.

The 2009 American Recovery and Reinvestment Act (ARRA), or “Stimulus Bill,” provided $1.1 billion to support CER, among other directives for health care–related activities. ARRA divided the CER funding into $300 million for AHRQ, $300 million for the National Institutes of Health (NIH), and $400 million for the Office of the Secretary of Health and Human Services (HHS). According to the law, the funding is to be used to evaluate the relative effectiveness of different health care services and treatment options and to encourage the development and use of clinical registries, clinical data networks, and other forms of electronic data to generate outcomes data.

The law also provided $1.5 million to support an Institute of Medicine (IOM) study. The resulting IOM committee (of which I was a part) drew together people from a host of disciplines, including statistics, to make recommendations to the secretary of HHS on national priorities for CER. The committee defined CER as “the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition, or to improve the delivery of care.” The purpose of CER is to assist consumers, clinicians, purchasers, and policymakers in making informed decisions that improve health care at both the individual and population levels.

Several components of this definition merit emphasis. First, CER is about effectiveness, not efficacy. Efficacy examines how a treatment performs under ideal circumstances, generally via a randomized controlled trial that is internally valid but limited in generalizability. Effectiveness examines how a treatment works in the real world: for example, in a variety of settings and for different types of patients, such as those with comorbid conditions.

Methodologically, CER expands beyond what is traditionally thought of as evidence-based medicine methodology, the latter being the synthesis, sometimes meta-analysis, of existing data. CER includes the generation of new evidence, for example by fielding a clinical trial or an observational cohort study, or by collecting data from claims or electronic medical records. CER expands the notion of evidence and challenges our discipline to play a part in defining the most appropriate evidence to answer a specific question.
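To make the synthesis side of this concrete, the sketch below (not from this column; all effect sizes are hypothetical) shows the basic fixed-effect, inverse-variance pooling step that underlies many meta-analyses in systematic reviews:

```python
import math

# Hypothetical treatment-effect estimates (e.g., log odds ratios)
# and their standard errors from three studies.
effects = [0.30, 0.45, 0.20]
std_errors = [0.15, 0.20, 0.10]

# Each study is weighted by the inverse of its variance, so more
# precise studies contribute more to the pooled estimate.
weights = [1.0 / se**2 for se in std_errors]

# Pooled estimate: weighted average of the study effects.
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Standard error of the pooled estimate.
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}")
```

A random-effects model, which allows the true effect to vary across studies, is often more appropriate in practice; the fixed-effect version is shown only because it is the simplest illustration of evidence synthesis.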

CER encompasses the range of health care, from screening to diagnosis to treatment. It addresses decisions at both the patient and public health levels. One intent of the ARRA legislation was to ensure that subpopulations (e.g., children) were considered. The vision was to include stakeholders in priority setting. To that end, both a public hearing and an Internet survey were used to gather public input for the IOM committee deliberations.

CER is also patient-centered. It is about providing information to patients and their families, caregivers, and health care providers about what treatment works best and under what circumstances. In particular, CER outcomes are those that matter to patients, and CER studies are long term. Evidence is worth little if it does not answer the questions that are important to patients, or if it is not translated into usable information and disseminated.

As Lynne Billard, a past president of the ASA, said in her 1996 address:

It is up to us as an association to chart a course that focuses on the unique strengths inherent to statistics and its boundless opportunities to play pivotal and indispensable roles in resolving contemporary issues, a course that guarantees the success of our profession and of statistical science. “Do we count?” We like to think we do. The crucial question is: “Do others think that we count?” That answer and our response to it will fashion our future.

To me, this is still the key question. Evidence-based decisionmaking is a contemporary challenge in which statistics and statisticians can, and must, count.

As always, I welcome your comments.

Thank you.

