
Evidence-Based Policy at the U.S. Department of Labor

2 May 2016

This column is written to inform ASA members about what the ASA is doing to promote the inclusion of statistics in policymaking and the funding of statistics research. To suggest science policy topics for the ASA to address, contact ASA Director of Science Policy Steve Pierson at pierson@amstat.org.

I’m pleased to have Demetra Nightingale—the chief evaluation officer for the U.S. Department of Labor (DoL)—as this month’s science policy guest columnist. Nightingale describes her office’s evidence-based approach to improving the effectiveness of DoL’s many programs. DoL’s evaluation work is frequently highlighted in the federal government’s efforts to better integrate evidence and rigorous evaluation into budget, management, and policy decisions.
~ Steve Pierson, ASA Director of Science Policy

Demetra Smith Nightingale is the chief evaluation officer for the U.S. Department of Labor. She is responsible for coordinating the department’s evaluation agenda and working with all agencies to design and implement evaluations.

The federal government is focused on improving its effectiveness by using data more efficiently and conducting rigorous program evaluations to build evidence about “what works.”

The U.S. Department of Labor (DoL) is responsible for workforce development, job training, unemployment insurance, and labor standards enforcement through worker protection programs such as those in the Occupational Safety and Health Administration (OSHA)—which enforces workplace safety laws—and the Wage and Hour Division—which enforces minimum wage and overtime laws.

The evidence-based approach at DoL involves both program evaluation and performance management.

Evaluation and Research

The evaluation emphasis at DoL is led by the Chief Evaluation Office, which coordinates a department-wide evaluation program responsive to overarching policy priorities and goals set forth in the department’s strategic plan.

Evaluation and research activities include the following:

  • Formal program evaluations using experimental and nonexperimental designs
  • Testing new approaches through pilots and demonstrations
  • Exploratory quantitative and qualitative analysis
  • Capacity-building related to evaluation

Formal experimental evaluations, with random assignment to treatment and control groups, are common in the workforce development policy area. They estimate the net impact of a program or strategy against a counterfactual condition representing what would have happened without the intervention. For example, evaluations of employment services and job training are conducted to identify practices that can be replicated across the public workforce development system and to identify possible efficiencies.
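To make that logic concrete, here is a minimal sketch in Python, using simulated data rather than any DoL study’s data. Under random assignment, the control group’s mean outcome stands in for the counterfactual, so the simplest impact estimate is the difference in group means.

    # Minimal random-assignment impact estimate on simulated data.
    # The outcome mimics "weeks of unemployment benefits collected";
    # all numbers are invented for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)
    control = rng.normal(loc=18.0, scale=6.0, size=1000)    # counterfactual condition
    treatment = rng.normal(loc=15.0, scale=6.0, size=1000)  # received services

    impact = treatment.mean() - control.mean()              # net impact estimate
    t_stat, p_value = stats.ttest_ind(treatment, control)   # two-sample t-test
    print(f"Estimated impact: {impact:.2f} weeks (p = {p_value:.4g})")

Real evaluations add covariate adjustment, clustering, and multiple outcomes, but the counterfactual comparison above is the core of the design.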

A 2012 experimental study—Impact of the Reemployment and Eligibility Assessment (REA) Initiative in Nevada by Marios Michaelides and coworkers—evaluated strategies to speed the rate at which unemployed workers become reemployed and found that “rapid reemployment is more likely to occur when unemployment insurance claimants receive targeted individualized employment services. The treatment group claimants collected 3.13 fewer weeks and $873 lower total unemployment benefit amounts than the control group.” These findings suggested that the resulting public savings were more than four times the average program costs.

Another net impact evaluation—An Effectiveness Assessment and Cost-Benefit Analysis of Registered Apprenticeship in 10 States by Deborah Reed and colleagues—used nonexperimental multivariate modeling to estimate the effectiveness of registered apprenticeships, which provide individuals with long-term training leading to certificates and licenses in electrical and other trade occupations. That analysis found that “participation in registered apprenticeship was associated with substantial gains in earnings of $47,000 over a nine-year period following enrollment in the program and $99,000 over the career of an apprentice.”
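For readers unfamiliar with the nonexperimental approach, the sketch below shows one common form it takes: regressing the outcome on a participation indicator while adjusting for observed covariates. The variable names and data are invented for illustration and do not come from the Reed study; unlike random assignment, this kind of specification remains vulnerable to selection on unobserved characteristics.

    # Nonexperimental multivariate model on simulated data: the coefficient
    # on the participation indicator is the adjusted association between
    # apprenticeship and earnings, not a guaranteed causal effect.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(seed=0)
    n = 5000
    age = rng.uniform(18, 55, size=n)
    prior_earnings = rng.normal(30_000, 8_000, size=n)
    apprentice = rng.binomial(1, 0.3, size=n)               # participation indicator
    earnings = (20_000 + 5_000 * apprentice + 0.5 * prior_earnings
                + 200 * age + rng.normal(0, 5_000, size=n))

    X = sm.add_constant(np.column_stack([apprentice, prior_earnings, age]))
    fit = sm.OLS(earnings, X).fit()
    print(fit.params[1])  # adjusted earnings gain associated with participation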

The statistically significant positive evidence from these (and other) evaluations has been used to justify expanding the strategies.

Evaluations and statistical analysis of program outcomes also are conducted in worker protection programs, often using program management data in more analytic ways than the program agencies otherwise have the resources to undertake. For example, embedded within a large evaluation of OSHA enforcement activities is a sub-study testing whether targeted mailings to businesses increase requests for free onsite consultations to assess workplace health and safety conditions. Firms randomly assigned to receive the targeted notice were 25% more likely to request the free assistance than firms that received regular general information, thereby increasing voluntary compliance and helping to deter injuries.
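The primary comparison in a sub-study like that one can be tested very simply. The sketch below uses invented counts (chosen so the targeted group’s request rate is 25% higher) and a standard two-proportion z-test; it illustrates the analysis, not the study’s actual data or code.

    # Two-proportion z-test on invented counts from a hypothetical
    # mailing experiment: targeted notice vs. general information.
    from statsmodels.stats.proportion import proportions_ztest

    requests = [250, 200]   # firms requesting a consultation (targeted, general)
    firms = [2000, 2000]    # firms randomly assigned to each mailing

    z_stat, p_value = proportions_ztest(count=requests, nobs=firms)
    lift = (requests[0] / firms[0]) / (requests[1] / firms[1]) - 1
    print(f"Relative lift: {lift:.0%}, p = {p_value:.4g}")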

In addition to formal evaluations and analyses, DoL sponsors surveys on high-priority topics. To better understand the effects of the Family and Medical Leave Act (FMLA), nationally representative samples of workers and employers were surveyed about their use of and perspectives on family and medical leave. The findings—as explained by Jacob A. Klerman and his associates in the report “Family and Medical Leave in 2012”—provided useful information to the federal agencies about how to better inform workers and employers about labor regulations.

A number of capacity-building activities are also underway to improve federal employees’ knowledge about evaluation, including methodological seminars, statistical user groups, and established guidelines for high-quality evaluations. The guidelines are posted on DoL’s evidence-based Clearinghouse of Labor, Evaluation, and Research (CLEAR) website, which includes systematic evidence reviews of evaluations sponsored by DoL or other researchers.

Performance Management

The Performance Management Center (PMC) leads DoL’s performance management activities. The priority goals laid out in the strategic plan are operationalized in annual operating plans for each DoL agency. Through quarterly review meetings with the deputy secretary, the department’s agency heads discuss their agency’s performance progress compared to previously established targets.

Evaluations also feed evidence into the performance management process, for example through analysis of factors associated with current measures, which can suggest definitional refinements or new measures that more fully capture performance. In one study, management data from workers’ compensation programs were analyzed to identify factors associated with the rate at which individuals return to work after receiving compensation payments for a work-related injury. Statistical analysis is also examining employment-related services to subgroups such as women, ethnic minorities, and veterans returning from active duty.

Thus, a culture of evidence is emerging at DoL, due in part to an active, empirically based, and comprehensive program of research and evaluation that is consciously linked to management and operations through strategic planning and performance management. Rigorous evaluations help policymakers and administrators understand why public programs may or may not be meeting their goals, the relative effectiveness of different strategies for achieving those goals, and what needs to change to improve results.

The statistical community can play an important role in furthering the progress made in evidence-based policy by sharing the latest statistical techniques with the public policy and evaluation community.

First, the evidence-based climate in the federal government requires that publicly funded evaluations and research—as well as performance measurement—adopt the best methods, including constructing appropriate comparison groups, selecting suitable matching and estimation techniques, and considering external validity when designing evaluations. In addition, access to timely, high-quality, and secure data for research and evaluation purposes is critical.
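As one concrete example of the comparison-group techniques mentioned above, the sketch below implements a bare-bones propensity-score match on simulated data: estimate each unit’s probability of treatment from observed covariates, then pair each treated unit with the control whose score is closest. Everything here is illustrative; production-quality matching requires balance checks, caliper choices, and sensitivity analyses.

    # Bare-bones propensity-score matching on simulated data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(seed=1)
    n = 4000
    covariates = rng.normal(size=(n, 3))        # e.g., age, education, prior earnings
    treated = rng.binomial(1, 1 / (1 + np.exp(-covariates[:, 0])))  # nonrandom selection
    outcome = covariates @ np.array([1.0, 0.5, 0.2]) + 2.0 * treated + rng.normal(size=n)

    # Propensity scores: estimated probability of treatment given covariates.
    ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

    # Match each treated unit to the control with the nearest score.
    t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
    _, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))

    att = (outcome[t_idx] - outcome[c_idx[match.ravel()]]).mean()
    print(f"Matched estimate of the effect on the treated: {att:.2f}")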

The statistical community can continue to enthusiastically support the federal data and statistical systems and the statistical agencies that form the foundation of evidence-based policymaking.

    Editor’s Note: The opinions expressed here are those of the author and should not be attributed to the U.S. Department of Labor or the federal government.
