
DHHS Administration for Children and Families Uses Rigorous Evaluation

1 June 2016
This month’s guest columnist—Naomi Goldstein, deputy assistant secretary for planning, research, and evaluation at the U.S. Department of Health and Human Services Administration for Children and Families—writes about her agency’s evaluation policy. This piece is part of an Amstat News series spotlighting the federal government’s work to better integrate evidence and rigorous evaluation into budget, management, and policy decisions. —Steve Pierson, ASA Director of Science Policy

Naomi Goldstein joined the Administration for Children and Families in 2000 and became deputy assistant secretary in 2015. She earned her bachelor’s in philosophy from Yale University, her master’s from the Kennedy School of Government, and her PhD in public policy from Harvard University. Learn more about Goldstein from the DHHS.

The Administration for Children and Families (ACF) is a division of the U.S. Department of Health and Human Services that oversees programs for low-income and vulnerable populations such as the Head Start early education program, the Temporary Assistance for Needy Families program, child welfare and protective services, and many more.

ACF’s mission is to foster health and well-being through the compassionate and effective delivery of human services. ACF and its Office of Planning, Research, and Evaluation (OPRE) have a long history of rigorous evaluation drawing on academic traditions, primarily in economics and psychology. It is ACF’s policy to integrate both use of existing evidence and opportunities for further learning into all our activities. Where an evidence base is lacking, we build evidence through strong evaluations. Where evidence exists, we use it.

Our research and evaluation activities cover a range of types of studies, codified in a Common Framework for Research and Evaluation. Our work includes measurement development, nationally representative surveys such as the National Incidence Study of Child Abuse and Neglect and the National Survey of Early Care and Education, design and testing of service innovations, and impact studies such as the Mother and Infant Home Visiting Program Evaluation and the Employment Retention and Advancement project.

In 2012, ACF established an evaluation policy to formalize our commitment to learning and outline a few guiding principles. We built on existing policies of other agencies and private organizations. Developing the policy required us to clarify our goals and principles, and having the policy has helped keep them in the forefront. It can orient new employees and help make these goals and principles part of the shared set of values and assumptions across our agency. The policy has gained some external attention, as well. For example, the Department of Labor adopted a similar policy in 2013. Also, a chapter in the Analytical Perspectives volume of the FY 2017 president’s budget proposal cited the policy and adopted much of its content.

The policy covers five principles: rigor, relevance, transparency, independence, and ethics. Under the principle of rigor, the policy states that ACF is committed to using the most rigorous methods that are appropriate to the evaluation questions and feasible within budget and other constraints. Rigor is not restricted to impact evaluations, but is also necessary in implementation or process evaluations, descriptive studies, outcome evaluations, and formative evaluations, and it applies to both qualitative and quantitative approaches. Rigor requires ensuring that inferences about cause and effect are well founded (internal validity); requires clarity about the populations, settings, or circumstances to which results can be generalized (external validity); and requires the use of measures that accurately capture the intended information (measurement reliability and validity).

In assessing the effects of programs or services, ACF evaluations will use methods that isolate to the greatest extent possible the impacts of the programs or services from other influences such as trends over time, geographic variation, or pre-existing differences between participants and non-participants. For such causal questions, experimental approaches are preferred. When experimental approaches are not feasible, high-quality quasi-experiments offer an alternative.
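
To give a rough sense of why random assignment makes this isolation possible, the sketch below uses invented data, not data or analysis code from any ACF study, to estimate a program's impact as a simple difference in mean outcomes between randomly assigned groups. The sample size, outcome, and effect size are all hypothetical.

```python
# Illustrative sketch only: a hypothetical two-arm randomized evaluation.
# The data, sample size, and effect size are invented; this is not code
# from any ACF study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n = 2000

# Random assignment (1 = program group, 0 = control group) ensures the two
# groups differ only by chance, isolating the program's impact from
# pre-existing differences between participants and non-participants.
assigned = rng.integers(0, 2, size=n)

# Hypothetical outcome (e.g., quarterly earnings in dollars); the true
# impact is set to $250 purely for illustration.
outcome = rng.normal(3000, 800, size=n) + 250 * assigned

# With random assignment, the difference in group means is an unbiased
# estimate of the program's average impact.
impact = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
t_stat, p_value = stats.ttest_ind(outcome[assigned == 1], outcome[assigned == 0])

print(f"Estimated impact: {impact:,.0f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```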

Achieving rigor requires that we recruit and maintain an evaluation workforce with training and experience appropriate for planning and overseeing a rigorous evaluation portfolio. To accomplish this, we aim to recruit staff with advanced degrees and experience in a range of relevant disciplines such as program evaluation, policy analysis, economics, sociology, and child development. And we provide professional development opportunities so staff can keep their skills current.

Under the principle of relevance, the policy emphasizes the importance of strong partnerships among evaluation staff, program staff, policy makers and service providers. Policy makers and practitioners should have the opportunity to influence evaluation priorities to meet their interests and needs. Planning for research and evaluation should be integrated with planning for new initiatives. It is also important for evaluators to disseminate findings in ways that are accessible and useful to policy makers and practitioners.

Under the principle of transparency, ACF is committed to making information about planned and ongoing evaluations easily accessible, including descriptions of the evaluation questions, planned methods, and expected timeline for reporting results. Further, we will release evaluation results regardless of the findings. Evaluation reports will describe the methods used, including strengths and weaknesses, and discuss the generalizability of the findings. Evaluation reports will present comprehensive results, including favorable, unfavorable, and null findings. ACF will release evaluation results in a timely manner and archive evaluation data for secondary use by interested researchers.

Under the principle of independence, ACF’s evaluation policy confirms our commitment to preserve objectivity through insulating evaluation functions from undue influence and from both the appearance and reality of bias.

Finally, under the principle of ethics, ACF is committed to conducting evaluations to safeguard the dignity, rights, safety, and privacy of participants through complying with both the spirit and the letter of relevant requirements, such as regulations governing research involving human subjects.

There are many obstacles to carrying out high-quality research and evaluation in a complex bureaucratic and political context. This core set of principles helps keep our work on track. In addition, we rely on the expertise and capacity of other sectors, including academia and private contracting firms—and on the standards set by organizations like the American Statistical Association.

The National Survey of Child and Adolescent Well-Being (NSCAW)

The National Survey of Child and Adolescent Well-Being (NSCAW) is an example of a nationally representative study sponsored by OPRE. It is a longitudinal study of children and families who have been the subjects of investigation by Child Protective Services. The study collects first-hand reports from children, parents, and other caregivers, as well as reports from caseworkers and teachers and data from administrative records. NSCAW examines child and family well-being outcomes in detail and seeks to relate those outcomes to experiences with the child welfare system and to family characteristics, community environment, and other factors. Data are archived for secondary use at the National Data Archive on Child Abuse and Neglect.

Behavioral Interventions to Advance Self-Sufficiency (BIAS)

As an example of an experimental approach, OPRE’s Behavioral Interventions to Advance Self-Sufficiency (BIAS) project has conducted 15 randomized trials in seven states. This project is the first major effort to use a behavioral economics lens to examine programs that serve poor and vulnerable families in the United States. Unlike many of our studies that examine substantial interventions aimed at influencing long-term outcomes, BIAS focuses on relatively small, inexpensive adjustments to practices meant to influence proximate outcomes such as participation in services or submission of required forms to continue receiving benefits. In 11 of the 15 trials, adjustments such as extra reminders or simplified, personalized letters yielded significant impacts on outcomes of interest.
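
As a purely hypothetical illustration of how an impact on a proximate outcome like form submission might be assessed, the sketch below compares submission rates between a reminder group and a control group using a standard chi-square test. The counts are invented and are not results from the BIAS project.

```python
# Illustrative sketch only: invented counts for a BIAS-style comparison of
# form-submission rates with and without an extra reminder. These are not
# results from the BIAS project.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: reminder group, control group; columns: submitted, did not submit.
table = np.array([
    [620, 380],   # reminder group (hypothetical)
    [540, 460],   # control group (hypothetical)
])

chi2, p_value, dof, expected = chi2_contingency(table)
rates = table[:, 0] / table.sum(axis=1)

print(f"Submission rate with reminder: {rates[0]:.1%}; without: {rates[1]:.1%}")
print(f"Difference: {rates[0] - rates[1]:.1%} (chi-square p = {p_value:.4f})")
```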
