
Influencing Federal Research Funding Policy—White Papers?

1 November 2013
This column is written to inform ASA members about what the ASA is doing to promote the inclusion of statistics in policymaking and the funding of statistics research. To suggest science policy topics for the ASA to address, contact ASA Director of Science Policy Steve Pierson at pierson@amstat.org.

Contributing Editor
Steve Pierson earned his PhD in physics from the University of Minnesota. He spent eight years in the physics department of Worcester Polytechnic Institute and later became head of government relations at the American Physical Society before joining the ASA as director of science policy.

In March 2012, the White House Office of Science and Technology Policy (OSTP) rolled out with great fanfare its Big Data Initiative, a $200 million multi-agency effort to help solve some of the nation’s most pressing challenges. In summer 2011, OSTP had rolled out the $40 million National Robotics Initiative, and in April 2013, President Obama announced the $100 million BRAIN Initiative, a new research effort to better understand the human mind and uncover new treatments.

These high-profile initiatives are just three of many science-related initiatives on a host of topics. Few, if any, focus on statistical science. Why are there no statistical science initiatives, and what does the statistical science community need to do to see such initiatives in the future?

Because of the StatsNSF initiative and the ASA recommendation of a chief statistical scientist at NSF, I have been discussing with my counterparts in other professional societies the challenges the statistical community faces in raising its profile in the scientific community. On the topic of Big Data and the statistical science community’s contention that federal policymakers should be engaging statisticians more (yielding better science), one counterpart quipped that computer scientists have dominated the Big Data landscape for two reasons: their sheer numbers and the white papers generated by the Computing Research Association (CRA) Computing Community Consortium (CCC).

Indeed, it didn’t take much digging to see the influence of CCC white papers. For example, a set of slides posted to the CRA website shows the progression from the publication of a May 2009 CCC report, “From Internet to Robotics: A Roadmap for U.S. Robotics,” to a 2010 OSTP directive to all agencies to include robotics in fiscal year 2012 (FY12) budgets, to the announcement of the National Robotics Initiative in 2011. The next slide shows a similar progression for the OSTP Big Data Initiative, starting with a “series of white papers prepared in fall 2010 relating data mining, machine learning, predictive modeling, etc., to national challenges.”

A glossy CCC brochure, titled “The Computing Community Consortium: A Catalyst and Enabler for the Computing Research Community,” explains that the CCC “was established in 2006 through a cooperative agreement between the National Science Foundation and Computing Research Association to provide a voice for the national computing research community. The CCC facilitates the development of a bold, multi-themed vision for computing research and communicates that vision to a wide range of major stakeholders.” It also documents the close ties of the CRA and CCC to the Obama presidential transition team and OSTP through a quote from Tom Kalil, deputy director for policy at OSTP: “These [white] papers and workshop reports have had a clear influence on the administration’s budget and recruiting decisions and have already sparked collaborations between government, industry, and academia.”

The CCC white papers—ranging in length from two to 20 pages and including research gaps, opportunities, and recommendations—cover a wide range of topics, from Big Data and informatics to synthetic biology and personalized medicine to the smart grid and transportation.

My CRA counterpart, Peter Harsha, also told me a key to success is to tie a white paper to current national priorities, whether they are health care, sustainability, data analytics, or education.

CCC also hosts visioning activities, “workshops that will create exciting visions and agendas for research at the frontiers of computing. Successful workshops will articulate new research visions; galvanize community interest in those visions; and mobilize support for those visions from the computing research community, government leaders, and funding agencies.”

Other organizations have also had success with white papers or their equivalent. The idea for the BRAIN Initiative was sparked at a 2011 conference and defined in a 2012 Neuron paper, “The Brain Activity Map Project and the Challenge of Functional Connectomics.” Astronomy projects are proposed and prioritized in the Astronomy and Astrophysics Decadal Survey, produced every 10 years by the National Research Council of the National Academy of Sciences.

One could argue that the Big Data and BRAIN initiatives include statistical science. I would contend, however, that statistical science is included only at the margins. The role of statistical science in these initiatives is not clearly laid out, leaving it to the statistical science community to convince program officers why they should fund statistical science projects as part of an initiative. It would be more effective to have the statistical components (i.e., the statistical questions to be addressed, what it would take to address them, etc.) laid out in the initiative itself, so that funding agencies seek out statistical scientists through solicitations, review panels, advisory committees, and workshops.

Initiatives happen at many levels. The BRAIN Initiative is an out-of-the-ballpark grand slam. Just as important are the more modest white papers that, say, lead to a new program at the division level at NSF, or the equivalent at an NIH institute. Indeed, in 2012, Sastry Pantula of the NSF Division of Mathematical Sciences reached out to the ASA and our sister math societies for “budget drivers”: proposals or white papers that NSF should consider funding in future years. As this article goes to print in FY14, NSF is already planning its proposals for FY15 and beyond; Pantula was therefore asking for ideas about statistical science (and mathematics) to inject into the internal NSF process for funding initiatives. Indeed, when StatsNSF sent questions to the community in January 2013, at least one was aimed at budget drivers: “Are there complex or massive data problems that might be amenable to joint attack by several disciplines?” We have yet to provide budget-driver ideas to NSF, and how to do so is, of course, the topic of this column.

While I think the white paper approach has potential for the statistical science community, more discussion and exploration are needed. In particular, the CCC is supported in part by NSF, and the 2010 Astronomy and Astrophysics Decadal Survey was supported by NASA, NSF, and the Department of Energy. Could the statistical science community find support to produce such white papers and visioning activities? If we did produce white papers, what steps would we need to take to ensure their influence at federal funding agencies and the White House Office of Science and Technology Policy? Are there more productive routes the statistical community should be taking to spur federal research funding initiatives with a larger emphasis on statistical science?

This month, as the capstone to the International Year of Statistics, the Future of Statistical Sciences Workshop is being held in London. I’m optimistic that the workshop and its report will serve as both an effective visioning activity and a white paper, thereby helping to inform the questions asked above.
