
AI in the Federal Government: Conceptualizing a Use-Case Taxonomy and the Urgency of Building an AI-Capable Government

1 September 2023
Nathen Huang (he/him) is a quantitative social scientist and data scientist who is passionate about leveraging statistics to advance society’s well-being. After years of working in federal consulting, Huang was detailed to the US Department of State as a Presidential Innovation Fellow and subsequently joined the federal service as a program analyst. He is a graduate of Columbia University’s MA in quantitative methods in the social sciences program and lives in Washington, DC.

When I was in college, I grew entranced with a somewhat propagandistic show called Parks and Recreation. The show follows Leslie Knope, an eager and overly dedicated manager of a team of quirky bureaucrats who run the daily operations of the parks department in the fictional small town of Pawnee, Indiana. While the show primarily showcases and satirizes the hijinks and inefficiencies of municipal government, I came away with a different impression: Those working in the public sector had an admirable and unbreakable resolve to provide better services to people, regardless of their background.

In the years since I first watched Parks and Rec, I’ve found myself drawn to public-sector work and have worked for the federal government as both a contractor and civil servant across various agencies: Department of Transportation; Department of Homeland Security; Department of Defense; Centers for Medicare and Medicaid Services; Department of Housing and Urban Development; and (now) Department of State.

Having worked with data in various federal capacities my entire career as a data scientist, I have seen firsthand both opportunities and challenges for greater data science capacities—as well as the subsequent popularization of artificial intelligence—in the federal government. I have developed a taxonomy that summarizes AI’s usefulness to the government for three primary use cases.

Case Management

First, the most direct application of AI in the federal government is in case management. I use the term “case” loosely; a case is simply any instance of a file, application, document, issue, or matter routinely dealt with in the government. As the federal government is meant to be a responsible steward of citizens’ data, cases are naturally the biggest challenge the government faces—and its most plentiful source of data. Deciding which civil rights complaints are prosecutable; determining eligibility for subsidies in housing, veteran health care, or taxes; and identifying which refugees are immediately eligible for asylum vs. parole are all instances of cases to manage.

Much of the government does this manually. Not only is it tedious and laborious, but reviewing individual cases according to a set of standards often open to interpretation can introduce all sorts of bias—availability bias, racial bias, selection bias, etc. Nonetheless, case management will always be essential to governmental operations if citizens need their individual situations brought before the government and addressed.

AI has a lot to offer here. By automating and making predictions on data, case adjudication can become easier and less inconsistent. That does not mean AI is immune to faulty decision-making; machine learning algorithms are only as good as the data they are trained on, and we should remain vigilant about AI's harms. When using AI systems to judge cases, the government should prioritize data privacy and security in its algorithms, given the risks inherent in revealing and using citizen data, especially if training data can be accessed or exposed. If we are thoughtful and careful about how we deploy AI systems in reviewing cases, mitigating bias as much as we can, we will benefit from algorithms that make case management more consistent and accurate.
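One reason automated adjudication can be judged less inconsistently is that an encoded rule applies the same criteria to every case, whereas manual review can drift between reviewers. The following is a minimal, purely hypothetical sketch of that idea; the field names, thresholds, and income formula are all invented for illustration and do not reflect any agency's actual eligibility rules.

```python
from dataclasses import dataclass

@dataclass
class HousingCase:
    """One subsidy application; fields are illustrative only."""
    household_income: float
    household_size: int

def area_income_limit(household_size: int) -> float:
    # Invented limit: a base amount plus an allowance per household member.
    return 30_000 + 8_000 * household_size

def is_eligible(case: HousingCase) -> bool:
    # The same deterministic standard is applied to every case.
    return case.household_income <= area_income_limit(case.household_size)

cases = [
    HousingCase(household_income=41_000, household_size=1),
    HousingCase(household_income=52_000, household_size=3),
]
for c in cases:
    print(c, "->", "eligible" if is_eligible(c) else "not eligible")
```

Real case management is far messier than a single threshold, of course; the point of the sketch is only that encoded criteria are applied uniformly and can be audited, versioned, and corrected in one place.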

Efficiency in Delivering Services

Another benefit of AI is that it can help the government deliver services more efficiently. Beyond simply managing the various cases that come through the government, AI systems can help push critical services out to citizens more quickly. Problems in which speedy services are necessary include serving the populations at greatest risk for certain diseases and in greatest need of medication and treatment; anticipating which students and businesses should be eligible for loans and tax benefits; and predicting and responding to disasters, both health crises like COVID-19 and natural disasters like earthquakes and hurricanes, with adequate equipment and staff while identifying who will be affected.

When executing decisions manually, government services can operate slowly and inconsistently; for each case, there may be myriad ways of activating relevant systems to get assistance to the people who need it. At the very least, AI can offer a programmatic and potentially less-biased system for matching the right federal programs to the people who need them, removing the bureaucratic red tape and intermediaries required to make a decision.

AI’s potential for efficiency should inspire us to more quickly identify ways to incorporate AI into federal government practices, but it should also give us pause. When the impact of AI systems is on the entire country, we must consider the risks and dangers of ‘getting it wrong’—especially if AI causes federal programs to move too quickly, offer shoddy services, or engage in activities that can harm privacy or exceed legal boundaries.

Process Optimization

Finally, the third benefit of AI in the federal government is the opportunity for process optimization. Unlike service delivery efficiency, process optimization focuses on ensuring the government achieves a goal in an "optimal" way—which may or may not ensure overall "efficiency" of the service but focuses on doing a task better. For example, weapons and vehicles fitted with AI-powered systems optimized for various terrains and conditions enable the sub-agencies of the Department of Defense to conduct missions remotely and quickly respond to threats. Likewise, with the looming deployment of autonomous vehicles, there is a critical need for the government to ensure these vehicles cooperate with each other on the road and follow federally set standards.

AI can also be useful for detecting threats to the country. For the FBI and intelligence agencies, AI systems can quickly detect when fugitives are most likely to commit crimes and enter or flee the country based on online activity and communications gathered from signals intelligence. While many of these processes are not easily optimized—since optimization requires a validated standard for whether an objective was achieved—they offer examples of how AI could automatically improve processes, enabling the government to respond more quickly to, and improve upon, the various problems it addresses.

Unfortunately, while harnessing AI systems appropriately poses a challenge, one of the greatest barriers to the federal government leveraging AI's potential is its inability to recruit and retain the talent needed to deploy these systems—a barrier somewhat inherent in the nature of the work. When agencies, bound to previous standard operating procedures, are unable to conceptualize how advanced data analytics capabilities can benefit existing federal workstreams, talented people may not be motivated to find jobs in the government.

Compensation is also an issue. When technology companies are paying 100 to 200 percent more than entry-level federal salaries for engineers and data scientists, even the most well-meaning technologists may not be motivated enough to work for the government.

To incentivize technologists to invest their talents in the public good, the Presidential Innovation Fellows program has matched private sector talent with high-impact technology projects in the federal government for more than a decade. Increasingly, other programs have emerged to incentivize technologists as well, including the US Digital Corps, Congressional Innovation Fellows, and AAAS Fellowship.

Building an AI-capable government—one that affords greater benefits to society than the government we have now—will take all types of people and skills. The federal government needs AI-competent experts in many areas—including quality assurance engineers, data architects, data engineers, and policy analysts—to reconcile government processes with the data services that make up AI systems. Data stewards and policy experts will be needed to check data intake; maintain processes for funneling data; and ensure the reporting, visualizing, and analyzing of collected data.

Parks and Rec shows that the work of helping the public can sometimes be tedious and draining, but it also demonstrates how public service can be immensely rewarding and impactful. When implemented properly and thoughtfully, AI systems can help take away the tedium of public sector work and free up civil servants to exercise their creativity and solve novel problems.

The prospect of doing the most good through public service enabled by technology continues to motivate my work and desire to stay in government. I hope more people feel empowered by that possibility, too.

The Flip Side

When we approach algorithmic bias, our instinct is often to ask how we can create less biased algorithms—after all, biased algorithms can perpetuate the existing biases we hope AI will resolve in a more impartial manner. However, designing less biased algorithms is just one aspect of addressing the larger challenge of AI bias, especially when it comes to federal work that affects the entire country. We should ask not just how to create less biased algorithms, but how biased algorithms create the conditions for future bias.

For instance, algorithms trained on specific ethnic groups, binary gender, or geographically unrepresentative areas could produce recommendations that reinforce inequalities in our society. A job recommendation algorithm lacking training data on applicants with a disability or graduates of historically Black colleges and universities could cultivate a less diverse workforce, perpetuating the false belief that persons with disabilities and/or persons of color are not qualified workers.
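The mechanism here can be shown with a toy simulation. In this minimal, entirely invented sketch, a hiring model's decision threshold is fit using data from only one group; for an underrepresented group whose observed scores are systematically shifted (standing in for measurement bias), the same threshold makes many more errors. The groups, scores, and offsets are all illustrative assumptions, not real data.

```python
import random

random.seed(0)

def make_applicants(n, offset):
    """Simulate applicants as (observed_score, truly_qualified) pairs.
    `offset` shifts the observed score relative to true qualification,
    standing in for measurement bias against an underrepresented group."""
    out = []
    for _ in range(n):
        qualified = random.random() < 0.5
        score = (1.0 if qualified else 0.0) + random.gauss(0, 0.3) + offset
        out.append((score, qualified))
    return out

group_a = make_applicants(1000, offset=0.0)   # well represented in training
group_b = make_applicants(1000, offset=-0.4)  # absent from training data

# "Training": 0.5 is the midpoint threshold that best separates Group A,
# whose qualified and unqualified scores center on 1.0 and 0.0.
threshold = 0.5

def error_rate(applicants):
    wrong = sum((score >= threshold) != qualified
                for score, qualified in applicants)
    return wrong / len(applicants)

print(f"Group A error rate: {error_rate(group_a):.2f}")
print(f"Group B error rate: {error_rate(group_b):.2f}")
```

Because qualified Group B applicants score lower than the threshold expects, they are disproportionately rejected, which is exactly the kind of feedback loop the paragraph describes: a model that never saw a group's data goes on to make worse decisions about that group.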

Moreover, rather than making the federal government better, AI in the government may instead cultivate limitations on how we view humanity—in binary or limited categorical outcomes. In this way, we may become dependent on AI to moderate our understanding of truth, rather than conceptualizing the world in a way that aligns with how we, as human beings, live and behave. As the saying goes, “Computers are binary; people are not.”

AI systems are great for helping the federal government, but they are not the solution to all the challenges our government faces. Because we know algorithmic bias can emerge from so many places—pre-existing federal standards, data sources, political and legislative pressures, IT systems—we should consider how we can have a role in shaping the future we seek to create.

Editor’s Note: The views expressed in this article are those of the author and do not necessarily reflect those of the US government.

