As a White House AI advisory committee recently said in its first report, “direct and intentional action is required to realize AI’s benefits, reduce potential risks, and guarantee equitable distribution of its benefits across our society. With the acceleration of AI adoption comes a parallel imperative to ensure its development and deployment is guided by responsible governance.” As the report and recent Capitol Hill hearings on AI make clear, generative AI has the potential to usher in massive productivity and operational efficiency gains, but risks around errors, misuse, bias, privacy, and security all pose significant challenges for agencies.
Yet ChatGPT and other generative AI applications aren’t the only type of AI.
Machine learning tools have been in use for years, automating manual tasks such as providing specific information to constituents, redacting sensitive documents, or combining files and databases to better correlate information.
Federal agencies, however, are far behind in their use of machine learning tools. As they grapple with generative AI, there’s the real risk that they’ll fall further behind in using the non-generative AI and machine learning tools that can benefit their operations and overall mission today, with none of generative AI’s downsides.
When government embraces machine learning, the benefits are immediate and compelling. One of the best examples is using machine learning for Freedom of Information Act (FOIA) requests. As the number of FOIA requests continues to climb, simply keeping up with them using manual processes is quickly becoming difficult. It may become practically impossible in the years ahead. With today’s FOIA-focused software, agencies can automatically and reliably scan for and redact sensitive information, saving many hours and days of work.
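To make the redaction idea concrete, here is a minimal sketch of automated pattern-based redaction. This is illustrative only: the patterns and sample text below are hypothetical, and production FOIA software relies on trained models and human review rather than bare regular expressions.

```python
import re

# Hypothetical patterns for common sensitive fields. Real systems cover
# far more formats and pair automation with human quality control.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labeled redaction block."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Reach the requester at jane.doe@agency.gov or 202-555-0147."))
# → Reach the requester at [REDACTED EMAIL] or [REDACTED PHONE].
```

Even a toy version like this shows why the approach scales: the same pass runs identically over one page or one million, which is where the hours and days of manual effort are recovered.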
Machine Learning: The “Other AI”
When people consider AI, they typically think about the Hollywood version: HAL from 2001: A Space Odyssey, Wall-E, or Arnold Schwarzenegger’s Terminator. Today’s AI is far less advanced than those fictional versions. Generative AI is best understood as an ideation tool: it can generate content, ideas, or solutions based on a set of input parameters.
On the other hand, machine learning is more like a data-driven optimization tool. It processes and analyzes existing data to identify patterns, correlations, and insights that may not be immediately apparent to human analysts. By learning from historical information, machine learning algorithms can make predictions or help optimize existing processes. Essentially, machine learning acts as a data interpreter, extracting value from information that might otherwise go unnoticed.
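The “learn from historical data, then predict” pattern can be sketched in a few lines. The example below fits a least-squares trend line to an invented series of yearly request counts and extrapolates one year forward; the figures are hypothetical, and real agency systems would use richer features and vetted ML libraries.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [2019, 2020, 2021, 2022, 2023]      # hypothetical history
requests = [800, 900, 1000, 1100, 1200]     # hypothetical counts

slope, intercept = fit_line(years, requests)
forecast_2024 = slope * 2024 + intercept
print(round(forecast_2024))  # → 1300
```

The point is not the arithmetic but the workflow: historical records go in, a learned relationship comes out, and that relationship drives a forecast or an optimization, exactly the role described above.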
This type of pattern analysis and process optimization can be extremely valuable to federal agencies, which can use machine learning in applications as varied as fraud detection, security, intelligence, government transparency, healthcare, research, border protection, energy, and more.
Wherever vast amounts of data reside, or manual processes can be reliably replaced with automation, agencies can likely benefit from machine learning. As mentioned above, it’s already used to automatically redact sensitive information in FOIA processing, as well as to allocate disaster response resources based on historical data and to predict grid failures.
Responsibly Using AI for Government
Machine learning can create massive new efficiencies and streamline government operations. It’s been applied to numerous government and private sector initiatives and has been proven over years of use. Generative AI may also someday create compelling productivity gains, but unlike machine learning, it’s new, untested, and so far, untrustworthy. It confidently generates inaccuracies and is likely to do so for years. It’s rife with bias. It creates new attack vectors that hackers may exploit.
With that in mind, agencies should consider a two-pronged approach when it comes to AI. While a specific roadmap will look different for different agencies, generally, agencies should:
- Pause all use of generative AI programs and applications on government devices. This pause may not be permanent, but taking time to really understand the technology and its limitations is critical before introducing it widely in government. The risks are simply too great to justify any nascent productivity gains at this point in the adoption curve.
- Use this pause to research and codify best practices and put guardrails in place for specific, allowable uses of generative AI.
- Issue agency-wide communications outlining AI policy and requiring permission for legitimate uses.
- At the same time, explore machine learning applications to automate processes and increase efficiencies where possible.
With this approach, agencies can responsibly integrate AI into their workflows and safeguard against unintended consequences.
As AI continues to advance at breakneck speed, agencies would be wise to pause, assess the risks and benefits of each piece of AI technology, and then commit to developing and following use case policies tailored to the type of AI tool being implemented.
Many examples of transformational use cases already exist, including video and text redaction for the processing of FOIA requests, predictive maintenance planning for DoD aircraft, and fraud detection software that assists healthcare agencies in detecting Medicaid fraud. These proven applications will help demonstrate responsible government use of AI. In this way government agencies can use the parts of AI that are mature, trustworthy, and helpful today, while responsibly laying the foundation for the even more advanced AI of tomorrow.
Howard is the CEO of OPEXUS, the leading provider of government process management software. He has more than 20 years of experience driving growth in technology businesses, with more than 15 years supporting government customers. Howard holds a Bachelor of Science from the McIntire School of Commerce of the University of Virginia.