U.S. Vice President Kamala Harris announced a number of new policies to regulate how federal agencies use AI in their work.
The latest policy comes after President Biden issued an executive order on AI in October last year. The U.S. government is trying to allay concerns, though many remain skeptical, about how government agencies will use AI technology.
The policy, written by the Office of Management and Budget (OMB), requires federal agencies to:
- Address the risks of using AI
- Increase transparency in AI use
- Promote responsible AI innovation
- Grow the AI workforce
- Strengthen AI governance
Addressing AI risks
Federal agencies must identify and manage AI risks to ensure that AI use does not impact the rights or safety of citizens.
By December 1, 2024, all federal agencies must implement “specific safeguards” that address potential risks, such as algorithmic discrimination and other impacts on society.
Examples include letting travelers at airports opt out of TSA facial recognition and ensuring human oversight when AI is used in healthcare or to detect fraud in government services.
Agencies that cannot implement these safeguards should stop using such AI tools.
Increased transparency
The policy requires all federal agencies to publicly release a list of the AI tools they use. They must identify use cases that impact rights or safety and how to address them.
Draft guidance on how agencies should report on this excludes cases where AI is “used as a component of the national security system or within the intelligence community.”
Even in these excluded cases, agencies must report metrics of the AI systems they use, inform the public of these exempt AI use cases, and justify why they are exempt.
An interesting requirement is that federal agencies must disclose government-owned AI code, models, and data, provided such disclosure "does not pose a risk to the public or government operations."
Promoting responsible AI innovation
The new policy emphasizes the U.S. government’s commitment to deploying AI technology across a wide range of applications.
According to the announcement, the government will remove unnecessary barriers to make it easier to deploy AI for applications such as addressing the climate crisis, responding to natural disasters, public health, and public transportation.
OMB’s policy encourages “agencies to experiment responsibly with generative AI,” while providing guidance on how to do so safely.
Growing the AI workforce
The Biden-Harris administration has pledged to hire 100 AI experts by summer 2024 as part of a program to safely and reliably deploy AI across federal agencies.
Big tech AI companies are competing fiercely for AI talent, and the U.S. government seems to understand that attracting and retaining the right people to fill 100 positions will not be easy.
It plans to hold a career fair next month and has published guidance on pay and leave flexibility for AI roles.
To attract workers into these roles, agencies can offer upfront salary incentives, relocation incentives, flexible remote work hours, and additional annual leave.
Strengthening AI governance
Federal agencies should designate a chief AI officer to ensure accountability, leadership, and oversight for the use of AI in their operations.
“We are directing every federal agency to appoint a chief AI officer with the experience, expertise, and authority to oversee all AI technologies used by the agency,” Harris said in Wednesday’s announcement.
Additionally, each agency must establish an AI governance committee to coordinate and manage AI use across the organization.
"AI presents not only risks but also tremendous opportunities to improve public services and make progress on societal challenges such as addressing climate change, improving public health, and promoting equitable economic opportunity," said OMB Director Shalanda Young.
The new AI policy aims to help federal agencies unleash that potential while protecting the rights and safety of the people they serve.