By Joe Paiva, Chief Operating Officer at Fearless
The White House’s recent memorandum on artificial intelligence governance marks a critical moment for federal agencies and their industry partners.
As AI systems become increasingly central to government operations, the memorandum establishes clear imperatives: protect civil rights and liberties, ensure AI safety and trustworthiness, and develop these technologies through diverse, highly skilled teams.
These priorities align directly with our approach at Fearless. We believe diverse teams are the foundation for creating AI-enabled solutions that work for all Americans.
The challenges outlined in the White House memorandum — from protecting civil rights to developing responsible AI governance — can’t be addressed through technology alone. They require a fundamental shift in how agencies approach AI development, starting with the diversity of the teams responsible for building these systems.
Hiring to Support Equitable AI
AI is reshaping how the government serves its citizens. But, as the White House memorandum makes clear, AI's rapid improvement and adoption bring with them both enormous potential for good and significant risks.
How do we ensure these powerful new tools promote democracy and protect civil rights rather than amplifying existing inequities?
The key lies in the teams developing these technologies. Diverse teams provide our strongest defense against AI bias in government systems.
The risks of biased AI are significant and far-reaching. A 2019 study revealed troubling racial bias in a widely used healthcare algorithm. For patients with the same number of chronic conditions, Black patients were 48 percent less likely than White patients to be flagged for extra care. This bias stemmed from using healthcare costs as a proxy for health needs, reflecting systemic inequalities in healthcare access. The study estimated that the bias affected millions of patients and cut the number of Black patients identified for extra care by more than half.
This example underscores a fundamental problem: most AI and machine learning models are trained on historical datasets that reflect centuries of systemic bias and discrimination. There’s redlining in housing, legacy admissions in higher education and underinvestment in schools and businesses in minority neighborhoods. These and countless other inequities are baked into the data from which AI-based applications “learn” to make predictions.
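The cost-as-proxy problem can be made concrete with a small simulation. This is a hypothetical sketch, not the actual study’s data: two groups of patients have identical health needs, but one historically incurs lower costs because of unequal access to care, so an algorithm that flags top spenders for extra care flags them less often.

```python
import random

random.seed(0)

def simulate_patients(group, cost_per_condition, n=10_000):
    # Both groups draw chronic-condition counts from the same distribution,
    # so their true health needs are identical by construction.
    patients = []
    for _ in range(n):
        conditions = random.randint(0, 8)       # true health need
        cost = conditions * cost_per_condition  # observed spending (the proxy)
        patients.append({"group": group, "need": conditions, "cost": cost})
    return patients

group_a = simulate_patients("A", cost_per_condition=1000)
group_b = simulate_patients("B", cost_per_condition=600)  # less access, lower cost

# A simple "algorithm" that flags the top 20 percent of spenders for extra care,
# using cost as a proxy for need
everyone = group_a + group_b
threshold = sorted(p["cost"] for p in everyone)[int(0.8 * len(everyone))]
flagged = [p for p in everyone if p["cost"] >= threshold]

rate_a = sum(p["group"] == "A" for p in flagged) / len(group_a)
rate_b = sum(p["group"] == "B" for p in flagged) / len(group_b)
print(f"Flag rate, group A: {rate_a:.1%}")  # far higher, despite identical need
print(f"Flag rate, group B: {rate_b:.1%}")
```

The model never sees a group label; the disparity comes entirely from choosing a proxy target that encodes unequal access.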
Without concerted efforts to identify and mitigate these biases, AI will continue to amplify past inequities and erect new barriers to opportunity for underrepresented groups. This is particularly concerning as AI becomes more prevalent in government services, where decisions can have profound impacts on citizens’ lives.
Building Teams That Build Better AI
The memorandum’s emphasis on expanding America’s AI talent pool points to an important truth: the future of equitable AI depends on diverse teams bringing different perspectives to the table.
Fearless has seen firsthand how diverse teams can address these challenges. When people from different backgrounds work together, they bring a range of perspectives that can identify and mitigate potential biases in AI systems. This diversity of thought is crucial in developing fair and equitable AI solutions for government services.
Consider an AI system for screening job candidates in government roles. A diverse team is more likely to question traditional metrics like university prestige or past job titles that might reinforce existing inequalities. Instead, they might focus on skills-based assessments or consider non-traditional career paths, resulting in a fairer screening process.
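As a hypothetical sketch of that contrast (the candidate fields, weights and scores below are invented for illustration, not any agency’s actual criteria), a pedigree-weighted screen and a skills-based screen can rank the same two candidates in opposite orders:

```python
# Illustrative only: fields and weights are assumptions, not real hiring rules.
PRESTIGE_SCHOOLS = {"Ivy U", "Elite Tech"}

def pedigree_score(candidate):
    # Traditional screen: rewards university prestige and past job titles.
    score = 0
    if candidate["school"] in PRESTIGE_SCHOOLS:
        score += 50
    if "Senior" in candidate["last_title"]:
        score += 30
    return score

def skills_score(candidate):
    # Skills-based screen: rewards demonstrated competencies from a
    # work-sample assessment, regardless of where they were learned.
    return sum(candidate["assessment"].values())

veteran = {
    "school": "Community College",
    "last_title": "Signals Analyst",
    "assessment": {"python": 28, "data_pipelines": 30, "ml_ops": 25},
}
traditional = {
    "school": "Ivy U",
    "last_title": "Senior Analyst",
    "assessment": {"python": 20, "data_pipelines": 15, "ml_ops": 10},
}

for name, c in [("veteran", veteran), ("traditional", traditional)]:
    print(name, pedigree_score(c), skills_score(c))
# The pedigree screen favors the traditional candidate (80 vs. 0);
# the skills screen surfaces the stronger performer (83 vs. 45).
```

The point is not that prestige signals carry no information, but that a screen built only on them systematically misses qualified candidates who took non-traditional paths.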
The U.S. Office of Personnel Management has taken steps to address these issues. In April 2024, OPM issued skills-based hiring guidance and a competency model for AI, data, and technology talent. This guidance is designed to assist agencies in identifying key skills and competencies needed for AI professionals and increase access to these technical roles for individuals with nontraditional academic backgrounds.
This shift towards a skills-centric paradigm emphasizes practical skills over educational backgrounds or past titles. It also prioritizes talent with AI proficiencies tailored to organizational objectives.
By reconsidering college degree requirements for certain positions and expanding recruitment efforts to target diverse talent pools, including veterans, people with disabilities, LGBTQ+ individuals and older workers, agencies can tap into a wider pool of qualified candidates.
Strategies for Building Diverse Teams
To address some of the concerns highlighted in the memorandum and build diverse AI teams, government agencies and contractors need to take proactive steps:
- Prioritize diversity in AI and data science teams. This means looking beyond traditional talent pools and actively recruiting from underrepresented groups in tech. OPM has authorized agencies to use a Direct Hiring Authority to assist efforts to increase AI capabilities in the federal government, making it easier to recruit AI talent from several specialties. These include IT specialists, computer scientists and engineers, and management and program analysts.
- Implement ongoing training on AI ethics and bias recognition for all team members involved in AI development. OPM has issued guidance on the “Responsible Use of Generative Artificial Intelligence for the Federal Workforce” to support Federal employees in learning about GenAI’s potential benefits and risks, and exploring best practices for safely, securely, and responsibly using GenAI in their work.
- Engage with diverse communities throughout the AI development process, from initial planning to testing and implementation. This can involve partnering with minority-serving institutions, LGBTQ+ advocacy groups, and disability rights organizations to connect with underrepresented groups and gain valuable insights.
- Establish clear guidelines for AI fairness and regularly audit AI systems for potential bias. OPM has issued an “Artificial Intelligence Classification Policy and Talent Acquisition Guidance” to address position classification, job evaluation, qualifications, and assessments for AI positions.
- Use skills-based hiring approaches, emphasizing practical skills over educational backgrounds or past titles. This aligns with OPM’s recent guidance on AI competencies and skills-based hiring, which aims to increase access to technical roles for individuals with nontraditional academic backgrounds.
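As one simple illustration of what a recurring bias audit might involve, the sketch below computes selection rates by demographic group and applies the EEOC’s “four-fifths rule” heuristic, which treats a group whose selection rate falls below 80 percent of the highest group’s rate as a red flag warranting review. The decision records and group labels here are hypothetical.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Selection rate per group from (group, selected) records."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratios(rates):
    # Compare each group's selection rate to the highest-rate group.
    # The four-fifths rule flags ratios below 0.8 for further review.
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical audit data: group A selected 60 of 100, group B 35 of 100
decisions = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 35 + [("B", 0)] * 65
rates = selection_rates(decisions)
ratios = disparate_impact_ratios(rates)
flags = [g for g, r in ratios.items() if r < 0.8]
print(rates)  # {'A': 0.6, 'B': 0.35}
print(flags)  # ['B'] -- B's ratio of about 0.58 falls below the 0.8 threshold
```

A ratio below the threshold doesn’t prove the system is unlawful or broken, but it tells the team exactly where to look next, which is what regular audits are for.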
By implementing these strategies, government agencies and contractors can create more diverse and inclusive AI teams, leading to fairer and more effective AI systems in government services.