
GovCon Expert Joe Paiva Finds AI at a Crossroads—Amplifying Biases or Empowering All

By Joe Paiva, Chief Operating Officer at Fearless

The digital divide of the 1990s exacerbated long-standing inequities in our society. 

As broadband internet and personal computers proliferated, they reached affluent neighborhoods and households first. This left economically disadvantaged communities, disproportionately communities of color, on the wrong side of the divide. The impacts on education, job skills development and economic opportunity further widened existing disparities.

Today, we face an even more dangerous new digital divide — one fueled by the rapid rise of artificial intelligence and machine learning.

Algorithms are increasingly used to make high-stakes decisions that impact people’s livelihoods and quality of life — from college admissions and job candidate screening to home mortgage approvals and the allocation of government services.

The fundamental problem is this: most of these AI and ML models are trained on historical datasets that reflect centuries of systemic bias and discrimination. There’s redlining in housing, legacy admissions in higher education and underinvestment in schools and businesses in minority neighborhoods. These and countless other inequities are baked into the data from which AI-based applications “learn” how to make predictions.

For example, a 2017 study by researchers at the University of Virginia and the University of Washington found that AI algorithms used by major online platforms to target job ads were significantly less likely to show opportunities in engineering, computing and other high-paying fields to women compared to men. The algorithms had learned to optimize ad placement based on past engagement data, perpetuating long-standing gender disparities in STEM careers. Research articles have found similar issues in AI used for hiring, where models trained on historical employment records can entrench racial and gender biases in selection processes. Equally insidious examples, harder to document, permeate other domains.

Without intentional effort to identify and mitigate these biases, AI will continue to amplify past inequities and erect new barriers to opportunity for underrepresented groups.

And because of the digital divide that began in the ‘90s, underserved communities and people of color have faced significant barriers to developing digital skills, pursuing education and job opportunities, and participating in the digital economy. As a result, these groups are less likely to be developing and implementing the AI tools and practices that now threaten to widen the divide further.

A 2020 study by the National Skills Coalition, “Applying a racial equity lens to digital literacy,” reveals stark disparities in digital skill attainment between white workers and their Black, Latino and Asian American and Pacific Islander peers.

The study found that while 41 percent of white workers have advanced digital skills, only 13 percent of Black workers, 17 percent of Latino workers and 28 percent of AAPI workers have attained this level. These gaps in advanced digital skills are the product of structural inequities deeply rooted in our society, from uneven access to quality education and training to biased hiring practices and lack of diversity in the tech sector.

As a result, rather than being the great equalizer we once hoped for, AI threatens to systematize and amplify the biases of the past, affecting access to opportunity for generations to come.


There are promising examples of AI being deployed thoughtfully to identify bias and the social conditions that underlie disparities. The Department of Veterans Affairs is using AI in many ways. Its Social Determinants of Health, or SDOH, Extractor is an AI-powered tool that analyzes clinical notes in electronic health records to identify key social factors, such as a patient’s economic status, education, housing situation and social support networks, that may influence their health outcomes.

By using natural language processing and deep learning techniques, the system can automatically surface SDOH information. The extracted SDOH variables can then be used by researchers to examine how these social factors contribute to health disparities and impact clinical outcomes for veterans from minority or underserved communities. 
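
To make the task concrete, here is a minimal, hedged sketch of the kind of extraction the SDOH Extractor performs. The VA’s tool relies on trained natural language processing and deep learning models; this sketch substitutes simple keyword matching purely to show the shape of the problem, and the categories and phrases are hypothetical examples rather than the VA’s actual schema.

```python
# Illustrative sketch only: keyword-based SDOH extraction from a clinical note.
# The real VA extractor uses trained NLP/deep learning models, not keyword lists.
import re

# Hypothetical SDOH categories and trigger phrases (not the VA's schema)
SDOH_KEYWORDS = {
    "housing": ["homeless", "unstable housing", "eviction"],
    "economic": ["unemployed", "food insecurity", "cannot afford"],
    "social_support": ["lives alone", "no family nearby", "caregiver"],
}

def extract_sdoh(note: str) -> dict[str, list[str]]:
    """Return SDOH categories mentioned in a clinical note, with matched phrases."""
    found: dict[str, list[str]] = {}
    for category, phrases in SDOH_KEYWORDS.items():
        hits = [p for p in phrases if re.search(re.escape(p), note, re.IGNORECASE)]
        if hits:
            found[category] = hits
    return found

note = "Veteran reports being unemployed and currently homeless; lives alone."
print(extract_sdoh(note))
# {'housing': ['homeless'], 'economic': ['unemployed'], 'social_support': ['lives alone']}
```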

Understanding these relationships is a critical step toward designing more targeted interventions and equitable care delivery practices that address the root social drivers of health.

In the criminal justice system, AI is being leveraged to address racial disparities in sentencing. Researchers at the Stanford Computational Policy Lab developed a machine learning model to identify bias in risk assessment tools used by judges to inform sentencing decisions. 

By analyzing data from over 100,000 criminal cases in Broward County, Florida, the team found that Black defendants were nearly twice as likely as white defendants to be misclassified as high risk of recidivism.

Armed with this insight, policymakers and judges can take steps to mitigate the bias, such as adjusting risk thresholds or supplementing the algorithms with additional contextual information. 
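
As a rough illustration of the kind of disparity the Stanford team measured, the sketch below computes, for each group, the share of defendants who did not reoffend but were still labeled high risk, and shows how adjusting the risk threshold changes that rate. The records, group labels and thresholds are invented for illustration; they are not the Broward County data or the lab’s model.

```python
# Hedged sketch: group-wise false positive rate, i.e. the share of people who did
# NOT reoffend but were labeled high risk. All data below is made up.
from collections import defaultdict

# (group, risk_score, reoffended) -- hypothetical records
records = [
    ("A", 0.82, False), ("A", 0.35, False), ("A", 0.71, True), ("A", 0.66, False),
    ("B", 0.40, False), ("B", 0.58, False), ("B", 0.73, True), ("B", 0.31, False),
]

def false_positive_rate(records, threshold):
    """Share of non-reoffenders labeled high risk, per group."""
    fp, n = defaultdict(int), defaultdict(int)
    for group, score, reoffended in records:
        if not reoffended:
            n[group] += 1
            if score >= threshold:
                fp[group] += 1
    return {g: fp[g] / n[g] for g in n}

print(false_positive_rate(records, threshold=0.6))  # roughly {'A': 0.67, 'B': 0.0}
# One mitigation mentioned above is adjusting the threshold (or adding human
# review) and re-checking whether the disparity narrows.
print(false_positive_rate(records, threshold=0.8))  # roughly {'A': 0.33, 'B': 0.0}
```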

While AI alone cannot solve systemic inequities, these examples demonstrate its potential as a tool for diagnosing and beginning to address bias in high-stakes government decisions and actions.

To disrupt the cycle and close the digital divide, diversity and inclusion must become a strategic imperative, not only within government agencies but also within the contracting community that serves them and the technology sector as a whole. Only by building teams as diverse as the public we serve can we design AI and digital services that work for all.

Failing to act will allow the new digital divide to calcify, further concentrating wealth and power in the hands of the few at the expense of the many.

The call to action is clear. As leaders in government and the technology ecosystem, we must:

  • Select system development partners who employ broadly diverse teams to critically examine the data used to train AI models for historical bias and discrimination, including not only data scientists and engineers but also librarians, sociologists, economists and representatives of the potentially impacted stakeholder groups themselves.
    • Conduct regular audits of training datasets to identify and mitigate biases related to race, gender, age and other protected characteristics. 
    • Develop and implement fairness metrics and testing procedures to evaluate AI models for disparate impact before deployment (a minimal illustrative check follows this list). 
    • Document and publicly share the results of these audits and the steps taken to address issues.
  • Proactively partner with and invest in underserved communities to develop local tech talent and entrepreneurship.
    • The Reboot Representation Tech Coalition is made up of more than 22 leading tech companies, including Adobe, Dell, Intel and Uber. They’ve pledged millions to double the number of Black, Latina and Native American women graduating with computing degrees by 2025. The coalition partners with nonprofits and universities to provide scholarships, mentorship and career opportunities to women of color in tech, including in AI and data science fields.
    • It is critical that organizations work to give people in underserved communities the skills they need to compete for “new-collar” jobs in IT, including data engineering, data science and other AI-related skills.
  • Be intentional in establishing development processes and performance metrics that ensure transparency and fairness are baked into how we design and deploy AI systems that impact people’s lives.
    • Develop clear guidelines and oversight mechanisms for the use of AI in high-stakes decisions, such as hiring, lending and criminal justice. Help stakeholders understand what a system does and does not do, recognize the ramifications for both direct and indirect stakeholders, and share findings transparently with those affected now and in the future. 
    • Provide meaningful opportunities for public input and redress. 
    • Establish independent auditing and appeals processes to identify and correct errors or biases in AI-driven decisions. 
    • Require companies and government agencies deploying AI to publish transparency reports detailing their systems’ purpose, design and performance.
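
As referenced in the fairness-metrics item above, here is a minimal sketch of one common pre-deployment check: comparing selection rates across groups against the “four-fifths rule.” The function name, group labels and counts are hypothetical, and a real audit would use the organization’s own data and a metric suite suited to the specific decision.

```python
# Hedged sketch of a pre-deployment disparate impact check using the common
# "four-fifths rule": flag any group whose selection rate falls below 0.8 times
# the most favored group's rate. All names and numbers are illustrative.
def disparate_impact(selected: dict[str, int], total: dict[str, int], cutoff: float = 0.8):
    """Flag groups whose selection rate falls below `cutoff` x the best group's rate."""
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: {"rate": round(r, 3), "flagged": r < cutoff * best} for g, r in rates.items()}

# Hypothetical screening outcomes from an AI resume-screening model
selected = {"group_1": 120, "group_2": 45}
total = {"group_1": 400, "group_2": 300}
print(disparate_impact(selected, total))
# {'group_1': {'rate': 0.3, 'flagged': False}, 'group_2': {'rate': 0.15, 'flagged': True}}
```

A flagged result is a prompt for investigation and documentation, in line with the audit and transparency items above, not an automatic verdict on the system.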

The path ahead is clear. By embracing diversity, equity and inclusion as core values in the development and deployment of AI, we have the power to create a future where technology truly serves all. 

When we harness the talents and perspectives of our nation’s full diversity, we can create AI systems that are more innovative, more equitable and more impactful. Realizing this vision will require sustained commitment and collaboration across government, industry, academia and communities. It will demand courageous leadership, honest introspection and a willingness to break from the status quo. But the potential rewards—a society where AI narrows opportunity gaps instead of widening them, where technology is a source of empowerment rather than exclusion—are too great to ignore. 

So let us seize this moment, and work together to build a future where the power of AI lifts up the full diversity of the American people. In this future, the digital divide gives way to digital dignity and innovation drives not just prosperity, but justice. This is the future we must build, and the future we will build, together.
