By Bill Webner, CEO of Capgemini Government Solutions, in collaboration with Melissa Hatton, AI Strategy Lead at Capgemini Government Solutions; James Farnsworth, Chief U.S. Compliance & Regulatory Counsel at Capgemini America, Inc.; and Johnathan Duffie, Senior Legal Counsel & Americas AI Legal Lead at Capgemini America, Inc.
Artificial intelligence is driving groundbreaking innovation across industries, prompting nations worldwide to invest heavily in technology advancement to maintain their global leadership. This pursuit offers a range of business and societal benefits, from improved operational efficiency to innovative solutions that enhance quality of life globally.
However, AI leaders in both the public and private sectors must recognize the ethical and environmental implications of AI advancements. Biases often found in AI models trained on non-representative data raise equity and justice concerns. Additionally, the substantial computing power required to train and run AI models highlights the need for sustainable practices to reduce carbon emissions.
Given these complexities, a diverse range of expertise — including ethicists, policymakers, legal professionals, economists, data engineers and scientists, and business leaders — is essential to understand and guide these transformative changes in the United States.
Emerging AI regulatory landscape
Efforts are underway in numerous countries, including the United States, to regulate the expanding AI market. The European Union spent three years crafting the EU AI Act, a comprehensive, risk-based framework for AI regulation. Ratified by the European Parliament in March, the legislation is set to take effect as early as this summer. By tiering obligations according to risk and enforcing them centrally, the EU AI Act could establish the global standard for safety and individual rights, particularly for high-risk applications.
In contrast, the Biden administration’s AI Executive Order provides a flexible set of principles for federal agencies, encouraging tailored AI policies across different sectors with an emphasis on fostering innovation and securing economic leadership.
Despite progress in establishing strategic AI regulatory frameworks, the United States still grapples with achieving a unified approach. American companies find themselves at the intersection of innovation and regulation: as the federal government develops AI guidance and bipartisan legislation, and states pursue their own regulatory paths, the legislative landscape presents both challenges and opportunities.
Public-private partnerships for AI governance in the U.S.
Organizations are forming internal governance structures dedicated to AI that serve as a compass for staying aligned with evolving laws and regulations, allowing them to leverage AI with minimal disruption.
Traditional business strategies may fall short in this AI era. Addressing AI challenges requires diverse perspectives that anticipate long-term implications, consider socio-economic and environmental impacts, and guide policy and legal framework development.
Public-private partnerships are crucial for effective AI governance and risk management. For example, the Cybersecurity and Infrastructure Security Agency’s partnership model has aided in crafting cybersecurity standards and fostering best practices. The Biden administration has announced that more than 200 entities are joining a consortium to support safe AI development and deployment. These alliances ensure inclusive, forward-thinking governance and technological advancement.
Internal governance structures and interdisciplinary teams, such as dedicated AI offices, are essential to comprehensive AI strategies. These structures typically include dedicated roles or committees that track legal requirements, assess AI risks, and ensure compliance. Investing in these mechanisms positions organizations to maximize AI's potential through clear guidance and shared resources.
U.S. governance frameworks should emphasize agility and flexibility, enabling organizations to respond quickly to technological advances, emerging risks, and new regulations. This approach helps organizations adapt to changing landscapes by guiding assessments of technology impacts on operations and compliance. When new legislation emerges, the governance structure can facilitate rapid organizational responses, ensuring compliant AI deployments with minimal disruption.
A robust AI governance body, supported by a consortium that manages associated risks, is crucial to enabling confident innovation. It can lead to faster AI development, more impactful model implementation, stronger data management, better integration of ethics into AI processes, and greater stakeholder trust in AI adoption.
Navigating the next chapter in the U.S. model
Today’s AI era is characterized not only by rapid technological advancements and potential benefits but also by concerns about environmental and societal impacts. Diverse perspectives are essential to navigate this unfolding chapter. By forming partnerships, government entities and private corporations can establish AI governance frameworks to drive innovation while protecting the public.