Generative artificial intelligence is rapidly gaining traction, prompting government officials and commercial executives alike to sharpen their focus on harnessing the power of AI in a responsible, sustainable and safe way.
According to Ted Kaouk, chief data officer for the Office of Personnel Management, the “disruptive nature” of AI will have widespread impacts on the public sector’s work and workforce, and the technology is “poised to redefine knowledge work in transformative ways.”
However, as AI continues to advance, it is opening up new opportunities and posing new challenges at the same time.
“Over the last two and a half months, generative AI technologies supporting automated query and response generation have enabled users to create new images and human-like text through tools like DALL-E and ChatGPT. Major cloud providers have announced integration and general availability of these technologies through their public facing interfaces,” Kaouk said at the Potomac Officers Club’s 4th Annual AI Summit.
“And the wide availability of these technologies raises key questions about novel use cases, retention impacts on the workforce and multiple ethical considerations,” he warned.
Among those ethical considerations is what Kaouk describes as the "phenomena of hallucinations," in which AI tools can generate factual inaccuracies with "human-like fluency." At this stage of AI technology development in the United States, Kaouk stressed that verifying any factual information generated by AI is "essential."
Kaouk offered a twist on the familiar cybersecurity adage "trust, but verify," saying that when it comes to generative AI, the phrase should be "don't trust, please verify."
Beyond factual accuracy concerns, AI is also opening up new possibilities for employees, and federal leaders are facing novel policy and regulatory questions. Kaouk noted that public sector employees can leverage AI to conduct their work at a much more rapid pace, but first the federal workforce will need policies, guidelines and training to "make the most" of AI.
“We’ve heard about the challenges for intellectual property and plagiarism, privacy and confidentiality, and bias. I think that raises some important questions about what responsibilities we will have in the federal government for disclosure,” Kaouk noted.
"What is the average worker's responsibility to disclose their use of, or reliance upon, generative AI technologies in the conduct of their day-to-day tasks? What are the supervisor's responsibilities in setting appropriate use rules? What are the responsibilities of decision makers? What is an agency's responsibility to disclose it to the public?" he added.
As chair of the Federal Chief Data Officers Council, Kaouk is working to address these emerging issues and other concerns that exist at the intersection of data and AI. According to Kaouk, the CDO Council is tasked with establishing government-wide best practices for the use, protection, dissemination and generation of data; promoting and encouraging data sharing agreements between agencies; and consulting with government and industry on how to improve access to government data.
The council’s data skills and workforce development working group, for example, is developing resources to improve data competency across the federal government, and Kaouk said this work involves looking at the data skills needs of the federal workforce, including those associated with AI.
Kaouk also shared that the use of AI and generative AI will be a “central part” of the council’s efforts to update the federal Data Ethics Framework.
“As agencies on the HR side explore potential use cases for AI and machine learning in hiring, for example, the federal government must be prepared to leverage these technologies ethically and as a model employer. Many hiring technologies use software programs that use algorithms and artificial intelligence, and while these technologies may be useful tools, they may also result in unlawful discrimination against certain groups of applicants without the proper safeguards. So this will be another collaborative effort undertaken by the council in the year ahead,” Kaouk said.
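One kind of safeguard often discussed in this context is a routine adverse impact check on a hiring tool's outputs: compare selection rates across applicant groups and flag any group whose rate falls well below that of the highest-rate group, a screening heuristic commonly known as the "four-fifths rule." The article does not say which safeguards the council will adopt, so the sketch below is purely illustrative; the group labels, the 0.80 threshold and the sample outcomes are all assumptions, not anything Kaouk or the council has prescribed.

```python
"""Illustrative sketch of an adverse impact check on an AI-assisted hiring tool.

Assumptions (not from the article): the group labels, the 0.80 "four-fifths"
screening threshold and the sample outcomes are all hypothetical.
"""

from collections import Counter


def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the selection rate (selected / total) for each applicant group."""
    totals: Counter[str] = Counter()
    selected: Counter[str] = Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}


def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    Ratios below roughly 0.80 are a common screening signal that the tool's
    outputs deserve closer human review.
    """
    benchmark = max(rates.values())
    if benchmark == 0:
        return {group: 0.0 for group in rates}  # no one selected; nothing to compare
    return {group: rate / benchmark for group, rate in rates.items()}


if __name__ == "__main__":
    # Fabricated screening outcomes for illustration: (group, advanced_to_interview)
    outcomes = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]
    rates = selection_rates(outcomes)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "REVIEW" if ratio < 0.80 else "ok"
        print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

A check like this does not establish or rule out unlawful discrimination on its own; it is only a signal that a tool's results warrant the kind of human review and disclosure Kaouk describes.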
Artificial intelligence is a major priority for the intelligence community as it continues to transform the global intelligence landscape. Learn more about the IC's priorities during the 3rd Annual IC Acquisition and Technology Innovation Forum hosted by GovCon Wire on March 9.