The Department of Health and Human Services is hugely optimistic about artificial intelligence’s application in the daily processes of providers and clinicians, including potential uses like patient monitoring, health journey predictions, note-taking and even guiding surgical procedures. Steven Posnack, principal deputy assistant secretary for technology policy at HHS, said that a future where a doctor’s decision to forgo AI tools in their practice becomes a “liability concern” may not be far off.
Posnack kicked off the Potomac Officers Club’s 2024 Healthcare Summit on Wednesday with a keynote address on AI’s growing profile in the healthcare sector. Speaking to a crowd made up largely of government contractors, Posnack said that both industry and government need to be pondering “how much decision-making are we going to be enabling some of these tools to make right out of the gate?”
“A lot of it is around making sure that it’s augmenting the work that people are doing, helping give them a head start,” Posnack said.
It’s not too early to register for what’s sure to be one of the biggest GovCon networking events of the year: the Potomac Officers Club’s 2025 Artificial Intelligence Summit. Save your spot now for the March 20 symposium, which centers on the most transformational technology in decades and will spotlight many opportunities for public-private partnership.
Regulation Wariness
Drawing on lessons learned from two decades in digital health technology and health policy, Posnack was adamant that the correct response to AI’s proliferation is not strict regulation. Heavy-handed, persistent government rules would have to keep pace with the meteoric rate at which AI and other technologies evolve, and that is quite difficult, he suggested.
“It’s hard to chase industry updates and development with regulation. So we have to take different approaches that are incremental, that can run in parallel, but help provide some structures and guardrails and some safe spaces for industry to continue to work,” Posnack said.
Sweeping AI regulations are also a poor fit for healthcare because different areas within the field have different “risk tolerances.” The potential fallout from AI missteps or hallucinations varies in gravity depending on whether they occur in back-office operations, in patient care or in research and development, Posnack said.
“So it’s just not a one-size-fits-all from a policy perspective for us,” he shared, adding that when figuring out how and when to bring AI into healthcare, HHS is extra focused on notions of complexity, scale and autonomy.
When Regulation Is Necessary
Posnack is wary of regulating AI partly because of his experience with electronic health record, or EHR, adoption 15 years ago, when laws lagged well behind the technology’s rapid development. These days, with EHR variables somewhat more stable, some stricter rules and requirements are warranted. He revealed that beginning Jan. 1, 2025, the companies that design and produce EHRs will be required to disclose 31 source attributes about the models in their products to their clinical users.
“We’re not judging those models or algorithms, we’re just saying you’ve got to inform the clinical users more about what’s under the hood so that they can make better informed choices about the types of models that they’re putting into clinical practice,” Posnack explained.
To learn more about all of the ins and outs of government AI usage and what agencies expect of industry, attend Potomac Officers Club’s 2025 AI Summit on March 20!