The General Services Administration has issued an interim security policy directing employees and contractors to limit the use of generative artificial intelligence tools built on large language models when working on the GSA network and government-furnished equipment.
GSA cited the risk that LLMs, which train on public data sources and user-submitted inputs, could expose government information to unauthorized platforms.
The interim policy remains in effect until June 30, 2024.
Generative AI tools, including OpenAI's ChatGPT, Google's Bard and Salesforce's Einstein, use LLMs to generate text-based content from data patterns learned during training.
Craig Martell, chief digital and artificial intelligence officer at the Department of Defense and a 2023 Wash100 awardee, previously warned that adversaries could exploit such language models to spread disinformation.