Large language models are artificial intelligence tools that are trained on massive amounts of data and can understand and respond to user prompts. Current LLMs are best known for generating text, but these models are now evolving to create images and video as well.
The ability of LLMs to break down huge datasets offers public sector organizations many opportunities to enhance their decision-making capabilities. Though LLMs hold immense potential, they still have shortcomings, and experts say a human-in-the-loop is necessary for these tools to be used effectively.
“We’ve gone from a search engine to an answer engine, but the analyst is also required. And so, we’re trying to use them as a partner to help inform and get through the massive amounts of data to help inform decision making,” Col. Michael Medgyessy, intelligence chief information officer for the U.S. Department of the Air Force, explained during a panel discussion at the Potomac Officers Club’s 5th Annual CIO Summit on Wednesday.
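A minimal sketch of that human-in-the-loop pattern is below. It assumes a placeholder summarize() function standing in for any real LLM API, and the names here (Draft, analyst_review) are invented for illustration, not an actual Air Force workflow; the point is simply that the model condenses the data while a human approves the result before it informs a decision.

```python
# Hypothetical human-in-the-loop sketch of the "answer engine plus analyst"
# pattern. summarize() is a placeholder for any LLM completion call.

from dataclasses import dataclass


@dataclass
class Draft:
    summary: str          # model-generated condensation of the source data
    approved: bool = False


def summarize(documents: list[str]) -> str:
    """Placeholder for an LLM call that condenses a large document set."""
    raise NotImplementedError


def analyst_review(draft: Draft, analyst_ok: bool) -> Draft:
    """The human, not the model, decides whether the summary is usable."""
    draft.approved = analyst_ok
    return draft


def inform_decision(documents: list[str], analyst_ok: bool) -> str | None:
    draft = Draft(summary=summarize(documents))
    draft = analyst_review(draft, analyst_ok)
    return draft.summary if draft.approved else None  # gated on the human
```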
Sean Williams, founder and global CEO of AutogenAI, pointed out that because LLMs can access such vast amounts of information, there is a significant chance that their analysis, though performed very quickly, may not represent the truth as accurately as a human’s would.
“For accuracy, we want to use that ability to read and then we want to apply human notions of, ‘what is truth, what is a trusted source?’” he said.
Some of this is a technical issue, but according to Williams, it is also a “philosophical problem about what we actually mean by ‘truth’” and how that concept can be matched with new AI technologies.
Despite these concerns, Timothy McKinnon, a program manager at the Intelligence Advanced Research Projects Activity, said the inferences made by LLMs should still be made available to users.
“What we need ultimately is a taxonomy or an understanding of all the different ways in which inferences can be good and bad,” he said.
Another challenge LLMs present is bias. McKinnon said that the assumption that it is possible to create unbiased models is flawed due to how LLMs function. Since these models draw from such large amounts of data, they often pick up information from sources that are biased themselves, he noted.
“I think that instead of trying to de-bias models, what we should be trying to do is trying to induce perspectives based on an interesting understanding of bias,” he said.
To do so, McKinnon recommended organizations “try to understand bias and use it to induce a set of diverse perspectives and play off of the analyst’s creativity.”
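One way to read that recommendation in code: rather than scrubbing bias out of a model, prompt it deliberately from several labeled perspectives and let the analyst reconcile the answers. The sketch below is a hypothetical illustration; complete() stands in for any LLM completion call, and the persona list is invented for the example.

```python
# Sketch of perspective induction: ask the same question through several
# named personas and return the divergent answers for human review.

PERSPECTIVES = [
    "an economist focused on supply-chain risk",
    "a regional security analyst",
    "a skeptic who challenges the majority view",
]


def complete(prompt: str) -> str:
    """Placeholder for a call to an LLM completion API."""
    raise NotImplementedError


def induce_perspectives(question: str) -> dict[str, str]:
    """Return one answer per persona for side-by-side comparison."""
    answers = {}
    for persona in PERSPECTIVES:
        prompt = (
            f"Answer as {persona}. Note the assumptions your perspective "
            f"brings to the question.\n\nQuestion: {question}"
        )
        answers[persona] = complete(prompt)
    return answers  # the analyst, not the model, reconciles the views
```

The output is deliberately plural: the design hands the divergence to the analyst rather than averaging it away.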
Medgyessy brought up data tagging, which he said offers an “opportunity for models to be actually really good because the data sets that it is reading are really good,” as a way to combat problems with accuracy.
“When you train something on [properly tagged data], it’s like sending it to school, not to the playground, to figure out what it’s like,” he added.
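A rough sketch of that “school, not playground” idea follows, under the assumption that each training record carries a tag dictionary and a human-review flag; the field names (tags, reviewed) and required tag set are invented for illustration, not a real schema.

```python
# Keep only fully tagged, human-reviewed records in the training corpus.

REQUIRED_TAGS = {"source", "classification", "date"}


def is_school_grade(record: dict) -> bool:
    """A record qualifies only if it is fully tagged and reviewed."""
    tags = record.get("tags", {})
    return REQUIRED_TAGS.issubset(tags) and record.get("reviewed", False)


def build_training_set(corpus: list[dict]) -> list[dict]:
    """Filter a raw corpus down to curated, well-tagged examples."""
    return [record for record in corpus if is_school_grade(record)]
```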
Looking ahead, Medgyessy said the biggest threat he sees in the LLM space is “how humans actually receive information and in their own decisions, being manipulated.”
The Potomac Officers Club’s next event, the 2024 5G Forum, will dive into how federal agencies are using modern network technologies to accomplish their missions. To learn more and register to attend the event, which will feature public and private sector 5G experts, click here.