
GovCon Expert Cameron Chehreh: As Government Use of AI Surges, Effective Management of Data Streams Becomes Imperative

By Cameron Chehreh, Vice President and General Manager of Public Sector at Intel and 2023 Wash100 Award winner

One doesn’t need to look far to discover a tapestry of ways in which the federal government is successfully leveraging artificial intelligence. From the Air Force using AI to plan airlift operations to neuromorphic computing being used to optimize operations in space, the ways government agencies are implementing AI run the gamut from rudimentary to literally out of this world.

AI is one of five technology superpowers, alongside ubiquitous compute, pervasive connectivity, cloud-to-edge infrastructure and sensing, that combine to form the digital mission infrastructure propelling government and other organizations.

As AI’s presence continues to grow, agencies must not lose sight of what is actually required to power the technology’s ability to process and analyze vast amounts of information from different data sources. It’s not just about graphics processing units anymore, especially with the rise of edge computing. AI requires a range of technologies, from CPUs and GPUs to open source software.

Let’s take a closer look at data streams and the two strategies that will make or break AI’s ability to continuously learn and rapidly deliver accurate and trustworthy results.

If you’re interested in learning more about how top government and private sector leaders are conscientiously integrating and enabling AI in their strategies and processes, be sure to register for and attend ExecutiveBiz’s in-person Trusted AI and Autonomy Forum on Sept. 12. Dr. Kimberly Sablon, principal director of Trusted AI and Autonomy in the Office of the Under Secretary of Defense for Research and Engineering, will deliver the keynote address.

What is a data stream?

Much more than information moving from ingestion to output, a data stream is a complex process that involves multiple steps and touchpoints that data must go through before it becomes actionable intelligence. Steps may include data preparation to ensure the quality of the data; model creation and training by teams of data scientists; data fine-tuning based on machine learning and analysis; and more.
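
To make those steps concrete, below is a minimal sketch of the preparation and training stages of a data stream. It is written in Python with scikit-learn and synthetic data purely for illustration; the actual tooling and mission data will vary by agency.

```python
# Minimal sketch of two data-stream stages: data preparation and model
# creation/training. scikit-learn and the synthetic dataset are stand-ins
# for whatever stack and mission data an agency actually uses.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for ingested mission or sensor data.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preparation and training expressed as one pipeline so the same steps run
# identically during experimentation and deployment.
pipeline = Pipeline([
    ("prep", StandardScaler()),       # data preparation / quality step
    ("model", LogisticRegression()),  # model creation and training
])
pipeline.fit(X_train, y_train)
print(f"Held-out accuracy: {pipeline.score(X_test, y_test):.3f}")
```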

Even as these steps occur, more data is being fed into and out of the system to various collection points. For example, pre-trained models are being moved from on-premises data centers to the edge, where access to intelligence can be improved through a process called edge inferencing.
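
As a rough illustration of edge inferencing, the sketch below loads a pre-trained model and runs it locally on a CPU-only edge device. ONNX Runtime is used here only as an example, and the model file name and input shape are hypothetical placeholders.

```python
# Sketch of edge inferencing: a pre-trained model, exported to ONNX and copied
# to the edge device, scores new data at the point of collection.
import numpy as np
import onnxruntime as ort  # assumes onnxruntime is installed on the device

# Hypothetical model file produced by the training stage in the data center.
session = ort.InferenceSession("pretrained_model.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Stand-in for a sensor frame arriving at the edge (shape is illustrative).
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
scores = session.run(None, {input_name: frame})[0]
print("Predicted class:", int(scores.argmax()))
```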

While GPUs are important to this process—as they can dramatically increase the speed at which processing takes place—agencies must place equal weight on the software and processing equipment necessary to facilitate their data streams. To that end, agencies should focus on two key strategies.

1. Build a foundational platform for data streams and workloads

For AI to be successful, data scientists and operations managers must be equipped with a sandbox that allows them to build, run, train and ultimately deploy machine learning models. This requires a foundational platform that allows teams to create and test models, manage workload performance, and handle other key tasks that are critical to the data stream.

The platform should allow for both experimentation and control. For example, data scientists should be able to compare multiple models, collaborate with peers, rapidly run hundreds of experiments simultaneously and more. Concurrently, the platform should provide scientists and engineers with complete visibility into and control over all the mechanisms of their data streams—from data prep and modeling to the compute and storage resources necessary to run their workloads.
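
One hedged illustration of the experimentation side is an experiment tracker that records every run for later comparison. The sketch below uses MLflow only as an example; the experiment name, parameter and metric are assumptions, and the logged value is a placeholder for a measured result.

```python
# Sketch of experiment tracking so data scientists can compare many runs.
# MLflow is illustrative; any tracker with parameter/metric logging works.
import mlflow

mlflow.set_experiment("model-comparison")  # hypothetical experiment name

for learning_rate in (0.001, 0.01, 0.1):
    with mlflow.start_run():
        mlflow.log_param("learning_rate", learning_rate)
        # A real run would train a model here and log its measured score;
        # 0.0 is only a placeholder.
        mlflow.log_metric("val_accuracy", 0.0)
```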

2. Consider where data will be ingested and processed

Once models have been created and trained, they will continue to evolve based on new information that is collected, processed, and incorporated into the data stream. Those actions could take place in many ways and places. Extremely intensive and non-time-sensitive workloads may continue to be processed within an enterprise data center. Other less-intensive workloads that depend on near-real-time processing are more likely to be performed through an edge server.
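
A toy sketch of that placement decision might look like the following; the workload attributes and thresholds are illustrative assumptions, not a rule any agency prescribes.

```python
# Toy placement logic: route each workload to the enterprise data center or an
# edge server based on how time-sensitive and compute-intensive it is.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_budget_ms: float  # how quickly results are needed
    compute_intensity: float  # rough relative cost, 0.0 to 1.0

def placement(w: Workload) -> str:
    if w.latency_budget_ms < 100 and w.compute_intensity < 0.5:
        return "edge server"           # near-real-time, lighter workloads
    return "enterprise data center"    # heavy or non-time-sensitive workloads

for w in (Workload("license plate recognition", 50, 0.3),
          Workload("fleet-wide model retraining", 86_400_000, 0.9)):
    print(f"{w.name} -> {placement(w)}")
```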

There are different ways to approach edge inferencing, depending on the desired outcome. For instance, a law enforcement agency may wish to use computer vision at the edge for license plate or vehicle detection and recognition. Images can be processed at the point of collection and quickly analyzed, and the results can be sent back nearly immediately. Other mission-critical tasks performed at the edge include predictive maintenance on equipment, supply chain tracking, weather tracking, environmental monitoring and more.
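
As a sketch of that point-of-collection pattern, the code below captures a frame, runs a placeholder detector locally and sends back only the compact result rather than the raw video; the detector and the reporting step are hypothetical.

```python
# Sketch of processing at the point of collection: analyze locally, transmit
# only the small result. The detector is a placeholder for a real model.
import cv2  # OpenCV, assumed available on the edge device

def detect_plate(frame):
    """Placeholder for an on-device license plate recognition model."""
    return None  # a real model would return the recognized plate text

capture = cv2.VideoCapture(0)  # camera at the point of collection
ok, frame = capture.read()
if ok:
    plate = detect_plate(frame)
    if plate is not None:
        # Send only this compact record upstream, not the raw imagery.
        print({"event": "plate_detected", "plate": plate})
capture.release()
```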

All of these examples require pre-trained models that are fed into the data stream for fast processing and information transmission. None of this necessarily depends on GPUs; as discrete hardware, they may simply be too large to fit in a small edge device. Building a software infrastructure that allows for optimized power consumption, fast processing at the edge and easy integration with multiple application programming interfaces is a more effective option for now and in the future.

Agencies are no longer at the dawn of AI; they are rapidly ascending toward its apex, and no one knows how far the journey will go. But as data streams become more complex and more information is pushed out and processed at the edge, one thing is clear: for the journey to continue, agencies must dispense with the notion that GPUs are the beating heart of successful AI deployments. That distinction now belongs to flexible, scalable and open software that feeds and propagates data streams.
