U.S. Marine Corps veteran and two-time Wash100 Award winner Dana Barnes was sought out by the world of information technology rather than chasing it himself. He was recruited by IBM in the late 1990s, where he participated in the Business Development Professionals Program, which sparked his passion for solving business problems with technology and his aptitude for serving public sector customers.
Barnes eventually went to work for Microsoft, dedicating over a decade to the tech giant in roles such as director of Department of Defense sales and vice president of state and local government. At Microsoft he also gained an intimate familiarity with, and interest in, cybersecurity and digital transformation practices. After a stint as senior vice president at cybersecurity company Palo Alto Networks, Barnes came to his current position as president of government at Dataminr.
In this Executive Spotlight interview, Barnes discussed Dataminr's work organizing and delivering insights about data through artificial intelligence applications, the importance of educating about proper AI usage and the need for the government to adapt to the quickly changing tides of technological development.
What are some of the key barriers that remain in widespread federal AI adoption, and how do you think we can overcome them?
The biggest challenge is understanding. The government is trying to understand all of the various impacts of artificial intelligence, both how it operates and how it affects the citizens they serve. There are a lot of policies and laws on the books that were written pre-artificial intelligence. They may not align well with the new technology. In some cases, they may even restrict or limit the use of new technologies in general.
Policy can, whether knowingly or not, limit or restrict the use of technologies. As technology advances, and it's advancing at an incredibly rapid pace today, we need new laws and policies that can keep pace with the speed of the technological advances we're seeing.
It's not always AI. Sometimes, one of the challenges for the government is not the use of AI itself, but what data the AI is leveraging and what it does with it. We see that at Dataminr because we focus on publicly available information, and when you do, you have to be careful about how that data is going to be used. You have to make sure that all of that publicly available data is going to be used for good. It's this ethical and moral aspect that impacts the widespread adoption of AI. People want to understand it and make sure they're doing the right things.
The other barrier is procurement. Procurement in government can take forever. By the time you buy something and go through all the steps from a procurement perspective, that technology might actually be out of date. Something new could be available that you want to use, and you have to start that cycle all over again. This is something the federal government and state governments have been trying to tackle for years. How do they speed up the procurement cycle, but also make sure that they're still doing the right thing with those tax dollars? It's a big challenge.
The last barrier is users' lack of experience with the technologies they might adopt. One of the things the government is doing is trying to train people. That's how you overcome many of these things: through training, education and engagement. We see a lot of outreach from government leaders now into Silicon Valley and all types of companies. We're based in New York, and we just had some international senior leaders visit our headquarters because they wanted to learn more so they could better use AI. We're seeing the same thing happen in the U.S. federal government, where we've had leaders come spend the day with us and really learn. It's an educational process, and the more you know, the better you can tackle it.
Do you think that regulations within the government need to be relaxed for AI in particular in order for the government to keep pace with how quickly the technologies are evolving?
Not relaxed, but the laws and policies need to be adapted. They need to adapt to the new capabilities, but you still need to keep the spirit of all the policies designed to ensure that you don’t have any bad actors in your procurement system or in your operational flows. You want to protect citizens and the tax dollars and spend them the right way. The challenge becomes, how do you adapt to keep up with the pace of innovation?
I'll give you a prime example. When we build an aircraft carrier today, you can imagine that is a large program, and they have to do all of the documentation and specifications based upon the information they have today. Then they put out the request for proposals and go through a bidding process. That might take two years to get through. Sometimes three. For that aircraft carrier, the process might actually work for 80 to 90 percent of what you're building, and then you can go build it. But when you're talking about software and some of the modern technologies and services, those services change and improve so quickly that even the requirements that were documented in year one could be completely outdated by the time you get to year two.
You need the ability to have rapid procurement, but you still need the safety controls to make sure you do it the proper way. There's a feedback loop that I think government needs to maintain with industry so the dialogue is constant. It's something that the government has been working really hard at. I will tell you that when I first started working at IBM in the late 1990s and early 2000s, procurement processes were incredibly slow. We procure much faster today than we did then. And we just have to keep adapting.
Data is often coming from multiple sources that organizations need to collect, analyze and understand in order to use it. What are some of the key challenges and opportunities you're seeing emerge as organizations harness data and use it to drive decisions?
One of the biggest challenges is the volume of data. The government is inundated with data today. There are so many different data sources, and managing all of them becomes the proverbial needle in the haystack: the one thing that I really need is hidden somewhere within loads and loads of data. That's a challenge. Some organizations don't have the tools or the resources to get to it. Some agencies have smaller budgets and fewer resources than their larger peers, which may make it more challenging for them to manage vast amounts of data efficiently.
In order for the government to pull this off, they have to leverage AI. In doing so, you can have one person focus on the bits that are critically important while the AI sifts through the rest. You never take the human out of the loop, but if you can make that human's job better so that they can focus on what's relevant and important at the time, then they're going to be more effective. Without the tech, it's very difficult for us to process the amounts of data that are out there in these agencies.
As human beings, we can be distracted by a thousand things. We really want to focus on what's important and critical at the moment. Those AI tools and technologies can help us focus by surfacing the things that are critically important right now. Then we can spend that time doing the right analysis on the right things instead of sifting through what is essentially white noise.
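To make the triage Barnes describes concrete, here is a minimal sketch of how an AI layer might score a stream of incoming items and surface only the most relevant few to a human analyst. Everything in it, the item fields, the relevance scores and the thresholds, is a hypothetical illustration, not a description of Dataminr's actual pipeline.

```python
# A minimal, hypothetical sketch of AI-assisted triage: score incoming items
# for relevance and surface only the top few to a human analyst. The fields,
# scores and thresholds are illustrative, not Dataminr's actual system.
from dataclasses import dataclass


@dataclass
class Item:
    text: str
    source: str
    relevance: float  # 0.0-1.0, assumed to come from an upstream AI model


def triage(items: list[Item], top_k: int = 5, threshold: float = 0.7) -> list[Item]:
    """Filter out the 'white noise' and rank what remains for human review."""
    candidates = [i for i in items if i.relevance >= threshold]
    # The human stays in the loop: this only filters and orders; the
    # analyst decides what, if anything, to act on.
    return sorted(candidates, key=lambda i: i.relevance, reverse=True)[:top_k]


stream = [
    Item("Routine traffic report", "local-news", 0.21),
    Item("Eyewitness reports smoke near the depot", "social", 0.88),
    Item("Sensor flags pressure anomaly at the depot", "iot", 0.93),
]
for item in triage(stream):
    print(f"{item.relevance:.2f}  [{item.source}]  {item.text}")
```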
What kind of tools and technologies can organizations use to make their data more accessible and understandable?
Companies that handle massive amounts of data can leverage tools like the ones Dataminr offers. First and foremost, there's artificial intelligence, which, as we've talked about, we see organizations leveraging along with increased use of cloud infrastructure. I think that's a key thing because infrastructure is critical. There is AI, but you have to have all the infrastructure to support it as well, and the cloud provides a lot of that infrastructure and computing power so that you can do all the crunching you need to do. Organizations are really moving to the cloud so they can store and manage all of that data, which allows them to do things very quickly. It's very valuable from an AI perspective. There are a lot of companies now in the AI space. AI is very complex; it's not one thing, it spans tons of different use cases and capabilities.
AI can help with making data more accessible and understandable. Organizations should leverage the companies that are out there who are putting resources and research and development into how to manage all of this data and how to make it more accessible. Over time that actually reduces cost for the government.
For example, imagine if the government wanted to leverage all of our data sources, which will reach over 1 million by the end of the year. Many of those sources have a fee, which Dataminr pays to all of the data providers. I was in New York meeting with a customer, and they turned around and said, "hey, didn't we try to contract with that one vendor that provides the data?" They are saving valuable budget resources by leveraging our platform.
At Dataminr, we pay and then we spread the burden of that payment across our customer base, thereby reducing costs to our customers. If a government entity had to go and pay for each data source on its own, it wouldn't be able to afford to do that. Working with us or other companies in this space reduces that cost significantly. That's a great way to make data more accessible both to government and to your constituents as well.
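As a back-of-the-envelope illustration of that cost sharing, assume each source carries an average fee. The numbers below are made up purely to show the shape of the math; they are not Dataminr's source counts, fees or customer figures.

```python
# Hypothetical cost-sharing arithmetic; every number here is an assumption.
num_sources = 1_000_000   # source count in the ballpark cited above
avg_annual_fee = 50.0     # assumed average fee per source, in dollars
num_customers = 500       # assumed size of the shared customer base

solo_cost = num_sources * avg_annual_fee  # one agency licensing everything alone
shared_cost = solo_cost / num_customers   # the same burden spread across customers

print(f"Paying alone: ${solo_cost:,.0f}")    # Paying alone: $50,000,000
print(f"Cost-shared:  ${shared_cost:,.0f}")  # Cost-shared:  $100,000
```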
Remember, the government provides information to its constituents. Corporations today are really trying to figure out how to help the government operate and make that data more accessible. AI and machine learning are great tools to help them do that.
The other major benefit of leveraging data is to simplify. Yes, there is a surplus of data and you make it accessible, but when you make the data simple, or you simplify the processes, you can make it more accessible as well. You're creating bite-sized pieces of relevant information. If you can create structures that get people what they really want, they can focus on what's really important to them. That's what these technologies help to do: they make everything bite-sized and relevant. If you get too much information as an individual, it can be challenging to digest, and your ability to truly understand it may drop significantly.
We live in a world where omnipresent sensors track organizations, vehicles and systems all around the world in both the physical and virtual realms. How is your organization adapting and responding to the proliferation of sensors and the rapid expansion of the Internet of Things?
As a company that really focuses on alerting, IoT is a wonderful thing because everything is connected, which means there is plenty of information available; we can see what's going on more effectively and help our customers do things really well. Our platform is what you call multimodal. One of the things customers ask us about all the time is misinformation and disinformation: "If you guys are scrubbing all these different sources and something goes out there that's not true, how do you know it's not true? You might send an alert based on it, and we may inadvertently put resources in a place we don't need to put them when they could be doing something somewhere else."
Given that we are multimodal, we work to address these concerns. Multimodal means we leverage audio, video, text, imagery and more to cross-correlate. We have close to a million data sources, and we look for patterns across all of that expanse utilizing AI. So if you add more IoT sensors, that gives us the ability to cross-reference even more things across that multimodal domain to make sure that what we're alerting on is accurate and true.
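As a rough illustration of that cross-referencing, here is a minimal sketch of corroborating an event across modalities before alerting on it. The event IDs, modality labels and two-modality threshold are assumptions for the example, not how Dataminr's platform actually works.

```python
# Hypothetical sketch of multimodal corroboration: only alert on an event
# once independent modalities (text, imagery, audio, IoT, ...) agree.
from collections import defaultdict


def should_alert(signals: list[tuple[str, str]], min_modalities: int = 2) -> dict[str, bool]:
    """signals: (event_id, modality) pairs drawn from separate data sources.

    An event is alertable only when corroborated across at least
    `min_modalities` distinct modalities, which helps screen out a false
    report that exists in just one channel.
    """
    modalities_seen: dict[str, set[str]] = defaultdict(set)
    for event_id, modality in signals:
        modalities_seen[event_id].add(modality)
    return {event: len(mods) >= min_modalities for event, mods in modalities_seen.items()}


observed = [
    ("fire-at-pier-7", "text"),     # social media post
    ("fire-at-pier-7", "imagery"),  # photo from a second source
    ("fire-at-pier-7", "iot"),      # smoke-sensor reading
    ("hoax-story", "text"),         # a single, uncorroborated report
]
print(should_alert(observed))  # {'fire-at-pier-7': True, 'hoax-story': False}
```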
Adding more sources simply helps us get a better picture of what's happening, and we do that quite well at Dataminr. When the company was founded, there were a few incidents that were very poignant for our founders. One of them was the 2009 landing of US Airways Flight 1549 on the Hudson River by Captain Sully Sullenberger. It wasn't initially widely reported in the news, but what was interesting was that eyewitnesses were tweeting about what they were seeing in real time.
That was kind of a light bulb moment for our founders, who recognized that those reactions were firsthand information. People were right there seeing what was happening, and they were reporting on it. Meanwhile, how long did it take for the news and first responders to get there and give you the "official" picture of what was happening? That took a little bit longer. That's when we started to see that publicly available information, or PAI, is incredibly powerful and could do good things for people. We pride ourselves on that. All of our inputs are publicly available pieces of information, and we piece them together, like I said earlier, in that multimodal approach. As you see those things come together, it's great.