Most federal agencies work in complex environments—especially defense, law enforcement, border protection and agencies with similar missions—and sit on top of massive amounts of data. The data they own or can access could help unwind some of the complexity, but several factors complicate agency efforts to make the most of it.
Broadly speaking, federal agencies are not able to squeeze as much value from their data as they could. Federal law places constraints on how agencies can use data. In my experience at U.S. Customs and Border Protection, for example, we were restricted to using our own data sets and a small number of external sources. There is only so much you can do when you are forbidden to tap into larger pools of data.
Even within those limits, however, some agencies can still improve their ability to identify gaps in their data and to find ways to fill them. The real art lies in determining which techniques and tools to use to analyze data and turn it into actionable information. The answer varies from case to case. Finding the right solution takes skill: working through the types of data available and determining the desired outcome of the analysis.
For example, agencies involved in counterterrorism have become quite good at detecting terrorists traveling to or from the United States by analyzing traveler data and applying a rules engine to determine which travelers warrant more scrutiny. Over time, they have refined the process, improving accuracy considerably. They have the green light from Congress to build search engines that aggregate data from all travelers and run it through a set of rules to winnow out the few people who stand a good chance of posing a threat.
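In rough terms, a rules engine of this kind runs every record through a list of rules and keeps only those that match at least one. The sketch below is purely illustrative: the field names, thresholds and rules are invented assumptions for the example, not actual screening criteria.

```python
# Illustrative rules-engine pass over traveler records.
# Field names and rules are invented for this sketch, not real criteria.
from typing import Callable

Traveler = dict
Rule = Callable[[Traveler], bool]

# Each rule returns True when a record warrants additional scrutiny.
rules: list[Rule] = [
    lambda t: t["watchlist_hit"],              # direct watchlist match
    lambda t: t["cash_declared"] > 10_000,     # large cash declaration
    lambda t: t["visa_status"] == "expired",   # invalid travel status
]

def screen(travelers: list[Traveler]) -> list[Traveler]:
    """Winnow the full set down to records matching at least one rule."""
    return [t for t in travelers if any(rule(t) for rule in rules)]

travelers = [
    {"name": "A", "watchlist_hit": False, "cash_declared": 200,    "visa_status": "valid"},
    {"name": "B", "watchlist_hit": False, "cash_declared": 15_000, "visa_status": "valid"},
    {"name": "C", "watchlist_hit": True,  "cash_declared": 50,     "visa_status": "valid"},
]

flagged = screen(travelers)
```

Refining the process, as the agencies have done, amounts to adjusting the rule list and thresholds as feedback accumulates.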
This is the goal: the ability to cross-compare data sets. Agencies are trying to develop better and more reliable ways to search using the data sets they have. What none of these agencies can yet do, however, is bring in relevant external data to help narrow the search field.
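At its simplest, cross-comparing data sets means finding where records in one source overlap with another. A minimal sketch, using invented identifiers:

```python
# Hypothetical cross-comparison of an agency's own records against an
# external source; the identifiers here are invented for illustration.
internal = {"ID-1001", "ID-1002", "ID-1003"}
external = {"ID-1003", "ID-1004"}

# Records present in both sources warrant a closer look.
overlap = internal & external
```

Real systems do this at far larger scale with record linkage rather than exact ID matches, but the narrowing effect is the same.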
How decision support helps
We are not yet in an environment where data lets managers or operators make better decisions than ever before. The data environment is exploding. That is a problem everyone faces at some level, but government contends with more complexity and a more restricted toolset than the private sector.
In my view, a good early target is using decision support tools, drawing on the available data, to model an agency’s operating environment and extrapolate the effects of choosing one of a number of options. This then enables leaders to pick the best option based on the forecasted results.
When implemented well, such technology lets an agency model the outcomes of its possible choices, using system dynamics and other model types. Integrated simulations can then present the decision maker with probabilities of decision efficacy. This automates the process of working through a decision tree, which many decision makers still do in their heads. Further, it provides a single explicit model from which everyone involved can work, rather than each having their own internal model and trying to explain it to the others.
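The decision-tree automation described above can be reduced to a small calculation: give each option a set of possible outcomes with probabilities and payoffs, compute the expected value of each, and surface the best one. The options and numbers below are invented assumptions for the sketch.

```python
# Minimal sketch of automating a decision tree: each option maps to
# (probability, payoff) outcomes. Option names and figures are invented.
options = {
    "expand_screening": [(0.7, 120), (0.3, -40)],
    "status_quo":       [(0.9, 30),  (0.1, -10)],
    "new_data_source":  [(0.5, 200), (0.5, -60)],
}

def expected_value(outcomes):
    """Probability-weighted payoff across an option's outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

scores = {name: expected_value(o) for name, o in options.items()}
best = max(scores, key=scores.get)  # the option to recommend
```

A production decision-support tool would replace the static payoffs with simulation runs over a system-dynamics model, but the selection logic is the same: score each branch, then compare.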
This is simply the natural next step in leadership and business intelligence (BI). Only when this capability is integrated into decision support can we, in my mind, be said to have realized a return on our current and previous investments in big data and BI tools.
My experience is that agencies need a proactive approach, with an eye toward the kinds of decisions they want data to inform next year, or five or 10 years down the road. To do that, they need to identify where their knowledge gaps lie. However, the cost, complexity and data demands of that level of predictive analytics may keep many agencies focused on their most immediate concerns.