This story was originally published by Data-Smart City Solutions.
One of the great promises of open government this decade has been that it can serve as a catalyst for a new civic-centered “innovation ecosystem.” This ecosystem, replete with successful startups and data-driven advancements in government operations, could not only enhance transparency, but spur replication and generate economic value for cities. In 2013, McKinsey & Company concluded that open data, in all its forms, had the potential to contribute $3 trillion a year of value across the global economy. This and other reports set off a wave of excitement, encouraging The Economist to declare that “the open data movement has finally come of age.”
Yet in the nearly three years since, progress at the local level has been incremental. Cities like Chicago have achieved many operational data-driven advancements, yet the process of replication has been slower than many in 2013 may have hoped. Consider that for almost a year now, Chicago’s food inspection program has been guided by predictive analytics.
The original predictive algorithm, developed by the city’s Department of Innovation and Technology (DoIT), relied entirely on open data to identify which restaurants were most likely to be in critical violation of city health codes. Working collaboratively with the Chicago Department of Public Health (CDPH) and pro-bono data scientists at Allstate Insurance, DoIT piloted the algorithm in late 2014 to considerable success: inspectors were able to discover critical violations 25% faster, on average, than if they had used the traditional inspection method. By 2015, the algorithm was integrated into regular food inspection operations.
The algorithm is open source, of course, and free for anyone to use. Yet despite its efficacy and low cost for Chicago, there has not been a rush from other cities to replicate the program (at least not yet). Following food inspections coverage from this website, the program’s success (and its replication challenges, for that matter) was even covered by The Atlantic and PBS NewsHour, helping increase awareness.
So why the lack of replication? In short, it isn’t always easy for cities to do, even when the algorithm itself is free. Some governments don’t have the internal staff or capacity to take on such projects. Others may be wary that the effort it takes to adopt an analytics program, whether perceived or actual, outweighs the benefits. And if cities without internal capacity wish to seek partners or vendors, they either may not know which to reach out to, or may have their efforts complicated or undermined by complex procurement structures.
While these barriers are only part of the story of why the innovation ecosystem is perhaps not as robust as it could be, a good starting point for overcoming them is to enhance open government beyond raw information on a portal. It’s a point not lost on Tom Schenk, Chicago’s Chief Data Officer, who acknowledges the challenges of spurring replication. “The specifics do change between cities,” Schenk told The Atlantic. “To even pick up code and adapt it to your specific business practice still takes work.”
To address these challenges, Schenk has expanded Chicago’s open data program during his tenure to include a growing GitHub repository of algorithms and code, and even OpenGrid, an open-source, map-based platform for visualizing city data. OpenGrid in particular has been made available to other cities as a one-click download on Amazon Web Services’ marketplace, easing barriers to access in the process.
Yet when it comes to adopting something more advanced, like Chicago’s food inspection program, sometimes hard work and a nimble operation can get the job done. Since the food inspection program launched last year, one locality has taken advantage of the algorithm: Montgomery County, Maryland, located just outside of Washington, D.C.