“Onerous levels of oversight”
Lee Berthiaume from the Canadian Press wrote a fascinating article last week, based on an internal Department of National Defence report on IT support. The report looks at DND’s IT processes and systems, describing them as “not only inefficient and expensive to maintain, but also often out-of-date and poorly supported”. From my vantage point, this is a consistent problem across the federal government writ large.
The entire article is worth a read, but one section in particular stood out (emphasis mine):
The internal report also took aim at the military’s troubled procurement system, which was found to deliver IT equipment with inadequate or out-of-date technology. Poor planning was partly to blame but the report also blamed onerous levels of oversight.
While that oversight was described as the result of cost overruns and delays on past IT projects, the report said that it nonetheless created new problems in delivering modern equipment.
“The complex processes associated with the capital projects and procurement are very slow and cumbersome,” according to the report. “The process cannot keep up with the rate of change of technology.”
Those delays — and their potential impact on operations — were also cited as a major reason for why a patchwork of IT systems and programs now cover different parts of the Defence Department and military.
It’s really cool to see that internal teams at DND are putting together this kind of critical analysis, and really exciting that news organizations like the Canadian Press are shining a light on it. Public scrutiny of government IT, it’s my jam!
“Onerous levels of oversight” seems to capture, very accurately, one of the main root causes of IT failures in government. Which may seem counterintuitive, of course! The government has IT-related oversight processes across the board: for project management, procurement, security, and nearly every other aspect of IT projects. This extensive oversight often exists as a reaction to previous IT failures, so it’s understandable to think that oversight would prevent future failures. But that’s often not the case.
Large IT projects are inherently likely to fail, and when political and public service leaders ask for action in response – “we have to make sure this doesn’t happen again”, or something along those lines – introducing new oversight processes is a familiar and comforting response. It’s a response that avoids asking larger, more existential questions about how IT projects are designed and implemented in government, and what public service leadership capacity would be required to make that happen differently.
Even the Office of the Auditor General – my all-time favourite Officer of Parliament, although the Office of the Privacy Commissioner is a close second – has fallen into this trap in the past. The OAG has published some brilliant pieces on how the government is failing to effectively deliver services to Canadians, as one of the few Canadian government institutions making the case for how important this is. But its recommendations following investigations into major IT failures (Phoenix, for example) often ultimately call for more oversight, reinforcing the very problems that the DND report describes above.
Almost a year ago I wrote a post titled “Introducing agile to large organizations is a subtractive process, not an additive one”. Every government department and division nowadays tends to describe their work as agile, or mostly agile; the challenge is that in a large majority of cases departmental teams haven’t been able to stop doing the non-agile processes that their institutional procedures require them to do (dozens or hundreds of waterfall project management artefacts, lengthy architecture review or investment planning committee presentations, pre-canned technical requirements documents that prevent future iterations or changes in priority based on user feedback). All of these oversight processes and the activities that support them take time and energy away from doing higher-value, higher-impact work that public servants could otherwise be doing.
Thinking anecdotally about the IT- and service delivery-related policies and processes that I’ve seen in practice across the federal government, easily more than half of them have a net negative rather than net positive impact on public service teams trying to deliver modern services. As just one example of this, ESDC’s IT strategy team has a truly phenomenal post describing how their existing project management and oversight processes increase rather than reduce IT project risks. The same is true, in a variety of forms, in practically every other federal department.
Many of these processes end up being self-reinforcing or self-perpetuating, partly because oversight and compliance work is more comfortable (to public servants accustomed to traditional policy work), and partly because siloed departmental oversight roles create incentives to be as risk-averse as possible. For example, if a project never makes it out the door, an IT security overseer achieves their lowest-possible-risk outcome even as the project team is catastrophically delayed. If an architecture committee mandates a problematic standardized solution, it accomplishes its objectives even if that decision adds years of delays to other teams’ work.
Despite some promising TBS policy changes over the past few years, there’s still so much that could be changed – and the departmental procedures that most teams follow tend both to be significantly more complex and to lag several years behind TBS efforts to streamline policy requirements.
Ultimately, changing this all comes down to public service leadership that is able to recognize the value of having fewer oversight processes, of trusting their people more, and of doing less process-compliance work and more delivery work. (And on breaking things into smaller rather than larger projects!) It depends on leadership that can eliminate institutional barriers, and take on perceived risks related to non-compliance with processes that, if followed, would actually lead to worse outcomes. And it depends on having public servants with the technical and design expertise to know what good service delivery outcomes look like.