Category Archives: public service sector renewal

Covid-19: Lessons from and for Government Digital Service Groups

This article was written by David Eaves, lecturer at the Harvard Kennedy School, Tom Loosemore, Partner at Public Digital, with Tommaso Cariati and Blanka Soulava, students at the Harvard Kennedy School. It first appeared in Apolitical.

Government digital services have proven critical to the pandemic response. As a result, the operational pace of launching new services has been intense: standing up new services to provide emergency funds, making critical information accessible online to help people stay healthy, reducing the strain on hospitals with web- and mobile-based self-assessment tools, and laying the groundwork for a gradual reopening of society by developing contact tracing solutions.

To share best practices and discuss emerging challenges, Public Digital and the Harvard Kennedy School co-hosted a gathering of over 60 individuals from digital teams in over 20 national and regional governments. On the agenda were two mini-case studies: an example of collaboration and code-sharing across Canadian governments, and the privacy and interoperability challenges involved in launching contact tracing apps.

We were cautious about convening teams involved in critical work, as we’re aware of how much depends on them. However, given the positive feedback and valuable lessons from this session, we plan additional meetings for the coming weeks. These will lead up to the annual Harvard / Public Digital virtual convening of global digital services taking place this June. (If you are part of a national or regional government digital service team and are interested in learning more, please contact us here.)

Case 1: sharing code for rapid response

In early March as the Covid-19 crisis gained steam across Canada, the province of Alberta’s non-emergency healthcare information service became overwhelmed with phone calls.

Some calls came from individuals with Covid-19-like symptoms or people who had been exposed to someone who had tested positive. But many calls also came from individuals who wanted to know whether it was prudent to go outside, who were anxious, or who had symptoms unrelated to Covid-19 but were unaware of which symptoms to look out for. As the call center became overwhelmed, its ability to help those most at risk suffered.

Enter the Innovation and Digital Solutions team from Alberta Health Services, led by Kass Rafih and Ammneh Azeim. In two days, they interviewed medical professionals, built a prototype self-assessment tool and conducted user testing. On the third day, exhausted but cautiously confident, they launched the province of Alberta’s Covid-19 Self-Assessment tool. With a ministerial announcement and a lucky dose of Twitter virality, the tool received 300,000 hits in the first 24 hours, rising to more than 3 million today. This is in a province with a total population of 4.3 million residents.

But the transformative story begins five days later, when the Ontario Digital Service called and asked if the team from Alberta would share their code. In a move filled with Canadian reasonableness, Alberta was happy to oblige and uploaded their code to GitHub.

Armed with Alberta’s code, the Ontario team also moved quickly, launching a localised version of the self-assessment tool in three days on Ontario.ca. Anticipating high demand, a few days later they migrated it to a new domain — Covid-19.ontario.ca — which has since evolved into a comprehensive information source for citizens, hosting advice on social distancing and explanations of how the virus works, with easy-to-understand answers.

The evolution of the Ontario Covid-19 portal information page, revised for ease of understanding and use

The Ontario team, led in part by Spencer Daniels, quickly iterated on the site, leveraging usage data and user feedback to almost entirely rewrite the government’s Covid-19 advice in simpler and accessible language. This helped reduce unwarranted calls to the province’s help lines.

Our feeling is that governments should share code more often, and this case is a wonderful example of the benefits that can create. We’ve mostly focused on how code sharing allowed Ontario to move more quickly, but posting the code publicly also resulted in helpful feedback from the developer community and wider adoption. Several large private sector organisations have repurposed the code to create similar applications for their employees, and numerous governments on our call expressed interest in localising it for their jurisdictions. Sharing can radically increase the impact of a public good.

The key lessons:

  • Sharing code allows good practices and tools to be adopted more widely — in days, not weeks
  • Leveraging existing code allows a government team to focus on user experience, deployment and scaling
  • The crisis is a good opportunity to overcome policy inertia around sharing or adopting open source solutions

Both digital services still have their code on GitHub (Ontario’s can be found here and Alberta’s here).
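To make the localisation step concrete, here is a minimal Python sketch of the general pattern: a shared base configuration, with only jurisdiction-specific details overridden. All names, numbers and URLs below are invented for illustration and are not taken from the actual Alberta or Ontario repositories.

```python
# Hypothetical sketch: localising a shared self-assessment tool by
# overriding only jurisdiction-specific content. Field names, phone
# numbers and URLs are illustrative, not from the real code.

BASE_CONFIG = {
    "title": "COVID-19 Self-Assessment",
    "helpline": "000-000-0000",            # placeholder local number
    "result_url": "https://example.gov/results",
    "questions": [
        "Do you have a fever?",
        "Do you have a new cough or shortness of breath?",
    ],
}

def localise(base: dict, overrides: dict) -> dict:
    """Return a copy of the shared config with jurisdiction overrides applied."""
    config = dict(base)
    config.update(overrides)
    return config

# A second jurisdiction reuses the question logic and swaps local details.
ontario = localise(BASE_CONFIG, {
    "helpline": "111-111-1111",            # illustrative number
    "result_url": "https://example-province.gov/results",
})

assert ontario["questions"] == BASE_CONFIG["questions"]  # shared part reused
```

The design point is that the hard, clinically reviewed part (the questions) stays untouched, while each government swaps in its own helplines and URLs, which is why the receiving team can focus on user experience, deployment and scaling.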

The amazing outcome of this case also reflects the usual recommendations for digital services, which both Alberta and Ontario executed so well: user-centered design, agile working and thinking, cross-functional teams, embedding security and privacy by design, and using simple language.

Case 2: Contact tracing and data interoperability

Many countries hit hard by the coronavirus are arriving at the end of the beginning.

The original surge of patients is beginning to wane. And then begins a complicated next phase. A growing number of politicians will be turning to digital teams (or vendors) hoping that contact tracing apps will help re-open societies sooner. Government digital teams need to understand the key issues to ensure these apps are deployed in ways that are effective, or to push back against decision makers if these apps will compromise citizens’ trust and safety.

To explore the challenges contact tracing apps might create, the team from Safe Paths, an open source, privacy-by-design contact tracing app built by an MIT-led team of epidemiologists, engineers, data scientists and researchers, shared some early lessons. On our call, the Safe Paths team outlined the two core concerns behind their work on the app: privacy and interoperability between applications.

The first challenge is the issue of data interoperability. For large countries like the United States, or regions like Europe where borders are porous, contact tracing will be difficult if data cannot be scaled or made interoperable. Presently, many governments are exploring developing their own contact tracing apps. If each has a unique approach to collecting and structuring data it will be difficult to do contact tracing effectively, particularly as societies re-open.

Apple and Google’s recent announcement of a common Bluetooth standard to enable interoperability may give governments a false sense of security that this issue will resolve itself. It will not. While helpful, the standard does not solve the problem of data portability, that is, letting a user choose to share their data with multiple organisations. Governments will need to come together and use their collective weight to quickly drive vendors and their internal development teams towards a smaller set of standards.
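To illustrate why a shared record format matters, here is a hypothetical Python sketch of a common de-identified exposure record that two different apps could exchange. The schema is invented for illustration; it does not describe the Apple/Google specification or any real standard.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical common schema for a de-identified exposure record.
# Field names are illustrative only, not any government's standard.
@dataclass
class ExposureRecord:
    rolling_id: str      # random, rotating identifier (no name, no phone)
    timestamp: int       # Unix time of the contact event
    duration_min: int    # length of proximity, in minutes

def export_records(records: list) -> str:
    """Serialise records so a user could hand them to another authority."""
    return json.dumps([asdict(r) for r in records])

def import_records(payload: str) -> list:
    """Any app that understands the schema can read another app's export."""
    return [ExposureRecord(**item) for item in json.loads(payload)]

# Data exported by one app is readable by another, enabling portability.
shared = export_records([ExposureRecord("a3f9", 1587600000, 12)])
assert import_records(shared)[0].duration_min == 12
```

The point is not the specific fields but the agreement: if every jurisdiction invents its own structure, no such round trip is possible and contact tracing stops at each border.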

The second issue is privacy. Poor choices around privacy and data ownership — enabled by the crisis of a pandemic — will have unintended consequences both in the short and long term. In the short term, if the most vulnerable users, such as migrants, do not trust a contact app they will not use it or worse, attempt to fool it, degrading data collection and undermining the health goals. Over the long term, decisions made today could normalise privacy standards that run counter to the values and norms of free liberal societies, undermining freedoms and the public’s long term trust in government. This is already of growing concern to civil liberties groups.

One way Safe Paths has tried to address the privacy issue is by storing users’ data on their devices and giving the user control over how and when data is shared, in a de-identified manner. There are significant design and policy challenges in contact tracing apps, and this discussion is hardly exhaustive. But these conversations need to start now, as decisions about how to implement these tools are already being made.
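As a rough sketch of the general on-device, de-identified pattern (our illustration of the technique, not Safe Paths’ actual implementation): raw data never leaves the phone, and only salted, one-way hashes are shared, and only when the user opts in.

```python
import hashlib
import secrets

# Illustrative sketch of privacy-preserving sharing: raw data stays on
# the device; only salted, one-way hashes leave it, and only with user
# consent. This is a generic pattern, not Safe Paths' actual code.
class DeviceStore:
    def __init__(self):
        self._salt = secrets.token_hex(16)   # secret salt, stays on device
        self._visits = []                    # raw (lat, lon, time) tuples

    def record(self, lat: float, lon: float, t: int) -> None:
        self._visits.append((lat, lon, t))

    def share_deidentified(self) -> list:
        """Called only with user consent; exposes hashes, never raw data."""
        return [
            hashlib.sha256(f"{self._salt}:{lat}:{lon}:{t}".encode()).hexdigest()
            for lat, lon, t in self._visits
        ]

store = DeviceStore()
store.record(53.54, -113.49, 1587600000)
tokens = store.share_deidentified()
assert "53.54" not in tokens[0]  # raw coordinates are not recoverable
```

A real system would need more than this (rotating salts, agreed matching protocols so that authorities can compare hashes), but the sketch shows the basic trade the Safe Paths team described: usefulness for health authorities without handing over identifiable location histories.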

Finally, the Safe Paths team noted that governments have a responsibility to ensure access to contact tracing infrastructure. For example, in a partner Caribbean country they struck agreements to zero-rate the app — i.e. make the mobile data needed to download and run it free of charge — to minimise any potential cost to users. Without such agreements, some of the most vulnerable won’t have access to these tools.

Conclusions and takeaways

This virtual conversation was the first in a series that will be held between now and the annual June Harvard / Public Digital convening of global digital services. We’ll be hosting more in the coming weeks and months.

Takeaways:

  • The importance of collaboration and sharing code within and between countries. This was exemplified by code sharing between the Canadian provinces, and by the hope that this can become an international effort.
  • The importance of maintaining a user-centered focus despite the time pressure and a fast-changing environment that requires quick implementation and iteration. Another resource here is California’s recently published crisis digital standard.
  • Privacy and security must be central to solutions that help countries deal with Covid-19. The technology exists to make private and secure self-assessment forms and contact tracing apps. The challenge is setting those standards early and driving global adoption of them.
  • Interoperability of contact tracing solutions will be pivotal to tackling a pandemic that doesn’t respect borders, cultures, or nationalities. As the Safe Paths team highlighted, this is a global standard-setting challenge.

Harvard and Public Digital are planning to host another event in this series on the digital response to Covid-19; sign up here if you’d like to participate in future gatherings! — David Eaves and Tom Loosemore with Tommaso Cariati and Blanka Soulava

This piece was originally published on Apolitical.

Government, Digital Services & IT Procurement Reform

Next semester I’ll be teaching a course on why healthcare.gov initially turned into a disaster. Why? Because sadly, the failure of healthcare.gov was not special. Conservative estimates suggest over half of all IT projects are not completed on time or on budget. Others suggest the numbers are higher still. What did make healthcare.gov special was that its failure was so spectacularly public. As a result, unlike so many IT failures that stay hidden from public view, healthcare.gov is, somewhat paradoxically, safe to talk about. This makes it a wonderful case study to explore why so many large IT projects end in disaster.

Thinking about the course has me diving deeper into government IT procurement. Many of my colleagues in the civic tech space are convinced that procurement reform is critical to improving government IT services and fostering a better eco-system of IT vendors. I agree there is a great deal of good work that could be done to make procurement simpler and more effective. However, procurement is not a silver bullet. Indeed, I suspect that procurement reform, on its own, will have a limited impact, because it is not the central problem.

What procurement thinks it is trying to solve

There are two broad goals of government IT procurement policies that I think often get conflated.

One set of rules tries to constrain and/or force people to make good technical decisions. Is the solution accessible? Is it secure? Does it protect privacy? etc…

The second goal is to constrain and/or force people to make good process decisions. This is about ensuring fairness and accountability, and broadly about preventing corruption. Did you get three bids? Have you laid out the specifications? Are you related to any of the vendors? Have the vendors donated money to any politicians? etc…

Both sets of rules have unintended consequences that make procurement slow and more difficult (although for many governments this can be a feature, not a bug, and making spending more difficult can be a net positive at a system level even if frustrating at the unit level).

The underlying assumption

Unpack these two goals — and particularly the first one — and you discover two underlying assumptions:

  1. IT implementations are treated like technical not adaptive challenges

The first assumption is that IT can be commoditized. While some IT purchases may be similar to buying a pencil, most are not. And if you are talking about an IT purchase that is custom-built, or whose cost is north of $1 million, this is almost definitely not the case. Large IT implementations are complex, with significant unknowns. It is impossible (despite organizations’ efforts) to spec out in advance how all the problems will be solved. Moreover, this work takes place in dynamic environments, where assumptions about the technology the system will run on, how users will interface with it, and even what the product will need to do keep shifting. We treat IT projects like technical challenges — as if we know exactly what must be done and it can all be mapped out in advance — but they are almost always adaptive problems, where we don’t even know what all the problems are at the beginning of the process.

  2. Effective process can force good decision making (or at least constrain bad ones)

Think of all the questions an enterprise — government or for profit — needs to assess: Will a solution work? Does it have a good user experience for our users? Can that UX evolve? Can the vendor adapt to unknown problems? Can they integrate with our current environment? To our future environment? Plus hundreds of other issues, all of which require nuance and knowledge.

But any set of procurement rules is about standardizing process — so that the process, not the outcome, can be evaluated. This makes it harder to bring that nuanced decision-making and knowledge to bear, because nuance, by definition, is hard to standardize. I fear that implicit in procurement reform is the belief that a good set of policies can design a process that, regardless of who runs it, will prevent disaster and possibly even ensure an optimal outcome. If we assume procurement problems are technical problems for which the appropriate solution must merely be identified, then with the right “code” the machinery of procurement — regardless of who is manning it — should be able to select the optimal solution.

Both assumptions are, of course, patently ridiculous.

This is why I’m not confident that tinkering with the rules of procurement in the IT sector will generate much change. I certainly don’t think it will foster a new ecosystem of service providers whose solutions don’t fail 50% of the time. All the tinkering in the world won’t change the underlying issue: more than rules, oversight or the size of the ecosystem, it is the capacity of the people running the procurement to evaluate technology and vendors that matters.

I’ve seen terrible, terrible IT choices made by organizations and inspired decisions (like California’s Child Welfare Services RFP on which I’m writing a case study) be produced by virtually identical procurement rules and policies. The current rule set can allow a determined actor in government to make good choices. Could we do a lot better? Absolutely. But it is not the defining barrier.

This is, again, why USDS and similar digital service groups that try to attract talent that has worked on mass market consumer technology matter. Recruiting top technology talent into government is the single best strategy for ensuring better procurement. Deeply experienced people working with an okay rule set will do better than less experienced people working with that same rule set.

And vendors generally agree. As a vendor, what you want more than anything is a capable, intelligent and knowledgeable buyer, not a set of rules that must be adhered to no matter the context.

And, to be clear, this is how the private sector does it. Ask Peter Thiel, Mark Zuckerberg, Elon Musk or Paul Graham (especially Paul Graham). None of them would outsource the technology choices for their startup or $1B unicorn to a group of procurement officers equipped with even the most perfect set of procurement rules. Quite the contrary: they spend millions in stock options and cash hiring amazing talent to make highly informed decisions that meet needs in a fluid and evolving environment.

So we should do Procurement Reform?

We should. We definitely should. But let’s recognize the limits. We should have rules that prevent corruption, nepotism and anti-competitive practices. But unshackling people from rules may equip them to make as many bad decisions as good ones. Let’s make rules that help people buy software without making the process a burden. Our defense against poor outcomes, though, shouldn’t be more rules; it should be an investment in pulling the brightest and best minds into government.

This is a problem digital service groups are trying to solve: creating a pipeline of talent that has worked with the latest tech and delivered solutions serving hundreds of millions of users, flowing through government. So let’s ensure we invest in both. Procurement reform without the right people won’t get us there.

The Future of USDS: Trump, civic tech and the lesson of GDS

Across Washington, the country, and the world, the assumptions people have about various programs, policies and roles have been radically altered in the last 12 hours with the victory of President-Elect Trump. Many of my students and colleagues have asked me — what does this mean for the future of United States Digital Service and 18F? What should it mean?

This is not the most important question facing the administration. But for those of us in this space the question matters. Intensely. And we need a response. USDS and 18F improve how Americans interact with their government while saving significant amounts of money. Democrats and Republicans may disagree over the size of government, but there is often less disagreement over whether a service should be effectively and efficiently delivered. Few in either party believe a veteran should confront a maze of forms or confusing webpages to receive a service. And, the fact is, massive IT failures do not have a party preference. They have and will continue to burn any government without a clear approach of how to address them.

So what will happen now?

The first risk is that the progress made to date will get blown up; that anything attributed to the previous administration will be deemed bad and have to go. I’ve spent much of the morning reaching out to Republican colleagues, and encouraging those I know in the community to do the same. What I’ve heard back is that the most plausible scenario is that nothing happens. Tech policy sits pretty low on the priority list. The status quo will likely hold for a year while the administration figures out what is next.

That said, if you are a Republican who cares about technology and government, please reach out. I can connect you with Jen Pahlka who would be happy to share her understanding of the current challenges and how the administration can use USDS to ensure this important work continues. There are real challenges here that could save billions and ensure Americans everywhere are better served.

The second risk is implosion. Uncertainty about what will happen to USDS and 18F could lead to a loss of the extraordinary talent that make the organizations so important.

Each employee must decide for themselves what they will do next. Those I’ve had the privilege to engage with at USDS, at 18F or as Presidential Innovation Fellows have often displayed a sense of duty and service. The divisive nature of the campaign has created real wounds for some people, and I don’t want to pretend otherwise. But the need to push governments to focus on users, like Dominic, is undiminished. Across Washington, there are public servants who did not vote Republican who are returning to their jobs to serve the best they can. The current administration has been effective in issuing a call to arms to civic technologists to help government. Now, having created a critical mass of civic technologists in DC, can it hold together to continue to have the influence and grow the capabilities a 21st century government needs? Maintaining this critical mass is a test that any effort to institutionalize change must clear.

If you work for USDS or 18F, there are maps. The Government Digital Service was created by a partnership between a Conservative Minister (Francis Maude) and a group of liberal technologists (Mike Bracken et al). I doubt either party was naturally comfortable with the other at first, but an alliance was made and both its strengths and its flaws could serve as one template for a way to move forward.

My own sense is the work of USDS and 18F must be bigger than any one administration or party. For some this is a painful conversation, for others it is an easy conclusion. I understand both perspectives.

But in either case, there must be a dialogue around this work. So please, both sides. Find a way to talk. There is certainly a need for that in the country.

If there is anything we can do at Harvard Kennedy School to convene actors on either side of the aisle to help find a path forward for this work, please let me know. This work is important, and I hope it will not be lost.

Addendum: Just saw Noah Kunin’s piece on why he is staying. Again, everyone has to make their own choice, but I believe in the conversation.

The Empire Strikes Back: How the death of GDS puts all government innovators at risk

The UK Government Digital Service (GDS) is dead. I’m sure it will continue to exist in some form, but from what I’ve read it appears to have been gutted of its culture, power and mandate. As an innovator and a force for pulling the UK government into the 21st century, it’s over.

The UK government comms people, and the new head of GDS himself, will say otherwise, but read between the lines of the stories, as well as what former GDS people are saying, and it’s clear: GDS is dead.

These narratives are also aligned as to why. GDS was killed by what the Brits call “Whitehall,” shorthand for the inner and elite circles of the UK public service somewhat cruelly depicted in “Yes, Minister.” Their motive? GDS represented a centralization of IT control, and the departments — for whom GDS frequently found savings of tens of millions of pounds and provided effective online services — didn’t like seeing their autonomy and budget sucked away by a group of IT geeks.

Losing GDS is a tragedy for the UK. But the downside isn’t limited to just the UK. It represents a blow to efforts to rethink and digitize governments around the world.

Why?

Because the debates over how governments work are not domestic (they haven’t been for a long time). GDS inspired innovators and politicians everywhere who were hungry for models of how governments could function in a digital age. Copycats sprang up around the world, such as the Digital Transformation Office in Australia and the United States Digital Service in the US at the national level, as well as others at the regional and local level. (For example, the Province of Ontario, which has a fantastic digital team, is currently hiring a Chief Digital Officer to lead a similar type of org.)

While working independently, these organizations share a common sense of mission and so trade best practices and lessons. More importantly, a few years ago they benefited from a massive asymmetry in information. They understood just how disruptive the innovation they were seeking to implement could be. And not from a technology perspective, but to the organizational structure and distribution of power within the public service.

As it became clearer to the rest of the public service in the UK just how far-reaching GDS’s vision was, and how much thinking digitally challenges the way senior public servants in places like Whitehall think, the asymmetry in knowledge and capacity didn’t diminish, but the asymmetry in awareness of its implications did. And so the center fought back. And it has far more power than any GDS might.

But it isn’t just the UK. I suspect the existence of GDS and its global network has activated a global counter-movement among senior mandarins. Public servants are connected across international borders by shared alma maters, common international organizations and professional groups, cross-border projects and more. Don’t think for a second that that global network hasn’t been alerted (particularly among Treasury Board officials) to what some see as a new threat.

To be clear, this isn’t a “GDS good, Whitehall bad” piece. For anything to fail there is probably some shared responsibility, and assigning blame doesn’t really matter. What I care about is preserving what I and many others believe are important efforts to modernize governments, so that the effectiveness of government, and trust in its institutions, are maintained.

What does matter is this: if you work for a digital service outside the UK, it is time to double down on political cover and/or reach out to the senior public servants around you and build some alliances. Because your Whitehall equivalent definitely has relationships with the UK’s Whitehall, they are asking what it thinks and knows about GDS, and the story they hear probably isn’t a good one.

The empire of traditional siloed bureaucracy is fighting back. You probably won’t beat it. So how are you going to engage it?

Canada’s Draft Open Government Plan — The Promise and Problems Reviewed

Backdrop

On Friday the Canadian Government released its draft national action plan. Although not mentioned overtly in the document, these plans are mandated by the Open Government Partnership (OGP), under which member countries must draft a National Action Plan every two years laying out tangible goals.

I’ve both written reviews about these plans before and offered suggestions about what should be in them. So this is not my first rodeo, nor is it for the people drafting them.

Purpose of this piece

In the very niche world of open government there are basically two types of people. Those who know about the OGP and Canada’s participation (hello 0.01%!), and those who don’t (hello overwhelming majority of people — excited you are reading this).

If you are a journalist, parliamentarian, hill staffer, academic, public servant, consultant, vendor, or everyday interested citizen already following this topic, here are thoughts and ideas to help shape your criticisms and/or focus your support and advice to government. If you are new to this world, this post can provide context about the work the Canadian Government is doing around transparency to help facilitate your entrance into this world. I’ll be succinct — as the plan is long, and your time is finite.

That said, if you want more detail on any of these thoughts, please email me — happy to share more.

The Good

First off, there is lots of good in the plan. The level of ambition is quite high, a number of notable past problems have been engaged, and the goals are tangible.

While there are worries about wording, a ton of things have nonetheless been prioritized in the document that both I and many people in the community have sought to include in past plans. Please note that “prioritized” is distinct from “this is the right approach/answer.” Among these are:

  • Opening up the Access to Information Act so we can update it for the 21st century. (Commitment 1)
  • Providing stronger guarantees that Government scientists — and the content they produce — are made available to the public, including access for reporters (Commitment 14)
  • Finding ways to bake transparency and openness more firmly into the culture and processes of the public service (Commitment 6 and Commitment 7)
  • Ensuring better access to budget and fiscal data (Commitment 9, Commitment 10 and Commitment 11)
  • Coordinating different levels of government around common data standards to enable Canadians to better compare information across jurisdictions (Commitment 16)
  • In addition, the publishing of Mandate Letters (something that was part of the Ontario Open by Default report I helped co-author) is a great step. If nothing else, it helps public servants understand how to better steer their work. And the establishment of Cabinet Committee on Open Government is worth watching.

Lots of people, including myself, will find things to nitpick in the above. And it is always nice to remember:

a) It is great to have a plan we can hold the government accountable to; it is better than the alternative of no plan

b) I don’t envy the people working on this plan. There is a great deal to do, and not a lot of time. We should find ways to be constructive, even when being critical

Three Big Ideas the Plan Gets Right

Encouragingly, there are three ideas that run across several commitments in the plan that feel thematically right.

Changing Norms and Rules

For many of the commitments, the plan seeks to not simply get tactical wins but find ways to bake changes into the fabric of how things get done. Unlike previous plans, one reads a real intent to shift culture and make changes that are permanent and sustainable.

Executing on this is exceedingly difficult. Changing culture is both hard to achieve and measure. And implementing reforms that are difficult or impossible to reverse is no cake walk either, but the document suggests the intent is there. I hope we can all find ways to support that.

User Centric

While I’m not a fan of all the metrics of success, there is a clear focus on making life easier for citizens and users. Many of the goals have an underlying interest of creating simplicity for users (e.g. a single place to find “x” or “y”). This matters. An effective government is one that meets the needs of its citizens. Figuring out how to make things accessible and desirable to use, particularly with information technology, has not been a strength of governments in the past. This emphasis is encouraging.

There is also intriguing talk of a “Client-First” service strategy… More on that below.

Data Standards

There is lots of focus on data standards. Data standards matter because it is hard to use data — particularly across government, or from different governments and organizations — if it follows different standards. Imagine if every airline used a different standard for its tickets: booking a trip involving more than one airline would be impossible, as their computers wouldn’t be able to share information with one another, or with you. That challenging scenario is what government looks like today. Finding common standards can help make government more legible.
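A small Python sketch shows what harmonising to a common standard looks like in practice: two jurisdictions publish the same information under different field names, and a shared mapping makes their records comparable. All field names below are invented for illustration.

```python
# Hypothetical sketch: two governments publish contract data with
# different field names; a shared standard makes them comparable.
# All field names here are invented for illustration.

FIELD_MAPS = {
    "province_a": {"vendor_name": "supplier", "amount_cad": "value"},
    "province_b": {"contractor": "supplier", "total": "value"},
}

def to_standard(record: dict, source: str) -> dict:
    """Rename source-specific fields to the shared standard's names."""
    mapping = FIELD_MAPS[source]
    return {mapping.get(k, k): v for k, v in record.items()}

a = to_standard({"vendor_name": "Acme", "amount_cad": 1000}, "province_a")
b = to_standard({"contractor": "Beta", "total": 2500}, "province_b")

# Once harmonised, records from different jurisdictions can be compared.
assert set(a) == set(b) == {"supplier", "value"}
```

The hard part in practice is not the code but the agreement: every publisher has to accept the shared field names, which is exactly what initiatives like the Open Contracting Data Standard try to negotiate.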

So seeing efforts like piloting the Open Contracting Data Standard in Commitment 9, getting provincial and local governments to harmonize data with the federal government in Commitment 16, and “Support Openness and Transparency Initiatives around the World” in Commitment 18 is nice…

… and it also makes it glaring when it is not there. Commitment 17 — Implement the Extractives Sector Transparency Measures Act — is still silent about implementing the Extractive Industries Transparency Initiative standard and so feels inconsistent with the other commitments. More on this below.

The Big Concerns

While there are some good themes, there are also some concerning ones. Three in particular stand out:

Right Goal, Wrong Strategy

At the risk of alienating some colleagues, I’m worried about the number of goals that are about transparency for transparency’s sake. While I’m broadly in favour of such goals… they often risk leading nowhere.

I’d much rather see the government identify specific problems on which to focus its open data and scientific-sharing resources. When no focus is defined and the goal is just to “make things transparent,” what tends to get made transparent is what’s easy, not what’s important.

So several commitments, like numbers 3, 13, 14, 15, essentially say “we are going to put more data sets or more information on our portals.” I’m not opposed to this… but I’m not sure it will build support and alignment because doing that won’t be shaped to help people solve today’s problems.

A worse recommendation in this vein is “establish a national network of open data users within industry to collaborate on the development of standards and practices in support of data commercialization.” There is nothing that will hold these people together. People don’t come together to create open data standards, they come together to solve a problem. So don’t congregate open data users — they have nothing in common — this is like congregating “readers.” They may all read, but their expertise will span a wide variety of interests. Bring people together around a problem and then get focused on the data that will help them solve it.

To get really tangible, on Friday the Prime Minister had a round table with local housing experts in Vancouver. One outcome of that meeting might have been the Prime Minister stating, “this is a big priority, I’m going to task someone with finding all the data the federal government has that is relevant to this area so that all involved have the most up to date and relevant information we can provide to improve all our analyses.”

Now maybe the feds have no interesting data on this topic. Maybe they do. Maybe this is one of the PMO’s top six priorities, maybe it isn’t. Regardless, the government should pick 3–8 policy areas they care deeply about and focus sharing data and information on those. Not to the exclusion of others, but to provide some focus. Both to public servants internally, so they can focus their efforts, and to the public. That way experts, the public and anyone else can, if they are able, grab the data to help contribute to the public discourse on the topic.

There is one place where the plan comes close to taking this approach: Commitment 22, Engage Canadians to Improve Key Canada Revenue Agency Services. It talks a lot about public consultations on charitable giving, tax data, and improving access to benefits. This section identifies some relatively specific problems the government wants to solve. If the government said it was going to disclose all the data it could around these problems and work with stakeholders to help develop new research and analysis… then they would have nailed it.

Approaches like those suggested above might result in new and innovative policy solutions from both traditional and non-traditional sources. But equally important, such an approach to a “transparency” effort will have more weight from Ministers and the PMO behind it, rather than just the OGP plan. It might also show governments how transparency — while always a source of challenges — is also a tool that can help advance their agenda by promoting conversations and providing feedback. Above all it would create some models, initially well supported politically, that could then be scaled to other areas.

Funding

I’m not sure if I read this correctly, but the funding — $11.5M in additional funds over 5 years (or is that the total funding?) — doesn’t feel like a lot to work with given all the commitments. I suspect many of the commitments have their own funding in various departments… but that makes it hard to assess how much support there is for everything in the plan.

Architecture

This is pretty nerdy… but there are several points in the plan where it talks about “a single online window” or “single, common search tool.” This is a real grey area. There are times when a single access point is truly transformative… it creates a user experience that is intuitive and easy. And then there are times when a single online window is a Yahoo! portal when what you really want is to just go to google and do your search.

Underlying this work is the assumption that the main barrier to access is that things can’t be found. So far, however, that is just an assumption, and I’d prefer the government test it before making it a plan. Why? Because a LOT of resources can be expended creating “single online windows.”

I mean, if all these recommendations are just about creating common schemas that allow multiple data sources to be accessed by a single search tool then… good? (I mean please provide the schema and API as well so others can create their own search engines). But if this involves merging databases and doing lots of backend work… ugh, we are in for a heap of pain. And if we are in for a heap of pain… it better be because we are solving a real need, not an imaginary one.
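As a rough sketch of the “common schemas plus a single search tool” option I’d prefer (all of the names and records below are hypothetical, not anything from the plan): each department keeps its own backend but exposes the same small search interface, so one search tool — or anyone else’s — can query them all without any database merging:

```python
# Federated search: each department keeps its own data store but exposes
# the same minimal search interface over a shared record schema
# ({"title": ..., "description": ...}). A single search tool then just
# fans the query out -- no backend consolidation required.
class DatasetSource:
    def __init__(self, name, records):
        self.name = name
        self.records = records  # each record follows the shared schema

    def search(self, term):
        """Return records whose title or description mentions the term."""
        term = term.lower()
        return [r for r in self.records
                if term in r["title"].lower() or term in r["description"].lower()]

def federated_search(sources, term):
    """Query every source through the shared interface and pool the results."""
    results = []
    for src in sources:
        for r in src.search(term):
            results.append({"source": src.name, **r})
    return results

environment = DatasetSource("Environment", [
    {"title": "Air quality by city", "description": "Hourly air quality readings"},
])
transport = DatasetSource("Transport", [
    {"title": "Transit ridership", "description": "Monthly ridership by agency"},
])

hits = federated_search([environment, transport], "transit")
```

And because the interface is public, anyone could write their own `federated_search` against the same sources — which is exactly the “provide the schema and API so others can build their own search engines” point.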

The Bad

There are a few things that I think have many people nervous.

Access to Information Act Review

While there is lots of good in that section, there is also some troubling language. Such as:

  • A lot of people will be worried about the $5 filing fee for FOIA requests. If you are requesting a number of documents, this could represent a real barrier. I also think it creates the wrong incentives. Governments should be procuring systems and designing processes that make sharing documents frictionless — a fee implies that costs are okay and that there should be friction in performing this task.
  • The fact that government institutions can determine a FOIA request is “frivolous” or “vexatious” and can thus deny it. I am very worried about the language here.
  • I’m definitely worried about having mandatory legislative review of the Access to Information Act every five years. I’d rather get it right once every 15–30 years and have it locked in stone than give governments a regular opportunity to tinker with and dilute it.

Commitment 12: Improve Public Information on Canadian Corporations

Having a common place for looking up information about Canadian Corporations is good… However, there is nothing in this about making available the trustees or… more importantly… the beneficial owners. The Economist still has the best article about why this matters.

Commitment 14: Increase Openness of Federal Science Activities (Open Science)

Please don’t call it “open” science. Science, by definition, is open. If others can’t see the results or have enough information to replicate the experiment, then it isn’t science. Thus, there is no such thing as “open” vs. “closed” science. There is just science, and something else. Maybe it’s called alchemy or bullshit. I don’t know. But don’t succumb to the open science wording because we lose a much bigger battle when you do.

It’s a small thing. But it matters.

Commitment 15: Stimulate Innovation through Canada’s Open Data Exchange (ODX)

I already talked about how I think bringing together “open data” users is a big mistake. Again, I’d focus on problems, and congregate people around those.

I also suspect that incubating 15 new data-driven companies by June 2018 is not a good idea. I’m not persuaded that there are open data businesses, just businesses that, by chance, use open data.

Commitment 17: Implement the Extractives Sector Transparency Measures Act

If the Extractives Sector Transparency Measures Act is the same as it was before then… this whole section is a gong show. Again, no EITI standard in this. Worse, the act doesn’t require extractive industries to publish payments to foreign governments in a common standard (so it will be a nightmare to do analysis across companies or industry-wide). Nor does it require that companies submit their information to a central repository, so aggregating data about the industry will be nigh impossible (you’ll have to search across hundreds of websites).

So this recommendation — “Establish processes for reporting entities to publish their reports and create means for the public to access the reports” — is fairly infuriating, as it is a terrible non-solution to a problem in the legislation.

Maybe the legislation has been fixed. But I don’t think so.

The Missing

Not in the plan is any reference to using open source software or sharing that software across jurisdictions. I’ve heard rumours of some very interesting efforts to share software between Ontario and the federal government that could potentially save taxpayers millions of dollars. In addition, by making the software code open, the government could employ security bug bounties to try to make it more secure. Lots of opportunity here.

The Intriguing

The one thing that really caught my eye, however, was this (I mentioned it earlier):

The government is developing a Service Strategy that will transform service design and delivery across the public service, putting clients at the centre.

Now that is SUPER interesting. A “Service Strategy”? Does this mean something like the Government Digital Service in the UK? Because that would require some real resources. Done right it wouldn’t just be about improving how people get services, but a rethink of how government organizes services and service data. Very exciting. Watch that space.

On Journalism, Government and the cost of Digital Illiteracy

Earlier today the CBC published a piece by Alison Crawford about Canadian public servants editing wikipedia. It draws from a clever twitter bot — @gccaedits — that tracks edits to wikipedia from government IP addresses. I love the twitter account — fun!

The article, not so much. How sad that the digital literacy of the CBC is such that this is deemed newsworthy. How equally sad that government comms people feel they need to respond to things like this.

The article is pretty formulaic. It pulls out the more sensational topics that got edited on wikipedia, such as one on hockey (aren’t we so Canadian!) and one on sexual positions (obligatory link bait).

It then has an ominous warning letting you know this is a serious issue:

“It’s yet another edit in a string of embarrassing online encyclopedia changes made by federal employees during the work day.”

It then lists other well known scandals of public servants editing wikipedia that you are almost certainly familiar with, such as the “poopitch” and the “Rush” incidents.

See the problem? The waste!

I do. I see the colossal problem of a media institution that does not understand digital or how to deploy its power and privilege. And of a government unable to summon a response appropriate in scale to these types of stories in the past.

Let’s break it down:

  1. This is Not a Problem

Look at @gccaedits. There are on average maybe 7 edits per day. There are 257,034 public servants in Canada, not counting the RCMP or the military. Even assuming each edit comes from a unique individual, that means 0.0027% of public servants are spending, say, 15 minutes editing wikipedia each day.

But how do we know employees weren’t doing this during their break? Can anyone of us say that we’ve never looked at a sports score, sent a tweet, conducted an elaborate prank, called a friend or read an article unrelated to our work while at the office? How is this any different? In what world is a handful of people making an edit to wikipedia an indication of a problem?

I’ll bet the percent of CBC employees making edits to wikipedia is equal to or greater than 0.0027%. Will they share their IP addresses so we can check?
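The back-of-envelope arithmetic above is easy to reproduce (the 7-edits-a-day and 15-minutes-per-edit figures are the post’s own estimates):

```python
# Back-of-envelope check of the numbers in the post: roughly 7 wikipedia
# edits a day against 257,034 public servants.
edits_per_day = 7
public_servants = 257_034

# Worst case: assume every edit comes from a different person.
share_editing = edits_per_day / public_servants * 100  # as a percentage
print(f"{share_editing:.4f}% of public servants edit wikipedia on a given day")

# Even granting each editor a full 15 minutes, the service-wide time "lost":
minutes_lost = edits_per_day * 15
print(f"{minutes_lost} staff-minutes a day across the whole public service")
```

That is about a hundred staff-minutes a day across a quarter-million employees — the scale of the “problem” the article is built on.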

2. Articles Like This Create Waste

Did you know that because this article was written there are probably 1–10 people in government who spent hours, if not their whole day: investigating what the government’s policies are regarding wikipedia editing; calling IT and asking them to trace the IP addresses that made the changes; bringing in HR to call in the person responsible or figure out consequences. Maybe the union rep got pulled from their normal job to defend the poor “offender” from this overbearing response. It is possible tens of thousands of dollars in time was spent “managing” this issue. That, right there, is the true waste.

You know what those public servants were not doing? They were NOT finding ways to help with the fire in Fort McMurray or addressing suicides in First Nations communities or the millions of other PRESSING AND URGENT PROBLEMS WE NEED GOVERNMENT FOCUSED ON.

3. Articles Like These Diminish Journalism

It isn’t just about the cost to government having to deal with this. What other issues could have been written about today? What about something actually important that held the government to account?

Remember the other week when Amanda Pfeffer of the CBC wrote about how IBM won a $32M Ontario government contract to fix software IBM itself had created? More of that please. I don’t mind attacking public servants’ judgement, but let’s do it on shit that matters.

4. Journalists and Managers Cooperate for Terrible Outcomes

This isn’t just the fault of the CBC. Governments need to learn the consequences of reacting to — as opposed to the opportunity of simply ignoring — these types of articles.

Management’s reflexive response to articles like these is to grope for a “solution.” The end game is almost always expensive and Orwellian network monitoring software sold by companies like Blue Coat and a heap of other “security” tools. As a result public servants are blocked from accessing wikipedia and a range of other deeply useful tools on the web, all while their computers become slower and more painful to use.

This is not a path to more effective government. Nor a path to valuable journalism. We can do better.

Okay, rant mostly done.

I feel bad for public servants who had a crappy day today because of this.

I feel bad for Alison Crawford — who has lots of important stories to her credit, and wasn’t even the first to write about this, but whose article is the focus of this piece. Please keep reading her stuff — especially on the judicial file.

Ultimately, this CBC piece could have been an amazing article about how the ‘poopitch’ and ‘Rush’ incidents were crazy overreactions and how we need to figure out how to manage the public service in a digital age.

But it wasn’t. It was a cheap thrill. Less of that please CBC. More of the real journalism we desperately need.

Moving to Harvard

Hi friends.

Just a brief note to say that I’ve been invited to come to the Kennedy School of Government at Harvard University to be a Research Fellow in the Science, Technology and Public Policy Program (STPP) at the Belfer Center for Science and International Affairs.  I’ve also been invited to be an Adjunct Lecturer in Public Policy at the Kennedy School and to teach on technology, policy and government.

A number of other changes flow out of this news!

  1. I’ll be moving to Boston in the New Year. Looking forward to reconnecting with old friends from a previous life there, and making new friends. There’s a wonderful community of people there that includes the likes of Nigel Jacobs, Debbie Chachra, Nick Sinai, Susan Crawford, Nick Grossman, Colin Maclay, Mitchell Weiss and many others that I hope I get to see more of.
  2. As I’ll be teaching, please send me any ideas, cases, readings you think I should include as course materials. I’m working on my initial course – an introduction to technology and government – which will focus on how technology can and is changing the ways we deliver services, organize government and make policy. Opportunities and challenges around user-centric design, collaboration vs. cooperation, influence and power, and open data/methodologies will all figure prominently. It is also about bringing what I tried to impart on fellows during the boot camps at Code for America and the Presidential Innovation Fellows Program.
  3. Finally, with my children (only somewhat, but nonetheless…) a little older and these new academic responsibilities, I hope to write more again. Or maybe, more precisely, I hope to write more here again. This blog enables me to organize and structure thoughts (and, on occasion, is a place to get things off my chest). With luck it, along with the classroom, will be that again. While I’ve been silent and busy, the last few years have been an incredible time of learning – particularly in the civic startup space, but also vis-a-vis technology and government more generally – and I look forward to sharing more of what I’ve gleaned.

Leaving Vancouver is hard; I’ve got wonderful roots, family and friends here, and a community and city I care about enormously. But I’m also excited about engaging with colleagues and students at the Kennedy School.

Looking forward to it all.

The promise and challenges of open government – Toronto Star OpEd

As some readers may know, it was recently announced that I’ve been asked by Ontario Premier Wynne and Government Services Minister John Milloy to be part of the Government of Ontario’s task force on Open Government.

The task force will look at best practices around the world as well as engage a number of stakeholders and conduct a series of public consultations across Ontario to make a number of recommendations around opening up the Ontario government.

I have an opinion piece in the Toronto Star today titled The Promise and Challenges of Open Government where I try (in a few words) to outline some of the challenges the task force faces as well as some of the opportunities I hope it can capitalize on.

The promise and challenges of open government

Last week, Premier Kathleen Wynne announced the launch of Ontario’s Open Government initiative, including an engagement task force (upon which I sit).

The premier’s announcement comes on the heels of a number of “open government” initiatives launched in recent years. President Barack Obama’s first act in 2009 was to sign the Memorandum on Transparency and Open Government. Since then numerous city, state and provincial governments across North America are finding new ways to share information. Internationally, 60 countries belong to the Open Government Partnership, a coalition of states and non-profits that seeks to improve accountability, transparency, technology and innovation and citizen participation.

Some of this is, to be blunt, mere fad. But there is a real sense among many politicians and the public that governments need to find new ways to be more responsive to a growing and more diverse set of citizen needs, while improving accountability.

Technology has certainly been – in part – a driver, if only because it shifts expectations. Today a Google search takes about 30 milliseconds, with many users searching for mere minutes before locating what they are looking for. In contrast, access to information requests can take weeks, or months, to complete. In an age of computers, government processes often seem more informed by the photocopier – clinging to complex systems for sorting, copying and sharing information – than by computer systems that make it easy to share information by design.

There is also growing recognition that government data and information can empower people both inside and outside government. In British Columbia, the province’s open data portal is widely used by students – many of whom previously used U.S. data as it was the only free source. Now the province benefits from an emerging workforce that uses local data while studying everything from the environment to demography to education. Meanwhile the largest users of B.C.’s open data portal are public servants, who are able to research and create policy while drawing on better information, all without endless meetings to ask for permission to use other departments’ data. The savings from fewer meetings alone is likely significant.

The benefits of better leveraging government data can affect us all. Take the relatively mundane but important issue of transit. Every day hundreds of thousands of Ontarians check Google Maps or locally developed applications for transit information. The accumulated minutes not spent waiting for transit has likely saved citizens millions of hours. Few probably realize however that it is because local governments “opened” transit data that it has become so accessible on our computers and phones.

Finally, there are a number of new ways to think about how to “talk” to Ontarians. It is possible that traditional public consultations could be improved. But there is also an opportunity to think more broadly about how the government interacts with citizens. Projects like Wikipedia demonstrate how many small contributions can create powerful resources and public assets. Could such a model apply to government?

All of these opportunities are exciting – and the province is right to explore them. But important policy questions remain. For example: how do we safeguard the data government collects to minimize political interference? The country lost a critical resource when the federal government destroyed the reliability of the long form census by making it voluntary. If crowdsourcing and other new forms of public engagement can be adopted for government, how do we manage privacy concerns and preserve equality of opportunity? And how will such changes affect public representation? Canada’s political system has been marked by increasing centralization of power over the past several decades – will new technologies and approaches further this trend? Or could they be shaped to arrest it? These are not simple questions.

It is also easy to dismiss these efforts. This will neither be the first nor the last time people talk about open government. Indeed, there is a wonderfully cynical episode of Yes, Minister from 1980 titled “Open Government.” More recently, various revelations about surveillance and national governments’ desire to snoop in on our every email and phone call reveals much about what is both opaque and to be feared about our governments. Such cynicism is both healthy and necessary. It is also a reason why we should demand more.

Open government is not something we will ever fully achieve. But I do hope that it can serve as an objective and a constantly critical lens for thinking about what we should demand. I can’t speak for the other panelists of the task force, but that will be how I approach my work.

David Eaves is a public policy entrepreneur, open government activist and negotiation expert. He is a member of the Ontario government’s new Engagement Task Force.

Government Procurement Reform – It matters

Earlier this week I posted a slidecast on my talk to Canada’s Access to Information Commissioners about how, as they do their work, they need to look deeper into the government “stack.”

My core argument was how decisions about what information gets made accessible is no longer best managed at the end of a policy development or program delivery process but rather should be embedded in it. This means monkeying around and ensuring there is capacity to export government information and data from the tools (e.g. software) government uses every day. Logically, this means monkeying around in procurement policy (see slide below) since that is where the specs for the tools public servants use get set. Trying to bake “access” into processes after the software has been chosen is, well, often an expensive nightmare.

Gov stack

Privately, one participant from a police force came up to me afterward and said that I was simply guiding people to another problem – procurement. He is right. I am. Almost everyone I talk to in government feels like procurement is broken. I’ve said as much myself in the past. Clay Johnson is someone who has thought about this more than others; here he is below at the Code for America Summit with a great slide (and talk) about how the current government procurement regime rewards all the wrong behaviours and often, all the wrong players.

Clay Risk profile

So yes, I’m pushing the RTI and open data community to think about procurement on purpose. Procurement is borked. Badly. Not just from a wasted-tax-dollars perspective, or even just from a service delivery perspective, but also because it doesn’t serve the goals of transparency well. Quite the opposite. More importantly, it isn’t going to get fixed until more people start pointing out that it is broken and start contributing to solving this major bottleneck of a problem.

I highly, highly recommend reading Clay Johnson’s and Harper Reed’s opinion piece in today’s New York Times about procurement titled Why the Government Never Gets Tech Right.

All of this becomes more important if the White House (and other governments at all levels) have any hope of executing on their digital strategies (image below).  There is going to be a giant effort to digitize much of what governments do and a huge number of opportunities for finding efficiencies and improving services is going to come from this. However, if all of this depends on multi-million (or worse, 10 or 100 million) dollar systems and websites we are, to put it frankly, screwed. The future of government isn’t to be (continue to be?) taken over by some massive SAP implementation that is so rigid and controlled it gives governments almost no opportunity to innovate. And yet this is the future our procurement policies steer us toward. A future with only a tiny handful of possible vendors, a high risk of project failure and highly rigid and frail systems that are expensive to adapt.

Worse, there is no easy path here. I don’t see anyone doing procurement right. So we are going to have to dive into a thorny, tough problem. However, the more governments that try to tackle it in radical ways, the faster we can learn some new and interesting lessons.

Open Data WH

New Zealand: The World’s Lab for Progressive Tech Legislation?

Cross posted with TechPresident.

One of the nice advantages of having a large world with lots of diverse states is the range of experiments it offers us. Countries (or regions within them) can try out ideas, and if they work, others can copy them!

For example, in the world of drug policy, Portugal effectively decriminalized virtually all drugs. The result has been dramatic, and much of it positive. The changes include a 17% decline in HIV diagnoses amongst drug users and a drop in drug use among adolescents (13-15 yrs). For those interested you can read more about this in a fantastic report by the Cato Institute written by Glenn Greenwald back in 2009, before he started exposing the unconstitutional and dangerous activities of the NSA. Now, some 15 years later, there have been increasing demands to decriminalize and even legalize drugs, especially in Latin America. But even the United States is changing, with both the states of Washington and Colorado opting to legalize marijuana. The lessons of Portugal have helped make the case, not by penetrating the public’s imagination per se, but by showing policy elites that decriminalization not only works but saves lives and saves money. Little Portugal may one day be remembered for changing the world.

I wonder if we might see a similar paper written about New Zealand ten years from now about technology policy. It may be that a number of Kiwis will counter the arguments in this post by exposing all the reasons why I’m wrong (which I’d welcome!) but at a glance, New Zealand would probably be the place I’d send a public servant or politician wanting to know more about how to do technology policy right.

So why is that?

First, for those who missed it, this summer New Zealand banned software patents. This is a stunning and entirely sensible accomplishment. Software patents, and the legal morass and drag on innovation they create, are an enormous problem. The idea that Amazon can patent “1-click” (e.g. the idea that you pre-store someone’s credit card information so they can buy an item with a single click) is, well, a joke. This is a grand innovation that should be protected for years?

And yet, I can’t think of a single other OECD member country that is likely to pass similar legislation. This means it will be up to New Zealand to show that the software world will survive just fine without patents and the economy will not suddenly explode into flames. I also struggle to think of an OECD country where one of the most significant industry groups – the Institute of IT Professionals – would not only support such a measure but help push its passage:

The nearly unanimous passage of the Bill was also greeted by Institute of IT Professionals (IITP) chief executive Paul Matthews, who congratulated [Commerce Minister] Foss for listening to the IT industry and ensuring that software patents were excluded.

Did I mention that the bill passed almost unanimously?

Second, New Zealanders are further up the learning curve on the dangerous willingness their government – and foreign governments – have shown to illegally surveil them online.

The arrest of Kim Dotcom over MegaUpload has sparked some investigations into how closely the country’s police and intelligence services follow the law. (For an excellent timeline of the Kim Dotcom saga, check out this link.) This is because Kim Dotcom was illegally spied on by New Zealand’s intelligence services and police force, at the behest of the United States, which is now seeking to extradite him. The arrest and subsequent fallout has piqued public interest and led to investigations including the Kitteridge report (PDF), which revealed that “as many as 88 individuals have been unlawfully spied on” by the country’s Government Communications Security Bureau.

I suspect the Snowden documents and subsequent furor surprised New Zealanders less than many of their counterparts in other countries, since they were less a bombshell than another data point on a trend line.

I don’t want to overplay the impact of the Kim Dotcom scandal. It has not, as far as I can tell, led to a complete overhaul of the rules that govern intelligence gathering and online security. That said, I suspect it has created a political climate that may be more (healthily) distrustful of government intelligence services and the intelligence services of the United States. As a result, it is likely that politicians have been more sensitive to this matter for a year or two longer than elsewhere, and that public servants are more accustomed to examining policies through the lens of their impact on the rights and privacy of citizens than in many other countries.

Finally (and this is somewhat related to the first point), New Zealand has, from what I can tell, a remarkably strong open source community. I’m not sure why this is the case, but suspect that people like Nat Torkington – an open source and open data advocate in New Zealand – and others like him play a role in it. More interestingly, this community has had influence across the political spectrum. The centre-left Labour party deserves much of the credit for the patent reform, while the centre-right New Zealand National Party has embraced open data. The country was among the first to embrace open source as a viable option when procuring software, and in 2003 the government developed an official open source policy to help clear the path for greater use of open source software. This contrasts sharply with my experience in Canada where, as late as 2008, open source was still seen by many government officials as a dangerous (some might say cancerous?) option that needed to be banned and/or killed.

All this is to say that both outside government (in civil society and the private sector) and within it there is greater expertise in thinking about open source solutions, and so an ability to ask different questions about intellectual property and definitions of the public good. While I recognize that this exists in many countries now, it has existed longer in New Zealand than in most, which suggests that it enjoys greater acceptance in senior ranks and there is greater experience in thinking about and engaging these perspectives.

I share all this for two reasons:

First, I would keep my eye on New Zealand. This is clearly a place where something is happening in a way that may not be possible in other OECD countries. The small size of its economy (and so relative lack of importance to the major proprietary software vendors) combined with sufficient policy agreement both among the public and elites enables the country to overcome both internal and external lobbying and pressure that would likely sink similar initiatives elsewhere. And while New Zealand’s influence may be limited, don’t underestimate the power of example. Portugal also has limited influence, but its example has helped show the world that the US-led narrative on the “war on drugs” can be countered. In many ways this is often how it has to happen. Innovation, particularly in policy, often comes from the margins.

Second, if a policy maker, public servant or politician comes to me and asks me who to talk to around digital policy, I increasingly find myself looking at New Zealand as the place that is the most compelling. I have similar advice for PhD students. Indeed, if what I’m arguing is true, we need research to describe, better than I have, the conditions that lead to this outcome as well as the impact these policies are having on the economy, government and society. Sadly, I have no names to give to those I suggest this idea to, but I figure they’ll find someone in the government to talk to, since, as a bonus to all this, I’ve always found New Zealanders to be exceedingly friendly.

So keep an eye on New Zealand, it could be the place where some of the most progressive technology policies first get experimented with. It would be a shame if no one noticed.

(Again, if some New Zealanders want to tell me I’m wrong, please do. Obviously, you know your country better than I do.)