Tag Archives: public policy

Re-Architecting the City by Changing the Timelines and Making it Disappear

A couple of weeks ago I was asked by one of the cities near where I live to sit on an advisory board around the creation of their Digital Government strategy. For me the meeting was good since I felt that a cohort of us on the advisory board were really pushing the city into a place of discomfort (something you want an advisory board to do, in certain ways). My sense is a big part of that conversation had to do with a subtle gap between the city staff and some of the participants around what a digital strategy should deal with.

Gord Ross (of Open Roads) – a friend and very smart guy – and I were debriefing afterwards about where and why the friction was arising.

We had been pushing the city hard on its need to iterate more and use data to drive decisions. This was echoed by some of the more internet-oriented members of the board. But at one point I feel like I got healthy push back from one of the city staff. How, they asked, can I iterate when I've got 10-60 year timelines that I need to plan around? I simply cannot iterate when some of the investments I'm making are that long term.

Gord raised Stewart Brand's building layers as a metaphor, which I think sums up the differing views nicely.

Brand presents his basic argument in an early chapter, “Shearing Layers,” which argues that any building is actually a hierarchy of pieces, each of which inherently changes at different rates. In his business-consulting manner, he calls these the “Six S’s” (borrowed in part from British architect and historian F. Duffy’s “Four S’s” of capital investment in buildings).

The Site is eternal; the Structure is good for 30 to 300 years (“but few buildings make it past 60, for other reasons”); the Skin now changes every 15 to 20 years due to both weathering and fashion; the Services (wiring, plumbing, kitchen appliances, heating and cooling) change every seven to 15 years, perhaps faster in more technological settings; Space Planning, the interior partitioning and pedestrian flow, changes every two or three years in offices and lasts perhaps 30 years in the most stable homes; and the innermost layers of Stuff (furnishings) change continually.

My sense is the city staff are trying to figure out what the structure, skin and services layers should be for a digital plan, whereas a lot of us in the internet/tech world live occasionally in the services layer but mostly in the space planning and stuff layers, where the time horizons are WAY shorter. It's not that we have to think that way, it is just that we have become accustomed to thinking that way… doubly so since so much of what works on the internet isn't really "planned" – it is emergent. As a result, I found this metaphor useful for trying to understand how we can end up talking past one another.
It also goes to the heart of what I was trying to convey to the staff: that there are a number of assumptions governments make about what has been a 10 or 50 year lifecycle versus what that lifecycle could be in the future.
In other words, a digital strategy could allow some things to "phase change" from sitting in, say, the skin or service layer to operating on the faster timeline, lower capital cost and increased flexibility of the space planning layer. This could have big implications for how the city works. If you are buying software or hardware on the expectation that you will only have to do it every 15 years, your design parameters and expectations will be very different than if it is designed for 5 years. It also has big implications for the systems that you connect to or build around that software. If you accept that the software will constantly be changing, easy integration becomes a necessary feature. If you think you will have things for decades then, to a certain degree, stability and rigidity become a byproduct.
This is why, if the choice is between trying to better predict how to place a 30 year bet (e.g. architecting something to sit in the skin or services layer) and placing a 5 year bet (architecting it to sit in the space planning or stuff layer), you should put as much as possible in the latter. If you re-read my post on the US government's Digital Government strategy, this is functionally what I think they are trying to do. By unbundling the data from the application, they are trying to push the data up to the services layer of the metaphor, while pushing the applications built upon it down to the space planning and stuff layers.
This is not to say that nothing should be long term, or that everything long term is bad – I don't mean to convey that. Rather, by being strategic about what we place where, we can foster really effective platforms (services) that can last for decades (think data) while giving ourselves a lot more flexibility around what gets built around them (think applications, programs, etc…).
The Goal
The reason you want to do all this is that you actually want to give the city the flexibility to a) compete in a global marketplace and b) make itself invisible to its citizens. I hinted at this goal the other day at the end of my piece in TechPresident on the UK's digital government strategy.
On the competitive front I suspect that across Asia and Africa about 200 cities, and maybe a lot more, are going to get brand new infrastructure over the coming 100 years. Heck, some of these cities are even being built from scratch. If you want your city to compete in that environment, you'd better be able to offer new and constantly improving services in order to keep up. If not, others may create efficiencies and discover improvements that give them structural advantages in the competition for talent and other resources.
But the other reason is that this kind of flexibility is, I think, critical to making (what Gord now has me referring to as the big "C" City) disappear. I like my government services best when they blend into my environment. If you live a privileged Western World existence… how often do you think about electricity? Only when you flick the switch and it doesn't work. That's how I suspect most people want government to work: seamless, reliable, designed into their lives, but not in the way of their lives.
But more importantly, I want the "City" to be invisible so that it doesn't get in the way of my ability to enjoy, contribute to, and be part of the (lower case) city – the city that we all belong to. The "city" as that messy, idea-swapping, cosmopolitan, wealth and energy generating, problematic space that is the organism humans create wherever they gather in large numbers. I'd rather be writing this blog post on a WordPress installation that does a lot of things well but invisibly, rather than monkeying around with scripts, plugins or some crazy server language I don't want to know. Likewise, the less time I spend on "the City," and the more seamlessly it works, the more time I spend focused on "the city," doing the things that make life more interesting and hopefully better for myself and the world.
Sorry for the rambling post – I'm digesting a lot of thoughts. Hope there were some tasty pieces in that for you. Also, opaque blog post title, eh? Okay, bed time now.

The UK's Digital Government Strategy – Worth a Peek

I’ve got a piece up on TechPresident about the UK Government’s Digital Strategy which was released today.

The strategy (and my piece!) are worth checking out. They are saying a lot of the right things – useful stuff for anyone in an industry or sector that has been conservative vis-a-vis online services (I'm looking at you, governments and banks).

As I note in the piece… there is reason we should expect better:

The second is that the report is relatively frank, as far as government reports go. The website that introduces the three reports is emblazoned with an enormous title: “Digital services so good that people prefer to use them.” It is a refreshing title that amounts to a confession I’d like to see from more governments: “sorry, we’ve been doing it wrong.” And the report isn’t shy about backing that statement up with facts. It notes that while the proportion of Internet users who shop online grew from 74 percent in 2005 to 86 percent in 2011, only 54 percent of UK adults have used a government service online. Many of those have only used one.

Of course the real test will come with execution. The BC Government, the White House and others have written good reports on digital government, but rolling them out is the tricky part. The UK Government has pretty good cred as far as I'm concerned, but I'll be watching.

You can read the piece here – hope you enjoy!

Broken Government: A Case Study in Penny Wise but Pound Foolish Management

Often I write about the opportunities of government 2.0, but it is important for readers to be reminded of just how challenging the world of government 1.0 can be, and how far away any uplifting future can feel.

I've stumbled upon a horrifically wonderful example of how taxpayers are about to spend an absolutely ridiculous amount of money so that a ton of paper can be pushed around Ottawa to little or no effect. Ironically, it will all be in the name of savings and efficiency.

And, while you'll never see this reported in a newspaper, it's a perfect case study of the type of small decision that renders (in this case the Canadian) government both less effective and more inefficient. Governments: take note.

First, the context. Treasury Board (the entity that oversees how money is spent across the Canadian government) recently put out a simple directive. It stipulates that all travel costs exceeding $25,000 must get Ministerial approval and costs from $5,000-$25,000 must get Deputy Head approval.

Here are the relevant bits of text, since no sane human should read the entire memo (infer what you wish about me from that):

2.5.1 Ministerial approval is required when total departmental costs associated with the event exceed $25,000.

and

2.5.5 Deputy head approval of an event is required when the event has the following characteristics:

Total departmental costs associated with the event exceed $5,000 but are less than $25,000; or

Total hospitality costs associated with the event exceed $1,500 but are less than $5,000; and

None of the elements listed in 2.5.2 a. to g. are present for which delegated authority has not been provided.

This sounds all very prudent-like. Cut down on expenses! Make everyone justify travel! Right? Except the memo suggests (and, I'm told, is being interpreted as meaning) that it should be applied to any event – including external conferences, and even internal planning meetings.
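To see just how low these thresholds sit, here is a minimal sketch of the approval logic as I read sections 2.5.1 and 2.5.5 (a simplification: it ignores the delegated-authority exceptions in 2.5.2 a. to g., and the function is mine, not the directive's):

```python
def approval_required(event_cost: float, hospitality_cost: float = 0.0) -> str:
    """Rough encoding of sections 2.5.1 and 2.5.5 of the directive.
    Simplified sketch; ignores the 2.5.2 a. to g. exceptions."""
    if event_cost > 25_000:
        return "Ministerial approval"
    if event_cost > 5_000 or hospitality_cost > 1_500:
        return "Deputy Head approval"
    return "Normal delegated approval"

# A ten-person meeting costing $26,000 in total goes to the Minister;
# a small team trip that squeaks past $5,000 still goes to the Deputy Head.
print(approval_required(26_000))  # -> Ministerial approval
print(approval_required(5_001))   # -> Deputy Head approval
```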

To put this in further context for those who work in the private sector: if you worked for a large publicly traded company – say one with 5,000, 10,000 or even more employees – the Minister is basically the equivalent of the Chairman of the Board. And the Deputy Head? They are like the CEO.

Imagine creating a rule at a company like Ford that required an imaginary "safety engineering team" to get the chairman of the company to sign off on their travel expenses – months in advance – if, say, 10 of them needed to collectively spend $25,000 to meet in person or attend an important safety conference. It gets worse. If the team were smaller, say 3-5 people, and they could keep the cost to $5,000, they would still need approval from the CEO. In such a world it would be hard to imagine new products being created, or new creative cost-saving ideas getting hammered out. In fact, it would be hard for almost any distributed team to meet without creating a ton of paperwork. Over time, customers would begin to notice as work slowly ground to a halt.

This is why this isn’t making government more efficient. It is going to make it crazier.

It's also going to make it much, much more ineffective and inefficient.

For example, this new policy may cause a large number of employees to determine that getting approval for travel is too difficult, and they'll simply give up. Mission accomplished! Money saved! And yes, some of this travel was probably not essential. But much, likely a significant amount, was. Are we better governed? Are we safer? Is our government smarter, in a country where, say, inspectors, auditors, policy experts and other important decision makers (especially those in the regions) are no longer learning at conferences, participating in key processes or attending meetings about important projects because the travel was too difficult to get approval for? Likely not.

But there is a darker conclusion to draw as well. There is probably a significant amount of travel that remains absolutely critical. So now we are going to have public servants writing thousands of briefing notes every year seeking approval from directors, then revising them again for approval by directors general (DGs), and again for Assistant Deputy Ministers (ADMs), and again for Deputy Ministers (DMs) and again, possibly, for Ministerial approval.

That is a truly fantastic waste of the precious time of a lot of very, very senior people. To say nothing of the public servants writing, passing around, revising and generally pushing all these memos.

I'll go further. I have every confidence that for every one dollar in travel this policy deters from being requested, $500 in the time of Directors, DGs, ADMs, DMs and other senior staff will have been wasted. Given Canada is a place where the population – and thus a number of public servants – is thinly spread across an over 4,000-kilometer-wide stretch, I suspect there is a fair bit of travel that needs to take place. Using Access to Information requests you might even be able to ballpark how much time was wasted on these requests/memos.

Worse, I'm not even counting the opportunity cost. Rather than tackling the critical problems facing our country, senior people will be swatting away TPS reports… er, travel budget requests. The only companies I know that run themselves this way are those that have filed for bankruptcy and are essentially not spending any money as they wait to be restructured or sold. They aren't companies that are trying to solve any new problems, and they are certainly not those trying to find creative or effective ways to save money.

In the end, this tells us a lot about the limits of hierarchical systems. Edicts are a blunt tool – they seldom (if ever) solve the root of a problem and more often simply cause new, bigger problems since the underlying issues remain unresolved. There are also some wonderful analogies to Wikileaks and denial of service attacks, but I'll save those for tomorrow.


How Government should interact with Developers, Data Geeks and Analysts

Below is a screen shot from the OpenDataBC Google group from about two months ago. I meant to blog about this earlier, but life got in the way. For me, this is a perfect example of how many people in the data/developer/policy world would probably like to interact with their local, regional or national government.

A few notes on this interaction:

  • I occasionally hear people try to claim that governments are not responsive to requests for data sets. Some aren't. Some are. To be fair, this was not a request for the most controversial data set in the province. But it was a request. And it was responded to. So clearly there are some governments that are responsive. The question is figuring out which ones are, and why, and seeing if we can export that capacity to other jurisdictions.
  • This interaction took place in a Google group – so the whole context is social and norm-driven. I love that public officials in British Columbia, as well as at the City of Vancouver, are checking in every once in a while on Google groups about open data, contributing to conversations and answering questions that citizens have about government, policies and open data. It's a pretty responsive approach. Moreover, when people are not constructive it is the group that tends to moderate the behaviour, rather than some leviathan.
  • Yes, I've blacked out the email/name of the public servant. This is not because I think they'd mind being known or because they shouldn't be known, but because I just didn't have a chance to ask for permission. What's interesting is that this whole interaction was public and the official was both doing what their government wanted and compliant with all social media rules. And yet, I'm blacking it out, which is a sign of how messed up current rules and norms make citizens' relationships with the public officials they interact with online – I'm worried about doing something wrong by telling others about a completely public action. (And to be clear, the province of BC has really good and progressive rules around these types of things.)
  • Yes, this is not the be-all and end-all of the world. But it's a great example of a small thing being done right. It's nice to be able to show that to other government officials.


The US Government's Digital Strategy: The New Benchmark and Some Lessons

Last week the White House launched its new roadmap for digital government. This included the publication of Digital Government: Building a 21st Century Platform to Better Serve the American People (PDF version), the issuing of a Presidential directive and the announcement of White House Innovation Fellows.

In other words, it was a big week for those interested in digital and open government. Having had some time to digest these documents and reflect upon them, below are some thoughts on these announcements and the lessons I hope governments and other stakeholders take from them.

First off, the core document – Digital Government: Building a 21st Century Platform to Better Serve the American People – is a must read if you are a public servant thinking about technology or even about program delivery in general. In other words, if your email has a .gov in it or ends in something like .gc.ca you should probably read it. Indeed, I’d put this document right up there with another classic must read, The Power of Information Taskforce Report commissioned by the Cabinet Office in the UK (which if you have not read, you should).

Perhaps most exciting to me is that this is the first time I’ve seen a government document clearly declare something I’ve long advised governments I’ve worked with: data should be a layer in your IT architecture. The problem is nicely summarized on page 9:

Traditionally, the government has architected systems (e.g. databases or applications) for specific uses at specific points in time. The tight coupling of presentation and information has made it difficult to extract the underlying information and adapt to changing internal and external needs.

Oy. Isn't that the case. Most government data is captured in an application and designed for a single use. For example, say you run the license renewal system. You update your database every time someone wants to renew their license. That makes sense, because that is what the system was designed to do. But maybe you'd like to track, in real time, how frequently the database changes, and by whom. Whoops. The system wasn't designed for that, because that wasn't needed in the original application. Of course, being able to present the data in that second way might be a great way to assess how busy different branches are, so you could warn prospective customers about wait times. Now imagine this lost opportunity… and multiply it by a million. Welcome to government IT.
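For the technically inclined, here is a minimal sketch of the alternative. Everything in it (the endpoint, the schema, the database) is invented for illustration; the point is that once the renewal records sit behind a thin read API, a wait-time dashboard, or any other use nobody anticipated, can be layered on without touching the renewal application itself:

```python
# Hypothetical sketch: the renewal *application* writes records as before,
# while this thin read API exposes the same data for unanticipated uses.
from flask import Flask, jsonify
import sqlite3

app = Flask(__name__)

@app.route("/api/renewals/<branch_id>/recent")
def recent_renewals(branch_id):
    # Any consumer (a wait-time estimator, a dashboard, an auditor)
    # reads the same underlying records without touching the renewal app.
    db = sqlite3.connect("renewals.db")
    rows = db.execute(
        "SELECT renewed_at, clerk_id FROM renewals "
        "WHERE branch_id = ? ORDER BY renewed_at DESC LIMIT 100",
        (branch_id,),
    ).fetchall()
    db.close()
    return jsonify([{"renewed_at": r[0], "clerk_id": r[1]} for r in rows])

if __name__ == "__main__":
    app.run()
```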

Decoupling data from application is pretty much the first thing in the report. Here's my favourite chunk from the report (italics mine, to note the extra favourite part).

The Federal Government must fundamentally shift how it thinks about digital information. Rather than thinking primarily about the final presentation—publishing web pages, mobile applications or brochures—an information-centric approach focuses on ensuring our data and content are accurate, available, and secure. We need to treat all content as data—turning any unstructured content into structured data—then ensure all structured data are associated with valid metadata. Providing this information through web APIs helps us architect for interoperability and openness, and makes data assets freely available for use within agencies, between agencies, in the private sector, or by citizens. This approach also supports device-agnostic security and privacy controls, as attributes can be applied directly to the data and monitored through metadata, enabling agencies to focus on securing the data and not the device.

To help, the White House provides a visual guide for this roadmap. I've pasted it below. However, I've taken the liberty of highlighting, on the right, how most governments try to tackle open data – just so people can see how different the White House's approach is, and why this is not just an issue of throwing up some new data but a total rethink of how government architects itself online.

There are, of course, a bunch of things that flow out of the White House's approach that are not spelled out in the document. The first and most obvious is that once you make data an information layer, you have to manage it directly. This means that data starts to be seen and treated as an asset – which means understanding who its custodian is and establishing a governance structure around it. This is something that, previously, really only libraries and statistical bureaus understood (and sometimes not even them!).

This is the dirty secret about open data: to do it effectively you actually have to start treating data as an asset. For the White House, the benefit of taking that view of data is that it saves money. Creating a separate information layer means you don't have to duplicate it for all the different platforms you have. In addition, it gives you more flexibility in how you present it, meaning the costs of showing information on different devices (say computers vs. mobile phones) should also drop. Cost savings and increased flexibility are the real drivers. Open data becomes an additional benefit. This is something I dive into in deeper detail in a blog post from July 2011: It's the icing, not the cake: key lesson on open data for governments.

Of course, having a cool model is nice and all but, as with the previous directive on open government, this document has hard requirements designed to force departments to begin shifting their IT architecture quickly. So check out this interesting tidbit from the doc:

While the open data and web API policy will apply to all new systems and underlying data and content developed going forward, OMB will ask agencies to bring existing high-value systems and information into compliance over a period of time—a “look forward, look back” approach. To jump-start the transition, agencies will be required to:

  • Identify at least two major customer-facing systems that contain high-value data and content;
  • Expose this information through web APIs to the appropriate audiences;
  • Apply metadata tags in compliance with the new federal guidelines; and
  • Publish a plan to transition additional systems as practical

Note the language here. This is again not a "let's throw some data up there and see what happens" approach. I endorse doing that as well, but here the White House is demanding that departments be strategic about the data sets/APIs they create. Locate a data set that you know people want access to. This is easy to assess: just look at pageviews, or go over FOIA/ATIP requests and see what is demanded the most. This isn't rocket science – do what is in most demand first. But you'd be surprised how few governments want to serve up data that is in demand.
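As a trivial illustration of "do what is in most demand first": once you have the logs, the ranking exercise is a few lines of code (the log contents below are invented):

```python
from collections import Counter

# Hypothetical inputs: which data set each FOIA/ATIP request and each
# catalogue pageview touched, pulled from whatever logs you already keep.
foia_requests = ["border_wait_times", "aid_spending", "border_wait_times"]
pageviews = ["aid_spending"] * 500 + ["border_wait_times"] * 1200

demand = Counter(foia_requests) + Counter(pageviews)
for dataset, hits in demand.most_common():
    print(dataset, hits)  # publish the most-demanded data sets first
```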

Another interesting inference one can make from the report is that its recommendations embrace the possibility that participants outside of government – both for-profit and non-profit – can build services on top of government information and data. Referring back to the chart above, see how the Presentation Layer includes both private and public examples? Consequently, a non-profit's website dedicated to, say, job info for veterans could pull live data and information from various Federal Government websites, weave it together and present it in the way that is most helpful to the veterans it serves. In other words, the opportunity for innovation is fairly significant. This also has two additional repercussions. It means that services the government does not currently offer – at least in a coherent way – could be woven together by others. It also means there may be information and services the government simply never chooses to develop a presentation layer for – it may simply rely on private or non-profit sector actors (or other levels of government) to do that for it. This has interesting political ramifications in that it could allow the government to "retreat" from presenting these services and rely on others. There are definitely circumstances where this would make me uncomfortable, but the solution is not to avoid architecting the system this way; it is to ensure that such programs are funded in a way that ensures government involvement in all aspects – information, platform and presentation.
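To make that concrete, here is a sketch of what an outside-government presentation layer might look like. The URLs and response shapes are entirely hypothetical (the report promises web APIs, not these specific ones):

```python
# A non-profit's site weaving two hypothetical federal web APIs into a single
# page for veterans; government supplies the data, the non-profit supplies
# the presentation layer.
import requests

def veterans_job_page(region: str) -> dict:
    jobs = requests.get(
        "https://api.example.gov/jobs", params={"region": region, "veteran": 1}
    ).json()
    benefits = requests.get(
        "https://api.example.gov/benefits", params={"audience": "veterans"}
    ).json()
    # The non-profit, not the government, decides how the blend is presented.
    return {"jobs": jobs, "benefits": benefits, "region": region}

# e.g. veterans_job_page("BC") -> data for a page the government never built
```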

At this point I want to interject two tangential thoughts.

First, if you are wondering why your government is not doing this – be it at the local, state or national level – here's a big hint: this is what happens when you make the CIO an executive who reports at the highest level. You're just never going to get innovation out of your government's IT department if the CIO reports into the fricking CFO. All that tells me is that IT is a cost centre that should be focused on sustaining itself (e.g. keeping computers on) and that you see IT as having no strategic relevance to government. In the private sector, in the 21st century, this is pretty much the equivalent of committing suicide for most businesses. For governments… making CIOs report into CFOs is considered a best practice. I've more to say on this. But I'm taking a deep breath and am going to move on.

Second, I love how clear the document is on milestones – and they are nicely visualized as well. It may be my poor memory, but I feel like it is rare to read a government road map on any issue where the milestones are so clearly laid out.

It's particularly nice when a government treats its citizens as though they can understand something like this, and isn't afraid to be held accountable for a plan. I'm not saying that other governments don't set out milestones (some do, many however do not). But often these deadlines are buried in reams of text. Here is a simple scorecard any citizen can look at. Of course, last time around, after the open government directive was issued immediately after Obama took office, they updated these scorecards for each department, highlighting whether milestones were green, yellow or red, depending on how the department was performing. All in front of the public. Not something I've ever seen in my country, that's for sure.

Of course, the document isn't perfect. I was initially intrigued to see the report advocate that the government "Shift to an Enterprise-Wide Asset Management and Procurement Model." Most citizens remain blissfully unaware of just how broken government procurement is. Indeed, I say this, dear reader, with no idea where you live and who your government is, but I am enormously confident your government's procurement process is totally screwed. And I'm not just talking about when they try to buy fighter planes. I'm talking pretty much all procurement.

Today's procurement is perfectly designed to serve one group: big (IT) vendors. The process is so convoluted and so complicated that they are really the only ones with the resources to navigate it. The White House document essentially centralizes procurement further. On the one hand this is good: it means the requirements around platforms and data noted in the document can be more readily enforced. Basically the centre is asserting more control at the expense of the departments. And yes, there may be some economies of scale that benefit the government. But the truth is that whenever procurement decisions get bigger, so too do the stakes, and so too does the process surrounding them. Thus there is a tiny handful of players that can respond to any RFP, and real risks that the government ends up in a duopoly (kind of like with defense contractors). There is some wording around open source solutions that helps address some of this but, ultimately, it is hard to see how the recommendations are going to really alter the quagmire that is government procurement.

Of course, these are just some thoughts and comments that struck me and that I hope those of you still reading will find helpful. I've got thoughts on the White House Innovation Fellows, especially given it appears to have been at least in part inspired by the Code for America fellowship program, which I have been lucky enough to be involved with. But I'll save those for another post.

Open Data Movement is a Joke?

Yesterday, Tom Slee wrote a blog post called “Why the ‘Open Data Movement’ is a Joke,” which – and I say this as a Canadian who understands the context in which Slee is writing – is filled with valid complaints about our government, but which I feel paints a flawed picture of the open data movement.

Evgeny Morozov tweeted about the post yesterday, thereby boosting its profile. I'm a fan of Evgeny. He is an exceedingly smart and critical thinker at the intersection of technology and politics. He is exactly what our conversation needs (unlike, say, Andrew Keen). I broadly felt his comments (posted via his Twitter stream) were both on target – we need to think critically about open data – and lacking in nuance: it is possible for governments to simultaneously become more open and more closed on different axes. I write all this confident that Evgeny may turn his ample firepower on me, but such is life.

So, a few comments on Slee’s post:

First, the insinuation that the open data movement is irretrievably tainted by corporate interests is so over the top it is hard to know where to begin to respond. I've been advocating for open data for several years in Canada. Frankly, it would have been interesting and probably helpful if a large Canadian corporation (or even a medium-sized one) had taken notice. Only now, maybe 4-5 years in, are they even beginning to pay attention. Most companies don't even know what open data is.

Indeed, the examples of corporate open data "sponsors" that Slee cites are U.S. corporations sponsoring U.S. events (the Strata conference) and nonprofits (Code for America – which I have been engaged with). Since Slee is concerned primarily with the Canadian context, I'd be interested to hear his thoughts on how these examples compare to Canadian corporate involvement in open data initiatives – or even foreign corporations' involvement in Canadian open data.

And not to travel too far down the garden path on this, but it's worth noting that the corporations that have jumped on the open data bandwagon in the US often have two things in common. First, their founders are bona fide geeks, who in my experience are both interested in hard data as an end unto itself (they're all about numbers and algorithms) and want to see government-citizen interactions – and internal governmental interactions, too – made better and more efficient. Second, of course they are looking after their corporate interests, but they know they are not at the forefront of the open data movement itself. Their sponsorship of various open data projects may well have profit as one motive, but they are also deeply interested in keeping abreast of developments in what looks to be a genuine Next Big Thing. For a post that Evgeny sees as being critical of open data, I find all this deeply uncritical. Slee's post reads as if anything touched by a corporation is tainted. I believe there are both opportunities and risks. Let's discuss them.

So, who has been advocating for open data in Canada? Who, in other words, comprises the "open data movement" that Slee argues doesn't really exist – and that "is a phrase dragged out by media-oriented personalities to cloak a private-sector initiative in the mantle of progressive politics"? If you attend one of the hundreds of hackathons that have taken place across Canada over the past couple of years – like those that have happened in Vancouver, Regina, Victoria, Montreal and elsewhere – you'll find they are generally organized in hackspaces and by techies interested in ways to improve their community. In Ottawa, which I think does the best job, they can attract hundreds of people, many of whom bring spouses and kids as they work on projects they think will be helpful to their community. While some of these developers hope to start businesses, many others try to tackle issues of public good, and/or try to engage non-profits to see if there is a way they can channel their talent and the data. I don't for a second pretend that these participants are a representative cross-section of Canadians, but by and large the profile has been geek, technically inclined, leaning left, and socially minded. There are many who don't fit that profile, but that is probably the average.

Second, I completely agree that this government has been one of the most – if not the most – closed and controlling in Canada's history. I, like many Canadians, echo Slee's frustration. What's worse, I don't see things getting better. Canadian governments have been getting more centralized and controlling since at least Trudeau, and possibly earlier (indeed, I believe polling and television have played a critical role in driving this trend). Yes, the government is co-opting the language of open data in an effort to appear more open. All governments co-opt language to appear virtuous. Be it on the environment, social issues or… openness, no government is perfect and indeed, most are driven by multiple, contradictory goals.

As a member of the Federal Government's Open Government Advisory Panel I wrestle with this challenge constantly. I'm trying hard to embed some openness into the DNA of government. I may fail. I know that I won't succeed in all ways, but hopefully I can move the rock in the right direction a little bit. It's not perfect, but then it's pretty rare that anything involving government is. In my (unpaid, advisory, non-binding) role I've voiced that the government should provide the Access to Information Commissioner with a larger budget (they cut it) and that they should enable government scientists to speak freely (they have not, so far). I've also advocated that they should provide more open data. There they have, including some data sets that I think are important – such as aid data (which is always at risk of being badly spent). For some, it isn't enough. I'd like for there to be more open data sets available, and I appreciate those (like Slee – who I believe is writing from a place of genuine care and concern) who are critical of these efforts.

But, to be clear, I would never equate open government data with solving the problems of a restrictive or closed government (and have argued as much here). Just as an authoritarian regime can run on open-source software, so too might it engage in open data. Open data is not the solution for Open Government (I don't believe there is a single solution, or that Open Government is an achievable state of being – just a goal to pursue consistently), and I don't believe anyone has made the case that it is. I know I haven't. But I do believe open data can help. Like many others, I believe access to government information can lead to better informed public policy debates and hopefully some improved services for citizens (such as access to transit information). I'm not deluded into thinking that open data is going to provide a steady stream of obvious "gotcha moments" where government malfeasance is discovered, but I am hopeful that government data can arm citizens with the information the government is using to inform its decisions, so that they can better challenge, and ultimately help hold accountable, said government.

Here is where I think Evgeny’s comments on the problem with the discourse around “open” are valid. Open Government and Open Data should not be used interchangeably. And this is an issue Open Government and Open Data advocates wrestle with. Indeed, I’ve seen a great deal of discussion and reflection come as a result of papers such as this one.

Third, the arguments around StatsCan all feel deeply problematic. I say this as the person who wrote the first article (that I’m aware of) about the long form census debacle in a major media publication and who has been consistently and continuously critical of it. This government has had a dislike for Statistics Canada (and evidence) long before open data was in their vocabulary, to say nothing of a policy interest. StatsCan was going to be a victim of dramatic cuts regardless of Canada’s open data policy – so it is misleading to claim that one would “much rather have a fully-staffed StatsCan charging for data than a half-staffed StatsCan providing it for free.” (That quote comes from Slee’s follow-up post, here.) That was never the choice on offer. Indeed, even if it had been, it wouldn’t have mattered. The total cost of making StatsCan data open is said to have been $2 million; this is a tiny fraction of the payroll costs of the 2,500 people they are looking to lay off.

I'd actually go further than Slee here, and repeat something I say all the time: data is political. There are those who, naively, believed that making data open would depoliticize policy development. I hope there are situations where this might be true, but I've never taken that for granted or assumed as much: quite the opposite. In a world where data increasingly matters, it is increasingly going to become political. Very political. I've been saying this to the open data community for several years, and indeed it was a warning I made in the closing part of my keynote at the Open Government Data Camp in 2010. All this has, in my mind, little to do with open data. If anything, having data made open might increase the number of people who are aware of what is, and is not, being collected and used to inform public policy debates. Indeed, if StatsCan had made its data open years ago it might have had a larger constituency to fight on its behalf.

Finally, I agree with the Nat Torkington quote in the blog post:

Obama and his staff, coming from the investment mindset, are building a Gov 2.0 infrastructure that creates a space for economic opportunity, informed citizens, and wider involvement in decision making so the government better reflects the community’s will. Cameron and his staff, coming from a cost mindset, are building a Gov 2.0 infrastructure that suggests it will be more about turning government-provided services over to the private sector.

Moreover, it is possible for a policy to have two different drivers. It can even have multiple, contradictory drivers simultaneously. In Canada, my assessment is that the government doesn't have this level of sophistication in its thinking on this file, a conclusion I more or less wrote when assessing their Open Government Partnership commitments. I have no doubt that the Conservatives would like to turn government-provided services over to the private sector, and open data has so far not been part of that strategy. In either case, there is, in my mind, a policy infrastructure that needs to be in place to pursue either of these goals (such as having a data governance structure in place). But from a more narrow open data perspective, my own feeling is that making the data open has benefits for public policy discourse and public engagement, as well as economic benefits. Indeed, making more government data available may enable citizens to fight back against policies they feel are unacceptable. You may not agree with all the goals of the Canadian government – as someone who has written at least 30 op-eds in various papers outlining problems with various government policies, neither do I – but I see the benefits of open data as real and worth pursuing, so I advocate for it as best I can.

So in response to the opening arguments about the open data movement…

It’s not a movement, at least in any reasonable political or cultural sense of the word.

We will have to agree to disagree. My experience is quite the opposite. It is a movement. One filled with naive people, with skeptics, with idealists focused on accountability, developers hoping to create apps, conservatives who want to make government smaller and progressives who want to make it more responsive and smarter. There was little in the post that persuaded me there wasn’t a movement. What I did hear is that the author didn’t like some parts of the movement and its goals. Great! Please come join the discussion; we’d love to have you.

It’s doing nothing for transparency and accountability in government,

To say it is doing nothing for transparency seems problematic. I need only cite one data set now open to show that isn't true. And certainly the publication of aid data, procurement data, voting records and the Hansard are examples of places where open data may be making government more transparent and accountable. What I think Slee is claiming is that open data isn't transforming the government into a model of transparency and accountability, and he's right. It isn't. I don't think anyone claimed it would. Nor do I think the public has been persuaded that because it does open data, the government is somehow open and transparent. These are not words the Canadian public associates with this government no matter what it does on this file.

It’s co-opting the language of progressive change in pursuit of what turns out to be a small-government-focused subsidy for industry.

There are a number of sensible, critical questions in Slee's blog post. But this is a ridiculous charge. Prior to the data being open, you had an asset that was paid for by taxpayer dollars, then charged for at a premium that created a barrier to access. Of course, this barrier was easiest to surmount for large companies and wealthy individuals. If there was a subsidy for industry, it was under the previous model, which effectively imposed the most regressive access fee of any government service.

Indeed, probably the biggest beneficiaries of open data so far have been Canada's municipalities, which have been able to gain access to much more data than they previously could, and have saved a significant amount of money (Canadian municipalities are chronically underfunded). And of course, when looking at the most downloaded data sets from the site, it would appear that non-profits and citizens are making good use of them. For example, the 6th most downloaded was the Anthropogenic disturbance footprint within boreal caribou ranges across Canada, used by many environmental groups; number 8 was weather data; 9th was Sales of fuel used for road motor vehicles, by province and territory, used most frequently to calculate greenhouse gas emissions; and 10th was the Government of Canada Core Subject Thesaurus – used, I suspect, to decode the machinery of government. Most of the other top downloaded data sets related to immigration, used, it appears, to help applicants. It is hard to see the hand of big business in all this, although if open data helped Canada's private sector become more efficient and productive, I would hardly complain.

If you're still with me, thank you – I know that was a long slog.

Public Policy: The Big Opportunity For Health Record Data

A few weeks ago Colin Hansen – a politician in the governing party in British Columbia (BC) – penned an op-ed in the Vancouver Sun entitled Unlocking our data to save lives. It's a piece both the current government and the opposition should read, as it is filled with some very promising ideas.

In it, he notes that BC has one of the best collections of health data anywhere in the world and that data mining these records could yield patterns – like the longitudinal adverse effects of drugs when combined, or the correlations between diseases – that could save billions as well as improve health care outcomes.

He recommends that the province find ways to share this data with researchers and academics in ways that ensure the privacy of individuals is preserved. While I agree with the idea, one thing we've learned in the last 5 years is that, as good as academics are, the wider public is often much better at identifying patterns in large data sets. So I think we should think bolder. Much, much bolder.

Two years ago California-based Heritage Provider Network, a company that runs hospitals, launched a $3 million predictive health contest that will reward the team that, within three years, creates the algorithm that best predicts how many days a patient will spend in a hospital in the next year. Heritage believes that, armed with such an algorithm, it can create strategies to reach patients before emergencies occur and thus reduce the number of hospital stays. As they put it: "This will result in increasing the health of patients while decreasing the cost of care."

Of course, the algorithm that Heritage acquires through this contest will be proprietary. They will own it, and they can choose whom to share it with. But a similar contest run by BC (or, say, the VA in the United States) could create a public asset. Why would we care if others made their healthcare systems more efficient, as long as we got to as well? We could create a public good, as opposed to Heritage's private asset. More importantly, we need not offer a prize of $3 million. Several contests with prizes of $10,000 would likely yield a number of exciting results. Thus for very little money we might help revolutionize BC's, and possibly Canada's and even the world's, healthcare systems. It is an exciting opportunity.
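To give a flavour of what contest entrants actually build, here is a toy version of the prediction task on made-up data. The real Heritage features and algorithms are proprietary and far more sophisticated; this only shows the shape of the problem (claims history in, expected days in hospital out):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
# Invented features: prior claims, ER visits, prescription count per patient.
X = rng.poisson(lam=[3, 1, 2], size=(n, 3))
# Invented target: days in hospital next year, loosely tied to the features.
y = np.clip(X @ [0.5, 1.5, 0.2] + rng.normal(0, 1, n), 0, None)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out patients:", model.score(X_test, y_test))
```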

Of course, the big concern in all of this is privacy. The Globe and Mail featured an article in response to Hansen's op-ed (shockingly but unsurprisingly, it failed to link back to it – why do newspapers behave that way?) that focused heavily on the privacy concerns but was pretty vague about the details. At no point was a specific concern from the privacy commissioner raised or cited. For example, the article could have talked about the real concern in this space: what is called de-anonymization. This is when an analyst takes records – like health records – that have been anonymized to protect individuals' identities and uses alternative sources to figure out whose records belong to whom. In the cases where this occurs it is usually only a handful of people whose records are identified, but even such limited de-anonymization is unacceptable. You can read more on this here.
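For readers who want the mechanics: de-anonymization is often just a database join. A minimal sketch, on entirely invented data, of the classic linkage attack – if "anonymized" health records keep quasi-identifiers like postal code, birth year and sex, and a public list (a voters roll, say) carries the same fields plus names, a merge re-attaches identities wherever the combination is unique:

```python
import pandas as pd

health = pd.DataFrame({  # "anonymized": names removed, quasi-identifiers kept
    "postal": ["V5K 1A1", "V6B 2W2"], "birth_year": [1978, 1990],
    "sex": ["F", "M"], "diagnosis": ["diabetes", "asthma"],
})
voters = pd.DataFrame({  # a public list carrying the same quasi-identifiers
    "postal": ["V5K 1A1", "V6B 2W2"], "birth_year": [1978, 1990],
    "sex": ["F", "M"], "name": ["Alice Smith", "Bob Jones"],
})

# Where the (postal, birth_year, sex) combination is unique, the
# "anonymous" diagnosis re-attaches to a name.
print(health.merge(voters, on=["postal", "birth_year", "sex"]))
```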

As far as I can tell, no one has de-anonymized the Heritage Health Prize data. But we can take even more precautions. I recently connected with Rob James – a local epidemiologist who is excited about how opening up anonymized health care records could save lives and money. He shared with me an approach taken by the US Census Bureau that goes even further than anonymization. As outlined in this (highly technical) research paper by Jennifer C. Huckett and Michael D. Larsen, the approach involves creating a parallel data set that has none of the features of the original but maintains all the relationships between the data points. Since it is often the relationships, not the data itself, that matter, a great deal of research can take place with much lower risk. As Rob points out, there is a reasonably mature academic literature on these types of privacy-protecting strategies.
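A deliberately crude sketch of the idea (the methods in the Huckett and Larsen paper are far more sophisticated): generate a synthetic data set from the statistical relationships of the original – below, just the means and covariance – so researchers can study those relationships while no released row corresponds to a real person:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for real records, e.g. (age, systolic blood pressure) per patient.
real = rng.multivariate_normal([50, 120], [[25, 18], [18, 100]], size=1000)

mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, cov, size=1000)  # no row is a person

print(np.corrcoef(real, rowvar=False)[0, 1])       # relationship in the original
print(np.corrcoef(synthetic, rowvar=False)[0, 1])  # nearly identical here
```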

The simple fact is, healthcare spending in Canada is on the rise. In many provinces it will eclipse 50% of all spending in the next few years. This path is unsustainable. Spending in the US is even worse. We need to get smarter and more efficient. Data mining is perhaps the most straightforward and accessible strategy at our disposal.

So the question is this: does BC want to be a leader in healthcare research and outcomes in an area the whole world is going to be interested in? The foundation – a high-value data set – is already in place. The unknown is whether we can foster a policy infrastructure and public mandate that allows us to think and act in big ways. It would be great if government officials, the privacy commissioner and some civil liberties representatives started a dialogue to find some common ground. The benefits to British Columbians – and potentially to a much wider population – could be enormous, both in money and, more importantly, in lives saved.

Canada Post’s War on the 21st Century, Innovation & Productivity

It would be nice to live in a country that really understood how to support a digital economy. Sadly, last week I was once again reminded of how frustrating it is to try to be a 21st century company in Canada.

What happened? The other week Canada Post announced it was suing Geocoder.ca – an alternative provider of postal code data. It's a depressing statement on the status of the digital economy in Canada for a variety of reasons. The three that stand out are:

1) The Canadian Government has launched an open government initiative which includes a strong emphasis on open data and innovation. Guess which data set is the most requested by the public: postal code data.

2) This case risks calling into question the government’s commitment to (and understanding of) digital innovation, and

3) It is an indication – given the flimsiness of the case – of how little crown corporations understand the law (or worse, how willing they are to use taxpayer-funded litigation to bully others, irrespective of the law).

Let me break down the situation into three parts: 1) why this case matters to the digital economy (and why you should care), 2) why the case is flimsy (along with a ton of depressingly hilarious facts) and 3) what the Government could be doing about it, but isn't.

Why this case matters.

So… funny thing, the humble postal code. One would have thought that, in a digital era, the lowly postal code would have lost its meaning.

The interesting truth, however, is that the lowly postal code has, in many ways, never been more important. For better or worse, postal codes have become a core piece of data for both the analog and especially the digital economy. These simple, easy to remember, six-character codes let a company, political party or non-profit figure out what neighborhood, MP riding or city you are in. And once we know where you are, there are all sorts of services the internet can offer you: is that game you wanted available anywhere near you? Who are your elected representatives (and how did they vote on that bill)? What social services are near you? Postal codes are, quite simply, one of the easiest ways for us to identify where we are, so that governments, companies and others can better serve us. For example, after speaking to Geocoder.ca founder Ervin Ruci, it turns out that federal government ministries are a major client of his, with dozens of different departments using his service, including… the Ministry of Justice.

Indeed, the postal code has arguably become the system for physically organizing our society. Everything from the census to urban planning to figuring out where to build a Tim Horton's or a Starbucks will often use postal code data as the way to organize data about who we are and where we live. It is the humble postal code that frequently allows all these organizations – from governments to non-profits to companies – to be efficient about locating people and allocating resources. Oh, and it also really helps for shipping stuff you bought online quickly.
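To see why this one data set matters so much, here is a minimal sketch of the lookup that sits underneath countless services. The table rows are invented; assembling and licensing the real table is precisely what this lawsuit is about:

```python
# One postal code, many facts hung off it: neighbourhood, riding, city.
POSTAL_CODES = {
    "V5K 1A1": {"city": "Vancouver", "riding": "Vancouver East",
                "neighbourhood": "Hastings-Sunrise"},
}

def locate(postal_code: str) -> dict:
    return POSTAL_CODES.get(postal_code.strip().upper(), {})

# Which MP, social services or stores are near this customer?
print(locate("v5k 1a1"))
```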

Given how important postal code data is – it enables companies, non-profits and governments to be more efficient and productive (and thus competitive) – one would think government would want to make it as widely available as possible. This is, of course, what several governments do.

But not Canada. Here postal code data is managed by Canada Post, which charges, I'm told, between $5,000 and $50,000 for access to the postal code database (depending on what you want). This means, in theory, every business (or government entity at the local, provincial or federal level) in Canada that wants to use postal code information to figure out where its customers are located must pay this fee, which, of course, it passes along to its customers. Worse, for others the fee is simply not affordable. Non-profits, charities and, of course, small businesses and start-ups either choose to be less efficient, or test their business model in a jurisdiction where this type of data is easier to access.

Why this case is flimsy

Of course, because postal codes are so important, Geocoder came up with an innovative solution to the problem. Rather than copy Canada Post's postal code database (which would have violated Canada Post's terms of use) they did something ingenious… they got lots of people to help them manually recreate the data set. (There is a brief description of how here.) As the Canadian Internet Policy and Public Interest Clinic (CIPPIC) brilliantly argues in their defense of Geocoder: "The Copyright Act confers on copyright owners only limited rights in respect of particular works: it confers no monopoly on classes of works (only limited rights in respect of specific original works of authorship), nor any protection against independent creation. The Plaintiff (Canada Post) improperly seeks to use the Copyright Act to craft patent-like rights against competition from independently created postal code databases."

And, of course, there are even deeper problems with Canada Post’s claims:

The first is that an address – including the postal code – is a fact. And facts cannot be copyrighted. And, of course, if Canada Post won, we'd all be hooped, since writing a postal code down on, say… an envelope would violate Canada Post's copyright.

The second was pointed out to me by a mailing list contributor who happened to work for a city. He pointed out that it is local governments that frequently create the address data and then share it with Canada Post. Can you imagine if cities tried to copyright their address data? The claim is laughable. Canada Post claims that it must charge for the data to recoup the cost of creating it, but it gets the underlying data from cities for free – the creation of postal code data should not be an expensive proposition.

But most importantly… NONE OF THIS SHOULD MATTER. In a world where our government is pushing an open data strategy, the economic merits of making one of the most important data sets public should stand on their own, even without the fact that the law is on our side.

There is also a bonus 4th element which makes for fun reading in the CIPPIC defense that James McKinney pointed out:

“Contrary to the Plaintiff’s (Canada Post’s) assertion at paragraph 11 of the Statement of Claim that ‘Her Majesty’s copyright to the CPC Database was transferred to Canada Post’ under section 63 of the Canada Post Corporation, no section 63 of the current Canada Post Corporation Act  even exists. Neither does the Act that came into force in 1981 transfer such title.”

You can read the Canada Post Act on the Ministry of Justice’s website here and – as everyone except, apparently, Canada Post’s lawyers has observed – it has only 62 sections.

What Can Be Done.

Speaking of The Canada Post Act, while there is no section 63, there is a section 22, which appears under the header “Directives” and, intriguingly, reads:

22. (1) In the exercise of its powers and the performance of its duties, the Corporation shall comply with such directives as the Minister may give to it.

In other words… the government can compel Canada Post to make its postal code data open. Sections 22(3), (4) and (5) suggest that the government may have to compensate Canada Post for the cost of implementing such a directive, but it is not clear that it must do so. Besides, it will be interesting to see how much money is actually at stake. As an aside, if Canada were to explore privatizing Canada Post, separating out the postal code function and folding it back into government would be a logical decision, since you would want all players in the space (a private Canada Post, FedEx, Purolator, etc…) to be able to use a single postal code system.

Either way, the government cannot claim that Canada Post’s crown corporation status prevents it from compelling the organization to apply an open license to its postal code data. The law is very clear that it can.

What appears increasingly obvious is that the era of closed postal code data is coming to an end. It may come via a slow, expensive and wasteful lawsuit that costs Canada Post, Canadian taxpayers and CIPPIC resources and energy they can ill afford, or it can come quickly through a Ministerial directive.

Let's hope the latter prevails.

Indeed, the postal code has arguably become the system for physical organizing our society. Everything from the census to urban planning to figuring out where to build a Tim Horton’s or Starbucks will often use postal code data as the way to organize data about who we are and where we live. Indeed it is the humble postal code that frequently allows all these organizations – from governments to non-profits to companies – to be efficient about locating people and allocating resources. Oh. And it also really helps for shipping stuff quickly that you bought online.


The Surveillance State – No Warrant Required

Yesterday a number of police organizations came out in support of Bill C-30 – the online surveillance bill proposed by Minister Vic Toews. You can read the Vancouver Police Department’s full press release here – I’m referencing theirs not because it is particularly good or bad, but simply because Vancouver is my home town.

For those short on time, the very last statement, at the bottom of this post, is by far the worst and is something every Canadian should know about. The authors of these press releases would have been wise to read Michael Geist’s blog posts from yesterday before publishing. Geist’s analysis shows that, at best, the police are misinformed; at worst, they are misleading the public.

So let’s look at some of the details of the press release that are misleading:

Today I speak to you as the Deputy Chief of the VPD’s Investigation Division, but also as a member of the Canadian Association of Chiefs of Police, and I’m pleased to be joined by Tom Stamatakis, President of both the Vancouver Police Union and Canadian Police Association.
The Canadian Association of Chiefs of Police (CACP) is asking Canadians to consider the views of law enforcement as they debate what we refer to as “lawful access,” or Bill C-30 – “An Act to enact the Investigating and Preventing Criminal Electronic Communications Act and to amend the Criminal Code and other Acts.”
This Bill was introduced by government last week and it has generated much controversy. There is no doubt that the Bill is complex and the technology it refers to can be complex as well.
I would, however, like to try to provide some understanding of the Bill from a police perspective. We believe new legislation will:
  • assist police with the necessary tools to investigate crimes while balancing, if not strengthening, the privacy rights for Canadians through the addition of oversight not currently in place

So, first bullet point, first problem. While it is true the bill brings in some new processes, to say it strengthens privacy rights is misleading. It has become easier, not harder, to gain access to people’s personal data. Before, when the police requested personal information from internet service providers (ISPs), the ISPs could say no. Under this bill, even that check is gone. Worse, the bill apparently puts a gag order on these warrantless demands, so you can’t even find out if a government agency has requested information about you.

  • help law enforcement investigate and apprehend those who are involved in criminal activity while using new technologies to avoid apprehension due to outdated laws and technology
  • allow for timely and consistent access to basic information to assist in investigations of criminal activity and other police duties in serving the public (i.e. suicide prevention, notifying next of kin, etc.)

This, sadly, is a misleading statement. As Michael Geist notes in his blog post today “The mandatory disclosure of subscriber information without a warrant has been the hot button issue in Bill C-30, yet it too is subject to unknown regulations. These regulations include the time or deadline for providing the subscriber information (Bill C-30 does not set a time limit)…”

In other words, for the police to say the bill will provide timely access to basic information – particularly timely enough to prevent a suicide, which would require virtually real-time access – is flat out wrong. The bill makes no such promise.

Moreover, this underlying justification is itself fairly ridiculous, while the opportunities for abuse are not trivial. It is telling that none of the examples has anything to do with preventing crime. Suicides are tragic, but they do not pose a risk to society. And speedily notifying next of kin is hardly so urgent an issue that it justifies warrantless access to Canadians’ private information. These examples speak volumes about the strength of their case.

Finally, it is worth noting that while the police (and the Minister) refer to this as “basic” information, the judiciary disagrees. Earlier this month the Saskatchewan Court of Appeal concluded in R v Trapp, 2011 SKCA 143, that an individual has a reasonable expectation of privacy in the IP address assigned to him or her by an internet service provider – a point which appears not to have been considered previously by an appellate court in Canada.

The global internet, cellular phones and social media have all been widely adopted and enjoyed by Canadians, young and old. Many of us have been affected by computer viruses, spam and increasingly, bank or credit card fraud.

This is just ridiculous and is designed to do nothing more than play on Canadians’ fears. I mean, spam? Really? Google Mail has virtually eliminated spam for its users – no government surveillance infrastructure required. Moreover, it is very, very hard to see how the surveillance bill would help with any of the problems cited above: viruses, spam or bank fraud.

Okay, skipping ahead (again, you can read the full press release here):

2. Secondly, the matter of basic subscriber information is particularly sensitive.
The information which companies would be compelled to release would be: name, address, phone number, email address, internet protocol address, and the name of the service provider to police who are in the lawful execution of their duties.

Actually, to claim that this information would be going to “police who are in the lawful execution of their duties” is also misleading. At best, this data would be made available to police who believe they are in the lawful execution of their duties. This is precisely why we have warrants: so that an independent judiciary can assess whether the police actually are engaged in the lawful execution of their duties. Strip away that check and there will be abuses. Indeed, the Sun newspaper phone-hacking scandal in the UK serves as a perfect example of the type of abuse that is possible: in that case, police officers were able to access, under “extraordinary circumstances” and without a warrant or oversight, the names and phone numbers of people whose phones they wanted to, or had already, hacked.

While this information is important to police in all types of investigations, it can be critical in cases where it is urgent that police locate a caller or originator of information that reasonably causes the police to suspect that someone’s safety is at risk.
Without this information the police may not be able to quickly locate and help the person who was in trouble or being victimized.
An example would be a message over the internet indicating someone was contemplating suicide where all we had was an email address.

Again, see above. The bill does not stipulate any timeline for sharing this data. This statement is designed to lead readers to believe that the bill will grant instant access so that a situation could be defused in the moment. The bill does nothing of the sort.

Currently, there is no audited process for law enforcement to gain access to basic subscriber information. In some cases, internet service providers (ISPs) provide the information to police voluntarily — others will not, or often there are lengthy delays. The problem is that there is no consistency in providing this information to police nationally.

This, thankfully, is a sensible statement.

3. Lastly, and one of the most important things to remember, this bill does NOT allow the police to monitor emails, phone calls or internet surfing at will without a warrant, as has been implied or explicitly stated.
There is no doubt that those who are against the legislation want you to believe that it does. I have read the Bill and I cannot find that anywhere in it. There are no changes in this area from the current legislation.

This is the worst part of the press release, as it is demonstrably untrue. See the blog post from Michael Geist – the Ottawa professor most on top of this story – written yesterday, before this press release went out. According to Geist, there is a provision in the bill that “…opens the door to police approaching ISPs and asking them to retain data on specified subscribers or to turn over any subscriber information – including emails or web surfing activities – without a warrant. ISPs can refuse, but this provision is designed to remove any legal concerns the ISP might have in doing so, since it grants full criminal and civil immunity for the disclosures.” In other words, the police can conduct warrantless surveillance; it just requires the permission of the ISPs. This flatly contradicts the press release.


Algorithmic Regulation Spreading Across Government?

I was very, very excited to learn that the City of Vancouver is exploring a program, started in San Francisco, in which “smart” parking meters adjust their prices to reflect supply and demand (the story is here in the Vancouver Sun).

For those unfamiliar with the program, here is a breakdown. San Francisco has the goal of ensuring that at least one free parking spot is available on every block in the downtown core. As I learned during San Francisco’s presentation at the Code for America Summit, such a goal has several important consequences. Specifically, it reduces the likelihood of people double parking; it reduces smog and greenhouse gas emissions, since people spend less time trolling for parking; and, because trolling time is reduced, drivers searching for parking no longer slow down other traffic and buses by driving around slowly looking for a spot. In short, it has a very helpful impact on traffic more broadly.

So how does it work? The city’s smart parking meters are networked together and constantly track how many spots on a given block are free. If, at the end of the week, it turns out that all the spaces were frequently in use, the cost of parking on that block is increased by 25 cents. Conversely, if many of the spots were free, the price is reduced by 25 cents. Over time, each block finds an equilibrium point where the cost meets the demand, while remaining able to adjust to changing trends (a rough sketch of the rule follows below).
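To make the feedback loop concrete, here is a minimal sketch, in Python, of the weekly adjustment rule described above. The 25-cent step comes from the article; the occupancy thresholds and the rate floor and ceiling are assumptions I’ve added for illustration, so treat them as placeholders rather than the actual SFpark parameters.

```python
# A minimal sketch of the weekly, demand-responsive price adjustment.
# PRICE_STEP is from the article; the thresholds, floor and ceiling are
# hypothetical values chosen for illustration.

PRICE_STEP = 0.25                  # dollars, per the article
MIN_RATE, MAX_RATE = 0.25, 6.00    # assumed floor and ceiling

def adjust_block_rate(current_rate: float, occupancy: float) -> float:
    """Nudge a block's hourly rate based on how full it was this week.

    occupancy is the fraction of metered time the block's spots were
    occupied over the review period (0.0 to 1.0).
    """
    if occupancy > 0.85:      # spots almost always taken: raise the price
        current_rate += PRICE_STEP
    elif occupancy < 0.60:    # plenty of free spots: lower the price
        current_rate -= PRICE_STEP
    # otherwise the block is near equilibrium; leave the rate alone
    return round(max(MIN_RATE, min(MAX_RATE, current_rate)), 2)

# A busy block creeps upward; a quiet one drifts down.
print(adjust_block_rate(2.00, 0.95))  # 2.25
print(adjust_block_rate(2.00, 0.40))  # 1.75
```

The design choice worth noticing is the small, fixed step: each block’s price converges on its equilibrium gradually rather than jumping around, which lets the system track changing demand without giving drivers pricing whiplash.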

Technologist Tim O’Reilly has referred to these kinds of automated systems in the government context as “algorithmic regulation” – a phrase I think could become more popular over the coming decade. As software is deployed into more and more systems, algorithms will be creating marketplaces and resource allocation systems – in effect, regulating us. A little over a year ago I argued that, contrary to what many open data advocates believe, open data will make data political: open data isn’t going to depoliticize public policy and make it purely evidence-based. Quite the opposite: it will make the choices around what data we collect more contested (Canadians, think long-form census). The same is also – and already – true of the algorithms, the code, that will increasingly regulate our lives. Code is political.

Personally, I think the smart parking meter plan is exciting and I hope the city considers it seriously. But be prepared: I’m confident that, much as with smart electricity meters, an army of naysayers will emerge who simply don’t want a public resource (roads and parking spaces) to be used efficiently.

It’s like Spirit of the West said: everything is so political.