Category Archives: open data

Why not open flu data?

On Monday, Nov. 23 the Globe ran this piece I wrote as a Special to The Globe and Mail. I’m cross-posting it back here for those who may have missed it. Hope you enjoy!

An interesting thread keeps popping up in The Globe’s reporting on H1N1. As you examine the efforts of the federal and provincial governments to co-ordinate their response to the crisis only one thing appears to be more rare than the vaccine itself: information.

For example, on Nov. 11, Patrick Brethour reported that “The premiers resolved to press the federal government to give them more timely information on vaccine supplies during their own conference call last Friday. Health officials across Canada have expressed frustration that Ottawa has been slow to inform them about how much vaccine provinces and territories will get each week.”

And of course, it isn’t just the provinces complaining about the feds. The feds are similarly complaining about the vaccine suppliers. In response to an unforeseen, last-minute shortfall in vaccine deliveries from GlaxoSmithKline (a manufacturer of the vaccine), David Butler-Jones, Canada’s Chief Public Health Officer, acknowledged in The Globe on Oct. 31 that “what I know today is not what I knew yesterday morning. And tomorrow I may find out something new.”

For those of you who are wondering what this shortage of information reminds you of, the answer is simple: life before the Internet. Here, in the digital age, we continue to treat the Public Health Officer like a town crier, waiting for him to share how much vaccine the country is going to receive. And the government is treating GSK like a 20th century industrial manufacturer you would bill with a paper invoice.

This, in an era of just-in-time delivery, radio-frequency identification chips and a FedEx website that lets me track packages from my home computer. We could resolve this information shortage quite simply by insisting the vaccine suppliers publish a website or data feed, updated hourly or daily, of the vaccine production pipeline, delivery schedule and inventory. That way, if there is a sudden change in the delivery amount the press, health officials or any average citizen could instantly know and plan accordingly. Conversely, the government of Canada could publish its inventory, and the process it uses to allocate it to the provinces, online for anyone to see. Using this data, local health authorities could calculate how much vaccine they can expect without having to talk to the feds at all. Time and energy would be saved by everyone.
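To make this concrete, here is a minimal sketch, in Python, of how a provincial health authority could compute its expected share from such a published feed. No such feed exists; every field name and number below is invented purely for illustration.

```python
import json

# Hypothetical feed entry: these field names and figures are invented,
# not taken from any real government or supplier feed.
sample_feed = json.dumps({
    "week": "2009-11-09",
    "doses_delivered": 1_800_000,
})

# Illustrative provincial populations (rounded, in millions).
populations = {"ON": 13.0, "QC": 7.8, "BC": 4.4}

def allocate(feed_json, populations):
    """Split the week's delivered doses among provinces pro rata by population."""
    doses = json.loads(feed_json)["doses_delivered"]
    total_pop = sum(populations.values())
    return {prov: round(doses * pop / total_pop)
            for prov, pop in populations.items()}

shares = allocate(sample_feed, populations)
```

The point isn’t the arithmetic, which is trivial; it’s that once the feed is public, any health authority (or journalist, or citizen) can run it without a conference call.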

Better still, no more conference calls with the premiers sitting around complaining to the Prime Minister about a lack of information. By insisting on open data – that is, sharing the data and information relating to the vaccine supply publicly – the government could improve transparency, reduce transaction costs and greatly facilitate co-ordination between the various ministries and levels of government. No more waiting for that next meeting or an email from the Chief Public Health Officer to get an update on how much vaccine to expect – just pop online and take a look for yourself.

As noted by Doug Bastien over at GC2.0, the federal government has done an excellent job informing the Canadian public about the need to get vaccinated, including using social media like Twitter, Facebook and YouTube videos. Indeed, they were so successful they helped contribute to the current vaccine shortage. To ensure we respond to the next crisis successfully, however, we need more than a citizen-centric social media strategy. We need a social media and open data strategy that ensures our governments communicate effectively with one another.

Toronto Innovation Summit on Open Government

Today I’m at Toronto City Hall doing a panel on Open Government for the Innovation Showcase. If you are reading this before 10am EST you can catch a webcast of the panel at the above link.

I’ve pasted in my slides for those who would like to follow along. Down below I’ve included a few links that those who are new to my site (or who haven’t read my writing on government 2.0) might find interesting.

Some of my favourite posts on open government, open data and gov 2.0:

The Three Laws of Open Government Data

Open Data: USA vs Canada

Create the Open Data Bargain in Cities

Globe and Mail Op-Ed: Don’t Ban Facebook

If I could start with a blank sheet of paper… (written for the Australian Government’s Web 2.0 Taskforce)

Mapping Government 2.0 against the Hype Curve

Feeding the next economy – Give us a stimulus that stimulates, not placates

Why the Government of Canada needs bloggers

Why StatCan could be like Google

The Public Service as Gift Economy

Public Service Sector Renewal and Gen Y: Don’t be efficient

Public Service Sector Renewal: Starting at the APEX

The Stimulus Map: Open Data and enhancing our democracy

The subject of the distribution of stimulus monies has been generating a fair amount of interest. Indeed, the Globe published this piece and the Halifax Chronicle-Herald published this piece analyzing the spending. But something more interesting is also happening…

Yesterday, my friend Ducky Sherwood and her husband Jim published their own analysis, an important development for two reasons.

First, their analysis is just plain interesting… they’ve got an excellent breakdown of who is receiving what (Ontario is a big winner in absolute and per capita terms, Quebec is the big loser). Moreover, they’ve made the discussion fun and engaging by creating this map. It shows you every stimulus project in the country and, wherever you click, it highlights nearby projects. The map also displays and colour-codes every riding in the country by party (blue for Conservatives, magenta for everyone else), and the colour’s intensity corresponds to the amount of money received.

Stimulus Map

Second, and more interesting for me, is how their analysis hints at the enormous possibilities of what citizens can do when governments share their data and information about programs with the public in useful formats. (You can get spreadsheets of the data and, for those more technically-minded, the API can be found here). This is an example of the Long Tail of Public Policy Analysis in action.
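As a sketch of the kind of analysis anyone can now run on those spreadsheets: a handful of lines of Python is enough to count projects per riding. The column names and rows below are invented, not the actual schema of the government’s data.

```python
import csv
import io
from collections import Counter

# Made-up rows standing in for the downloadable stimulus spreadsheet;
# the real file's columns and values will differ.
sample_csv = """riding,party,project,amount_band
Ottawa Centre,Liberal,Bridge repair,under $100K
Calgary East,Conservative,Road upgrade,between $100K and $1M
Calgary East,Conservative,Arena roof,under $100K
"""

# Tally how many stimulus projects landed in each riding.
projects_per_riding = Counter(
    row["riding"] for row in csv.DictReader(io.StringIO(sample_csv))
)
```

Swap in the real spreadsheet and the same three lines answer a question that, pre-open-data, required a research staff.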

This could have a dramatic impact on public discourse. Open data shifts the locus of power in the debate. Previously, simply getting the data was of value since your analysis would likely only compete, at best, with one or two others’ (usually a news organization’s, or maybe a professor’s). But when anyone can access the information the value shifts. Simply doing an analysis is no longer interesting (since anyone can do it). Now the quality, relevance, ideological slant, assumptions, etc… of the analysis are of paramount value. This has serious implications – implications I believe bode well for debate and democracy in this country. Indeed, I hope more people will play with the stimulus data (like these guys have) and that a more rigorous debate about both where it is being spent and how it is being spent will ensue. (Needless to say, I believe that spending money on auto bailouts and building roads does little to promote recovery – the real opportunity would have been in seeding the country with more data to power the businesses of tomorrow).

There are, however, limits to Ducky’s analysis that are no fault of her own. While she can crunch the numbers and create a great map she is… ultimately… limited to the information that government gives her (and all of us). For example the data set she uses is fairly vague about the value of projects: the government labels them “under $100K” or “between $100K and $1M.” These are hardly precise figures.

Nor does the data say anything about the quality of these projects or their impact. Of course, this is what the debate should be about. Where, how effectively, and to what end is our money being spent? Ducky’s analysis allows us to get to these questions more quickly. The point here is that by opening up this stimulus money to popular analysis we can have a debate about effectiveness.

I don’t, for a second, believe that this will be an easy debate – one in which a “right” answer will magically emerge out of the “data.” Quite the opposite: as I pointed out above, the debate will now shift to the economic, ideological and other assumptions that inform each opinion. This could in fact create a less clear picture – but it will also be a picture that is more reflective of the diversity of opinions found in our country, a diversity that can scarcely be represented in the two national newspapers. And this is what is most important. Open data allows for a greater debate, one that more citizens can contribute to and be a part of rather than just passively observe from their newspapers and TV screens. The real opportunity of open data is not that it enables a perfect discussion, but a wider, more democratic and thus, as far as I’m concerned, a better one.

(An additional note: while it is great that the government has created an API to share this data, let us not get too excited; it is very limited in what it tells us. More data, shared openly, would be better still. Don’t expect this anytime soon. Yesterday the Government dropped 4,476 pages off at the Parliamentary Budget Office rather than send them an electronic spreadsheet (h/t Tim Wilson). Clearly they don’t want the PBO to be able to crunch the numbers on the stimulus package – which means they probably don’t want you to either.)

Upcoming talk: Toronto Innovation Showcase

Just a little FYI to let people know I’m going to be in Toronto on Monday, November 2nd for the City of Toronto’s Innovation Showcase.

I’ll be doing a panel on Open Government with Maryantonett Flumian (President of the Institute On Governance; I remember meeting her when she was Deputy Minister of Service Canada), Nick Vitalari (Executive Vice President at nGenera), and Peter Corbett (CEO of iStrategyLabs – which runs the Apps for Democracy competitions for Washington, D.C.).

The Showcase will be running November 2nd and 3rd and our panel will be on Monday the 2nd from 10:15am until noon in the City Council chambers. Registration is free for those who’d like to come and, for those interested but not in Toronto, you will be able to watch a live webcast of the event online from their website. You’ll also be able to follow the event via the Twitter hashtags #TOshowcase and #opendataTO.

The goal of the showcase is to provide:

“a venue for you to come and meet with your colleagues to discuss these questions, hear their success stories, share experiences about opportunities and challenges in the public sector using social media, propose suggestions, exchange information on IT and trends, create connections, knowledge, tools and policies that address the increased demand by citizens for better public service, transparency, civic engagement and democratic empowerment.”

Should be fun – hope to catch you there and to have something fun to blog about after it’s over.

Searching The Vancouver Public Library Catalog using Amazon

A few months ago I posted about a number of civic applications I’d love to see. These are computer, iPhone or BlackBerry applications or websites that leverage data and information shared by the government and that would help make life in Vancouver a little nicer.

Recently I was interviewed on CBC’s Spark about some of these ideas that have come to fruition because of the hard work and civic mindedness of some local hackers. Mostly, I talked about Vantrash (which sends emails or tweets to remind people of their upcoming garbage day), but during the interview I also mentioned that Steve Tannock created a script that allows you to search the Vancouver Public Library (VPL) catalog from the Amazon website.

Firstly – why would you want to use Amazon to search the VPL? Two reasons: First, it is WAY easier to find books on the Amazon site than on the library site, so you can leverage Amazon’s search engine to find books (or book recommendations) at the VPL. Second, it’s a great way to keep the book budget in check!

To use the Amazon website to search the VPL catalog you need to follow these instructions:

1. You need to be using the Firefox web browser. You can download and install it for free here. It’s my favourite browser and if you use it, I’m sure it will become yours too.

2. You will need to install the Greasemonkey add-on for Firefox. This is really easy to do as well! After you’ve installed Firefox, simply go here and click on install.

3. Finally, you need to download the VPL-Amazon search script from Steve Tannock’s blog here.

4. While you are at Steve’s blog, write something nice – maybe a thank you note!

5. Go to the Amazon website and search for a book. Under the book title will be a small piece of text letting you know if the VPL has the book in its catalog! (See example picture below) Update: I’m hearing from some users that the script works on the Amazon.ca site but not the Amazon.com site.

I hope this is helpful! And happy searching.

Also, for those who are more technically inclined feel free to improve on the script – fix any bugs (I’m not sure there are any) or make it better!

Amazon shot

Spark Interview on VanTrash – The Open Source Garbage Reminder Service

A couple of weeks ago I was interviewed by the CBC’s Nora Young for her show Spark:  a weekly audio blog of smart and unexpected trendwatching about the way technology affects our lives and world.

The interview (which was fun!) dives a little deeper into some of the cool ways citizens – in working to make their lives better – can make cool things happen (and improve their community) when governments make their data freely available. The interview focuses mostly on VanTrash, the free garbage reminder service created by Luke Closs and Kevin Jones based on a blog post I wrote. It’s been getting a lot of positive feedback and is helping make the lives of Vancouverites just a little less hectic.

You can read more about the episode here and listen to it on CBC radio at 1:05 local time in most parts of Canada and 4:05 on the west coast.

You can download a podcast of the Spark episode here or listen to it on the web here.

If you live in Vancouver – check out VanTrash.ca and sign up! (or sign your parents or neighbour up!) Never forget to take the garbage out again. It works a whole lot better than this approach my friend’s mom uses:

Van trash reminder

19th Century Net Neutrality (and what it means for the 21st Century)

So what do bits of data and a coal locomotive have in common?

It turns out a lot.

In researching an article for a book I’ve discovered an interesting parallel between the two in regard to the issue of Net Neutrality. What is Net Neutrality? It is the idea that when you use the Internet, you do so free of restrictions. That any information you download gets treated the same as any other piece of information. This means that your Internet service provider (say Rogers, Shaw or Bell) can’t choose to provide you with certain content faster than other content (or worse, simply block you from accessing certain content altogether).

Normally the issue of Net Neutrality gets cast in precisely those terms – do bits of data flowing through fibre optic and copper cables get treated the same, regardless of whose computer they are coming from and whose computer they are going to. We often like to think these types of challenges are new and unique, but one thing I love about being a student of history is that there are almost always interesting earlier examples of any problem.

Take the late 19th and early 20th century. Although the term would have been foreign to them, Net Neutrality was a raging issue, but not in regard to the telegraph cables of the day. No, it was an issue in regard to railway networks.

In 1903 the United States Congress passed the Elkins Act. The Act forbade railway companies from offering, and railway customers from demanding, preferential rates for certain types of goods. Any “good” that moved over the (railway) network had to be priced and treated the same as any other “good.” In short, the (railway) network had to be neutral and price similar goods equally. What is interesting is that many railway companies welcomed the act because some trusts (corporations) paid the standard rail rate but would then demand that the railroad company give them rebates.

What’s interesting to me is that

a) Net Neutrality was a problem back in the late 19th and early 20th century; and

b) Government regulation was seen as an effective solution to ensuring a transparent and fair marketplace on these networks

The question we have to ask ourselves is: do we want to ensure that the 21st century (fibre optic) networks will foster economic growth, create jobs and improve productivity in much the same way the 19th and 20th century (railway) networks did for that era? If the answer is yes, we’d be wise to look back and see how those networks were managed effectively and poorly. The Elkins Act is an interesting starting point, as it represented progressives’ efforts to ensure transparency and equality of opportunity in the marketplace so that it could function as an effective platform for commerce.

Open Data – USA vs. Canada

When it comes to Open Data in Canada and the United States, things appear to be similar. Both countries have several municipalities with Open Data portals: Washington, D.C., San Francisco, and now New York City in the US; Vancouver and Nanaimo in Canada, with Toronto, Edmonton, Calgary and Ottawa thinking about or initiating plans.

But the similarities end there. In particular there is a real, yawning gap at the federal level. America has data.gov but here in Canada there is no movement on the Open Data front. There are some open data sets, but nothing comprehensive, and nothing dedicated to following the three laws of open data. No data.gc.ca in the works. Not even a discussion. Why is that?

As esoteric as it may sound, I believe the root of the issue lies in the two countries’ differing political philosophies. Let me explain.

It is important to remember that the United States was founded on the notion of popular sovereignty. As such its sovereignty lies with the people, or as Wikipedia nicely puts it:

The American Revolution marked a departure in the concept of popular sovereignty as it had been discussed and employed in the European historical context. With their Revolution, Americans substituted the sovereignty in the person of the English king, George III, with a collective sovereign—composed of the people. Henceforth, American revolutionaries by and large agreed and were committed to the principle that governments were legitimate only if they rested on popular sovereignty – that is, the sovereignty of the people. (italics are mine)

Thus data created by the US government is, quite literally, the people’s data. Yes, nothing legally prevents the US government from charging for information and data but the country’s organizing philosophy empowers citizens to stand up and say – this is our data, we’d like it please. In the United States the burden is on the government to explain why it is withholding that which the people own (a tradition that admittedly is hardly perfect as anyone alive from the years 2000-2008 will attest to).  But don’t underestimate the power of this norm. Its manifestations are everywhere, such as in the legal requirement that any document created by the United States government be published in the public domain (e.g. it cannot have any copyright restrictions placed on it) or in America’s vastly superior Freedom of Information laws.

This is a very different notion of sovereignty than exists in Canada. This country never deviated from the European context described above. Sovereignty in Canada does not lie with the people; indeed, it resides in King George III’s descendant, the present day Queen of England. The government’s data isn’t yours, mine, or “our” data. It’s hers. Which means it is at her discretion, or more specifically, the discretion of her government servants, to decide when and if it should be shared. This is the (radically different) context under which our government (both the political and public service), and its expectations around disclosure, have evolved. As an example, note that government documents in Canada are not public domain; they are published under a Crown Copyright that, while less restrictive than copyright, nonetheless constrains reuse (no satire allowed!) and is a constant reminder of the fact that Canadian citizens don’t own what their tax dollars create. The Queen does.

The second reason why open data has a harder time taking root in Canada is the structure of our government. In America, new projects are easier to kick start because the executive wields greater control over the public service. The Open Data initiative that started in Washington, D.C. spread quickly to the White House because its champion and mastermind, the District of Columbia’s CTO Vivek Kundra, was appointed Federal CIO by President Obama. Yes, Open Data tapped into an instinctual reflex to disclose that (I believe) is stronger down south than here, but it was executed because America’s executive branch is able to appoint officials much deeper into government (for those who care, in Canada Deputy Ministers are often appointed, but in the United States appointments go much deeper, down to the Assistant Deputy Minister and even the Director General level). Both systems have merits, and this is not a critique of Canada’s approach, simply an observation. However, it does mean that a new priority, like open data, can be acted upon quickly and decisively in the US. (For more on these differences I recommend reading John Ibbitson’s book Open & Shut.)

These differences have several powerful implications for open data in Canada.

As a first principle, if Canadians care about open data we will need to begin fostering norms in our government, among ourselves, and in our politicians, that support the idea that what our government creates (especially in terms of research and data) is ours and that we should not only have unfettered access to it, but the right to analyze and repurpose it. The point here isn’t just that this is a right, but that open data enhances democracy, increases participation and civic engagement and strengthens our economy. Enhancing this norm is a significant national challenge, one that will take years to succeed. But instilling it into the culture of our public service, our civic discourse and our political process is essential. In the end, we have to ask ourselves – in a way our American counterparts aren’t likely to (but need to) – do we want an open country?

This means that, second, Canadians are going to have to engage in a level of education of – particularly senior – public servants on open data that is much broader and more comprehensive than our American counterparts had to. In the US, an executive fiat and appointment has so far smoothed the implementation of open data solutions. That will likely not work here. We have many, many, many allies in the public service who believe in open data (and who understand it is integral to public service sector renewal). The key is to spread that knowledge and support upwards, to educate senior decision-makers, especially those at the DG, ADM and DM levels to whom both the technology and the concept are essentially foreign. It is critical that these decision-makers become comfortable with and understand the benefits of open data quickly. If not, we are unlikely to keep pace with (or even follow) our American counterparts, something I believe is essential for our government and economy.

Third, Canadians are going to have to mobilize to push for open data as a political issue. Even if senior public servants get comfortable with the idea, it is unlikely there will be action unless politicians understand that Canadians want both greater transparency and the opportunity to build new services and applications on government data.

(I’d also argue that another reason why Open Data has taken root in the US more quickly than here is the nature of its economy. As a country that thrives on services and high tech, open data is the basic ingredient that helps drive growth and innovation. Consequently, there is increasing corporate support for open data. Canada, in contrast, with its emphasis on natural resources, does not have a corporate culture that recognizes these benefits as readily.)

The Three Laws of Open Government Data

Yesterday, at the Right To Know Week panel discussion – Conference for Parliamentarians: Transparency in the Digital Era – organized by the Office of the Information Commissioner I shared three laws for Open Government Data that I’d devised on the flight from Vancouver.

The Three Laws of Open Government Data:

  1. If it can’t be spidered or indexed, it doesn’t exist
  2. If it isn’t available in open and machine readable format, it can’t engage
  3. If a legal framework doesn’t allow it to be repurposed, it doesn’t empower

To explain, (1) basically means: Can I find it? If Google (and/or other search engines) can’t find it, it essentially doesn’t exist for most citizens. So you’d better ensure that you are optimized to be crawled by all sorts of search engine spiders.

After I’ve found it, (2) notes that, to be useful, I need to be able to play with the data. Consequently, I need to be able to pull or download it in a useful format (e.g. an API, subscription feed, or a documented file). Citizens need data in a form that lets them mash it up with Google Maps or other data sets, or analyze it in Excel. This is essentially the difference between VanMaps (look, but don’t play) and the Vancouver Data Portal (look, take and play!). Citizens who can’t play with information are citizens who are disengaged/marginalized from the discussion.
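Here is an illustration of what law (2) buys you in practice: a few lines of Python can turn an open, machine-readable CSV into GeoJSON ready for a map mash-up. The sample rows and column names below are invented; a real portal’s schema will differ.

```python
import csv
import io

# Invented sample rows; a real open-data portal's columns will differ.
sample_csv = """name,latitude,longitude
Carnegie Library,49.2812,-123.1002
Main Street Station,49.2731,-123.1003
"""

def csv_to_geojson(text):
    """Turn a lat/long CSV into a GeoJSON FeatureCollection for mapping."""
    features = []
    for row in csv.DictReader(io.StringIO(text)):
        features.append({
            "type": "Feature",
            # GeoJSON orders coordinates as [longitude, latitude].
            "geometry": {"type": "Point",
                         "coordinates": [float(row["longitude"]),
                                         float(row["latitude"])]},
            "properties": {"name": row["name"]},
        })
    return {"type": "FeatureCollection", "features": features}

geo = csv_to_geojson(sample_csv)
```

The same transformation is impossible against a look-but-don’t-play map image, which is the whole point of law (2).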

Finally, even if I can find it and play with it, (3) highlights that I need a legal framework that allows me to share what I’ve created, to mobilize other citizens, provide a new service or just point out an interesting fact. This is the difference between Canada’s House of Parliament’s information (which, due to crown copyright, you can take, play with, but don’t you dare share or re-publish) and say, Whitehouse.gov which “pursuant to federal law, government-produced materials appearing on this site are not copyright protected.”

Find, Play and Share. That’s what we want.

Of course, a brief scan of the internet has revealed that others have been thinking about this as well. There are these excellent 8 Principles of Open Government Data, which are more detailed, and admittedly better, especially for conversations at the CIO level and below. But for talking to politicians (or Deputy Ministers or CEOs), like those in attendance during yesterday’s panel or, later that afternoon, the Speaker of the House, I found the simplicity of three resonated more strongly; it is a simpler list they can remember and demand.

Today: "right to know" panel for parliamentarians

Today from 10am to noon EST I’ll be a panelist for the Conference for Parliamentarians: Transparency in the Digital Era, a panel convened by the Office of the Information Commissioner as part of Right to Know Week. Apparently the Canada School of Public Service will provide access to this conference as part of its Armchair Discussions (www.righttoknow.ca).

More on the panel:

This conference aims to engage Parliamentarians in a debate and reflection on the new paradigm that the digital world has introduced for the right to know. Greater transparency in the digital era requires more than sound information management and the use of state-of-the-art information technology. It calls for a fundamental change of attitudes from disclosing information on a need-to-know basis to managing information with the presumption of disclosure as the default mode. How can public institutions trigger and accelerate this change of attitudes for the benefit of Canadians?

For those who are interested you can see my slides (sans audio, I’m afraid) below.