
Open Data Movement is a Joke?

Yesterday, Tom Slee wrote a blog post called “Why the ‘Open Data Movement’ is a Joke,” which – and I say this as a Canadian who understands the context in which Slee is writing – is filled with valid complaints about our government, but which I feel paints a flawed picture of the open data movement.

Evgeny Morozov tweeted about the post yesterday, thereby boosting its profile. I’m a fan of Evgeny. He is an exceedingly smart and critical thinker on the intersection of technology and politics. He is exactly what our conversation needs (unlike, say, Andrew Keen). I broadly felt his comments (posted via his Twitter stream) were both on target: we need to think critically about open data; and lacking in nuance: it is possible for governments to simultaneously become more open and more closed on different axes. I write all this confident that Evgeny may turn his ample firepower on me, but such is life.

So, a few comments on Slee’s post:

First, the insinuation that the open data movement is irretrievably tainted by corporate interests is so over the top it is hard to know where to begin to respond. I’ve been advocating for open data for several years in Canada. Frankly, it would have been interesting and probably helpful if a large Canadian corporation (or even a medium-sized one) took notice. Only now, maybe 4-5 years in, are they even beginning to pay attention. Most companies don’t even know what open data is.

Indeed, the examples of corporate open data “sponsors” that Slee cites are U.S. corporations, sponsoring U.S. events (the Strata conference) and nonprofits (Code for America – with which I have been engaged). Since Slee is concerned primarily with the Canadian context, I’d be interested to hear his thoughts on how these examples compare to Canadian corporate involvement in open data initiatives – or even foreign corporations’ involvement in Canadian open data.

And not to travel too far down the garden path on this, but it’s worth noting that the corporations that have jumped on the open data bandwagon in the US often have two things in common. First, their founders are bona fide geeks, who in my experience are both interested in hard data as an end unto itself (they’re all about numbers and algorithms) and want to see government-citizen interactions – and internal governmental interactions, too – made better and more efficient. Second, of course they are looking after their corporate interests, but they know they are not at the forefront of the open data movement itself. Their sponsorship of various open data projects may well have profit as one motive, but they are also deeply interested in keeping abreast of developments in what looks to be a genuine Next Big Thing. For a post that Evgeny sees as being critical of open data, I find all this deeply uncritical. Slee’s post reads as if anything that is touched by a corporation is tainted. I believe there are both opportunities and risks. Let’s discuss them.

So, who has been advocating for open data in Canada? Who, in other words, comprises the “open data movement” that Slee argues doesn’t really exist – and that “is a phrase dragged out by media-oriented personalities to cloak a private-sector initiative in the mantle of progressive politics”? If you attend one of the hundreds of hackathons that have taken place across Canada over the past couple of years – like those that have happened in Vancouver, Regina, Victoria, Montreal and elsewhere – you’ll find they are generally organized in hackspaces, by techies interested in ways to improve their community. In Ottawa, which I think does the best job, they can attract hundreds of people, many of whom bring spouses and kids as they work on projects they think will be helpful to their community. While some of these developers hope to start businesses, many others try to tackle issues of public good, and/or try to engage non-profits to see if there is a way they can channel their talent and the data. I don’t for a second pretend that these participants are a representative cross-section of Canadians, but by and large the profile has been geeky, technically inclined, leaning left, and socially minded. There are many who don’t fit that profile, but that is probably the average.

Second, I completely agree that this government has been one of the most – if not the most – closed and controlling in Canada’s history. I, like many Canadians, echo Slee’s frustration. What’s worse, I don’t see things getting better. Canadian governments have been getting more centralized and controlling since at least Trudeau, and possibly earlier (indeed, I believe polling and television have played a critical role in driving this trend). Yes, the government is co-opting the language of open data in an effort to appear more open. All governments co-opt language to appear virtuous. Be it on the environment, social issues or… openness, no government is perfect and indeed, most are driven by multiple, contradictory goals.

As a member of the Federal Government’s Open Government Advisory Panel I wrestle with this challenge constantly. I’m trying hard to embed some openness into the DNA of government. I may fail. I know that I won’t succeed in all ways, but hopefully I can move the rock in the right direction a little bit. It’s not perfect, but then it’s pretty rare that anything involving government is. In my (unpaid, advisory, non-binding) role I’ve voiced that the government should provide the Access to Information Commissioner with a larger budget (they cut it) and that they enable government scientists to speak freely (they have not so far). I’ve also advocated that they should provide more open data. There they have delivered, including some data sets that I think are important – such as aid data (which is always at risk of being badly spent). For some, it isn’t enough. I’d like for there to be more open data sets available, and I appreciate those (like Slee – who I believe is writing from a place of genuine care and concern) who are critical of these efforts.

But, to be clear, I would never claim that open government data is tantamount to solving the problems of a restrictive or closed government (and have argued as much here). Just as an authoritarian regime can run on open-source software, so too might it engage in open data. Open data is not the solution for Open Government (I don’t believe there is a single solution, or that Open Government is an achievable state of being – just a goal to pursue consistently), and I don’t believe anyone has made the case that it is. I know I haven’t. But I do believe open data can help. Like many others, I believe access to government information can lead to better informed public policy debates and hopefully some improved services for citizens (such as access to transit information). I’m not deluded into thinking that open data is going to provide a steady stream of obvious “gotcha moments” where government malfeasance is discovered, but I am hopeful that government data can arm citizens with the information the government is using to inform its decisions, so that they can better challenge, and ultimately help hold accountable, said government.

Here is where I think Evgeny’s comments on the problem with the discourse around “open” are valid. Open Government and Open Data should not be used interchangeably. And this is an issue Open Government and Open Data advocates wrestle with. Indeed, I’ve seen a great deal of discussion and reflection come as a result of papers such as this one.

Third, the arguments around StatsCan all feel deeply problematic. I say this as the person who wrote the first article (that I’m aware of) about the long form census debacle in a major media publication, and who has been consistently and continuously critical of it. This government had a dislike for Statistics Canada (and evidence) long before open data was in its vocabulary, to say nothing of being a policy interest. StatsCan was going to be a victim of dramatic cuts regardless of Canada’s open data policy – so it is misleading to claim that one would “much rather have a fully-staffed StatsCan charging for data than a half-staffed StatsCan providing it for free.” (That quote comes from Slee’s follow-up post, here.) That was never the choice on offer. Indeed, even if it had been, it wouldn’t have mattered. The total cost of making StatsCan data open is said to have been $2 million; this is a tiny fraction of the payroll costs of the 2,500 people they are looking to lay off.

I’d actually go further than Slee here, and repeat something I say all the time: data is political. There are those who, naively, believed that making data open would depoliticize policy development. I hope there are situations where this might be true, but I’ve never taken that for granted or assumed as much: quite the opposite. In a world where data increasingly matters, it is increasingly going to become political. Very political. I’ve been saying this to the open data community for several years; indeed, it was a warning I made in the closing part of my keynote at the Open Government Data Camp in 2010. All this has, in my mind, little to do with open data. If anything, having data made open might increase the number of people who are aware of what is, and is not, being collected and used to inform public policy debates. Indeed, if StatsCan had made its data open years ago it might have had a larger constituency to fight on its behalf.

Finally, I agree with the Nat Torkington quote in the blog post:

Obama and his staff, coming from the investment mindset, are building a Gov 2.0 infrastructure that creates a space for economic opportunity, informed citizens, and wider involvement in decision making so the government better reflects the community’s will. Cameron and his staff, coming from a cost mindset, are building a Gov 2.0 infrastructure that suggests it will be more about turning government-provided services over to the private sector.

Moreover, it is possible for a policy to have two different possible drivers. It can even have multiple contradictory drivers simultaneously. In Canada, my assessment is that the government doesn’t have this level of sophistication in its thinking on this file, a conclusion I more or less wrote when assessing their Open Government Partnership commitments. I have no doubt that the conservatives would like to turn government-provided services over to the private sector, but open data has so far not been part of that strategy. Either way, there is, in my mind, a policy infrastructure that needs to be in place to pursue either of these goals (such as having a data governance structure in place). But from a more narrow open data perspective, my own feeling is that making the data open has benefits for public policy discourse, for public engagement, and for the economy. Indeed, making more government data available may enable citizens to fight back against policies they feel are unacceptable. You may not agree with all the goals of the Canadian government – as someone who has written at least 30 op-eds in various papers outlining problems with various government policies, neither do I – but I see the benefits of open data as real and worth pursuing, so I advocate for it as best I can.

So in response to the opening arguments about the open data movement…

It’s not a movement, at least in any reasonable political or cultural sense of the word.

We will have to agree to disagree. My experience is quite the opposite. It is a movement. One filled with naive people, with skeptics, with idealists focused on accountability, developers hoping to create apps, conservatives who want to make government smaller and progressives who want to make it more responsive and smarter. There was little in the post that persuaded me there wasn’t a movement. What I did hear is that the author didn’t like some parts of the movement and its goals. Great! Please come join the discussion; we’d love to have you.

It’s doing nothing for transparency and accountability in government,

To say it is doing nothing for transparency seems problematic. I need only cite one data set now open to show that isn’t true. And certainly the publication of aid data, procurement data, voting records and the Hansard are examples of places where open data may be making government more transparent and accountable. What I think Slee is claiming is that open data isn’t transforming the government into a model of transparency and accountability, and he’s right. It isn’t. I don’t think anyone claimed it would. Nor do I think the public has been persuaded that because it does open data, the government is somehow open and transparent. These are not words the Canadian public associates with this government no matter what it does on this file.

It’s co-opting the language of progressive change in pursuit of what turns out to be a small-government-focused subsidy for industry.

There are a number of sensible, critical questions in Slee’s blog post. But this is a ridiculous charge. Prior to the data being open, you had an asset that was paid for by taxpayer dollars, then charged for at a premium that created a barrier to access. Of course, this barrier was easiest to surmount for large companies and wealthy individuals. If there was a subsidy for industry, it was under the previous model, as it effectively had the most regressive tax for access of any government service.

Indeed, probably the biggest beneficiaries of open data so far have been Canada’s municipalities, which have been able to gain access to much more data than they previously could, and have saved a significant amount of money (Canadian municipalities are chronically underfunded). And when looking at the most downloaded data sets from the site, it would appear that non-profits and citizens are making good use of them. For example, the 6th most downloaded was the Anthropogenic disturbance footprint within boreal caribou ranges across Canada, used by many environmental groups; number 8 was weather data; 9th was Sales of fuel used for road motor vehicles, by province and territory, used most frequently to calculate greenhouse gas emissions; and 10th was the Government of Canada Core Subject Thesaurus – used, I suspect, to decode the machinery of government. Most of the other top downloaded data sets related to immigration, used, it appears, to help applicants. It is hard to see the hand of big business in all this, although if open data helped Canada’s private sector become more efficient and productive, I would hardly complain.

If you’re still with me, thank you – I know that was a long slog.

Canada Post’s War on the 21st Century, Innovation & Productivity

The other week Canada Post announced it was suing Geocoder.ca – an alternative provider of postal code data. It’s a depressing statement on the status of the digital economy in Canada for a variety of reasons. The three that stand out are:

1) The Canadian Government has launched an open government initiative which includes a strong emphasis on open data and innovation. Guess which data set is the most requested by the public: postal code data.

2) This case risks calling into question the government’s commitment to (and understanding of) digital innovation, and

3) It is an indication – given the flimsiness of the case – of how little crown corporations understand the law (or worse, how willing they are to use taxpayer-funded litigation to bully others irrespective of the law).

Let me break down the situation into three parts: 1) why this case matters to the digital economy (and why you should care), 2) why the case is flimsy (and a ton of depressingly hilarious facts), and 3) what the Government could be doing about it, but isn’t.

Why this case matters.

So… funny thing the humble postal code. One would have thought that, in a digital era, the lowly postal code would have lost its meaning.

The interesting truth, however, is that the lowly postal code has, in many ways, never been more important. For better or worse, postal codes have become a core piece of data for both the analog and especially the digital economy. These simple, easy-to-remember, six-character codes let a company, political party, or non-profit figure out what neighborhood, federal riding or city you are in. And once we know where you are, there are all sorts of services the internet can offer you: is that game you wanted available anywhere near you? Who are your elected representatives (and how did they vote on that bill)? What social services are near you? Postal codes are, quite simply, one of the easiest ways for us to identify where we are, so that governments, companies and others can better serve us. For example, after speaking to Geocoder.ca founder Ervin Ruci, it turns out that federal government ministries are a major client of his, with dozens of different departments using his service, including… the Ministry of Justice.
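To make the point concrete, here is a minimal sketch of the kind of lookup that sits behind these services. Everything in it is hypothetical – the file postal_codes.csv and its column names simply stand in for the kind of postal-code-to-geography table that a provider like Geocoder.ca offers:

```python
import csv

# Hypothetical table: one row per postal code with its centroid and federal riding.
# Columns: postal_code, latitude, longitude, riding
def load_postal_codes(path="postal_codes.csv"):
    with open(path, newline="") as f:
        return {row["postal_code"].replace(" ", "").upper(): row
                for row in csv.DictReader(f)}

def lookup(code, table):
    """Return the record for a postal code like 'K1A 0A6', or None."""
    return table.get(code.replace(" ", "").upper())

table = load_postal_codes()
record = lookup("K1A 0A6", table)
if record:
    print(f"Riding: {record['riding']}; "
          f"location: ({record['latitude']}, {record['longitude']})")
```

Trivial code – which is exactly the point: the hard (and expensive) part is not the lookup, it is getting access to the underlying table.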

Given that postal code data enables companies, non-profits and governments to be more efficient and productive (and thus more competitive), one would think government would want to make it as widely available as possible. This is, of course, what several governments do.

But not Canada. Here postal code data is managed by Canada Post, which charges, I’m told, between $5,000 and $50,000 for access to the postal code database (depending on what you want). This means, in theory, every business (or government entity at the local, provincial or federal level) in Canada that wants to use postal code information to figure out where its customers are located must pay this fee, which, of course, it passes along to its customers. Worse, for others the fee is simply not affordable. Non-profits, charities and, of course, small businesses and start-ups either choose to be less efficient, or test their business model in a jurisdiction where this type of data is easier to access.

Why this case is flimsy

Of course, because postal codes are so important, Geocoder came up with an innovative solution to the problem. Rather than copy Canada Post’s postal code database (which would have violated Canada Post’s terms of use) they did something ingenious… they got lots of people to help them manually recreate the data set. (There is a brief description of how here.) As the Canadian Internet Policy and Public Interest Clinic (CIPPIC) brilliantly argues in its defense of Geocoder: “The Copyright Act confers on copyright owners only limited rights in respect of particular works: it confers no monopoly on classes of works (only limited rights in respect of specific original works of authorship), nor any protection against independent creation. The Plaintiff (Canada Post) improperly seeks to use the Copyright Act to craft patent-like rights against competition from independently created postal code databases.”

And, of course, there are even deeper problems with Canada Post’s claims:

The first is that an address – including the postal code – is a fact. And facts cannot be copyrighted. And, of course, if Canada Post won, we’d all be hooped, since writing a postal code down on, say… an envelope would violate Canada Post’s copyright.

The second was pointed out to me by a mailing list contributor who happened to work for a city. He pointed out that it is local governments that frequently create the address data and then share it with Canada Post. Can you imagine if cities tried to copyright their address data? The claim is laughable. Canada Post claims that it must charge for the data to recoup the cost of creating it, but it gets the underlying address data from cities for free – the creation of postal code data should not be an expensive proposition.

But most importantly… NONE OF THIS SHOULD MATTER. In a world where our government is pushing an open data strategy, the economic merits of making one of the most important data sets public should stand on their own, regardless of whether the law happens to be on our side.

There is also a bonus 4th element which makes for fun reading in the CIPPIC defense that James McKinney pointed out:

“Contrary to the Plaintiff’s (Canada Post’s) assertion at paragraph 11 of the Statement of Claim that ‘Her Majesty’s copyright to the CPC Database was transferred to Canada Post’ under section 63 of the Canada Post Corporation, no section 63 of the current Canada Post Corporation Act  even exists. Neither does the Act that came into force in 1981 transfer such title.”

You can read the Canada Post Act on the Ministry of Justice’s website here and – as everyone except, apparently, Canada Post’s lawyers has observed – it has only 62 sections.

What Can Be Done.

Speaking of The Canada Post Act, while there is no section 63, there is a section 22, which appears under the header “Directives” and, intriguingly, reads:

22. (1) In the exercise of its powers and the performance of its duties, the Corporation shall comply with such directives as the Minister may give to it.

In other words… the government can compel Canada Post to make its postal code data open. Sections 22(3), (4) and (5) suggest that the government may have to compensate Canada Post for the cost of implementing such a directive, but it is not clear that it must do so. Besides, it would be interesting to see how much money is actually at stake. As an aside, if Canada were to explore privatizing Canada Post, separating out the postal code function and folding it back into government would be a logical decision, since you would want all players in the space (a private Canada Post, FedEx, Purolator, etc.) to be able to use a single postal code system.

Either way, the government cannot claim that Canada Post’s crown corporation status prevents it from compelling the organization to apply an open license to its postal code data. The law is very clear that it can.

What appears increasingly obvious is that the era of closed postal code data is coming to an end. That end may come slowly, through an expensive and wasteful lawsuit that costs Canada Post, Canadian taxpayers and CIPPIC resources and energy they can ill afford, or it can come quickly, through a Ministerial directive.

Let’s hope the latter prevails.

Indeed, the postal code has arguably become the system for physically organizing our society. Everything from the census to urban planning to figuring out where to build a Tim Hortons or a Starbucks relies on postal code data as the way to organize information about who we are and where we live. It is the humble postal code that frequently allows all these organizations – from governments to non-profits to companies – to be efficient about locating people and allocating resources. Oh, and it also really helps for quickly shipping the stuff you bought online.

It would be nice to live in a country that really understood how to support a digital economy. Sadly, this case is yet another reminder of how frustrating it is to try to be a 21st-century company in Canada.


Some thoughts on the Open Government Partnership

It is hard to sum up what is happening at the Open Government Partnership this year. Whether it is the geography the conference covers (over 40 countries), the range of issues affected by openness, or the sheer number of people, there is a great deal to wrap your arms around.

Here are some reflections after a day and a half.

First is the sheer size of the conference. I’m told there are roughly 1200 registered participants. And you feel it. The buzz is louder, the crowds are bigger, and the number of people you don’t know is larger.

That size has its benefits. For one, governments get to see what others are up to, but more important are the connections made among civil society members. In many ways the OGP’s biggest benefit may be the way it builds capacity by enabling civil society organizations and individuals to learn from one another and trade stories.

The potential for this is particularly great (and remains unrealized) between civil society communities that do not tend to interact. There remain important and interesting gaps, particularly between the more mature “Access to Information” community and the younger, still-coalescing “Gov2.0/OpenGov/Tech/Transparency” community. It often feels like members of the access to information community are dismissive of the technology aspects of the open government movement in general and the OGP in particular. This is disappointing, as technology is likely going to have a significant impact on the future of access to information. As more and more government work gets digitized, the way we access information is going to change, and the opportunities to architect for accessibility (or not) will become more important. These are important conversations, and finding a way to knit these two communities together could help advance everyone’s thinking.

Moreover, concerns among access to information types that the OGP will be dominated by technology issues feel overplayed: every “official” civil society representative I witnessed respond to a government presentation on its OGP goals came out of the access to information community, not the Gov 2.0/tech community. In a real sense, it is the access to information community that has greater influence over the discourse at the OGP, so concerns about the reverse feel, to some measure, overblown.

Finally, and perhaps most intriguingly, there are some very early debates about the future of the OGP, particularly in relation to its members. The articles of governance published yesterday by the OGP do lay out a process for removing members, but the criteria are vague on many issues that virtually all civil society members feel strongly about. The OGP has already demonstrated that the term “open” can capture the imaginations of a broad group of people and is a desirable trait with which governments want to be associated. In this regard it has realized some of its potential – and has a great deal more – as a carrot that can provoke governments to make commitments around openness that they might not otherwise make or prioritize. But the stick – which is essential to many civil society participants – remains somewhat vague. And without it, it is hard to imagine the project working. If, once you are in the OGP, it does not much matter what you do, then the project loses a great deal of its meaning – at least, based on conversations I had, to many of its civil society participants.

And the tests on this issue are real and immediate.

South Africa – an OGP steering committee member(!) – is in the process of enacting the “Protection of Information Bill,” which effectively makes leaks illegal. If this can happen without any sanction to its OGP status, then, I suspect, the process loses a great deal of credibility. The participation of Russia raises similar questions. While it speaks volumes about the attractiveness of the OGP, and Russia’s participation may help foster some positive domestic changes, to admit a country that is regularly accused of rigging elections and where journalists routinely go missing is likely to frustrate many who wish to use the OGP as a stick by which to hold their own governments to account. How worried will Mexico, Turkey or Canada be about reneging on their commitments if South Africa is allowed to pass draconian laws around access to information, or journalists are allowed to go missing in Russia?

To date, there are no heated arguments over the issue (at least publicly), and my sense is the topic is only just beginning to percolate for most civil society members. But given the immediate challenges South Africa and Russia pose to the OGP, expect this issue to become much more heated, barring some clear resolution that satisfies the civil society participants.

Much less important, but still worth noting, is the simple fact that the logistics must be better next time. While the Brazilians were generous and warm hosts and, unlike in New York, civil society participants were thankfully not segregated from the government representatives, the failure to have internet access on the first day was unacceptable. It meant that anyone not on site could not follow along with the presentations, and those at the conference could not engage those at home, or at the conference, online. For a conference about openness and engagement, it was an unfortunate reminder of some of the more basic challenges still confronting us.

Canada's Action Plan on Open Government: A Review

The other day the Canadian Government published its Action Plan on Open Government, a high-level document that both lays out the Government’s goals on this file and fulfills its pledge to create tangible goals as part of its participation in next week’s Open Government Partnership 2012 annual meeting in Brazil.

So what does the document say and what does it mean? Here is my take.

Take Away #1: Not a breakthrough document

There is much that is good in the government’s action plan – some of which I will highlight later. But for those hoping that Canada was going to get the Gov 2.0 bug and try to leapfrog leaders like the United States or the United Kingdom, this document will disappoint. By and large this document is not about transforming government – even at its most ambitious it appears to be much more about engaging in some medium sized experiments.

As a result the document emphasizes a number of things that the UK and US started doing several years ago, such as getting a license that adheres to international norms, posting government resource allocation and performance management information online in machine-readable forms, and refining the open data portal.

What you don’t see are explicit references to rethinking how government leverages citizens’ experience and knowledge with a site like Challenge.gov, engaging experts in innovative ways such as with Peer to Patent, or working with industries or provinces to generate personal open data as the US has done with the Blue Button (for healthcare) and the Green Button (for utilities).

Take Away #2: A Solid Foundation

This said, there is much in the document that is good. Specifically, in many areas, it does lay a solid foundation for some future successes. Probably the most important statements are the “foundational commitments” that appear on this page. Here are some key points:

Open Government Directive

In Year 1 of our Action Plan, we will confirm our policy direction for Open Government by issuing a new Directive on Open Government. The Directive will provide guidance to 106 federal departments and agencies on what they must do to maximize the availability of online information and data, identify the nature of information to be published, as well as the timing, formats, and standards that departments will be required to adopt… The clear goal of this Directive is to make Open Government and open information the ‘default’ approach.

This last sentence is nice to read. Of course the devil will be in the details (and in the execution) but establishing a directive around open information could end up being as important (although admittedly not as powerful – an important point) as the establishment of Access to Information. Done right, such a directive could vastly expand the range of documents made available to the public, something that should be very doable as more and more government documentation moves into digital formats.

For those complaining about the lack of ATI reform in the document, this directive, and its creation, will be worth further exploration. There is an enormous opportunity here to reset how government discloses information – and the “default to open” line creates a public standard that we can try to hold the government to account on.

And of course the real test for all this will come in years 2-3, when it comes time to disclose documents around something sensitive to the government… like, say, the Northern Gateway Pipeline (or something akin to the Afghan prisoner issue). In theory this directive should make all government research and assessments open; when that moment happens we’ll have a real test of the robustness of any such directive.

Open Government License:

To support the Directive and reduce the administrative burden of managing multiple licensing regimes across the Government of Canada, we will issue a new universal Open Government License in Year 1 of our Action Plan with the goal of removing restrictions on the reuse of published Government of Canada information (data, info, websites, publications) and aligning with international best practices… The purpose of the new Open Government License will be to promote the re-use of federal information as widely as possible...

Full Disclosure: I have been pushing (in an unpaid capacity) for the government to reform its license and helping out in its discussions with other jurisdictions around how it can incorporate the best practices and most permissive language possible.

This is another important foundational piece. To be clear, this is not about an “open data” license. This is about creating a license for all government information and media. I suspect this appeals to this government in part because it ends the craziness of having lawyers across government constantly re-inventing new licenses and creating a complex set of licenses to manage. Let me be clear about what I think this means: this is functionally about neutering crown copyright. It’s about creating a licensing regime that makes very clear what the user’s rights are (which crown copyright does not do) and that is as permissive as possible about re-use (which crown copyright, because of its lack of clarity, is not). Achieving such a license is a critical step to doing many of the more ambitious open government and gov 2.0 activities that many of us would like to see happen.

Take Away #3: The Good and Bad Around Access to Information

For many, I think the biggest disappointment may be that the government has chosen not to try to update the Access to Information Act. It is true that this is what the Access to Information Commissioners from across the country recommended it do in an open letter (recommendation #2 in their letter). Opening up the act likely carries a number of political risks – particularly for a government that has not always been forthcoming with documents (the Afghan detainee issue and the F-35 contract both come to mind) – however, I again propose that it may be possible to achieve some of the objectives around improved access through the Open Government Directive.

What I think shouldn’t be overlooked, however, is the government’s “experiment” around modernizing the administration of Access to Information:

To improve service quality and ease of access for citizens, and to reduce processing costs for institutions, we will begin modernizing and centralizing the platforms supporting the administration of Access to Information (ATI). In Year 1, we will pilot online request and payment services for a number of departments allowing Canadians for the first time to submit and pay for ATI requests online with the goal of having this capability available to all departments as soon as feasible. In Years 2 and 3, we will make completed ATI request summaries searchable online, and we will focus on the design and implementation of a standardized, modern, ATI solution to be used by all federal departments and…

These are welcome improvements. As one colleague – James McKinney – noted, the fact that you currently have to pay with a check means that only people with Canadian bank accounts can make ATIP requests. This largely means just Canadian citizens. This is ridiculous. Moreover, the process is slow and painful (who uses checks anymore?! The Brits are phasing them out by 2018 – good on ’em!). The use of checks creates a real barrier – particularly, I think, for young people.

Also, being able to search summaries of previous requests is a no-brainer.

Take Away #4: This is a document of experiments

As I mentioned earlier, outside the foundational commitments, the document reads less like a grand experiment and more like a series of small experiments.

Here the Virtual Library is another interesting commitment – certainly during the consultations the number one complaint was that people have a hard time finding what they are looking for on government websites. Sadly, even if you know the name of the document you want, it is still often hard to find. A virtual library is meant to address this concern – obviously it is all going to be in the implementation – but it is a response to a genuine expressed need.

Meanwhile, Advancing Recordkeeping in the Government of Canada and User-Centric Web Services feel like projects that were already in the pipeline before Open Government came on the scene. They certainly conform to the shared services and IT centralization announced by Treasury Board last year. They could be helpful, but honestly, these will all be about execution, since these types of projects can harmonize processes and save money, or they can become enormous boondoggles that everyone tries to work around because they don’t meet anyone’s requirements. If they do go the right way, I can definitely imagine how they might help the management of ATI requests (I have to imagine it would make it easier to track down a document).

I am deeply excited about the implementation of the International Aid Transparency Initiative (IATI). This is something I’ve campaigned for and urged the government to adopt, so it is great to see. I think these types of cross-jurisdictional standards have a huge role to play in the open government movement, so joining one, figuring out what about the implementation works and doesn’t work, and assessing its impact is important both for Open Government in general and for Canada, as it will let us learn lessons that, I hope, will become applicable in other areas as more of these types of standards emerge.

Conclusion:

I think it was always going to be a stretch to imagine Canada taking a leadership role in the Open Government space, at least at this point. Frankly, we have a lot of catching up to do just to draw even with places like the US and the UK, which have been working hard to keep experimenting with new ideas in the space. What is promising about the document is that it does present an opportunity for some foundational pieces to be put into play. The bad news is that real efforts to rethink government’s relationship with citizens, or even the role of the public servant within a digital government, have not been taken very far.

So… a C+?


Additional disclaimer: As many of my readers know, I sit on the Federal Government’s Open Government Advisory Panel. My role on this panel is to serve as a challenge function to the ideas that are presented to us. In this capacity I share with them the same information I share with you – I try to be candid about what I think works and doesn’t work around ideas they put forward. Interestingly, I did not see even a draft version of the Action Plan until it was posted to the website and was (obviously by inference) not involved in its creation. Just want to share all that to be, well, transparent, about where I’m coming from – which remains as a citizen who cares about these issues and wants to push governments to do more around gov 2.0 and open gov.

Also, sorry for the typos, but I’m sick and it is 1am. So I’m checking out. Will proofread again when I awake.

Here's a prediction: A Canadian F-35 will be shot down by a drone in 2035

One of the problems with living in a country like Canada is that certain people become the default person on certain issues. It’s a small place and the opportunity for specialization (and brand building) is small, so you can expect people to go back to the same well a fair bit on certain issues. I know, when it comes to Open Data, I can often be that well.

Yesterday’s article by Jack Granatstein – one of the country’s favourite commentators on (and cheerleaders of) all things military – is a great case in point. It’s also a wonderful example of an article that is not designed to answer deep questions, but merely to reassure readers not to question anything.

For those not in the know, Canada is in the midst of a scandal around the procurement of new fighter jets which, it turns out, the government not only chose to single-source, but has been caught misleading the public about the cost of, despite repeated attempts by both the opposition and the media to get the full figure. Turns out the planes will cost twice as much as previously revealed, maybe more. For those interested in reading a case study in how not to do government procurement, Andrew Coyne offers a good review in his two latest columns here and here. (Granatstein, in the past, has followed the government script, using the radically low-balled figure of $16 billion; the cost is now accepted to be $26 billion.)

Here is why Jack Granatstein’s piece is so puzzling. The fact is, there really aren’t that many articles about whether the F-35 is the right plane or not. People are incensed about being radically misled about the cost and about the sole-source process – not that we chose the F-35. But Granatstein’s piece is all about assuring us that a) a lot of thought has gone into this choice and b) we shouldn’t really blame the military planners (nor, apparently, the politicians). It is the public servants’ fault. So, some thoughts.

These are disturbing and confusing conclusions. I have to say, it is very, very depressing to read someone as seasoned and knowledgeable as Granatstein write:

But the estimates of costs, and the spin that has so exercised the Auditor-General, the media and the Opposition, are shaped and massaged by the deputy minister, in effect DND’s chief financial officer, who advises the minister of national defence.

Errr… Really? I think they are shaped by them at the direction, or with the approval, of the Minister of National Defence. I agree that the Minister and Cabinet probably are not up to speed on the latest in airframe technology and so probably aren’t hand-picking the fighter plane. But you know what they are up to speed on? Spinning budgets and political messages to sell to the public. To somehow try to deflect the blame onto the public servants feels, well, like yet another death knell for the notion of ministerial accountability.

But even Granatstein’s love of the F-35 is hard to grasp. Apparently:

“we cannot see into the future, and we do not know what challenges we might face. Who foresaw Canadian fighters participating in Kosovo a dozen years ago? Who anticipated the Libyan campaign?”

I’m not sure I want to live and die on those examples. I mean, in Libya alone our CF-18s were joined by F-16s, Rafales, Mirage 2000s and Mirage 2000Ds, Tornados, Eurofighter Typhoons, and JAS 39C Gripens (are you bored yet?). Apparently there were at least seven other choices that would have worked out okay for the mission. The Kosovo mission had an even wider assortment of planes. So this isn’t a matter of getting it “just right”; it’s more a case of “there are a lot of options that will work.”

But looking into the future there are some solid and strong predictions we can make:

1) Granatstein himself argued in 2010 that performing sovereignty patrols in the Arctic is one of the reasons we need to buy new planes. Here is a known future scenario. So frankly I’m surprised he’s bullish on the F-35, since the F-35 will not be able to operate in the Arctic for at least 5 years, and may not for even longer. Given that, in that same article, Granatstein swallowed hook, line and sinker the total-cost-of-ownership figures provided by the Department of National Defence – figures now revealed to be bogus – you’d think he might be more skeptical about other facts. Apparently not.

2) We can’t predict the future. I agree. But I’m going to make a prediction anyway. If Canada fights an enemy with any of the sophistication that would require us to have the F-35 (say, a China in 25 years) I predict that an F-35 will get shot down by a pilotless drone in that conflict.

What makes drones so interesting is that, because they don’t have to carry pilots, they can be smaller, faster and more maneuverable. Indeed, in the 1970s UAVs were able to outmaneuver the best US pilots of the day. Moreover, the world of aviation may change very quickly in the coming years. Everyone will tell you a drone can’t beat a piloted plane. This is almost certainly true today (although a pilotless drone almost shot down a MiG in Iraq in 2002).

But drones may have two things going for them. First, if drones become cheaper to build and operate, and you don’t have to worry about losing an expensive pilot, you may be able to make up for competency with numbers. Imagining an F-35 defeating a single drone – such as the US Navy’s experimental X-47B – is easy. What about defeating a swarm of five of them working seamlessly together?

Second, much like in nature, survival favours those who can reproduce frequently. The F-35 is expected to last Canada 30-35 years. Yes, there will be upgrades and changes, but that is a slow evolutionary pace. In that time, I suspect we’ll see at least 5 (and likely a lot more) generations of drones. And why not? There are no pilots to retrain, just new lessons from the previous generation of drones to draw from, and new technological and geopolitical realities to adapt to.

I’m not arguing that air-to-air-combat-capable drones are available today, but it isn’t unlikely that they could be available in 5-10 years. Of course, many air forces hate talking about this because, well, drones mean no more pilots, and air forces are composed of… well… pilots. But it does suggest that Canada could buy a fighter that is much cheaper and would still enable us to participate in missions like Kosovo and Libya, without locking us into a 30-35 year commitment at the very moment the military aerospace industry is entering what is possibly the most disruptive period in its history.

It would seem that, at the very least, since we’ve been misled about pretty much everything involved in this project, asking these questions now feels like fair game.

(Oh, and as an aside: as we decide to pay somewhere between $26-44 billion for fighter planes, our government cut the entire $5-million-a-year budget of the National Aboriginal Health Organization, which oversaw research and programs in areas like suicide prevention, tobacco cessation, housing and midwifery. While today Canada ranks 6th in the world in the UN’s quality of life index, it was calculated that in 2007 Canada’s First Nations population, had they been ranked as a separate group, would have ranked 63rd – right above healthy countries like Belarus, Russia and Libya. Well, at least now we’ll have less data about the problem, which means we won’t know to worry about it.)


Using BHAGs to Change Organizations: A Management, Open Data & Government Mashup

I’m a big believer in the ancillary benefits of a single big goal: set a goal that has one clear objective, but that, to be achieved, forces a bunch of other things to change as well.

So one of my favourite Big Hairy Audacious Goals (BHAGs) for an organization is to go paperless. I like the goal for all sorts of reasons. Like a true BHAG it is clear, compelling, and has an obvious “finish line.” And while hard, it is achievable.

It has the benefit of potentially making the organization more “green” but, what I really like about it is that it requires a bunch of other steps to take place that should position the organization to become more efficient, effective and faster.

This is because paper is dumb technology. Among many, many other things, information on paper can’t be tracked, changes can’t be noted, pageviews can’t be recorded, data can’t be linked. It is hard to run a lean business when you’re using paper.

Getting rid of it often means you have to get a better handle on workflow and processes so they can be streamlined. It means rethinking the tools you use. It means getting rid of checks in favour of direct deposit, moving off letters and onto email, getting your documents, agendas, meeting minutes, policies and god knows what else out of MS Word and onto wikis, and shifting from printed product manuals to PDFs or, better still, YouTube videos. These changes in turn require a rethinking of how your employees work together and the skills they require.

So what starts off as a simple goal – getting rid of paper – pretty soon requires some deep organizational change. Of course, rallying cries like “more efficient processes!” or “better understanding our workflow!” have pretty limited appeal and can be hard for everyone to wrap their heads around. “Getting rid of paper,” however? It is simple, clear and, frankly, something that everyone in the organization can probably contribute an idea towards achieving. And it will accomplish many of the less sexy but more important goals.

Turns out some governments may be thinking this way.

The State of Oklahoma has a nice website that talks about all its “green” initiatives. Of course, it just so happens that many of these initiatives – reducing travel, getting rid of paper, etc. – also reduce costs and improve service, and are easier to measure. I haven’t spoken with anyone at the State of Oklahoma to confirm that this is the real goal, but the website seems to acknowledge that it is:

OK.gov was created to improve access to government, reduce service-processing costs and enable state agencies to provide a higher quality of service to their constituents.

So for Oklahoma, going paperless becomes a way to get at some larger transformations. Nice BHAG. Of course, as with any good BHAG, you can track these changes and share them with your shareholders, stakeholders or… citizens.

And behold! The Oklahoma go-green website invites different state agencies to report data on how their online services reduce paper consumption and/or carbon emissions – data that the state in turn tracks and shares with the public via its Socrata data portal. This graph shows how much agencies have reduced their paper output over the past four years.

Notice how some departments have no data – if I were an Oklahoma taxpayer, I’m not sure I’d be thrilled with them. But take a step back. This is a wonderful example of how transparency and open data can help drive a government initiative. Not only can that data make it easier for the public to understand what has happened (and so be more readily engaged), but it can help cultivate a culture of accountability as well as – and perhaps more importantly – a culture of metrics that I believe will be critical for the future of government.
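It also means anyone can audit the numbers themselves. Socrata portals expose every dataset through the SODA API as a simple JSON endpoint. Here is a minimal sketch of tallying reported paper savings, assuming the portal lives at data.ok.gov – the dataset id (abcd-1234) and the column names (agency, sheets_saved) are placeholders I’ve invented, so look up the real ones on the portal:

```python
import requests  # pip install requests

# Socrata exposes every dataset at /resource/<dataset-id>.json (SODA API).
# "abcd-1234" and the column names below are hypothetical placeholders.
BASE = "https://data.ok.gov/resource/abcd-1234.json"

resp = requests.get(BASE, params={"$limit": 1000})
resp.raise_for_status()
rows = resp.json()

# Sum reported paper savings by agency.
totals = {}
for row in rows:
    agency = row.get("agency", "unknown")
    totals[agency] = totals.get(agency, 0) + float(row.get("sheets_saved", 0))

for agency, sheets in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{agency}: {sheets:,.0f} sheets saved")
```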

I often say to governments: “be strategic about how you use some of the data you make open.” Don’t just share a bunch of stuff; use what you share to achieve policy or organizational objectives. This is a great example. It’s also potentially a great example of organizational change in a large and complex environment. Interesting stuff.


Next Generation Open Data: Personal Data Access

Background

This Monday I had the pleasure of being in Mexico City for the OECD’s High Level Meeting on e-Government. CIOs from a number of countries – Australia, Canada, the UK and Mexico, among others – were present. But what really got me going was a presentation by Chris Vein, the Deputy United States Chief Technology Officer for Government Innovation.

In his presentation he referenced work around the Blue Button and the Green Button – both efforts I was previously familiar with. But my conversation with Chris sparked several new ideas and reminded me of just how revolutionary these initiatives are.

For those unacquainted with them, here’s a brief summary:

The Blue Button Initiative emerged out of the US Department of Veterans Affairs (VA) with a simple goal: create a big blue button on the VA website that would enable a logged-in user to download their health records. That way they can share those records with whomever they wish – a new doctor, a hospital, an application – or even just look at them themselves. The idea has been deemed so good, so important and so popular that it is now being championed as an industry standard, something that not just the VA but all US health providers should adopt.

The Green Button Initiative is similar. I first read about it on ReadWriteWeb under the catchy and insightful title “Green Button” Open Data Just Created an App Market for 27M US Homes. Essentially, the Green Button enables users to download their energy consumption data from their utility. In the United States, 9 utilities have already launched Green Buttons, and an app ecosystem – applications that enable people to monitor their energy use – is starting to emerge. Indeed, Chris Vein talked about one app that enables a user to see their thermostat in real time and then assess the financial and environmental implications of raising and/or lowering it. I personally see the Green Button evolving into an API that you can give others access to… but that is a detail.
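Part of what makes a common standard powerful is that a few lines of code can read any compliant utility’s export. Green Button files are Atom feeds built on the NAESB ESPI XML schema; here is a minimal sketch of pulling out interval readings. The element names follow my reading of the ESPI schema and the file name is a placeholder, so verify against a real export (note, too, that the units of each value depend on the feed’s associated ReadingType):

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# Namespace used by the NAESB ESPI schema underlying Green Button exports.
ESPI = {"espi": "http://naesb.org/espi"}

def readings(path):
    """Yield (start_time, duration_seconds, raw_value) per interval reading."""
    tree = ET.parse(path)
    for r in tree.iter("{http://naesb.org/espi}IntervalReading"):
        start = int(r.find("espi:timePeriod/espi:start", ESPI).text)
        duration = int(r.find("espi:timePeriod/espi:duration", ESPI).text)
        value = int(r.find("espi:value", ESPI).text)  # units set by ReadingType
        yield datetime.fromtimestamp(start, tz=timezone.utc), duration, value

# "usage.xml" stands in for a file downloaded via a utility's Green Button.
for start, duration, value in readings("usage.xml"):
    print(f"{start.isoformat()}  {duration}s  {value}")
```

Once the readings are out of the XML, the thermostat-style apps Chris Vein described are just arithmetic and presentation on top.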

Why it Matters

Colleagues like Nigel Shadbolt in the UK have talked a lot about enabling citizens to get their data out of websites like Facebook. And Google has its own very laudable Data Liberation Front, run by great guy and werewolf expert Brian Fitzpatrick. But what makes the Green Button and Blue Button initiatives unique and important is that they create a common industry standard for sharing consumer data. This creates incentives for third parties to develop applications and websites that can analyze the data, because those applications will scale across jurisdictions – hence the ReadWriteWeb article’s focus on a new market. It also makes the data easy to share. Healthcare records downloaded using the Blue Button are easily passed on to a new doctor or a new hospital, since people can now design systems to consume these records. Most importantly, it gives individuals the option of sharing these records themselves, so they don’t have to wait for lumbering bureaucracies.

This is a whole new type of open data. Open not to the public but to the individual to whom the data really belongs.

A Proposal

I would love to see the Blue Button and Green Button initiatives spread to companies and jurisdictions outside the United States. There is no reason why, for example, there cannot be Blue Buttons on provincial health care websites in Canada, or in the UK. Nor is there any reason why energy providers like BC Hydro or Bullfrog Power (there’s a progressive company that would get this) couldn’t implement the Green Button. Doing so would enable Canadian software developers to create applications that use this data to help citizens, and to tap into the US market. Conversely, Canadian citizens could tap into applications created in the US.

The opportunity here is huge. Not only could this revolutionize citizens’ access to their own health and energy consumption data, it would reduce the costs of sharing health care records, which in turn could create savings for the industry at large.

Action

If you are a consumer, tell your local health agency, insurer and energy utility about this.

If you are an energy utility or a Ministry of Health and are interested in this – please contact me.

Either way, I hope this is interesting. I believe there is huge potential in Personal Open Data, particularly around data currently held by crown corporations and in critical industries, like healthcare.

When Industries Get Disrupted: The Toronto Real Estate Board’s Sad Campaign

As some of my readers know I’ve been engaged by the real estate industry at various points over the last year to share thoughts about how they might be impacted in a world where listings data might be more open.

So I was saddened to read the other day about the misleading campaign the Toronto Real Estate Board (TREB) has launched against the Competition Bureau. It’s got all the makings of a political attack ad: ominous warnings, supportive polling and a selective use of facts. You can check it out at the Protectyourprivacy.ca website. (As an aside, those of us concerned with online issues should be beating ourselves up for letting TREB snag that URL. There are literally dozens of more compelling uses for that domain, from Bill C-30 to advocacy around privacy settings in Facebook or Google.)

The campaign does, however, make a wonderful mini-case study in how some industries react when confronted with disruptive change. They don’t try to innovate out of the problem, they go to the lawyers (and the pollsters and marketers). To be fair, not everyone in the Real Estate industry is behaving this way. Over the past several months I’ve had the real pleasure of meeting many, many real estate agents across the country who have been finding ways to innovate.

Which is why I suspect this campaign is actually quite divisive. Indeed, since the public doesn’t really know or care who does what in the real estate industry, they’re just going to lump everyone in together. Consequently, if this campaign backfires (and there is a risk that, if anyone pays attention to it, it could) then the entire industry could be tarred, not just those at TREB.

So what is the big scary story? Well according to TREB the Competition Bureau has gone rogue and is going to force Canadians to disclose their every personal detail to the world! Specifically, in the words of the Protectyourprivacy.ca website:

The Competition Bureau is trying to dismantle the safeguards for consumers’ personal and private information.

If they get their way, your sensitive personal home information could be made publicly available to anyone on the internet.

Are your alarm bells going off yet? If you’re like me, the answer is probably yes. But if you’re like me, it’s not for any of the reasons TREB wants.

To begin with, Canada has a fairly aggressive Privacy Commissioner who is very willing to speak her mind. I suspect she (and possibly her provincial counterparts) was consulted before the Competition Commissioner issued her request. And, like most Canadians, I trust the Privacy Commissioner more than TREB. Notably, she’s been fairly quiet on this issue.

But of course, why speculate! Let’s go straight to the source. What did the Competition Bureau actually ask for? Well, you can find all the relevant documents here (funny how TREB’s campaign website does not link to any of them) – check them out yourself. Here is my breakdown of the issue:

1. This is actually about enabling new services – TREB essentially uses MLS (the online listing service where you look for homes) as a mechanism to prevent new ways of looking for homes online from emerging. I suspect that consumers are not well served by this outcome. That is certainly how the Competition Bureau feels.

2. The Competition Bureau is not asking for information like your name and street address to be posted online for all to see (although I actually think consumers should be given that choice). Indeed, you can tell a lawyer was involved in drafting the protectyourprivacy.ca website. There are all these strategically inserted “could”s, as in “your sensitive personal home information could be made publicly available.” Err… that’s a fair degree less alarming.

What the Competition Bureau appears to want is to enable brokers’ clients to browse homes on a password-protected site (called a “virtual office website”). Here they could get more details than what is currently available to the public at large on MLS. However, even these password-protected sites might not include things like the current occupant’s name. They would, however (or at least hopefully), include previous sale prices, since knowing the history of the market is quite helpful. I think most consumers would agree that a little more transparency around pricing in the real estate industry would be good for them.

3. Of course, anything that happens on such a website would still have to comply with privacy laws and would, ultimately, still require the seller’s consent.

According to TREB, however, implementing these recommendations will lead to mayhem and death. Literally. Here is a quote from their privacy officer:

“There is a real possibility of break-ins and assaults; you only have to read the headlines to imagine what might happen. You hear stories about realtors getting attacked and killed. Can you imagine if we put that information out there about consumers? You can only imagine the headlines.”

Happily, the Globe confirmed that the Toronto police are not aware of realtors being targeted for attack.

But here is the real punchline. Everything the Competition Commissioner is asking for already exists in places like Nova Scotia or across the entire United States.

Here’s what these lucky jurisdictions have not experienced: a rash of violence resulting from burglars and others browsing homes online (mostly because, if they were going to do that… they could JUST USE GOOGLE STREET VIEW).

And here’s what they have experienced: an explosion in new and innovative ways to browse, buy and sell homes. From Trulia to Zillow to Viewpoint, consumers can get a radically better online experience than what is available in Toronto.

I suspect that if consumers actually hear about this campaign many – including most under the age of 40 – are going to see it as an effort by an industry to protect itself from new competition, not as an effort to protect them. If the story does break that way, it will be evidence to many consumers that the gap between them and the Real Estate industry is growing, not shrinking.


Data.gc.ca – Data Sets I Found Interesting, and Some Suggestions

Yesterday was the one-year anniversary of the Canadian federal government’s open data portal. Over the past year government officials have been continuously adding to the portal, but since it isn’t particularly easy to browse data sets on the website, I’ve noticed a lot of people aren’t aware of what data is now available (self included!). Consequently, I want to encourage people to scan the available data sets and blog about ones that they think might be interesting to them personally, to others, or to communities of interest they may know.

Such an undertaking has been rendered MUCH easier thanks to the data.gc.ca administrators’ decision to publish a list of all the data sets available on the site. It turns out there are 11,680 data sets listed in this file. Of course, reviewing all this data took me much longer than I thought it would (and, to be clear, I didn’t explore each one in detail), but the process has been deeply interesting. Below are some thoughts, ideas and data sets that have come out of this exploration – I hope you’ll keep reading, and that it will be of interest to ordinary citizens, prospective data users and managers of open government data portals.

[Image: a tag cloud of the data sets on data.gc.ca]

Some Brief Thoughts on the Portal (and for others thinking about exploring the data)

Trying to review all the data sets on the portal is an enormous task, and attempting it has taught me some lessons about what works and what doesn’t. The first is that, while the search function on the website is probably good if you have a keyword or a specific data set you are looking for, it is much easier to browse the data in an Excel spreadsheet than on the website. What was particularly nice about this is that, in the spreadsheet, the data was often clustered by type. This made it easy to spot related data sets – a great example: when I found the data on “Building permits, residential values and number of units, by type of dwelling,” I could immediately see there were about 12 other data sets on building permits available.

Another issue that became clear to me is the problem of how a data set is classified. For example, because of the way the data is structured (really as a report), the Canadian Dairy Exports data has a unique data file for every month and year (you can look at May 1988 as an example). That means each month is counted as a unique “data set” in the catalog. Of course, French and English versions are also counted as unique. This means that what I would consider to be a single data set – Canadian Dairy Exports by dairy year, from 1988 to the present – actually counts as 398 data sets. This has two outcomes. First, it is hard to imagine anyone wants the data for just one month, so a user looking for longitudinal data on this subject has to download 199 distinct data sets (very annoying). Why not just group them into one? Second, given that governments like to keep score about how many data sets they share, counting each month as a unique data set feels… unsportsmanlike. To be clear, this outcome is an artifact of how Agriculture Canada gathers and exports this data, but it is an example of the types of problems an open data catalog needs to come to grips with.
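
(As an aside, a developer facing those 199 downloads today would probably just script the stitching-together. Here is a rough sketch of what that might look like – the URL pattern is entirely hypothetical; you would substitute the real links from the catalogue spreadsheet.)

```python
# A sketch of stitching the per-month dairy "data sets" back into one
# longitudinal file. The URL pattern is hypothetical -- substitute the real
# links from the catalogue spreadsheet.
import csv
import io
import urllib.request

BASE = "http://example.gc.ca/dairy-exports/{y}-{m:02d}_eng.csv"  # hypothetical

header, rows = None, []
for y in range(1988, 2013):
    for m in range(1, 13):
        try:
            raw = urllib.request.urlopen(BASE.format(y=y, m=m)).read()
        except OSError:
            continue  # that month was never published
        reader = csv.reader(io.StringIO(raw.decode("utf-8")))
        header = next(reader)  # same header every month (an assumption)
        for r in reader:
            rows.append([y, m] + r)  # tag each row with its month

with open("dairy_exports_1988_present.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["year", "month"] + (header or []))
    writer.writerows(rows)
```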

Finally, many users – particularly, but not exclusively, developers – are looking for data that is up to date. Indeed, real-time data is particularly sexy, since its dynamic nature means you can do interesting things with it. Thus it was frustrating to occasionally find data sets that were no longer being collected. A great example of this was the Provincial allocation of corporate taxable income, by industry. This data set jumped out at me as I thought it could be quite interesting. Sadly, StatsCan stopped collecting this data in 1987, so any visualization will have limited use today. This is not to say data like this should be pulled from the catalog, but it might be nice to distinguish between data sets that are being collected on an ongoing basis and those that are no longer being updated.

Data Sets I found Interesting

Before I begin, some quick thoughts on my very unscientific methodology for identifying interesting data sets.

  • First, browsing the data sets really brought home to me how many of them will be interesting to different groups – we really are in the world of the long tail of public policy. As a result, there is lots of data not on this list that I think will be interesting to many, many people.
  • Second, I tried not to include too much of StatsCan’s data. StatsCan data already has a fairly well developed user base, and while I’m confident that base is going to get bigger still now that the data is free, I figure there are already a number of people who will be sharing and talking about it.
  • Finally, I’ve tried to identify some data sets that I think would make for good mashups or apps. This isn’t easy with federal government data sets, since they tend to be more aggregate and high-level than, say, municipal data sets… but I’ve tried to tease out what I can. That said, I’m sure there is much, much more.

New GeoSpatial API!

So the first data set is a little bit of a cheat, since it is not on the open data portal, but I was emailed about it yesterday and it is so damn exciting I’ve got to share it. It is a recently released public BETA of a new RESTful API from the very cool people at GeoGratis that provides a consolidated access point to several repositories of geospatial data and information products, including GeoGratis, GeoPub and Mirage. (A huge thank you to the GeoGratis team for sending this to me.)

Documentation can be found here (and in French here), and a sample search client that demonstrates some of its functionality and how to interact with the API can be found here. Formats include ATOM, HTML Fragment, CSV, RSS, JSON, and KML – so you can see results, for example, in Google Earth by using the KML format (example here).

I’m also told that these fine folks have been working on a geolocation service, so you can do sexy things like search by place name, by NTS map or by the first three characters of a postal code. Documentation will be posted here in English and French. Super geeks may notice that there is a field in the JSON called CGNDBkey. I’m also told you can use this key to select an individual place name according to the Geographical Names Board of Canada. Finally, you can also search all their metadata through search engines like Google (here is a sample search for gold they sent me).

All data is currently licensed under the GeoGratis licence.
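
To give a flavour of how simple a RESTful API like this can be to use, here is a rough sketch of a search client. Fair warning: the endpoint path, parameter names and JSON layout below are my guesses, included as placeholders – check the posted documentation for the real ones.

```python
# A sketch of a search client for the new GeoGratis API. The endpoint path,
# parameter names and JSON layout are guesses/placeholders -- check the
# posted documentation for the real ones.
import json
import urllib.parse
import urllib.request

API = "http://geogratis.gc.ca/api/en/nrcan-rncan/ess-sst/"  # assumed base URL

def search(keyword):
    url = API + "?" + urllib.parse.urlencode({"q": keyword, "alt": "json"})
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for product in search("gold").get("products", []):  # assumed JSON shape
        print(product.get("title"), "->", product.get("url"))
```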

The National Pollutant Release Inventory

Description: The National Pollutant Release Inventory (NPRI) is Canada’s public inventory of pollutant releases (to air, water and land), disposals and transfers for recycling.

Notes: This is the same data set (but updated) that we used to create emitter.ca. I frankly feel that the opportunities around this data set – for environmentalists, investors (concerned about regulatory and lawsuit risks), the real estate industry, and others – are enormous. The public could be very interested in this.

Greenhouse Gas Emissions Reporting Program

Description: The Greenhouse Gas Emissions Reporting Program (GHGRP) is Canada’s legislated, publicly-accessible inventory of facility-reported greenhouse gas (GHG) data and information.

Notes: What’s interesting here is that while it doesn’t have lat/longs, it does have facility names and addresses. That means you should be able to cross-reference it with the NPRI (which does have lat/longs) to plot where the big greenhouse gas emitters are on a map. I think the same people interested in the NPRI might be interested in this data.
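
Here is a rough sketch of what that cross-referencing might look like (the file and column names are hypothetical – rename them to match the actual downloads):

```python
# A sketch of the cross-reference: borrow lat/longs from the NPRI to put
# GHGRP emitters on a map. File and column names are hypothetical -- rename
# them to match the actual downloads.
import pandas as pd

npri = pd.read_csv("npri_facilities.csv")   # has Latitude / Longitude
ghg = pd.read_csv("ghgrp_facilities.csv")   # has names and addresses only

def normalize(names):
    """Build a crude matching key from a column of facility names."""
    return names.str.upper().str.replace(r"[^A-Z0-9 ]", "", regex=True).str.strip()

npri["key"] = normalize(npri["FacilityName"])
ghg["key"] = normalize(ghg["FacilityName"])

merged = ghg.merge(npri[["key", "Latitude", "Longitude"]], on="key", how="inner")
print("matched", len(merged), "of", len(ghg), "GHG facilities")
merged.to_csv("ghg_with_coords.csv", index=False)  # ready to plot on a map
```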

The Canadian Ice Thickness Program

Description: The Ice Thickness program dataset documents the thickness of ice on the ocean. Measurements begin when the ice is safe to walk on and continue until it is no longer safe to do so. This data can help gauge the impact of global warming and is relevant to shipping in the north of Canada.

Notes: Students interested in global warming… this could make for some fun visualization.

Argo: Canadian Tracked Data

Description: Argo Data documents some of the approximately 3,000 profiling floats deployed around the world. Once at sea, a float sinks to a preprogrammed target depth of 2,000 meters for a preprogrammed period of time. It then floats to the surface, taking temperature and salinity readings at set depths during its ascent. The Canadian Tracked Argo Data describes the Argo programme in Canada and provides data and information about Canadian floats.

Notes: Okay, so I can think of no use for this data, but I just thought it was so awesome that people are doing this that I totally geeked out.

Civil Aircraft Register Database

Description: Civil Aircraft Register Database – this file contains the current mark, aircraft and owner information of all Canadian civil registered aircraft.

Notes: Here I really think there could be a geeky app – just a simple app that you can type an aircraft’s registration number into, and it will tell you the owner and details about the plane. I actually think the government could do a lot with this data: if regulatory and maintenance data were made available as well, then you’d have a powerful app that would tell you a lot about the planes you fly in. At a minimum, it would be of interest to flight enthusiasts.
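
Just to show how little code that geeky app would take, here is a sketch of a command-line version (the column names are hypothetical – match them to the actual register download):

```python
# A sketch of the lookup app as a command-line toy. Column names are
# hypothetical -- match them to the actual Civil Aircraft Register download.
import csv
import sys

def lookup(mark, path="civil_aircraft_register.csv"):
    """Return the register row for a registration mark, or None."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("Mark", "").upper() == mark.upper():
                return row
    return None

if __name__ == "__main__":
    record = lookup(sys.argv[1] if len(sys.argv) > 1 else "C-FABC")
    if record:
        for field in ("Mark", "Manufacturer", "Model", "Owner"):
            print(field + ":", record.get(field, "n/a"))
    else:
        print("No such registration found.")
```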

Real Time Hydrometric Data Tool

Description: Real Time Hydrometric Data Tool – this site provides public access to real-time hydrometric (water level and streamflow) data collected at over 1700 locations in Canada. These data are collected under a national program jointly administered under federal-provincial and federal-territorial cost-sharing agreements. It is through partnerships that the Water Survey of Canada program has built a standardized and credible environmental information base for Canada. This dataset contains both current and historical datasets. The current month can be viewed in an HTML table, and historical data can be downloaded in CSV format.

Notes: So ripe for an API! What is cool is that the people at Environment Canada have integrated it into Google Maps. I could imagine fly fishermen and communities at risk of flooding being interested in this data set.
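
For those communities at risk of flooding, here is a rough sketch of a tiny flood-watch script built on top of this data. The CSV URL and column name are hypothetical placeholders – in practice you would point it at a real station download from the Water Survey site.

```python
# A sketch of a tiny flood-watch script on top of the hydrometric data.
# The CSV URL and column name are hypothetical placeholders.
import csv
import io
import urllib.request

STATION_CSV = "http://example.wateroffice.ec.gc.ca/08MF005.csv"  # hypothetical
ALERT_LEVEL_M = 6.0  # water level, in metres, at which we want a warning

raw = urllib.request.urlopen(STATION_CSV).read().decode("utf-8")
readings = list(csv.DictReader(io.StringIO(raw)))
level = float(readings[-1]["Water Level (m)"])  # assumed column name

print("current level: %.2f m" % level)
if level >= ALERT_LEVEL_M:
    print("ALERT: at or above the flood-watch threshold")
```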

Access to information data sets

Description: 2006-2010 Access to Information and Privacy Statistics (with previous years here, here and here) is a compilation of statistical information about access to information and privacy, submitted by government institutions subject to the Access to Information Act and the Privacy Act, for 2006-2010.

Notes: I’d love to crunch this stuff again and see who’s naughty and nice in the ATIP world…

Poultry and Forestry data

No links, BECAUSE THERE IS SO MUCH OF IT. Anyone interested in the poultry or forestry industries will find lots of data… obviously this stuff is useful to people who analyze these industries, but I suspect there are a couple of “A”-grade university papers hidden in that data as well.

Building Permits

There are tons of data sets on building permits and construction. Indeed, this is one of the benefits of looking at the data in a spreadsheet – it’s easy to see other related data sets.

StatsCan

It really is amazing how much Statistics Canada data there is. Even reviewing something like the supply and demand of natural gas liquids got me thinking about the wealth of information trapped in there. One thing I do hope StatsCan starts to do is geolocate its data whenever possible.

Crime Data

As this has been in the news, I couldn’t help but include it. It’s nice that any citizen can look at the crime data direct from StatsCan to see how our crime rate is falling (which is, apparently, why we should build more expensive prisons): Crime statistics, by detailed offences. Of course unreported crime, which we all know is climbing at 3000% a year, is not included in these stats.

Legal Aid Applications

Legal aid applications, by status and type of matter. This was interesting to me since, here in BC, there is much talk about funding for the justice system, and yet the number of legal aid applications has remained more or less flat over the past five years.

National Broadband Coverage data

Description: The National Broadband Coverage Data represents broadband coverage information, by technology, for existing broadband service providers as of January 2012. Coverage information for Broadband Canada Program projects is included for all completed projects. Coverage information is aggregated over a grid of hexagons, each 6 km across. The estimated range of unserved/underserved population within each hexagon is included.

Notes: What’s nice is that there is lat/long data attached to all this, so mapping it, and potentially creating a heat map, is possible. I’m certain the people at OpenMedia would appreciate such a map.
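
A rough sketch of such a heat map (column names are hypothetical – rename them to match the actual file):

```python
# A sketch of the heat map: colour each hexagon's centroid by its estimated
# unserved population. Column names are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd

hexes = pd.read_csv("national_broadband_coverage.csv")

plt.figure(figsize=(10, 6))
plt.scatter(hexes["Longitude"], hexes["Latitude"],
            c=hexes["UnservedPopulation"], s=4, cmap="YlOrRd")
plt.colorbar(label="Estimated unserved / underserved population")
plt.title("Broadband coverage gaps, January 2012")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.savefig("broadband_gaps.png", dpi=150)
```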

Census Consolidated Subdivision

Description: The Census Consolidated Subdivision Cartographic Boundary Files portray the geographic limits used for the 2006 census dissemination. The files contain the boundaries of all 2,341 census consolidated subdivisions.

Notes: Obviously this one is on every data geek’s radar, but just in case you’ve been asleep for the past 5 months, I wanted to highlight it.

Non-Emergency Surgeries, distribution of waiting times

Description: Non-emergency surgeries, distribution of waiting times, household population aged 15 and over, Canada, provinces and territories

Notes: Would love to see this at the hospital and clinic level!

Border Wait Times

Description: Estimated border wait times (commercial and traveller flows) for the top 22 Canada Border Services Agency land border crossings.

Notes: Here I really think there is an app that could be made. At the very least there is something that could tell you historical averages; ideally, it could be integrated into Google and Bing maps when calculating trip times. I can also imagine that a lot of companies exporting goods to the US are concerned about this issue and would be interested in better data to predict the costs and times of shipping. Big potential here.
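
A sketch of the building block for such an app: the average historical wait by crossing, day of week and hour. The file, column and crossing names below are hypothetical.

```python
# A sketch of the building block: average historical wait by crossing, day of
# week and hour. File, column and crossing names are hypothetical.
import pandas as pd

waits = pd.read_csv("border_wait_times.csv", parse_dates=["Timestamp"])
waits["dow"] = waits["Timestamp"].dt.day_name()
waits["hour"] = waits["Timestamp"].dt.hour

profile = (waits.groupby(["Crossing", "dow", "hour"])["TravellersDelayMinutes"]
           .mean()
           .round(1))

# e.g. the expected delay at Peace Arch on a Friday at 5 pm:
print(profile.loc[("Douglas (Peace Arch)", "Friday", 17)], "minutes")
```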

Okay, that’s my list. Hope it inspires you to take a look yourself, or play with some of the data listed above!

Want to Find Government Innovation? The US Military is Often Leading the Way.

When it comes to seeing what trends will impact government in 20-30 years, I’m a big fan of watching the US military. They may do a lot of things wrong but, when it comes to government, they are on the bleeding edge of being a “learning organization.” It often feels like they are less risk-averse, more likely to experiment, and (as noted) more likely to learn than almost any government agency I can think of (hint: those things may be interconnected). Few people realize that to rise above the rank of colonel in many military organizations, you must have at least a master’s degree. Many complete PhDs. And these schools often turn into places where people challenge authority and the institution’s conventional thinking.

Part of it, I suspect, has to do with the whole “people die when you make mistakes” aspect of their work. It may also have to do with the seriousness with which they take their mandate. And part of it has to do with the resources they have at their disposal.

But regardless of the cause, I find they are often at the cutting edge of ideas in the public sector. For example, I can’t think of a government organization that empowers the lowest echelons of its employee base more than the US military. Their network-centric vision of the world means those on the front lines (both literally and figuratively) are often empowered, trusted and strongly supported with tools, data and technology to make decisions on the fly. In an odd way, the very hierarchical system that the rest of government has been modeled on has evolved into something different: still very hierarchical but, at the same time, networked.

Frankly, if a 1-800 call operator at Service Canada or the guy working at the DMV in Pasadena, CA (or even just their managers) had 20% of the autonomy of a US sergeant, I suspect government would be far more responsive and innovative. Of course, Service Canada or the California DMV would have to take a network-centric approach to their work… and that’s a ways off, since it demands serious cultural change – the hardest thing to shift in an organization.

Anyways… long rant. Today I’m interested in another smart call the US military is making, one that government procurement departments around the world should be paying attention to (I’m especially looking at you, Public Works – pushers of Beehive over GCPEDIA). This article, Open source helicopters trivialize Europe’s ODF troubles, on Computerworld’s Public Sector IT blog, outlines how the next generation of US helicopters will be built on an open platform. No more proprietary software that binds hardware to a particular vendor.

Money quote from the piece:

Weapons manufacturers and US forces made an unequivocal declaration for royalty-free standards in January through the FACE (Future Airborne Capabilities Environment) Consortium they formed in response to US Defence Secretary Leon Panetta’s call for a “common aircraft architecture and subsystems”.

“The FACE Standard is an open, nonproprietary technical specification that is publicly available without restrictive contracts, licensing terms, or royalties,” the Consortium announced from its base at The Open Group, the industry association responsible for the POSIX open Unix specification.

“In business terms, the open standards specified for FACE mean that programmers are freely able to use them without monetary remuneration or other obligation to the standards owner,” it said.

While business software producers have opposed governments that have tried to implement identical open standards policies with the claim it will handicap innovation and dampen competition, the US military is embracing open standards for precisely the opposite reasons.

So suddenly we are going to have an open source approach to innovation and program delivery (helicopter manufacturing, operation and maintenance) at major scale. Trust me: if the US military can do this with helicopters, you can convert your proprietary intranet to an open source wiki platform. I can’t believe the complexity is as great. But the larger point here is that this approach could be used to think about any system a government wants to develop, from earthquake monitoring equipment to healthcare systems to transit passes. From a “government as a platform” perspective, this could be a project to watch. Lots of potential lessons here.