
Canada's Action Plan on Open Government: A Review

The other day the Canadian Government published its Action Plan on Open Government, a high-level document that both lays out the Government's goals on this file and fulfills its pledge to create tangible goals as part of its participation in next week's Open Government Partnership 2012 annual meeting in Brazil.

So what does the document say and what does it mean? Here is my take.

Take Away #1: Not a breakthrough document

There is much that is good in the government's action plan – some of which I will highlight later. But for those hoping that Canada was going to catch the Gov 2.0 bug and try to leapfrog leaders like the United States or the United Kingdom, this document will disappoint. By and large this document is not about transforming government – even at its most ambitious it appears to be much more about engaging in some medium-sized experiments.

As a result the document emphasizes a number of things that the UK and US started doing several years ago, such as adopting a license that adheres to international norms, posting government resource allocation and performance management information online in machine-readable forms, and refining the open data portal.

What you don't see are explicit references to rethinking how government leverages citizens' experience and knowledge with a site like Challenge.gov, engaging experts in innovative ways such as with Peer to Patent, or working with industries or provinces to generate personal open data, as the US has done with the Blue Button (for healthcare) and the Green Button (for utilities).

Take Away #2: A Solid Foundation

This said, there is much in the document that is good. Specifically, in many areas, it does lay a solid foundation for some future successes. Probably the most important statements are the “foundational commitments” that appear on this page. Here are some key points:

Open Government Directive

In Year 1 of our Action Plan, we will confirm our policy direction for Open Government by issuing a new Directive on Open Government. The Directive will provide guidance to 106 federal departments and agencies on what they must do to maximize the availability of online information and data, identify the nature of information to be published, as well as the timing, formats, and standards that departments will be required to adopt… The clear goal of this Directive is to make Open Government and open information the ‘default’ approach.

This last sentence is nice to read. Of course the devil will be in the details (and in the execution), but establishing a directive around open information could end up being as important (although admittedly not as powerful – an important distinction) as the establishment of Access to Information. Done right, such a directive could vastly expand the range of documents made available to the public, something that should be very doable as more and more government documentation moves into digital formats.

For those complaining about the lack of ATI reform in the document, this directive and its creation will be worth further exploration. There is an enormous opportunity here to reset how government discloses information – and the "default to open" line creates a public standard that we can try to hold the government to account on.

And of course the real test for all this will come in years 2-3, when it comes time to disclose documents around something sensitive to the government – like, say, the Northern Gateway Pipeline (or something akin to the Afghan prisoner issue). In theory this directive should make all government research and assessments open; when that moment arrives we'll have a real test of the robustness of any such new directive.

Open Government License:

To support the Directive and reduce the administrative burden of managing multiple licensing regimes across the Government of Canada, we will issue a new universal Open Government License in Year 1 of our Action Plan with the goal of removing restrictions on the reuse of published Government of Canada information (data, info, websites, publications) and aligning with international best practices… The purpose of the new Open Government License will be to promote the re-use of federal information as widely as possible...

Full Disclosure: I have been pushing (in an unpaid capacity) for the government to reform its license and helping out in its discussions with other jurisdictions around how it can incorporate the best practices and most permissive language possible.

This is another important foundational piece. To be clear, this is not just an "open data" license. This is about creating a license for all government information and media. I suspect this appeals to this government in part because it ends the craziness of having lawyers across government constantly re-inventing new licenses and creating a complex set of licenses to manage. Let me be clear about what I think this means: this is functionally about neutering crown copyright. It's about creating a licensing regime that makes very clear what the user's rights are (which crown copyright does not do) and that is as permissive as possible about re-use (which crown copyright, because of its lack of clarity, is not). Achieving such a license is a critical step to doing many of the more ambitious open government and gov 2.0 activities that many of us would like to see happen.

Take Away #3: The Good and Bad Around Access to Information

For many, I think the biggest disappointment may be that the government has chosen not to try to update the Access to Information Act. It is true that updating the act is what the Access to Information Commissioners from across the country recommended in an open letter (recommendation #2 in their letter). Opening up the act likely carries a number of political risks – particularly for a government that has not always been forthcoming with documents (the Afghan detainee issue and the F-35 contract both come to mind) – however, I again propose that it may be possible to achieve some of the objectives around improved access through the Open Government Directive.

What I think shouldn’t be overlooked, however, is the government’s “experiment” around modernizing the administration of Access to Information:

To improve service quality and ease of access for citizens, and to reduce processing costs for institutions, we will begin modernizing and centralizing the platforms supporting the administration of Access to Information (ATI). In Year 1, we will pilot online request and payment services for a number of departments allowing Canadians for the first time to submit and pay for ATI requests online with the goal of having this capability available to all departments as soon as feasible. In Years 2 and 3, we will make completed ATI request summaries searchable online, and we will focus on the design and implementation of a standardized, modern, ATI solution to be used by all federal departments and…

These are welcome improvements. As one colleague – James McKinney – noted, the fact that you currently have to pay by cheque means that only people with Canadian bank accounts can make ATIP requests, which largely means just Canadian citizens. This is ridiculous. Moreover, the process is slow and painful (who uses cheques any more? The Brits are phasing them out by 2018 – good on 'em!). The use of cheques creates a real barrier – particularly, I think, for young people.

Also, being able to search summaries of previous requests is a no-brainer.

Take Away #4: This is a document of experiments

As I mentioned earlier, outside the foundational commitments, the document reads less like a grand experiment and more like a series of small experiments.

Here the Virtual Library is another interesting commitment – certainly during the consultations the number one complaint was that people have a hard time finding what they are looking for on government websites. Sadly, even if you know the name of the document you want, it is still often hard to find. A virtual library is meant to address this concern – obviously it is all going to be in the implementation – but it is a response to a genuine expressed need.

Meanwhile the Advancing Recordkeeping in the Government of Canada and User-Centric Web Services feel like projects that were maybe already in the pipeline before Open Government came on the scene. They certainly do conform with the shared services and IT centralization announced by Treasury Board last year. They could be helpful but honestly, these will all be about execution since these types of projects can harmonize processes and save money, or they can become enormous boondoggles that everyone tries to work around since they don’t meet anyone’s requirements. If they do go the right way, I can definitely imagine how they might help the management of ATI requests (I have to imagine it would make it easier to track down a document).

I am deeply excited about the implementation of the International Aid Transparency Initiative (IATI). This is something I've campaigned for and urged the government to adopt, so it is great to see. I think these types of cross-jurisdictional standards have a huge role to play in the open government movement, so joining one, figuring out what about the implementation works and doesn't work, and assessing its impact is important both for Open Government in general and for Canada, as it will let us learn lessons that, I hope, will become applicable in other areas as more of these types of standards emerge.

Conclusion:

I think it was always going to be a stretch to imagine Canada taking a leadership role in the Open Government space, at least at this point. Frankly, we have a lot of catching up to do just to draw even with places like the US and the UK, which have been working hard to keep experimenting with new ideas in the space. What is promising about the document is that it does present an opportunity for some foundational pieces to be put into play. The bad news is that real efforts to rethink government's relationship with citizens, or even the role of the public servant within a digital government, have not been taken very far.

So… a C+?

 

Additional disclaimer: As many of my readers know, I sit on the Federal Government’s Open Government Advisory Panel. My role on this panel is to serve as a challenge function to the ideas that are presented to us. In this capacity I share with them the same information I share with you – I try to be candid about what I think works and doesn’t work around ideas they put forward. Interestingly, I did not see even a draft version of the Action Plan until it was posted to the website and was (obviously by inference) not involved in its creation. Just want to share all that to be, well, transparent, about where I’m coming from – which remains as a citizen who cares about these issues and wants to push governments to do more around gov 2.0 and open gov.


Calculating the Value of Canada’s Open Data Portal: A Mini-Case Study

Okay, let's geek out on some open data portal stats from data.gc.ca. I've got three parts to this review: first, a look at how to assess the value of data.gc.ca; second, a look at the most downloaded data sets; and third, some interesting data about who is visiting the portal.

Before we dive in, a thank you to Jonathan C, who sent me some of this data the other day after requesting it from Treasury Board, the ministry within the Canadian Government that manages the government's open data portal.

1. Assessing the Value of data.gc.ca

Here is the first thing that struck me. Many governments talk about how they struggle to find methodologies to measure the value of open data portals/initiatives. Often these assessments focus on things like number of apps created or downloaded. Sometimes (and incorrectly in my mind) pageviews or downloads are used. Occasionally it veers into things like mashups or websites.

However, one fairly tangible value of open data portals is that they cheaply resolve some access to information requests –  a point I’ve tried to make before. At the very minimum they give scale to some requests that previously would have been handled by slow and expensive access to information/freedom of information processes.

Let me share some numbers to explain what I mean.

The Canadian Government is, I believe, only obligated to fulfill requests that originate within Canada. Drawing from the information in the charts later in this post, let's assume there were a total of roughly 2,200 downloads in January and that about a third of these – say 726 – originated from Canada. Thanks to some earlier research, I happen to know that the Office of the Information Commissioner has assessed that the average cost of fulfilling an access to information request in 2009-2010 was $1,332.21.

So in a world without an open data portal the hypothetical cost of fulfilling these “Canadian” downloads as formal access to information requests would have been $967,184.46 in January alone. Even if I’m off by 50%, then the cost – again, just for January – would still sit at $483,592.23. Assuming this is a safe monthly average, then over the course of a year the cost savings could be around $11,606,213.52 or $5,803,106.76 – depending on how conservative you’d want to be about the assumptions.
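For anyone who wants to check or adjust the arithmetic, here is a minimal sketch that reproduces the figures above. The download count, the Canadian share and the per-request cost are this post's assumptions, not official figures.

```python
# Back-of-the-envelope savings estimate; all inputs are this post's assumptions.
canadian_downloads = 726                  # assumed "Canadian" downloads in January
cost_per_ati_request = 1332.21            # avg. cost of an ATI request, 2009-2010

january = canadian_downloads * cost_per_ati_request
print(f"January estimate:         ${january:,.2f}")       # $967,184.46
print(f"January (50% haircut):    ${january / 2:,.2f}")   # $483,592.23
print(f"Annualized estimate:      ${january * 12:,.2f}")  # $11,606,213.52
print(f"Annualized (50% haircut): ${january * 6:,.2f}")   # $5,803,106.76
```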

Of course, I'm well aware that not every one of these downloads would have been an information request in a pre-portal world – that process is simply too burdensome. You have to pay a fee, and it has to be by cheque (who pays for anything by cheque any more?), so many of these users would simply have abandoned their search for government information. So some of these savings would not have been realized. But that doesn't mean there isn't value. Instead, the open data portal is able to more cheaply reveal latent demand for data. In addition, only a fraction of the government's data is presently on the portal – so all these numbers could get bigger still. And finally, I'm only assessing downloads that originated inside Canada in these estimates.

So I’m not claiming that we have arrived at a holistic view of how to assess the value of open data portals – but even the narrow scope of assessment I outline above generates financial savings that are not trivial, and this is to say nothing of the value generated by those who downloaded the data – something that is much harder to measure – or of the value of increased access to Canadians and others.

2. Most Downloaded Datasets at data.gc.ca

This is interesting because… well… it’s just always interesting to see what people gravitate towards. But check this out…

Data sets like the Anthropogenic disturbance footprint within boreal caribou ranges across Canada may not seem interesting, but the groundbreaking agreement between the Forest Products Association of Canada and a coalition of environmental non-profits – known as the Canadian Boreal Forest Agreement (CBFA) – uses this data set heavily to assess where the endangered woodland caribou are most at risk. There is no app, but the data is critical both in protecting this species and in finding a way to sustainably harvest wood in Canada. (Note: I worked as an adviser on the CBFA, so I am a) a big fan and b) not making this stuff up.)

It is fascinating that immigration and visa data tops the list. But it really shouldn't be a surprise. We are, of course, a nation of immigrants. I'm sure that immigration and visa advisers, to say nothing of think tanks, municipal governments, social service non-profits and English-as-a-second-language schools, are all very keen on using this data to help them understand how they should be shaping their services and policies to target immigrant communities.

There is, of course, weather. The original open government data set. We have made this data open for hundreds of years – so useful and so important that it had to be made open.

And it is nice to see Sales of fuel used for road motor vehicles, by province and territory. If you wanted to figure out the carbon footprint of vehicles, by province, I suspect this is a nice dataset to get. It is probably also useful for understanding gas prices, as it might let you get a handle on demand. Economists probably like this data set.
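To give a sense of the kind of analysis this dataset enables, here is a small sketch of a fuel-to-carbon calculation. The emission factor and the sales figures below are illustrative assumptions, not values from the StatsCan dataset.

```python
# Rough CO2 estimate from gasoline sales; all inputs here are illustrative.
KG_CO2_PER_LITRE_GASOLINE = 2.3   # approximate combustion emission factor (assumed)

# hypothetical net gasoline sales, in thousands of litres
fuel_sales_kilolitres = {
    "British Columbia": 4_500_000,
    "Ontario": 16_000_000,
}

for province, kilolitres in fuel_sales_kilolitres.items():
    litres = kilolitres * 1_000
    tonnes_co2 = litres * KG_CO2_PER_LITRE_GASOLINE / 1_000   # kg -> tonnes
    print(f"{province}: ~{tonnes_co2:,.0f} tonnes of CO2")
```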

All this to say, I'm less skeptical than before about the data sets in data.gc.ca. With the exception of weather, these data sets aren't likely useful to software developers – the group I tend to hear most from – but then I've always posited that apps were only going to be a tiny part of the open data ecosystem. Analysis is king for open data, and there do appear to be people out there who are finding data of value for the analyses they want to make. That's a great outcome.

Here are the tables outlining the most popular data sets since launch and (roughly) in February.

  Top 10 most downloaded datasets, since launch

DATASET | DEPARTMENT | DOWNLOADS
1. Permanent Resident Applications Processed Abroad and Processing Times (English) | Citizenship and Immigration Canada | 4730
2. Permanent Resident Summary by Mission (English) | Citizenship and Immigration Canada | 1733
3. Overseas Permanent Resident Inventory (English) | Citizenship and Immigration Canada | 1558
4. Canada – Permanent residents by category (English) | Citizenship and Immigration Canada | 1261
5. Permanent Resident Applicants Awaiting a Decision (English) | Citizenship and Immigration Canada | 873
6. Meteorological Service of Canada (MSC) – City Page Weather | Environment Canada | 852
7. Meteorological Service of Canada (MSC) – Weather Element Forecasts | Environment Canada | 851
8. Permanent Resident Visa Applications Received Abroad – English Version | Citizenship and Immigration Canada | 800
9. Water Quality Indicators – Reports, Maps, Charts and Data | Environment Canada | 697
10. Canada – Permanent and Temporary Residents – English version | Citizenship and Immigration Canada | 625

Top 10 most downloaded datasets, for past 30 days

DATASET | DEPARTMENT | DOWNLOADS
1. Permanent Resident Applications Processed Abroad and Processing Times (English) | Citizenship and Immigration Canada | 481
2. Sales of commodities of large retailers – English version | Statistics Canada | 247
3. Permanent Resident Summary by Mission – English Version | Citizenship and Immigration Canada | 207
4. CIC Operational Network at a Glance – English Version | Citizenship and Immigration Canada | 163
5. Gross domestic product at basic prices, communications, transportation and trade – English version | Statistics Canada | 159
6. Anthropogenic disturbance footprint within boreal caribou ranges across Canada – As interpreted from 2008-2010 Landsat satellite imagery | Environment Canada | 102
7. Canada – Permanent residents by category – English version | Citizenship and Immigration Canada | 98
8. Meteorological Service of Canada (MSC) – City Page Weather | Environment Canada | 61
9. Sales of fuel used for road motor vehicles, by province and territory – English version | Statistics Canada | 52
10. Government of Canada Core Subject Thesaurus – English Version | Library and Archives Canada | 51

3. Visitor locations

So this is just plain fun. There is not a ton to derive from this – especially as IP addresses can, occasionally, be misleading. In addition, this is page view data, not download data. But what is fascinating is that computers in Canada are not the top source of traffic at data.gc.ca. Indeed, Canada’s share of the traffic is actually quite low. In fact, in January, just taking into account the countries in the chart (and not the long tail of visitors) Canada accounted for only 16% of the traffic to the site. That said, I suspect that downloads were significantly higher from Canadian visitors – although I have no hard evidence of this, just a hypothesis.

[Chart: data.gc.ca visits by country of origin]

• Total visits since launch: 380,276 user sessions

Let's Hack data.gc.ca

In just under two weeks data.gc.ca will celebrate its first anniversary. This will also mark the point at which the pilot project is officially supposed to end.

Looking at data.gc.ca, three things stand out. First, the license has improved a great deal since launch. Second, a LOT of data has been added to the site over the last year. And finally, the website is remarkably bad at helping people search for data and at enabling a community of users.

Indeed, I believe that a lot of people have stopped visiting the site and don’t even know what data is available. My suspicion is that almost none of us know what is actually available since a) there is a lot, b) much of it is not sexy and c) it is very hard to search.

Let’s do something about that.

I have managed to create, and upload to buzzdata, a list of all the data sets in data.gc.ca – both geographic and non-geographic data sets.

I'm proposing that we go through the data.gc.ca data sets and find what is interesting to each of us, and on March 15th find a way to highlight it or talk about it so that other people find out about it. Maybe you tweet about it (use the hashtag #gcdata) or blog about it.

Even more interesting would be if we could find a way to do it collaboratively – to have a way of collectively marking which data sets are interesting (in, say, a piratepad somewhere). If someone had a clever proposal about how to go through all the datasets, I'd love for us to collectively highlight the high-value datasets (if there are any) available in data.gc.ca.

Speaking with the great community of open data activists in Ottawa, we brainstormed about organizing an event after work on the 15th where people might get together and do this. We could call it “The Big Search” – an effort in any city where people are interested to gather and comb through the data. All with the goal of signaling to developers, non-profits, journalists and others, what, if any, data in data.gc.ca might be of interest for analysis, applications, or other uses. In addition, this exercise would also help us write supportive and critical comments about the government’s open data trial.

Finally, and most ambitiously, I’ve heard some people say they’d like to design an alternative data portal – I’m definitely game for that and am happy to offer up the datadotgc.ca url for that too.

So, I'm throwing this out there. If there is interest, please comment below. I would love to hear your thoughts, and I hope we can organize some events on March 15th, or at least post data sets that people think are interesting to blogs, Facebook and Twitter.

Public Servants Self-Organizing for Efficiency (and sanity) – Collaborative Management Day

Most of the time, when I engage with or speak to federal public servants, they are among the most eager to find ways to work around the bureaucracy in which they find themselves. They want to make stuff happen, and ideally, to make it happen right and more quickly. This is particularly true of younger public servants and those below middle management in general (I also find it is often the case of those at the senior levels, who often can’t pierce the fog of middle management to see what is actually happening).

I'm sure this dynamic is not new. In large bureaucracies around the world, the self-organizing capacity of public servants has forever been in a low-level guerrilla conflict with the hierarchies that both protect and restrain them. What makes all this more interesting today, however, is that never before have public servants had more independent capacity to self-organize, and never before have the tools at their disposal been more powerful.

So, for those who live or work in Ottawa and would like to learn some of the tools public servants are using to better network and get work done across groups and ministries, let me point you to "Collaborative Management Day 2012." (For those of us who aren't public servants, that link, which directs into GCPEDIA, won't work – but I'm confident it will work for insiders.) To be clear, it's the ideas that are batted around at events like this that I believe will shape how the government will work in the coming decades. Much like the boomers created the public service of today in the 1960s, millennials are starting to figure out how to remake it in a world of networks and diminished resources.

Good luck guys. We are counting on you.

Details:

When: Wednesday, January 25, 2012 from 8 a.m. to 4 p.m.

Where: Canada Aviation and Space Museum, 11 Aviation Parkway, Ottawa, ON or via Webcast

Cost: Free! Seats are limited; registration is required for attendance.

The GCPedia community defines collaboration as being “a recursive process where two or more people or organizations work together in an intersection of common goals—for example, an intellectual endeavour that is creative in nature—by sharing knowledge, learning and building consensus.” And this is exactly what the Collaborative Culture Camp (GOC3) will teach you to achieve at the next Collaborative Management Day on January 25, 2012.

This free event will offer you a day of workshops and learning sessions that will help you:

  • Expand your knowledge and use of collaborative tools and culture
  • Develop an awareness of alternative processes that deliver results
  • Understand how to foster an environment of openness and transparency
  • Develop networks to support the application of new tools

At the end of the day you will be able to bring a collaborative toolkit back to your organization to share with your employees and colleagues!

Keep up to date on the event by keeping an eye on our GCPedia pages and by following us on Twitter (@GOC_3) and watching the #goc3 conversation (no account needed to check out the conversation!).

Questions? Concerns? Feedback? Feel free to email the event organizers or leave a message on our Discussion page on GCPedia.

My Canadian Open Government Consultation Submission

Attached below is my submission to the Open Government Consultation conducted by Treasury Board over the last couple of weeks. A remarkable number of submissions appear to have been made by citizens, which you can explore on the Treasury Board website. In addition, Tracey Lauriault has tracked some of the submissions on her website.

I actually wish the submissions on the Government website were both searchable and downloadable in their entirety. That way we could re-organize them, visualize them, search and parse them, and generally play with the submissions so as to make the enormous number of answers easier to navigate and read. I can imagine a lot of creative ways people could re-format all that text and make it much more accessible and fun.

Finally, for reference, in addition to my submission I wrote this blog post a couple of months ago suggesting goals the government could set for itself as part of its Open Government Partnership commitments. Happily, since writing that post, the government has moved on a number of those recommendations.

So, below is my response to the government’s questions (in bold):

What could be done to make it easier for you to find and use government data provided online?

First, I want to recognize that a tremendous amount of work has been done to get the present website and number of data sets up online.

FINDING DATA:

My advice on making data easier to find: engage Socrata to create the front end. Socrata has an enormous amount of experience in how to share government data effectively. Consider http://data.oregon.gov – here is a site that is clean, easy to navigate, and offers a number of ways to access and engage with the government's data.

More specifically, what works includes:

1. Effective search: a simple search mechanism returns all results
2. Good filters: Because the data is categorized by type (internal vs. external, charts, maps, calendars, etc.) it is much easier to filter. One thing not seen on Socrata that would be helpful is the ability to sort by ministry.
3. Preview: Once I choose a data set I'm given a preview of what it looks like, which enables me to assess whether or not it is useful
4. Social: Here there is a ton on offer
– I'm able to sort data sets by popularity – being able to see what others find interesting is, in and of itself, interesting.
– Being able to easily share data sets via email, Twitter or Facebook means I'm more likely to find something interesting because friends will tell me about it
– Data sets can also be commented upon, so I can see what others think of the data, whether they think it is useful, and what they used it for.
– Finally, it would be nice if citizens could add metadata to make it easier for others to do keyword searches. If the government were worried about the wrong metadata being added, one could always offer a search with crowdsourced metadata included or excluded
5. Tools: Finally, there are a large number of tools that make it easier to quickly play with and make use of the data, regardless of one's skills as a developer. This makes the data much more accessible to the general public.

USING DATA

Finding data is part of the problem; being able to USE the data is a much bigger issue.

Here the single most useful thing would be to offer APIs into government data. My own personal hope is that one day there will be a large number of systems, both within and outside of government, that integrate government data right into their applications. For example, as I blogged about here – https://eaves.ca/2011/02/18/sharing-critical-information-with-public-lessons-for-governments/ – product recall data would be fantastic to have as an API so that major retailers could simply query it every time they scan inventory in a warehouse or at the point of sale; any product that appears on the list could then be automatically removed. Internally, border and customs officials could also query the API when scanning exports to ensure that nothing being exported has been recalled.
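As a sketch of what that point-of-sale integration could look like: the endpoint URL, parameters and response fields below are hypothetical (no such federal recall API exists today), so treat this as an illustration of the idea rather than a real interface.

```python
import requests

RECALL_API = "https://api.example.gc.ca/recalls"   # hypothetical endpoint

def is_recalled(upc: str) -> bool:
    """Check a scanned product's UPC against the (hypothetical) recall API."""
    response = requests.get(RECALL_API, params={"upc": upc}, timeout=5)
    response.raise_for_status()
    return len(response.json().get("recalls", [])) > 0

# At the warehouse or checkout, a scanned item could be flagged automatically:
if is_recalled("0123456789012"):
    print("Recalled product - pull from inventory")
```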

Second, if companies and non-profits are going to invest in using open data, they need assurances both that they are legally allowed to use the data and that the data isn't going to suddenly disappear on them. This means a robust license that is clear about reuse. The government would be wise to adopt the OGL or even improve on it. Better still, helping establish a standardized open data license for Canada, and ideally internationally, could help reduce some legal uncertainty for more conservative actors.

More importantly, and missing from Socrata's sites, would be a way of signalling how secure a data set's longevity is. For example, data sets that are required by legislation – such as the NPRI – are the least likely to disappear, whereas data sets like the long form census, which have no legal protection, could be seen as higher risk.

 

How would you use or manipulate this data?

I'm already involved in a number of projects that use and share government data. Among them are Emitter.ca, which maps and shares NPRI pollution data, and Recollect.net, which shares garbage collection calendar information.

While I’ve seen dramatically different uses of data, for me personally, I’m interested mostly in using data for thinking and writing about public policy issues. Indeed, much has been made of the use of data in “apps” but I think it is worth noting that the single biggest use of data will be in analysis – government officials, citizens, academics and others using the data to better understand the world around them and lobby for change.

This all said, there are some data sets that are of particular usefulness to people; these include:

1. Data sets on sensitive issues: this includes health, inspection and performance data (say, surgery outcomes for specific hospitals, or restaurant inspection data; crime and procurement data are also often in great demand).
2. Dynamic real-time data: data that is frequently updated (such as border, passport renewal or emergency room wait times). This data, shared in the right way, can often help people adjust schedules and plans or reallocate resources more effectively. Obviously this requires an API.
3. Geodata: Because GIS standards are very mature it is easy to "mash up" geo data to create new maps or offer new services. These common standards mean that geo data from different sources will work together or can be easily compared. This is in sharp contrast to, say, budget data, where there are few common standards around naming and organizing the data, making it harder to share and compare.

What could be done to make it easier for you to find government information online?

It is absolutely essential that all government records be machine readable.

Some of the most deplorable moments in open government occur when the government shares documents with the press, citizens or parliamentary officers in paper form. The first and most important thing to make government information easier to find online is to ensure that it is machine readable and keyword searchable. If it does not meet these criteria, I increasingly question whether it can be declared open.

As part of its Open Government Partnership commitments, it would be great for the government to commit to guaranteeing that every request for information it fulfills includes a digital, searchable version of the document.

Second, the government should commit to making every document it publishes available online. For example, I remember being told in 2009 that if I wanted a copy of the Health Canada report "Human Health in a Changing Climate: A Canadian Assessment of Vulnerabilities and Adaptive Capacity" I had to request a CD, which was then mailed to me with a PDF copy of the report on it. Why was the report not simply available for download? Because the Minister had ordered that it not appear on the website. Instead, I as a taxpayer had to watch more of my tax dollars be wasted so that someone could receive my mail, process it, and then mail me a custom-burned CD. Enabling ministers to create barriers to accessing government information, simply because they do not like its contents, is an affront to the use of taxpayer dollars and to our right to access information.

Finally, allow government scientists to speak directly to the media about their research.

It has become a recurring embarrassment. Scientists who work for Canada publish an internationally recognized, groundbreaking paper that provides some insight into the environment or geography of Canada, and journalists must talk to government scientists from other countries in order to get the details. Why? Because the Canadian government blocks access. Canadians have a right to hear the perspectives of the scientists their tax dollars paid for – and to enjoy the opportunity to become as well informed as the government on these issues.

Thus, lift the ban that blocks government scientists from speaking with the media.

 

Do you have suggestions on how the Government of Canada could improve how it consults with Canadians?

1. Honour Consultation Processes that have started

The process of public consultation is insulted when the government itself intervenes to bring the process into disrepute. The first thing the government could do to improve how it consults is to stop sabotaging processes that are already ongoing. The recent letter from Natural Resources Minister Joe Oliver regarding the public consultation on the Northern Gateway Pipelines has damaged Canadians' confidence in the government's willingness to engage in, and make effective use of, public consultations.

2. Focus on collecting and sharing relevant data

It would be excellent if the government shared relevant data from its data portal on the public consultation webpage. For example, in the United States, the government shares a data set with the number and location of spills from Enbridge pipelines; similar data for Canada would be ideal to share as part of a consultation. Also useful would be economic figures, job figures for the affected regions, and perhaps data about nearby parks (visits, acres of land, KML/shape boundary files). Indeed, data about the pipeline route itself that could be downloaded and viewed in Google Earth would be interesting. In short, there are all sorts of ways in which open data could help power public consultations.

3. Consultations should be ongoing

It would be great to see a 311-like application for the federal government – something that, when loaded up, would use GPS to identify the federally operated services, infrastructure or other resources near the user and allow the user to give feedback right then and there. Such "ongoing" public feedback could then be used as data when a formal public consultation process is kicked off.

 

Are there approaches used by other governments that you believe the Government of Canada could/should model?

1. The UK government's expense disclosure, and its release of the COINS database more generally, is probably the most radical act of government transparency to date. Given the government's interest in budget cuts, this is one area that might be of great interest to pursue.

2. For critical data sets – those that are either required by legislation or essential to the operation of a ministry or the government generally – it would be best to follow the model of the City of Chicago or Washington, DC and foster the creation of a data warehouse where this data could be easily shared both internally and externally (as privacy and security permit). These cities are leading governments in this space because they have tackled both the technical challenges (getting the data onto a platform where it can be shared easily) and the governance challenges (managing data sets from various departments on a shared piece of infrastructure).

 

Are there any other comments or suggestions you would like to make pertaining to the Government of Canada’s Open Government initiative?

Some additional ideas:

Redefine Public as Digital: Pass an Online Information Act

a) Any document it produces should be available digitally, in a machine readable format. The sham that the government can produce 3000-10,000 printed pages about Afghan detainees or the F-35 and claim it is publicly disclosing information must end.

b) Any data collected for legislative reasons must be made available – in machine readable formats – via a government open data portal.

c) Any information that is ATIPable must be made available in a digital format, and any excess costs of generating that information can be borne by the requester up until a certain date (say 2015), at which point the excess costs will be borne by the ministry responsible. There is no reason why, in a digital world, there should be any cost to extracting information – indeed, I fear a world where the government can't cheaply locate and copy its own information for an ATIP request, as it would suggest it can't get that information for its own operations.

Use Open Data to drive efficiency in Government Services: Require the provinces to share health data – particularly hospital performance – as part of the next funding agreement under the Canada Health Act.

Comparing hospitals to one another is always a difficult task, and open data is not a panacea. However, more data about hospitals is rarely harmful and there are a number of issues on which it would be downright beneficial. The most obvious of these would be deaths caused by infection. The number of deaths that occur due to infections in Canadian hospitals is a growing problem (sigh, if only open data could help ban the antibacterial wipes that are helping propagate them). Having open data that allows for league tables to show the scope and location of the problem will likely cause many hospitals to rethink processes and, I suspect, save lives.

Open data can supply some of the competitive pressure that is often lacking in a public healthcare system. It could also better educate Canadians about their options within that system, as well as make them more aware of its benefits.

Reduce Fraud: Create a Death List

In an era where online identity fraud is a problem, it is surprising to me that I'm unable to locate a database of expired social insurance numbers. Being able to query a list of social insurance numbers that belong to deceased people might be a simple way to prevent fraud. Interestingly, the United States has just such a list available for free online. (Side fact: known as the Social Security Death Index, this database is also beloved by genealogists, who use it to trace ancestry.)

Open Budget and Actual Spending Data

For almost a year the UK government has published all spending data, month by month, for each government ministry (down to £500 in some cases, £25,000 in others). Moreover, as an increasing number of local governments are required to share their spending data, it has led to savings, as governments begin to learn what other ministries and governments are paying for similar services.

Create a steering group of leading provincial and municipal CIOs to establish common schemas for core data about the country.

While open data is good, open data organized the same way across different departments and provinces is even better. When data is organized the same way it is easier for citizens to compare one jurisdiction against another, and for software solutions and online services to emerge that use that data to enhance the lives of Canadians. The Federal Government should use its convening authority to bring together some of the country's leading government CIOs to establish common data schemas for things like crime, healthcare, procurement, and budget data. The list of what could be worked on is virtually endless, but those four areas all represent data sets that are frequently requested, so they might make for a good starting point.
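To illustrate why a common schema matters, here is a tiny sketch: if two jurisdictions publish crime data with the same field names and units, comparison becomes trivial. The field names and figures below are invented purely for illustration; no such shared schema exists yet.

```python
# Two jurisdictions publishing records in the same (hypothetical) shared schema.
shared_fields = {"jurisdiction", "year", "offence", "rate_per_100k"}

records = [
    {"jurisdiction": "City A", "year": 2011, "offence": "break_and_enter", "rate_per_100k": 610},
    {"jurisdiction": "City B", "year": 2011, "offence": "break_and_enter", "rate_per_100k": 320},
]

# Validate that every record exposes the shared fields, then compare directly.
assert all(shared_fields <= record.keys() for record in records)
for record in sorted(records, key=lambda r: r["rate_per_100k"], reverse=True):
    print(f'{record["jurisdiction"]}: {record["rate_per_100k"]} incidents per 100,000 residents')
```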

Statistics Canada Data to become OpenData – Background, Winners and Next Steps

As some of you learned last night, Embassy Magazine broke the story that all of Statistics Canada’s online data will not only be made free, but released under the Government of Canada’s Open Data License Agreement (updated and reviewed earlier this week) that allows for commercial re-use.

This decision has been in the works for months, and while it does not appear to have been formally announced, Embassy Magazine does appear to have managed to get a Statistics Canada spokesperson to confirm it is true. I have a few thoughts about this story: Some background, who wins from this decision, and most importantly, some hope for what it will, and won’t lead to next.

Background

In the Embassy article, the spokesperson claimed this decision had been in the works for years, something that is probably technically true. Such a decision – or something akin to it – has likely been contemplated a number of times, and there have been a number of trials and projects that have allowed some data to be made accessible, albeit under fairly restrictive licenses.

But it is less clear that the culture of open data has arrived at StatsCan, and less clear to me that this decision was internally driven. I've met many a StatsCan employee who encountered enormous resistance while advocating for open data. I remember pressing the issue during a talk at one of the department's middle managers' conferences in November of 2008 and seeing half the room nod vigorously in agreement while the other half crossed its arms in strong disapproval.

Consequently, with the federal government increasingly interested in open data, coupled with a desire to have a good news story come out of StatsCan after last summer's census debacle, and with many decisions in Ottawa happening centrally, I suspect this decision occurred outside the department. This does not diminish its positive impact, but it does mean that a number of the next steps, many of which will require StatsCan to adapt its role, may not happen as quickly as some will hope, as the organization may take some time to come to terms with the new reality and the culture shift it will entail.

This may be compounded by the fact that there may be tougher news on the horizon for StatsCan. With every department required to have submitted proposals to cut its budget by either 5% or 10%, and with StatsCan having already seen a number of its programs cut, there may be fewer resources in the organization to take advantage of the opportunity that making its data open creates, or even just to adjust to what has happened.

Winners (briefly)

The winners from this decision are, of course, consumers of StatsCan's data. Indirectly, this includes all of us, since provincial and local governments are big consumers of StatsCan data and so now – assuming it is structured in such a manner – they will have easier (and cheaper) access to it. This is also true of large companies and non-profits, which have used StatsCan data to locate stores, target services and generally allocate resources more efficiently. The opportunity now opens for smaller players to benefit as well.

Indeed, this is the real hope: that a whole new category of winners emerges; that the barrier to use for software developers, entrepreneurs, students, academics, smaller companies and non-profits will be lowered in a manner that enables a larger community to make use of the data and therefore create economic or social goods.

Such a community, however, will take time to evolve, and will benefit from support.

And finally, I think StatsCan is a winner. This decision brings it more profoundly into the digital age. It opens up new possibilities and, frankly, pushes a culture change that I believe is long overdue. I suspect times are tough at StatsCan – although not as a result of this decision – and this decision creates room to rethink how the department works and thinks.

Next Steps

The first thing everybody will be waiting to see is exactly what data gets shared, in what structure and at what level of detail. Indeed, this question arose a number of times on Twitter, with people posting tweets such as "Cool. This is all sorts of awesome. Are geo boundary files included too, like Census Tracts and postcodes?" We shall see. My hope is yes, and I think the odds are good. But I could be wrong, at which point all this could turn into the most overhyped data story of the year. (Which actually matters now that data analyst is one of the fastest growing job categories in North America.)

Second, open data creates an opportunity for a new and more relevant role for StatsCan with a broader set of Canadians. Someone from StatsCan should talk to the data group at the World Bank about their transformation after they launched their open data portal (I'd be happy to make the introduction). That data portal now accounts for a significant portion of all the Bank's web traffic, and the group is going through a dramatic transformation, realizing they are no longer curators of data for bank staff and a small elite group of clients around the world but curators of economic data for the world. I'm told that, while the change has not been easy, a broader set of users has brought a new sense of purpose and identity. The same could be true of StatsCan. Rather than just an organization that serves the government of Canada and a select group of clients, StatsCan could become the curator of data for all Canadians. This is a much more ambitious, but I'd argue more democratic and important, goal.

And it is here that I hope other next steps will unfold. In the United States (which has had free census data for as long as anyone I talked to can remember), whenever new data is released the Census Bureau runs workshops around the country, educating people on how to use and work with its data. StatsCan and a number of other partners already do some of this, but my hope is that there will be much, much more of it. We need a society that is significantly more data literate, and StatsCan, along with the universities, colleges and schools, could have a powerful role in cultivating this. Tracey Lauriault over at the DataLibre blog has been a fantastic advocate of such an approach.

I also hope that StatsCan will take its role as data curator for the country very seriously and think of new ways that its products can foster economic and social development. Offering APIs into its data sets would be a logical next step, something that would allow developers to embed census data right into their applications and ensure the data was always up to date. No one is expecting this to happen right away, but it was another question that arose on twitter after the story broke, so one can see that new types of users will be interested in new, and more efficient ways, of accessing the data.

But I think most importantly, the next step will need to come from us citizens. This announcement marks a major change in how StatsCan works. We need to be supportive, particularly at a time of budget cuts. While we are grateful for open data, it would be a shame if the institution that makes it all possible were reduced to a shell of its former self. Good quality data – and analysis to inform public policy – is essential to a modern economy, society, and government. Now that we will have free access to what our tax dollars have already paid for, let's make sure that it stays that way, by ensuring both that the data continues to be available and that there continues to be a quality institution capable of collecting and analyzing it.


The New Government of Canada Open Data License: The OGL by another name

Last week Minister Clement issued a press release announcing some of the progress the government has made on its Open Government initiatives. Three things caught my eye.

First, it appears the government continues to revise its open data license with things continuing to trend in the right direction.

As some of you will remember, when the government first launched data.gc.ca it had a license that was so onerous it was laughable. While several provisions were problematic, my favourite was the sweeping "only-make-us-look-good" clause, which said, word for word: "You shall not use the data made available through the GC Open Data Portal in any way which, in the opinion of Canada, may bring disrepute to or prejudice the reputation of Canada."

After I pointed out the problems with this clause to then Minister Day, he managed to have it revoked within hours – very much to his credit. But it is a good reminder of the starting point of the government's license and of the mindset of Government of Canada lawyers.

With the new license, almost all the clauses that would obstruct commercial and non-profit reuse have effectively been eliminated. It is no longer problematic to identify individual companies and the attribution clauses have been rendered slightly easier. Indeed, I would argue that the new license has virtually the same constraints as the UK Open Government License (OGL) and even the Creative Commons CC-BY license.

All this raises the question… why not simply use the language and structure of the OGL, in much the same manner that the British Columbia Government tried to with its own BC OGL? Such a standardized license across jurisdictions might be helpful; it would certainly simplify life for think tanks, academics, developers and other users of the data. This is something I'm pushing for and hope that we might see progress on.

Second, the idea that the government is going to post completed access to information (ATIP) requests online is also a move in the right direction. I suspect that the most common ATIP request is one that someone else has already made. Being able to search through previous requests would enable you to find what you are looking for without having to wait weeks or make public servants redo the entire search and clearing process. What I don't understand is why only the summaries would be posted. In a digital world it would be better for citizens, and cheaper for the government, to simply post the entire completed request whenever privacy policies don't prevent it.

Third, and perhaps most important, were the lines noting that "That number (of data sets) will continue to grow as the project expands and more federal departments and agencies come onboard. During this pilot project, the Government will also continue to monitor and consider national and international best practices, as well as user feedback, in the licensing of federal open data."

This means that we should expect more data to hit the site. It seems as though more departments are being asked to figure out what data they can share – hopefully this means that real, interesting data sets will be made public. In particular, one hopes that data sets which legislation mandates the government collect will be high on the list of priorities. Also interesting in this statement is the suggestion that the government will consider national and international best practices. I've talked to both the Minister and officials about the need to create common standards and structures for open data across jurisdictions. Fostering and pushing these is an area where the government could take a leadership role, and it looks like there may be interest in this.

 

What Munir's Resignation means to Public Servants

This came to me from an anonymous email address, but the author claims to be a public servant. No inside gossip or revelation here, but a serious question about how the public service will react to a critical moment.

The independence of Canada’s public service has been a key part of our governing system. It has its advantages and its drawbacks (discussed in some detail most recently by John Ibbitson in Open and Shut) but it has been important. Munir’s resignation reaffirms this system, how his boss and colleagues react will say a lot about whether other public servants feel the value of independence is still core to the public service.

Read on – it’s thoughtful:

Defining moments. For some individuals these are easy to identify, like when a promising young athlete suffers a career-limiting injury.  For others, such moments come later in life, but are no less real or significant.

The resignation of Munir Sheikh from his position as Chief Statistician of Canada is clearly a defining moment for him personally.  He ends a full career in the Public Service on a point of principle.  This principled stance, necessary in his view to protect the integrity of his organization, has brought pride to many public servants, including this author.

But this act may not only be defining for Mr. Sheikh; it also has the potential to impact on the broader public service.  The Public Service mantra is fearless advice and loyal implementation and we tend to be very good at this.  However, it has always been recognized that this only goes so far.  There are limits to loyal implementation.  Clear examples are when a government attempts to unduly benefit either themselves or their friends through government funds.

Deputy ministers (the position of Chief Statistician is one) are often faced with limit-pushing situations; their ability to manage the delicate political-Public Service relationship is key to their success (and survival) as senior public servants. When these limits are in danger of being exceeded, the deputy minister can rely on delay to allow time to change the minister's mind, and/or on intervention from the Prime Minister, via the Privy Council Office. When these fail, the deputy can either acquiesce (partially or fully) or resign. This is the theory. However, in practice I cannot recall the last time a deputy resigned on a point of principle (leaving aside the potential reasons for the former Clerk, Kevin Lynch's, retirement).

Mr. Sheikh has attempted to set a new standard – disregarding the advice of a department is fine – publicly undermining the integrity of that advice is not.  It remains to be seen whether this standard will stick or whether it will in future be seen as a high-water mark for deputy integrity that will never be seen again.

The public and private reactions of the Clerk of the Privy Council will have a significant impact on how others view this resignation.  He is the Prime Minister’s deputy minister, who sets the tone and expectation for all other deputies.  He is also the Head of the Public Service, and helps set the tone for all public servants.  What, if anything, will he say about this issue, to the Prime Minister, deputies and ordinary public servants?  How should we comport ourselves when faced with such issues?

Wayne Wouters, this is your opportunity.  Tell us what you think, this can be your defining moment too.

The evolving tall tales of Minister Clement

It's been fascinating watching the Industry Minister's evolving fables around the decision to scrap the long-form census. Since the debate is now coming up on three weeks, I thought it might be fun to give it a little perspective and show how the Minister has been misleading, and in some cases outright lying, to defend his case.

Tale 1: This decision has no implications

This was the first, and my favourite, tall tale. People forget, but at the very beginning of this debate the Minister claimed the change would have no impact on the effectiveness of the census. In an online discussion with concerned Canadians who pointed out that the data from a voluntary long-form census would be rendered useless because of selection bias, the Minister responded: “Wrong. Statisticians can ensure validity w larger sample size.” Of course, as any first-year undergrad student will tell you, this is not the case. Fortunately, Stephen Gordon, a professor at Laval University, was on hand to set the record straight (see debate to the right).
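To see why a larger sample cannot rescue a voluntary survey, here is a minimal simulation sketch in Python. The numbers are invented purely for illustration (they are not StatsCan figures): a group that makes up 30% of the population is assumed to be half as likely to return a voluntary form, and no sample size ever recovers the true 30%.

    # Minimal sketch of self-selection bias: the response rate depends on the very
    # attribute being measured, so a bigger sample only sharpens the wrong answer.
    # All numbers are hypothetical, for illustration only.
    import random

    random.seed(42)

    TRUE_RATE = 0.30              # true share of the population with the attribute
    RESPONSE_IF_ATTRIBUTE = 0.25  # assumed response rate for that group
    RESPONSE_OTHERWISE = 0.50     # assumed response rate for everyone else

    def voluntary_survey_estimate(sample_size):
        """Mail out sample_size forms, keep only the self-selected returns."""
        responses = []
        for _ in range(sample_size):
            has_attribute = random.random() < TRUE_RATE
            response_prob = RESPONSE_IF_ATTRIBUTE if has_attribute else RESPONSE_OTHERWISE
            if random.random() < response_prob:
                responses.append(has_attribute)
        return sum(responses) / len(responses)

    for n in (10_000, 100_000, 1_000_000):
        print(n, round(voluntary_survey_estimate(n), 3))

    # Every run converges to roughly 0.18 (0.075 / 0.425), not the true 0.30.
    # A larger sample reduces the noise, not the bias.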

So the first story – this isn’t a big deal, the government has a way of working around this issue, please move on, nothing to see here – once debunked, tall tale number 2 kicked into gear.

Tale 2: Okay, it does have implications, but the cost is worth bearing because ordinary Canadians demanded it

Once the implications of the decision became obvious the government changed gears. Rather than argue that the change had no implications, they shifted to claiming the decision was about privacy and that Canadians had been demanding the change. Sadly, no one has been able to produce any records suggesting this is the case. Opposition MPs can’t find any complaints. The Privacy Commissioner has had 3 in the last decade, and the number has been declining over time. Statistics Canada’s review of the previous census generated no such feedback from the public. The concern has never even been mentioned in parliament by any Conservative MP.

As a special bonus, Minister Clement has been nicely misleading the public by claiming he’s received dozens and dozens of complaints since making the announcement (note: since, not before). But, of course, if we are keeping score since the announcement, there are now at least 80 “radical extremist” organizations – including the Government of Quebec, The Canadian Jewish Congress, The Evangelical Fellowship of Canada, The Toronto Board of Trade and the Canada West Foundation – along with a petition of 7,500 Canadians, who oppose the Minister’s decision.

So Canadians have not been demanding this change…

Tale 3: Fine, StatsCan told us to do it

Once it was revealed that this was a bad idea, that there is virtually universal opposition to it, and that Canadians did not demand it, the strategy again shifted gears. Now there is a new tall tale: Minister Clement has been claiming “StatsCan gave me three options, each of which they thought would work. I chose one of those options, with their recommendation.” Relief! This was never the Minister’s idea. It was StatsCan’s idea, and the public’s concern and outrage shouldn’t be directed at the Minister but at the ministry. So wouldn’t it be great if they could defend the decision? Maybe make a statement explaining why they recommended it? Sadly, Minister Clement won’t let them.

However, some excellent reporting by Heather Scoffield of The Canadian Press reveals that actually Statistics Canada did not suggest this change. As she reports:

But multiple sources are telling The Canadian Press that is not exactly what happened. The sources say Statistics Canada made no recommendations and only came up with policy options because they were asked to do so by Clement.

And they say the data gathering federal agency did not specifically recommend going the voluntary route.

Rather, they suggested that either the status quo or the complete eradication of the long list of questions would be the better way to go, several sources said.

The option chosen by the federal cabinet was not at the top of the list of options, the sources said. Instead, StatsCan told ministers if they insisted on going that route, they would have to spend more money and dramatically increase the size of the survey in an attempt to get accurate results.

“It wasn’t recommended,” one source said bluntly.

Okay, so StatsCan isn’t excited about this idea and certainly didn’t recommend it. Indeed they recommended either the status quo or getting rid of it altogether. And that only after they were asked to address an issue that, well, wasn’t an issue in the eyes of Canadians.

Tale 4 (my bold prediction): Behold – the mass of Canadians who opposed (something about) the long-form census

Having had the previous three tales exposed, the government must find a new tack. My suspicion is that they will return to tall tale #2, but with a new twist. This was hinted at over the weekend by Maxime Bernier, who claims that as Industry Minister during the 2006 census, he:

“received an average of 1,000 e-mails a day during the census to my MP office complaining about all that, so I know that Canadians who were obliged to answer that long-form census — very intrusive in their personal lives — I know they were upset.”

Of course there is no record of this, and as Professor Stephen Gordon aptly notes, if this were the case why didn’t the Minister ensure that these concerns were reflected in the review of the 2006 census? It would seem that either there were not the quantity of complaints Mr. Bernier claims or, as Minister, he didn’t take them seriously. I, for one, believe there were a number of complaints (although not thousands), and that the Conservatives will even attempt to produce them in the hope that reporters will not read them and will simply move on.

But here is the nature of this next tale. These complaints weren’t about the intrusiveness of Government; they were about the use of an American defense contractor, Lockheed Martin, to provide the computer systems Statistics Canada used to conduct the census. Most complainants didn’t understand why jobs were being shipped to America. An even smaller number of this group was concerned about privacy – not privacy from their Government, but from an American defense contractor. So if you are a reporter and the Conservatives claim there were privacy complaints, be sure to dig a little deeper. There were some. Just not the complaints they claim.

One tall tale begets another, and in this case, I suspect we are about to get another tsunami of them.

CIO Summit recap and links

Yesterday I was part of a panel at the CIO Summit, a conference for the CIOs of the various ministries of the Canadian Government.  There was lots more I would have liked to share with the group, so I’ve attached some links here as a follow-up for those in (and not in) attendance, to help flesh out some of my thoughts:

1. Doing mini-GCPEDIAcamps or WikiCamps

So what is a “camp”? Check out Wikipedia! “A term commonly used in the titles of technology-related unconferences, such as Foo Camp and BarCamp.” In short, it is an informal gathering of people who share a common interest and come together to share best practices or talk about that interest.

There is interest in GCPEDIA across the public service, but many people aren’t sure how to use it (in both the technical and social sense). So let’s start holding small mini-conferences to help socialize how people can use GCPEDIA and to help get them online. Find a champion, organize informally, do it at lunch, and ensure there are connected laptops or computers on hand. And do it more than once! Above all, a networked, peer-based platform requires a networked learning structure.

2. Send me an Excel spreadsheet of the structured data sets on your ministry’s website

As I mentioned, a community of people have launched datadotgc.ca. If you are the CIO of a ministry that has structured data sets (e.g. CSV or Excel spreadsheets, KML or SHAPE files – things users can download and play with, so not PDFs!), drop the URLs of their locations into an email or spreadsheet and send it to me! I would love to have your ministry well represented on the front-page graph on datadotgc.ca.
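For anyone wondering what such a list might look like, here is a rough sketch (every data set title and URL below is a made-up placeholder, not a real ministry resource) of one simple way to put it together:

    # Hypothetical sketch of a data set inventory for datadotgc.ca.
    # The titles, formats and URLs are placeholders only.
    import csv

    datasets = [
        ("Facility locations", "CSV", "http://www.example.gc.ca/data/facilities.csv"),
        ("Program spending by region", "XLS", "http://www.example.gc.ca/data/spending.xls"),
        ("Service area boundaries", "KML", "http://www.example.gc.ca/data/boundaries.kml"),
    ]

    with open("ministry_datasets.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["title", "format", "url"])
        writer.writerows(datasets)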

3. Some links to ideas and examples I shared

– Read about how open data helped push the CRA to locate $3.2B in lost tax revenue.

– Read about how open data needs to be part of the stimulus package.

– Why GCPEDIA could save the public service here.

– Check out Vantrash; openparliament is another great site too.

– The open data portals I referenced: the United States, the United Kingdom, The World Bank, & Vancouver’s

4. Let’s get more people involved in helping Government websites work (for citizens)

During the conference I offered to help organize some Government DesignCamps to help ensure that CLF 3 (or whatever the next iteration will be called) helps Canadians navigate government websites. There are people out there who would offer up free advice – sometimes out of love, sometimes out of frustration – that, regardless of their motivation, could be deeply, deeply helpful. Canada has a rich and talented design community, including people like this – why not tap into it? More importantly, it is a model that has worked when done right: this situation is very similar to the genesis of the original TransitCamp in Toronto.

5. Push your department to develop an Open Source procurement strategy

The fact is, if you aren’t even looking at open source solutions you are screening out part of your vendor ecosystem and failing in your fiduciary duty to explore all options to deliver value to taxpayers. Right now governments only seem to know how to pay LOTS of money for IT. You can’t afford to do that anymore. GCPEDIA is available to every government employee, has 15,000 users today, and could easily scale to 300,000 (we know it can scale because Wikipedia is way, way bigger). All this for the cost of $60K in consulting fees and $1.5M in staff time. That is cheap. Disruptively cheap. Any alternative would have cost you $20M+ and, if scaled, I suspect $60M+.

Not every piece of software should necessarily be open source, but you need to consider the option. Already, on the web, more and more governments are looking at open source solutions.