Tag Archives: opendata

Next Generation Open Data: Personal Data Access

Background

This Monday I had the pleasure of being in Mexico City for the OECD’s High Level Meeting on e-Government. CIOs from a number of countries were present – including Australia, Canada, the UK and Mexico (among others). But what really got me going was a presentation by Chris Vein, the Deputy United States Chief Technology Officer for Government Innovation.

In his presentation he referenced work around the Blue Button and the Green Button – both efforts I was previously familiar with. But my conversation with Chris sparked several new ideas and reminded me of just how revolutionary these initiatives are.

For those unacquainted with them, here’s a brief summary:

The Blue Button Initiative emerged out of the US Department of Veterans Affairs (VA) with a simple goal – create a big blue button on their website that would enable a logged-in user to download their health records. That way they can share those records with whomever they wish – a new doctor, a hospital, an application – or even just look at them themselves. The idea has been deemed so good, so important and so popular that it is now being championed as an industry standard – something that not just the VA but all US health providers should adopt.

The Green Button Initiative is similar. I first read about it on ReadWriteWeb under the catchy and insightful title “Green Button” Open Data Just Created an App Market for 27M US Homes. Essentially the Green Button would enable users to download their energy consumption data from their utility. In the United States 9 utilities have already launched Green Buttons and an app ecosystem – applications that would enable people to monitor their energy use – is starting to emerge. Indeed Chris Vein talked about one app that enabled a user to see their thermostat in real time and then assess the financial and environmental implications of raising and/or lowering it. I personally see the Green Button evolving into an API that you can give others access to… but that is a detail.

Why it Matters

Colleagues like Nigel Shadbolt in the UK have talked a lot about enabling citizens to get their data out of websites like Facebook. And Google has its own very laudable Data Liberation Front run by great guy and werewolf expert, Brian Fitzpatrick. But what makes the Green Button and Blue Button initiatives unique and important is that they create a common industry standard for sharing consumer data. This creates incentives for third parties to develop applications and websites that can analyze this data, because these applications will scale across jurisdictions. Hence the ReadWriteWeb article’s focus on a new market. It also makes the data easy to share. Healthcare records downloaded using the Blue Button are easily passed on to a new doctor or a new hospital, since people can now design systems to consume these records. Most importantly, it gives individuals the option of sharing these records themselves, so they don’t have to wait for lumbering bureaucracies.

This is a whole new type of open data. Open not to the public but to the individual to whom the data really belongs.

A Proposal

I would love to see the Blue Button and Green Button initiatives spread to companies and jurisdictions outside the United States. There is no reason why, for example, there cannot be Blue Buttons on provincial health care websites in Canada, or in the UK. Nor is there any reason why provincial energy corporations like BC Hydro or Bullfrog Energy (there’s a progressive company that would get this) couldn’t implement the Green Button. Doing so would enable Canadian software developers to create applications that use this data to help citizens, and to tap into the US market. Conversely, Canadian citizens could tap into applications created in the US.

The opportunity here is huge. Not only could this revolutionize citizens’ access to their own health and energy consumption data, it would reduce the costs of sharing health care records, which in turn could create savings for the industry at large.

Action

If you are a consumer, tell your local health agency, insurer and energy utility about this.

If you are an energy utility or Ministry of Health and are interested in this – please contact me.

Either way, I hope this is interesting. I believe there is huge potential in Personal Open Data, particularly around data currently held by Crown corporations and in critical industries, like healthcare.

Data.gc.ca – Data Sets I found that are interesting, and some suggestions

Yesterday was the one year anniversary of the Canadian federal government’s open data portal. Over the past year government officials have been continuously adding to the portal, but as it isn’t particularly easy to browse data sets on the website, I’ve noticed a lot of people aren’t aware of what data is now available (myself included!). Consequently, I want to encourage people to scan the available data sets and blog about ones they think might be interesting to them personally, to others, or to communities of interest they know.

Such an undertaking has been rendered MUCH easier thanks to the data.gc.ca administrators’ decision to publish a list of all the data sets available on the site. It turns out there are 11,680 data sets listed in this file. Of course, reviewing all this data took me much longer than I thought it would! (And to be clear, I didn’t explore each one in detail.) But the process has been deeply interesting. Below are some thoughts, ideas and data sets that have come out of this exploration – I hope you’ll keep reading, and that it will be of interest to ordinary citizens, prospective data users and managers of open government data portals.

A TagCloud of the Data Sets on data.gc.ca

Some Brief Thoughts on the Portal (and for others thinking about exploring the data)

Trying to review all the data sets on the portal is an enormous task, and attempting it has taught me some lessons about what works and what doesn’t. The first is that, while the search function on the website is probably good if you have a keyword or a specific data set you are looking for, it is much easier to browse the data in Excel than on the website. What was particularly nice about this is that, in Excel, the data was often clustered by type. This made it easy to spot related data sets – a great example: when I found the data on “Building permits, residential values and number of units, by type of dwelling” I could immediately see there were about 12 other data sets on building permits available.
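For those who would rather script the browsing than scroll a spreadsheet, here is a minimal sketch of the same exercise in Python. The file name and column names are hypothetical – adjust them to whatever the published catalogue list actually uses.

```python
# A minimal sketch of browsing the catalogue list programmatically.
# "datasets_list.csv", "department" and "title" are hypothetical names --
# substitute whatever the real file from data.gc.ca uses.
import pandas as pd

catalogue = pd.read_csv("datasets_list.csv")

# Which departments publish the most data sets?
print(catalogue["department"].value_counts().head(20))

# Spot clusters of related data sets, e.g. everything about building permits.
permits = catalogue[catalogue["title"].str.contains("building permit",
                                                    case=False, na=False)]
print(len(permits), "data sets mention building permits")
```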

Another issue that became clear to me is the problem of how a data set is classified. For example, because of the way the data is structured (really as a report) the Canadian Dairy Exports data has a unique data file for every month and year (you can look at May 1988 as an example). That means each month is counted as a unique “data set” in the catalog. Of course, French and English versions are also counted as unique. This means that what I would consider to be a single data set “Canadian Dairy Exports Month Dairy Year from 1988 to present” actually counts as 398 data sets. This has two outcomes. First, it is hard to imagine anyone wants the data for just one month. This means a user looking for longitudinal data on this subject has to download 199 distinct data sets (very annoying). Why not just group it into one? Second, given that governments like to keep score about how many data sets they share – counting each month as a unique data set feels… unsportsmanlike. To be clear, this outcome is an artifact of how Agriculture Canada gathers and exports this data, but it is an example of the types of problems an open data catalog needs to come to grips with.

Finally, many users – particularly, but not exclusively, developers – are looking for data that is up to date. Indeed, real time data is particularly sexy since its dynamic nature means you can do interesting things with it. Thus it was frustrating to occasionally find data sets that were no longer being collected. A great example of this was the Provincial allocation of corporate taxable income, by industry. This data set jumped out at me as I thought it could be quite interesting. Sadly, StatsCan stopped collecting data on this in 1987, so any visualization will have limited use today. This is not to say data like this should be pulled from the catalog, but it might be nice to distinguish between data sets that are being collected on an ongoing basis and those that are no longer being updated.

Data Sets I found Interesting

Before I begin, some quick thoughts on my very unscientific methodology for identifying interesting data sets.

  • First, browsing the data sets really brought home to me how many will be interesting to different groups – we really are in the world of the long tail of public policy. As a result, there is lots of data that I think will be interesting to many, many people that is not on this list.
  • Second, I tried to not include too much of StatsCan’s data. StatsCan data already has a fairly well developed user base. And while I’m confident that base is going to get bigger still now that its data is free, I figure there are already a number of people who will be sharing and talking about it.
  • Finally, I’ve tried to identify some data sets that I think would make for good mashups or apps. This isn’t easy with federal government data sets since they tend to be more aggregate and high-level than, say, municipal data sets… but I’ve tried to tease out what I can. That said, I’m sure there is much, much more.

New GeoSpatial API!

So the first data set is a little bit of a cheat since it is not on the open data portal, but I was emailed about it yesterday and it is so damn exciting I’ve got to share it. It is a recently released public BETA of a new RESTful API from the very cool people at GeoGratis that provides a consolidated access point to several repositories of geospatial data and information products, including GeoGratis, GeoPub and Mirage. (Huge thank you to the GeoGratis team for sending this to me.)

Documentation can be found here (and in French here), and a sample search client that demonstrates some of its functionality and how to interact with the API can be found here. Formats include ATOM, HTML Fragment, CSV, RSS, JSON, and KML – so you can see results, for example, in Google Earth by using the KML format (example here).

I’m also told that these fine folks have been working on a geolocation service, so you can do sexy things like search by place name, by NTS map or by the first three characters of a postal code. Documentation will be posted here in English and French. Super geeks may notice that there is a field in the JSON called CGNDBkey. I’m also told you can use this key to select an individual place name according to the Canadian Geographic Names Board. Finally, you can also search all their metadata through search engines like Google (here is a sample search for gold they sent me).

All data is currently licensed under the GeoGratis license.
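To make the “RESTful” part concrete, here is a hedged sketch of what a query against a search API like this might look like. The endpoint URL, parameter names and response fields below are assumptions for illustration – the official documentation is the authority on the real interface.

```python
# Hypothetical query against a GeoGratis-style search API. The base URL,
# parameters and JSON field names are illustrative assumptions, not the
# documented interface.
import requests

BASE_URL = "http://geogratis.gc.ca/api/search"  # assumed endpoint

response = requests.get(BASE_URL, params={"q": "gold", "format": "json"})
response.raise_for_status()

for record in response.json().get("results", []):  # field name assumed
    print(record.get("title"))
```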

The National Pollutant Release Inventory

Description: The National Pollutant Release Inventory (NPRI) is Canada’s public inventory of pollutant releases (to air, water and land), disposals and transfers for recycling.

Notes: This is the same data set (but updated) that we used to create emitter.ca. I frankly feel the opportunities around this data set – for environmentalists, investors (concerned about regulatory and lawsuit risks), the real estate industry, and others – are enormous. The public could be very interested in this.

Greenhouse Gas Emissions Reporting Program

Description: The Greenhouse Gas Emissions Reporting Program (GHGRP) is Canada’s legislated, publicly-accessible inventory of facility-reported greenhouse gas (GHG) data and information.

Notes: What’s interesting here is that while it doesn’t have lat/longs, it does have facility names and addresses. That means you should be able to cross-reference it with the NPRI (which does have lat/longs) to plot where the big greenhouse gas emitters are on a map. I think the same people interested in the NPRI might be interested in this data.
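A rough sketch of that cross-referencing idea, assuming both data sets have been exported to CSV. The file and column names are hypothetical, and real facility names would likely need more normalization than this:

```python
# Sketch: attach NPRI lat/longs to GHGRP facilities by matching facility
# names. File and column names are hypothetical.
import pandas as pd

npri = pd.read_csv("npri_facilities.csv")    # assumed: facility_name, lat, lon
ghgrp = pd.read_csv("ghgrp_facilities.csv")  # assumed: facility_name, ghg_tonnes

# Crude join key: uppercased, whitespace-stripped facility name.
npri["key"] = npri["facility_name"].str.strip().str.upper()
ghgrp["key"] = ghgrp["facility_name"].str.strip().str.upper()

mapped = ghgrp.merge(npri[["key", "lat", "lon"]], on="key", how="inner")
print(f"Geolocated {len(mapped)} of {len(ghgrp)} GHG-reporting facilities")
```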

The Canadian Ice Thickness Program

Description: The Ice Thickness program dataset documents the thickness of ice on the ocean. Measurements begin when the ice is safe to walk on and continue until it is no longer safe to do so. This data can help gauge the impact of global warming and is relevant to shipping data in the north of Canada.

Notes: Students interested in global warming… this could make for some fun visualization.

Argo: Canadian Tracked Data

Description: Argo Data documents some of the approximately 3,000 profiling floats deployed around the world. Once at sea, a float sinks to a preprogrammed target depth of 2000 meters for a preprogrammed period of time. It then floats to the surface, taking temperature and salinity values during its ascent at set depths. The Canadian Tracked Argo Data describes the Argo programme in Canada and provides data and information about Canadian floats.

Notes: Okay, so I can think of no use for this data, but I just thought it was so awesome that people are doing this that I totally geeked out.

Civil Aircraft Register Database

Description: Civil Aircraft Register Database – this file contains the current mark, aircraft and owner information of all Canadian civil registered aircraft.

Notes: Here I really think there could be a geeky app – just a simple app that you can type an aircraft’s number into and it will tell you the owner and details about the plane. I actually think the government could do a lot of work with this data. If regulatory and maintenance data were made available as well, then you’d have a powerful app that would tell you a lot about the planes you fly in. At a minimum it would be of interest to flight enthusiasts.
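The core of such an app is tiny. A sketch, assuming the register has been downloaded as a CSV (the file name, column names and registration mark here are guesses based on the description above):

```python
# Sketch of a registration-mark lookup against the Civil Aircraft Register.
# File and column names are assumptions based on the data set description.
import csv

def lookup_aircraft(mark, path="civil_aircraft_register.csv"):
    """Return the register row for a given registration mark, or None."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("mark", "").upper() == mark.upper():
                return row
    return None

plane = lookup_aircraft("C-FABC")  # hypothetical registration mark
if plane:
    print(plane.get("owner"), "-", plane.get("manufacturer"), plane.get("model"))
```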

Real Time Hydrometric Data Tool

Description: Real Time Hydrometric Data Tool – this site provides public access to real-time hydrometric (water level and streamflow) data collected at over 1700 locations in Canada. These data are collected under a national program jointly administered under federal-provincial and federal-territorial cost-sharing agreements. It is through partnerships that the Water Survey of Canada program has built a standardized and credible environmental information base for Canada. This dataset contains both current and historical datasets. The current month can be viewed in an HTML table, and historical data can be downloaded in CSV format.

Notes: So ripe for an API! What is cool is that the people at Environment Canada have integrated it into Google Maps. I could imagine fly fishermen and communities at risk of flooding being interested in this data set.
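Since the historical data comes down as CSV, you can script against it even without a formal API. A sketch, with a placeholder URL standing in for whatever download link the Water Survey of Canada site generates:

```python
# Sketch: pull the historical CSV for one hydrometric station and peek at
# recent water levels. The URL and column names are placeholders.
import pandas as pd

CSV_URL = "https://example.gc.ca/hydrometric/08MF005_daily.csv"  # placeholder

levels = pd.read_csv(CSV_URL, parse_dates=["Date"])  # column names assumed
print(levels[["Date", "Water Level (m)"]].tail(10))
```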

Access to information data sets

Description: 2006-2010 Access to Information and Privacy Statistics (With the previous years here, here and here.) is a compilation of statistical information about access to information and privacy submitted by government institutions subject to the Access to Information Act and the Privacy Act for 2006-2010.

Notes: I’d love to crunch this stuff again and see who’s naughty and nice in the ATIP world…

Poultry and Forestry data

No links, BECAUSE THERE IS SO MUCH OF IT. Anyone interested in the Poultry or Forestry industry will find lots of data… obviously this stuff is useful to people who analyze these industries but I suspect there are a couple of “A” university level papers hidden in that data set as well.

Building Permits

There is tons on building permits and construction. Actually, one of the benefits of looking at the data in a spreadsheet is that it is easy to see other related data sets.

StatsCan

It really is amazing how much Statistics Canada data there is. Even reviewing something like the supply and demand of natural gas liquids got me thinking about the wealth of information trapped in there. One thing I do hope StatsCan starts to do is geolocate its data whenever possible.

Crime Data

As this has been in the news I couldn’t help but include it. It’s nice that any citizen can look at the crime data direct from StatsCan – Crime statistics, by detailed offences – to see how our crime rate is falling (which is why we should build more expensive prisons). Of course unreported crime, which we all know is climbing at 3000% a year, is not included in these stats.

Legal Aid Applications

Legal aid applications, by status and type of matter. This was interesting to me since, here in BC there is much talk about funding for the Justice system and yet, the number of legal aid applications has remained more or less flat over the past 5 years.

National Broadband Coverage data

Description: The National Broadband Coverage Data represents broadband coverage information, by technology, for existing broadband service providers as of January 2012. Coverage information for Broadband Canada Program projects is included for all completed projects. Coverage information is aggregated over a grid of hexagons, which are each 6 km across. The estimated range of unserved / underserved population within each hexagon location is included.

Notes: What’s nice is that there is lat/long data attached to all this, so mapping it, and potentially creating a heat map, is possible. I’m certain the people at OpenMedia would appreciate such a map.
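A quick sketch of what that heat map could look like, assuming the hexagon centroids have been exported to CSV (the file and column names below are invented for illustration):

```python
# Sketch: plot broadband hexagon centroids, coloured by estimated
# unserved/underserved population. File and column names are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

hexes = pd.read_csv("broadband_hexagons.csv")  # hypothetical extract

plt.scatter(hexes["lon"], hexes["lat"], c=hexes["unserved_pop"],
            s=4, cmap="YlOrRd")
plt.colorbar(label="Estimated unserved / underserved population")
plt.title("Broadband coverage gaps, January 2012")
plt.show()
```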

Census Consolidated Subdivision

Description: Census Consolidated Subdivision Cartographic Boundary Files portrays the geographic limits used for the 2006 census dissemination. The Census Consolidated Subdivision Boundary Files contain the boundaries of all 2,341 census consolidated subdivisions.

Notes: Obviously this one is on every data geek’s radar, but just in case you’ve been asleep for the past 5 months, I wanted to highlight it.

Non-Emergency Surgeries, distribution of waiting times

Description: Non-emergency surgeries, distribution of waiting times, household population aged 15 and over, Canada, provinces and territories

Notes: Would love to see this at the hospital and clinic level!

Border Wait Times

Description: Estimates Border Wait Times (commercial and travellers flow) for the top 22 Canada Border Services Agency land border crossings.

Notes: Here I really think there is an app that could be made. At the very least there is something that could tell you historical averages and ideally, could be integrated into Google and Bing maps when calculating trip times… I can also imagine a lot of companies that export goods to the US are concerned about this issue and would be interested in better data to predict the costs and times of shipping goods. Big potential here.

Okay, that’s my list. Hope it inspires you to take a look yourself, or play with some of the data listed above!

Sharing ideas about data.gc.ca

As some of you may remember, the other week I suggested that on its one year anniversary we hack data.gc.ca – specifically, that people share what data sets they find most interesting on the website, especially as it is hard to search it.

Initially I’d uploaded a list of all the data sets on the catalog to buzzdata. However the other day the data.gc.ca administrators added a data set that is a list of all the data sets available on the site (meta, I know). This new list is, apparently, an even more robust and up to date list than the one I shared earlier and is available in both official languages.

If you do end up finding data you think is particularly interesting, creating a list of your favourite data sets, doing a mash up or visualization or (most ambitiously) creating a better way to search data.gc.ca please send me your results, a link, or at least an email. I’ll be posting what I find interesting tonight or tomorrow morning and would love to link to anything anyone else has done too!


Access to Information, Open Data and the Problem with Convergence

In response to my post yesterday one reader sent me a very thoughtful commentary that included this line at the end:

“Rather than compare [Freedom of Information] FOI legislation and Open Gov Data as if it’s “one or the other”, do you think there’s a way of talking about how the two might converge?”

One small detail:

So before diving into the meat, let me start by saying I don’t believe anything in yesterday’s post claimed open data was better or worse than Freedom of Information (FOI, often referred to in Canada as Access to Information or ATI). Seeing FOI and open data as competing suggests they are similar tools. While they have similar goals – improving access – and there may be some overlap, I increasingly see them as fundamentally different tools. This is also why I don’t see an opportunity for convergence in the short term (more on that below). I do, however, believe open data and FOI processes can be complementary. Indeed, I’m hopeful open data can alleviate some of the burden placed on FOI systems, which are often slow. Indeed, in Canada, government departments regularly violate rules around disclosure deadlines. If anything, this complementary nature was the implicit point in yesterday’s post (which I could have made more explicit).

The Problem with Convergence:

As mentioned above, the overarching goals of open data and FOI systems are similar – to enable citizens to access government information – but the two initiatives are grounded in fundamentally different approaches to dealing with government information. From my view FOI has become a system of case by case review while open data is seeking to engage in an approach of “pre-clearance.”

Part of this has to do with what each system is reacting to. FOI was born, in part, out of a reaction to scandals in the mid 20th century which fostered public support for a right to access government information.

FOI has become a powerful tool for accessing government information. But the infrastructure created to manage it has also had some perverse effects. In some ways FOI has, paradoxically, made it harder to gain access to government information. I remember talking to a group of retired reporters who talked about how it was easier to gain access to documents in a pre-FOI era, since there were no guidelines and many public servants saw most documents as “public” anyway. The rules around disclosure today – thanks in part to FOI regimes – mean that governments can make closed the “default” setting for government information. In the United States the Ashcroft Memo serves as an excellent example of this problem. In this case the FOI legislation actually becomes a tool that helps governments withhold documents, rather than one that enables citizens to gain legitimate access.

But the bigger problem is that the process by which access to information requests are fulfilled is itself burdensome. While relevant and necessary for some types of information it is often overkill for others. And this is the niche that open data seeks to fill.

Let me pause to stress: I don’t share the above to disparage FOI. Quite the opposite. It is a critical and important tool and I’m not advocating for its end. Nor am I arguing that open data can – in the short or even medium term – solve the problems raised above.

This is why, over the short term, open data will remain a niche solution – a fact linked to its origins. Like FOI, open data has its roots in government transparency. However, it also evolved out of efforts to tear down antiquated intellectual property regimes to facilitate the sharing of data and information (particularly between organizations and governments). Thus the emphasis was not on case-by-case review of documents, but rather on clearing rights to categories of information, both already created and to be created in the future. In other words, this is about granting access to the outputs of a system, not access to individual documents.

Another way of thinking about this is that open data initiatives seek to leverage the benefits of FOI while jettisoning its burdensome process. If a category of information can be pre-cleared in advance, and in perpetuity, for privacy, security and IP concerns, then FOI processes – essential for individual documents and analysis – become unnecessary and one can reduce the transaction costs to citizens wishing to access the information.

Maybe, in the future, the scope of these open data initiatives could become broader, and I hope they will. Indeed, there is ample evidence to suggest that technology could be used to pre-clear or assess the sensitivity of any government document. An algorithm that assesses a mixture of who the author is, the network of people who reviewed it and a scan of the words could probably ascertain in seconds, rather than weeks, whether a document could be released in response to an ATIP request. It could at least generate a risk profile and/or strip out privacy-related information. These types of reforms would be much more disruptive (in the positive sense) to FOI legislation than open data.

But all that said, just getting the current focus of open data initiatives right would be a big accomplishment. And, even if such initiatives could be expanded, there are limits. I am not so naive as to believe that government can be entirely open. Nor am I sure that would be an entirely good outcome. When trying to foster new ideas or assess how to balance competing interests in society, a private place to initiate and play with ideas may be essential. And despite the ruminations above, the limits of government IT systems mean there will remain a lot of information – particularly non-data information like reports and analysis – that we won’t be able to “pre-clear” for sharing and downloading. Consequently an FOI regime – or something analogous – will continue to be necessary.

So rather than replace or converge with FOI systems, I hope open data will, for the short to medium term, actually divert information out of the FOI system – not because it competes, but because it offers a simpler and more efficient means of sharing (for both government and citizens) certain types of information. That said, open data initiatives offer none of the protections or rights of FOI, and so this legislation will continue to serve as the fail-safe mechanism should a government choose to stop sharing data. Moreover, FOI will continue to be a necessary tool for documents and information that – for all sorts of reasons (privacy, security, cabinet confidence, etc.) – cannot fall under the rubric of an open data initiative. So convergence… not for now. But co-existence feels both likely and helpful for both.

Calculating the Value of Canada’s Open Data Portal: A Mini-Case Study

Okay, let’s geek out on some open data portal stats from data.gc.ca. I’ve got three parts to this review: first, a look at how to assess the value of data.gc.ca; second, a look at the most downloaded data sets; and third, some interesting data about who is visiting the portal.

Before we dive in, a thank you to Jonathan C, who sent me some of this data the other day after requesting it from Treasury Board, the ministry within the Canadian government that manages the government’s open data portal.

1. Assessing the Value of data.gc.ca

Here is the first thing that struck me. Many governments talk about how they struggle to find methodologies to measure the value of open data portals/initiatives. Often these assessments focus on things like number of apps created or downloaded. Sometimes (and incorrectly in my mind) pageviews or downloads are used. Occasionally it veers into things like mashups or websites.

However, one fairly tangible value of open data portals is that they cheaply resolve some access to information requests –  a point I’ve tried to make before. At the very minimum they give scale to some requests that previously would have been handled by slow and expensive access to information/freedom of information processes.

Let me share some numbers to explain what I mean.

The Canadian government is, I believe, only obligated to fulfill requests that originate within Canada. Drawing from the information in the charts later in this post, let’s assume there were roughly 2,200 downloads in January and that about a third of these originated from Canada – call it 726 “Canadian” downloads. Thanks to some earlier research, I happen to know that the Office of the Information Commissioner has assessed the average cost of fulfilling an access to information request in 2009-2010 at $1,332.21.

So in a world without an open data portal the hypothetical cost of fulfilling these “Canadian” downloads as formal access to information requests would have been $967,184.46 in January alone. Even if I’m off by 50%, then the cost – again, just for January – would still sit at $483,592.23. Assuming this is a safe monthly average, then over the course of a year the cost savings could be around $11,606,213.52 or $5,803,106.76 – depending on how conservative you’d want to be about the assumptions.
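Spelled out as a calculation (all inputs are the assumptions discussed above):

```python
# The back-of-envelope savings estimate, using the assumptions above.
CANADIAN_DOWNLOADS_PER_MONTH = 726   # ~1/3 of January's ~2,200 downloads
COST_PER_ATI_REQUEST = 1332.21       # OIC average cost, 2009-2010

monthly_savings = CANADIAN_DOWNLOADS_PER_MONTH * COST_PER_ATI_REQUEST
print(f"Monthly:          ${monthly_savings:,.2f}")       # $967,184.46
print(f"Annual:           ${monthly_savings * 12:,.2f}")  # $11,606,213.52
print(f"Annual (if -50%): ${monthly_savings * 6:,.2f}")   # $5,803,106.76
```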

Of course, I’m well aware that not every one of these downloads would have been an information request in a pre-portal world – that process is simply too burdensome. You have to pay a fee, and it has to be by check (who pays for anything by check any more???), so many of these users would simply have abandoned their search for government information. So some of these savings would not have been realized. But that doesn’t mean there isn’t value. Instead, the open data portal is able to more cheaply reveal latent demand for data. In addition, only a fraction of the government’s data is presently on the portal – so all these numbers could get bigger still. And finally, I’m only assessing downloads that originated inside Canada in these estimates.

So I’m not claiming that we have arrived at a holistic view of how to assess the value of open data portals – but even the narrow scope of assessment I outline above generates financial savings that are not trivial, and this is to say nothing of the value generated by those who downloaded the data – something that is much harder to measure – or of the value of increased access to Canadians and others.

2. Most Downloaded Datasets at data.gc.ca

This is interesting because… well… it’s just always interesting to see what people gravitate towards. But check this out…

Data sets like the Anthropogenic disturbance footprint within boreal caribou ranges across Canada may not seem interesting, but the groundbreaking agreement between the Forest Products Association of Canada and a coalition of environmental non-profits – known as the Canadian Boreal Forest Agreement (CBFA) – uses this data set a lot to assess where the endangered woodland caribou are most at risk. There is no app, but the data is critical both in protecting this species and in finding a way to sustainably harvest wood in Canada. (Note: I worked as an adviser on the CBFA so am a) a big fan and b) not making this stuff up.)

It is fascinating that immigration and visa data tops the list. But it really shouldn’t be a surprise. We are, of course, a nation of immigrants. I’m sure that immigration and visa advisers, to say nothing of think tanks, municipal governments, social service non-profits and English as a second language schools, are all very keen on using this data to help them understand how they should be shaping their services and policies to target immigrant communities.

There is, of course, weather – the original open government data set. We’ve been making this data open for hundreds of years. So useful and so important, it had to be open.

And it’s nice to see Sales of fuel used for road motor vehicles, by province and territory. If you wanted to figure out the carbon footprint of vehicles, by province, I suspect this is a nice data set to get. It is probably also useful for computing gas prices, as it might let you get a handle on demand. Economists probably like this data set.

All this to say, I’m less skeptical than before about the data sets in data.gc.ca. With the exception of weather, these data sets aren’t likely useful to software developers – the group I tend to hear most from – but then I’ve always posited that apps were only going to be a tiny part of the open data ecosystem. Analysis is king for open data and there does appear to be people out there who are finding data of value for analyses they want to make. That’s a great outcome.

Here are the tables outlining the most popular data sets since launch and (roughly) in February.

Top 10 most downloaded datasets, since launch

RANK | DATASET | DEPARTMENT | DOWNLOADS
1 | Permanent Resident Applications Processed Abroad and Processing Times (English) | Citizenship and Immigration Canada | 4730
2 | Permanent Resident Summary by Mission (English) | Citizenship and Immigration Canada | 1733
3 | Overseas Permanent Resident Inventory (English) | Citizenship and Immigration Canada | 1558
4 | Canada – Permanent residents by category (English) | Citizenship and Immigration Canada | 1261
5 | Permanent Resident Applicants Awaiting a Decision (English) | Citizenship and Immigration Canada | 873
6 | Meteorological Service of Canada (MSC) – City Page Weather | Environment Canada | 852
7 | Meteorological Service of Canada (MSC) – Weather Element Forecasts | Environment Canada | 851
8 | Permanent Resident Visa Applications Received Abroad – English Version | Citizenship and Immigration Canada | 800
9 | Water Quality Indicators – Reports, Maps, Charts and Data | Environment Canada | 697
10 | Canada – Permanent and Temporary Residents – English version | Citizenship and Immigration Canada | 625

Top 10 most downloaded datasets, for past 30 days

RANK | DATASET | DEPARTMENT | DOWNLOADS
1 | Permanent Resident Applications Processed Abroad and Processing Times (English) | Citizenship and Immigration Canada | 481
2 | Sales of commodities of large retailers – English version | Statistics Canada | 247
3 | Permanent Resident Summary by Mission – English Version | Citizenship and Immigration Canada | 207
4 | CIC Operational Network at a Glance – English Version | Citizenship and Immigration Canada | 163
5 | Gross domestic product at basic prices, communications, transportation and trade – English version | Statistics Canada | 159
6 | Anthropogenic disturbance footprint within boreal caribou ranges across Canada – As interpreted from 2008-2010 Landsat satellite imagery | Environment Canada | 102
7 | Canada – Permanent residents by category – English version | Citizenship and Immigration Canada | 98
8 | Meteorological Service of Canada (MSC) – City Page Weather | Environment Canada | 61
9 | Sales of fuel used for road motor vehicles, by province and territory – English version | Statistics Canada | 52
10 | Government of Canada Core Subject Thesaurus – English Version | Library and Archives Canada | 51

3. Visitor locations

So this is just plain fun. There is not a ton to derive from this – especially as IP addresses can, occasionally, be misleading. In addition, this is page view data, not download data. But what is fascinating is that computers in Canada are not the top source of traffic at data.gc.ca. Indeed, Canada’s share of the traffic is actually quite low. In fact, in January, just taking into account the countries in the chart (and not the long tail of visitors) Canada accounted for only 16% of the traffic to the site. That said, I suspect that downloads were significantly higher from Canadian visitors – although I have no hard evidence of this, just a hypothesis.

[Chart: data.gc.ca visits by visitor country, December]

• Total visits since launch: 380,276 user sessions

Let's Hack data.gc.ca

In just under two weeks data.gc.ca will celebrate its one year anniversary. This will also mark the point at which the pilot project is officially supposed to end.

Looking at data.gc.ca three things stand out. First, the license has improved a great deal since its launch. Second, a LOT of data has been added to the site over the last year. And finally, the website is remarkably bad at searching for data and enabling a community of users.

Indeed, I believe that a lot of people have stopped visiting the site and don’t even know what data is available. My suspicion is that almost none of us know what is actually available since a) there is a lot, b) much of it is not sexy and c) it is very hard to search.

Let’s do something about that.

I have managed to create, and upload to buzzdata, a list of all the data sets in data.gc.ca – both geographic and non-geographic data sets.

I’m proposing that we go through the data.gc.ca data sets and find what is interesting to each of us, and on March 15th, find a way to highlight it or talk about it so that other people find out about it. Maybe you tweet about it (use the hashtag #gcdata) or blog about it.

Even more interesting would be if we could find a way to do it collaboratively – to have a way of collectively marking which data sets are interesting (in, say, a PiratePad somewhere). If someone had a clever proposal about how to go through all the data sets, I’d love for us to collectively highlight the high value data sets (if there are any) available in data.gc.ca.

Speaking with the great community of open data activists in Ottawa, we brainstormed about organizing an event after work on the 15th where people might get together and do this. We could call it “The Big Search” – an effort in any city where people are interested to gather and comb through the data. All with the goal of signaling to developers, non-profits, journalists and others, what, if any, data in data.gc.ca might be of interest for analysis, applications, or other uses. In addition, this exercise would also help us write supportive and critical comments about the government’s open data trial.

Finally, and most ambitiously, I’ve heard some people say they’d like to design an alternative data portal – I’m definitely game for that and am happy to offer up the datadotgc.ca URL for that too.

So, I’m throwing this out there. If there is interest, please comment below. I would love to hear your thoughts and hope we can maybe organize some events on March 15th, or at least post data sets on blogs, Facebook and Twitter that people think are interesting.

More on Google Transit and how it is Reshaping a Public Service

Some of you know I’ve written a fair bit on Google Transit and how it is reshaping public transit – this blog post in particular comes to mind. For more reading I encourage you to check out the Xconomy article Google Transit: How (and Why) the Search Giant is Remapping Public Transportation, as it provides a lot of good details about what is going on in this space.

Two things about this article:

First, it really is a story about how the secret sauce for success is combining open data with a common standard across jurisdictions. The fact that the General Transit Feed Specification (a structured way of sharing transit schedules) is used by over 400 transit authorities around the world has helped spur a ton of other innovations.
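Part of what makes GTFS powerful is how mundane it is: a feed is just a zip of CSV files with standardized names and columns, so the same code runs against any of the 400+ agencies’ feeds. A minimal sketch (the feed file name is a placeholder for whatever your local agency publishes):

```python
# GTFS in a nutshell: a zip of CSVs with standard names and columns.
# "translink_gtfs.zip" is a placeholder for any agency's published feed.
import csv
import io
import zipfile

with zipfile.ZipFile("translink_gtfs.zip") as feed:
    with feed.open("stops.txt") as f:  # stops.txt is defined by the GTFS spec
        stops = list(csv.DictReader(io.TextIOWrapper(f, "utf-8-sig")))

print(len(stops), "stops in this feed")
print(stops[0]["stop_name"], stops[0]["stop_lat"], stops[0]["stop_lon"])
```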

A couple of money quotes, including this one about the initial reluctance of some authorities to share their data for free (I’m looking at you, Translink board):

“I have watched transit agencies try to monetize schedules for years and nobody has been successful,” he says. “Markets like the MTA and the D.C. Metro fought sharing this data for a very long time, and it seems to me that there was a lot of fallout from that with their riders. This is not our data to hoard—that’s my bottom line.”

and this one about iBART, an app that uses GTFS data to help people plan transit trips:

in its home city, San Francisco, the startup’s app continues to win more users: about 3 percent of all trips taken on BART begin with a query on iBART

3%? That is amazing. Last year my home town of Vancouver’s transit authority, Translink, had 211.3 million trips. If the iBART app were ported here and enjoyed similar success, that would mean 6.4 million trips planned on iBART (or iTranslink?). That’s a lot of trips made easier to plan.

The second thing I encourage you to think about…

Where else could this model be recreated? What’s the data set, where is the demand from the public, and what is the company or organization that can fulfill the role of Google and give it scale? I’d love to hear thoughts.

Transparency isn't a cost – it's a cost saver (a note for Governments and Drummond)

Yesterday Don Drummond – a leading economist hired by the Ontario government to review how the province delivers services in the face of declining economic growth and rising deficits – published his report.

There is much to commend: it lays out stark truths that many citizens already know, but that government was too afraid to say aloud. It is a report that, frankly, I think many provincial and state governments will look at with great interest, since the challenges faced by Ontario are faced by governments across North America (and Europe).

From an IT perspective – particularly one where I believe open innovation could play a powerfully transformative role – I found the report lacking. I say this with enormous trepidation, as I believe Drummond to be a man of supreme intellect, but my sense is he (and/or his team) have profoundly misunderstood government transparency and why it should be relevant. In Chapter 16 (no, I have not yet read all 700 pages) a few pieces come together to create what I believe are problematic conditions. The first relates to the framing around “accountability”:

Accountability is an essential aspect of government operations, but we often treat that goal as an absolute good. Taxpayers expect excellent public-sector management as well as open and transparent procurement practices. However, an exclusive focus on rigorous financial reporting and compliance as the measure of successful management requires significant investments of time, energy and resources. At some point, this investment is subject to diminishing returns.

Remember the context. This section largely deals with how government services – and in particular the IT aspects of these services – could be consolidated (a process that rarely yields the breadth of savings people believe it will). Through this lens, the interesting thing about the word “accountability” in the section above is that I could replace it with searchability – the capacity to locate pieces of information. I agree with Drummond that there is a granularity around recording items – say, tracking every receipt versus offering per diems – that creates unnecessary costs. Nor do I believe we should pay unlimited costs for transparency – just for the sake of transparency. But I do believe that government needs a much, much stronger capacity to search and locate pieces of information. Indeed, I think that capacity, the ability for government to mine its own data intelligently, will be critical. Transparency thus becomes one of the few metrics citizens have into not only how effective a government’s inputs are, but how effective its systems are.

Case in point: if you required every Canadian under the age of 30 to conduct an ATIP request tomorrow, I predict you’d see a massive collapse in Canadians’ confidence in government. The time ATIP requests take (and the fact that in many places they aren’t even online) probably says less about government secrecy to these Canadians than it does about the government’s capacity to locate, identify and process its own data and information. When you can’t get information to me in a timely manner, it strongly suggests that managers may not be able to get timely information either.

If Ontario’s public service is going to be transformed – especially if it is going to fulfill other Drummond report recommendations, such as:

Further steps should be taken to advance partnering with municipal and federal services —efficiencies can be found by working collaboratively with other levels of government. For example, ServiceOntario in Ottawa co-locates with the City of Ottawa and Service Canada to provide services from one location, therefore improving the client experience. Additionally, the new BizPal account (which allows Ontario businesses to manage multiple government requirements from a single account) allows 127 Ontario municipalities (such as Kingston, Timmins, Brampton and Sudbury) to partner with ServiceOntario and become more efficient in issuing business permits and licensing. The creation of more such hubs, with their critical mass, would make it easier to provide services in both official languages. Such synergies in service delivery will improve customer experience and capitalize on economies of scale.

Then it is going to require systems that can be easily queried as well as interface with other systems quickly. Architecting systems in open standards, so that they can be easily searched and recoded, will be essential. This is particularly true if the recommendation that private sector partners (who love proprietary data models, standards and systems that regularly lock governments into expensive traps) be used more frequently is followed. All this is to say: we shouldn’t do transparency for transparency’s sake. We should do transparency because it will make Ontario more interoperable, will lower costs, and will enable more accountability.

Accountability doesn’t have to be a cost driver. Quite the opposite: transparency should and can be the by-product of good procurement strategies, interoperable architecture choices and effective processes.

Let’s not pit transparency against cost savings. Very often, it’s a false dichotomy.

The Exciting Launch of Represent and What It Says About Open Data in Canada

Last week a group of volunteer programmers from across Canada announced the launch of Represent – a website that tries to map all of Canada’s boundaries. Confused? Don’t be. It’s simple. This is a nifty piece of digital infrastructure – try visiting the website yourself! After identifying where you are located, it will tell you which MP riding, MLA/MPP district and census subdivision you are in.

So why does this matter?

What’s important about a site like Represent (much like its cousin site MapIt, which offers a similar service in the UK) is that other websites and applications can use it to offer important services – like letting a user know who their MP is, and thus where their complaint email should be sent, or identifying what by-laws are applicable in the place where they are standing. Have you ever visited the site of a radical group or non-profit which urged you to write your MP? With Represent, that organization can now easily and cheaply create a widget that figures out where you are and who your MP is, and ensures you have the right address or email address for your letter. This significantly lowers the barrier to advocacy and political mobilization.
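Under the hood, that widget is basically one HTTP lookup: send Represent a latitude/longitude and get back the boundaries (and the officials attached to them) containing that point. A sketch below – the endpoint and response field names are my reading of the service, so treat them as assumptions and check Represent’s own API documentation:

```python
# Sketch: ask Represent which boundaries contain a point. Endpoint and
# JSON field names are assumptions -- verify against Represent's API docs.
import requests

resp = requests.get("https://represent.opennorth.ca/boundaries/",
                    params={"contains": "45.4215,-75.6972"})  # downtown Ottawa
resp.raise_for_status()

for boundary in resp.json().get("objects", []):
    print(boundary.get("boundary_set_name"), "-", boundary.get("name"))
```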

This is why I consider sites like Represent to be core digital infrastructure for a 21st century democracy. Critical because the number of useful services that can educate and engage citizens on politics and government is virtually limitless.

But if we accept that Represent is critical, the site’s limits tell us a lot about the state of our democratic institutions in general, and our open data policy infrastructure in particular. In this regard, there are three insights that come to mind.

1) The information limits of Represent

While Represent can locate any of the federal and provincial ridings (along with the elected officials in them), there are remarkably few cities for which the service works. Calgary, Charlottetown, Edmonton, Mississauga, Montreal, Ottawa, Stratford, Summerside, Toronto and Windsor are all that are identified. (The absence of Vancouver – my home town – is less alarming, as the city does not have wards or boroughs; we elect 10 councillors in an at-large system.) The main reason you won’t find more cities available is simply that many cities choose not to share their ward boundary data with the public. And of course, things don’t need to stop with city wards: there is no reason why Represent couldn’t also tell you which school district you are in, or even which specific school catchment area you are in, in say, Vancouver or North Vancouver.

The paucity of data is an indication of how hard it is to get data from most cities and provinces about the communities in which we live. There has been great success in getting open data portals launched in several cities – and we should celebrate the successes we’ve had – but the reality is, only a tiny fraction of Canadian cities share data about themselves. In the overwhelming majority, useful data about electoral boundaries, elected officials, schools, etc. exists and is used internally by governments (paid for by our tax dollars), but it is never shared publicly and so cannot help drive democratic engagement.

So here’s a new rule: if your city’s boundary data isn’t in Represent, your city is screwing up. It’s a pretty simple metric.

Oh, and Canada Post, you’re the biggest offender of them all. Your data is the default location-specific data set in the country – the easiest way to locate where someone is. Being able to map all this data to postal codes is maybe the most important piece of the puzzle, but sadly, Canada Post clings to data whose creation and maintenance our tax dollars subsidize. In the UK, of course, they made postcode data completely open.

2) Lack of Standards

And of course, even when the data does exist, it isn’t standardized. Previously, non-profits, think tanks and even companies would have to manage data in various forms from innumerable sources (or pay people lots of money to organize the data for them). It shouldn’t be this way. While it is great that Represent helps standardize the data, standard data schemas should already exist for things like MPP/MLA/MNA ridings and descriptions. Instead we have to rely on a group of volunteer hackers to solve a problem the country’s leading governments are unable, or unwilling, to address.

3) Licenses & Legality

However, the real place where Represent shows the shortcomings in Canada’s open data infrastructure is the way the site struggles to deal with the variety of licenses under which it is allowed to use data from various sources.

The simple fact is, in Canada, most “open data” is in fact not open. Rather, most of it has serious restrictions placed upon it that limit the ability of sites like Represent.ca to be useful.

For example, many, many cities still have “share alike” clauses in their licenses – clauses that mean any product created using their data may not have “further restrictions of any kind.” But of course, each city with a “share alike” clause has slightly different restrictions in its license, meaning that none of them can be combined. In the end it means that data from Vancouver cannot be used with data from Edmonton or from Montreal. It’s a complete mess.

Other jurisdictions have no license on their data. For example electoral boundary data for British Columbia, Prince Edward Island, Newfoundland and Nova Scotia is unlicensed, leaving users very unclear about their rights. Hint to these and other jurisdictions: just make it open.

What Represent really demonstrates is that there is a need for a single, standard open data license across Canada. It’s something I’m working on. More to report soon I hope.

Despite these hurdles, Represent is a fantastic project and site – and they are looking for others to help them gather more data. If you want to support them (and I strongly encourage you to do so) check out the bottom of their home page. Big congratulations to everyone involved.


Use The Economist's Data to Find the Best City in the World

Yesterday The Economist Intelligence Unit and Buzzdata launched a $10,000 contest to help enhance The Economist’s “Best city in the world” index.

Yes. It’s a data and visualization competition to identify the best city in the world to live.

As part of the contest, The Economist Intelligence Unit has shared two data sets: its “liveability” and “cost of living” indices for 140 cities around the world. This is, in and of itself, pretty cool. But the contest moves beyond their data. As the website outlines, the competition’s core objective is not just to use this data, but to figure out what other data sets should be used.

Your mission: to create a new “liveability” index, using the 140 cities in the EIU’s datasets, that determines which is the best city in the world to live in, using these datasets PLUS any additional publicly available data sources that you wish to use (note: see the Contest Rules for information on using additional data). You are also required to create a visualization of the new index that you’ve created.

If you’ve always felt that some important factors in livability and quality of life have not been getting the attention they deserve, now is a chance to change the debate (or add them to it!).
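If you’re wondering where to start, one simple approach is to normalize each indicator across the 140 cities and take a weighted sum. A sketch – the file names, column names and weights below are invented for illustration; the real EIU files will differ:

```python
# Sketch of a custom liveability index: rescale indicators to 0-1 and
# combine with weights. File/column names and weights are invented.
import pandas as pd

liveability = pd.read_csv("eiu_liveability.csv", index_col="city")
cost = pd.read_csv("eiu_cost_of_living.csv", index_col="city")

df = liveability.join(cost)
norm = (df - df.min()) / (df.max() - df.min())  # each column rescaled to 0-1

# Higher liveability is good; a higher cost of living counts against a city.
index = 0.7 * norm["liveability_score"] - 0.3 * norm["cost_of_living_index"]
print(index.sort_values(ascending=False).head(10))
```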

You can check out the rules and judging criteria, as well as sign up, over at the contest’s webpage.

I, sadly, won’t be participating in the competition as… I’m pleased to share that I’ll be helping to judge the contest.