Category Archives: open data

OpenGovWest (BC edition): Are you out west?

Something good is taking shape in my backyard…

From the city of Vancouver’s open data portal to Apps 4 Climate Action to the Water legislation blog, a great deal of the leadership and cutting edge work in open government is taking place in BC. Many places across the country and around the world look to what is happening on the west coast and are trying to draw lessons and see how it can be replicated.

Recognizing this fact, a number of great people have been working behind the scenes for the last couple of months pulling together a conference to share these successes, talk about challenges and opportunities, and generally think about what could happen next. The conference…? OpenGovWest BC.

If you are in BC and interested in open government, open data and gov 2.0, here’s a conference designed and built for you.

A number of speakers have already been publicly confirmed, others are, apparently, being held as surprises. There are also slots open for presentations if you have a project you’d like to share with the community out west.

The conference will be taking place on November 10th in Victoria, BC – if you are out west and feel passionate about these topics the same way I do, I hope you’ll consider coming.

And while we are talking about conferences, I also want to share Open Government Data Camp, which will be happening in London, UK on November 18th and 19th. I’m excited to say I’ll be there with our friends from the Open Knowledge Foundation and the Sunlight Foundation, along with numerous others. Harder to get to, but also likely to be quite, quite fun…

World Bank Discussion on Open Data – lessons for developers, governments and others

Yesterday the World Bank formally launched its Apps For Development competition and Google announced that, in addition to integrating the World Bank’s (large and growing) data catalog into searches, it will now do so in 34 languages.

What is fascinating about this announcement and the recent changes at the Bank is that it appears to be very serious about open data and even more serious about open development. The repercussions of this shift, especially if the Bank starts demanding that its national partners also disclose data, could be significant.

This, of course, means there is lots to talk about. So, as part of the overall launch of the competition and in an effort to open up the workings of the World Bank, the organization hosted its first Open Forum in which a panel of guests talked about open development and open data. The Bank was kind enough to invite me and so I ducked out of GTEC a pinch early and flew down to DC to meet some of the amazing people behind the World Bank’s changes and discuss the future of open data and what it means for open development.

Embedded below is the video of the event.

As a little backgrounder here are some links to the bios of the different panelists and people who cycled through the event.

Our host: Molly Wood of CNET.

Andrew McLaughlin, Deputy Chief Technology Officer, The White House (formerly head of Global Public Policy and Government Affairs for Google) (twitter feed)

Stuart Gill, World Bank expert, Disaster Mitigation and Response for LAC

David Eaves, Open Government Writer and Activist

Rakesh Rajani, Founder, Twaweza, an initiative focused on transparency and accountability in East Africa (twitter)

Aleem Walji, Manager, Innovation Practice, World Bank Institute (twitter)

How Governments misunderstand the risks of Open Data

When I’m asked to give a talk about or consult on policies around open data I’ve noticed there are a few questions that are most frequently asked:

“How do I assess the risks to the government of doing open data?”

or

“My bosses say that we can only release data if we know people aren’t going to do anything wrong/embarrassing/illegal/bad with it”

I would argue that these questions are either flawed in their logic, or have already been largely addressed.

Firstly, it seems problematic to assess the risks of open data without also assessing the opportunity. Any activity – from walking out my front door to scaling Mount Everest – carries with it risks. What needs to be measured are not the risks in isolation but the risks balanced against the opportunities and benefits.

But more importantly, the logic of the question is flawed in another manner. It suggests that the government should only take action if every possible negative use can be prevented.

Let’s forget about data for a second – imagine you are building a road. Now ask: “what are the risks that someone might misuse this road?” Well… they are significant. People are going to speed and they are going to jaywalk. But it gets worse. Someone may rob a bank and then use the road as part of their escape route. Of course, the road will also provide more efficient transportation for thousands of people, it will reduce costs, improve access, help ambulances save people’s lives and do millions of other things, but people will also misuse it.

However, at no point in any policy discussion in any government has anyone said “we can’t build this road because, hypothetically, someone may speed or use it as an escape route during a robbery.”

And yet, this logic is frequently accepted, or at least goes unchallenged, as appropriate when discussing open data.

The fact is, most governments already have the necessary policy infrastructure for managing the overwhelming majority of risks concerning open data. Your government likely has provisions dealing with privacy – if applied to open data, these should address privacy concerns. Your government likely has provisions for dealing with confidential and security-related issues – if applied to open data, these should address those concerns. Finally, your government(s) likely has a legal system that outlines what is, and is not, legal – when it comes to the use of open data, this legal system is in effect.

If someone gets caught speeding, we have enforcement officials and laws that catch and punish them. The same is true with data. If someone uses it to do something illegal we already have a system in place for addressing that. This is how we manage the risk of misuse. It is seen as acceptable for every part of our life and every aspect of our society. Why not with open data too?

The opportunities of both roads and data are significant enough that we build them and share them despite the fact that a small number of people may not use them appropriately. Should we be concerned about those who will misuse them? Absolutely. But do we allow a small amount of misuse to stop us from building roads or sharing data? No. We mitigate the concern.

With open data, I’m happy to report that we already have the infrastructure in place to do just that.

Rethinking Freedom of Information Requests: from Bugzilla to AccessZilla

Last week I gave a talk at the Conference for Parliamentarians hosted by the Information Commissioner as part of Right to Know Week.

During the panel I noted that, if we are interested in improving response times for Freedom of Information (FOI) requests (or, in Canada, Access to Information (ATIP) requests), why doesn’t the Office of the Information Commissioner use Bugzilla-type software to track requests?

Such a system would have a number of serious advantages, including:

  1. Requests would be public (although the identity of the requester could remain anonymous), this means if numerous people request the same document they could bandwagon onto a single request
  2. Requests would be searchable – this would make it easier to find documents already released and requests already completed
  3. You could track performance in real time – you could see how quickly different ministries, individuals, groups, etc… respond to FOI/ATIP requests, you could even sort performance by keywords, requester or time of the year
  4. You could see who specifically is holding up a request

In short such a system would bring a lot of transparency to the process itself and, I suspect, would provide a powerful incentive for ministries and individuals to improve their performance in responding to requests.

For those unfamiliar with Bugzilla, it is an open source software application used by a number of projects to track “bugs” and feature requests in their software. So, for example, if you notice the software has a bug, you register it in Bugzilla, and then, if you are lucky and/or if the bug is really important, some intrepid developer will come along and develop a patch for it. Posted below, for example, is a bug I submitted for Thunderbird, an email client developed by Mozilla. It’s not as intuitive as it could be but you can get the general sense of things: when I submitted the bug (2010-01-09), who developed the patch (David Bienvenu), its current status (Fixed), etc…

[Screenshot: the Thunderbird bug report in Bugzilla]

Interestingly, an FOI or ATIP request really isn’t that different from a “bug” in a software program. In many ways, Bugzilla is just a complex and collaborative “to do” list manager. I imagine it wouldn’t be that hard to reskin it so that it could be used to manage and monitor access to information requests. Indeed, I suspect there might even be a community of volunteers who would be willing to work with the Office of the Information Commissioner to help make it happen.

Below I’ve done a mockup of what I think a revamped Bugzilla (renamed AccessZilla) might look like. I’ve put numbers next to some of the features so that I can explain them in detail below.

[Mockup: the AccessZilla interface, with numbered features]

So what are some of the features I’ve included?

1. Status: Now an ATIP request can be marked with a status; these might be as simple as submitted, in process, under review, fixed and verified fixed (meaning the submitter has confirmed they’ve received it). This alone would allow the Information Commissioner, the submitter, and the public to track how long an individual request (or an aggregate of requests) stays in each part of the process.

2. Keywords: Wouldn’t it be nice to search for other FOI/ATIP requests with similar keywords? Perhaps someone has submitted a request for a document that is similar to your own, but not something you knew existed or had thought of… Keywords could be a powerful way to find government documents.

3. Individual accountability: Now you can see who is monitoring the request on behalf of the Office of the Information Commissioner and who the ATIP officer is within the ministry. If the rules permitted, the public servants involved in the document might have their names attached here as well (or maybe this option would only be available to those who log on as ATIP officers).

4. Logs: You would be able to see the last time the request was modified. This might include getting the documents ready, expressing concern about privacy or confidentiality, or simply asking for clarification about the request.

5. Related requests: Like keywords, but more sophisticated. Why not have the software look at the words and people involved in the request and suggest other, completed requests that it thinks might be similar in type and therefore of interest to the user? Seems obvious.

6. Simple and reusable resolution: Once the ATIP officer has the documentation, they can simply upload it as an attachment to the request. This way not only can the original user quickly download the document, but any subsequent user who stumbles upon the request during a search could download the documents. Better still, any public servant who has unclassified documents that might relate to the request can simply upload them directly as well.

7. Search: This feels pretty obvious… it would certainly make citizens’ lives much easier and be the basic ante for any government that claims to be interested in transparency and accountability.

8. Visualizing it (not shown): The nice thing about all of these features is that the data coming out of them could be visualized. We could generate real-time charts showing average response time by ministry, a list of respondents sorted from slowest to fastest, even something as mundane as most-searched keywords. The point is that with visualizations, a government’s performance around transparency and accountability becomes more accessible to the general public.
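To make point 8 concrete, here is a minimal sketch of how tracked requests could be turned into per-ministry response-time statistics. The data model is entirely my own invention – none of these field names come from Bugzilla or any real ATIP system:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date
from statistics import mean
from typing import Optional


@dataclass
class AccessRequest:
    ministry: str
    submitted: date
    resolved: Optional[date]  # None while the request is still open
    status: str               # e.g. "submitted", "in process", "fixed"


def average_response_days(requests):
    """Average days from submission to resolution, grouped by ministry."""
    by_ministry = defaultdict(list)
    for r in requests:
        if r.resolved is not None:  # only completed requests count
            by_ministry[r.ministry].append((r.resolved - r.submitted).days)
    return {m: mean(days) for m, days in by_ministry.items()}
```

A chart of these averages, updated in real time, is exactly the kind of public scorecard that would change the incentives for ministries.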

It may be that there is much better software out there for doing this (like JIRA); I’m definitely open to suggestions. What I like about Bugzilla is that it can be self-hosted, it’s free and it’s open source. Mostly, however, software like this creates an opportunity for the Office of the Information Commissioner in Canada, and access to information managers around the world, to alter the incentives for governments to complete FOI/ATIP requests as well as make it easier for citizens to find out information about their government. It could be a fascinating project to reskin Bugzilla (or some other software platform) to do this. Maybe Information Commissioners from around the world could even pool their funds to sponsor such a reskinning of Bugzilla…

UK Adopts Open Government License for everything: Why it's good and what it means

In the UK, the default is open.

Yesterday, the United Kingdom made an announcement that radically reformed how it will manage what will become the government’s most important asset in the 21st century: knowledge & information.

On the National Archives website, the UK Government made public its new license for managing software, documents and data created by the government. The document is both far reaching and forward looking. Indeed, I believe this policy may be the boldest and most progressive step taken by a government since the United States decided that documents created by the US government would directly enter the public domain and not be copyrighted.

In almost every aspect of the license, the UK government will manage its “intellectual property” by setting the default to be open and free.

Consider the introduction to the framework:

The UK Government Licensing Framework (UKGLF) provides a policy and legal overview for licensing the re-use of public sector information both in central government and the wider public sector. It sets out best practice, standardises the licensing principles for government information and recommends the use of the UK Open Government Licence (OGL) for public sector information.

The UK Government recognises the importance of public sector information and its social and economic value beyond the purpose for which it was originally created. The public sector therefore needs to ensure that simple licensing processes are in place to enable and encourage civil society, social entrepreneurs and the private sector to re-use this information in order to:

  • promote creative and innovative activities, which will deliver social and economic benefits for the UK
  • make government more transparent and open in its activities, ensuring that the public are better informed about the work of the government and the public sector
  • enable more civic and democratic engagement through social enterprise and voluntary and community activities.

At the heart of the UKGLF is a simple, non-transactional licence – the Open Government Licence – which all public sector bodies can use to make their information available for free re-use on simple, flexible terms.

And just in case you thought that was vague, consider these two quotes from the framework. This one for data:

It is UK Government policy to support the re-use of its information by making it available for re-use under simple licensing terms.  As part of this policy most public sector information should be made available for re-use at the marginal cost of production. In effect, this means at zero cost for the re-user, especially where the information is published online. This maximises the social and economic value of the information. The Open Government Licence should be the default licence adopted where information is made available for re-use free of charge.

And this one for software:

  • Software which is the original work of public sector employees should use a default licence.  The default licence recommended is the Open Government Licence.
  • Software developed by public sector employees from open source software may be released under a licence consistent with the open source software.

These statements are unambiguous and a dramatic step in the right direction. Information and software created by governments are, by definition, public assets. Tax dollars have already paid for their collection and/or development and the government has already benefited by using them. They are also non-rivalrous goods. This means, unlike a road, if I use government information, or software, I don’t diminish your ability to use it (in contrast, only so many cars can fit on a road, and they wear it down). Indeed, with intellectual property quite the opposite is true: by using it I may actually make the knowledge more valuable.

This is, obviously, an exciting development. It has generated a number of thoughts:

1.     With this move the UK has further positioned itself at the forefront of the knowledge economy:

By enacting this policy the UK government has just enabled the entire country, and indeed the world, to use its data, knowledge and software to do whatever people would like. In short, an enormous resource of intellectual property has just been opened up to be developed, enhanced and re-purposed. This could help lower costs for new software products, diminish the cost of government and help foster more efficient services. A great deal of this innovation will be happening in the UK first – and that could become a significant strategic advantage in the 21st century economy.

2.     Other jurisdictions will finally be persuaded it is “safe” to adopt open licenses for their intellectual property:

If there is one thing that I’ve learnt dealing with governments it is that, for all the talk of innovation, many governments, and particularly their legal departments, are actually scared to be the first to do something. With the UK taking this bold step I expect a number of other jurisdictions to more vigorously explore this opportunity. (It is worth noting that Vancouver did, as part of the open motion, state that software developed by the city would have an open license applied to it, but the policy work to implement such a change has yet to be announced.)

3.     This should foster a debate about information as a public asset:

In many jurisdictions there is still the myth that governments can and should charge for data. Britain’s move should provide a powerful example of why these types of policies should be challenged. There is significant research showing that, for GIS data for example, money collected from the sale of data simply pays for the collection system itself. This is to say nothing of the policy and managerial overhead of choosing to manage intellectual property. Charging for public data has never made financial sense, and it poses a number of ethical challenges (so only the wealthy get to benefit from a publicly derived good?). Hopefully, for less progressive governments, the UK’s move will refocus the debate along the right path.

4.     It is hard to displace a policy leader once they are established.

The real lesson here is that innovative and forward looking jurisdictions have huge advantages that they are likely to retain. It should come as no surprise that the UK made this move – it was among the first national governments to create an open data portal. By being an early mover it has seen the challenges and opportunities before others and so has been able to build on its success more quickly.

Consider other countries – like Canada – that may wish to catch up. Canada does not even have an open data portal as of yet (although this may soon change). This means that it is now almost 2 years behind the UK in assessing the opportunities and challenges around open data and rethinking intellectual property. These two years cannot be magically or quickly caught up. More importantly, it suggests that some public services have cultures that recognize and foster innovation – especially around key issues in the knowledge economy – while others do not.

Knowledge economies will benefit from governments that make knowledge, information and data more available. Hopefully this will serve as a wake up call to other governments in other jurisdictions. The 21st century knowledge economy is here, and government has a role to play. Best not be caught lagging.

Right to Know Week – going on Right Now

So, for those not in the know (…groan) this week is Right to Know Week.

Right to Know (RTK) Week is an internationally designated week with events taking place around the world. It is designed to improve people’s awareness of their right to access government information and the role such access plays in democracy and good governance. Here in Canada there is an entire week’s worth of events planned and it is easy to find out what’s happening near you.

Last year, during RTK Week, I was invited to speak in Ottawa on a panel for parliamentarians. My talk, called Government Transparency in a Digital Age (blog post about it & slideshare link), seemed to go well and the Information Commissioner soon after started quoting some of my ideas and writings in her speeches and testimony/reports to parliament. Unsurprisingly, she has become a fantastic ally and champion in the cause for open data. Indeed, most recently, the Federal Information Commissioner, along with all her provincial counterparts, released a joint statement calling on their respective governments to proactively disclose information “in open, accessible and reusable formats.”

What is interesting about all this is that over the course of the last year the RTK community – as witnessed by the Information Commissioner’s transformation – has begun to understand why “the digital” is radically transforming what access means and how it can work. There is an opportunity to significantly enlarge the number and type of allies in the cause of “open government.” But for this transformation to take place, the traditional players will need to continue to rethink and revise both their roles and their relationships with these new players. This is something I hope to pick up on in my talk.

So yes… this year, I’ll be back in Ottawa again.

I’ll once again be part of the Conference for Parliamentarians-Balancing Openness and the Public Interest in Protecting Information panel, which I’ll be doing with:

  • David Ferriero, Archivist of the United States
  • Vanessa Brinkmann, Counsel, Initial Request Staff, Office of Information Policy, U.S. Department of Justice; and
  • James Travers of the Toronto Star

Perhaps even more exciting than the panel I’m on, though, is the panel that shows how quickly both this week and the Information Commissioner’s office are trying to transform. Consider that, this year, RTK will include a panel on open data titled “Push or Pull: Liberating Government Information.” It will be chaired by Microsoft’s John Weigelt and have on it:

  • Nathalie Des Rosiers, General Counsel, Canadian Civil Liberties Association
  • Toby Mendel, Executive Director of the Centre for Law and Democracy
  • Kady O’Malley, Parliamentary blogger for CBC.ca’s Inside Politics blog
  • Jeff Sallot, Carleton University journalism instructor and former Globe and Mail journalist

Sadly I have a prior commitment back in Vancouver so won’t be there in person, but hope to check it out online, hope you will too.

Welcome to Right to Know Week. Hope you’ll join in the fray.

Good Backgrounder on Open Data for Cities – (Looking at You #VoteTO)

Yesterday the Martin Prosperity Institute released another installment of its Toronto Election 2010 discussion papers, this one focused on Open Data.

For citizens of any city this is a fantastic primer on what open data is, why it matters and, in the case of Toronto, why it should be an election issue in the upcoming civic election.

Full disclosure: I did sit down with the paper’s authors at the Institute – Kimberly Silk and Jacqueline Whyte Appleby – to talk about a number of the critical aspects surrounding this issue. Their depth and experience in municipal and regional issues has produced an invaluable resource. I hope citizens of cities everywhere are able to make use of it, but I also hope that citizens of Toronto use it to ask questions of the candidates for Mayor and council.

Again, you can download the report here.

For those not familiar with the Institute, you can read more about it here (excerpt below):

The Lloyd & Delphine Martin Prosperity Institute is the world’s leading think-tank on the role of sub-national factors – location, place and city-regions – in global economic prosperity. Led by Director Richard Florida, we take an integrated view of prosperity, looking beyond economic measures to include the importance of quality of place and the development of people’s creative potential.

Does your Government (and thus you) actually own its data?

For those who missed it, there was a fascinating legal analysis of Public Engines, Inc.’s effort to sue ReportSee, Inc. the other day on the Berkman Center’s Citizen Media Law Project blog.

If you haven’t read it, I encourage you to take a look. It’s more than a legal brief. It is a cautionary tale for every government official.

Why is this?

The story goes like this. Public Engines Inc. is paid by police departments to collect and analyze their crime data. Given this privileged position, Public Engines strips the data of privacy-related information and, through a service called CrimeReports.com, allows citizens to see maps of it and so forth.

The problem is, this isn’t actually open data. As I argue in the three laws of open data (and the good folks at Berkman seem to share my sense of humour), crime data for cities that contract with Public Engines Inc. isn’t open. You can look at the data, but you can’t touch it. Worse still… don’t even think about playing with it (unless you are doing so ON the crimereports.com website, in a way that their license lets you – it’s all quite constraining stuff).

And herein lies ReportSee Inc.’s big mistake. It scraped this data from CrimeReports.com and offered it up in a competing manner.

The legal analysis on the post is very much worth reading. At the end of the day, everyone is behaving rationally. Public Engines Inc. is trying to protect its monopoly on crime data, and the investment it has made in cleaning it of private information. ReportSee is simply trying to access public data in the only place where – in many instances – it appears to be made available.

The real parties to blame here are the governments that signed these agreements and that don’t understand that data is both a strategic and public asset. Naturally, the smart people at Berkman understood this and jumped all over it:

The bottom line is that this sort of dispute could be avoided if government agencies are more proactive and farsighted when negotiating terms with third-party providers of data management services. In particular, government agencies should maintain control over the resulting data, or at a minimum, require that the contractor permit a wide range of uses of the data. It’s not just in the public interest of promoting government transparency and accountability. It’s also in the agencies’ interest to streamline its public records requests. The agencies are already paying for the data management services anyway, why spend even more government resources in order to respond to redundant public records requests?

Indeed, the post notes that:

…that government agencies often pay third parties to collect, compile and maintain public records data in useful formats, and who may retain rights over the data. This isn’t the first time a third-party data contractor has stepped in the way of a commercial use of data feeds. In the Bay Area a few years ago, Routsey’s iPhone app making use of data feeds with bus and train arrival times got in a jam when the contractor providing the data to MUNI, the public transportation agency, asserted its rights to the data.

So, if you are a government official, this is the critical lesson. Many vendors know that if they control the data, they control you. They’ve got you locked into buying their software and possibly even locked into buying their consulting services. More importantly, they now have a monopoly over what the public can learn about services and information their tax dollars paid to deliver and collect. No government would ever allow the New York Times or the Globe and Mail to become the exclusive distributor of government information. And yet, every day, governments sign contracts with software vendors that effectively do just this, but with something more basic than information: the raw data. Frightening enough stuff for governments. Still more frightening for us citizens.

Also, having heard tale after tale of government legal offices fighting open data initiatives, I’m reminded of how I wish some government lawyers would take the time they spend preventing the public from accessing public data and reallocate it towards preventing publicly funded data from becoming the monopoly assets of private vendors.

Links from Gov2.0 Summit talk and bonus material

My 5-minute, lightning-fast, jam-packed talk (do I do other formats? answer… yes) from yesterday’s Gov2.0 Summit has just been posted to YouTube. I love that this year the videos have the slides integrated into them.

For those who were, and were not, there yesterday, I wanted to share links to all the great sites and organizations I cited during my talk, I also wanted to share one or two quick stories I didn’t have time to dive into:

VanTrash and 311:

As one of the more mature apps in Vancouver using open data, VanTrash keeps showing us how these types of innovations just keep giving back in new and interesting ways.

In addition to being used by over 3000 households (despite never being advertised – this is all word of mouth), it turns out that city staff are also finding a use for VanTrash.

I was recently told that 311 call staff use VanTrash to help troubleshoot incoming calls from residents who are having problems with garbage collection. The first thing one needs to do in such a situation is identify which collection zone the caller lives in – it turns out VanTrash is the fastest and most effective way to accomplish this. Simply input the caller’s address into the top right-hand field and presto – you know their zone and schedule. Much better than trying to find their address on a physical map that you may or may not have near your station.
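For the curious, the core of this kind of lookup is just a point-in-polygon test against the city’s zone boundaries. Here is a minimal sketch; the zone names and coordinates are made up for illustration (the real app draws on Vancouver’s open data catalog, and a geocoder would first turn the address into coordinates):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (a list of vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            # x-coordinate where the edge crosses the ray through (x, y)
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def find_zone(x, y, zones):
    """zones: dict mapping zone name -> polygon vertex list. Returns the
    first zone containing the point, or None."""
    for name, polygon in zones.items():
        if point_in_polygon(x, y, polygon):
            return name
    return None
```

Nothing more exotic than this is needed to answer “which zone is this caller in?” in a fraction of the time it takes to consult a paper map.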

TaxiCity, Open Data and Game Development

Another interesting spin-off of open data. The TaxiCity development team, which recreated downtown Vancouver in 2-D using data from the open data catalog, noted that creating virtual cities in games could be a lot easier with open data. You could simply randomize the height of buildings and presto – an instant virtual city would be ready. While the buildings would still need to be skinned, one could quickly recreate cities people know, or create fake cities that felt realistic as they’d be based on real plans. More importantly, this process could help reduce the time and resources needed to create virtual cities in games – an innovation that may be of interest to those in the video game industry. Of course, given that Vancouver is a hub for video game development, it is exactly these types of innovations the city wishes to foster and that will help sustain Vancouver’s competitive advantage.
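The idea is simple enough to sketch in a few lines. Here the 2-D footprints are made-up placeholder data standing in for the building outlines in an open data catalog; each gets a randomized height to produce an instant “virtual city”:

```python
import random


def generate_city(footprints, min_height=10.0, max_height=150.0, seed=None):
    """Pair each 2-D building footprint with a random height (in metres).

    footprints: list of polygons, each a list of (x, y) vertices.
    Returns a list of (footprint, height) tuples ready for extrusion
    into 3-D by a game engine.
    """
    rng = random.Random(seed)  # seedable so a given "city" is reproducible
    return [(fp, rng.uniform(min_height, max_height)) for fp in footprints]
```

Because the footprints come from real city plans, even randomized heights produce a skyline with believable streets and block shapes.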

Links (in order of appearance in my talk)

Code For America shirt designs can be seen in all their glory here and can be ordered here. As a fun aside, I literally took that shirt off Tim O’Reilly’s back! I saw it the day before and said I’d wear it on stage. Tim overheard me and said he’d give me his if I was serious…

Vancouver’s Open Motion (or Open3, as it is internally referred to by staff) can be read in the city’s PDF version or an HTML version from my blog.

Vancouver’s Open Data Portal is here. Keep an eye on this page as new data sets and features are added. You can get RSS feed or email updates on the page, as well as see its update history.

Vantrash the garbage reminder service’s website is here. There’s a distinct mobile interface if you are using your phone to browse.

ParkingMobility, an app that crowdsources the location of disabled parking spaces and enables users to take pictures of cars illegally parked in disabled spots to assist in enforcement.

TaxiCity, the Centre for Digital Media project sponsored by Bing and Microsoft, has its project page here. Links to the source code, documentation, and a ton of other content are also available. Really proud of these guys.

Microsoft’s internal Vancouver Open Data Challenge fostered a number of apps. Most have been open-sourced, so you can get access to the code as well. The apps include:

The Graffiti Analysis app written by University of British Columbia undergraduate students can be downloaded from the blog post I wrote about their project.

BTA Works – the research arm of Bing Thom Architects has a great website here. You can’t download their report about the future of Vancouver yet (it is still being peer-reviewed) but you can read about it in this local newspaper article.

Long Tail of Public Policy – I talk about this idea in some detail in my chapter on O’Reilly Media’s Open Government. There is also a brief blog post and slide from my blog here.

Vancouver’s Open Data License is here. Edmonton, Ottawa and Toronto use essentially the same license. There is still lots that could be done on this front, mind you… Indeed, getting all these cities onto a single standard license should be a priority.

Vancouver Data Discussion Group is here. You need to sign in to join but it is open to anyone.

Okay, hope those are interesting and helpful.

The Challenge of Open Data and Metrics

One promise of open data is its ability to inform citizens and consumers about the quality of local services. At the Gov 2.0 Summit yesterday the US Department of Health and Human Services announced it was releasing data on hospitals, nursing homes and clinics in the hope that developers will create applications that show citizens and consumers how their local hospital stacks up against others. In short, how good, or even how safe, is their local hospital?

In Canada we already have some experience with this type of measurement. The Fraser Institute publishes an annual report card on school performance in Alberta, BC, Ontario and Washington. (For those unfamiliar with the Fraser Institute, it is a right-wing think tank based in Vancouver with, shall we say, dubious research credentials but strong ideological and fundraising goals.)

Perhaps unsurprisingly, private schools do rather well in the Fraser Institute’s report card. Indeed, it would appear (and I may be off by one here) that the top 18 schools on the list are all private. This supports a narrative – one consistent with the Fraser Institute’s outlook – that private schools are inherently better than state-run schools. But, of course, that would be a difficult conclusion to sustain. Private schools tend to be populated with kids from wealthy families with better-educated parents, kids who have been given a blessed head start in life. Also, and not noted in the report card, many private schools are comfortable turfing out under-performing or unruly students. This means that the “delayed advancement rate,” one critical metric of a school’s performance, is dramatically less affected than at a public school, which cannot as easily send students packing.

Indeed, the Fraser Institute’s report card is rife with problems, something that teachers’ unions and, say, equally ideological but left-oriented think tanks like the Centre for Policy Alternatives are all too happy to point out.

While I loathe the Fraser Institute’s simplistic report card and think it is of dubious value to parents, I do like that they are at least trying to give parents some tool by which to measure schools. The notion that schools, teachers and education quality can’t be measured, or are too complicated to measure, is untenable. I suspect few parents – especially those in, say, jobs where they are evaluated – believe it. Nor does such a position help parents assess the quality of education their child is receiving. While parents may understand, be sympathetic to, or even agree that this is a complicated issue, the success of Ontario’s school locator makes clear that many parents want and like these tools.

Ultimately the problem here isn’t the open data (despite what critics of the Ontario government’s school comparison website would have you believe). Besides, are we now going to hide or suppress data so that parents can’t assess their kids’ schools? Nor is the problem school report cards per se. If anything, the problem is that the Fraser Institute has had the field all to itself. If teachers’ groups, other think tanks, or anyone else believes that the Fraser Institute’s report cards are too crude, why not design a better one? The data is available (and the government could easily be pressured to make more of it available). Why don’t teachers’ groups share with parents the metrics by which they believe parents should evaluate and compare schools? What this issue could use is some healthy competition and debate – one that generates more options and tools for parents.
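And designing an alternative is not technically hard once the data is open: pick the metrics you believe matter, weight them, and publish the methodology so it can be debated. A minimal sketch of what that might look like – the metric names, weights and school figures below are entirely invented for illustration:

```python
# Illustrative sketch: a transparent composite school score built from
# openly published metrics. Metric names, weights, and figures are
# invented for demonstration -- the point is that anyone who disagrees
# with a report card's methodology can publish a competing one.

WEIGHTS = {
    "exam_average": 0.4,       # provincial exam results, 0-100
    "graduation_rate": 0.3,    # percent of students graduating, 0-100
    "student_retention": 0.3,  # percent of students kept all year, 0-100
}

def composite_score(school):
    """Weighted average of a school's metrics, on a 0-100 scale."""
    return sum(school[metric] * weight for metric, weight in WEIGHTS.items())

schools = {
    "Maplewood Secondary": {
        "exam_average": 72, "graduation_rate": 90, "student_retention": 98,
    },
    "Oakridge Academy": {
        "exam_average": 85, "graduation_rate": 95, "student_retention": 80,
    },
}

for name, metrics in sorted(schools.items(),
                            key=lambda s: -composite_score(s[1])):
    print(f"{name}: {composite_score(metrics):.1f}")
```

Note how including a retention metric changes the picture: a school that quietly sheds its weakest students no longer gets a free ride. That is exactly the kind of methodological argument a competing report card could make in the open.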

The challenge for government is to make data more easily available. When educational data is more accessible, less time, IT skill and energy are needed to organize it, and precious resources can instead be focused on developing and visualizing the scoring methodology. This certainly seems to be Health and Human Services’ approach: lower transaction costs, galvanize a variety of assessment applications and foster a healthy debate. It would be nice if ministries of education in Canada took a similar view.

But the second half of that challenge is also important: groups outside of government need to recognize that they can have a role, and that there are consequences to not participating. The mistake is to ask how to deal with groups like the Fraser Institute that use crude metrics; instead we need to encourage more groups, including our own organizations, to contribute to the debate, to give it more nuance, and to create better tools. Leaving the field to the Fraser Institute is a dangerous strategy, one that will serve few people. This is even more the case since in the future we are likely to have more, not less, data about education, health and a myriad of other services and programs.

So, the challenge for readers is – will your organization participate?