Tag Archives: opendata

Launching Emitter.ca: Open Data, Pollution and Your Community

This week, I’m pleased to announce the beta launch of Emitter.ca – a website for locating, exploring and assessing pollution in your community.

Why Emitter?

A few weeks ago, Nik Garkusha, Microsoft’s Open Source Strategy Lead and an open data advocate, asked me: “are there any cool apps you could imagine developing using Canadian federal government open data?”

Having looked over the slim pickings of open federal data sets – most of which I saw while linking to them on datadotgc.ca – I remembered one with real potential: Environment Canada’s National Pollutant Release Inventory (NPRI).

[Screenshot: Emitter.ca]

With NPRI, I felt we could build something quite powerful: an application that allowed people to see more clearly who is polluting in their communities, and how much. The raw data on the 220 chemicals that NPRI tracks isn’t, on its own, helpful or meaningful to most Canadians.

We agreed to do something and set for ourselves three goals:

  1. Create a powerful demonstration of how Canadian Federal open data can be used
  2. Develop an application that makes data accessible and engaging to everyday Canadians and provides communities with a tool to better understand their immediate region or city
  3. Be open

With the help of a crew of volunteers we knew and who joined us along the way – Matthew Dance (Edmonton), Aaron McGowan (London, ON), Barranger Ridler (Toronto) and Mark Arteaga (Oakville) – Emitter began to come together.

Why a Beta?

For a few reasons.

  1. There are still bugs, we’d love to hear about them. Let us know.
  2. We’d like to refine our methodology. It would be great to have a methodology that was more sensitive to chemical types, combinations and other factors. Indeed, I know Matt would love to work with ENGOs or academics who might be able to help provide us with better score cards that can help Canadians understand what the pollution near them means.
  3. More features – I’d love to be able to include more datasets… like data on tumour rates, asthma rates or even employment rates.
  4. I’d LOVE to do mobile – to be able to show pollution data in a mobile app, even using augmented reality.
  5. Trends – once we get 2009 and/or earlier data we could begin to show trends in pollution rates by facility.
  6. Plus much, much more…

Build on our work

Finally, we have made everything we’ve done open: our methodology is transparent, and anyone can access the data we used through an API that we share. You can also learn more about Emitter and how it came to be by reading blog posts by the various developers involved.
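The API itself isn’t documented in this post, so what follows is only a rough sketch of the kind of thing it enables; the field names (`facility`, `substance`, `quantity_tonnes`) are invented for illustration and are not Emitter’s actual schema. Given a JSON payload of the sort such an API might return, you could total reported releases per facility:

```python
import json
from collections import defaultdict

def releases_by_facility(records):
    """Sum reported release quantities per facility.

    `records` is a list of dicts; the field names are illustrative,
    not Emitter's actual schema.
    """
    totals = defaultdict(float)
    for r in records:
        totals[r["facility"]] += r["quantity_tonnes"]
    return dict(totals)

# A sample payload shaped like what a pollutant-release API might return.
sample = json.loads("""
[
  {"facility": "Plant A", "substance": "benzene", "quantity_tonnes": 1.2},
  {"facility": "Plant A", "substance": "toluene", "quantity_tonnes": 0.8},
  {"facility": "Plant B", "substance": "benzene", "quantity_tonnes": 2.5}
]
""")

print(releases_by_facility(sample))
```

The point of exposing the data through an API is exactly this: a few lines of code stand between the raw inventory and a community-level summary.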

Thank yous

Obviously the amazing group of people who made Emitter possible deserve an enormous thank you. I’d also like to thank the Open Lab at Microsoft Canada for contributing the resources that made this possible. We should also thank those who allowed us to build on their work, including Cory Horner, whose Howdtheyvote.ca API for Electoral District boundaries we were able to use (why Elections Canada doesn’t offer this is beyond me and, frankly, is an embarrassment). Finally, it is important to acknowledge and thank the good people at Environment Canada who not only collected this data, but had the foresight and wisdom to make it open. I hope we’ll see more of this.

In Sum

Will Emitter change the world? It’s hard to imagine. But hopefully it is a powerful example of what can happen when governments make their data open: people will take that data and make it accessible in new and engaging ways.

I hope you’ll give it a spin and I look forward to sharing new features as they come out.

Update!

Since yesterday, Emitter.ca has picked up some media coverage. Here are some of the links so far…

Hanneke Brooymans of the Edmonton Journal wrote this piece, which was in turn picked up by the Ottawa Citizen, Calgary Herald, Canada.com, Leader Post, The Province, Times Colonist and Windsor Star.

Nestor Arellano of ITBusiness.ca wrote this piece

Burke Campbell, a freelance writer, wrote this piece on his site.

Kate Dubinski of the London Free Press wrote a piece titled It’s Easy to Dig up Dirt Online about Emitter.ca.

Launching datadotgc.ca 2.0 – bigger, better and in the clouds

Back in April of this year we launched datadotgc.ca – an unofficial open data portal for federal government data.

At a time when only a handful of cities had open data portals and the words “open data” were not being even talked about in Ottawa, we saw the site as a way to change the conversation and demonstrate the opportunity in front of us. Our goal was to:

  • Be an innovative platform that demonstrates how government should share data.
  • Create an incentive for government to share more data by showing ministers, public servants and the public which ministries are sharing data, and which are not.
  • Provide a useful service to citizens interested in open data by bringing all the government data together into one place, making it easier to find.

In every way we have achieved this goal. Today the conversation about open data in Ottawa is very different. I’ve demoed datadotgc.ca to the CIOs of the federal government’s ministries and numerous other stakeholders, and an increasing number of people understand that, in many important ways, the policy infrastructure for doing open data already exists, since datadotgc.ca shows the government is already doing open data. More importantly, a growing number of people recognize it is the right thing to do.

Today, I’m pleased to share that thanks to our friends at Microsoft & Raised Eyebrow Web Studio and some key volunteers, we are taking our project to the next level and launching Datadotgc.ca 2.0.

So what is new?

In short, rather than just pointing to the 300 or so data sets that exist on federal government websites, members may now upload datasets to datadotgc.ca, where we can both host them and offer custom APIs. This is made possible because we have integrated Microsoft’s Azure cloud-based Open Government Data Initiative into the website.

So what does this mean? It means people can add government data sets, or even mash up government data sets with their own data, to create interesting visualizations, apps or websites. Already some of our core users have started to experiment with this feature. London, Ontario’s transit data can be found on Datadotgc.ca, making it easier to build mobile apps, and a group of us have taken Environment Canada’s facility pollution data, uploaded it and are using the API to create an interesting app we’ll be launching shortly.
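To make the mash-up idea concrete, here is a minimal sketch of joining a government dataset with your own data; the field names and figures below are invented for illustration and are not drawn from the actual datasets:

```python
# Hypothetical mash-up: a government facility dataset joined with a
# user-supplied dataset keyed on the same city field.
gov_data = [
    {"city": "London", "facility": "Plant A", "releases_tonnes": 12.0},
    {"city": "London", "facility": "Plant B", "releases_tonnes": 3.0},
    {"city": "Oakville", "facility": "Plant C", "releases_tonnes": 5.0},
]
own_data = {"London": 366151, "Oakville": 165613}  # invented populations

def per_capita_releases(gov_rows, populations):
    """Total releases per city, divided by that city's population."""
    totals = {}
    for row in gov_rows:
        totals[row["city"]] = totals.get(row["city"], 0.0) + row["releases_tonnes"]
    return {city: t / populations[city] for city, t in totals.items()}

print(per_capita_releases(gov_data, own_data))
```

Nothing here is specific to pollution data: any dataset hosted on the site with a shared key (city, riding, postal code) can be combined the same way.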

So we are excited. We still have work to do around documentation and tracking down some more federal data sets we know are out there, but we’ve gone live because nothing helps us develop like having users and people telling us what is, and isn’t, working.

But more importantly, we want to go live to show Canadians, and our governments, what is possible. Again, our goal remains the same – to push the government’s thinking about what is possible around open data by modeling what should be done. I believe we’ve already shifted the conversation – with luck, datadotgc.ca v2 will help shift it further and faster.

Finally, I can never thank our partners and volunteers enough for helping make this happen.

International Open Data Hackathon Website and Wiki is up

So when I first wrote Let’s Do an International Open Data Hackathon I thought… maybe there’ll be 5 or 6 cities that will want to do one.

That may still be the case, but given the number of visits the blog post experienced and the number of people who registered interest on the etherpad, we may end up with a few more – which would be exciting.

To date, people in 33 different cities spanning 16 countries and four continents (some really nice guys from India emailed me saying they want to do one, but haven’t connected to the wiki yet) have said they are interested in organizing a hackathon in their home town. Will all of these happen? Who knows. But it is great to see that there is so much interest in an issue that represents an important opportunity.

Exciting.

So, to celebrate the growing interest we’ve launched a website and a wiki to help inform people about the open data hackathon and give them a place to register their interest and organize.

Please, check them out! Feel free to create wiki pages for your cities, to share best practices and ideas on running hackathons, or even just translate materials and content into a parallel page.

Articles I'm Digesting 1/11/2010

Here’s a few articles I recently digested:

Enabling Access and Reuse of Public Sector Information in Canada: Crown Commons Licenses, Copyright, and Public Sector Information by Elizabeth F. Judge

This piece (which you can download as a PDF) is actually a chapter in a book titled: From “Radical Extremism” to “Balanced Copyright” : Canadian Copyright and the Digital Agenda.

This piece provides a fantastic overview of both how and why Crown Copyright impedes the remixing and repurposing of government information. The only thing confusing to me about the article is that it focuses a great deal on data which, by the author’s own admission, is not covered by Crown Copyright:

With respect to data, Crown copyright does not protect raw data (unprocessed data, such as numbers entered into a database), but it does protect an original expression of the data (for example, an original map is a copyrightable artistic work based on geospatial data) and compilations (including compilations of data), providing that there is an original selection or arrangement of the data (that is, there has been human intervention where skill and judgment has been exercised).

Given I often have to explain to government types that data is not covered by Crown Copyright (this is in part why it often has licenses – more restrictive still – attached to it), my only concern about the paper is that, because of its strong focus on data, it will inadvertently muddy the waters. However, it is still a good piece, and I suspect many who read it will come away hoping that some change to Crown Copyright legislation will be forthcoming.

The Global Debt Clock by The Economist Intelligence Unit

Few outside of Canada understand how much Canadian politics was dominated by the issue of “the debt” in the 1990s. When Bill Clinton made his first visit to Canada the headlines were more concerned with Canada’s bond rating being downgraded than the visit of the new US president.

The belief, however, that Canada has tamed its debt may be a myth. The challenge may be that people are starting to wise up to all that downgrading – that the debt has simply shifted from the national level (which people historically looked at) to the provincial level (which is rarely calculated into “national” debt). The Economist chart puts things into sharp (and dim?) perspective:

Canada’s public debt: $1,257,953,424,658 or $37,042.44 per person or 82.3% of GDP

America’s public debt: $9,117,200,547,945 or $29,491.12 per person or 62.0% of GDP

Of course, Canada’s debt includes health care expenditures which in the United States are (more) borne by private citizens, so once you factor in private spending the debt burden per individual may be closer than these numbers suggest. But then household debt in Canada is about to overtake that in America, so again…
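As a quick back-of-the-envelope check on the figures above (the populations are implied by the article’s numbers, not stated in it):

```python
def implied_population(total_debt, debt_per_person):
    """Derive the population that the per-person figure assumes."""
    return total_debt / debt_per_person

def implied_gdp(total_debt, debt_to_gdp_ratio):
    """Derive GDP from total debt and the debt-to-GDP fraction."""
    return total_debt / debt_to_gdp_ratio

canada_debt, canada_per_person = 1_257_953_424_658, 37_042.44
us_debt, us_per_person = 9_117_200_547_945, 29_491.12

print(round(implied_population(canada_debt, canada_per_person)))  # roughly 34 million
print(round(implied_population(us_debt, us_per_person)))          # roughly 309 million
print(implied_gdp(canada_debt, 0.823))                            # roughly $1.53 trillion
```

The implied populations (about 34 million and 309 million) match Canada and the US circa 2010, so the per-person figures are internally consistent with the totals.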

This all said, pretty much every country in the developed world looks ugly in terms of debt… this may, sadly, be the boomers’ biggest legacy.

Disconnect: Why our politics is so out of touch and what it means for our future by Richard Florida and Jeremy D. Mayer

Written back in 2007, this article deserves a revisit:

“In our view, American politics today is distinguished by one feature: instability. In place of an enduring political force such as post-1896 Republican dominance or the Democrats after Roosevelt in 1932, American politics in recent years has see-sawed back and forth. Twelve years of Reagan-Bush were followed by 8 of Bill Clinton, and then Bush and Rove, now this. And, only 6 of those years saw one party with simultaneous control of the presidency and Congress.

This instability, in our view, stems from one primary source: Our economic system has undergone a tectonic shift, to which the political system is still trying to adapt. Just as our politics was recast a century ago by the forces of the Industrial Revolution, so too is it being reshaped today by the rise of technology, innovation and creativity as economic forces. The rise of this innovative, knowledge-based Creative Economy is even more significant and more challenging to politics than the Industrial Economy. Today, this sector accounts for roughly a third of the American workforce – or roughly 40 million workers – nearly three times the industrial sector and blue-collar working class. What’s more, these creative occupations account for the lion’s share of all wealth generation, accounting for nearly half of all wages and salaries paid in the United States. That’s nearly $2 trillion, or as much as the manufacturing and service sectors combined.

But the creative economy doesn’t just generate phenomenal wealth. It also sorts people across new economic and geographical boundaries and generates inequality between and within states and regions as great as that of the early Industrial Revolution. As a result, we’re living through a period of tumultuous political adjustment.”

and speaking of revisits…

American Backlash by Michael Adams

Offers an alternative explanation regarding the challenges faced by incumbent parties in the US. I remembered this as I was recently reading Wente’s piece about Palin and the Tea Party, where she cites pollster Scott Rasmussen:

“who argues that the major division in the country now is not between the Republicans and Democrats, but between the mainstream public and the political class – the small proportion of the population, perhaps 10 per cent, (including most people who work in mainstream media) that still believes that government tries to serve the public interest, rather than colluding with big business against ordinary people.”

This was, of course, the thesis of Adams’s book back in 2006. Nice to be ahead of the curve.

Open Data Hackathon page by Volunteers around the world

Hope that there will be a dedicated site for this up this week – have a few people stepping forward on that front. In the interim, please do consider adding your name if you are interested in helping organize one in your city.

Let's do an International Open Data Hackathon

Let’s do it.

Last summer, I met Pedro Markun and Daniela Silva at the Mozilla Summit. During the conversation – feeling the drumbeat vibe of the conference – we agreed it would be fun to do an international event. Something that could draw attention to open data.

A few weeks before, I’d met Edward Ocampo-Gooding, Mary Beth Baker and Daniel Beauchamp at GovCamp Ottawa. Fresh from the success of getting the City of Ottawa to see the wisdom of open data and hosting a huge open data hackathon at city hall, they were thinking “let’s do something international.” Yesterday, I tested the idea on the Open Knowledge Foundation’s listserv and a number of great people from around the world wrote back right away and said… “We’re interested.”

This idea has lots of owners, from Brazil to Europe to Canada, and so my gut check tells me there is interest. So let’s take the next step. Let’s do it.

Why.

Here’s my take on three great reasons why now is a good time for a global open data hackathon:

1) Build on Success: There are a growing number of places that now have open data. My sense is we need to keep innovating with open data – to show governments and the public why it’s serious, why it’s fun, why it makes life better, and above all, why it’s important. Let’s get some great people together with a common passion and see what we can do.

2) Spread the Word: There are many places without open data. Some places have developed communities of open data activists and hackers, others have nascent communities. In either case these communities should know they are not alone, that there is an international community of developers, hackers and advocates who want to show them material and emotional support. They also need to demonstrate, to their governments and the public, why open data matters. I think a global open data hackathon can’t hurt, and could help a whole lot. Let’s see.

3) Make a Better World: Finally, there is a growing amount of global open data thanks to the World Bank’s open data catalog and its Apps for Development competition. The Bank is asking developers to build apps that, using this data (plus any other data you want), will contribute to reaching the Millennium Development Goals by 2015. No matter who, or where, you are in the world, this is a cause I believe we can all support. In addition, for communities with little available open data, the Bank’s catalog might provide at least some data of interest.

So with that all said, I think we should propose hosting a global open data hackathon that is simple and decentralized: locally organized, but globally connected.

How.

The premise for the event would be simple, relying on five basic principles.

1. It will happen on Saturday, Dec 4th (after a fair bit of canvassing of colleagues around the world, this seems to be a date a number can make work). It can be as big or as small, as long or as short, as you’d like it.

2. It should be open. Daniel, Mary Beth and Edward have done an amazing job in Ottawa attracting a diverse crowd of people to hackathons, even having whole families come out. Chris Thorpe in the UK has done similarly amazing work getting a young and diverse group hacking. I love Nat Torkington’s words on the subject. Our movement is stronger when it is broader.

3. Anyone can organize a local event. If you are keen to help organize one in your city and/or just participate, add your name to the relevant city on this wiki page. Wherever possible, try to keep it to one per city – let’s build some community and get new people together. Which city or cities you share with is up to you, as is how you do it. But let’s share.

4. You can hack on anything that involves open data. You could build a local app, create a global Apps for Development submission, scrape data from a government website and make it available in a useful format for others, or create your own catalog of government data.

5. Let’s share ideas across cities on the day. Each city’s hackathon should do at least one demo, brainstorm or proposal that it shares in an interactive way with members of a hackathon in at least one other city. This could be via video stream, Skype or chat – anything, but let’s get to know one another and share the cool projects or ideas we are hacking on. There are some significant challenges to making this work: timezones, languages, culture, technology… but who cares, we are problem solvers, let’s figure out a way to make it work.
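On point 4, the “scrape data and make it available in a useful format” idea can be surprisingly small. Here is a minimal, standard-library-only sketch that turns an HTML table (the inline sample stands in for a government web page) into CSV rows; real pages will of course need more robust handling:

```python
import csv
import io
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the cell text of every <tr>/<td>/<th> in a page."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._row.append("")

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data.strip()

# A stand-in for a scraped government page.
page = """
<table>
  <tr><th>Facility</th><th>Releases (t)</th></tr>
  <tr><td>Plant A</td><td>12.0</td></tr>
  <tr><td>Plant B</td><td>3.5</td></tr>
</table>
"""

parser = TableExtractor()
parser.feed(page)

buf = io.StringIO()
csv.writer(buf).writerows(parser.rows)
print(buf.getvalue())
```

An afternoon of this kind of work is exactly the sort of contribution a hackathon participant with no app idea of their own can make: liberate one table, publish the CSV.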

Again, let’s not try to boil the ocean. Let’s have a bunch of events, where people care enough to organize them, and try to link them together with a simple short connection/presentation. Above all, let’s raise some awareness, build something and have some fun.

What’s next?

1. If you are interested, sign up on the wiki. We’ll move to something more substantive once we have the numbers.

2. Reach out and connect with others in your city on the wiki. Start thinking about the logistics. And be inclusive. If someone new shows up, let them help too.

3. Share with me your thoughts. What’s got you excited about it? If you love this idea, let me know, and blog/tweet/status update about it. Conversely, tell me what’s wrong with any or all of the above. What’s got you worried? I want to feel positive about this, but I also want to know how we can make it better.

4. If there is interest, let’s get a simple website up with a basic logo that anyone can use to show they are part of this. Something like the OpenDataOttawa website comes to mind, but likely simpler still, just laying out the ground rules and providing links to where events are taking place. Might even just be a wiki. I’ve registered opendataday.org; not wedded to it, but it felt like a good start. If anyone wants to help set that up, please let me know. Would love the help.

5. Localization. If there is bandwidth locally, I’d love for people to translate this blog post and repost it locally. (Let me know, as I’ll try cross-posting it here, or at least link to it.) It is important that this not be an English-language-only event.

6. If people want a place to chat with others about this, feel free to post comments below. Also, the Open Knowledge Foundation’s Open Government mailing list is probably a good resource.

Okay, hopefully this sounds fun to a few committed people. Let me know what you think.

The Open Data Debate Arrives in Ottawa

The Liberals are promising to create an open data portal – opendata.gc.ca – much like President Obama has done in the United States and both Gordon Brown and David Cameron have done in the United Kingdom.

It’s a savvy move.

In May 2010, when it launched a public consultation on the Digital Economy, the government invited the public to submit proposals and vote on them. Two of the top three most-voted ideas involved asking the government to open up access to government-collected data. Three months after the submissions closed, it appears the opposition has decided to act on Canadians’ wishes and release a 21st-century open government strategy that reflects these popular demands.

I’ve discovered that today, at 1pm EST, the Liberals will announce that, if elected, they will adopt a government-wide directive in which “the default position for all departments and agencies will be for the release of information to the public, both proactively and responsively, after privacy and other legal requirements are met.”

There is much that both ordinary citizens and advocates of greater government transparency will like in the proposal. Not only have the Liberals mirrored the most aggressive parts of President Obama’s transparency initiatives, they are also promising some specific and aggressive policies of their own. In addition to promising to launch opendata.gc.ca to share government data, the document proposes the creation of accesstoinformation.gc.ca, where citizens could search past and current access to information requests as well as see response times. A third website, entitled accountablespending.gc.ca, is also proposed. It would allow government grants, contributions and contracts to be searched.

The announcement brings to the Canadian political debate an exciting issue that first gained broad attention in early 2009 when Tim Berners-Lee, the inventor of the world wide web, called on the world’s governments to share their data. By May of that year the United States had launched data.gov, and in September of 2009 the British Government launched data.gov.uk, both of which garnered significant domestic attention. In addition, dozens of cities around the world – including Vancouver, Edmonton and, most recently, Ottawa – have launched websites where they share information that local charities, non-profits, businesses and ordinary citizens might find useful.

Today, citizens in these jurisdictions enjoy improved access to government information about the economy, government spending, access to information requests, and statistical data. In the United States, developers have created websites that empower citizens by enabling them to analyze government data or see what government data exists about their community, while a British program alerts citizens to restaurants’ health inspection scores. The benefits, however, are not limited to improved transparency and accountability. An independent British study estimated that open data could contribute as much as £6 billion to the British economy. Canada’s computer developers, journalists and entrepreneurs have been left wondering: when will their government give them access to the data their tax dollars paid to collect?

One obvious intent of the Liberals is to reposition themselves at the forefront of a debate around government transparency and accountability. This is ground that has traditionally been Conservative but, with the cancellation of the long-form census, the single-source jet fighter contract and, more recently, allegations that construction contracts were awarded to Conservative party donors, is once again contestable.

What will be interesting to see is the Conservative response. It’s been rumored the government has explored an open data portal, but to date there has been no announcement. Open data is one area where, often, support exists across the political spectrum. In the United Kingdom, Gordon Brown’s Labour government launched data.gov.uk, but David Cameron’s Conservative government has pursued the project more aggressively still, forcing the release of additional and higher-value data to the public. A failure to adopt open data would be a tragedy – it would cause Canada to lag in an important space that is beginning to reshape how governments work together and how they serve and interact with citizens. But perhaps most obviously, open data and open government shouldn’t be a partisan issue.

OpenGovWest (BC edition): Are you out west?

Something good is taking shape in my backyard…

From the city of Vancouver’s open data portal to Apps 4 Climate Action to the Water legislation blog, a great deal of the leadership and cutting edge work in open government is taking place in BC. Many places across the country and around the world look to what is happening on the west coast and are trying to draw lessons and see how it can be replicated.

Recognizing this fact, a number of great people have been working behind the scenes for the last couple of months pulling together a conference to share these successes, talk about challenges and opportunities, and generally think about what could happen next. The conference…? OpenGovWest BC.

If you are in BC and interested in open government, open data and gov 2.0, here’s a conference designed and built for you.

A number of speakers have already been publicly confirmed, others are, apparently, being held as surprises. There are also slots open for presentations if you have a project you’d like to share with the community out west.

The conference will be taking place on November 10th in Victoria, BC – if you are out west and feel passionate about these topics the same way I do, I hope you’ll consider coming.

And while we are talking about conferences, I also want to share Open Government Data Camp, which will be happening in London, UK on November 18th and 19th. I’m excited to say I’ll be there with our friends from the Open Knowledge Foundation and the Sunlight Foundation, along with numerous others. Harder to get to, but also likely to be quite, quite fun…

World Bank Discussion on Open Data – lessons for developers, governments and others

Yesterday the World Bank formally launched its Apps For Development competition and Google announced that in addition to integrating the World Bank’s (large and growing) data catalog into searches, it will now do it in 34 languages.

What is fascinating about this announcement and the recent changes at the Bank is that it appears to be very serious about open data and even more serious about open development. The repercussions of this shift, especially if the Bank starts demanding that its national partners also disclose data, could be significant.

This, of course, means there is lots to talk about. So, as part of the overall launch of the competition, and in an effort to open up the workings of the World Bank, the organization hosted its first Open Forum, in which a panel of guests talked about open development and open data. The Bank was kind enough to invite me, so I ducked out of GTEC a pinch early and flew down to DC to meet some of the amazing people behind the World Bank’s changes and discuss the future of open data and what it means for open development.

Embedded below is the video of the event.

As a little backgrounder here are some links to the bios of the different panelists and people who cycled through the event.

Our host: Molly Wood of CNET.

Andrew McLaughlin, Deputy Chief Technology Officer, The White House (formerly head of Global Public Policy and Government Affairs for Google) (twitter feed)

Stuart Gill, World Bank expert, Disaster Mitigation and Response for LAC

David Eaves, Open Government Writer and Activist

Rakesh Rajani, Founder, Twaweza, an initiative focused on transparency and accountability in East Africa (twitter)

Aleem Walji, Manager, Innovation Practice, World Bank Institute (twitter)

How Governments misunderstand the risks of Open Data

When I’m asked to give a talk about or consult on policies around open data I’ve noticed there are a few questions that are most frequently asked:

“How do I assess the risks to the government of doing open data?”

or

“My bosses say that we can only release data if we know people aren’t going to do anything wrong/embarrassing/illegal/bad with it”

I would argue that these questions are either flawed in their logic or have already been largely addressed.

Firstly, it seems problematic to assess the risks of open data without also assessing the opportunity. Any activity – from walking out my front door to scaling Mount Everest – carries with it risks. What needs to be measured are not the risks in isolation but the risks balanced against the opportunities and benefits.

But more importantly, the logic of the question is flawed in another manner. It suggests that the government should only take action if every possible negative use can be prevented.

Let’s forget about data for a second – imagine you are building a road. Now ask: “what are the risks that someone might misuse this road?” Well… they are significant. People are going to speed and they are going to jaywalk. But it gets worse. Someone may rob a bank and then use the road as part of their escape route. Of course, the road will also provide more efficient transportation for thousands of people, it will reduce costs, improve access, help ambulances save people’s lives and do millions of other things, but people will also misuse it.

However, at no point in any policy discussion in any government has anyone said “we can’t build this road because, hypothetically, someone may speed or use it as an escape route during a robbery.”

And yet, this logic is frequently accepted, or at least goes unchallenged, as appropriate when discussing open data.

The fact is, most governments already have the necessary policy infrastructure for managing the overwhelming majority of risks concerning open data. Your government likely has provisions dealing with privacy – applied to open data, these should address privacy concerns. Your government likely has provisions for dealing with confidential and security-related issues – applied to open data, these should address those concerns. Finally, your government likely has a legal system that outlines what is, and is not, legal – when it comes to the use of open data, this legal system is in effect.

If someone gets caught speeding, we have enforcement officials and laws that catch and punish them. The same is true with data. If someone uses it to do something illegal we already have a system in place for addressing that. This is how we manage the risk of misuse. It is seen as acceptable for every part of our life and every aspect of our society. Why not with open data too?

The opportunities of both roads and data are significant enough that we build them and share them despite the fact that a small number of people may not use them appropriately. Should we be concerned about those who will misuse them? Absolutely. But do we allow a small amount of misuse to stop us from building roads or sharing data? No. We mitigate the concern.

With open data, I’m happy to report that we already have the infrastructure in place to do just that.

UK Adopts Open Government License for everything: Why it's good and what it means

In the UK, the default is open.

Yesterday, the United Kingdom made an announcement that radically reformed how it will manage what will become the government’s most important asset in the 21st century: knowledge & information.

On the National Archives website, the UK Government made public its new license for managing software, documents and data created by the government. The document is both far-reaching and forward-looking. Indeed, I believe this policy may be the boldest and most progressive step taken by a government since the United States decided that documents created by the US government would directly enter the public domain and not be copyrighted.

In almost every aspect of the license, the UK government will manage its “intellectual property” by setting the default to be open and free.

Consider the introduction to the framework:

The UK Government Licensing Framework (UKGLF) provides a policy and legal overview for licensing the re-use of public sector information both in central government and the wider public sector. It sets out best practice, standardises the licensing principles for government information and recommends the use of the UK Open Government Licence (OGL) for public sector information.

The UK Government recognises the importance of public sector information and its social and economic value beyond the purpose for which it was originally created. The public sector therefore needs to ensure that simple licensing processes are in place to enable and encourage civil society, social entrepreneurs and the private sector to re-use this information in order to:

  • promote creative and innovative activities, which will deliver social and economic benefits for the UK
  • make government more transparent and open in its activities, ensuring that the public are better informed about the work of the government and the public sector
  • enable more civic and democratic engagement through social enterprise and voluntary and community activities.

At the heart of the UKGLF is a simple, non-transactional licence – the Open Government Licence – which all public sector bodies can use to make their information available for free re-use on simple, flexible terms.

And just in case you thought that was vague, consider these two quotes from the framework. This one for data:

It is UK Government policy to support the re-use of its information by making it available for re-use under simple licensing terms.  As part of this policy most public sector information should be made available for re-use at the marginal cost of production. In effect, this means at zero cost for the re-user, especially where the information is published online. This maximises the social and economic value of the information. The Open Government Licence should be the default licence adopted where information is made available for re-use free of charge.

And this one for software:

  • Software which is the original work of public sector employees should use a default licence.  The default licence recommended is the Open Government Licence.
  • Software developed by public sector employees from open source software may be released under a licence consistent with the open source software.

These statements are unambiguous and a dramatic step in the right direction. Information and software created by governments are, by definition, public assets. Tax dollars have already paid for their collection and/or development, and the government has already benefited from using them. They are also non-rivalrous goods. This means that, unlike a road, if I use government information or software, I don’t diminish your ability to use it (in contrast, only so many cars can fit on a road, and they wear it down). Indeed, with intellectual property quite the opposite is true: by using it I may actually make the knowledge more valuable.

This is, obviously, an exciting development. It has generated a number of thoughts:

1. With this move the UK has further positioned itself at the forefront of the knowledge economy:

By enacting this policy the UK government has just enabled the entire country, and indeed the world, to use its data, knowledge and software to do whatever people would like. In short, an enormous resource of intellectual property has just been opened up to be developed, enhanced and re-purposed. This could help lower costs for new software products, diminish the cost of government and help foster more efficient services. It also means a great deal of this innovation will be happening in the UK first. This could become a significant strategic advantage in the 21st century economy.

2. Other jurisdictions will finally be persuaded it is “safe” to adopt open licenses for their intellectual property:

If there is one thing that I’ve learnt dealing with governments, it is that, for all the talk of innovation, many governments, and particularly their legal departments, are actually scared to be the first to do something. With the UK taking this bold step I expect a number of other jurisdictions to more vigorously explore this opportunity. (It is worth noting that Vancouver did, as part of its open motion, state that software developed by the city would have an open license applied to it, but the policy work to implement such a change has yet to be announced.)

3. This should foster a debate about information as a public asset:

In many jurisdictions there is still the myth that governments can and should charge for data. Britain’s move should provide a powerful example of why these types of policies should be challenged. There is significant research showing that, for GIS data for example, the revenue collected from the sale of data simply pays for the system that collects that revenue. This is to say nothing of the policy and managerial overhead of choosing to manage intellectual property. Charging for public data has never made financial sense, and it raises a number of ethical challenges (so only the wealthy get to benefit from a publicly derived good?). Hopefully, for less progressive governments, the UK’s move will refocus the debate along the right path.

4. It is hard to displace a policy leader once they are established.

The real lesson here is that innovative and forward looking jurisdictions have huge advantages that they are likely to retain. It should come as no surprise that the UK made this move – it was among the first national governments to create an open data portal. By being an early mover it has seen the challenges and opportunities before others and so has been able to build on its success more quickly.

Consider other countries – like Canada – that may wish to catch up. Canada does not even have an open data portal as yet (although this may soon change). This means it is now almost two years behind the UK in assessing the opportunities and challenges around open data and rethinking intellectual property. These two years cannot be magically or quickly caught up. More importantly, it suggests that some public services have cultures that recognize and foster innovation – especially around key issues in the knowledge economy – while others do not.

Knowledge economies will benefit from governments that make knowledge, information and data more available. Hopefully this will serve as a wake-up call to governments in other jurisdictions. The 21st century knowledge economy is here, and government has a role to play. Best not be caught lagging.