
BC Apps For Climate Change Contest to be Announced

Over the past few months I’ve been working with the BC Government on the idea of an “Apps for Climate Change” competition. The idea, initiated by the province, is to hold a development contest akin to the “Apps for Democracy” competition hosted by Washington, DC, but focused on climate change.

I talked a little bit about the upcoming competition during my O’Reilly Gov 2.0 International Online talk and referenced an article by Stephen Hui in the Georgia Straight which outlines some of the competition’s details. (Some people have been asking for that link.)

In short, the province is assembling a fairly large data catalog focused on climate change and greenhouse gas emissions, along with a number of other data sets. I expect the contest to be announced at GLOBE 2010 (Mar 24-26) with a side announcement at OpenGovWest, and I hope to share more information soon. There will be prize money involved – but more importantly, an opportunity to create something that could get serious profile.

In addition to interested independent developers, one hope I have is that non-profits like Greenpeace, the David Suzuki Foundation and others will reach out to developers in their volunteer/activist communities and encourage them to use these data sets in ways that might help the public. I’m also hoping that some private sector actors may see ways to use this data to better serve their clients or save them, or their customers, money.

Either way, I hope the competition sparks the interest of Canadians across the country and generates some interesting applications that can help citizens act on the issue of climate change.

Open Government – New Book from O'Reilly Media

I’m very excited to share that I have a chapter in the new O’Reilly Media book Open Government (US Link & CDN Link). I’ve just been told that the book has come back from the printers and can now be ordered.

Also exciting is that a sample of the book that includes the first 8 chapters can be downloaded as a PDF for free.

The book includes several authors I’m excited to be in the company of, including: Tim O’Reilly, Carl Malamud, Ellen Miller, Micah Sifry, Archon Fung and David Weil. My chapter – number 12 – is titled “After the Collapse,” a reference to the Coasean collapse Clay Shirky talks about in Here Comes Everybody. It explores what is beginning to happen (and what is to come) to government and civil services when transaction and coordination costs for doing work drop dramatically. I’ve packed a lot into it, so it is pretty rich with my thinking, and I’m pleased with the result.

If you care about the future of government as well as the radical and amazing possibilities being opened up by new technologies, processes and thinking, then I hope you’ll pick up a copy. I’m not getting paid for it; instead, a majority of the royalties go to the non-profit Global Integrity.

Also, the O’Reilly people are trying to work out a discount for government employees. We would all like the ideas and thinking in this book to travel far and wide around the globe.

Finally, I’d like to give a big thank you to the editors Laurel Ruma and Daniel Lathrop, along with Sarah Schacht of Knowledge as Power, who made it possible for me to contribute.

The Internet as Surveillance Tool

There is a deliciously ironic, pathetically sad and deeply frightening story coming out of France this week.

On January 1st France’s new (and controversial) law – Haute Autorité pour la Diffusion des Œuvres et la Protection des Droits sur Internet, otherwise known by its abbreviation, Hadopi – came into effect. The law makes it illegal to download copyright protected works and uses a “three-strikes” system of enforcement. The first two times an individual illegally downloads copyrighted content (knowingly or unknowingly) they receive a warning. Upon the third infraction the entire household has its internet access permanently cut off and is added to a blacklist. To restore internet access the household’s computers must be outfitted with special monitoring software which tracks everything the computer does and every website it visits.

Over at FontFeed, Yves Peters chronicles how the French agency charged with enforcing the legislation, also named Hadopi, illegally used a copyrighted font, without the permission of its owner, in its logo design. Worse, once caught, the organization tried to cover up this fact by lying to the public. I can imagine that fonts and internet law are probably not your thing, but the story really is worth reading (and is beautifully told).

But as sad, funny and ironic as the story is, it is deeply scary. Hadopi, which is intended to prevent the illegal downloading of copyrighted materials, couldn’t even launch without (innocently or not) breaking the law. The agency, however, is above the law. There will be no repercussions for the organization and no threat that its internet access will be cut off.

The story for French internet users will, however, be quite different. Over the next few months I wouldn’t be surprised if tens of thousands – or even hundreds of thousands – of French citizens (or their children, or someone else in their home) inadvertently download copyrighted material illegally and, in order to continue to have access to the internet, are forced to allow the French Government to monitor everything they do on their computers. In short, Hadopi will functionally become a system of mass surveillance – a tool that enables the French government to monitor the online activities of more and more of its citizens. Indeed, it is conceivable that after a few years a significant number, possibly even a majority, of French computers could be monitored. Forget Google. In France, the government is the Big Brother you need to worry about.

Internet users in other countries should also be concerned. “Three-strikes” provisions like those adopted by France have allegedly been discussed during the negotiations of ACTA, an international anti-counterfeiting treaty that is being secretly negotiated between a number of developed countries.

Suddenly copyright becomes a vehicle to justify the government’s right to know everything you do online. To ensure that none of your online activities violate copyright, all of your online activities will need to be monitored. France, and possibly your country soon too, will thus transform the internet – the greatest single vehicle for free thought and expression – into a giant wiretap.

(Oh, and just in case you thought the French already didn’t understand the internet, it gets worse. Read this story from The Economist. How one country can be so backward is hard to imagine.)

My Unfinished Business Talk in Toronto

I’m really pleased to share that I’ll be giving a talk at the Ontario College of Art & Design this January 14th, 2010. The talk is one I’ve been giving for government officials a fair bit of late – it is on how technology, open methodologies and social change are creating powerful pressures for reform within our government bureaucracies. The ideas in it also form the basis of a chapter I’ve written for the upcoming O’Reilly Media book on Open Government due out in January (in the US; I assume here in Canada too – more on this in a later post).

I’m completely thrilled to be giving a talk at OCAD and especially want to thank Michael Anton Dila for making this all happen. It was his idea, and he pushed me to make it happen. It is especially generous of Michael and OCAD to have kept the talk free and open to the public.

The talk details are below and you can register here. More exciting has been the interest in the talk – I saw that 100 tickets disappeared in the first 4 hours yesterday – people care about government and policy!

We have much unfinished business with our government – I look forward to digging into it.

ABOUT UNFINISHED BUSINESS

The Unfinished Lecture is a monthly event hosted by the Strategic Innovation Lab at OCAD and sponsored by Torch Partnership. Part of the Unfinished Business initiative, the lectures are intended to generate an open conversation about strategic innovation in the business and design of commercial enterprises and public organizations.

AFTER THE COLLAPSE: Technology, Open and the Future of Government

What do Facebook, 911 and NASA all have in common? They all offer us a window into how our industrial era government may be redesigned for the digital age. In this lecture David Eaves will look at how open methodologies, technology and social change are reshaping the way public service and policy development will be organized and delivered in the future: more distributed, adaptive and useful to an increasingly tech savvy public. Whether you are an interested designer, a disruptive programmer, a restless public servant or a curious citizen, David will push your thinking on what the future has in store for the one institution we all rely on: Government.
As a closing remark, I’d also like to thank Health Canada & Samara, both of whom asked me to put my thoughts on this subject together into a single talk.
Hope to see you in Toronto.

Three Laws of Open Data (International Edition)

When I published the Three Laws of Open Data post back on September 30, 2009 I was pleasantly surprised by how much traffic it garnered. In addition, a number of people emailed me positive feedback about the post (including some who read a revised version on the Australian Government’s Web 2.0 Taskforce blog).

All this got me thinking – there must be a number of people out there for whom the three laws are hard to understand not because they are technical, but because I only ever blog in English. Just once I thought it would be cool to have a blog post translated – and this post felt popular and important enough to be worthwhile. So I put out a Twitter request asking if anyone might “localize” the three laws. After much positive feedback and generous help, I’ll be publishing the text below in several different major languages, one – and sometimes two – a day. If you’ve got friends or colleagues overseas who you think might be interested, please send them the appropriate link!

You can read the post below in:

The Three Laws of Open Data:

Over the past few years I have become increasingly involved in the movement for open government – and more specifically advocating for Open Data: the free sharing of the information government collects and generates with citizens, such that they can analyze it, re-purpose it and use it themselves. My interest in this space comes out of writing and work I’ve done around how technology, open systems and generational change will transform government. Earlier this year I began advising the Mayor and Council of the City of Vancouver, helping them pass the Open Motion (referred to by staff as Open3) and create Vancouver’s Open Data Portal, the first municipal open data portal in Canada. More recently, the Australian Government has asked me to sit on the International Reference Group for its Government 2.0 Taskforce.

Obviously the open government movement is quite broad, but my recent work has pushed me to try to distill out the essence of the Open Data piece of this movement. What, ultimately, do we need, and what are we asking for? Consequently, while presenting on a panel discussion at the Conference for Parliamentarians: Transparency in the Digital Era, held for Right to Know Week and organized by the Canadian Government’s Office of the Information Commissioner, I shared my best effort to date at this distillation: three laws for Open Government Data.

The Three Laws of Open Government Data:

  1. If it can’t be spidered or indexed, it doesn’t exist
  2. If it isn’t available in open and machine readable format, it can’t engage
  3. If a legal framework doesn’t allow it to be repurposed, it doesn’t empower

To explain, (1) basically means: Can I find it? If Google (and/or other search engines) can’t find it, it essentially doesn’t exist for most citizens. So you’d better ensure that you are optimized to be crawled by all sorts of search engine spiders.
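For the technically inclined, here is a minimal sketch of what “spiderable” means in practice – publishing a sitemap.xml that tells crawlers where every dataset page lives. The catalogue URL and dataset names below are made-up placeholders, not any real government site:

```python
# Sketch: generate a sitemap.xml for a hypothetical data catalogue so that
# search-engine spiders can discover every dataset page. The base URL and
# dataset slugs are illustrative placeholders.
from xml.sax.saxutils import escape

BASE_URL = "https://data.example.gov/datasets/"  # hypothetical catalogue root
dataset_slugs = ["ghg-emissions-2008", "transit-stops", "building-permits"]

entries = "\n".join(
    f"  <url><loc>{escape(BASE_URL + slug)}</loc></url>" for slug in dataset_slugs
)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>"
)

with open("sitemap.xml", "w") as f:
    f.write(sitemap)
print(sitemap)
```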

After I’ve found it, (2) notes that, to be useful, I need to be able to use (or play with) the data. Consequently, I need to be able to pull or download it in a useful format (e.g. an API, subscription feed, or a documented file). Citizens need data in a form that lets them mash it up with Google Maps or other data sets, analyze it in Open Office, or convert it to a standard of their choosing and use it in any program they would like. Citizens who can’t use and play with information are citizens who are disengaged/marginalized from the discussion.
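To make this concrete, here is a small sketch of what a citizen can do the moment data arrives in a machine-readable format – the CSV is inlined below as a stand-in for a real portal download, and the column names are my own invention:

```python
# Sketch: why machine-readable matters. Given a CSV download (simulated here
# with an inline string; a real portal URL would replace it), a citizen can
# convert rows into GeoJSON suitable for mashing up with Google Maps.
import csv
import io
import json

csv_text = """name,lat,lon,co2_tonnes
Facility A,49.2827,-123.1207,1200
Facility B,49.2500,-123.1000,870
"""

features = []
for row in csv.DictReader(io.StringIO(csv_text)):
    features.append({
        "type": "Feature",
        "geometry": {"type": "Point",
                     "coordinates": [float(row["lon"]), float(row["lat"])]},
        "properties": {"name": row["name"],
                       "co2_tonnes": float(row["co2_tonnes"])},
    })

print(json.dumps({"type": "FeatureCollection", "features": features}, indent=2))
```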

Finally, even if I can find it and use it, (3) highlights that I need a legal framework that allows me to share what I’ve created, to be able to mobilize other citizens, provide a new service or just point out an interesting fact. This means information and data need to be licensed to allow the freest possible use or, ideally, have no licensing at all. The best government data and information is that which cannot be copyright protected. Data sets licensed in a manner that effectively prevents citizens from sharing their work with one another do not empower; they silence and censor.

Find, Use and Share. That’s what we want.

Of course, a brief scan of the internet has revealed that others have been thinking about this as well. There are these excellent 8 Principles of Open Government Data, which are more detailed and perhaps better suited to a CIO-level (and below) conversation. But for talking to politicians (or Deputy Ministers, Cabinet Secretaries or CEOs) I found the simplicity of these three resonates more strongly; it is a simpler list they can remember and demand.

Making Open Source Communities (and Open Cities) More Efficient

My friend Diederik and I are starting to work more closely with some open source projects on how to help “open” communities (be they software projects or cities) become more efficient.

One of the claims of open source is that many eyes make all bugs shallow. However, this claim is only relevant if there is a mechanism for registering and tackling the bugs. If a thousand people point out a problem, one may find that one is overwhelmed with problems – some of which may be critical, some of which are duplicates and some of which are not problems at all, but mistakes, misunderstandings or feature requests. Indeed, in recent conversations with open source community leaders, one of the biggest challenges and time sinks in a project is sorting through bugs and identifying those that are both legitimate and “new.” Cities, particularly those with 311 systems that act much like the bug tracking software in open source projects, have a similar challenge. They essentially have to ensure that each new complaint is both legitimate and genuinely “new” (and not a duplicate complaint – e.g. are there two potholes at Broadway and 8th, or have two people called in to complain about the same pothole?).
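To illustrate the kind of triage involved, here is a rough sketch of how one might flag a likely-duplicate 311 report – the similarity and distance thresholds are purely illustrative, not anything a real city uses:

```python
# Sketch: flagging likely-duplicate 311 reports. A new report is suspect if
# its description overlaps heavily with an existing one (Jaccard similarity
# on words) AND it was filed within ~50 m of it. Thresholds are illustrative.
import math

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def metres_apart(p, q):
    # Rough equirectangular distance; good enough at city scale.
    dx = (q[1] - p[1]) * 111_320 * math.cos(math.radians(p[0]))
    dy = (q[0] - p[0]) * 110_540
    return math.hypot(dx, dy)

existing = [("pothole on Broadway near 8th", (49.2636, -123.1140))]
new_report = ("big pothole Broadway and 8th", (49.2637, -123.1142))

for text, loc in existing:
    if jaccard(text, new_report[0]) > 0.4 and metres_apart(loc, new_report[1]) < 50:
        print("Likely duplicate of:", text)
```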

The other month Diederik published the graph below, which used bug submission data for Mozilla Firefox tracked in Bugzilla to demonstrate how, over time, bug submitters on average do become more efficient (blue line). However, what is interesting is that despite the improved average quality, the variability in the efficacy of individual bug submitters remained high (red line). The graph makes it appear as though the variability increases as submitters become more experienced, but this is not the case: towards the left there were simply many more bug submitters, and they averaged each other out, creating the illusion of less variability. As you move to the right, the number of bug submitters with these levels of experience is quite small – sometimes only one or two per data point – so the variability simply becomes more apparent.
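A quick simulation makes the point. In the sketch below every submitter has the same true skill; the only thing that changes is how many submitters sit at each experience level, and yet the estimated spread swings far more wildly when there are only a handful of them:

```python
# Sketch: why the red (variability) line only *appears* to widen on the
# right. Every simulated submitter has the same true skill; with many
# submitters per data point the spread estimate is stable, with only a few
# it swings wildly, mimicking the graph's right-hand side.
import random
import statistics

random.seed(1)

def observed_rates(n_submitters, true_rate=0.6, bugs_each=20):
    """Accepted-bug rate observed for each of n equally skilled submitters."""
    return [sum(random.random() < true_rate for _ in range(bugs_each)) / bugs_each
            for _ in range(n_submitters)]

for n in (500, 5):  # many submitters (left of graph) vs few (right of graph)
    spreads = [statistics.pstdev(observed_rates(n)) for _ in range(8)]
    print(f"{n:>3} submitters per point:",
          " ".join(f"{s:.2f}" for s in spreads))
```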

Consequently, the group encircled by the purple oval is very experienced and yet continues to submit bugs the community ultimately chooses to either ignore or deems not worth fixing. Sorting through, testing and evaluating these bugs sucks up precious time and resources.

We are presently looking at more data to assess if we can come up with a profile for what makes a bug submitter fall into this group (as opposed to being “average” or exceedingly effective). If one could screen for such bug submitters, then a community might be able to better educate them and/or provide more effective tools and thus improve their performance. In more radical cases – if the net cost of their participation was too great – one could even screen them out of the bug submission process. If one could improve the performance of this purple oval group by even 25% there would be a significant improvement in the average (blue line). We look forward to talking and sharing more about this in the near future.
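As a back-of-envelope illustration of that 25% figure – with entirely made-up numbers for the size and acceptance rates of each group:

```python
# Sketch: back-of-envelope effect of improving the weakest group. Assume 20%
# of submitters (the "purple oval") file bugs accepted only 20% of the time,
# and the rest 70%. Lifting the weak group's rate by 25% moves the mean.
# All figures here are assumptions for illustration, not measured data.
weak_share, weak_rate, rest_rate = 0.20, 0.20, 0.70

before = weak_share * weak_rate + (1 - weak_share) * rest_rate
after = weak_share * (weak_rate * 1.25) + (1 - weak_share) * rest_rate
print(f"mean acceptance before: {before:.3f}, after: {after:.3f}")
# The payoff grows with the size and badness of the group -- which is
# exactly what the profiling exercise is meant to tell us.
```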

As a secondary point, I feel it is important to note that we are still in the early days of the open source development model. My sense is there are still improvements – largely through more effective community management – that can yield dramatic (as opposed to incremental) boosts in productivity for open source projects. This separates them again from proprietary models, which – as far as I can tell – can at the moment hope for, at best, incremental improvements in productivity. Thus, for those evaluating the costs of open versus closed processes, it might be worth considering the fact that the two approaches may be (and, in my estimation, are) evolving at very different rates.

(If someone from a city government is reading this and you have data regarding 311 reports – we would be interested in analyzing your data to see if similar results bear out – plus it may enable us to help you manage your call volume more effectively.)

The Stimulus Map: Open Data and enhancing our democracy

The subject of the distribution of stimulus monies has been generating a fair amount of interest. Indeed, the Globe published this piece and the Halifax Chronicle-Herald published this piece analyzing the spending. But something more interesting is also happening…

Yesterday, my friend Ducky Sherwood and her husband Jim published their own analysis, an important development for two reasons.

First, their analysis is just plain interesting… they’ve got an excellent breakdown of who is receiving what (Ontario is a big winner in absolute and per capita terms, Quebec is the big loser). Moreover, they’ve made the discussion fun and engaging by creating this map. It shows you every stimulus project in the country, and wherever you click it will highlight nearby projects. The map also displays and colour-codes every riding in the country by party (blue for Conservatives, magenta for everyone else), and the colour’s intensity corresponds to the amount of money received.

Stimulus Map
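For the curious, the colour scheme Ducky describes could be sketched out something like this – the ridings, parties and dollar figures below are invented for illustration:

```python
# Sketch of the map's colour scheme as described above: blue for
# Conservative-held ridings, magenta otherwise, with intensity scaled to
# stimulus dollars received. All figures are made up for illustration.
ridings = [
    ("Riding A", "Conservative", 42_000_000),
    ("Riding B", "Liberal", 13_500_000),
    ("Riding C", "NDP", 2_000_000),
]

max_funding = max(money for _, _, money in ridings)
for name, party, money in ridings:
    base = "blue" if party == "Conservative" else "magenta"
    intensity = money / max_funding  # 0..1 drives the colour's strength
    print(f"{name}: {base} at {intensity:.0%}")
```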

Second, and more interesting for me, is how their analysis hints at the enormous possibilities of what citizens can do when governments share their data and information about programs with the public in useful formats. (You can get spreadsheets of the data and, for those more technically minded, the API can be found here.) This is an example of the Long Tail of Public Policy Analysis in action.
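As a taste of what “useful formats” enables, here is a small sketch of tallying projects per riding. The response shape below is a made-up placeholder (consult the actual API documentation for the real fields); it is inlined so the example runs without a network call:

```python
# Sketch: tally stimulus projects per riding. The JSON below is a
# hypothetical stand-in for what the API might return, not its real schema.
import json
from collections import Counter

api_response = json.loads("""
[{"riding": "Vancouver Centre", "value_band": "under $100K"},
 {"riding": "Vancouver Centre", "value_band": "between $100K and $1M"},
 {"riding": "Halifax", "value_band": "under $100K"}]
""")

counts = Counter(project["riding"] for project in api_response)
for riding, n in counts.most_common():
    print(f"{riding}: {n} project(s)")
```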

This could have a dramatic impact on public discourse. Open data shifts the locus of power in the debate. Previously, simply getting the data was of value since your analysis would likely only compete, at best, with one or two other people’s (usually a news organization’s, or maybe a professor’s). But when anyone can access the information, the value shifts. Simply doing an analysis is no longer interesting (since anyone can do it). Now the quality, relevance, ideological slant, assumptions, etc. of the analysis are of paramount value. This has serious implications – implications I believe bode well for debate and democracy in this country. Indeed, I hope more people will play with the stimulus data (like these guys have) and that a more rigorous debate about both where it is being spent and how it is being spent will ensue. (Needless to say, I believe that spending money on auto bailouts and building roads does little to promote recovery – the real opportunity would have been in seeding the country with more data to power the businesses of tomorrow.)

There are, however, limits to Ducky’s analysis, through no fault of her own. While she can crunch the numbers and create a great map she is… ultimately… limited to the information that government gives her (and all of us). For example, the data set she uses is fairly vague about the value of projects: the government labels them “under $100K” or “between $100K and $1M.” These are hardly precise figures.
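To see how much this vagueness matters, consider what an analyst has to do just to add the numbers up – every figure below is an assumption imputed from the bands, not official data:

```python
# Sketch: the vagueness problem in numbers. To aggregate at all, an analyst
# must impute a figure for each band; midpoints are one crude choice, and
# the gap between low and high assumptions shows how imprecise the data is.
BAND_ESTIMATES = {  # low, midpoint, high -- assumptions, not official values
    "under $100K": (0, 50_000, 100_000),
    "between $100K and $1M": (100_000, 550_000, 1_000_000),
}

projects = ["under $100K", "between $100K and $1M", "under $100K"]

low = sum(BAND_ESTIMATES[band][0] for band in projects)
mid = sum(BAND_ESTIMATES[band][1] for band in projects)
high = sum(BAND_ESTIMATES[band][2] for band in projects)
print(f"total could be anywhere from ${low:,} to ${high:,} (midpoint ${mid:,})")
```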

Nor does the data say anything about the quality of these projects or their impact. Of course, this is what the debate should be about. Where, how effectively, and to what end is our money being spent? Ducky’s analysis allows us to get to these questions more quickly. The point here is that by opening up this stimulus money to popular analysis we can have a debate about effectiveness.

I don’t, for a second, believe that this will be an easy debate – one in which a “right” answer will magically emerge out of the “data.” Quite the opposite: as I pointed out above, the debate will now shift to the economic, ideological and other assumptions that inform each opinion. This could in fact create a less clear picture – but it will also be a picture that is more reflective of the diversity of opinions found in our country, opinions that can scarcely be represented in the two national newspapers. And this is what is most important. Open data allows for a greater debate, one that more citizens can contribute to and be a part of rather than just passively observe from their newspapers and TV screens. The real opportunity of open data is not that it enables a perfect discussion, but a wider, more democratic and thus, as far as I’m concerned, better one.

(An additional note: while it is great that the government has created an API to share this data, let us not get too excited; it is very limited in what it tells us. More data, shared openly, would be better still. Don’t expect this anytime soon. Yesterday the Government dropped 4,476 pages off at the Parliamentary Budget Office rather than send them an electronic spreadsheet (h/t Tim Wilson). Clearly they don’t want the PBO to be able to crunch the numbers on the stimulus package – which means they probably don’t want you to either.)

Searching The Vancouver Public Library Catalog using Amazon

A few months ago I posted about a number of civic applications I’d love to see. These are computer, iPhone or BlackBerry applications, or websites, that leverage data and information shared by the government that would help make life in Vancouver a little nicer.

Recently I was interviewed on CBC’s Spark about some of these ideas that have come to fruition because of the hard work and civic mindedness of some local hackers. Mostly, I talked about Vantrash (which sends emails or tweets to remind people of their upcoming garbage day), but during the interview I also mentioned that Steve Tannock created a script that allows you to search the Vancouver Public Library (VPL) catalog from the Amazon website.

First off – why would you want to use Amazon to search the VPL? Two reasons: First, it is WAY easier to find books on the Amazon site than on the library site, so you can leverage Amazon’s search engine to find books (or book recommendations) at the VPL. Second, it’s a great way to keep the book budget in check!

To use the Amazon website to search the VPL catalog you need to follow these instructions:

1. You need to be using the Firefox web browser. You can download and install it for free here. It’s my favourite browser and if you use it, I’m sure it will become yours too.

2. You will need to install the greasemonkey add-on for Firefox. This is really easy to do as well! After you’ve installed Firefox, simply go here and click on install.

3. Finally, you need to download the VPL-Amazon search script from Steve Tannock’s blog here.

4. While you are at Steve’s blog, write something nice – maybe a thank you note!

5. Go to the Amazon website and search for a book. Under the book title will be a small piece of text letting you know if the VPL has the book in its catalog! (See example picture below) Update: I’m hearing from some users that the script works on the Amazon.ca site but not the Amazon.com site.

I hope this is helpful! And happy searching.

Also, for those who are more technically inclined feel free to improve on the script – fix any bugs (I’m not sure there are any) or make it better!
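For a rough idea of what the script does under the hood, here is a stand-alone sketch of the same check in Python. The VPL search URL and the “no matches” test are hypothetical placeholders – the real catalogue’s URL scheme and markup will differ, so treat this purely as a starting point:

```python
# Sketch of the core idea behind the Greasemonkey script: ask the library
# catalogue about an ISBN and report whether it found a match. The endpoint
# and the "No matches found" string are hypothetical placeholders; adapt
# both to the real VPL catalogue before relying on this.
import urllib.parse
import urllib.request

def vpl_has_isbn(isbn: str) -> bool:
    query = urllib.parse.urlencode({"searchtype": "isbn", "searcharg": isbn})
    url = f"https://catalogue.vpl.example/search?{query}"  # hypothetical
    with urllib.request.urlopen(url) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    return "No matches found" not in page  # naive scrape; adjust to markup

if __name__ == "__main__":
    isbn = "9780000000000"  # replace with a real ISBN
    print("In the VPL catalogue!" if vpl_has_isbn(isbn) else "Not found.")
```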

Amazon shot

19th Century Net Neutrality (and what it means for the 21st Century)

So what do bits of data and a coal locomotive have in common?

It turns out a lot.

In researching an article for a book I’ve discovered an interesting parallel between the two in regard to the issue of Net Neutrality. What is Net Neutrality? It is the idea that when you use the Internet, you do so free of restrictions: any information you download gets treated the same as any other piece of information. This means that your Internet service provider (say Rogers, Shaw or Bell) can’t choose to provide you with certain content faster than other content (or worse, simply block you from accessing certain content altogether).

Normally the issue of Net Neutrality gets cast in precisely those terms – do bits of data flowing through fibre optic and copper cables get treated the same, regardless of whose computer they are coming from and whose computer they are going to. We often like to think these types of challenges are new and unique, but one thing I love about being a student of history is that there are almost always interesting earlier examples of any problem.

Take the late 19th and early 20th century. Although the term would have been foreign to them, Net Neutrality was a raging issue – but not in regard to the telegraph cables of the day. No, it was an issue in regard to railway networks.

In 1903 the United States Congress passed the Elkins Act. The Act forbade railway companies from offering, and railway customers from demanding, preferential rates for certain types of goods. Any “good” that moved over the (railway) network had to be priced and treated the same as any other “good.” In short, the (railway) network had to be neutral and price similar goods equally. What is interesting is that many railway companies welcomed the act because some trusts (corporations) paid the standard rail rate but would then demand that the railroad company give them rebates.

What’s interesting to me is that

a) Net Neutrality was a problem back in the late 19th and early 20th century; and

b) Government regulation was seen as an effective solution to ensuring a transparent and fair marketplace on these networks.

The question we have to ask ourselves is: do we want to ensure that the 21st century (fibre optic) networks foster economic growth, create jobs and improve productivity in much the same way the 19th and 20th century (railway) networks did for that era? If the answer is yes, we’d be wise to look back and see how those networks were managed, both effectively and poorly. The Elkins Act is an interesting starting point, as it represented progressives’ efforts to ensure transparency and equality of opportunity in the marketplace so that it could function as an effective platform for commerce.

The Three Laws of Open Government Data

Yesterday, at the Right To Know Week panel discussion – Conference for Parliamentarians: Transparency in the Digital Era – organized by the Office of the Information Commissioner, I shared three laws for Open Government Data that I’d devised on the flight from Vancouver.

The Three Laws of Open Government Data:

  1. If it can’t be spidered or indexed, it doesn’t exist
  2. If it isn’t available in open and machine readable format, it can’t engage
  3. If a legal framework doesn’t allow it to be repurposed, it doesn’t empower

To explain, (1) basically means: Can I find it? If Google (and/or other search engines) can’t find it, it essentially doesn’t exist for most citizens. So you’d better ensure that you are optimized to be crawled by all sorts of search engine spiders.

After I’ve found it, (2) notes that, to be useful, I need to be able to play with the data. Consequently, I need to be able to pull or download it in a useful format (e.g. an API, subscription feed, or a documented file). Citizens need data in a form that lets them mash it up with Google Maps or other data sets, or analyze it in Excel. This is essentially the difference between VanMaps (look, but don’t play) and the Vancouver Data Portal (look, take and play!). Citizens who can’t play with information are citizens who are disengaged/marginalized from the discussion.

Finally, even if I can find it and play with it, (3) highlights that I need a legal framework that allows me to share what I’ve created, to mobilize other citizens, provide a new service or just point out an interesting fact. This is the difference between Canada’s House of Parliament’s information (which, due to crown copyright, you can take and play with, but don’t you dare share or re-publish) and, say, Whitehouse.gov, where “pursuant to federal law, government-produced materials appearing on this site are not copyright protected.”

Find, Play and Share. That’s what we want.

Of course, a brief scan of the internet has revealed that others have been thinking about this as well. There are these excellent 8 Principles of Open Government Data, which are more detailed and, admittedly, better, especially for a CIO-level (and below) conversation. But for talking to politicians (or Deputy Ministers or CEOs), like those in attendance during yesterday’s panel or, later that afternoon, the Speaker of the House, I found the simplicity of three resonated more strongly; it is a simpler list they can remember and demand.