Category Archives: open source

London, UK, Open Government Data Camp Keynote – Nov 18, 2010

Here is my opening keynote from the Open Government Data Camp held in London on Nov 18th of this year (2010). Embedded below it is the day-two keynote by the always excellent Tom Steinberg of mysociety.org.

Hope these are thought-provoking for novice and veteran tech and open government types, as well as those just curious about how our world is changing.

My speech gets all very serious after the initial 12-year-old boy moment. It was a great audience at the camp – really wonderful to have such an engaged group of people.

[Video: David Eaves, from Open Knowledge Foundation on Vimeo]

[Video: Tim Berners-Lee, Prof. Nigel Shadbolt & Tom Steinberg, from Open Knowledge Foundation on Vimeo]

Wikileaks and the coming conflict between closed and open

I’ve been thinking about wikileaks ever since the story broke. Most of the stories – like those written by good friends Taylor Owen and Scott Gilmore – are very much worth reading, but I think they miss the point about wikileaks and/or assess it on their own terms, and thus fail to understand what wikileaks is actually about and what it is trying to do. We need to be clear in our understanding, and thus about the choices we are about to confront.

However, before you read anything I write, there are smarter people out there – two in particular – who have said things that I’m not reading anywhere else. The first is Jay Rosen (key excerpt below), whose 15-minute late-night Pressthink video on the subject is brilliant. The second is zunguzungu’s piece Julian Assange and the Computer Conspiracy; “To destroy this invisible government” (key excerpt further below), a cool and calculated dissection of wikileaks’ goals and intentions. I’ve some thoughts below, but these two pieces are, in my mind, the most important things you can read on the subject and strongly inform my own piece (much, much further below). I know that this is all very long, and that many of you won’t have the patience, but I hope that what I’ve written and shared below is compelling enough to hold your attention; I certainly think it is important enough.

Jay Rosen:

While we have what purports to be a “watchdog press” we also have, laid out in front of us, the clear record of the watchdog press’s failure to do what it says it can do, which is to provide a check on power when it tries to conceal its deeds and its purpose. So I think it is a mistake to reckon with Wikileaks without including in the frame the spectacular failures of the watchdog press over the last 10, 20, 40 years, but especially recently. And so, without this legitimacy crisis in mainstream American journalism, the leakers might not be so inclined to trust Julian Assange and a shadowy organization like Wikileaks. When the United States is able to go to war behind a phony case, when something like that happens and the Congress is fooled and a fake case is presented to the United Nations and war follows and 100,000s of people die and the stated rationale turns out to be false, the legitimacy crisis extends from the Bush government itself to the American state as a whole and the American press and the international system because all of them failed at one of the most important things that government by consent can do: which is reason giving. I think these kinds of huge cataclysmic events within the legitimacy regime lie in the background of the Wikileaks case, because if it wasn’t for those things Wikileaks wouldn’t have the supporters it has, the leakers wouldn’t collaborate the way that they do and the moral force behind exposing what this government is doing just wouldn’t be there.

This is one of the things that makes it hard for our journalists to grapple with Wikileaks. On the one hand they are getting amazing revelations. I mean the diplomatic cables tell stories of what it is like to be inside the government and inside international diplomacy that anyone who tries to understand government would want to know. And so it is easy to understand why the big news organizations like the New York Times and The Guardian are collaborating with Wikileaks. On the other hand they are very nervous about it because it doesn’t obey the laws of the state and it isn’t a creature of a given nation and it is inserting itself in-between the sources and the press. But I think the main reason why Wikileaks causes so much insecurity with our journalists is because they haven’t fully faced the fact that the watchdog press they treasure so much died under George W. Bush. It failed. And instead of rushing to analyze this failure and prevent it from happening ever again – instead of a truth and reconciliation commission-style effort that could look at “how did this happen” – mostly what our journalists did, with a few exceptions, is they moved on to the next story. The watchdog press died. And what we have is Wikileaks instead. Is that good or is that bad? I don’t know, because I’m still trying to understand what it is.

Zunguzungu:

But, to summarize, he (Assange) begins by describing a state like the US as essentially an authoritarian conspiracy, and then reasons that the practical strategy for combating that conspiracy is to degrade its ability to conspire, to hinder its ability to “think” as a conspiratorial mind. The metaphor of a computing network is mostly implicit, but utterly crucial: he seeks to oppose the power of the state by treating it like a computer and tossing sand in its diodes…

…The more secretive or unjust an organization is, the more leaks induce fear and paranoia in its leadership and planning coterie. This must result in minimization of efficient internal communications mechanisms (an increase in cognitive “secrecy tax”) and consequent system-wide cognitive decline resulting in decreased ability to hold onto power as the environment demands adaption. Hence in a world where leaking is easy, secretive or unjust systems are nonlinearly hit relative to open, just systems. Since unjust systems, by their nature induce opponents, and in many places barely have the upper hand, mass leaking leaves them exquisitely vulnerable to those who seek to replace them with more open forms of governance.

– zunguzungu

Almost all the media coverage of wikileaks has, to date, focused on the revelations about what our government actually thinks versus what it states publicly. The bigger the gap between internal truth and external positions, the bigger the story.

This is, of course, interesting stuff. But less discussed and more interesting is our collective reaction to wikileaks. Wikileaks is drawing a line, exposing a fissure in the open community between those who believe in overturning current “system(s)” (government and international) and those who believe that the current system can function but simply needs greater transparency and reform.

This is why placing pieces like Taylor Owen and Scott Gilmore‘s against zunguzungu’s is so interesting. Ultimately both Owen and Gilmore believe in the core of the current system – Scott explicitly so, arguing that secrecy in the current system allows human rights injustices to be tackled. Implicit in this, of course, is the message that this is how they should be tackled. Consequently they both see wikileaks as a failure, as they (correctly) argue that its radical transparency will lead to more closed and ineffective governments. Assange would likely counter that Scott’s efforts address systems and not causes, and may even reinforce the international structures that help foster human rights abuses. Consequently Assange’s core value of transparency, which at a basic level Owen and Gilmore would normally identify with, becomes a problem.

This is interesting. Owen and Scott believe in reform; they want the world to be a better place and fight (hard) to make it so. I love them both for it. But they aren’t up for a complete assault on the world’s core operating rules and structures. In a way this ultimately groups them (and possibly me – this is not a critique of Scott and Taylor, whose concerns I think are well founded) on the same side of a dividing line as people like Tom Flanagan (the former adviser to the Canadian Prime Minister who half-jokingly called for Assange to be assassinated) and Joe Lieberman (who called on companies that host material related to wikileaks to sever their ties with it). I want to be clear: they do not believe Assange should be assassinated, but they (and possibly I) do seem to agree that his tactics are a direct threat to the functioning of a system that they argue needs to be reformed but preserved – and so see wikileaks as counterproductive.

My point here is that I want to make explicit the choices wikileaks is forcing us to make: status quo with incremental, non-structural reform versus whole-hog structural change. Owen and Gilmore can label wikileaks a failure, but in accepting that analysis we have to recognize that they view it from a position that believes in incremental reform. This means believing in some other vehicle for change. And here, I think, we have some tough questions to ask ourselves. What indeed is that vehicle?

This is why I think Jay Rosen’s piece is so damn important. One of the key ingredients for change has been the existence of the “watchdog” press. But, as he puts it (repeated from above):

I think it is a mistake to reckon with Wikileaks without including in the frame the spectacular failures of the watchdog press over the last 10, 20, 40 years, but especially recently. And so, without this legitimacy crisis in mainstream American journalism, the leakers might not be so inclined to trust Julian Assange and a shadowy organization like Wikileaks. When the United States is able to go to war behind a phony case, when something like that happens and the Congress is fooled and a fake case is presented to the United Nations and war follows and 100,000s of people die and the stated rationale turns out to be false, the legitimacy crisis extends from the Bush government itself to the American state as a whole and the American press and the international system because all of them failed at one of the most important things that government by consent can do: which is reason giving.

The logical conclusion of Rosen’s thesis is a direct challenge to those of us who are privileged enough to benefit from the current system. As ugly and imperfect as the current system may be, Lieberman, Flanagan, Owen, Gilmore and, to be explicit, I benefit from that system. We benefit from the status quo. Significantly. Dismantling the world we know carries with it significant risks, both for global stability and personally. So if we believe that Assange has the wrong strategy and tactics, we need to make the case – to ourselves, to his supporters, to those who leak to wikileaks and to those on the short end of the stick in the international system – for how reform will work and how secrecy and power will be managed for the public good.

In this regard the release of wikileaks documents is not a terrorist event, but it is as much an attack on the international system as 9/11 was. It is a clear effort to destabilize and paralyze the international system. It also comes at a time when confidence in our institutions is sliding – indeed, Rosen argues that this eroding confidence feeds wikileaks.

So what matters is how we react. To carry forward the (dangerous) 9/11 analogy, we cannot repeat the mistakes of the Bush administration. Then, our response corrupted the very system we sought to defend, further eroded confidence in institutions that needed support and strengthened our enemies – we attacked human rights, civil liberties and freedom of speech, and prosecuted a war that took 100,000s of innocent lives on the premise of manufactured evidence.

Consequently, our response to the current crisis can’t be to close up governments and increase secrecy. That will strengthen the hand of those who run wikileaks and cause more public servants and citizens to fear these institutions and look for alternatives… many of whom will side with wikileaks and help impede the capacity of the most important institutions in our society to respond to everyday challenges.

As a believer in open government and open data, I think the only workable option open to us is to do the opposite: to continue to open up these institutions as the only acceptable and viable path to making them more credible. This is not to say that ALL information should be made open. Any institution needs some private space to debate ideas and test unpopular theses. But at the moment our governments – more through design and evolution than conspiracy – enjoy far more privacy and secrecy than they need. Having a real and meaningful debate about how to change that is our best response. In my country, I don’t see that debate happening. In the United States, I see it moving forward, but now with more urgency. Needless to say, I think all of this gives new weight to the testimony I’ll be making before the parliamentary Standing Committee on Access to Information, Privacy and Ethics.

I still hope the emerging conflict between open and closed can be won without having to resort to the types of tactics adopted by wikileaks. But those of us who believe that had better start making the case persuasively. The responses of people like Flanagan and Lieberman remind me of Bush after 9/11: “you are either with us, or with the terrorists.” Whether intentionally or unintentionally, an analogous response will create a world in which power and information are further removed from the public and will lead to the type of destabilizing change Assange wants.

I’m bound to write more on this – especially around wikileaks, open data and transparency, which I think some authors unhelpfully conflate – but this post is already long enough, and I’m sure most people haven’t even read this far.

Patch Culture, Government and why Small Government Advocates are expensive

So earlier today I saw this post by Kevin Gaudet, Federal Director of the Canadian Taxpayers Federation, retweeted by Andrew Coyne (one of the country’s best commentators).

It pretty much sums up why governments (incorrectly) fear open data and open government:

[Screenshot: Kevin Gaudet’s tweet]

The link is to a story on The Sun’s website: Government agencies caught advertising on sex sites!!!! (okay, the exclamation marks are mine).

This is exactly the type of headline every public servant and politician is terrified of. The details, of course, aren’t nearly as salacious. A screenshot of Photoforum.ru reveals it’s a stock photo website (which probably does violate all sorts of copyrights) and, I am sure, there are some photos, somewhere on the site, of nude women – much like there are on, say, Flickr or Facebook.

[Screenshot: Photoforum.ru]

Three interesting things here:

1. The challenge: Non-stories suck time and energy. The government, of course, didn’t choose to advertise on Photoforum.ru. It hired an agency, Cossette, that placed the ads through Google AdSense. (This is actually the good part of the story – great to hear that the government is using Google AdSense, which is a cheap and effective way to advertise.) But I assure you that dozens of public servants, at least one member of the Heritage Minister’s political staff and a few Canada Post employees will spend the next few days running around like chickens with their heads cut off trying to find out how an ad in a $5000 contract ended up on a relatively harmless site. The total cost of all that investigative work and the reports generated to address this “scandal” will probably run to $10,000s, if not $100,000s, of taxpayer money in staff time and energy. No wonder governments fear disclosure. The distraction and cost in time and energy needed to fix a non-story is enormous.

2. The diagnosis: The accountability systems trap. What’s interesting, of course, is that everybody is acting rationally. The Sun (while stretching the truth) is reporting on the misuse (however minor) of taxpayer dollars. Same with the Canadian Taxpayers Federation (CTF). But the solutions are terrible. The CTF’s demand that the government eliminate all advertising is nonsensical. We can debate whether every ad is needed, but a flat-out ban is silly – there are lots of things governments need to inform citizens about. So we can talk about reducing the government’s ad budget, but this news story doesn’t advance that aim either. Indeed, the Canadian Taxpayers Federation and the Sun have, indirectly, helped inflate the cost of government. As noted, the government, acting rationally, now has dozens of people spending their time solving a crisis around some minor ad campaigns.

So we have a system where everyone is behaving rationally, but the outcomes are terrible. This is the scandal. Indeed, if one were really concerned about the effective spending of taxpayer dollars, one would quickly realize that what’s actually broken here isn’t government advertising; it’s the process by which governments react to and address problems.

3. One Approach: Patch-Culture and Open Data. So it is understandable that government types fear disclosure as, presently, they often see it leading to the above situation.

But what’s interesting is that we could handle it differently.

This same story, in the software world, would have played out quite differently. The Sun article would simply have been understood as someone pointing out a (minor) bug. The software developers (in this case the public servants) would have patched it (which they did, ending the ads’ deployment to these sites) and the whole thing would have been a non-event. Better still, for minimal cost and effort, the software (the ad program) would be improved, making both the developers (public servants/politicians) and the users (taxpayers) happy. So why is it so different with government?

Two reasons.

First, governments pretend that everything they do is perfect. They live with an industrial worldview in which you had to get something perfect before launching it, since you were going to stamp out 100,000 copies of the thing and any bug would get replicated that many times. Of course, very little of government is like this; much more of it is like software, where you can make fixes along the way. What makes software so great (although sometimes frustrating too) is that developers acknowledge this and so patch on the fly. Patch-culture – as my friend David Humphrey likes to call it – where people help software get better by pointing out flaws, and even offering solutions, is exactly what our government needs to learn.

Second, people jump to the worst conclusions because governments appear secretive. Because they don’t proactively disclose (and because the Access to Information process is sooooo slow) they always look like they are hiding something. Consequently, when people find mistakes they presume someone has been hiding them, trying to cover them up and/or that they were made with some malicious intent. In an open source community, when people find a bug, they often assume it was a (dumb) mistake – not a nefarious plot to achieve some dubious end. Open data makes a patch culture easier to create because when you share what you are doing it is hard to believe you have some evil intent. Yes, people may just assume you are dumb – but that is actually better than evil (especially for powerful institutions like government).

Open data is about changing the culture everywhere, both inside and outside government. It won’t do it overnight and it won’t do it perfectly, but it can help. And it’s a shift we need to make.

Conclusion

I could easily imagine the above story playing out differently in a world where patch culture has taken shape within government. Someone discovers the ads on Photoforum.ru, thinks it is an error (someone made a dumb choice), submits a bug to the government, the ads are removed, the bug is patched, everybody is happy and… there is no wasteful non-story about government spending.

Indeed, we could instead have allocated all that wasted time and energy to having the debate the CTF actually wants to have: how much of the budget should go towards advertising.

What we need to think about is the system we are in and the incentives it creates for the various actors, then think of the system we’d like and figure out how to get from here to there. I think open data is an important (but not exclusive) part of the puzzle for changing the relationship between the government, the press and citizens. It may even create some new problems, but I think they will be better, less costly and more interesting problems than the ones we have now, which are frankly destroying budgets and the institutions we need to serve us.

Launching datadotgc.ca 2.0 – bigger, better and in the clouds

Back in April of this year we launched datadotgc.ca – an unofficial open data portal for federal government data.

At a time when only a handful of cities had open data portals and the words “open data” were not even being talked about in Ottawa, we saw the site as a way to change the conversation and demonstrate the opportunity in front of us. Our goals were to:

  • Be an innovative platform that demonstrates how government should share data.
  • Create an incentive for government to share more data by showing ministers, public servants and the public which ministries are sharing data, and which are not.
  • Provide a useful service to citizens interested in open data by bringing all the government data together in one place, making it easier to find.

In every way we have achieved these goals. Today the conversation about open data in Ottawa is very different. I’ve demoed datadotgc.ca to the CIOs of the federal government’s ministries and numerous other stakeholders, and an increasing number of people understand that, in many important ways, the policy infrastructure for doing open data already exists, since datadotgc.ca shows the government is already doing open data. More importantly, a growing number of people recognize it is the right thing to do.

Today, I’m pleased to share that thanks to our friends at Microsoft & Raised Eyebrow Web Studio and some key volunteers, we are taking our project to the next level and launching Datadotgc.ca 2.0.

So what is new?

In short, rather than just pointing to the 300 or so data sets that exist on federal government websites, members may now upload datasets to datadotgc.ca, where we can both host them and offer custom APIs. This is made possible because we have integrated Microsoft’s Azure cloud-based Open Government Data Initiative (OGDI) into the website.

So what does this mean? It means people can add government data sets, or even mash up government data sets with their own data, to create interesting visualizations, apps or websites. Already some of our core users have started to experiment with this feature. London, Ontario’s transit data can be found on datadotgc.ca, making it easier to build mobile apps, and a group of us have taken Environment Canada’s facility pollution data, uploaded it and are using the API to create an interesting app we’ll be launching shortly.
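For developers curious what using one of these hosted datasets might look like, here is a minimal sketch of pulling records through such an API. The endpoint URL, query parameter and response envelope are all hypothetical – consult the site’s documentation for the real ones:

    import json
    import urllib.request

    # Hypothetical endpoint for the Environment Canada facility pollution
    # data hosted on datadotgc.ca; the real URL and parameters may differ.
    API_URL = "http://datadotgc.ca/api/FacilityPollution?format=json"

    def fetch_records(url):
        """Download a dataset from the (assumed) JSON API and return its records."""
        with urllib.request.urlopen(url) as response:
            payload = json.load(response)
        # Assume rows may arrive wrapped in a "d" envelope, as OData-style
        # services (like OGDI) often return; fall back to the raw payload.
        if isinstance(payload, dict):
            return payload.get("d", payload)
        return payload

    if __name__ == "__main__":
        records = fetch_records(API_URL)
        print(f"Fetched {len(records)} records")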

So we are excited. We still have work to do around documentation and tracking down some more federal data sets we know are out there, but we’ve gone live because nothing helps us develop like having users and people telling us what is, and isn’t, working.

But more importantly, we want to go live to show Canadians, and our governments, what is possible. Again, our goal remains the same – to push the government’s thinking about what is possible around open data by modeling what should be done. I believe we’ve already shifted the conversation – with luck, datadotgc.ca v2 will help shift it further and faster.

Finally, I can never thank our partners and volunteers enough for helping make this happen.

Rethinking Wikipedia contribution rates

About a year ago, news stories began to surface that Wikipedia was losing more contributors than it was gaining. These stories were based on the research of Felipe Ortega, who had downloaded and analyzed the data of millions of contributors.

This is a question of importance to all of us. Crowdsourcing has been a powerful and disruptive force, socially and economically, in the short history of the web. Organizations like Wikipedia and Mozilla (at the large end of the scale) and millions of much smaller examples have destroyed old business models, spawned new industries and redefined the idea of how we can work together. Understanding how these communities grow and evolve is of paramount importance.

In response to Ortega’s research, the Wikimedia Foundation posted a response on its blog that challenged the methodology and offered some clarity:

First, it’s important to note that Dr. Ortega’s study of editing patterns defines as an editor anyone who has made a single edit, however experimental. This results in a total count of three million editors across all languages.  In our own analytics, we choose to define editors as people who have made at least 5 edits. By our narrower definition, just under a million people can be counted as editors across all languages combined.  Both numbers include both active and inactive editors.  It’s not yet clear how the patterns observed in Dr. Ortega’s analysis could change if focused only on editors who have moved past initial experimentation.

This is actually quite fair. But the specifics are less interesting than the overall trend described by the Wikimedia Foundation. It’s worth noting that no open source or peer production project can grow infinitely. There is (a) a finite number of people in the world and (b) a finite amount of work that any system can absorb. At some point participation must stabilize. I’ve tried to illustrate this trend in the graphic below.
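As an aside, the definitional difference is easy to make concrete. Here is a toy sketch – with an invented edit log rather than Ortega’s actual data – of how the two editor counts diverge:

    from collections import Counter

    # Invented sample data: one entry per edit, keyed by contributor username.
    edit_log = ["alice", "bob", "alice", "carol", "alice",
                "bob", "alice", "alice", "bob"]

    edits_per_user = Counter(edit_log)

    # Ortega's definition: anyone with even a single edit counts as an editor.
    ortega_editors = [u for u, n in edits_per_user.items() if n >= 1]

    # The Wikimedia Foundation's narrower definition: at least 5 edits.
    wikimedia_editors = [u for u, n in edits_per_user.items() if n >= 5]

    print(len(ortega_editors), len(wikimedia_editors))  # 3 vs. 1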

[Graphic: the open source participation lifecycle – growth, peak and stabilization – with a “Maintenance threshold” line]

As luck would have it, my friend Diederik Van Liere was recently hired by the Wikimedia Foundation to help them get a better understanding of editor patterns on Wikipedia – how many editors are joining and leaving the community at any given moment, and over time.

I’ve been thinking about Diederik’s research, and three things come to mind when I look at the above chart:

1. The question isn’t how to ensure continued growth, nor is it always how to stop decline. It’s about ensuring the continuity of the project.

Rapid growth should probably be expected of an open source or peer production project in its early stages, when there is LOTS of buzz around it (as there was for Wikipedia back in 2005). There’s lots of work to be done (so many articles HAVEN’T been written).

Decline may also be reasonable after the initial burst. I suspect many open source projects lose developers after the product moves out of beta. Indeed, some research Diederik and I have done on the Firefox community suggests this is the case.

Consequently, it might be worth inverting his research question. In addition to figuring out participation rates, figure out the minimum critical mass of contributors needed to sustain the project. For example, how many editors does Wikipedia need, at a minimum, to (a) prevent vandals from destroying the current article inventory and/or, at the maximum, to (b) sustain an article update and growth rate that supports the current rate of traffic (which notably continues to grow significantly)? The purpose of Wikipedia is not to have many or few editors; it is to maintain the world’s most comprehensive and accurate encyclopedia.

I’ve represented this minimum critical mass in the graphic above with a “Maintenance threshold” line. Figuring out the metric for that feels more important than participation rates alone, as such a metric could form the basis of a dashboard that would tell you a lot about the health of the project.

2. There might be an interesting equation describing participation rates

Another thing that struck me was that each open source project may have a participation quotient: a number that describes the amount of participation required to sustain a given unit of work in the project. For example, in Wikipedia, it may be that every new page that is added needs 0.000001 new editors in order to be sustained. If page growth exceeds editor growth (or the community shrinks), at a certain point the project size outstrips the capacity of the community to sustain it. I can think of a few variables that might help ascertain this quotient – and I accept it wouldn’t be a fixed number. Change the technologies or rules around participation and you might increase the effectiveness of a given participant (lowering the quotient) or make it harder to sustain work (raising the quotient). Indeed, the trend of a participation quotient would itself be interesting to monitor… projects will have to keep finding innovative ways to hold it constant even as the project’s article archive or code base gets more complex.
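To make this concrete, here is a toy calculation, with invented numbers, tying the participation quotient to the “maintenance threshold” from point 1. Nothing here is real Wikipedia data; only the shape of the check matters:

    # Toy illustration of the "participation quotient" idea. The quotient is
    # the number of editors needed to sustain one article; the example value
    # from the post is used, though it would shift as rules and tools change.
    PARTICIPATION_QUOTIENT = 0.000001  # editors required per article (invented)

    def maintenance_threshold(article_count, quotient=PARTICIPATION_QUOTIENT):
        """Minimum community size implied by the current size of the project."""
        return article_count * quotient

    def is_sustainable(active_editors, article_count):
        """True if the community clears the implied maintenance threshold."""
        return active_editors >= maintenance_threshold(article_count)

    # With these made-up numbers a 3.5M-article wiki needs only 3.5 editors --
    # clearly too low in reality; the point is the check, not the value.
    print(is_sustainable(active_editors=100000, article_count=3500000))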

3. Finding a test case – study a wiki or open source project in the decline phase

One thing about open source projects is that they rarely die. Indeed, there are lots of open source projects out there that are walking zombies: a small, dedicated community struggles to keep a code base intact and functioning that is much too large for it to manage. My sense is that peer production/open source projects can collapse (would MySpace count as an example?) but they rarely collapse and die.

Diederik suggested that maybe one should study a wiki or open source project that has died. The fact that they rarely do is actually a good thing from a research perspective, as it means the infrastructure (and thus the data about the history of participation) is often still intact – ready to be downloaded and analyzed. By finding such a community we might be able to (a) ascertain what the “maintenance threshold” of the project was at its peak, (b) see how its “participation quotient” evolved (or didn’t evolve) over time and, most importantly, (c) see if there are subtle clues or actions that could serve as predictors of decline or collapse. Obviously, in some cases these might be exogenous forces (e.g. new technologies or processes made the project obsolete), but these could probably be controlled for.

Anyways, hopefully there is lots here for metric geeks and community managers to chew on. These are only some preliminary thoughts so I hope to flesh them out some more with friends.

Rethinking Freedom of Information Requests: from Bugzilla to AccessZilla

Last week I gave a talk at the Conference for Parliamentarians, hosted by the Information Commissioner as part of Right to Know Week.

During the panel I noted that, if we are interested in improving response times for Freedom of Information (FOI) requests (or, in Canada, Access to Information (ATIP) requests), why doesn’t the Office of the Information Commissioner use Bugzilla-type software to track requests?

Such a system would have a number of serious advantages, including:

  1. Requests would be public (although the identity of the requester could remain anonymous); this means that if numerous people request the same document they could bandwagon onto a single request
  2. Requests would be searchable – this would make it easier to find documents already released and requests already completed
  3. You could track performance in real time – you could see how quickly different ministries, individuals, groups, etc… respond to FOI/ATIP requests, you could even sort performance by keywords, requester or time of the year
  4. You could see who specifically is holding up a request

In short, such a system would bring a lot of transparency to the process itself and, I suspect, would provide a powerful incentive for ministries and individuals to improve their performance in responding to requests.
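To give a sense of how little machinery this actually requires, here is a minimal sketch of the core record such a tracker might keep. The field names and statuses are my own invention (they anticipate point 1 below), not Bugzilla’s actual schema:

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum

    class Status(Enum):
        SUBMITTED = "submitted"
        IN_PROCESS = "in process"
        UNDER_REVIEW = "under review"
        FIXED = "fixed"                    # documents released
        VERIFIED_FIXED = "verified fixed"  # requester confirms receipt

    @dataclass
    class AccessRequest:
        request_id: int
        ministry: str
        summary: str
        keywords: list
        status: Status = Status.SUBMITTED
        submitted_at: datetime = field(default_factory=datetime.now)
        # Every status change is timestamped, so response times are auditable.
        history: list = field(default_factory=list)

        def transition(self, new_status):
            """Record a status change; this is what makes performance trackable."""
            self.history.append((datetime.now(), new_status))
            self.status = new_status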

For those unfamiliar with Bugzilla, it is an open source application used by a number of software projects to track “bugs” and feature requests. So, for example, if you notice the software has a bug, you register it in Bugzilla and then, if you are lucky and/or the bug is really important, some intrepid developer will come along and develop a patch for it. Posted below, for example, is a bug I submitted for Thunderbird, an email client developed by Mozilla. It’s not as intuitive as it could be, but you can get the general sense of things: when I submitted the bug (2010-01-09), who developed the patch (David Bienvenu), its current status (Fixed), etc…

[Screenshot: the Thunderbird bug report in Bugzilla]

Interestingly, an FOI or ATIP request really isn’t that different from a “bug” in a software program. In many ways, Bugzilla is just a complex and collaborative “to do” list manager. I could imagine it wouldn’t be that hard to reskin it so that it could be used to manage and monitor access to information requests. Indeed, I suspect there might even be a community of volunteers who would be willing to work with the Office of the Information Commissioner to help make it happen.

Below I’ve done a mock-up of what I think a revamped Bugzilla (renamed AccessZilla) might look like. I’ve put numbers next to some of the features so that I can explain them in detail below.

[Mock-up: the AccessZilla interface]

So what are some of the features I’ve included?

1. Status: Now an ATIP request can be marked with a status; these might be as simple as submitted, in process, under review, fixed and verified fixed (meaning the submitter has confirmed they’ve received the documents). This alone would allow the Information Commissioner, the submitter and the public to track how long an individual request (or an aggregate of requests) stays in each part of the process.

2. Keywords: Wouldn’t it be nice to search for other FOI/ATIP requests with similar keywords? Perhaps someone has submitted a request for a document that is similar to your own, but not something you knew existed or had thought of… Keywords could be a powerful way to find government documents.

3. Individual accountability: Now you can see who is monitoring the request on behalf of the Office of the Information Commissioner and who is the ATIP officer within the ministry. If the rules permitted, the public servants involved in the document might potentially have their names attached here as well (or maybe this option would only be available to those who log in as ATIP officers).

4. Logs: You would be able to see the last time the request was modified. This might include getting the documents ready, expressing concern about privacy or confidentiality, or simply asking for clarification about the request.

5. Related requests: Like keywords, but more sophisticated. Why not have the software look at the words and people involved in the request and suggest other, completed requests that it thinks might be similar in type and therefore of interest to the user? Seems obvious.

6. Simple and reusable resolution: Once the ATIP officer has the documentation, they can simply upload it as an attachment to the request. This way not only can the original requester quickly download the document, but any subsequent user who stumbles upon the request during a search can download it too. Better still, any public servant who has unclassified documents that relate to the request can simply upload them directly as well.

7. Search: This feels pretty obvious… it would certainly make citizens’ lives much easier and is the basic ante for any government that claims to be interested in transparency and accountability.

8. Visualizing it (not shown): The nice thing about all of these features is that the data coming out of them could be visualized. We could generate real-time charts showing average response time by ministry, lists of respondents sorted from slowest to fastest, even something as mundane as most-searched keywords. The point is that with visualizations, a government’s performance around transparency and accountability becomes more accessible to the general public.
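Because every record above carries timestamps, the real-time charts in point 8 fall out of a one-pass aggregation. A sketch, reusing the hypothetical AccessRequest and Status types from the earlier snippet:

    from collections import defaultdict

    def average_response_days(requests):
        """Average days from submission to release ("fixed"), by ministry."""
        durations = defaultdict(list)
        for req in requests:
            # Find when the documents were released, if they have been.
            fixed_at = next((t for t, s in req.history if s is Status.FIXED), None)
            if fixed_at is not None:
                durations[req.ministry].append((fixed_at - req.submitted_at).days)
        return {ministry: sum(d) / len(d) for ministry, d in durations.items()}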

It may be that there is much better software out there for doing this (like JIRA); I’m definitely open to suggestions. What I like about Bugzilla is that it can be hosted, it’s free and it’s open source. Mostly, however, software like this creates an opportunity for the Office of the Information Commissioner in Canada, and access to information managers around the world, to alter the incentives for governments to complete FOI/ATIP requests, as well as to make it easier for citizens to find out information about their government. It could be a fascinating project to reskin Bugzilla (or some other software platform) to do this. Maybe Information Commissioners from around the world could even pool their funds to sponsor such a reskinning of Bugzilla…

UK Adopts Open Government License for everything: Why it's good and what it means

In the UK, the default is open.

Yesterday, the United Kingdom made an announcement that radically reformed how it will manage what will become the government’s most important asset in the 21st century: knowledge & information.

On the National Archives website, the UK Government made public its new license for managing software, documents and data created by the government. The document is both far-reaching and forward-looking. Indeed, I believe this policy may be the boldest and most progressive step taken by a government since the United States decided that documents created by the US government would directly enter the public domain and not be copyrighted.

In almost every aspect of the license, the UK government will manage its “intellectual property” by setting the default to be open and free.

Consider the introduction to the framework:

The UK Government Licensing Framework (UKGLF) provides a policy and legal overview for licensing the re-use of public sector information both in central government and the wider public sector. It sets out best practice, standardises the licensing principles for government information and recommends the use of the UK Open Government Licence (OGL) for public sector information.

The UK Government recognises the importance of public sector information and its social and economic value beyond the purpose for which it was originally created. The public sector therefore needs to ensure that simple licensing processes are in place to enable and encourage civil society, social entrepreneurs and the private sector to re-use this information in order to:

  • promote creative and innovative activities, which will deliver social and economic benefits for the UK
  • make government more transparent and open in its activities, ensuring that the public are better informed about the work of the government and the public sector
  • enable more civic and democratic engagement through social enterprise and voluntary and community activities.

At the heart of the UKGLF is a simple, non-transactional licence – the Open Government Licence – which all public sector bodies can use to make their information available for free re-use on simple, flexible terms.

And just in case you thought that was vague, consider these two quotes from the framework. This one is about data:

It is UK Government policy to support the re-use of its information by making it available for re-use under simple licensing terms.  As part of this policy most public sector information should be made available for re-use at the marginal cost of production. In effect, this means at zero cost for the re-user, especially where the information is published online. This maximises the social and economic value of the information. The Open Government Licence should be the default licence adopted where information is made available for re-use free of charge.

And this one for software:

  • Software which is the original work of public sector employees should use a default licence.  The default licence recommended is the Open Government Licence.
  • Software developed by public sector employees from open source software may be released under a licence consistent with the open source software.

These statements are unambiguous and a dramatic step in the right direction. Information and software created by governments are, by definition, public assets. Tax dollars have already paid for their collection and/or development and the government has already benefited from using them. They are also non-rivalrous goods. This means that, unlike a road, if I use government information, or software, I don’t diminish your ability to use it (in contrast, only so many cars can fit on a road, and they wear it down). Indeed, with intellectual property quite the opposite is true: by using it I may actually make the knowledge more valuable.

This is, obviously, an exciting development. It has generated a number of thoughts:

1. With this move the UK has further positioned itself at the forefront of the knowledge economy:

By enacting this policy the UK government has just enabled the entire country, and indeed the world, to use its data, knowledge and software to do whatever people would like. In short, an enormous resource of intellectual property has just been opened up to be developed, enhanced and re-purposed. This could help lower costs for new software products, diminish the cost of government and help foster more efficient services. It also means a great deal of this innovation will be happening in the UK first, which could become a significant strategic advantage in the 21st-century economy.

2. Other jurisdictions will finally be persuaded it is “safe” to adopt open licenses for their intellectual property:

If there is one thing I’ve learnt dealing with governments, it is that, for all the talk of innovation, many governments – and particularly their legal departments – are actually scared to be the first to do something. With the UK taking this bold step, I expect a number of other jurisdictions to more vigorously explore this opportunity. (It is worth noting that Vancouver did, as part of its open motion, state that software developed by the city would have an open license applied to it, but the policy work to implement such a change has yet to be announced.)

3. This should foster a debate about information as a public asset:

In many jurisdictions there is still the myth that governments can and should charge for data. Britain’s move should provide a powerful example of why these types of policies should be challenged. There is significant research showing that, for GIS data for example, money collected from the sale of data simply pays for the system that collects the money. This is to say nothing of the policy and managerial overhead of choosing to manage intellectual property. Charging for public data has never made financial sense, and it raises a number of ethical challenges (so only the wealthy get to benefit from a publicly derived good?). Hopefully, for less progressive governments, the UK’s move will refocus the debate along the right path.

4. It is hard to displace a policy leader once they are established.

The real lesson here is that innovative and forward-looking jurisdictions have huge advantages that they are likely to retain. It should come as no surprise that the UK made this move – it was among the first national governments to create an open data portal. By being an early mover it has seen the challenges and opportunities before others and so has been able to build on its successes more quickly.

Consider other countries – like Canada – that may wish to catch up. Canada does not even have an open data portal yet (although this may soon change). This means it is now almost two years behind the UK in assessing the opportunities and challenges around open data and rethinking intellectual property. Those two years cannot be magically or quickly caught up. More importantly, it suggests that some public services have cultures that recognize and foster innovation – especially around key issues in the knowledge economy – while others do not.

Knowledge economies will benefit from governments that make knowledge, information and data more available. Hopefully this will serve as a wake-up call to governments in other jurisdictions. The 21st-century knowledge economy is here, and government has a role to play. Best not be caught lagging.

My Mozilla Summit 2010 Talk: Making the Army of Awesome more Awesome

This summer I had the enormous pleasure and privilege of both being at the Mozilla Summit and of being selected to give a lightning talk.

Embedded below is the talk – it’s five minutes, so it won’t take long to watch, and it is a short, updated version of my community management presentation. There are tons of people to thank for this talk – Diederik Van Liere, David Ascher and Mike Beltzner come to mind immediately, but there are many others as well. It also builds off a number of posts, including some old gems like this one and this one.

I’ve embedded a YouTube video of it, and the slide deck is a little further down.

Getting Government Right Behind the Firewall

The other week I stumbled on this fantastic piece by Susan Oh of Ragan.com about a 50-day effort by the BC government to relaunch its intranet site.

Yes, 50 days.

If you run a large organization’s intranet site, I encourage you to read the piece. (Alternatively, if you are forced (or begged) to use one, forward this article to someone in charge.) The measured results are great – essentially a doubling of pretty much all the things you want to double (like participation) – but what is really nice is how quick and affordable the whole project was, something rarely seen in most bureaucracies.

Here is an intranet for 30,000 employees that “was rebuilt from top to bottom within 50 days with only three developers who were learning the open-source platform Drupal as they went along.”

I beg someone in the BC government to produce another example of such a significant rollout being accomplished with so few resources. Indeed, it sounds eerily similar to GCPEDIA (available to 300,000 people, using open source software and 1 FTE, plus some begged and borrowed resources) and OPSPedia (a test project, also using open source software, with tiny rollout costs). Notice a pattern?

Across our governments (not to mention a number of large conservative companies) there are tiny pockets where resourceful teams find a leader or project manager willing to buck the idea that software implementations must be multi-year, multimillion-dollar rollouts. And they are making the lives of public servants better. God knows our public servants need better tools, and quickly. Even the set of tools offered in the BC example wasn’t that mind-blowing – pretty basic stuff for anyone operating as a knowledge worker.

I’m not even saying that what you do has to be open source (although clearly the above examples show that it can allow one to move speedily and cheaply), but I suspect that the number of people (and the type of person) interested in government would shift quickly if, internally, they had this set of tools at their disposal. (I would love to talk to someone at Canada’s Food Inspection Agency about their experience with Socialtext.)

The fact is, you can. And, of course, this quickly gets us to the real problem… most governments and large corporations don’t know how to deal with the cultural and power implications of these tools.

Well, we’d better get busy experimenting and trying, because knowledge workers will go where they can use their own, and their peers’, brains most effectively. Increasingly, that isn’t government. I know I’m a fan of the long tail of public policy, but we’ve got to fix government behind the firewall; otherwise there won’t be a government behind the firewall to fix.

Collaborate: "Governments don't do that"

The other day, while enjoying breakfast with a consultant friend, I heard her talk about how smaller local governments didn’t have the resources to afford her, or her firm’s, services.

Hogwash, I thought! Fresh from the launch of CivicCommons.com at the Gov2.0 Summit, I jumped in and asked: surely a couple of the smaller municipalities with similar needs could come together, jointly spec out a project and pool their budgets? It seems like a win-win-win: budgets go further, better services are offered and, of less interest but still nice, my friend gets to work on rolling out some nice technologies in the community in which she lives.

The response?

“Governments don’t work that way.”

Followed up by…

“Why would we work with one of those other communities, they are our competitors.”

Once you’ve stopped screaming at your monitor… (yes, I’m happy to give you a few seconds to vent that frustration) let me try to explain, in as cool a manner as possible, why this makes no sense. And while I don’t live in any of the numerous municipalities that border Vancouver, if you do, consider writing your local councillor/mayor. I think your IT department is wasting your tax dollars.

First, governments don’t work that way? Really? So an opportunity arises for you to save money and offer better services to your citizens, and you’re going to say no because the process offends you in some way? I’m pretty sure there’s a chamber full of council people and a mayor who feel pretty differently about that.

The fact is, governing a city is going to get more complicated. The expectations of citizens are going to become greater. There is going to be a gap, and no amount of budget is going to cover it. Citizens increasingly have access to top-tier services on the web – they know what first-class systems look like. They look like Google, Amazon, Travelocity, etc… and very rarely like your municipal website and the services it offers. It almost doesn’t matter where you are reading this from; I’m willing to bet your city’s site isn’t world class. Thanks to the web, however, your citizens – even the ones who never leave your bedroom community – are globe-trotting super-consumers of the web. They are getting faster, better and more effective service on and off the web. You might want to consider this, because as the IT director in a city of 500,000 people you probably don’t have the resources to keep up.

Okay, so sharing a budget to be able to build better online infrastructure (or whatever) for your city makes sense. But now you’re thinking: we can’t work with that neighboring community… they’re our competitors.

Stop. Stop right there.

That community is not your competitor. Let me tell you right now: no one is moving to West Van over Burnaby because its website is better, or its garbage service is more efficient. They certainly aren’t moving because you offer web-based forms on your city’s website and the other guys (annoyingly) make you print out a PDF. That’s not influencing the $250K–500K decision about where to live. Basically, if it doesn’t involve the quality of the schools it probably isn’t factoring in.

Hell, even other cities like Toronto, Calgary or Seattle aren’t your competitors. If anyone is moving there, it’s likely because of family or a job. Maybe if you really got efficient then a marginally lower municipal tax would help, but if that were the case, then partner with as many cities as possible and benefit from some collaborative economies of scale… because now you’re kicking the butt of the 99% of cities that aren’t collaborating and sharing costs.

And, of course, this isn’t limited to cities. Pretty much any level of government could benefit from pooling budgets to sponsor some commonly specced-out projects.

It’s depressing to see that the biggest challenge to driving down the costs of running a city (or any government) isn’t going to be technological, but cultural: an obsession with the belief that everybody else is different, competing and not as good as us.