Category Archives: open data

International Open Data Day – An Update

(Can’t read the whole post? Important stuff is highlighted in grey below.)

Two years ago, I met some open data advocates from Brazil and Ottawa, and we began scheming about an international open data hackathon. A few weeks later, this blog post launched International Open Data Day with the hope that supporters would emerge in 5-6 cities to host local events.

Instead, on December 4th, 2010, an amazing group of volunteers organized events in 60 cities on every continent except Australia (OK, and Antarctica). A year later, our second effort enjoyed similar success – including Australia this time! We also benefited from a generous informal partnership with Random Hacks of Kindness, who let our open data hackers participate in spaces where they were organizing hackathons on the same day.

A number of people have been asking me about this year’s International Open Data Day. First, I want to apologize to the community of wonderful people who have been asking me if we will do it again. Between an outrageous travel schedule, work commitments and, most happily, the advent of my little boy – whose 36-hour birth(!) prevented me from participating in Vancouver’s Open Data hackathon last year – I have been remiss in not organizing and communicating information regarding this year’s event.

So, over the past 4 weeks I’ve been consulting with some of the previous year’s organizers (I deeply apologize if I have not spoken to you), and here is the status update:

  1. I’ve been reminded of the awesomeness, organizational skill, patience and general wonderfulness of open data advocates around the world
  2. We are DEFINITELY DOING International Open Data Day and WANT YOU TO PARTICIPATE
  3. For a number of reasons* we are MOVING THE EVENT TO FEB 23rd, 2013.
  4. We are more keen than ever to have this not just be about hacking code, but about reusing other projects and BROADENING THE COMMUNITY by using data to do all sorts of things, from analysis to visualizations to even just sharing an interesting insight.

So with that in mind, here are a few things that are going on as we speak.

  • If you are interested in participating in, or organizing an International Open Data Day event in your community, please join the Open Data Day mailing list. We’ll post any updates there, and it is a great place to ask questions and get to know other organizers.
  • You may have noticed that opendataday.org is not working. Yikes! We know! We are in the midst of transferring the site over to the Open Knowledge Foundation, who has generously offered to manage it.
  • Once we get the site up – and especially the wiki – I’ll be hitting the mailing list again asking people to start registering their cities, noting locations and sharing information. In the past, the wiki has been amazingly well organized, which has been exceedingly helpful and awesome.
  • For those interested in hosting an event, there is a great guide on how to do so – thanks to Kevin McArthur, Herb Lainchbury and Donna Horn – that can be found here.
  • We are starting to reach out to some of our past partners, including The Open Knowledge Foundation, Data.gov/White House, the World Bank, ScraperWiki, the Sunlight Foundation, various Hackerspaces, and Hacks and Hackers – and if anyone else wants to be involved, we’d love to hear from you.
  • Feel free to send me an email if you have any thoughts, questions or concerns.

Okay, apologies for the long blog post and the delay in communicating about this. If you have participated or organized an event in the past, thank you. I hope you’re excited about doing it again and that the new date works for you!

Am looking forward to hearing from you.

*Those reasons being: the original date was too close to various holidays, fell at the height of summer in the southern hemisphere, left too little time to organize, and needed to work around the Code for America fellows being in their partner cities.

Ontario's Open Data Policy: The Good, The Bad, The Ugly and the (Missed?) Opportunity

Yesterday the province of Ontario launched its Open Data portal. This is great news and is the culmination of a lot of work by a number of good people. The real work behind getting an open data program launched is, by and large, invisible to the public, but it is essential – and so congratulations are in order for those who helped out.

Clearly this open data portal is in its early stages – something the province is upfront about. As a result, I’m less concerned with the number of data sets on the site (which, however, needs to, and should, grow over time). Hopefully the good people in the government of Ontario have some surprises for us around interesting data sets.

Nor am I concerned about the layout of the site (which needs to, and should, improve over time – for example, once you start browsing the data you end up on this URL with no obvious path back to the open data landing page, which makes navigating the site hard).

In fact, unlike some, I find any shortcomings in the website downright encouraging. Hopefully it means that speed, iteration and a ship-early attitude have won out over the media-obsessed, rigid, risk-averse approach governments all too often take. Time will tell if my optimism is warranted.

What I do want to focus on is the license, since this is a core piece of infrastructure in an open data initiative. Indeed, it is the license that determines whether the data is actually open or closed. And I think we should be less forgiving of errors in this regard than in the past. It was one thing if you launched in the early days of open data, two or four years ago. But we aren’t in the early days anymore. There are over 200 government open data portals around the world. We’ve crossed the chasm, people. Not getting the license right is not a “beta” mistake any more. It’s just a mistake.

So what can we say about the Ontario Open Data license?

First, the Good

There are lots of good things to be said about it. It clearly keys off the UK’s Open Government License, much like BC’s license did, as does the proposed Canadian Open Government License. This means that, above all, it is written in plain English and is easily understood. In addition, the general format is familiar to many people interested in open data.

The other good thing about the license (pointed out to me by the always sharp Jason Birch) is that its attribution clause is softer than the UK, BC or even the proposed Federal Government license. Ontario uses the term “should” whereas the others use the term “must.”

Sadly, this one improvement pales in comparison to some of the problems and, most importantly, the potentially lost opportunity I highlight at the bottom of this post.

The Bad

While this license has many of the good qualities pioneered by the UK license, it does suffer from some major flaws. The most notable comes in this line:

Ontario does not guarantee the continued supply of the Datasets, updates or corrections or that they are or will be accurate, useful, complete, current or free and clear of any possible third party copyright, moral right, other intellectual property right or other claim.

Basically this line kills the possibility that any business, non-profit or charity will ever use this data in any real sense. Hobbyists, geeks and academics will of course use it, but this provision is deeply flawed.

Why?

Well, let me explain what it means. This says that the government cannot be held accountable for only releasing data it has the right to release. For example: say the government has software that tracks road repair data, and it starts to release that data and, happily, all sorts of companies and app developers use it to help predict traffic and do other useful things. But then, one day, the vendor that provided the road repair tracking software suddenly discovers in the fine print of the contract that they, not the government, own that data! Well! All those companies, non-profits and app developers are suddenly using proprietary data, not (open) government data. And the vendor would be entirely within its rights to either go sue them, or demand a license fee in exchange for letting them continue to use the data.

Now, I understand why the government is doing this. It doesn’t want to be liable if such a mistake is made. But, of course, if the government doesn’t want to absorb the risk, that risk doesn’t magically disappear – it transfers to the data user. And those users have no way of managing that risk! They don’t know what the contracts say and what the obligations are; the party best positioned to figure that out is the government. Essentially this line transfers a risk to the party (in this case the user) that is least able to manage it. You are left asking yourself: what business, charity or non-profit is going to invest hundreds of thousands of dollars (or more) and staff time to build a product, service or analysis around an asset (government data) that it might suddenly discover it doesn’t have the right to use?

The government is the only organization that can clear the rights. If it is unwilling to do so, then I think we need to question whether this is actually open data.

The Ugly

But of course the really ugly part of the license (which caused me to go on a bit of a Twitter rant) comes early. Here it is:

If you do any of the above you must ensure that the following conditions are met:

  • your use of the Datasets causes no harm to others.

Wowzers.

This clause is so deeply problematic it is hard to know where to begin.

First, what is the definition of harm? If I use open data from the Ontario government to rate hospitals, and some hospitals are sub-standard, am I “harming” the hospital? Its workers? The community? The Ministry of Health?

So then who decides what the definition is? Well, since the Government of Ontario is the licensor of the data… it would seem to suggest that they do. Whatever the government of the day wants to decree a “harm” suddenly becomes legitimate. Basically this clause could be used to strip many users – particularly those interested in using the data as a tool for accountability – of their right to use the data, simply because it makes the licensor (e.g. the government) uncomfortable.

A brief history lesson here for the lawyers who inserted this clause. Back in March of 2011, when the Federal Government launched data.gc.ca, it had a similar clause in its license. It read as follows:

“You shall not use the data made available through the GC Open Data Portal in any way which, in the opinion of Canada, may bring disrepute to or prejudice the reputation of Canada.”

While the language is a little more blunt, its effect was the same. After the press conference launching the site I sat down with Stockwell Day (who was the Minister responsible at the time) for 45 minutes and walked him through the various problems with their license.

After our conversation, guess how long it took for that clause to be removed from the license? Three hours.

If this license is going to be taken seriously, that clause is going to have to go. Otherwise, it risks becoming a laughing stock and a case study of what not to do in Open Government workshops around the world.

(An aside: What was particularly nice was that Minister Day personally called my cell phone to let me know that he’d removed the clause a few hours after our conversation. I’ve disagreed with Day on many, many, many things, but was deeply impressed by his knowledge of the open data file and his commitment to its ideals. Certainly his ability to change the license represents one of the fastest changes to policy I’ve ever witnessed.)

The (Missed?) Opportunity

What is ultimately disappointing about the Ontario license, however, is that it was never needed. Why every jurisdiction feels the need to invent its own license is beyond me. What, beyond the softening of the attribution clause, has the Ontario license added to the open data world? Not much that I can see. And, as I’ve noted above, in many ways it is a step back.

You know what data users would really like? A common license. That would make it MUCH easier to use data from the federal government, the government of Ontario and the Toronto City government all at the same time, without worrying about compatibility issues and whether you are telling the end user the right thing or not. In this regard the addition of another license is a major step backwards. Yes, let me repeat that for other jurisdictions thinking about doing open data: the addition of another new license is a major step backwards.

Given that the Federal Government has proposed a new Open Government License that is virtually identical to this license but has less problematic language, why not simply use it? It would make the lives of the people this license is supposed to enable – the policy wonks, the innovators, the app developers, the data geeks – so much easier.

That opportunity still exists. The Government of Ontario could still elect to work with the Feds on a common license. Indeed, given that the Ontario Open Data portal says they are asking for advice on how to improve the program, I implore, indeed beg, that you consider doing that. It would be wonderful if we could move to a single license in this country, and a partnership between the Federal Government and Ontario might give such an initiative real momentum and weight. If not, into the balkanized abyss of a thousand licenses we will stumble.


Re-Architecting the City by Changing the Timelines and Making it Disappear

A couple of weeks ago I was asked by one of the cities near where I live to sit on an advisory board around the creation of their Digital Government strategy. For me the meeting was good, since I felt that a cohort of us on the advisory board were really pushing the city into a place of discomfort (something you want an advisory board to do, in certain ways). My sense is that a big part of that conversation had to do with a subtle gap between the city staff and some of the participants around what a digital strategy should deal with.

Gord Ross (of Open Roads) – a friend and very smart guy – and I were debriefing afterwards about where and why the friction was arising.

We had been pushing the city hard on its need to iterate more and use data to drive decisions. This was echoed by some of the more internet-oriented members of the board. But at one point I got healthy pushback from one of the city staff. How, they asked, can I iterate when I’ve got 10-60 year timelines that I need to plan around? I simply cannot iterate when some of the investments I’m making are that long-term.

Gord raised Stewart Brand’s building layers as a metaphor, which I think sums up the differing views nicely.

Brand presents his basic argument in an early chapter, “Shearing Layers,” which argues that any building is actually a hierarchy of pieces, each of which inherently changes at different rates. In his business-consulting manner, he calls these the “Six S’s” (borrowed in part from British architect and historian F. Duffy’s “Four S’s” of capital investment in buildings).

The Site is eternal; the Structure is good for 30 to 300 years (“but few buildings make it past 60, for other reasons”); the Skin now changes every 15 to 20 years due to both weathering and fashion; the Services (wiring, plumbing, kitchen appliances, heating and cooling) change every seven to 15 years, perhaps faster in more technological settings; Space Planning, the interior partitioning and pedestrian flow, changes every two or three years in offices and lasts perhaps 30 years in the most stable homes; and the innermost layers of Stuff (furnishings) change continually.

My sense is the city staff are trying to figure out what the structure, skin and services layers should be for a digital plan, whereas a lot of us in the internet/tech world live occasionally in the services layer but mostly in the space planning and stuff layers, where the time horizons are WAY shorter. It’s not that we have to think that way, it is just that we have become accustomed to thinking that way… doubly so since so much of what works on the internet isn’t really “planned” – it is emergent. As a result, I found this metaphor useful for trying to understand how we can end up talking past one another.
It also goes to the heart of what I was trying to convey to the staff: that I think there are a number of assumptions governments make about what has been a 10- or 50-year lifecycle versus what that lifecycle could be in the future.
In other words, a digital strategy could allow some things to “phase change” from sitting in, say, the skin or service layer to operating on the faster timeline, lower capital cost and increased flexibility of the space planning layer. This could have big implications for how the city works. If you are buying software or hardware on the expectation that you will only have to do it every 15 years, your design parameters and expectations will be very different than if it is designed for 5 years. It also has big implications for the systems that you connect to or build around that software. If you accept that the software will constantly be changing, easy integration becomes a necessary feature. If you think you will have things for decades then, to a certain degree, stability and rigidity are a byproduct.
This is why, if the choice is between trying to better predict how to place a 30-year bet (e.g. architect something to be in the skin or services layer) or place a 5-year bet (architect it to be in the space planning or stuff layer), put as much of it in the latter as possible. If you re-read my post on the US government’s Digital Government strategy, this is functionally what I think they are trying to do. By unbundling the data from the application, they are trying to push the data up to the services layer of the metaphor, while pushing the applications built upon it down to the space planning and stuff layers.
This is not to say that nothing should be long term, or that everything long term is bad – that is not what I hope to convey. Rather, by being strategic about what we place where, we can foster really effective platforms (services) that can last for decades (think data) while giving ourselves a lot more flexibility around what gets built around them (think applications, programs, etc…).
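To make that split concrete, here is a minimal sketch in Python. Everything in it is hypothetical – the endpoint URL, the dataset and the field names are invented for illustration – but it shows the shape of the idea: the data contract is the decades-scale “services” layer, while anything built on top of it lives in the fast-moving “space planning and stuff” layers.

```python
import json
from urllib.request import urlopen

# The long-lived layer: a stable, boring data contract you can architect
# for decades. (Hypothetical endpoint and schema, for illustration only.)
ROAD_REPAIRS_API = "https://data.example.gov/api/road-repairs.json"

def fetch_road_repairs():
    """Fetch raw records from the stable 'services layer' data API."""
    with urlopen(ROAD_REPAIRS_API) as response:
        return json.load(response)

def repairs_by_neighbourhood(records):
    """A disposable 'stuff layer' view of the data: an app like this can be
    thrown away and rebuilt without ever touching the data contract above."""
    counts = {}
    for record in records:
        neighbourhood = record.get("neighbourhood", "unknown")
        counts[neighbourhood] = counts.get(neighbourhood, 0) + 1
    return counts
```

The design choice is the point: as long as the data API stays put, every application built on it inherits the flexibility of the shorter-lived layers and can be rewritten on a five-year (or five-week) cycle.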
The Goal
The reason you want to do all this is that you actually want to give the city the flexibility to a) compete in a global marketplace and b) make itself invisible to its citizens. I hinted at this goal the other day at the end of my piece in TechPresident on the UK’s digital government strategy.
On the competitive front, I suspect that across Asia and Africa about 200 cities, and maybe a lot more, are going to get brand new infrastructure over the coming 100 years. Heck, some of these cities are even being built from scratch. If you want your city to compete in that environment, you’d better be able to offer new and constantly improving services in order to keep up. If not, others may create efficiencies and discover improvements that give them structural advantages in the competition for talent and other resources.
But the other reason is that this kind of flexibility is, I think, critical to making (what Gord now has me referring to as the big “C” City) disappear. I like my government services best when they blend into my environment. If you live a privileged Western World existence… how often do you think about electricity? Only when you flick the switch and it doesn’t work. That’s how I suspect most people want government to work: seamless, reliable, designed into their lives, but not in the way of their lives. But more importantly, I want the “City” to be invisible so that it doesn’t get in the way of my ability to enjoy, contribute to, and be part of the (lower case) city – the city that we all belong to. The “city” as that messy, idea-swapping, cosmopolitan, wealth- and energy-generating, problematic space that is the organism humans create wherever they gather in large numbers. I’d rather be writing this blog post on a WordPress installation that does a lot of things well but invisibly, rather than monkeying around with scripts, plugins or some crazy server language I don’t want to know. Likewise, the less time I spend on “the City,” and the more seamlessly it works, the more time I can spend focused on “the city,” doing the things that make life more interesting and hopefully better for myself and the world.
Sorry for the rambling post – I’m digesting a lot of thoughts. Hope there were some tasty pieces in that for you. Also, opaque blog post title, eh? Okay, bed time now.

On Being Misquoted – Access Info Europe and Freedominfo.org

I’ve just been alerted to a new post on Freedominfo.org that uses quotes of mine in a way that is deeply disappointing. It’s never fun to see your ideas misused to make it appear that you are against something that you deeply support.

The most disappointing misquote comes from Helen Darbishire, a European FOI expert at Access Info Europe. Speaking about the convergence between open data and access to information laws (FOIA), she “lamented that comments like Eaves’ exacerbate divisions at a time when “synergies” are developing at macro and micro levels.” The comment she is referring to is this one:

“I just think FOIA is broken; the wait time makes it broken….” David Eaves, a Canadian open government “evangelist,” told the October 2011 meeting of International information commissioners. He said “efforts to repair it are at the margins” and governments have little incentive for reform.

I’m not sure if Darbishire was present at the 7th International Conference of Information Commissioners, where I made this comment in front of a room of mostly FOI experts, but the comment actually got a very warm reception. Specifically, I was talking about the wait times of access to information requests – not the idea of Access to Information. The fact is that for many people, waiting 4-30 weeks for a response from a government for a piece of information makes the process broken. In addition, I often see the conversation among FOIA experts focus on how to reduce that time by a week or a few days. But for most people, that will still leave them feeling like the system is too slow and so, in their mind, broken – particularly in a world where people are increasingly used to getting the information they want in about .3 seconds (the length of a Google search).

What I find particularly disappointing about Darbishire’s comments is that I’ve been advocating for the open data and access to information communities to talk more to one another – indeed, since long before I can find any reference to her calling for it. Back in April, during the OGP meeting, I wrote:

There remain important and interesting gaps, particularly between the more mature “Access to Information” community and the younger, still coalescing “Gov2.0/OpenGov/Tech/Transparency” community. It often feels like members of the access to information community are dismissive of the technology aspects of the open government movement in general and the OGP in particular. This is disappointing, as technology is likely going to have a significant impact on the future of access to information. As more and more government work gets digitized, the way we access information is going to change, and the opportunities to architect for accessibility (or not) will become more important. These are important conversations, and finding a way to knit these two communities together more could help advance everyone’s thinking.

And of course, rather than disparage Access to Information as a concept I frequently praise it, such as during this article about the challenges of convergence between open data and access to information:

Let me pause to stress: I don’t share the above to disparage FOI. Quite the opposite. It is a critical and important tool and I’m not advocating for its end. Nor am I arguing that open data can – in the short or even medium term – solve the problems raised above.

That said, I’m willing to point out the failures of both open data and access to information. But to then cherry-pick my comments about FOIA and paint me as someone who is being unhelpful strikes me as problematic.

I feel doubly that way since, not only have I advocated for efforts to bridge the communities, I’ve also worked to make it happen. I was the one who suggested that Warren Krafchik – the Civil Society co-chair of the Open Government Partnership – be invited to the Open Knowledge Festival to help with a conversation around bringing the two communities together, and I reached out to him with the invitation.

If someone wants to label me as someone who is opinionated in the space, that’s okay – I do have opinions about what works and what doesn’t work and try to share them, sometimes in a constructive way, and sometimes – such as when on a panel – in a way that helps spur discussion. But to lay the charge of being divisive, when I’ve been trying for several years to bridge the conversation and bring the open data perspective into the FOIA community, feels unfair and problematic.

Lies, Damned Lies, and Open Data

I have an article titled Lies, Damn Lies and Open Data in Slate Magazine as part of their Future Tense series.

Here, for me, is the core point:

On the surface, the open data movement was about who could access and use government data. It rested on the idea that data was as much a public asset as a highway, bridge, or park and so should be made available to those who paid for its creation and curation: taxpayers. But contrary to the hopes of some advocates, improving public access to data—that is, access to the evidence upon which public policy is going to be constructed—does not magically cause governments’, and politicians’, desire for control to evaporate. Quite the opposite. Open data will not depoliticize debate. It will force citizens, and governments, to realize how politicized data is, and always has been.

The long-form census debacle here in Canada was, I think, a great example of data getting politicized, and it really helped clarify my thinking around this. This piece has been germinating since then, but the core thesis has occasionally leaked out during some of my talks and discussions. Indeed, you can see me share some of it during the tail end of my opening keynote at the Open Knowledge Foundation International Open Data Camp almost three years ago.

Anyways, please hop on over to Slate and take a look – I hope you enjoy the read.

Lying with Maps: How Enbridge is Misleading the Public in its Ads

The Ottawa Citizen has a great story today about an advert by Enbridge (the company proposing to build an oil pipeline across British Columbia) that includes a “broadly representational” map showing prospective supertankers steaming up an unobstructed Douglas Channel on their way to and from Kitimat – the proposed terminus of the pipeline.

Of course there is a small problem with this map. The route to Kitimat by sea looks nothing like this.

Take a look at the Google Map view of the same area (I’ve pasted a screen shot below – and rotated the map so you are looking at it from the same “standing” location). Notice something missing from Enbridge’s maps?

[Image: Google Maps view of the Douglas Channel route to Kitimat]

According to the Ottawa Citizen’s story, an Enbridge spokesperson said the illustration was only meant to be “broadly representational.” Of course, all maps are “representational” – that is what a map is: a representation of reality that purposefully simplifies that reality so as to help the reader draw conclusions (like how to get from A to B). But such a representation can also be used to mislead the reader into drawing the wrong conclusion. In this case, it removes the roughly 1,000 square kilometers of islands that make this a complicated body of water, in order to show that oil tankers can steam relatively unimpeded up Douglas Channel from the ocean.

The folks over at Leadnow.ca have remade the Enbridge map as it should be:

[Image: Leadnow.ca’s corrected version of the Enbridge map, with the missing islands restored]

Rubbing out some – quite large – islands that make this passage much more complicated of course fits Enbridge’s narrative. The problem is that, given how much the company is suffering from the perception that it is not being fully upfront about its past record and the level of risk to the public, presenting a rosy-eyed view of the world is likely to diminish the public’s confidence in Enbridge, not increase its confidence in the project.

There is another lesson. This is a great example of how facts, data and visualization matter. They do. A lot. And we are, almost every day, being lied to through visual representations from sources we are told to trust. While I know that no one thinks of maps as open or public data, in many ways they are. And this is a powerful example of how, when data is open and available, it can enable people to challenge the narratives being presented to them, even when those offering them up are powerful companies backed by a national government.

If you are going to create a representation of something, you’d better think through what you are trying to present and how others are going to see it. In Enbridge’s case this was either an effort at guile gone horribly wrong or a communications strategy hopelessly unaware of the context in which it is operating. Whoever you are, and whatever you are visualizing – don’t be like Enbridge – think through your data visualization before you unleash it into the wild.

How Government should interact with Developers, Data Geeks and Analysts

Below is a screen shot from the OpenDataBC Google Group from about two months ago. I meant to blog about this earlier but life got in the way. For me, this is a perfect example of how many people in the data/developer/policy world would probably like to interact with their local, regional or national government.

A few notes on this interaction:

  • I occasionally hear people claim that governments are not responsive to requests for data sets. Some aren’t. Some are. To be fair, this was not a request for the most controversial data set in the province. But it was a request. And it was responded to. So clearly there are some governments that are responsive. The question is figuring out which ones are, and why, and seeing if we can export that capacity to other jurisdictions.
  • This interaction took place in a Google Group – so the whole context is social and norm-driven. I love that public officials in British Columbia, as well as with the City of Vancouver, check in every once in a while on Google Groups about open data, contributing to conversations and answering questions that citizens have about government, policies and open data. It’s a pretty responsive approach. Moreover, when people are not constructive it is the group that tends to moderate the behaviour, rather than some leviathan.
  • Yes, I’ve blacked out the email/name of the public servant. This is not because I think they’d mind being known, or because they shouldn’t be known, but because I just didn’t have a chance to ask for permission. What’s interesting is that this whole interaction is public and the official was both doing what their government wanted and compliant with all social media rules. And yet I’m blacking it out, which is a sign of how messed up current rules and norms make citizens’ relationships with the public officials they interact with online – I’m worried about doing something wrong by telling others about a completely public action. (And to be clear, the province of BC has really good and progressive rules around these types of things.)
  • Yes, this is not the be-all and end-all of the world. But it’s a great example of a small thing being done right. It’s nice to be able to show that to other government officials.


What do I think of the Canadian Senate?

Read Jennifer Ditchburn in the Globe and Mail – Senate stubborn on making information about chamber more accessible.

It is laughable how hard the Canadian Senate makes it to access information about itself. The lower house – which has made good progress on this front in the last few years – shares tons of information online. But the Senate? Attendance records, voting records and, well, pretty much any record is nigh impossible to get online. Indeed, as Jennifer points out, for many requests you have to make an appointment and go in, in person, in Ottawa(!!!) to get them.

What year is it? 1823? It’s not like we haven’t had the mail, the telephone, the fax machine and, of course, the internet come along to make accessing all this information a little easier. I love that if you want to get certain documents about the operation of the Senate you have to go to Ottawa.

Given that the Senate is not even elected in Canada and has, shall we say, a poor reputation for accountability and accessibility, you’d think this would be a priority. Sadly, it is not. Having spoken with some of the relevant parties, I can say: Senators are not interested in letting you see or know anything.

I understand the desire of the Senate to be above the political fray, to not be bent by the fickle swings of electoral politics, to be a true house of “sober second thought.” And yet I see no reason why it can’t still be all that while making all the information that it must make public about itself available online in a machine-readable format. It is hard to see how voting records or attendance records would sway how the Senate operates, other than maybe prompting some Senators to show up for work more often.

But let’s not hold our breath for change. Consider my favourite part of the article:

“A spokeswoman for government Senate leader Marjory LeBreton said she was unavailable and her office had no comment. Ms. LeBreton has asked a Senate committee to review the rules around Senate attendance, but it’s unclear if the review includes the accessibility of the register.”

No comment? For a story on the Senate’s lack of accessibility? Oy vey! File it under: #youredoingitwrong


Is Civic Hacking Becoming 'Our Pieces, Loosely Joined?'

I’ve got a piece up over on the WeGov blog at TechPresident – Is Civic Hacking Becoming ‘Our Pieces, Loosely Joined?’

Juicy bit:

There is, however, a larger issue that this press release raises. So far, it appears that the spirit of re-use among the big players, like MySociety and the Sunlight Foundation*, only goes so deep. Indeed, often it seems they are limited to believing others should re-use their code. There are few examples of the bigger players dedicating resources to support other people’s components. Again, it is fine if this is all about creating competing platforms and competing to get players in smaller jurisdictions – who cannot finance creating whole websites on their own – to adopt them. But if this is about reducing duplication, then I’ll expect to see some of the big players throw resources behind components they see built elsewhere. So far it isn’t clear to me that we are truly moving to a world of “small pieces loosely joined” instead of a world of “our pieces, loosely joined.”

You can read the rest over there.

Reviewing Access to Information Legislation

Just got informed – via the CivicAccess mailing list – that Canada’s Access to Information Commissioner is planning to review Canada’s Access to Information legislation (full story here at the Vancouver Sun).

This is great news. Canada has long trumpeted its Access to Information legislation as world-leading. This was true… in 1985. It was plausible in 1995. Today, it is anything but true. The process is slow, requests are frequently denied, and requests must be paid for by check. Indeed, if a document you are looking for might be held by the US government, it is well known among Canadian journalists that you are better off asking the Americans for it. Even though you are a foreigner, they are both much faster and much more likely to provide it. It is, frankly, embarrassing.

So we are no longer global leaders. Which is why I think it is great that the commissioner might look abroad for best practices. The article suggests she will look at Britain, the United States, Mexico, New Zealand and Australia.

These are all fine choices. But if I had my pick, I’d add Brazil to the mix. Its new transparency law is exceedingly interesting and aggressive in its approach. Greg Michener – a Canadian who lives in Brazil – covered the new law in Brazil’s Open Government Shock Therapy for TechPresident (where I’m an editor). The disclosure requirements in Brazil set a bar that, in some ways, is much higher than Canada’s.

There are also some Eastern European countries with very progressive transparency laws – enacted in reaction both to previously authoritarian regimes and to corruption problems – that make them worth examining. In other words, I’d love to see a mix that includes more countries that have altered their laws recently. This is probably where we are going to find some of the newer, more exciting innovations.

Regardless of what countries are looked at though – I’m glad the commissioner is doing this and wish her good luck.