Monthly Archives: November 2012

Proactive Disclosure – An Example of Doing it Wrong from Shared Services Canada

I just got flagged about this precious example of doing proactive disclosure wrong.

So here is a Shared Services Canada website dedicated to the Roundtable on Information Technology Infrastructure. Obviously this is a topic of real interest to me – I write a fair bit about delivering (or failing to deliver) government services online effectively. I think it is great that Shared Services Canada is reaching out to the private sector to try to learn lessons. Sadly, some of the links on the site didn’t work for me, specifically the important-sounding: Summary of Discussions: Shared Services Canada Information and Communications Technology Sector Engagement Process.

But that is not the best part. Take a look at the screenshot below. In one glance, the entire challenge of rethinking communications and government transparency is nicely summed up.
[Screenshot: proactive-nonedisclosure2]

Apparently, if you want a copy of the presentation the Minister made to the committee you have to request it.

That’s odd since, really, the cost of making it downloadable is essentially zero, while the cost of emailing someone and having them get it back to you is, well, a colossal waste of my time and that public servant’s. (Indeed, to demonstrate this to the government, I hope that every one of my readers requests this document.)

There are, in my mind, two explanations for this. The first, more ominous one, is that someone wants to create barriers to getting this document. Maybe that is the case – who knows.

The second, less ominous but in some ways more depressing, answer is that this is simply standard protocol or, worse, that no one involved in this site has the know-how or access rights to upload the document.

Note added 6 mins after posting: There is also a third reason, less innocuous than reasons one and two: that the government cannot post the document unless it is in both official languages. And since this presentation is (likely) only available in English, it cannot be posted. This actually feels the most likely, and I will be teeing up a whole new post shortly on bilingualism and transparency. The number of times I’m told a document or data set can’t be proactively shared because of language issues is frustratingly high. I’ve spoken to the Language Commissioner on this and believe more dialogue is required. Bilingualism cannot be an excuse for a poor experience or, worse, opaque government.

In either case, it is a sad outcome. Either our government is maliciously trying to make it difficult for Canadians to get information (true of most governments) or they don’t know how to share it.

Of course, you may be saying… but David – who cares if there is an added step to getting this document that is slightly inconvenient? Well, let me remind you: THIS IS SHARED SERVICES CANADA AND IT IS ABOUT A COMMITTEE FOCUSED ON DELIVERING ONLINE SERVICES (INTERNALLY AND EXTERNALLY) MORE EFFECTIVELY. If there were one place where you wanted to show you were responsive, proactive and reducing the transaction costs to citizens… the kind of approach you were going to use to make all government services more efficient and effective… this would be it.

The icing on the cake? There is that beautiful “transparency” button right below the text that talks about how the government is interested in proactive disclosure (see screenshot below). I love the text here – this is exactly what I want my government to be doing.

And yet this experience, while I’m sure it conforms to the letter of the policy, feels like it violates pretty much everything about the spirit of proactive disclosure. This is, after all, a document that has already been made public… and now we are requiring citizens to request it.

We have a lot of work to do.

Making Bug Fixing more Efficient (and pleasant) – This Made Me Smile

The other week I was invited down to the Bay Area Drupal Camp (#BadCamp) to give a talk on community management to a side meeting of the 100 or so core Drupal developers.

I gave an hour-long version of my OSCON keynote on the Science of Community Management and had a great time engaging what was clearly a room of smart, caring people who want to do good things, ship great code, and work well with one another. As part of my talk I ran them through some basic negotiation skills – particularly around separating positions (a demand) from interests (the reasons/concerns that created that demand). Positions are challenging to work with, as they tend to lock people into what they are asking for, which makes outcomes either binary or fosters compromises that may make little sense. Interests, by contrast (which you get at by being curious and asking lots of whys), can create the conditions for creative, value-generating outcomes that also strengthen the relationship.

Obviously, understanding the difference is key, but so is acting on it – e.g. asking questions at critical moments to try to open up the dialogue and uncover interests.

Seems like someone was listening during the workshop, since I was just sent this link to a conversation about a tricky Drupal bug (screenshot below).

[Screenshot: Drupal-bug-fixing2]

I love the questions. These are exactly the kinds of skills and community norms I think we need to build into more of our bug-tracking environments/communities, which can sometimes be pretty hostile and aggressive – something that I think turns off many potentially good contributors.

International Open Data Day – An Update

(Can’t read the whole post? Important stuff is highlighted in grey below.)

Two years ago, I met some open data advocates from Brazil and Ottawa, and we schemed about doing an international open data hackathon. A few weeks later, this blog post launched International Open Data Day with the hope that supporters would emerge in 5-6 cities to host local events.

Instead, on December 4th, 2010, an amazing group of volunteers organized events in 60 cities on every continent except Australia (OK, and Antarctica). A year later, our second effort enjoyed similar success – including Australia this time! We also benefited from a generous informal partnership with Random Hacks of Kindness, who let our open data hackers participate in spaces where they were organizing hackathons on the same day.

A number of people have been asking me about this year’s International Open Data Day. First, I want to apologize to the community of wonderful people who have been asking me if we will do it again. Between an outrageous travel schedule, work commitments and, most happily, the advent of my little boy – whose 36-hour birth(!) prevented me from participating in Vancouver’s Open Data hackathon last year(!) – I have been remiss in not organizing and communicating information regarding this year’s event.

So, over the past 4 weeks I’ve been consulting with some of the previous year’s organizers (I deeply apologize if I have not spoken to you), and here is the status update:

  1. I’ve been reminded of the awesomeness, organizational skill, patience and general wonderfulness of open data advocates around the world.
  2. We are DEFINITELY DOING International Open Data Day and WANT YOU TO PARTICIPATE.
  3. For a number of reasons* we are MOVING THE EVENT TO FEB 23rd, 2013.
  4. We are more keen than ever to have this not just be about hacking code, but reusing other projects and BROADENING THE COMMUNITY by using data to do all sorts of things from analysis to visualizations and even just sharing an interesting insight.

So with that in mind, here are a few things that are going on as we speak.

  • If you are interested in participating in, or organizing an International Open Data Day event in your community, please join the Open Data Day mailing list. We’ll post any updates there, and it is a great place to ask questions and get to know other organizers.
  • You may have noticed that opendataday.org is not working. Yikes! We know! We are in the midst of transferring the site over to the Open Knowledge Foundation, who has generously offered to manage it.
  • Once we get the site up – and especially the wiki – I’ll be hitting the mailing list again asking people to start registering their cities, noting locations and sharing information. In the past, the wiki has been amazingly well organized, which has been exceedingly helpful and awesome.
  • For those interested in hosting an event, there is a great guide on how to do so, thanks to Kevin McArthur, Herb Lainchbury and Donna Horn – that can be found here.
  • We are starting to reach out to some of our past partners, including The Open Knowledge Foundation, Data.gov/White House, the World Bank, ScraperWiki, the Sunlight Foundation, various Hackerspaces, and Hacks and Hackers – and if anyone else wants to be involved, we’d love to hear from you.
  • Feel free to send me an email if you have any thoughts, questions or concerns.

Okay, apologies for the long blog post and the delay in communicating about this. If you have participated or organized an event in the past, thank you. I hope you’re excited about doing it again and that the new date works for you!

Am looking forward to hearing from you.

*Those reasons being: too close to various holidays, right in the middle of the best part of summer in the southern hemisphere, too little time to organize, and timing around the Code for America fellows being in their partner cities.

Ontario's Open Data Policy: The Good, The Bad, The Ugly and the (Missed?) Opportunity

Yesterday the province of Ontario launched its Open Data portal. This is great news and is the culmination of a lot of work by a number of good people. The real work behind getting an open data program launched is, by and large, invisible to the public, but it is essential – and so congratulations are in order for those who helped out.

Clearly this open data portal is in its early stages – something the province is upfront about. As a result, I’m less concerned with the number of data sets on the site (which, however, needs to – and should – grow over time). Hopefully the good people in the government of Ontario have some surprises for us around interesting data sets.

Nor am I concerned about the layout of the site (which needs to, and should, improve over time – for example, once you start browsing the data you end up on this URL and there is no obvious path back to the open data landing page, which makes navigating the site hard).

In fact, unlike some, I find any shortcomings in the website downright encouraging. Hopefully it means that speed, iteration and an attitude of shipping early have won out over the media-obsessed, rigid, risk-averse approach governments all too often take. Time will tell if my optimism is warranted.

What I do want to focus on is the license, since this is a core piece of infrastructure for an open data initiative. Indeed, it is the license that determines whether the data is actually open or closed. And I think we should be less forgiving of errors in this regard than in the past. It was one thing if you launched in the early days of open data, two or four years ago. But we aren’t in the early days anymore. There are over 200 government open data portals around the world. We’ve crossed the chasm, people. Not getting the license right is not a “beta” mistake anymore. It’s just a mistake.

So what can we say about the Ontario Open Data license?

First, the Good

There are lots of good things to be said about it. It clearly keys off the UK’s Open Government License, much like BC’s license did, as does the proposed Canadian Open Government License. This means that, above all, it is written in plain English and is easily understood. In addition, the general format is familiar to many people interested in open data.

The other good thing about the license (pointed out to me by the always sharp Jason Birch) is that its attribution clause is softer than those of the UK, BC or even the proposed Federal Government license. Ontario uses the term “should” whereas the others use the term “must.”

Sadly, this one improvement pales in comparison to some of the problems and, most importantly, the potentially lost opportunity that I urgently highlight at the bottom of this post.

The Bad

While this license does have many good qualities inherited from the UK license, it suffers from some major flaws. The most notable comes in this line:

Ontario does not guarantee the continued supply of the Datasets, updates or corrections or that they are or will be accurate, useful, complete, current or free and clear of any possible third party copyright, moral right, other intellectual property right or other claim.

Basically this line kills the possibility that any business, non-profit or charity will ever use this data in any real sense. Hobbyists, geeks and academics will of course use it, but this provision is deeply flawed.

Why?

Well, let me explain what it means. This line says that the government cannot be held accountable for releasing only data it has the right to release. For example: say the government has software that tracks road repair data, and it starts to release that data and, happily, all sorts of companies and app developers use it to help predict traffic and do other useful things. But then, one day, the vendor that provided the road repair tracking software suddenly discovers in the fine print of the contract that they, not the government, own that data! Well! All those companies, non-profits and app developers are suddenly using proprietary data, not (open) government data. And the vendor would be entirely within its rights to either sue them, or demand a license fee in exchange for letting them continue to use the data.

Now, I understand why the government is doing this. It doesn’t want to be liable if such a mistake is made. But, of course, if the government doesn’t want to absorb the risk, that risk doesn’t magically disappear – it transfers to the data user. And the user has no way of managing that risk! Users don’t know what the contracts say or what the obligations are; the party best positioned to figure that out is the government. Essentially this line transfers a risk to the party (in this case the user) that is least able to manage it. You are left asking yourself: what business, charity or non-profit is going to invest hundreds of thousands of dollars (or more) and people’s time to build a product, service or analysis around an asset (government data) that it might suddenly discover it doesn’t have the right to use?

The government is the only organization that can clear the rights. If it is unwilling to do so, then I think we need to question whether this is actually open data.

The Ugly

But of course the really ugly part of the license (which caused me to go on a bit of a twitter rant) comes early. Here it is:

If you do any of the above you must ensure that the following conditions are met:

  • your use of the Datasets causes no harm to others.

Wowzers.

This clause is so deeply problematic it is hard to know where to begin.

First, what is the definition of harm? If I use open data from the Ontario government to rate hospitals, and some hospitals are sub-standard, am I “harming” the hospital? Its workers? The community? The Ministry of Health?

So then who decides what the definition is? Well, since the Government of Ontario is the licensor of the data… it would seem to suggest that they do. Whatever the standing government of the day wants to decree a “harm” suddenly becomes legit. Basically this clause could be used to strip many users – particularly those interested in using the data as a tool for accountability – of their right to use the data, simply because it makes the licensor (e.g. the government) uncomfortable.

A brief history lesson here for the lawyers who inserted this clause. Back in March of 2011, when the Federal Government launched data.gc.ca, they had a similar clause in their license. It read as follows:

“You shall not use the data made available through the GC Open Data Portal in any way which, in the opinion of Canada, may bring disrepute to or prejudice the reputation of Canada.”

While the language is a little more blunt, its effect was the same. After the press conference launching the site I sat down with Stockwell Day (who was the Minister responsible at the time) for 45 minutes and walked him through the various problems with their license.

After our conversation, guess how long it took for that clause to be removed from the license? 3 hours.

If this license is going to be taken seriously, that clause is going to have to go; otherwise it risks being a laughing stock and a case study of what not to do in Open Government workshops around the world.

(An aside: What was particularly nice was that Minister Day personally called my cell phone to let me know that he’d removed the clause a few hours after our conversation. I’ve disagreed with Day on many, many, many things, but was deeply impressed by his knowledge of the open data file and his commitment to its ideals. Certainly his ability to change the license represents one of the fastest changes to policy I’ve ever witnessed.)

The (Missed?) Opportunity

What is ultimately disappointing about the Ontario license, however, is that it was never needed. Why every jurisdiction feels the need to invent its own license is beyond me. What, beyond the softening of the attribution clause, has the Ontario license added to the Open Data world? Not much that I can see. And, as I’ve noted above, in many ways it is a step back.

You know what data users would really like? A common license. That would make it MUCH easier to use data from the federal government, the government of Ontario and the Toronto City government all at the same time, and not worry about compatibility issues and whether you are telling the end user the right thing or not. In this regard the addition of another license is a major step backwards. Yes, let me repeat that for other jurisdictions thinking about doing open data: the addition of another new license is a major step backwards.

Given that the Federal Government has proposed a new Open Government License that is virtually identical to this license but has less problematic language, why not simply use it? It would make the lives of the people this license is supposed to enable – the policy wonks, the innovators, the app developers, the data geeks – so much easier.

That opportunity still exists. The Government of Ontario could still elect to work with the Feds around a common license. Indeed, given that the Ontario Open Data portal says they are asking for advice on how to improve this program, I implore, indeed beg, that you consider doing that. It would be wonderful if we could move to a single license in this country, and a partnership between the Federal Government and Ontario might give such an initiative real momentum and weight. If not, into the balkanized abyss of a thousand licenses we will stumble.


Re-Architecting the City by Changing the Timelines and Making it Disappear

A couple of weeks ago I was asked by one of the cities near where I live to sit on an advisory board around the creation of their Digital Government strategy. For me the meeting was good, since I felt that a cohort of us on the advisory board were really pushing the city into a place of discomfort (something you want an advisory board to do, in certain ways). My sense is a big part of that conversation had to do with a subtle gap between the city staff and some of the participants around what a digital strategy should deal with.

Gord Ross (of Open Roads) – a friend and very smart guy – and I were debriefing afterwards about where and why the friction was arising.

We had been pushing the city hard on its need to iterate more and use data to drive decisions. This was echoed by some of the more internet-oriented members of the board. But at one point I felt I got healthy pushback from one of the city staff. How, they asked, can I iterate when I’ve got 10-to-60-year timelines that I need to plan around? I simply cannot iterate when some of the investments I’m making are that long-term.

Gord raised Stewart Brand’s building layers as a metaphor, which I think sums up the differing views nicely.

Brand presents his basic argument in an early chapter, “Shearing Layers,” which argues that any building is actually a hierarchy of pieces, each of which inherently changes at different rates. In his business-consulting manner, he calls these the “Six S’s” (borrowed in part from British architect and historian F. Duffy’s “Four S’s” of capital investment in buildings).

The Site is eternal; the Structure is good for 30 to 300 years (“but few buildings make it past 60, for other reasons”); the Skin now changes every 15 to 20 years due to both weathering and fashion; the Services (wiring, plumbing, kitchen appliances, heating and cooling) change every seven to 15 years, perhaps faster in more technological settings; Space Planning, the interior partitioning and pedestrian flow, changes every two or three years in offices and lasts perhaps 30 years in the most stable homes; and the innermost layers of Stuff (furnishings) change continually.

My sense is the city staff are trying to figure out what the structure, skin and services layers should be for a digital plan, whereas a lot of us in the internet/tech world live occasionally in the services layer but mostly in the space planning and stuff layers, where the time horizons are WAY shorter. It’s not that we have to think that way, it is just that we have become accustomed to thinking that way… doubly so since so much of what works on the internet isn’t really “planned” – it is emergent. As a result, I found this metaphor useful for trying to understand how we can end up talking past one another.

It also goes to the heart of what I was trying to convey to the staff: that I think there are a number of assumptions governments make about what has been a 10- or 50-year lifecycle versus what that lifecycle could be in the future.

In other words, a digital strategy could allow some things to “phase change” from sitting in, say, the skin or services layer to operating on the faster timeline, lower capital cost and increased flexibility of the space planning layer. This could have big implications for how the city works. If you are buying software or hardware on the expectation that you will only have to do it every 15 years, your design parameters and expectations will be very different than if it is designed for 5 years. It also has big implications for the systems that you connect to, or build around, that software. If you accept that the software will constantly be changing, easy integration becomes a necessary feature. If you think you will have things for decades then, to a certain degree, stability and rigidity are a byproduct.

This is why, if the choice is between placing a 30-year bet (e.g. architecting something to sit in the skin or services layer) and placing a 5-year bet (architecting it to sit in the space planning or stuff layer), you should put as much of it in the latter as possible. If you re-read my post on the US government’s Digital Government strategy, this is functionally what I think they are trying to do. By unbundling the data from the application, they are trying to push the data up to the services layer of the metaphor, while pushing the applications built upon it down to the space planning and stuff layers.

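To make the unbundling idea a bit more concrete, here is a minimal sketch (the feed URL and field names are entirely hypothetical, not from any real city system): the data sits behind a stable, versioned contract – the 30-year bet – while the application that consumes it is deliberately cheap and replaceable – the 5-year bet.

```python
# A minimal sketch of unbundling data from applications.
# Assumption: the feed URL and field names below are invented
# for illustration only.

import json
from urllib.request import urlopen

# The long-lived "services layer" bet: a stable, versioned data contract.
# This is the part you architect to last for decades.
ROAD_REPAIR_FEED = "https://data.example-city.ca/v1/road-repairs.json"

def fetch_road_repairs(feed_url: str = ROAD_REPAIR_FEED) -> list:
    """Fetch the city's open data feed. Apps depend only on this contract."""
    with urlopen(feed_url) as response:
        return json.load(response)

# The short-lived "space planning / stuff layer" bet: a throwaway
# presentation layer. Next year this could be a mobile app or a map
# visualization; the feed above doesn't have to change.
def report_open_repairs(repairs: list) -> None:
    for repair in repairs:
        if repair.get("status") == "open":
            print(f"{repair['street']}: scheduled for {repair['scheduled_date']}")

if __name__ == "__main__":
    report_open_repairs(fetch_road_repairs())
```

The design choice is the point: because the applications are disposable, the only thing that needs long-term rigidity is the data contract itself.
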
This is not to say that nothing should be long-term, or that everything long-term is bad – I hope not to convey that. Rather, by being strategic about what we place where, we can foster really effective platforms (services) that can last for decades (think data) while giving ourselves a lot more flexibility around what gets built around them (think applications, programs, etc.).

The Goal

The reason you want to do all this is that you actually want to give the city the flexibility to a) compete in a global marketplace and b) make itself invisible to its citizens. I hinted at this goal the other day at the end of my piece in TechPresident on the UK’s digital government strategy.

On the competitive front, I suspect that across Asia and Africa about 200 cities, and maybe a lot more, are going to get brand new infrastructure over the coming 100 years. Heck, some of these cities are even being built from scratch. If you want your city to compete in that environment, you’d better be able to offer new and constantly improving services in order to keep up. If not, others may create efficiencies and discover improvements that give them structural advantages in the competition for talent and other resources.

But the other reason is that this kind of flexibility is, I think, critical to making (what Gord now has me referring to as the big “C” city) disappear. I like my government services best when they blend into my environment. If you live a privileged Western World existence… how often do you think about electricity? Only when you flick the switch and it doesn’t work. That’s how I suspect most people want government to work: seamless, reliable, designed into their lives, but not in the way of their lives. But more importantly, I want the “City” to be invisible so that it doesn’t get in the way of my ability to enjoy, contribute to, and be part of the (lower case) city – the city that we all belong to. The “city” as that messy, idea-swapping, cosmopolitan, wealth- and energy-generating, problematic space that is the organism humans create wherever they gather in large numbers. I’d rather be writing this blog post on a WordPress installation that does a lot of things well but invisibly, rather than monkeying around with scripts, plugins or some crazy server language I don’t want to know. Likewise, the less time I spend on “the City,” and the more seamlessly it works, the more time I spend focused on “the city,” doing the things that make life more interesting and hopefully better for myself and the world.

Sorry for the rambling post. But digesting a lot of thoughts. Hope there were some tasty pieces in that for you. Also, opaque blog post title eh? Okay bed time now.

The UK's Digital Government Strategy – Worth a Peek

I’ve got a piece up on TechPresident about the UK Government’s Digital Strategy, which was released today.

The strategy (and my piece!) are worth checking out. They are saying a lot of the right things – useful stuff for anyone in an industry or sector that has been conservative vis-a-vis online services (I’m looking at you, governments and banking).

As I note in the piece… there is reason we should expect better:

The second is that the report is relatively frank, as far as government reports go. The website that introduces the three reports is emblazoned with an enormous title: “Digital services so good that people prefer to use them.” It is a refreshing title that amounts to a confession I’d like to see from more governments: “sorry, we’ve been doing it wrong.” And the report isn’t shy about backing that statement up with facts. It notes that while the proportion of Internet users who shop online grew from 74 percent in 2005 to 86 percent in 2011, only 54 percent of UK adults have used a government service online. Many of those have only used one.

Of course the real test will come with execution. The BC Government, the White House and others have written good reports on digital government, but it is rolling it out that is the tricky part. The UK Government has pretty good cred as far as I’m concerned, but I’ll be watching.

You can read the piece here – hope you enjoy!