
Ontario's Open Data Policy: The Good, The Bad, The Ugly and the (Missed?) Opportunity

Yesterday the province of Ontario launched its Open Data portal. This is great news and is the culmination of a lot of work by a number of good people. The real work behind getting an open data program launched is, by and large, invisible to the public, but it is essential – and so congratulations are in order for those who helped out.

Clearly this open data portal is in its early stages – something the province is upfront about. As a result, I'm less concerned with the number of data sets on the site (which, however, needs to and should grow over time). Hopefully the good people in the government of Ontario have some surprises for us around interesting data sets.

Nor am I concerned about the layout of the site (which needs to, and should, improve over time – for example, once you start browsing the data you end up on this URL with no obvious path back to the open data landing page, which makes navigating the site hard).

In fact, unlike some, I find any shortcomings in the website downright encouraging. Hopefully it means that speed, iteration and a ship-early attitude have won out over the media-obsessed, rigid, risk-averse approach governments all too often take. Time will tell if my optimism is warranted.

What I do want to focus on is the license, since this is a core piece of infrastructure for an open data initiative. Indeed, it is the license that determines whether the data is actually open or closed. And I think we should be less forgiving of errors in this regard than in the past. It was one thing if you launched in the early days of open data, two or four years ago. But we aren't in the early days anymore. There are over 200 government open data portals around the world. We've crossed the chasm, people. Not getting the license right is not a "beta" mistake any more. It's just a mistake.

So what can we say about the Ontario Open Data license?

First, the Good

There are lots of good things to be said about it. It clearly keys off the UK's Open Government License, much like BC's license did, as does the proposed Canadian Open Government License. This means that, above all, it is written in plain English and is easily understood. In addition, the general format is familiar to many people interested in open data.

The other good thing about the license (pointed out to me by the always sharp Jason Birch) is that its attribution clause is softer than those of the UK, BC or even the proposed Federal Government license. Ontario uses the term "should" whereas the others use the term "must."

Sadly, this one improvement pales in comparison to some of the problems and, most importantly, the potentially lost opportunity I urgently highlight at the bottom of this post.

The Bad

While this license does have many of the good qualities pioneered by the UK, it does suffer from some major flaws. The most notable comes in this line:

Ontario does not guarantee the continued supply of the Datasets, updates or corrections or that they are or will be accurate, useful, complete, current or free and clear of any possible third party copyright, moral right, other intellectual property right or other claim.

Basically this line kills the possibility that any business, non-profit or charity will ever use this data in any serious way. Hobbyists, geeks and academics will of course use it, but this provision is deeply flawed.

Why?

Well, let me explain what it means. This says that the government cannot be held accountable for releasing only data it has the right to release. For example: say the government has software that tracks road repair data and it starts to release that data and, happily, all sorts of companies and app developers use it to help predict traffic and do other useful things. But then, one day, the vendor that provided that road repair tracking software suddenly discovers in the fine print of the contract that they, not the government, own that data! Well! All those companies, non-profits and app developers are suddenly using proprietary data, not (open) government data. And the vendor would be entirely within its rights to either sue them or demand a license fee in exchange for letting them continue to use the data.

Now, I understand why the government is doing this. It doesn't want to be liable if such a mistake is made. But, of course, if the government doesn't want to absorb the risk, that risk doesn't magically disappear; it transfers to the data user. And the user has no way of managing that risk! Users don't know what the contracts say or what the obligations are – the party best positioned to figure that out is the government. Essentially this line transfers a risk to the party (in this case the user) that is least able to manage it. You are left asking yourself: what business, charity or non-profit is going to invest hundreds of thousands of dollars (or more) and people's time to build a product, service or analysis around an asset (government data) that it might suddenly discover it doesn't have the right to use?

The government is the only organization that can clear the rights. If it is unwilling to do so, then I think we need to question whether this is actually open data.

The Ugly

But of course the really ugly part of the license (which caused me to go on a bit of a twitter rant) comes early. Here it is:

If you do any of the above you must ensure that the following conditions are met:

  • your use of the Datasets causes no harm to others.

Wowzers.

This clause is so deeply problematic it is hard to know where to begin.

First, what is the definition of harm? If I use open data from the Ontario government to rate hospitals and some hospitals turn out to be sub-standard, am I "harming" the hospital? Its workers? The community? The Ministry of Health?

So then who decides what the definition is? Well, since the Government of Ontario is the licensor of the data… it would seem to suggest that they do. Whatever the government of the day wants to decree is a "harm" suddenly becomes legit. Basically this clause could be used to strip many users – particularly those interested in using the data as a tool for accountability – of their right to use the data, simply because it makes the licensor (e.g. the government) uncomfortable.

A brief history lesson here for the lawyers who inserted this clause. Back in March of 2011, when the Federal Government launched data.gc.ca, it had a similar clause in its license. It read as follows:

“You shall not use the data made available through the GC Open Data Portal in any way which, in the opinion of Canada, may bring disrepute to or prejudice the reputation of Canada.”

While the language is a little more blunt, its effect was the same. After the press conference launching the site I sat down with Stockwell Day (who was the Minister responsible at the time) for 45 minutes and walked him through the various problems with their license.

After our conversations, guess how long it took for that clause to be removed from the license? 3 hours.

If this license is going to be taken seriously, that clause is going to have to go. Otherwise, it risks becoming a laughing stock and a case study of what not to do in open government workshops around the world.

(An aside: What was particularly nice was that Minister Day personally called my cell phone to let me know that he'd removed the clause a few hours after our conversation. I've disagreed with Day on many, many, many things, but was deeply impressed by his knowledge of the open data file and his commitment to its ideals. Certainly his ability to change the license represents one of the fastest changes to policy I've ever witnessed.)

The (Missed?) Opportunity

What is ultimately disappointing about the Ontario license, however, is that it was never needed. Why every jurisdiction feels the need to invent its own license is beyond me. What, beyond the softening of the attribution clause, has the Ontario license added to the open data world? Not much that I can see. And, as I've noted above, in many ways it is a step back.

You know what data users would really like? A common license. That would make it MUCH easier to use data from the federal government, the government of Ontario and the City of Toronto all at the same time and not worry about compatibility issues or whether you are telling the end user the right thing. In this regard the addition of another license is a major step backwards. Yes, let me repeat that for other jurisdictions thinking about doing open data: the addition of another new license is a major step backwards.

Given that the Federal Government has proposed a new Open Government License that is virtually identical to this license but has less problematic language, why not simply use it? It would make the lives of the people this license is supposed to enable – the policy wonks, the innovators, the app developers, the data geeks – so much easier.

That opportunity still exists. The Government of Ontario could still elect to work with the Feds on a common license. Indeed, given that the Ontario Open Data portal says they are asking for advice on how to improve this program, I implore, indeed beg, that you consider doing that. It would be wonderful if we could move to a single license in this country, and a partnership between the Federal Government and Ontario might give such an initiative real momentum and weight. If not, into the balkanized abyss of a thousand licenses we will stumble.

 

On Being Misquoted – Access Info Europe and Freedominfo.org

I've just been alerted to a new post on Freedominfo.org that has quotes of mine used in a way that is deeply disappointing. It's never fun to see your ideas misused to make it appear that you are against something that you deeply support.

The most disappointing misquote comes from Helen Darbishire, a European FOI expert at Access Info Europe. Speaking about the convergence between open data and access to information laws (FOIA) she “lamented that comments like Eaves’ exacerbate divisions at a time when  “synergies” are developing at macro and micro levels.” The comment she is referring to is this one:

“I just think FOIA is broken; the wait time makes it broken….” David Eaves, a Canadian open government “evangelist,” told the October 2011 meeting of International information commissioners. He said “efforts to repair it are at the margins” and governments have little incentive for reform.

I'm not sure if Darbishire was present at the 7th International Conference of Information Commissioners where I made this comment in front of a room of mostly FOI experts, but the comment actually got a very warm reception. Specifically, I was talking about the wait times of access to information requests – not the idea of access to information. The fact is that for many people, waiting 4-30 weeks for a response from a government for a piece of information makes the process broken. In addition, I often see the conversation among FOIA experts focus on how to reduce that time by a week or a few days. But for most people, that will still leave them feeling like the system is too slow and so, in their mind, broken, particularly in a world where people are increasingly used to getting the information they want in about .3 seconds (the length of a Google search).

What I find particularly disappointing about Darbishire's comments is that I've been advocating for the open data and access to information communities to talk more to one another – indeed, long before any reference I can find of her calling for it. Back in April, during the OGP meeting, I wrote:

There remain important and interesting gaps, particularly between the more mature "Access to Information" community and the younger, still coalescing "Gov2.0/OpenGov/Tech/Transparency" community. It often feels like members of the access to information community are dismissive of the technology aspects of the open government movement in general and the OGP in particular. This is disappointing as technology is likely going to have a significant impact on the future of access to information. As more and more government work gets digitized, the way we access information is going to change, and the opportunities to architect for accessibility (or not) will become more important. These are important conversations and finding a way to knit these two communities together more could help advance everyone's thinking.

And of course, rather than disparage Access to Information as a concept I frequently praise it, such as during this article about the challenges of convergence between open data and access to information:

Let me pause to stress, I don't share the above to disparage FOI. Quite the opposite. It is a critical and important tool and I'm not advocating for its end. Nor am I arguing that open data can – in the short or even medium term – solve the problems raised above.

That said, I'm willing to point out the failures of both open data and access to information. But to then cherry-pick my comments about FOIA and paint me as someone who is being unhelpful strikes me as problematic.

I feel doubly that way since, not only have I advocated for efforts to bridge the communities, I've also tried to make it happen. I was the one who suggested that Warren Krafchik – the civil society co-chair of the Open Government Partnership – be invited to the Open Knowledge Festival to help with a conversation around bringing the two communities together, and I reached out to him with the invitation.

If someone wants to label me as someone who is opinionated in the space, that’s okay – I do have opinions about what works and what doesn’t work and try to share them, sometimes in a constructive way, and sometimes – such as when on a panel – in a way that helps spur discussion. But to lay the charge of being divisive, when I’ve been trying for several years to bridge the conversation and bring the open data perspective into the FOIA community, feels unfair and problematic.

Lies, Damned Lies, and Open Data

I have an article titled Lies, Damn Lies and Open Data in Slate Magazine as part of their Future Tense series.

Here, for me, is the core point:

On the surface, the open data movement was about who could access and use government data. It rested on the idea that data was as much a public asset as a highway, bridge, or park and so should be made available to those who paid for its creation and curation: taxpayers. But contrary to the hopes of some advocates, improving public access to data—that is, access to the evidence upon which public policy is going to be constructed—does not magically cause governments’, and politicians’, desire for control to evaporate. Quite the opposite. Open data will not depoliticize debate. It will force citizens, and governments, to realize how politicized data is, and always has been.

The long form census debacle here in Canada was, I think, a great example of data getting politicized, and it really helped clarify my thinking around this. This piece has been germinating since then, but the core thesis has occasionally leaked out during some of my talks and discussions. Indeed, you can see me share some of it during the tail end of my opening keynote at the Open Knowledge Foundation International Open Data Camp almost three years ago.

Anyways, please hop on over to Slate and take a look – I hope you enjoy the read.

How Government should interact with Developers, Data Geeks and Analysts

Below is a screenshot from the OpenDataBC Google group from about two months ago. I meant to blog about this earlier but life has been in the way. For me, this is a perfect example of how many people in the data/developer/policy world would probably like to interact with their local, regional or national government.

A few notes on this interaction:

  • I occasionally hear people try to claim that governments are not responsive to requests for data sets. Some aren't. Some are. To be fair, this was not a request for the most controversial data set in the province. But it was a request. And it was responded to. So clearly there are some governments that are responsive. The question is figuring out which ones are, why they are, and seeing if we can export that capacity to other jurisdictions.
  • This interaction took place in a Google group – so the whole context is social and norm driven. I love that public officials in British Columbia, as well as at the City of Vancouver, are checking in every once in a while on Google groups about open data, contributing to conversations and answering questions that citizens have about government, policies and open data. It's a pretty responsive approach. Moreover, when people are not constructive it is the group that tends to moderate the behaviour, rather than some leviathan.
  • Yes, I've blacked out the email/name of the public servant. This is not because I think they'd mind being known or because they shouldn't be known, but because I just didn't have a chance to ask for permission. What's interesting is that this whole interaction is public and the official was both doing what that government wanted and compliant with all social media rules. And yet, I'm blacking it out, which is a sign of how messed up current rules and norms make citizens' relationships with the public officials they interact with online – I'm worried about doing something wrong by telling others about a completely public action. (And to be clear, the province of BC has really good and progressive rules around these types of things.)
  • Yes, this is not the be-all and end-all of the world. But it's a great example of a small thing being done right. It's nice to be able to show that to other government officials.

 

Containers, Facebook, Baseball & the Dark Matter around Open Data (#IOGDC keynote)

Below is an extended blog post that summarizes the keynote address I gave at the World Bank/Data.gov International Open Government Data Conference in Washington DC on Wednesday, July 11th. This piece is cross posted over at the WeGov blog on TechPresident, where I also write on transparency, technology and politics.

Yesterday, after spending the day at the International Open Government Data Conference at the World Bank (co-hosted by data.gov), I left both upbeat and concerned. Upbeat because of the breadth of countries participating and the progress being made.

I was worried, however, because of how the type of conversation we are having might limit the growth of both our community and the impact open data could have. Indeed, as we talk about technology and how to do open data, we risk missing the real point of the whole exercise – which is about use and impacts.

To drive this point home I want to share three stories that highlight the challenges I believe we should be talking about.

Challenge 1: Scale Open Data

In 1956 the Ideal-X, the ship pictured to the left, sailed from Newark to Houston and changed the world.

Confused? Let me explain.

As Marc Levinson chronicles in his excellent book The Box, the world in 1956 was very different from our world today. Global trade was relatively low. China was a long way off from becoming the world's factory floor. And it was relatively unusual for people to buy goods made elsewhere. Indeed, as Levinson puts it, the cost of shipping goods was "so expensive that it did not pay to ship many things halfway across the country, much less halfway around the world." I'm a child of the second era of globalization. I grew up in a world of global transport and shipping. The world before all of that – the world Levinson is referring to – is actually foreign to me. What is amazing is how much of it has just become a basic assumption of life.

And this is why the Ideal-X, the aforementioned ship, is so important. It was the first cargo container ship (in the sense in which we understand containers today). Its trip from Newark to Houston marked the beginning of a revolution because containers slashed the cost of shipping goods. Before the Ideal-X, the cost of loading cargo onto a medium-sized cargo ship was $5.83 per ton; with containers, the cost dropped to 15.8 cents. Yes, the word you are looking for is: "wow."

You have to understand that before containers, loading a ship was a lot more like packing a mini-van for a family vacation to the beach than the orderly process of stacking very large Lego blocks on a boat. Before containers, literally everything had to be hand packed, stored and tied down in the hull (see picture to the right).

This is a little bit what our open data world looks like right now. The people who are consuming open data are like digital longshoremen. They have to look at each open data set differently, unpack it accordingly and figure out where to put it, how to treat it and what to do with it. Worse, when looking at data from across multiple jurisdictions, it is often much like cargo going around the world before 1956: a very slow and painful process (see man on the right).

Of course, the real revolution in container shipping happened in 1966, when the size of containers was standardized. Within a few years containers could move pretty much anywhere in the world, from truck to train to boat and back again. In the following decades global shipping trade increased at 2.5 times the rate of economic output. In other words… it exploded.

Geek side bar: For techies, think of shipping containers as the TCP/IP packet of globalization. TCP/IP standardized the packet of information that flowed over the network so that data could move from anywhere to anywhere. Interestingly, like containers, what was in the packet was actually not relevant and didn't need to be known by the person transporting it. But the fact that it could move anywhere created scale and allowed for explosive growth.

What I'm trying to drive at is that, when it comes to open data, the number of data sets that gets published is no longer the critical metric. Nor is the number of open data portals. We've won. There are more and more. The marginal political and/or persuasive benefit of adding another open data portal or data set won't change the context anymore. I want to be clear – this is not to say that more open data sets and more open data portals are not important or valuable – from a policy and programmatic perspective, more is much, much better. What I am saying is that having more isn't going to shift the conversation about open data any more. This is especially true if data continues to require large amounts of work and time for people to unpack and understand, over and over again, across every portal.

In other words, what IS going to count is how many standardized open data sets get created. This is what we SHOULD be measuring. The General Transit Feed Specification revolutionized how people engaged with public transit because the standard made it so easy to build applications and do analysis around it (see the sketch below). What we need to do is create similar standards for dozens, hundreds, thousands of other data sets so that we can drive new forms of use and engagement. More importantly, we need to figure out how to do this without relying on a standards process that takes 8 to 15 to infinite years to decide on said standard. That model is too slow to serve us, and so re-imagining/reinventing that process is where the innovation is going to shift next.
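A minimal sketch of why a shared standard matters (my own code, not part of the GTFS spec, and assuming only the stops.txt file that every GTFS feed publishes, with its standard stop_id, stop_name, stop_lat and stop_lon columns): the same few lines work, unchanged, against the feed of any transit agency in the world.

```python
import csv
import math

def load_stops(path):
    """Read a GTFS stops.txt file into a list of dicts.

    stop_id, stop_name, stop_lat and stop_lon are standard GTFS columns,
    so this same code works against any agency's feed."""
    with open(path, newline="", encoding="utf-8") as f:
        return [
            {
                "id": row["stop_id"],
                "name": row["stop_name"],
                "lat": float(row["stop_lat"]),
                "lon": float(row["stop_lon"]),
            }
            for row in csv.DictReader(f)
        ]

def nearest_stop(stops, lat, lon):
    """Return the stop closest to (lat, lon) using a simple planar distance."""
    return min(stops, key=lambda s: math.hypot(s["lat"] - lat, s["lon"] - lon))

if __name__ == "__main__":
    stops = load_stops("stops.txt")  # path to any agency's GTFS stops file
    closest = nearest_stop(stops, 43.6532, -79.3832)  # e.g. downtown Toronto
    print(closest["name"], closest["id"])
```

Swap in a different agency's stops.txt and nothing changes – that portability is exactly what most open data sets lack today.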

So let's stop counting the number of open data portals and data sets, and start counting the number of common standards – because that number is really low. More critically, if we want to experience the kind of explosive growth in use that global trade and shipping experienced after the rise of the standardized container, then our biggest challenge is clear: we need to containerize open data.

Challenge 2: Learn from Facebook

One of the things I find most interesting about Facebook is that everyone I've talked to about it notes how the core technology that made it possible was not particularly new. It wasn't that Zuckerberg leveraged some new code or invented a new, better coding language. Rather, it was that he accomplished a brilliant social hack.

Part of this was luck: the public had come a long way and was much more willing to do social things online in 2004 than it had been even two years earlier with sites like Friendster. Or, more specifically, young people who'd grown up with internet access were willing to do things and imagine using online tools in ways that those who had not grown up with those tools wouldn't or couldn't. Zuckerberg, and his users, had grown up digital and so could take the same tools everyone else had and do something others hadn't imagined, because their assumptions were just totally different.

My point here is that, while it is still early, I'm hoping we'll soon have the beginnings of a cohort of public servants who've "grown up data" – who, despite their short careers in the public service, have matured in a period where open data has been an assumption, not a novelty. My hope and suspicion is that this generation of public servants is going to think about open data very differently than many of us do. Most importantly, I'm hoping they'll spur a discussion about how to use open data – not just to share information with the public – but to drive policy objectives. The canonical opportunity for me around this remains restaurant inspection data, but I know there are many, many more.

What I'm trying to say is that the conferences we organize have to talk less and less about how to get data open and start talking more about how we use data to drive public policy objectives. I'm hoping the next International Open Government Data Conference will have an increasing number of presentations about how citizens, non-profits and other outsiders are using open data to drive their agendas, and how public servants are using open data strategically to drive to an outcome.

I think we have to start fostering that conversation by next year at the latest, and that this conversation, about use, has to become core to everything we talk about within two years, or we will risk losing steam. This is why I think the containerization of open data is so important, and why I think the White House's digital government strategy is so important, since it makes internal use core to the government's open data strategy.

Challenge 3: The Culture and Innovation Challenge.

In May 2010 I gave this talk on Open Data, Baseball and Government at the Gov 2.0 Summit in Washington DC. It centered on the story outlined in the fantastic book Moneyball by Michael Lewis, which traces how a baseball team – the Oakland A's – used a new analysis of player stats to ferret out undervalued players. This enabled them to win a large number of games on a relatively small payroll. Consider the numbers to the right.

I mean, if you are the owner of the Texas Rangers, you should be pissed! You are paying 250% in salary for 25% fewer wins than Oakland. If this were a government chart, where "wins" were potholes found and repaired, and "payroll" was costs… everyone in the World Bank would be freaking out right now.

For those curious, the analytical "hack" was recognizing that the most valuable thing a player can do on offense is get on base. This is because it gives them an opportunity to score (+) but also means you don't burn one of your three "outs" that would end the inning and the chance for other players to score. The problem was that, to measure the offensive power of a player, most teams were looking at batting averages (along with a lot of other weird, totally non-quantitative stuff), which ignore the possibility of getting walked – which lets you get on base without hitting the ball!
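To make the gap between the two metrics concrete, here is a small illustrative sketch – the player numbers are entirely made up – comparing batting average, which ignores walks, with a simplified on-base percentage, which counts them:

```python
def batting_average(hits, at_bats):
    """Hits per official at-bat; walks aren't at-bats, so they simply vanish."""
    return hits / at_bats

def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sac_flies):
    """Times on base divided by plate appearances (simplified OBP formula)."""
    return (hits + walks + hit_by_pitch) / (at_bats + walks + hit_by_pitch + sac_flies)

# Two hypothetical players with identical hitting but very different walk rates.
patient = {"hits": 150, "walks": 90, "hit_by_pitch": 5, "at_bats": 500, "sac_flies": 5}
free_swinger = {"hits": 150, "walks": 20, "hit_by_pitch": 5, "at_bats": 500, "sac_flies": 5}

for name, p in [("patient hitter", patient), ("free swinger", free_swinger)]:
    print(name,
          f"AVG={batting_average(p['hits'], p['at_bats']):.3f}",
          f"OBP={on_base_percentage(**p):.3f}")
```

Both players hit an identical .300, but the patient hitter reaches base far more often – which is exactly the kind of undervalued contribution the A's exploited.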

What's interesting, however, is that the original insight – that people were using the wrong metrics to assess baseball players – came decades before the Oakland A's started to use it. Indeed, it was a nighttime security guard with a strong mathematics background and an obsession for baseball who first began pointing this stuff out.

The point I'm making is that it took 20 years for a manager in baseball to recognize that there was better evidence and data they could be using to make decisions. TWENTY YEARS. And that manager was hated by all the other managers, who believed he was ruining the game. Today this approach to assessing baseball is commonplace – everyone is doing it – but notice how the problem of using baseball's "open data" to create better outcomes was never just an accessibility issue. Once that was resolved, the bigger challenge centered around culture and power. Those with the power had created a culture in which new ideas – ideas grounded in evidence but that were disruptive – couldn't find an audience. Of course, there were structural issues as well; many people had jobs that depended on not using the data and on instead relying on their "instincts." But I think the cultural issue is a significant one.

So we can't expect that we are going to go from open portal today to better decisions tomorrow. There is a good chance that some of the ideas the data prompts will be so radical and challenging that either the ideas, the people who champion them, or both, could get marginalized. On the upside, I feel like I've seen some evidence to the contrary in cities like New York and Chicago, but the risk is still there.

So what are we going to do to ensure that the culture of government is one that embraces challenges to our thinking and assumptions, so that it doesn't require 20 years for us to make progress? This is a critical challenge for us – and it is much, much bigger than open data.

Conclusion: Focus on the Dark Matter

I’m deeply indebted to my friend – the brilliant Gordon Ross – who put me on to this idea the other day over tea.


Do you remember the briefcase in Pulp Fiction? The one that glowed when opened? That the characters were all excited about but you never knew what was inside? It's called a MacGuffin. I'm not talking about the briefcase per se. Rather, I mean the object in a story that all the characters are obsessed about, but that you – the audience – never find out what it is, and frankly, it really isn't that important to you. In Pulp Fiction I remember reading that the briefcase is allegedly Marsellus Wallace's soul. But ultimately, it doesn't matter. What matters is that Vincent Vega, Jules Winnfield and a ton of other characters think it is important, and that drives the action and the plot forward.

Again – let me be clear – open data portals are our MacGuffin device. We seem to care A LOT about them. But trust me, what really matters is everything that happens around them. What makes open data important is not a data portal. It is a necessary prerequisite, but it's not the end, it's just the means. We're here because we believe that the things open data can let us and others do matter. The open data portal was only ever a MacGuffin device – something that focused our attention and helped drive action so that we could do the other things – the dark matter that lies all around the MacGuffin device.

And that is what brings me back to our three challenges. Right now, the debate around open data risks becoming too much like a Pulp Fiction conference in which all the panels talk about the briefcase. Instead we should be talking more and more about all the action – the dark matter – taking place around the briefcase. Because that is what really matters. For me, the three things that matter most are what I've mentioned in this talk:

  • standards – which will let us scale; I believe strongly that the conversation is going to shift from portals to standards;
  • strategic use – starting us down the path of learning how open data can drive policy outcomes; and
  • culture and power – recognizing that open data is going to surface a lot of reasons why governments don't want to engage in data-driven decision making.

In other words, I want to be talking about how open data can make the world a better place, not about how we do open data. That conversation still matters, open data portals still matter, but the path forward around them feels straightforward, and if they remain the focus we’ll be obsessing about the wrong thing.

So here's what I'd like to see in the future from our open data conferences. We've got to stop talking about how to do open data. This is because all of our efforts here, everything we are trying to accomplish… it has nothing to do with the data. What I think we want to be talking about is how open data can be a tool to make the world a better place. So let's make sure that is the conversation we are having.

Open Postal Codes: A Public Response to Canada Post on how they undermine the public good

Earlier this week the Ottawa Citizen ran a story in which I’m quoted about a fight between Treasury Board and Canada Post officials over making postal code data open. Treasury Board officials would love to add it to data.gc.ca while Canada post officials are, to put it mildly, deeply opposed.

This is, of course, unsurprising, since Canada Post recently launched a frivolous lawsuit against a software developer who is – quite legally – recreating the postal code data set. For those new to this issue, I blogged about this – why postal codes matter – and covered the weakness (and incompetence) of Canada Post's legal case here.

But this new Ottawa Citizen story had me rolling my eyes anew – especially after reading the quotes and text from Canada Post's spokesperson. This is in no way an attack on the spokesperson, who I'm sure is a nice person. It is an attack on their employer, whose position, sadly, is in opposition to the public interest not just because of the outcome it generates but because of the way it treats citizens. Let me break down Canada Post's platform of ignorance – er, public statement – line by line, in order to spell out how they are undermining the public interest, public debate and accountability.

Keeping the information up-to-date is one of the main reasons why Canada Post needs to charge for it, said Anick Losier, a spokeswoman for the crown corporation, in an interview earlier this year. There are more than 250,000 new addresses and more than a million address changes every year and they need the revenue generated from selling the data to help keep the information up-to-date.

So what is interesting about this is that – as far as I understand – it is not Canada Post that actually generates most of this data. It is local governments that are responsible for creating address data and, ironically, they are required to share it for free with Canada Post. So Canada Post’s data set is itself built on data that it receives for free. It would be interesting for cities to suddenly claim that they needed to engage in “cost-recovery” as well and start charging Canada Post. At some point you recognize that a public asset is a public asset and that it is best leveraged when widely adopted – something Canada Post’s “cost-recovery” prevents. Indeed, what Canada Post is essentially saying is that it is okay for it to leverage the work of other governments for free, but it isn’t okay for the public to leverage its works for free. Ah, the irony.

“We need to ensure accuracy of the data just because if the data’s inaccurate it comes into the system and it adds more costs,” she said.

“We all want to make sure these addresses are maintained.”

So, of course, do I. That said, the statement makes it sound like there is a gap between Canada Post – which is interested in the accuracy of the data – and everyone else – who isn't. I can tell you, as someone who has engaged with non-profits and companies that make use of public data, no one is more concerned about the accuracy of data than those who reuse it. That's because when you make use of public data and share the results with the public or customers, they blame you, not the government source from which you got the data, for any problems or mistakes. So, invariably, one thing that happens when you make data open is that you actually have more stakeholders with strong interests in ensuring the data is accurate.

But there is also something subtly misleading about Canada Post's statement. At the moment, the only reason there is inaccurate data out there is because people are trying to find cheaper ways of creating the postal code data set and so are willing to tolerate less accurate data in order to not have to pay Canada Post. If (and that is a big if) Canada Post's main concern were accuracy, then making the data open would be the best protection, as it would eliminate less accurate versions of postal code data. Indeed, this suggests a failure to understand the economics. Canada Post states that other parts of its business become more expensive when postal code data is inaccurate. That would suggest that providing free data might help reduce those costs – incenting people to create inaccurate postal code data by charging for it may be hurting Canada Post more than anyone else. But we can't assess that, for reasons I outline below. And ultimately, I suspect Canada Post's main interest is not accuracy – it is cost recovery – but that doesn't sound nearly as good as talking about accuracy or quality, so they try to shoehorn those ideas into their argument.

She said the data are sold on a “cost-recovery” basis but declined to make available the amount of revenue it brings in or the amount of money it costs the Crown corporation to maintain the data.

This is my favourite part. Basically, a crown corporation, whose assets belong to the public, won't reveal the cost of a process over which it has a monopoly. Let's be really clear. This is not like other parts of their business where there are competitive risks in releasing information – Canada Post is a monopoly provider. Instead, we are being patronized and essentially asked to buzz off. There is no accountability and there is no reason why they couldn't give us these numbers. Indeed, the total disdain for the public is so appalling it reminds me of why I opted out of junk mail and moved my bills to email and auto-pay ages ago.

This matters because the "cost-recovery" issue goes to the heart of the debate. As I noted above, Canada Post gets the underlying address data for free. That said, there is no doubt that it then adds some value to the data by adding postal codes. The question is, should that value best be recouped through cost-recovery at this point in the value chain, or at later stages through additional economic activity (and thus greater tax revenue)? This debate would be easier to have if we knew the scope of the costs. Does creating postal code data cost Canada Post $100,000 a year? A million? 10 million? We don't know and they won't tell us. There are real economic benefits to be had in a digital economy where postal code data is open, but Canada Post prevents us from having a meaningful debate since we can't find out the tradeoffs.

In addition, it also means that we can't assess whether there are disruptive ways in which postal code data could be generated vastly more efficiently. Canada Post has no incentive (quite the opposite, actually) to generate this data more efficiently and therefore make the "cost-recovery" price much, much lower. It may be that creating postal code data really is a $100,000-a-year problem, with the right person and software working on it.

So in the end, a government-owned Crown corporation not only refuses to do something that might help spur Canada's digital economy – make postal code data open – it refuses to even engage in a legitimate public policy debate. For an organization that is fighting to find its way in the 21st century, it is a pretty ominous sign.

* As an aside, in the Citizen article it says that I’m an open government activist who is working with the federal government on the website’s development. The first part – on activism – is true. The latter half, that I work on the open government website’s development, is not. The confusion may arise from the fact that I sit on the Treasury Board’s Open Government Advisory Panel, for which I’m not paid, but am asked for feedback, criticism and suggestions – like making postal code data open – about the government’s open government and open data initiatives.

The US Government's Digital Strategy: The New Benchmark and Some Lessons

Last week the White House launched its new roadmap for digital government. This included the publication of Digital Government: Building a 21st Century Platform to Better Serve the American People (PDF version), the issuing of a Presidential directive and the announcement of White House Innovation Fellows.

In other words, it was a big week for those interested in digital and open government. Having had some time to digest these docs and reflect upon them, below are some thoughts on these announcements and lessons I hope governments and other stakeholders take from them.

First off, the core document – Digital Government: Building a 21st Century Platform to Better Serve the American People – is a must read if you are a public servant thinking about technology or even about program delivery in general. In other words, if your email has a .gov in it or ends in something like .gc.ca you should probably read it. Indeed, I’d put this document right up there with another classic must read, The Power of Information Taskforce Report commissioned by the Cabinet Office in the UK (which if you have not read, you should).

Perhaps most exciting to me is that this is the first time I’ve seen a government document clearly declare something I’ve long advised governments I’ve worked with: data should be a layer in your IT architecture. The problem is nicely summarized on page 9:

Traditionally, the government has architected systems (e.g. databases or applications) for specific uses at specific points in time. The tight coupling of presentation and information has made it difficult to extract the underlying information and adapt to changing internal and external needs.

Oy. Isn't that the case. Most government data is captured in an application and designed for a single use. For example, say you run the license renewal system. You update your database every time someone wants to renew their license. That makes sense, because that is what the system was designed to do. But maybe you'd like to track, in real time, how frequently the database changes, and by whom. Whoops. The system wasn't designed for that, because that wasn't needed in the original application. Of course, being able to present the data in that second way might be a great way to assess how busy different branches are, so you could warn prospective customers about wait times. Now imagine this lost opportunity… and multiply it by a million. Welcome to government IT.

Decoupling data from application is pretty much the first thing in the report. Here's my favourite chunk (italics mine, to note the extra favourite part).

The Federal Government must fundamentally shift how it thinks about digital information. Rather than thinking primarily about the final presentation—publishing web pages, mobile applications or brochures—an information-centric approach focuses on ensuring our data and content are accurate, available, and secure. We need to treat all content as data—turning any unstructured content into structured data—then ensure all structured data are associated with valid metadata. Providing this information through web APIs helps us architect for interoperability and openness, and makes data assets freely available for use within agencies, between agencies, in the private sector, or by citizens. This approach also supports device-agnostic security and privacy controls, as attributes can be applied directly to the data and monitored through metadata, enabling agencies to focus on securing the data and not the device.
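To make the "information layer" idea concrete, here is a minimal sketch – mine, not the White House's actual implementation; the dataset, field names and endpoints are hypothetical – of the license-renewal wait-times example above served through a small web API that any presentation layer (a government web page, a mobile app, a third-party service) could consume:

```python
# A minimal sketch of "data as a layer": the data and its metadata live behind
# a small web API, and every presentation layer consumes the same endpoints.
# The dataset, custodian and field names here are hypothetical, for illustration only.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real system this would come from a governed data store, not a literal.
WAIT_TIMES = [
    {"branch": "Main Street", "service": "license renewal", "wait_minutes": 12},
    {"branch": "Riverside", "service": "license renewal", "wait_minutes": 34},
]

METADATA = {
    "title": "Service branch wait times (illustrative)",
    "custodian": "Hypothetical Licensing Branch",
    "updated": "2012-05-23",
    "license": "the publishing jurisdiction's open license",
}

@app.route("/api/wait-times")
def wait_times():
    # Structured data, decoupled from any particular web page or app.
    return jsonify({"data": WAIT_TIMES})

@app.route("/api/wait-times/metadata")
def wait_times_metadata():
    # Metadata travels with the data, not with the presentation layer.
    return jsonify(METADATA)

if __name__ == "__main__":
    app.run(port=5000)
```

The renewal application keeps writing to its database, but a dashboard, an open data portal or an outside developer can all read the same endpoints without anyone cracking open the original system – which is the kind of decoupling the report is describing.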

To help, the White House provides a visual guide for this roadmap. I've pasted it below. However, I've taken the liberty of highlighting how most governments try to tackle open data on the right – just so people can see how different the White House's approach is, and why this is not just an issue of throwing up some new data but a total rethink of how government architects itself online.

There are, of course, a bunch of things that flow out of the White House's approach that are not spelled out in the document. The first and most obvious is that once you make data an information layer, you have to manage it directly. This means that data starts to be seen and treated as an asset – which means understanding who the custodian is and establishing a governance structure around it. This is something that, previously, really only libraries and statistical bureaus have understood (and sometimes not even then!).

This is the dirty secret about open data: to do it effectively you actually have to start treating data as an asset. For the White House, the benefit of taking that view of data is that it saves money. Creating a separate information layer means you don't have to duplicate it for all the different platforms you have. In addition, it gives you more flexibility in how you present it, meaning the costs of showing information on different devices (say computers vs. mobile phones) should also drop. Cost savings and increased flexibility are the real drivers. Open data becomes an additional benefit. This is something I dive into in deeper detail in a blog post from July 2011: It's the icing, not the cake: key lesson on open data for governments.

Of course, having a cool model is nice and all, but, as with the previous directive on open government, this document has hard requirements designed to force departments to begin shifting their IT architecture quickly. So check this interesting tidbit out of the doc:

While the open data and web API policy will apply to all new systems and underlying data and content developed going forward, OMB will ask agencies to bring existing high-value systems and information into compliance over a period of time—a "look forward, look back" approach. To jump-start the transition, agencies will be required to:

  • Identify at least two major customer-facing systems that contain high-value data and content;
  • Expose this information through web APIs to the appropriate audiences;
  • Apply metadata tags in compliance with the new federal guidelines; and
  • Publish a plan to transition additional systems as practical

Note the language here. This is again not a “let’s throw some data up there and see what happens” approach. I endorse doing that as well, but here the White House is demanding that departments be strategic about the data sets/APIs they create. Locate a data set that you know people want access to. This is easy to assess. Just look at pageviews, or go over FOIA/ATIP requests and see what is demanded the most. This isn’t rocket science – do what is in most demand first. But you’d be surprised how few governments want to serve up data that is in demand.

Another interesting inference one can make from the report is that its recommendations embrace the possibility that participants outside of government – both for-profit and non-profit – can build services on top of government information and data. Referring back to the chart above, see how the Presentation Layer includes both private and public examples? Consequently, a non-profit's website dedicated to, say… job info for veterans could pull live data and information from various Federal Government websites, weave it together and present it in the way that is most helpful to the veterans it serves. In other words, the opportunity for innovation is fairly significant. This also has two additional repercussions. It means that services the government does not currently offer – at least in a coherent way – could be woven together by others. It also means there may be information and services the government simply never chooses to develop a presentation layer for – it may simply rely on private or non-profit sector actors (or other levels of government) to do that for it. This has interesting political ramifications in that it could allow the government to "retreat" from presenting these services and rely on others. There are definitely circumstances where this would make me uncomfortable, but the solution is not to avoid architecting the system this way; it is to ensure that such programs are funded in a way that ensures government involvement in all aspects – information, platform and presentation.

At this point I want to interject two tangential thoughts.

First, if you are wondering why your government is not doing this – be it at the local, state or national level – here's a big hint: this is what happens when you make the CIO an executive who reports at the highest level. You're just never going to get innovation out of your government's IT department if the CIO reports into the fricking CFO. All that tells me is that IT is a cost centre that should be focused on sustaining itself (e.g. keeping computers on) and that you see IT as having no strategic relevance to government. In the private sector, in the 21st century, this is pretty much the equivalent of committing suicide for most businesses. For governments… making CIOs report into CFOs is considered a best practice. I've more to say on this. But I'm taking a deep breath and am going to move on.

Second, I love how clear the document is on milestones – and how nicely they are visualized as well. It may be my poor memory, but I feel like it is rare for me to read a government road map on any issue where the milestones are so clearly laid out.

It's particularly nice when a government treats its citizens as though they can understand something like this, and isn't afraid to be held accountable for a plan. I'm not saying that other governments don't set out milestones (some do, many however do not). But often these deadlines are buried in reams of text. Here is a simple scorecard any citizen can look at. Of course, last time around, after the open government directive was issued immediately after Obama took office, they updated these scorecards for each department, highlighting whether milestones were green, yellow or red, depending on how the department was performing. All in front of the public. Not something I've ever seen in my country, that's for sure.

Of course, the document isn't perfect. I was initially intrigued to see the report advocates that the government "Shift to an Enterprise-Wide Asset Management and Procurement Model." Most citizens remain blissfully unaware of just how broken government procurement is. Indeed, I say this, dear reader, with no idea where you live and who your government is, but I am enormously confident your government's procurement process is totally screwed. And I'm not just talking about when they try to buy fighter planes. I'm talking pretty much all procurement.

Today's procurement is perfectly designed to serve one group: big (IT) vendors. The process is so convoluted and so complicated that they are really the only ones with the resources to navigate it. The White House document essentially centralizes procurement further. On the one hand this is good: it means the requirements around platforms and data noted in the document can be more readily enforced. Basically the centre is asserting more control at the expense of the departments. And yes, there may be some economies of scale that benefit the government. But the truth is, whenever procurement decisions get bigger, so too do the stakes, and so too does the process surrounding them. Thus there is a tiny handful of players that can respond to any RFP, and real risks that the government ends up in a duopoly (kind of like with defense contractors). There is some wording around open source solutions that helps address some of this, but ultimately, it is hard to see how the recommendations are going to really alter the quagmire that is government procurement.

Of course, these are just some thoughts and comments that struck me and that I hope those of you still reading will find helpful. I've got thoughts on the White House Innovation Fellows, especially given that it appears to have been at least in part inspired by the Code for America fellowship program, which I have been lucky enough to be involved with. But I'll save those for another post.

Mainstreaming The Gov 2.0 Message in the Canadian Public Service

A couple of years ago I wrote a Globe op-ed, "A Click Heard Across the Public Service," that outlined the significance of the clerk using GCPEDIA to communicate with public servants. It was a message – or, even more importantly, an action – to affirm his commitment to changing how government works. For those unfamiliar, the Clerk of the Privy Council is the head of the public service for the federal government; a crude analogy would be that he is the CEO and the Prime Minister is the Chairman (yes, I know that analogy is going to get me in trouble with people…)

Well, the clerk continues to broadcast that message, this time in his Nineteenth Annual Report to the Prime Minister on the Public Service of Canada. As an observer in this space what is particularly exciting for me is that:

  • The Clerk continues to broadcast this message. Leadership and support at the top is essential on these issues. It isn’t sufficient, but it is necessary.
  • The role of open data and social media is acknowledged on several occasions

And as a policy entrepreneur, what is doubly exciting is that:

  • Projects I’ve been personally involved in get called out; and
  • Language I’ve been using in briefs, blog posts and talks to public servants is in this text

You can, of course, read the whole report here. There is much more in it than just talk of social media and rethinking the public service; there is obviously talk about the budget and other policy areas as well. But both the continued prominence given to renewal and technology, and the explicit statements about the failure to move fast enough to keep up with the speed of change in society at large, suggest that the clerk continues to be worried about this issue.

For those less keen to read the whole thing, here are some juicy bits that mattered to me:

In the section "The World in Which We Serve," which is basically providing context…

At the same time, the traditional relationship between government and citizens continues to evolve. Enabled by instantaneous communication and collaboration technologies, citizens are demanding a greater role in public policy development and in the design and delivery of services. They want greater access to government data and more openness and transparency from their institutions.

Under "Our Evolving Institution," which lays out some of the current challenges and priorities, we find this as one of the four areas of focus mentioned:

  • The Government expanded its commitment to Open Government through three main streams: Open Data (making greater amounts of government data available to citizens), Open Information (proactively releasing information about Government activities) and Open Dialogue (expanding citizen engagement with Government through Web 2.0 technologies).

This is indeed interesting. The more this government talks about openness in general, the more interesting it will be to see how the public reacts, particularly in regard to its treatment of certain sectors (e.g. environmental groups). Still more interesting is what appears to be a growing recognition of the importance of data (from a government that cut the long form census). Just yesterday the Health Minister, while talking about a controversial multiple sclerosis vein procedure, stated that:

“Before our government will give the green light to a limited clinical trial here in Canada, the proposed trial would need to receive all necessary ethical and medical approvals. As Minister of Health, when it comes to clinical issues, I rely on advice from doctors and scientists who are continually monitoring the latest research, and make recommendations in the best interests of patient health and safety.”

This is an interesting statement from a government that called doctors "unethical" because of their support for the Insite supervised injection site which, the evidence shows, is the best way to save lives and get drug users into detox programs.

For evidence-based policy advocates – such as myself – the adoption of the language of data is a development that I think could help refocus debates onto more productive terrain.

Then, towards the bottom of the report, there is a call out that mentions the Open Policy conference at DFAIT that I had the real joy of helping to convene and for which I served as host and facilitator.

Policy Built on Shared Knowledge
The Department of Foreign Affairs and International Trade (DFAIT) has been experimenting with an Open Policy Development Model that uses social networking and technology to leverage ideas and expertise from both inside and outside the department. A recent full-day event convened 400 public and private sector participants and produced a number of open policy pilots, e.g., an emergency response simulation involving consular officials and a volunteer community of digital crisis-mappers.

DFAIT is also using GCConnex, the Public Service’s social networking site, to open up policy research and development to public servants across departments.

This is a great, much-deserved win for the team at DFAIT, which went out on a limb to run this conference and was rewarded with participation from across the public service.

Finally, anyone who has seen me speak will recognize a lot of this text as well:

As author William Gibson observed, “The future is already here, it’s just unevenly distributed.” Across our vast enterprise, public servants are already devising creative ways to do a better job and get better results. We need to shine a light on these trailblazers so that we can all learn from their experiments and build on them. Managers and senior leaders can foster innovation—large and small—by encouraging their teams to ask how their work can be done better, test out new approaches and learn from mistakes.

So much innovation in the 21st century is being made possible by well-developed communication technologies. Yet many public servants are frustrated by a lack of access to the Web 2.0 and social media tools that have such potential for helping us transform the way we work and serve Canadians. Public servants should enjoy consistent access to these new tools wherever possible. We will find a way to achieve this while at the same time safeguarding the data and information in our care.

I also encourage departments to continue expanding the use of Web 2.0 technologies and social media to engage with Canadians, share knowledge, facilitate collaboration, and devise new and efficient services.

To give full attribution, the William Gibson quote, which I use a great deal, was something I first saw used by my friend Tim O’Reilly, who is, needless to say, a man with a real ability to understand a trend and explain an idea to people. I hope his approach to thinking is reflected in much of what I do.

What, in sum, all these call outs really tell us is that the Gov 2.0 message in the federal public service is being mainstreamed, at the very least among the most senior public servants. This does not mean that our government is going to magically transform, it simply means that the message is getting through and people are looking for ways to push this type of thinking into the organization. As I said before, this is not sufficient to change the way government works, but it is necessary.

Going to keep trying to see what I can do to help.

Canada's Action Plan on Open Government: A Review

The other day the Canadian Government published its Action Plan on Open Government, a high-level document that both lays out the Government’s goals on this file and fulfills its pledge to create tangible goals as part of its participation in next week’s Open Government Partnership 2012 annual meeting in Brazil.

So what does the document say and what does it mean? Here is my take.

Take Away #1: Not a breakthrough document

There is much that is good in the government’s action plan – some of which I will highlight later. But for those hoping that Canada was going to get the Gov 2.0 bug and try to leapfrog leaders like the United States or the United Kingdom, this document will disappoint. By and large this document is not about transforming government – even at its most ambitious it appears to be much more about engaging in some medium-sized experiments.

As a result the document emphasizes a number of things that the UK and US started doing several years ago, such as getting a license that adheres to international norms, posting government resource allocation and performance management information online in machine-readable forms, or refining the open data portal.

What you don’t see are explicit references to efforts to rethink how government leverages citizens’ experience and knowledge with a site like Challenge.gov, engage experts in innovative ways such as with Peer to Patent, or work with industries or provinces to generate personal open data as the US has done with the Blue Button (for healthcare) or the Green Button (for utilities).

Take Away #2: A Solid Foundation

This said, there is much in the document that is good. Specifically, in many areas, it does lay a solid foundation for some future successes. Probably the most important statements are the “foundational commitments” that appear on this page. Here are some key points:

Open Government Directive

In Year 1 of our Action Plan, we will confirm our policy direction for Open Government by issuing a new Directive on Open Government. The Directive will provide guidance to 106 federal departments and agencies on what they must do to maximize the availability of online information and data, identify the nature of information to be published, as well as the timing, formats, and standards that departments will be required to adopt… The clear goal of this Directive is to make Open Government and open information the ‘default’ approach.

This last sentence is nice to read. Of course the devil will be in the details (and in the execution), but establishing a directive around open information could end up being as important (although admittedly not as powerful – an important point) as the establishment of Access to Information. Done right, such a directive could vastly expand the range of documents made available to the public, something that should be very doable as more and more government documentation moves into digital formats.

For those complaining about the lack of ATI reform in the document, this directive, and how it gets created, will be worth further exploration. There is an enormous opportunity here to reset how government discloses information – and the “default to open” line creates a public standard that we can try to hold the government to account on.

And of course the real test for all this will come in years 2–3, when it comes time to disclose documents around something sensitive to the government… like, say, the Northern Gateway Pipeline (or something akin to the Afghan prisoner issue). In theory this directive should make all government research and assessments open; when that moment arrives, we’ll have a real test of the robustness of any such new directive.

Open Government License:

To support the Directive and reduce the administrative burden of managing multiple licensing regimes across the Government of Canada, we will issue a new universal Open Government License in Year 1 of our Action Plan with the goal of removing restrictions on the reuse of published Government of Canada information (data, info, websites, publications) and aligning with international best practices… The purpose of the new Open Government License will be to promote the re-use of federal information as widely as possible...

Full Disclosure: I have been pushing (in an unpaid capacity) for the government to reform its license and helping out in its discussions with other jurisdictions around how it can incorporate the best practices and most permissive language possible.

This is another important foundational piece. To be clear, this is not about an “open data” license. This is about creating a license for all government information and media. I suspect this appeals to this government in part because it ends the craziness of having lawyers across government constantly re-inventing new licenses and creating a complex set of licenses to manage. Let me be clear about what I think this means: this is functionally about neutering crown copyright. It’s about creating a licensing regime that makes very clear what the user’s rights are (which crown copyright does not do) and that is as permissive as possible about re-use (which crown copyright, because of its lack of clarity, is not). Achieving such a license is a critical step to doing many of the more ambitious open government and gov 2.0 activities that many of us would like to see happen.

Take Away #3: The Good and Bad Around Access to Information

For many, I think the biggest disappointment may be that the government has chosen not to try to update the Access to Information Act. It is true that this is what the Access to Information Commissioners from across the country recommended it do in an open letter (recommendation #2 in their letter). Opening up the act likely carries a number of political risks – particularly for a government that has not always been forthcoming with documents (the Afghan detainee issue and the F-35 contract both come to mind) – however, I again propose that it may be possible to achieve some of the objectives around improved access through the Open Government Directive.

What I think shouldn’t be overlooked, however, is the government’s “experiment” around modernizing the administration of Access to Information:

To improve service quality and ease of access for citizens, and to reduce processing costs for institutions, we will begin modernizing and centralizing the platforms supporting the administration of Access to Information (ATI). In Year 1, we will pilot online request and payment services for a number of departments allowing Canadians for the first time to submit and pay for ATI requests online with the goal of having this capability available to all departments as soon as feasible. In Years 2 and 3, we will make completed ATI request summaries searchable online, and we will focus on the design and implementation of a standardized, modern, ATI solution to be used by all federal departments and

These are welcome improvements. As one colleague – James McKinney – noted, the fact that you have to pay with a check means that only people with Canadian bank accounts can make ATIP requests. This largely means just Canadian citizens. This is ridiculous. Moreover, the process is slow and painful (who uses checks! the Brits are phasing them out by 2018 – good on ’em!). The use of checks creates a real barrier – particularly, I think, for young people.

Also, being able to search summaries of previous requests is a no-brainer.

Take Away #4: This is a document of experiments

As I mentioned earlier, outside the foundational commitments, the document reads less like a grand experiment and more like a series of small experiments.

Here the Virtual Library is another interesting commitment – certainly, during the consultations, the number one complaint was that people have a hard time finding what they are looking for on government websites. Sadly, even if you know the name of the document you want, it is still often hard to find. A virtual library is meant to address this concern – obviously it will all be in the implementation – but it is a response to a genuine, expressed need.

Meanwhile, Advancing Recordkeeping in the Government of Canada and User-Centric Web Services feel like projects that were probably already in the pipeline before Open Government came on the scene. They certainly conform to the shared services and IT centralization announced by Treasury Board last year. They could be helpful but, honestly, these will all be about execution, since these types of projects can harmonize processes and save money, or they can become enormous boondoggles that everyone tries to work around because they don’t meet anyone’s requirements. If they do go the right way, I can definitely imagine how they might help the management of ATI requests (I have to imagine it would make it easier to track down a document).

I am deeply excited about the implementation of the International Aid Transparency Initiative (IATI). This is something I’ve campaigned for and urged the government to adopt, so it is great to see. I think these types of cross-jurisdictional standards have a huge role to play in the open government movement, so joining one, figuring out what about the implementation works and doesn’t work, and assessing its impact is important both for Open Government in general and for Canada, as it will let us learn lessons that, I hope, will become applicable in other areas as more of these standards emerge.

Conclusion:

I think it was always going to be a stretch to imagine Canada taking a leadership role in the Open Government space, at least at this point. Frankly, we have a lot of catching up to do just to draw even with places like the US and the UK, which have been working hard to keep experimenting with new ideas in the space. What is promising about the document is that it does present an opportunity for some foundational pieces to be put into play. The bad news is that real efforts to rethink government’s relationship with citizens, or even the role of the public servant within a digital government, have not been taken very far.

So… a C+?

 

Additional disclaimer: As many of my readers know, I sit on the Federal Government’s Open Government Advisory Panel. My role on this panel is to serve as a challenge function to the ideas that are presented to us. In this capacity I share with them the same information I share with you – I try to be candid about what I think works and doesn’t work around the ideas they put forward. Interestingly, I did not see even a draft version of the Action Plan until it was posted to the website and was (obviously, by inference) not involved in its creation. I just want to share all that to be, well, transparent about where I’m coming from – which remains that of a citizen who cares about these issues and wants to push governments to do more around gov 2.0 and open gov.

Also, sorry for the typos, but I’m sick and it is 1am. So I’m checking out. Will proofread again when I awake.

Using BHAGs to Change Organizations: A Management, Open Data & Government Mashup

I’m a big believer in the ancillary benefits of a single big goal. Set a goal that has one clear objective but that, as a result, forces a bunch of other things to change as well.

So one of my favourite Big Hairy Audacious Goals (BHAGs) for an organization is to go paperless. I like this goal for all sorts of reasons. Like a true BHAG, it is clear, compelling, and has an obvious “finish line.” And while hard, it is achievable.

It has the benefit of potentially making the organization more “green,” but what I really like about it is that it requires a bunch of other steps to take place that should position the organization to become more efficient, more effective and faster.

This is because paper is dumb technology. Among many, many other things, information on paper can’t be tracked, changes can’t be noted, pageviews can’t be recorded, data can’t be linked. It is hard to run a lean business when you’re using paper.

Getting rid of it often means you have to get a better handle on workflows and processes so they can be streamlined. It means rethinking the tools you use. It means getting rid of checks in favour of direct deposit, moving off letters and onto email, getting your documents, agendas, meeting minutes, policies and god knows what else out of MS Word and onto wikis, and shifting from printed product manuals to PDFs or, better still, YouTube videos. These changes in turn require a rethinking of how your employees work together and the skills they require.

So what starts off as a simple goal – getting rid of paper – pretty soon requires some deep organizational change. Of course, the rallying cry of “more efficient processes!” or “better understanding our workflow!” has pretty limited appeal and can be hard for everyone to wrap their head around. However, “getting rid of paper”? That is simple, clear and, frankly, something that everyone in the organization can probably contribute an idea towards achieving. And it will achieve many of the less sexy but more important goals.

Turns out, some governments may be thinking this way.

The State of Oklahoma has a nice website that talks about all of its “green” initiatives. Of course, it just so happens that many of these initiatives – reducing travel, getting rid of paper, etc. – also happen to reduce costs and improve service, and are easier to measure. I haven’t spoken with anyone at the State of Oklahoma to see if this is the real goal, but the website seems to acknowledge that it is:

OK.gov was created to improve access to government, reduce service-processing costs and enable state agencies to provide a higher quality of service to their constituents.

So for Oklahoma, going paperless becomes a way to get at some larger transformations. Nice BHAG. Of course, as with any good BHAG, you can track these changes and share them with your shareholders, stakeholders or… citizens.

And behold! The Oklahoma go green website invites different state agencies to report data on how their online services reduce paper consumption and/or carbon emissions – data that they in turn track and share with the public via the state’s Socrata data portal. This graph shows how much agencies have reduced their paper output over the past four years.

Notice how some departments have no data – if I were an Oklahoma taxpayer, I’m not too sure I’d be thrilled with them. But take a step back. This is a wonderful example of how transparency and open data can help drive a government initiative. Not only can that data make it easier for the public to understand what has happened (and so be more readily engaged), but it can help cultivate a culture of accountability as well as – and perhaps more importantly – promote a culture of metrics that I believe will be critical for the future of government.
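As an aside for the data-curious: Socrata portals expose each dataset through a simple JSON API, so pulling these paper-reduction figures into your own analysis is easy. Below is a minimal sketch in Python of what that might look like; the portal domain and dataset identifier are placeholders I’ve assumed for illustration, not Oklahoma’s actual endpoints.

    # Minimal sketch: fetch rows from a Socrata dataset over its JSON API.
    # PORTAL and DATASET_ID are assumed placeholders, not the state's real values.
    import requests

    PORTAL = "https://data.ok.gov"   # assumed Socrata domain for the state portal
    DATASET_ID = "abcd-1234"         # hypothetical dataset identifier

    def fetch_paper_reduction(limit=1000):
        """Return the dataset's rows as a list of dicts (one per agency/report)."""
        url = "{0}/resource/{1}.json".format(PORTAL, DATASET_ID)
        resp = requests.get(url, params={"$limit": limit})
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        rows = fetch_paper_reduction()
        print("Fetched {0} rows".format(len(rows)))

From there it is a short step to charting the year-over-year reductions yourself, or to flagging the agencies that report nothing at all.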

I often say to governments: “be strategic about how you use some of the data you make open.” Don’t just share a bunch of stuff; use what you share to achieve policy or organizational objectives. This is a great example. It’s also potentially a great example of organizational change in a large and complex environment. Interesting stuff.