Category Archives: public service sector renewal

Thesis Question Idea: Probing Power & Promotions in the Public Service

Here’s an idea for a PhD candidate out there with some interest in government or HR and some quant skills.

Imagine you could access a sensible slice of the HR history of a 300,000+ person organization, so you could see when people were promoted and where they moved within the organization.

I’m not sure if it would work, but the Government Electronic Directory Service (GEDS), essentially a “white pages” of Canada’s national government, could prove to be such a dataset. The service is actually designed to let people find one another within government. However, this also means it could potentially allow someone to track the progress of public servants’ careers, since you can see the different titles an employee holds each time they change jobs (and thus get a new title and phone number in GEDS). While not a perfect match, job titles generally map to pay scales and promotions, making them an imperfect, but likely still good, metric for career trajectory.

The screenshot below is for a random name I tried. I’ve attempted to preserve the privacy of the employee, which, in truth, isn’t really necessary, since anyone can access GEDS and so the data isn’t actually private to begin with.

[Screenshot: a sample GEDS employee listing]

There are a number of interesting questions an engaged researcher could ask with such data. For example, where are the glass ceilings: are there particular senior roles that seem harder for women to get promoted into? Who are the super mentors: is there a manager whose former charges always seem to go on to lofty careers? Are there power cliques: are there super public servants around whom others cluster and whose promotions or career moves are linked? Are there career paths that are more optimal, or suboptimal? Or, worse, is one’s path predetermined early on by where, and in what role, one enters the public service? And (frighteningly), could you create a predictive algorithm that allowed one to accurately forecast who might be promoted?
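To make that last question concrete, here is a minimal sketch of how the prediction problem might be framed, assuming periodic GEDS snapshots had been flattened into a table of employee, date, title and pay band. Every file name, column name and the promotion-labelling rule below is hypothetical; it is meant only to show the shape of the analysis, not a working pipeline.

```python
# Hypothetical sketch: predicting promotions from flattened GEDS snapshots.
# Assumes a CSV with one row per employee per snapshot date; all column
# names ("employee_id", "pay_band", ...) are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

snapshots = pd.read_csv("geds_snapshots.csv", parse_dates=["snapshot_date"])
snapshots = snapshots.sort_values(["employee_id", "snapshot_date"])

# Label a "promotion" as any move to a higher pay band between snapshots --
# a crude proxy; real pay-band mapping would need HR classification tables.
snapshots["next_band"] = snapshots.groupby("employee_id")["pay_band"].shift(-1)
snapshots["promoted"] = (snapshots["next_band"] > snapshots["pay_band"]).astype(int)
snapshots = snapshots.dropna(subset=["next_band"])

# Simple features: tenure, moves so far, department, gender (if inferable).
features = pd.get_dummies(
    snapshots[["years_in_role", "moves_to_date", "department", "gender"]],
    columns=["department", "gender"],
)

X_train, X_test, y_train, y_test = train_test_split(
    features, snapshots["promoted"], test_size=0.2, random_state=42
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# The coefficients hint at which factors correlate with promotion -- a starting
# point for the glass-ceiling and "super mentor" questions above.
```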

These types of questions could be enormously illuminating and shed important light on how the public service works. Indeed, this data set would be important not only to issues of equity and fairness within the public service, but also to training and education. In many ways, I wish the public service itself would look at this data to learn about itself.

Of course, given that there is, effectively, no pan-government HR group (that I’m aware of), it is unlikely that anyone is thinking about the GEDS data in a pan-government and longitudinal way (more likely there are HR groups organized by ministry that just focus on their ministry’s employees). All this, in my mind, makes doing this research in an academic institution all the more important.

I’m sure there are fears that would drive opposition to this. Privacy is an obvious one (this is why I’m saying an academic, or the government itself, should do this). Another might be lawsuits. Suppose such a study did discover institutional sexism? Or that some other group of people were disproportionately passed over for roles in a way that suggested unfair treatment? If this hypothetical study were able to quantify this discrimination in a new way, could it then be used to support lawsuits? I’ve no idea. Nor do I think I care. I’d rather have a government that was leveraging its valuable talent in the most equitable and effective way than one that stayed blind to understanding itself in order to avoid a possible lawsuit.

The big if, of course, is whether snapshots of the GEDS database have been saved over the years, either on purpose or inadvertently (backups?). It is also possible that some geek somewhere has been scraping GEDS on a nightly, weekly or monthly basis. The second big if is: would anyone be willing to hand the data over? I’d like to think the answer would be yes, particularly for an academic whose proposal had been successfully vetted by an Institutional Review Board.
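For what it’s worth, the scraping scenario would not need to be elaborate. The sketch below shows the kind of nightly snapshot script I have in mind; the export URL is a placeholder, since I don’t actually know what bulk access to GEDS looks like.

```python
# Hypothetical sketch of the kind of nightly snapshot a "geek somewhere"
# might have been taking. The export URL is a placeholder -- I don't know
# whether GEDS offers a bulk export, so treat this purely as illustration.
import datetime
import pathlib
import requests

EXPORT_URL = "https://geds.example.gc.ca/export/employees.csv"  # placeholder
ARCHIVE_DIR = pathlib.Path("geds_archive")
ARCHIVE_DIR.mkdir(exist_ok=True)

def take_snapshot() -> pathlib.Path:
    """Fetch the directory and save it under a dated filename."""
    response = requests.get(EXPORT_URL, timeout=60)
    response.raise_for_status()
    stamp = datetime.date.today().isoformat()
    path = ARCHIVE_DIR / f"geds-{stamp}.csv"
    path.write_bytes(response.content)
    return path

if __name__ == "__main__":
    print("Saved", take_snapshot())  # run from cron nightly, weekly or monthly
```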

If anyone ever decides to pursue this, I’d be happy to talk more about the ideas I have. Also, I suspect there are other levels of government where a similar approach could apply. Maybe this would work more easily on a smaller scale.

The Value of Open Data – Don’t Measure Growth, Measure Destruction

Alexander Howard – who, in my mind, is the best guy covering the Gov 2.0 space – pinged me the other night to ask “What’s the best evidence of open data leading to economic outcomes that you’ve seen?”

I’d like to hack the question because I suspect many people will be looking to measure “economic outcomes” in ways that are too narrow to be helpful. For example, if you are wondering what big companies are going to come out of the open data movement, or what big savings governments are going to find by sifting through the data, I think you are probably looking for the wrong indicators.

Why? Part of it is because the number of “big” examples is going to be small.

It’s not that I think there won’t be any. For example, several years ago I blogged about how FOIed (or, in Canada, ATIPed) data that should have been open helped find $3.2B in evaded tax revenues channeled through illegal charities. It’s just that this is probably not where the wins will initially take place.

This is in part because most data for which there was likely to be an obvious and large economic impact (e.g. spawning a big company or saving a government millions) will have already been analyzed or sold by governments before the open data movement came along. On the analysis side of the question: if you are very confident a data set could yield tens or hundreds of millions in savings… well… you were probably willing to pay SAS or some other analytics firm $30-100K to analyze it. And you were probably willing to pay SAP a couple of million (a year?) to set up the infrastructure to just gather the data.

Meanwhile, on the “private sector company” side of the equation, if that data had value, there were probably eager buyers. In Canada, for example, census data – useful for planning where to locate stores or how to engage in marketing and advertising effectively – was sold because the private sector made it clear it was willing to pay to gain access to it. (Sadly, this was bad news for academics, non-profits and everybody else, for whom it should have been free, as it was in the US.)

So my point is that a great deal of the (again) obvious low-hanging fruit was probably picked long before the open data movement showed up, because governments – or companies – were willing to invest modest amounts to create the benefits that picking that fruit would yield.

This is not to say I don’t think there are diamonds in the rough out there – data sets that will reveal significant savings – but I doubt they will be obvious or easy finds. Nor do I think that billion dollar companies are going to spring up around open datasets overnight since – by definition – open data has low barriers to entry for any company that adds value to it. One should remember it took Red Hat two decades to become a billion dollar company. Impressive, but it is still tiny compared to many of its rivals.

And that is my main point.

The real impact of open data will likely not be in the economic wealth it generates, but rather in its destructive power. I think the real impact of open data is going to be in the value it destroys and so in the capital it frees up to do other things. Much as Red Hat is a fraction of the size of Microsoft, open data is going to enable new players to disrupt established data players.

What do I mean by this?

Take SeeClickFix. Here is a company that, leveraging the Open311 standard, is able to provide many cities with a 311 solution that works pretty much out of the box. Twenty years ago, this was a $10 million+ problem for a major city to solve, and wasn’t even something a small city could consider adopting – it was just prohibitively expensive. Today, SeeClickFix takes what was a 7- or 8-digit problem and makes it a 5- or 6-digit problem. Indeed, I suspect SeeClickFix almost works better in a small to mid-sized government that doesn’t have complex work order software and so can just use SeeClickFix as a general solution. For this part of the market, it has crushed the cost out of implementing a solution.
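Part of what keeps those costs down is that Open311 (the GeoReport v2 spec) is just a simple web API. The sketch below shows roughly what the interaction looks like; the endpoint root and API key are placeholders, and the field names reflect my reading of the spec, so check a real city’s documentation before relying on them.

```python
# A rough sketch of what an Open311 (GeoReport v2) interaction looks like.
# The endpoint root and API key are placeholders; verify field names against
# the city's own Open311 documentation.
import requests

API_ROOT = "https://open311.example-city.gov/dev/v2"  # placeholder
API_KEY = "YOUR_KEY_HERE"                             # placeholder

# 1. Discover what kinds of issues the city accepts (potholes, graffiti, ...).
services = requests.get(f"{API_ROOT}/services.json", timeout=30).json()
print([s["service_name"] for s in services])

# 2. File a service request against one of those service codes.
report = requests.post(
    f"{API_ROOT}/requests.json",
    data={
        "api_key": API_KEY,
        "service_code": services[0]["service_code"],
        "lat": 42.3601,
        "long": -71.0589,
        "description": "Pothole on the corner, roughly 30cm across.",
    },
    timeout=30,
)
print(report.json())  # typically echoes back an id/token you can use to track status
```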

Another example, and the one I’m most excited about: look at CKAN and Socrata. Most people believe these are open data portal solutions. That is a mistake. These are data management companies that happen to have made sharing (or “openness”) a core design feature. You know who does data management? SAP. What Socrata and CKAN offer is a way to store, access, share and engage with data previously gathered and held by companies like SAP at a fraction of the cost. A SAP implementation is a 7 or 8 (or, god forbid, 9) digit problem. And many city IT managers complain that doing anything with data stored in SAP takes time and money. CKAN and Socrata may have only a fraction of the features, but they are dead simple to use, and make it dead simple to extract and share data. More importantly, they make these costly 7 and 8 digit problems potentially become cheap 5 or 6 digit problems.
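“Dead simple to extract” is not an exaggeration. CKAN, for instance, exposes its catalogue through a JSON Action API; the snippet below is a rough illustration, with the portal URL and resource id as placeholders you would swap for any CKAN-backed site.

```python
# Rough illustration of pulling data out of a CKAN portal via its Action API.
# The portal URL and resource id are placeholders.
import requests

PORTAL = "https://data.example-city.gov"  # placeholder CKAN instance

# Search the catalogue for datasets mentioning "transit".
search = requests.get(
    f"{PORTAL}/api/3/action/package_search",
    params={"q": "transit", "rows": 5},
    timeout=30,
).json()
for dataset in search["result"]["results"]:
    print(dataset["name"], "-", dataset.get("notes", "")[:60])

# Pull the first few rows of a tabular resource via the DataStore API
# (only works if that resource has been loaded into the datastore).
rows = requests.get(
    f"{PORTAL}/api/3/action/datastore_search",
    params={"resource_id": "REPLACE-WITH-RESOURCE-ID", "limit": 5},
    timeout=30,
).json()
print(rows["result"]["records"])
```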

On the analysis side, again, I do hope there will be big wins – but what I really think open data is going to do is lower the cost of creating lots of small wins – crazy numbers of tiny efficiencies. If SAP and SAS were about solving the 5 problems that could create tens of millions in operational savings for governments and companies, then Socrata, CKAN and the open data movement are about finding the 1,000 problems for which you can save between $20,000 and $1M. For example, when you look at the work Michael Flowers is doing in NYC, his analytics team is going to transform New York City’s budget. They aren’t finding $30 million in operational savings, but they are generating a steady stream of very solid 6 to low 7 digit savings, project after project. (This is to say nothing of the lives they help save with their work on ambulances and fire safety inspections.) Cumulatively, over time, these savings are going to add up to a lot. But there probably isn’t going to be a big bang. Rather, we are getting into the long tail of savings. Lots and lots of small stuff… that is going to add up to a very big number, while no one is looking.

So when I look at open data, yes, I think there is economic value. Lots and lots of economic value. Hell, tons of it.

But it isn’t necessarily going to happen in a big bang, and it may take place in the creative destruction it fosters and so in the capital it frees up to spend on other things. That may make it harder to measure (I’m hoping some economist much smarter than me is going to tell me I’m wrong about that) but that’s what I think the change will look like.

Don’t look for the big bang, and don’t measure the growth in spending or new jobs. Rather, let’s try to measure the destruction and the cumulative impact of a thousand tiny wins. Because that is where I think we’ll see it most.

Postscript: Apologies again for any typos – it’s late and I’m just desperate to get this out while it is burning in my brain. And thank you Alex for forcing me to put into words something I’ve been thinking about saying for months.

 

Canada Post and the War on Open Data, Innovation & Common Sense (continued, sadly)

Almost exactly a year ago I wrote a blog post on Canada Post’s War on the 21st Century, Innovation & Productivity. In it I highlighted how Canada Post launched a lawsuit against a company – Geocoder.ca – that recreates the postal code database via crowdsourcing. Canada Post’s case was never strong, but then, that was not their goal. For a large, taxpayer-backed company, the point wasn’t to be right; it was to use the law as a way to financially bankrupt a small innovator.

This case matters – especially to small start-ups and non-profits. Open North – a non-profit on whose board of directors I sit – recently explored what it would cost to use Canada Post’s postal code database on represent.opennorth.ca, a website that helps identify the elected officials who serve a given address. The cost? $9,000 a year, nowhere near what it could afford.

But that’s not all. There are several non-profits that use Represent to help inform donors and other users of their websites about which elected officials represent the geographies where they advocate for change. The licensing cost if you include all of these non-profits and academic groups? $50,000 a year.

This is not a trivial sum, and it is very significant for non-profits and academics. It is also a window into why Canada Post is trying to sue Geocoder.ca – which offers a version of its database for… free. That a private company can offer a similar service at a fraction of the cost (or for nothing) is, of course, a threat.

Sadly, I wish I could report good news on the one year anniversary of the case. Indeed, I should be able to!

This is because what should have been the most important development was the Federal Court of Appeal making it even clearer that data cannot be copyrighted. This probably made it clear to Canada Post’s lawyers that they were not going to win, and made it even more obvious to the public that the lawsuit against Geocoder.ca – which has not been dropped – was completely frivolous.

Sadly, Canada Post’s reaction to this erosion of its position was not to back off, but to double down. Recognizing that it likely won’t win a copyright case over postal code data, it has decided:

a) to assert that they hold trademark on the words ‘postal code’

b) to name Ervin Ruci – the operator of Geocoder.ca – as a defendant in the case, as opposed to just his company.

The second part shows just how vindictive Canada Post’s lawyers are, and reveals the true nature of this lawsuit. This is not about protecting trademark. This is about sending a message about legal costs and fees. This is a predatory lawsuit, funded by you, the tax payer.

But part a) is also sad. Having seen the writing on the wall around its capacity to win the case over the data, Canada Post has suddenly decided – 88 years after it first started using “Postal Zones” and 43 years after it started using “Postal Codes” – to assert a trademark on the term. (You can read more on the history of postal codes in Canada here.)

Moreover, the legal implications if Canada Post actually won the case would be fascinating. It is unclear whether anyone would be allowed to solicit anybody’s postal code – at least if they mentioned the term “postal code” – on any form or website without Canada Post’s express permission. It leads one to ask: does the federal government have Canada Post’s express permission to solicit postal code information on tax forms? On passport renewal forms? On any form it has ever published? Because if not, then, if I understand Canada Post’s claim correctly, it is in violation of Canada Post’s trademark.

Given the current government’s goal to increase the use of government data and spur innovation, will it finally intervene in what is an absurd case that Canada Post cannot win – one that uses taxpayer dollars to snuff out innovators, increases the cost for academics doing geospatially oriented social research, and creates a great deal of uncertainty about how anyone online, be they non-profits, companies, academics or governments, can use postal codes?

I know of no other country in the world that has to deal with this kind of behaviour from its postal service. The United Kingdom compelled its postal service to make postal code information public years ago. In Canada, we handle the same situation by letting a taxpayer-subsidized monopoly hire expensive lawyers to launch frivolous lawsuits against innovators who are not breaking the law.

That is pretty telling.

You can read more about this, and see the legal documents, on Ervin Ruci’s blog; the story has also been covered well at canada.com.

CivicOpen: New Name, Old Idea

The other day Zac Townsend published a piece, “Introducing the idea of an open-source suite for municipal governments,” laying out the case for why cities should collaboratively create open source software that can be shared among them.

I think it is a great idea, and I’m thrilled to hear that more people are excited about exploring this model. I also think any such discussion would be helped by some broader context – more importantly because any series of posts on this subject that fails to look at previous efforts is, well, doomed to repeat the failures of the past.

Context

I wrote several blog posts making the case for it in 2009. (Rather than CivicOpen, I called it Muniforge.) These posts, I’m told, helped influence the creation of CivicCommons, a failed effort to do something similar (on whose advisory board I sat).

Back when I published my posts I thought I was introducing the idea of shared software development in the civic space. It was only shortly after that I learned of Kuali – a similar effort occurring in the university sector – and I was so enamoured with it that I wrote a piece about how we should clone its governance structure and create a similar organization for the civic space (something that the Kuali leadership told me they would be happy to facilitate). I also tried to expose anyone I could to Kuali. I had a Kuali representative speak at the first Code for America Summit and have talked about it from time to time while helping teach the Code for America Fellows at the beginning of each CfA year. I also introduced them to anyone I would meet in municipal IT and helped spur conference calls between them and people in San Francisco, Boston and other cities so they could understand the model better.

But even in 2009 I was not introducing the idea of shared municipal open source projects. While discovering that Kuali existed was great (and perhaps I can be forgiven for my oversight, as they are in the educational and not the municipal space), I completely failed to locate the Association des développeurs et utilisateurs de logiciels libres pour les administrations et les collectivités territoriales (ADULLACT), a French project created seven(!) years prior to my piece that does exactly what I described in my Muniforge posts and what Zac describes in his post. (The English translation of ADULLACT’s name is the Association of Developers and Users of Free Software for Governments and Local Authorities; there is no English Wikipedia page that I could see, but a good French version is here.) I know little about why ADULLACT has survived, but its continued existence suggests that it is doing something right – ideas and processes that should inform any post about what a good structure for CivicOpen or a similar initiative might look like.

Of course, in addition to these, there are several other groups that have tried – some with little, others with greater, success – to talk about or create open source projects that span cities. Indeed, last year Berkeley’s Smart Cities program proclaimed that the open source city had arrived. In that piece Berkeley references OpenPlans, which has for years tried to do something similar (and was indeed one of the instigators behind CivicCommons – a failed effort to get cities to share code). Here you can read Philip Ashlock, who was then at OpenPlans, talk about the desirability of cities creating and sharing open source code. In addition to CivicCommons, there is CityMart, a private sector actor that seeks to connect cities with solutions, including those that are open source; in essence it could be a catalyst for building municipal open source communities, but, as far as I can tell, it isn’t. (Update: there was also the US federal government’s Open Code Initiative, which now seems defunct, and Mark Headd of Code for America tells me to Google “Government Open Code Collaborative,” though I can find no information on it.)

To understand why some models have failed and why some have succeeded, Andrew Oram’s article in the Journal of Information Technology and Politics is another good starting point. There was also some research into these ideas shared at the Major Cities of Europe IT Users Group back in 2009 by James Fogarty and Willoughby, which can be read here; it includes several mini case studies from several cities and a crude, but good, cost-benefit analysis.

And there are other efforts that are more random, like The Intelligent/Smart Cities Open Source Community, which is for “anyone interested on intelligent / smart cities development and looks for applications and solutions which have been successfully implemented in other cities, mainly open source applications.”

Don’t be ahistorical

I share all this for several reasons.

  1. I want Zac to succeed in figuring out a model that works.
  2. I’d love to help.
  3. To note that there has been a lot of thought put into this idea already. I myself thought I was breaking ground when I wrote my Muniforge piece back in 2009. I was not. There were a ton of lessons I could have incorporated into that piece that I did not, and previous successes and failures I could have linked to, but didn’t (at least until discovering Kuali).

I get nervous when I see posts – like that on the Ash Centre’s blog – that don’t cite previous efforts and that feel, to put it bluntly, ahistorical. I think laying out a model is a great idea. But we have a lot of data and stories about what works and doesn’t work. To not draw on these examples (or even mention or link to them) seems to be a recipe for repeating the mistakes of the past. There are reasons CivicCommons failed, and why Kuali and ADULLACT have succeeded. I’ve interviewed a number of people at the former (and sadly no one at the latter) and this feels like table stakes before venturing down this path. It also feels like a good way of modelling what you eventually want a municipal/civic open source community to do – build on and learn from the social code, as well as the business and organizational models, of those that have failed and succeeded before you. That is the core of what the best and most successful open source (and, frankly, many successful non-open source) projects do.

What I’ve learned is that the biggest challenges are not technical, but cultural and institutional. Many cities have policies, explicit or implicit, that prevent them from using open source software, to say nothing of co-creating open source software. Indeed, after helping draft the Open Motion adopted by the City of Vancouver, I helped the city revise its procurement policies to address these obstacles. Drawing on the example mentioned in Zac’s post, you will struggle to find many small and medium-sized municipalities that use Linux, or even that let employees install Firefox on their computers. Worse, many municipal IT staff have been trained to believe that open source is unstable, unreliable and involves questionable people. It is a slow process to reverse these opinions.

Another challenge that needs to be addressed is that many city IT departments have been hollowed out and don’t have the capacity to write much code. For many cities, IT is often more about operations and selecting whom to procure from, not writing software. So a CivicOpen/Muniforge/CivicCommons/ADULLACT approach will represent a departure into an arena where many cities have little capacity and experience. Many will be reluctant to build this capacity.

There are many more concerns of course and, despite them, I continue to think the idea is worth pursuing.

I also fear this post will be read as a critique. That is not my intent. Zac is an ally and I want to help. Above all, I share the above because the good news is this isn’t an introduction. There is a rich history of ideas and efforts from which to learn and build upon. We don’t need to do this on our own or invent it anew. There is a community of us who are thinking about these things and have lessons to share, so let’s share and make this work!

A note to the Ash Centre

As an aside, I’d have loved to link to this at the bottom of Zac’s post on the Harvard Kennedy School Ash Centre website, but the webmaster has opted not to allow comments. Even more odd is that the site does not show any dates on its posts. Again, fearing I sound critical when I just want to be constructive: I believe it is important for anyone, but academic institutions especially, to list dates on articles so that we can better understand the timing and context in which they were written. In addition (and I understand this sentiment may not be shared), a centre focused on Democratic Governance and Innovation should allow for some form of feedback or interaction – at least some way people can respond to and/or build on the ideas it publishes.

The South -> North Innovation Path in Government: An Example?

I’ve always felt that a lot of innovation happens where resources are scarcest. Scarcity forces us to think differently, to be efficient and to question traditional (more expensive) models.

This is why I’m always interested to see how local governments in developing economies are handling various problems. There is always an (enormous) risk that these governments will be lured into doing things the way they have been done in developed economies (hello SAP!). Sometimes this makes sense, but often newer, disruptive and cheaper ways of accomplishing the goal have emerged in the interim.

What I think is really interesting is when a trend started in the global south migrates to the global north. I think I may have just spotted one example.

The other week the City of Boston announced its City Hall to Go trucks – mobile vans that, like food trucks, will drive around the city and be at various civic events available to deliver citizen services on the go! See the video and “menu” below.

 

[Image: the City Hall to Go “menu”]

This is really cool. In Vancouver we have a huge number of highly successful food carts. It is not hard to imagine an experiment like this here as well – particularly in underserved neighborhoods or at the numerous public festivals and public food markets that take place across the city.

But, as the title of this post suggests, Boston is not the first city to do this. This United Nations report points out how the state government of Bahia started to do something similar in the mid 90s in the state capital of Salvador.

In 1994 the Government of Bahia hosted the first of several annual technology fairs in the state capital, Salvador. A few government services were offered there, using new ICT systems (e.g., issuing identification cards). The service was far more efficient and well-received by the public. The idea was then raised: Why not deliver services this way on a regular basis?

…A Mobile Documents SAC also was developed to reach the most remote and deprived communities in Bahia. This Mobile SAC is a large, 18-wheel truck equipped with air-conditioning, TV set, toilets, and a covered waiting area. Inside the truck, four basic citizenship services are provided: issuance of birth certificates, identification card, labor identification card, and criminal record verification.

I feel very much like I’ve read about smaller trucks delivering services in other cities in Brazil as well – I believe one community in Brazil had mobile carts with computers on them that toured neighborhoods so citizens could more effectively participate in online petitions and crowdsourcing projects being run by the local government.

I’m not sure if the success of these projects in developing economy cities influenced the thinking in Boston – if yes, that is interesting. If not, it is still interesting. It suggests that the thinking and logic behind this type of innovation is occurring in several cities simultaneously, even when these cities have markedly different levels of GDP per capita and internet access (among many other things). My hope is that those in government will be more and more willing to see how their counterparts elsewhere in the world – no matter where – are doing things. Money is tight for governments everywhere, so good ideas may be more likely to come from those who feel the burden of costs most acutely.

Proactive Disclosure – An Example of Doing it Wrong from Shared Services Canada

Just got flagged about this precious example of doing proactive disclosure wrong.

So here is a Shared Services Canada website dedicated to the Roundtable on Information Technology Infrastructure. Obviously this is a topic of real interest to me – I write a fair bit about delivering (or failing to deliver) government services online effectively. I think it is great that Shared Services Canada is reaching out to the private sector to try to learn lessons. Sadly, some of the links on the site didn’t work for me, specifically the important-sounding Summary of Discussions: Shared Services Canada Information and Communications Technology Sector Engagement Process.

But that is not the best part. Take a look at the website below. In one glance the entirety of the challenge of rethinking communications and government transparency  is nicely summed up.
[Screenshot: the roundtable page, with the Minister’s presentation available only by request]

Apparently, if you want a copy of the presentation the Minister made to the committee you have to request it.

That’s odd since, really, the cost of making it downloadable is essentially zero, while the cost of emailing someone and having them get it back to you is, well, a colossal waste of my time and that public servant’s time. (Indeed, to demonstrate this to the government, I hope every one of my readers requests this document.)

There are, in my mind, two explanations for this. The first, more ominous one, is that someone wants to create barriers to getting this document. Maybe that is the case – who knows.

The second, less ominous but in some ways more depressing, answer is that this is simply standard protocol or, worse, that no one involved in this site has the know-how or access rights to upload the document.

Note added 6 minutes after posting: There is also a third reason, more innocuous than reasons one and two: the government cannot post the document unless it is in both official languages, and since this presentation is (likely) only available in English, it cannot be posted. This actually feels the most likely, and I will be teeing up a whole new post shortly on bilingualism and transparency. I’m told frustratingly often that a document or data set can’t be proactively shared because of language issues. I’ve spoken to the Language Commissioner about this and believe more dialogue is required. Bilingualism cannot be an excuse for a poor experience or, worse, opaque government.

In any case, it is a sad outcome. Either our government is maliciously trying to make it difficult to get information to Canadians (true of most governments), or it doesn’t know how to make it easy.

Of course, you may be saying… but David – who cares if there is an added step to getting this document that is slightly inconvenient? Well, let me remind you: THIS IS SHARED SERVICES CANADA AND IT IS ABOUT A COMMITTEE FOCUSED ON DELIVERING ONLINE SERVICES (INTERNALLY AND EXTERNALLY) MORE EFFECTIVELY. If there was one place where you wanted to show you were responsive, proactive and reducing the transaction costs to citizens – the kind of approach you were going to use to make all government services more efficient and effective – this would be it.

The icing on the cake? There is that beautiful “transparency” button right below the text that talks about how the government is interested in proactive disclosure (see screenshot below). I love the text here – this is exactly what I want my government to be doing.

And yet this experience, while I’m sure it conforms to the letter of the policy, feels like it violates pretty much everything about the spirit of proactive disclosure. This is, after all, a document that has already been made public… and now we are requiring citizens to request it.

We have a lot of work to do.

The UK's Digital Government Strategy – Worth a Peek

I’ve got a piece up on TechPresident about the UK Government’s Digital Strategy which was released today.

The strategy (and my piece!) are worth checking out. They are saying a lot of the right things – useful stuff for anyone in an industry or sector that has been conservative vis-à-vis online services (I’m looking at you, governments and banking).

As I note in the piece… there is reason we should expect better:

The second is that the report is relatively frank, as far as government reports go. The website that introduces the three reports is emblazoned with an enormous title: “Digital services so good that people prefer to use them.” It is a refreshing title that amounts to a confession I’d like to see from more governments: “sorry, we’ve been doing it wrong.” And the report isn’t shy about backing that statement up with facts. It notes that while the proportion of Internet users who shop online grew from 74 percent in 2005 to 86 percent in 2011, only 54 percent of UK adults have used a government service online. Many of those have only used one.

Of course the real test will come with execution. The BC Government, the White House and others have written good reports on digital government, but it is rolling it out that is the tricky part. The UK Government has pretty good cred as far as I’m concerned, but I’ll be watching.

You can read the piece here – hope you enjoy!

Playing with Budget Cutbacks: On a Government 2.0 Response, Wikileaks & Analog Denial of Service Attacks

Reflecting on yesterday’s case study in broken government, I had a couple of additional thoughts that I thought would be fun to explore and that simply did not make sense to include in the original post.

A Government 2.0 Response

Yesterday’s piece was all about how Treasury Board’s new rules are likely to increase the velocity of paperwork at a cost far greater than the savings from eliminating excess travel.

One commentator noted a more Gov 2.0-type solution that I’d been mulling over myself. Why not simply treat the government travel problem as a big data problem? Surely there are tools that would allow you to look at government travel in aggregate, maybe mash it up against GEDS data (job title and department information), and quickly identify outliers and other high-risk travel worthy of closer inspection. I’m not talking about people who travel a lot (that wouldn’t be helpful) but rather people who engage in unusual travel that is hard to reconcile with their role.
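To be concrete, the sort of analysis I have in mind is not sophisticated. Here is a minimal sketch, assuming travel claims could be joined to GEDS-style role data; every file and column name is hypothetical.

```python
# A minimal sketch of the "travel as a big data problem" idea: flag people
# whose spend is far outside the norm for their own job title. File and
# column names are hypothetical.
import pandas as pd

travel = pd.read_csv("travel_claims.csv")  # employee_id, trip_cost, ...
roles = pd.read_csv("geds_roles.csv")      # employee_id, title, department

spend = (
    travel.groupby("employee_id")["trip_cost"]
    .sum()
    .rename("annual_spend")
    .reset_index()
    .merge(roles, on="employee_id")
)

# Compare each person against peers with the same title -- heavy travel is
# expected in some roles, unusual in others.
stats = spend.groupby("title")["annual_spend"].agg(["mean", "std"]).reset_index()
spend = spend.merge(stats, on="title")
spend["z_score"] = (spend["annual_spend"] - spend["mean"]) / spend["std"]

outliers = spend[spend["z_score"] > 3].sort_values("z_score", ascending=False)
print(outliers[["employee_id", "title", "department", "annual_spend", "z_score"]])
```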

While I’m confident that many public servants would find such an approach discomforting, it would be entirely within the purview of their employer to engage in such an analysis. It would also be far more effective and targeted – and, I suspect, over time a better deterrent – than the kind of blanket policy I wrote about yesterday, which is just as (if not more) likely to eliminate necessary travel as unnecessary travel. Of course, if you just want to eliminate travel because you think any face-to-face, group or in-person learning is simply not worth the expense, then the blanket approach is probably more effective.

Wikileaks and Treasury Board

Of course, re-reading yesterday’s post, I had a faint twinge of familiarity. I suddenly realized that my analysis of the impact of the travel restriction policy on government has parallels to the goal that drove Assange to create Wikileaks. If you’ve not read the Zunguzungu blog post exploring Assange’s writings about the “theory of change” behind Wikileaks, I cannot encourage you enough to go and read it. At its core lies a simple assessment: that Wikileaks is trying to shut down the “conspiracy of the state” by making it harder for information to be transmitted effectively within the state. Of course, restricting travel is not nearly the same as making it impossible for public servants to communicate, but it does compromise the ability to coordinate and plan effectively – as such, the essay is illuminating in thinking about how these types of policies impact not the hierarchy of an organization, but the hidden and open networks (the secret government) that help make the organization function.

Read the extract below for a taste:

This is however, not where Assange’s reasoning leads him. He decides, instead, that the most effective way to attack this kind of organization would be to make “leaks” a fundamental part of the conspiracy’s  information environment. Which is why the point is not that particular leaks are specifically effective. Wikileaks does not leak something like the “Collateral Murder” video as a way of putting an end to that particular military tactic; that would be to target a specific leg of the hydra even as it grows two more. Instead, the idea is that increasing the porousness of the conspiracy’s information system will impede its functioning, that the conspiracy will turn against itself in self-defense, clamping down on its own information flows in ways that will then impede its own cognitive function. You destroy the conspiracy, in other words, by making it so paranoid of itself that it can no longer conspire:

This is obviously a totally different context – but it is interesting to see that one way to alter an organization is to change the way information flows within it. This was not – I suspect – the primary goal of the Treasury Board directive (it was a cost-driven measure), but the above paragraph is an example of its unintended consequences. Less communication means the ability of the organization to function could be compromised.

Bureaucratic Directives as an Analog Denial of Service Attack

There is, of course, another, more radical way of thinking about the Treasury Board directive. One of the key points I tried to make yesterday was that the directive was likely to increase the velocity of bureaucratic paperwork and tie up a larger amount of junior and – more precious still – senior resource time, all while actually allowing less work to be done.

Now if a government department were a computer, and I were able to make it process more requests that slowed its CPU (decision-making capacity) and thus made other functions harder to perform – and in extreme cases actually prevented any work from happening – that would be something pretty similar to a denial of service attack.

Again, I’m not claiming that this was the intent, but it is a fun and interesting lens by which to look at the problem. More to explore here, I’m sure.

Hopefully this has bent a few minds and helped people see the world differently.

Open Postal Codes: A Public Response to Canada Post on how they undermine the public good

Earlier this week the Ottawa Citizen ran a story, in which I’m quoted, about a fight between Treasury Board and Canada Post officials over making postal code data open. Treasury Board officials would love to add it to data.gc.ca, while Canada Post officials are, to put it mildly, deeply opposed.

This is, of course, unsurprising, since Canada Post recently launched a frivolous lawsuit against a software developer who is – quite legally – recreating the postal code data set. For those new to this issue, I blogged about the case, why postal codes matter, and the weakness (and incompetence) of Canada Post’s legal position here.

But this new Ottawa Citizen story had me rolling my eyes anew – especially after reading the quotes and text from Canada Post’s spokesperson. This is in no way an attack on the spokesperson, who I’m sure is a nice person. It is an attack on her employer, whose position, sadly, is in opposition to the public interest not just because of the outcome it generates but because of the way it treats citizens. Let me break down Canada Post’s public statement line by line, in order to spell out how it undermines the public interest, public debate and accountability.

Keeping the information up-to-date is one of the main reasons why Canada Post needs to charge for it, said Anick Losier, a spokeswoman for the crown corporation, in an interview earlier this year. There are more than 250,000 new addresses and more than a million address changes every year and they need the revenue generated from selling the data to help keep the information up-to-date.

So what is interesting about this is that – as far as I understand – it is not Canada Post that actually generates most of this data. It is local governments that are responsible for creating address data and, ironically, they are required to share it for free with Canada Post. So Canada Post’s data set is itself built on data that it receives for free. It would be interesting if cities suddenly claimed that they needed to engage in “cost-recovery” as well and started charging Canada Post. At some point you recognize that a public asset is a public asset and that it is best leveraged when widely adopted – something Canada Post’s “cost-recovery” prevents. Indeed, what Canada Post is essentially saying is that it is okay for it to leverage the work of other governments for free, but it isn’t okay for the public to leverage its work for free. Ah, the irony.

“We need to ensure accuracy of the data just because if the data’s inaccurate it comes into the system and it adds more costs,” she said.

“We all want to make sure these addresses are maintained.”

So, of course, do I. That said, the statement makes it sound like there is a gap between Canada Post – which is interested in the accuracy of the data – and everyone else – who isn’t. I can tell you, as someone who has engaged with non-profits and companies that make use of public data, no one is more concerned about the accuracy of data than those who reuse it. That’s because when you make use of public data and share the results with the public or customers, they blame you, not the government source from which you got the data, for any problems or mistakes. So, invariably, one thing that happens when you make data open is that you actually have more stakeholders with strong interests in ensuring the data is accurate.

But there is also something subtly misleading about Canada Post’s statement. At the moment, the only reason there is inaccurate data out there is that people are trying to find cheaper ways of creating the postal code data set and so are willing to tolerate less accurate data in order not to have to pay Canada Post. If (and that is a big if) Canada Post’s main concern were accuracy, then making the data open would be the best protection, as it would eliminate less accurate versions of postal code data. Indeed, this suggests a failure to understand the economics. Canada Post states that other parts of its business become more expensive when postal code data is inaccurate. That would suggest that providing free data might help reduce those costs – incenting people to create inaccurate postal code data by charging for the real thing may be hurting Canada Post more than anyone else. But we can’t assess that, for reasons I outline below. Ultimately, I suspect Canada Post’s main interest is not accuracy – it is cost recovery – but that doesn’t sound nearly as good as talking about accuracy or quality, so they try to shoehorn those ideas into their argument.

She said the data are sold on a “cost-recovery” basis but declined to make available the amount of revenue it brings in or the amount of money it costs the Crown corporation to maintain the data.

This is my favourite part. Basically, a crown corporation, whose assets belong to the public, won’t reveal the cost of a process over which it has a monopoly. Let’s be really clear. This is not like other parts of its business where there are competitive risks in releasing information – Canada Post is a monopoly provider. Instead, we are being patronized and essentially asked to buzz off. There is no accountability, and there is no reason why they could not give us these numbers. Indeed, the total disdain for the public is so appalling it reminds me of why I opted out of junk mail and moved my bills to email and auto-pay ages ago.

This matters because the “cost-recovery” issue goes to the heart of the debate. As I noted above, Canada Post gets the underlying address data for free. That said, there is no doubt that it then adds some value to the data by adding postal codes. The question is: is that value best recouped through cost-recovery at this point in the value chain, or at later stages through additional economic activity (and thus greater tax revenue)? This debate would be easier to have if we knew the scope of the costs. Does creating postal code data cost Canada Post $100,000 a year? A million? Ten million? We don’t know and they won’t tell us. There are real economic benefits to be had in a digital economy where postal code data is open, but Canada Post prevents us from having a meaningful debate since we can’t find out the trade-offs.

In addition, it also means that we can’t assess whether there are disruptive ways in which postal code data could be generated vastly more efficiently. Canada Post has no incentive (quite the opposite, actually) to generate this data more efficiently and therefore make the “cost-recovery” much, much lower. It may be that creating postal code data really is a $100,000-a-year problem, with the right person and software working on it.

So, in the end, a government-owned Crown corporation refuses not only to do something that might help spur Canada’s digital economy – make postal code data open – it refuses even to engage in a legitimate public policy debate. For an organization that is fighting to find its way in the 21st century, it is a pretty ominous sign.

* As an aside, in the Citizen article it says that I’m an open government activist who is working with the federal government on the website’s development. The first part – on activism – is true. The latter half, that I work on the open government website’s development, is not. The confusion may arise from the fact that I sit on the Treasury Board’s Open Government Advisory Panel, for which I’m not paid, but am asked for feedback, criticism and suggestions – like making postal code data open – about the government’s open government and open data initiatives.

The US Government's Digital Strategy: The New Benchmark and Some Lessons

Last week the White House launched its new roadmap for digital government. This included the publication of Digital Government: Building a 21st Century Platform to Better Serve the American People (PDF version), the issuing of a Presidential directive and the announcement of White House Innovation Fellows.

In other words, it was a big week for those interested in digital and open government. Having had some time to digest these docs and reflect upon them, below are some thoughts on these announcement and lessons I hope governments and other stakeholders take from it.

First off, the core document – Digital Government: Building a 21st Century Platform to Better Serve the American People – is a must read if you are a public servant thinking about technology or even about program delivery in general. In other words, if your email has a .gov in it or ends in something like .gc.ca you should probably read it. Indeed, I’d put this document right up there with another classic must read, The Power of Information Taskforce Report commissioned by the Cabinet Office in the UK (which if you have not read, you should).

Perhaps most exciting to me is that this is the first time I’ve seen a government document clearly declare something I’ve long advised governments I’ve worked with: data should be a layer in your IT architecture. The problem is nicely summarized on page 9:

Traditionally, the government has architected systems (e.g. databases or applications) for specific uses at specific points in time. The tight coupling of presentation and information has made it difficult to extract the underlying information and adapt to changing internal and external needs.

Oy. Isn’t that the case. Most government data is captured in an application and designed for a single use. For example, say you run the license renewal system. You update your database every time someone wants to renew their license. That makes sense, because that is what the system was designed to do. But maybe you’d like to track, in real time, how frequently the database changes, and by whom. Whoops. The system wasn’t designed for that, because that wasn’t needed in the original application. Of course, being able to present the data in that second way might be a great way to assess how busy different branches are, so you could warn prospective customers about wait times. Now imagine this lost opportunity… and multiply it by a million. Welcome to government IT.
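To make the decoupling concrete, here is a toy sketch of the difference: the renewal is still recorded in its single-purpose table, but every change is also written to an append-only event log that later, unanticipated questions can be asked of. The tables, columns and the five-year renewal rule are all invented for illustration.

```python
# Toy illustration of decoupling information from a single-purpose application:
# record the renewal AND an event, so new questions ("how busy is each branch
# right now, and who is making changes?") can be answered later.
import sqlite3

db = sqlite3.connect("licenses.db")
db.execute("CREATE TABLE IF NOT EXISTS licenses (driver_id TEXT PRIMARY KEY, expires TEXT)")
db.execute("""CREATE TABLE IF NOT EXISTS events (
    occurred_at TEXT, branch TEXT, clerk TEXT, action TEXT, driver_id TEXT)""")

def renew_license(driver_id: str, branch: str, clerk: str) -> None:
    """Perform the renewal *and* record the event, in one transaction."""
    with db:
        db.execute(
            "INSERT OR REPLACE INTO licenses VALUES (?, date('now', '+5 years'))",
            (driver_id,),
        )
        db.execute(
            "INSERT INTO events VALUES (datetime('now'), ?, ?, 'renewal', ?)",
            (branch, clerk, driver_id),
        )

renew_license("D123456", branch="Main St", clerk="clerk-42")

# The question the original single-purpose system couldn't answer:
busy = db.execute(
    """SELECT branch, COUNT(*) FROM events
       WHERE occurred_at > datetime('now', '-1 hour')
       GROUP BY branch"""
).fetchall()
print(busy)
```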

Decoupling data from application is pretty much the first thing in the report. Here’s my favourite chunk from the report (italics mine, to note the extra favourite part).

The Federal Government must fundamentally shift how it thinks about digital information. Rather than thinking primarily about the final presentation—publishing web pages, mobile applications or brochures—an information-centric approach focuses on ensuring our data and content are accurate, available, and secure. We need to treat all content as data—turning any unstructured content into structured data—then ensure all structured data are associated with valid metadata. Providing this information through web APIs helps us architect for interoperability and openness, and makes data assets freely available for use within agencies, between agencies, in the private sector, or by citizens. This approach also supports device-agnostic security and privacy controls, as attributes can be applied directly to the data and monitored through metadata, enabling agencies to focus on securing the data and not the device.

To help, the White House provides a visual guide for this roadmap. I’ve pasted it below. However, I’ve taken the liberty of highlighting, on the right, how most governments try to tackle open data – just so people can see how different the White House’s approach is, and why this is not just an issue of throwing up some new data but a total rethink of how government architects itself online.

There are, of course, a bunch of things that flow out of the White House’s approach that are not spelled out in the document. The first and most obvious is that once you make data an information layer, you have to manage it directly. This means that data starts to be seen and treated as an asset – which means understanding who its custodian is and establishing a governance structure around it. This is something that, previously, really only libraries and statistical bureaus have understood (and sometimes not even then!).

This is the dirty secret about open data: to do it effectively you actually have to start treating data as an asset. For the White House, the benefit of taking that view of data is that it saves money. Creating a separate information layer means you don’t have to duplicate it for all the different platforms you have. In addition, it gives you more flexibility in how you present it, meaning the cost of showing information on different devices (say computers vs. mobile phones) should also drop. Cost savings and increased flexibility are the real drivers. Open data becomes an additional benefit. This is something I dive into in deeper detail in a blog post from July 2011: It’s the icing, not the cake: key lesson on open data for governments.

Of course, having a cool model is nice and all but, as with the previous directive on open government, this document has hard requirements designed to force departments to begin shifting their IT architecture quickly. So check out this interesting tidbit from the doc:

While the open data and web API policy will apply to all new systems and underlying data and content developed going forward, OMB will ask agencies to bring existing high-value systems and information into compliance over a period of time—a “look forward, look back” approach. To jump-start the transition, agencies will be required to:

  • Identify at least two major customer-facing systems that contain high-value data and content;
  • Expose this information through web APIs to the appropriate audiences;
  • Apply metadata tags in compliance with the new federal guidelines; and
  • Publish a plan to transition additional systems as practical

Note the language here. This is again not a “let’s throw some data up there and see what happens” approach. I endorse doing that as well, but here the White House is demanding that departments be strategic about the data sets/APIs they create. Locate a data set that you know people want access to. This is easy to assess. Just look at pageviews, or go over FOIA/ATIP requests and see what is demanded the most. This isn’t rocket science – do what is in most demand first. But you’d be surprised how few governments want to serve up data that is in demand.
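For what it’s worth, the pattern the directive is asking for – structured records plus metadata behind a web API so any presentation layer can consume them – is not exotic. Here is a minimal sketch using Flask; the dataset, fields and endpoint are all invented for illustration.

```python
# A minimal sketch of the "information-centric" pattern: structured records
# plus metadata, served over a web API so any presentation layer (agency site,
# mobile app, non-profit mashup) can consume them. Dataset and fields invented.
from flask import Flask, jsonify

app = Flask(__name__)

INSPECTIONS = [
    {"facility": "Elm St. Clinic", "date": "2012-05-01", "result": "pass"},
    {"facility": "Oak Ave. Clinic", "date": "2012-05-03", "result": "fail"},
]

METADATA = {
    "title": "Facility inspections (sample)",
    "publisher": "Example Agency",
    "updated": "2012-05-04",
    "license": "public domain",
}

@app.route("/api/inspections")
def inspections():
    # Metadata travels with the data, so downstream users know what they have.
    return jsonify({"metadata": METADATA, "records": INSPECTIONS})

if __name__ == "__main__":
    app.run(port=8000)
```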

Another interesting inference one can make from the report is that its recommendations embrace the possibility that participants outside of government – both for-profit and non-profit – can build services on top of government information and data. Referring back to the chart above, see how the presentation layer includes both private and public examples? Consequently, a non-profit’s website dedicated to, say, job information for veterans could pull live data and information from various federal government websites, weave it together and present it in the way that is most helpful to the veterans it serves. In other words, the opportunity for innovation is fairly significant. This also has two additional repercussions. It means that services the government does not currently offer – at least in a coherent way – could be woven together by others. It also means there may be information and services for which the government simply never chooses to develop a presentation layer – it may simply rely on private or non-profit sector actors (or other levels of government) to do that for it. This has interesting political ramifications in that it could allow the government to “retreat” from presenting these services and rely on others. There are definitely circumstances where this would make me uncomfortable, but the solution is not to avoid architecting the system this way; it is to ensure that such programs are funded in a way that ensures government involvement in all aspects – information, platform and presentation.

At this point I want to interject two tangential thoughts.

First, if you are wondering why your government is not doing this – be it at the local, state or national level – here’s a big hint: this is what happens when you make the CIO an executive who reports at the highest level. You’re just never going to get innovation out of your government’s IT department if the CIO reports into the fricking CFO. All that tells me is that IT is a cost centre that should be focused on sustaining itself (e.g. keeping computers on) and that you see IT as having no strategic relevance to government. In the private sector, in the 21st century, this is pretty much the equivalent of committing suicide for most businesses. For governments… making CIOs report into CFOs is considered a best practice. I’ve more to say on this. But I’m taking a deep breath and am going to move on.

Second, I love how clear the document is on milestones – and they are nicely visualized as well. It may be my poor memory, but I feel like it is rare for me to read a government road map on any issue where the milestones are so clearly laid out.

It’s particularly nice when a government treats its citizens as though they can understand something like this, and isn’t afraid to be held accountable for a plan. I’m not saying that other governments don’t set out milestones (some do, many however do not). But often these deadlines are buried in reams of text. Here is a simple scorecard any citizen can look at. Of course, last time around, after the open government directive was issued immediately after Obama took office, they updated these scorecards for each department, highlighting whether milestones were green, yellow or red, depending on how the department was performing. All in front of the public. Not something I’ve ever seen in my country, that’s for sure.

Of course, the document isn’t perfect. I was initially intrigued to see the report advocate that the government “Shift to an Enterprise-Wide Asset Management and Procurement Model.” Most citizens remain blissfully unaware of just how broken government procurement is. Indeed, I say this, dear reader, with no idea where you live and who your government is, but I am enormously confident your government’s procurement process is totally screwed. And I’m not just talking about when they try to buy fighter planes. I’m talking pretty much all procurement.

Today’s procurement is perfectly designed to serve one group: big (IT) vendors. The process is so convoluted and so complicated that they are really the only ones with the resources to navigate it. The White House document essentially centralizes procurement further. On the one hand this is good; it means the requirements around platforms and data noted in the document can be more readily enforced. Basically, the centre is asserting more control at the expense of the departments. And yes, there may be some economies of scale that benefit the government. But the truth is, whenever procurement decisions get bigger, so too do the stakes, and so too does the process surrounding them. Thus there is a tiny handful of players that can respond to any RFP, and real risks that the government ends up in a duopoly (kind of like with defense contractors). There is some wording around open source solutions that helps address some of this but, ultimately, it is hard to see how the recommendations are going to really alter the quagmire that is government procurement.

Of course, these are just some thoughts and comments that struck me and that I hope those of you still reading will find helpful. I’ve got thoughts on the White House Innovation Fellows, especially given that the program appears to have been at least in part inspired by the Code for America fellowship program, with which I have been lucky enough to be involved. But I’ll save those for another post.