Category Archives: public service sector renewal

Government Procurement Reform – It Matters

Earlier this week I posted a slidecast on my talk to Canada’s Access to Information Commissioners about how, as they do their work, they need to look deeper into the government “stack.”

My core argument was that decisions about what information gets made accessible are no longer best managed at the end of a policy development or program delivery process, but rather should be embedded in it. This means monkeying around in the guts of government systems and ensuring there is capacity to export government information and data from the tools (e.g. software) government uses every day. Logically, it also means monkeying around in procurement policy (see slide below), since that is where the specs for the tools public servants use get set. Trying to bake “access” into processes after the software has been chosen is, well, often an expensive nightmare.

Gov stack

Privately, one participant from a police force came up to me afterward and said that I was simply guiding people to another problem: procurement. He is right. I am. Almost everyone I talk to in government feels that procurement is broken. I’ve said as much myself in the past. Clay Johnson is someone who has thought about this more than most; here he is below at the Code for America Summit with a great slide (and talk) about how the current government procurement regime rewards all the wrong behaviours and, often, all the wrong players.

Clay Risk profile

So yes, I’m pushing the RTI and open data community to think about procurement on purpose. Procurement is borked. Badly. Not just from a wasted-tax-dollars perspective, or even from a service delivery perspective, but also because it doesn’t serve the goals of transparency well. Quite the opposite. More importantly, it isn’t going to get fixed until more people start pointing out that it is broken and start contributing to solving this major bottleneck of a problem.

I highly, highly recommend reading Clay Johnson’s and Harper Reed’s opinion piece in today’s New York Times about procurement, titled “Why the Government Never Gets Tech Right.”

All of this becomes more important if the White House (and other governments at all levels) have any hope of executing on their digital strategies (image below). There is going to be a giant effort to digitize much of what governments do, and a huge number of opportunities to find efficiencies and improve services will come from it. However, if all of this depends on multi-million (or worse, 10 or 100 million) dollar systems and websites we are, to put it frankly, screwed. The future of government isn’t to be (continue to be?) taken over by some massive SAP implementation so rigid and controlled that it gives governments almost no opportunity to innovate. Yet this is the future our procurement policies steer us toward: a future with only a tiny handful of possible vendors, a high risk of project failure, and highly rigid, frail systems that are expensive to adapt.

Worse, there is no easy path here. I don’t see anyone doing procurement right, so we are going to have to dive into a thorny, tough problem. However, the more governments try to tackle it in radical ways, the faster we can learn some new and interesting lessons.

Open Data WH

New Zealand: The World’s Lab for Progressive Tech Legislation?

Cross-posted with TechPresident.

One of the nice advantages of having a large world with lots of diverse states is the range of experiments it offers us. Countries (or regions within them) can try out ideas, and if they work, others can copy them!

For example, in the world of drug policy, Portugal effectively decriminalized virtually all drugs. The results have been dramatic, and largely positive. The changes include a 17% decline in HIV diagnoses amongst drug users and a drop in drug use among adolescents (13-15 yrs). For those interested, you can read more about this in a fantastic report for the Cato Institute written by Glenn Greenwald back in 2009, before he started exposing the unconstitutional and dangerous activities of the NSA. Now, more than a decade later, there have been increasing demands to decriminalize and even legalize drugs, especially in Latin America. But even the United States is changing, with both the states of Washington and Colorado opting to legalize marijuana. The lessons of Portugal have helped make the case, not by penetrating the public’s imagination per se, but by showing policy elites that decriminalization not only works but saves lives and saves money. Little Portugal may one day be remembered for changing the world.

I wonder if we might see a similar paper written about New Zealand ten years from now – not about drug policy, but about technology policy. It may be that a number of Kiwis will counter the arguments in this post by exposing all the reasons why I’m wrong (which I’d welcome!), but at a glance, New Zealand is probably the place I’d send a public servant or politician wanting to know more about how to do technology policy right.

So why is that?

First, for those who missed it, this summer New Zealand banned software patents. This is a stunning and entirely sensible accomplishment. Software patents, and the legal morass and drag on innovation they create, are an enormous problem. The idea that Amazon can patent “1-click” (e.g. the idea that you pre-store someone’s credit card information so they can buy an item with a single click) is, well, a joke. This is a grand innovation that should be protected for years?

And yet, I can’t think of a single other OECD member country that is likely to pass similar legislation. This means it will be up to New Zealand to show that the software world will survive just fine without patents and that the economy will not suddenly explode into flames. I also struggle to think of an OECD country where one of the most significant industry groups – the Institute of IT Professionals – would not only support such a measure but help push for its passage:

The nearly unanimous passage of the Bill was also greeted by Institute of IT Professionals (IITP) chief executive Paul Matthews, who congratulated [Commerce Minister] Foss for listening to the IT industry and ensuring that software patents were excluded.

Did I mention that the bill passed almost unanimously?

Second, New Zealanders are further up the learning curve when it comes to the dangerous willingness of their government – and foreign governments – to illegally surveil them online.

The arrest of Kim Dotcom over MegaUpload has sparked investigations into how closely the country’s police and intelligence services follow the law. (For an excellent timeline of the Kim Dotcom saga, check out this link.) This is because Kim Dotcom was illegally spied on by New Zealand’s intelligence services and police force, at the behest of the United States, which is now seeking to extradite him. The arrest and subsequent fallout have piqued public interest and led to investigations, including the Kitteridge report (PDF), which revealed that “as many as 88 individuals have been unlawfully spied on” by the country’s Government Communications Security Bureau.

I suspect the Snowden documents and the subsequent furor surprised New Zealanders less than many of their counterparts in other countries, since they were less a bombshell than another data point on a trend line.

I don’t want to overplay the impact of the Kim Dotcom scandal. It has not, as far as I can tell, led to a complete overhaul of the rules that govern intelligence gathering and online security. That said, I suspect it has created a political climate that may be more (healthily) distrustful of government intelligence services and the intelligence services of the United States. As a result, politicians have likely been sensitive to this matter for a year or two longer than their peers elsewhere, and public servants are more accustomed to assessing policies through the lens of their impact on the rights and privacy of citizens than in many other countries.

Finally (and this is somewhat related to the first point), New Zealand has, from what I can tell, a remarkably strong open source community. I’m not sure why this is the case, but I suspect that people like Nat Torkington – an open source and open data advocate in New Zealand – and others like him play a role in it. More interestingly, this community has had influence across the political spectrum. The centre-left Labour Party deserves much of the credit for the patent reform, while the centre-right New Zealand National Party has embraced open data. The country was among the first to embrace open source as a viable option when procuring software, and in 2003 the government developed an official open source policy to help clear the path for greater use of open source software. This contrasts sharply with my experience in Canada where, as late as 2008, open source was still seen by many government officials as a dangerous (some might say cancerous?) option that needed to be banned and/or killed.

All this is to say that both outside government (in civil society and the private sector) and within it, there is greater expertise in thinking about open source solutions, and so an ability to ask different questions about intellectual property and definitions of the public good. While I recognize that this expertise exists in many countries now, it has existed longer in New Zealand than in most, which suggests it enjoys greater acceptance in senior ranks and that there is greater experience in thinking about and engaging these perspectives.

I share all this for two reasons:

First, I would keep my eye on New Zealand. This is clearly a place where something is happening in a way that may not be possible in other OECD countries. The small size of its economy (and so its relative lack of importance to the major proprietary software vendors), combined with sufficient policy agreement among both the public and elites, enables the country to overcome the internal and external lobbying and pressure that would likely sink similar initiatives elsewhere. And while New Zealand’s influence may be limited, don’t underestimate the power of example. Portugal also has limited influence, but its example has helped show the world that the US-led narrative on the “war on drugs” can be countered. In many ways this is often how it has to happen. Innovation, particularly in policy, often comes from the margins.

Second, if a policy maker, public servant or politician comes to me and asks who to talk to about digital policy, I increasingly find myself pointing at New Zealand as the most compelling place. I have similar advice for PhD students. Indeed, if what I’m arguing is true, we need research to describe, better than I have, the conditions that led to this outcome, as well as the impact these policies are having on the economy, government and society. Sadly, I have no names to give those I suggest this idea to, but I figure they’ll find someone in the government to talk to since, as a bonus to all this, I’ve always found New Zealanders to be exceedingly friendly.

So keep an eye on New Zealand: it could be the place where some of the most progressive technology policies first get experimented with. It would be a shame if no one noticed.

(Again, if some New Zealanders want to tell me I’m wrong, please do. Obviously, you know your country better than I do.)

Thesis Question Idea: Probing Power & Promotions in the Public Service

Here’s an idea for a PhD candidate out there with some interest in government or HR and some quant skills.

Imagine you could access a sensible slice of the HR history of a 300,000+ person organization, so you could see when people were promoted and where they moved within the organization.

I’m not sure if it would work, but the Government Electronic Directory Service (GEDS), essentially a “white pages” of Canada’s national government, could prove to be such a dataset. The service is actually designed to let people find one another within government. However, this also means it could potentially allow someone to track the progress of public servants’ careers, since you can see the different titles an employee holds each time they change jobs (and thus get a new title and phone number in GEDS). While not a perfect match, job titles generally map to pay scales and promotions, making them an imperfect but likely still good metric for career trajectory.

The screen shot below is for a random name I tried. I’ve attempted to preserve the privacy of the employee which, in truth, isn’t really necessary, since anyone can access GEDS and so the data isn’t actually private to begin with.

GEDS 2

There are a number of interesting questions I could imagine an engaged researcher asking with such data. For example, where are the glass ceilings: are there particular senior roles that seem harder for women to get promoted into? Who are the super mentors: is there a manager whose former charges always seem to go on to lofty careers? Are there power cliques: are there super public servants around whom others cluster and whose promotions or career moves are linked? Are there career paths that are more optimal, or suboptimal? Or, worse, is one’s path predetermined early on by where and in what role one enters the public service? And (frighteningly) could you create a predictive algorithm that allowed one to accurately forecast who might be promoted?
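
To make this concrete, here is a minimal sketch of how such an analysis might start, assuming one had two GEDS snapshots exported as CSV. The file names and columns are hypothetical – the real GEDS schema would need to be inspected – and in practice name collisions would need handling.

```python
# Minimal sketch: diff two hypothetical GEDS snapshots to flag title changes
# (a rough proxy for promotions or lateral moves). File and column names are
# illustrative assumptions, not the real GEDS schema.
import csv

def load_snapshot(path):
    """Map each employee's name to their (title, organization) in one snapshot."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["name"]: (row["title"], row["organization"])
                for row in csv.DictReader(f)}

before = load_snapshot("geds_2012.csv")  # hypothetical earlier snapshot
after = load_snapshot("geds_2013.csv")   # hypothetical later snapshot

# Employees present in both snapshots whose title or organization changed are
# candidate promotions; a title-to-pay-scale mapping would be needed to
# distinguish promotions from lateral moves.
for name in sorted(before.keys() & after.keys()):
    if before[name] != after[name]:
        (old_title, old_org), (new_title, new_org) = before[name], after[name]
        print(f"{name}: {old_title} ({old_org}) -> {new_title} ({new_org})")
```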

These types of questions could be enormously illuminating and shed important light on how the public service works. Indeed, this dataset would be important not only to issues of equity and fairness within the public service, but also to training and education. In many ways, I wish the public service itself would look at this data to learn about itself.

Of course, given that there is no pan-government HR group (that I’m aware of), it is unlikely that anyone is thinking about the GEDS data in a pan-government and longitudinal way (more likely there are HR groups organized by ministry that focus only on their own ministry’s employees). All this, in my mind, would make this research in an academic institution all the more important.

I’m sure there are fears that would drive opposition to this. Privacy is an obvious one (this is why I’m saying an academic, or the government itself, should do this). Another might be lawsuits. Suppose such a study did discover institutional sexism? Or that some other group of people was disproportionately passed over for roles in a way that suggested unfair treatment? If this hypothetical study were able to quantify this discrimination in a new way, could it then be used to support lawsuits? I’ve no idea. Nor do I think I care. I’d rather have a government that was leveraging its valuable talent in the most equitable and effective way than one that stayed blind to understanding itself in order to avoid a possible lawsuit.

The big if, of course, is whether snapshots of the GEDS database have been saved over the years, either on purpose or inadvertently (backups?). It is also possible that some geek somewhere has been scraping GEDS on a nightly, weekly or monthly basis. The second big if is: would anyone be willing to hand the data over? I’d like to think the answer would be yes, particularly for an academic whose proposal had been successfully vetted by an Institutional Review Board.
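
For what it’s worth, the “geek somewhere” scenario is trivial to set up. Here is a sketch of a scheduled snapshot collector; the export URL is a placeholder, since whether GEDS offers a bulk export, and in what format, would need to be checked.

```python
# Sketch: periodically save dated snapshots of a directory export so a
# longitudinal record accumulates. The URL below is a placeholder, not a
# real GEDS endpoint.
import datetime
import urllib.request

EXPORT_URL = "https://geds.example.gc.ca/export.csv"  # hypothetical endpoint

def save_snapshot():
    today = datetime.date.today().isoformat()
    with urllib.request.urlopen(EXPORT_URL) as resp:
        with open(f"geds_snapshot_{today}.csv", "wb") as out:
            out.write(resp.read())

if __name__ == "__main__":
    save_snapshot()  # run from cron (say, weekly) to build the archive
```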

If anyone ever decides to pursue this, I’d be happy to talk to you more about ideas I have. Also, I suspect there may be other levels of government with similar applications. Maybe this would be easier to do on a smaller scale.

The Value of Open Data – Don’t Measure Growth, Measure Destruction

Alexander Howard – who, in my mind, is the best guy covering the Gov 2.0 space – pinged me the other night to ask “What’s the best evidence of open data leading to economic outcomes that you’ve seen?”

I’d like to hack the question because – I suspect – many people will be looking to measure “economic outcomes” in ways that are too narrow to be helpful. For example, if you are wondering what big companies are going to come out of the open data movement, and/or what big savings governments are going to find by sifting through the data, I think you are probably looking at the wrong indicators.

Why? Part of it is because the number of “big” examples is going to be small.

It’s not that I think there won’t be any. For example, several years ago I blogged about how FOIed (or, in Canada, ATIPed) data that should have been open helped find $3.2B in evaded tax revenues channeled through illegal charities. It’s just that this is probably not where the wins will initially take place.

This is in part because most data for which there was likely to be an obvious and large economic impact (e.g. spawning a big company or saving a government millions) will have already been analyzed or sold by governments before the open data movement came along. On the analysis side of the question: if you were very confident a dataset could yield tens or hundreds of millions in savings… well… you were probably willing to pay SAS or some other analytics firm $30-100K to analyze it. And you were probably willing to pay SAP a couple of million (a year?) to set up the infrastructure just to gather the data.

Meanwhile, on the “private sector company” side of the equation: if that data had value, there were probably eager buyers. In Canada, for example, census data – which helps with planning where to locate stores or how to engage in marketing and advertising effectively – was sold because the private sector made it clear it was willing to pay for access. (Sadly, this was bad news for academics, non-profits and everybody else, for whom it should have been free, as it was in the US.)

So my point is that a great deal of the (again) obvious low-hanging fruit was probably picked long before the open data movement showed up, because governments – or companies – were willing to invest modest amounts to capture the benefits that picking that fruit would yield.

This is not to say I don’t think there are diamonds in the rough out there – datasets that will reveal significant savings – but I doubt they will be obvious or easy finds. Nor do I think billion-dollar companies are going to spring up around open datasets overnight since, by definition, open data has low barriers to entry for any company that adds value to it. One should remember it took Red Hat two decades to become a billion-dollar company. Impressive, but still tiny compared to many of its rivals.

And that is my main point.

The real impact of open data will likely not be in the economic wealth it generates, but rather in its destructive power. I think the real impact of open data is going to be in the value it destroys and so in the capital it frees up to do other things. Much as Red Hat is a fraction of the size of Microsoft, open data is going to enable new players to disrupt established data players.

What do I mean by this?

Take SeeClickFix. Here is a company that, leveraging the Open311 standard, is able to provide many cities with a 311 solution that works pretty much out of the box. Twenty years ago, this was a $10 million+ problem for a major city to solve, and wasn’t even something a small city could consider adopting – it was just prohibitively expensive. Today, SeeClickFix takes what was a 7- or 8-digit problem and makes it a 5- or 6-digit problem. Indeed, I suspect SeeClickFix almost works better in a small to mid-sized government that doesn’t have complex work order software and so can just use SeeClickFix as a general solution. For this part of the market, it has crushed the cost out of implementing a solution.
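
Part of why a shared standard crushes costs is that any Open311 (GeoReport v2) endpoint exposes the same resources, so one client works across vendors and cities. A minimal sketch, using a placeholder endpoint URL:

```python
# Minimal sketch of an Open311 (GeoReport v2) client. The endpoint URL is a
# placeholder; any standards-compliant city endpoint exposes these resources.
import json
import urllib.request

ENDPOINT = "https://city.example.gov/open311/v2"  # hypothetical endpoint

def get_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# List the service types (potholes, graffiti, ...) the city accepts reports for.
for service in get_json(f"{ENDPOINT}/services.json"):
    print(service["service_code"], "-", service["service_name"])

# List recently submitted service requests and their status.
for request in get_json(f"{ENDPOINT}/requests.json"):
    print(request["service_name"], request["status"], request.get("address"))
```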

Another example, and one I’m most excited about: look at CKAN and Socrata. Most people believe these are open data portal solutions. That is a mistake. These are data management companies that happen to have made sharing (or “openness”) a core design feature. You know who does data management? SAP. What Socrata and CKAN offer is a way to store, access, share and engage with data previously gathered and held by companies like SAP at a fraction of the cost. A SAP implementation is a 7- or 8- (or, god forbid, 9-) digit problem. And many city IT managers complain that doing anything with data stored in SAP takes time and money. CKAN and Socrata may have only a fraction of the features, but they are dead simple to use, and make it dead simple to extract and share data. More importantly, they turn these costly 7- and 8-digit problems into potentially cheap 5- or 6-digit problems.
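
To illustrate the “dead simple to extract” claim: CKAN portals expose a uniform Action API, so pulling data out is a couple of HTTP calls. A sketch, with a hypothetical portal URL (package_search is part of CKAN’s documented API):

```python
# Sketch: find and list downloadable resources on a CKAN-backed data portal.
# The portal URL is hypothetical; package_search is a real CKAN Action API call.
import json
import urllib.parse
import urllib.request

PORTAL = "https://data.example-city.gov"  # hypothetical CKAN portal

def action(name, **params):
    """Call a CKAN Action API endpoint and return its 'result' payload."""
    query = urllib.parse.urlencode(params)
    with urllib.request.urlopen(f"{PORTAL}/api/3/action/{name}?{query}") as resp:
        return json.load(resp)["result"]

# Search for datasets matching a keyword, then list their resources.
for dataset in action("package_search", q="budget", rows=5)["results"]:
    print(dataset["title"])
    for resource in dataset["resources"]:
        print("  ", resource.get("format"), resource["url"])
```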

On the analysis side, again, I do hope there will be big wins – but what I really think open data is going to do is lower the cost of creating lots of small wins – crazy numbers of tiny efficiencies. If SAP and SAS were about solving the 5 problems that could create tens of millions in operational savings for governments and companies, then Socrata, CKAN and the open data movement are about finding the 1,000 problems for which you can save between $20,000 and $1M. For example, when you look at the work Michael Flowers is doing in NYC, his analytics team is going to transform New York City’s budget. They aren’t finding $30 million in operational savings, but they are generating a steady stream of very solid 6- to low-7-digit savings, project after project. (This is to say nothing of the lives they help save with their work on ambulances and fire safety inspections.) Cumulatively, over time, these savings are going to add up to a lot. But there probably isn’t going to be a big bang. Rather, we are getting into the long tail of savings. Lots and lots of small stuff… that is going to add up to a very big number, while no one is looking.

So when I look at open data, yes, I think there is economic value. Lots and lots of economic value. Hell, tons of it.

But it isn’t necessarily going to happen in a big bang, and it may take place in the creative destruction it fosters and so in the capital it frees up to spend on other things. That may make it harder to measure (I’m hoping some economist much smarter than me is going to tell me I’m wrong about that) but that’s what I think the change will look like.

Don’t look for the big bang, and don’t measure the growth in spending or new jobs. Rather let’s try to measure the destruction and cumulative impact of a thousand tiny wins. Cause that is where I think we’ll see it most.

Postscript: Apologies again for any typos – it’s late and I’m just desperate to get this out while it is burning in my brain. And thank you Alex for forcing me to put into words something I’ve been thinking about saying for months.


Canada Post and the War on Open Data, Innovation & Common Sense (continued, sadly)

Almost exactly a year ago I wrote a blog post on Canada Post’s War on the 21st Century, Innovation & Productivity. In it I highlighted how Canada Post launched a lawsuit against a company – Geocoder.ca – that recreates the postal code database via crowdsourcing. Canada Post’s case was never strong, but then, that was not the goal. For a large, taxpayer-backed company, the point wasn’t to be right; it was to use the law as a way to financially bankrupt a small innovator.

This case matters – especially to small start-ups and non-profits. Open North – a non-profit on whose board of directors I sit – recently explored what it would cost to use Canada Post’s postal code database on represent.opennorth.ca, a website that helps identify the elected officials who serve a given address. The cost? $9,000 a year – nothing near what it could afford.

But that’s not all. There are several non-profits that use Represent to help inform donors and other users of their websites about which elected officials represent the geographies where they advocate for change. The licensing cost if you include all of these non-profits and academic groups? $50,000 a year.

This is not a trivial sum, and it is very significant for non-profits and academics. It is also a window into why Canada Post is trying to sue Geocoder.ca – which offers a version of its database for… free. That a private company can offer a similar service at a fraction of the cost (or for nothing) is, of course, a threat.
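
(For context, here is roughly what Represent makes possible – a sketch of looking up elected officials by postal code via Open North’s public API. The endpoint pattern and field names are from memory of the API’s documentation, so verify them before relying on this.)

```python
# Sketch: look up elected officials for a Canadian postal code using Open
# North's Represent API. Endpoint pattern and field names should be checked
# against the current documentation.
import json
import urllib.request

def officials_for_postcode(code):
    url = f"https://represent.opennorth.ca/postcodes/{code}/"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

data = officials_for_postcode("K1A0A6")  # a sample federal postal code
for rep in data.get("representatives_centroid", []):
    print(rep["elected_office"], "-", rep["name"])
```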

Sadly, I wish I could report good news on the one-year anniversary of the case. Indeed, I should be able to!

This is because what should have been the most important development was the Federal Court of Appeal making it even clearer that data cannot be copyrighted. This probably made it plain to Canada Post’s lawyers that they were not going to win, and made it even more obvious to those of us in the public that the lawsuit against Geocoder.ca – which has not been dropped – was completely frivolous.

Sadly, Canada Post’s reaction to this erosion of its position was not to back off, but to double down. Recognizing that it likely won’t win a copyright case over postal code data, it has decided:

a) to assert that it holds a trademark on the words “postal code”

b) to name Ervin Ruci – the operator of Geocoder.ca – as a defendant in the case, as opposed to just his company.

The second part shows just how vindictive Canada Post’s lawyers are, and reveals the true nature of this lawsuit. This is not about protecting a trademark. This is about sending a message about legal costs and fees. This is a predatory lawsuit, funded by you, the taxpayer.

But part (a) is also sad. Having seen the writing on the wall around its capacity to win the case around data, Canada Post has suddenly decided – 88 years after it first started using “Postal Zones” and 43 years after it started using “Postal Codes” – to assert a trademark on the term? (You can read more on the history of postal codes in Canada here.)

Moreover, the legal implications if Canada Post actually won the case would be fascinating. It is unclear that anyone would be allowed to solicit anybody’s postal code – at least while mentioning the term “postal code” – on any form or website without Canada Post’s express permission. It leads one to ask: does the federal government have Canada Post’s express permission to solicit postal code information on tax forms? On passport renewal forms? On any form it has ever published? Because if not, then, if I understand Canada Post’s claim correctly, it is in violation of Canada Post’s trademark.

Given the current government’s goal to increase the use of government data and spur innovation, will it finally intervene in an absurd case that Canada Post cannot win – one that uses taxpayer dollars to snuff out innovators, increases the cost for academics doing geospatially oriented social research, and creates a great deal of uncertainty about how anyone online – non-profits, companies, academics or governments – can use postal codes?

I know of no other country in the world that has to deal with this kind of behaviour from its postal service. The United Kingdom compelled its postal service to make postal code information public years ago. In Canada, we handle the same situation by letting a taxpayer-subsidized monopoly hire expensive lawyers to launch frivolous lawsuits against innovators who are not breaking the law.

That is pretty telling.

You can read more about this, and see the legal documents, on Ervin Ruci’s blog; the story has also been well covered at canada.com.

CivicOpen: New Name, Old Idea

The other day Zac Townsend published a piece, “Introducing the idea of an open-source suite for municipal governments,” laying out the case for why cities should collaboratively create open source software that can be shared among them.

I think it is a great idea, and I’m thrilled to hear that more people are excited about exploring this model. But any such discussion would be helped by some broader context – more importantly because any series of posts on this subject that fails to look at previous efforts is, well, doomed to repeat the failures of the past.

Context

I wrote several blog posts making the case for it in 2009. (Rather than CivicOpen, I called it Muniforge.) These posts, I’m told, helped influence the creation of CivicCommons, a failed effort to do something similar (on whose advisory board I sat).

Back when I published my posts I thought I was introducing the idea of shared software development in the civic space. It was only shortly after that I learned of Kuali – a similar effort occurring in the university sector – and was so enamoured with it that I wrote a piece about how we should clone its governance structure and create a similar organization for the civic space (something the Kuali leadership told me they would be happy to facilitate). I also tried to expose anyone I could to Kuali. I had a Kuali representative speak at the first Code for America Summit and have talked about it from time to time while helping teach the Code for America Fellows at the beginning of each CfA year. I also introduced the Kuali team to anyone I would meet in municipal IT and helped spur conference calls between them and people in San Francisco, Boston and other cities so they could understand the model better.

But even in 2009 I was not introducing the idea of shared municipal open source projects. While discovering that Kuali existed was great (and perhaps I can be forgiven for my oversight, as it is in the educational, not the municipal, space), I completely failed to locate the Association des développeurs et utilisateurs de logiciels libres pour les administrations et les collectivités territoriales (ADULLACT), a French project created seven(!) years before my piece that does exactly what I described in my Muniforge posts and what Zac describes in his post. (The English translation of ADULLACT’s name is the Association of Developers and Users of Free Software for Governments and Local Authorities; there is no English Wikipedia page that I could see, but a good French version is here.) I know little about why ADULLACT has survived, but its continued existence suggests it is doing something right – ideas and processes that should inform any post about what a good structure for CivicOpen or a similar initiative might draw upon.

Of course, in addition to these, there are several other groups that have tried – some with little, others with greater, success – to talk about or create open source projects that span cities. Indeed, last year Berkeley’s Smart Cities program proclaimed that the Open Source City had arrived. In that piece Berkeley references OpenPlans, which has for years tried to do something similar (and was indeed one of the instigators behind CivicCommons, the failed effort to get cities to share code mentioned above). Here you can read Philip Ashlock, who was then at OpenPlans, talk about the desirability of cities creating and sharing open source code. In addition to CivicCommons, there is CityMart, a private sector actor that seeks to connect cities with solutions, including those that are open source; in essence it could be a catalyst for building municipal open source communities but, as far as I can tell, it isn’t. (Update: there was also the US Federal Government’s Open Code Initiative, which now seems defunct, and Mark Headd of Code for America tells me to Google “Government Open Code Collaborative,” though I can find no information on it.)

To understand why some models have failed and some have succeeded, Andrew Oram’s article in the Journal of Information Technology and Politics is another good starting point. There was also some research into these ideas shared at the Major Cities of Europe IT Users Group back in 2009 by James Fogarty and Willoughby, which can be read here; it includes mini-case studies from several cities and a crude, but good, cost-benefit analysis.

And there are other, more random efforts, like The Intelligent/Smart Cities Open Source Community, which is for “anyone interested on intelligent / smart cities development and looks for applications and solutions which have been successfully implemented in other cities, mainly open source applications.”

Don’t be ahistorical

I share all this for several reasons.

  1. I want Zac to succeed in figuring out a model that works.
  2. I’d love to help.
  3. To note that a lot of thought has already gone into this idea. I myself thought I was breaking ground when I wrote my Muniforge piece back in 2009. I was not. There were a ton of lessons I could have incorporated into that piece but did not, and previous successes and failures I could have linked to, but didn’t (at least until discovering Kuali).

I get nervous when I see posts – like that on the Ash Centre’s blog – that don’t cite previous efforts and that feel, to put it bluntly, ahistorical. I think laying out a model is a great idea. But we have a lot of data and stories about what works and doesn’t. To not draw on these examples (or even mention or link to them) seems to be a recipe for repeating the mistakes of the past. There are reasons CivicCommons failed, and why Kuali and ADULLACT have succeeded. I’ve interviewed a number of people at the former (and sadly no one at the latter) and this feels like table stakes before venturing down this path. It also feels like a good way of modelling what you eventually want a municipal/civic open source community to do – build on and learn from the social code, as well as the business and organizational models, of those that have failed and succeeded before you. That is the core of what the best and most successful open source (and, frankly, many successful non-open source) projects do.

What I’ve learned is that the biggest challenges are not technical, but cultural and institutional. Many cities have policies, explicit or implicit, that prevent them from using open source software, to say nothing of co-creating it. Indeed, after helping draft the Open Motion adopted by the City of Vancouver, I helped the city revise its procurement policies to address these obstacles. Drawing on the example mentioned in Zac’s post, you will struggle to find many small and medium-sized municipalities that use Linux, or even that let employees install Firefox on their computers. Worse, many municipal IT staff have been trained to believe that open source is unstable, unreliable and involves questionable people. It is a slow process to reverse these opinions.

Another challenge that needs to be addressed is that many city IT departments have been hollowed out and don’t have the capacity to write much code. For many cities, IT is more about operations and selecting whom to procure from, not writing software. So a CivicOpen/Muniforge/CivicCommons/ADULLACT approach will represent a departure into an arena where many cities have little capacity and experience. Many will be reluctant to build this capacity.

There are many more concerns, of course, and despite them I continue to think the idea is worth pursuing.

I also fear this post will be read as a critique. That is not my intent. Zac is an ally and I want to help. Above all, I share this because the good news is that this isn’t an introduction. There is a rich history of ideas and efforts to learn from and build upon. We don’t need to do this on our own or invent anew. There is a community of us thinking about these things who have lessons to share, so let’s share and make this work!

A note to the Ash Centre

As an aside, I’d have loved to link to this at the bottom of Zac’s post on the Harvard Kennedy School Ash Centre website, but the webmaster has opted not to allow comments. Even more odd, the site does not show dates on its posts. Again, at the risk of sounding critical when I want to be constructive: I believe it is important for anyone, but for academic institutions especially, to list dates on articles so that we can better understand the timing and context in which they were written. In addition (and I understand this sentiment may not be shared), a centre focused on Democratic Governance and Innovation should allow for some form of feedback or interaction – at least some way people can respond to and/or build on the ideas it publishes.

The South -> North Innovation Path in Government: An Example?

I’ve always felt that a lot of innovation happens where resources are scarcest. Scarcity forces us to think differently, to be efficient and to question traditional (more expensive) models.

This is why I’m always interested to see how local governments in developing economies are handling various problems. There is always an (enormous) risk that these governments will be lured into doing things the way they have been done in developed economies (hello SAP!). Sometimes this makes sense, but often newer, disruptive and cheaper ways of accomplishing the goal have emerged in the interim.

What I think is really interesting is when a trend started in the global south migrates to the global north. I think I may have just spotted one example.

The other week the City of Boston announced its City Hall to Go trucks – mobile vans that, like food trucks, will drive around the city and appear at various civic events, delivering citizen services on the go! See the video and “menu” below.


City Hall to Go menu

This is really cool. In Vancouver we have a huge number of highly successful food carts. It is not hard to imagine an experiment like this here as well – particularly in underserved neighborhoods, or at the numerous public festivals and food markets that take place across the city.

But, as the title of this post suggests, Boston is not the first city to do this. This United Nations report points out how the state government of Bahia started doing something similar in the mid-90s in the state capital of Salvador.

In 1994 the Government of Bahia hosted the first of several annual technology fairs in the state capital, Salvador. A few government services were offered there, using new ICT systems (e.g., issuing identification cards). The service was far more efficient and well-received by the public. The idea was then raised: Why not deliver services this way on a regular basis?

…A Mobile Documents SAC also was developed to reach the most remote and deprived communities in Bahia. This Mobile SAC is a large, 18-wheel truck equipped with air-conditioning, TV set, toilets, and a covered waiting area. Inside the truck, four basic citizenship services are provided: issuance of birth certificates, identification card, labor identification card, and criminal record verification.

I feel very much like I’ve read about smaller trucks delivering services in other Brazilian cities as well – I believe one community had mobile carts with computers on them that toured neighborhoods so citizens could more effectively participate in online petitions and crowdsourcing projects being run by the local government.

I’m not sure if the success of these projects in developing economy cities influenced the thinking in Boston – if yes, that is interesting. If not, it is still interesting: it suggests the thinking and logic behind this type of innovation is occurring in several cities simultaneously, even when those cities have markedly different levels of GDP per capita and internet access (among many other things). My hope is that those in government will be more and more willing to look at how their counterparts elsewhere in the world – no matter where – are doing things. Money is tight for governments everywhere, so good ideas may be most likely to come from those who feel the burden of costs the most.