Tag Archives: CPSR

Open Data for Development Challenge on Jan 27-28

This just came across my email via Michael Roberts who has been doing great work in this space.

Open Data for Development Challenge
January 27–28, 2014 — Montreal, Canada

Do you want to share your creative ideas and cutting-edge expertise, and make a difference in the world?
Do you want to help Canadians and the world understand how development aid is spent and what its impact is?
Do you want to be challenged and have fun at the same time?

If so, take the Open Data for Development Challenge!

This unique 36-hour "codathon" organized by Foreign Affairs, Trade and Development Canada will bring together Canadian and international technical experts and policy makers to generate new tools and ideas in the fields of open data and aid transparency and contribute to innovative solutions to the world's pressing development challenges.

The event will feature keynote speakers Aleem Walji, Director of the World Bank’s Innovation Labs, and Mark Surman, Executive Director of the Mozilla Foundation. It will have two related dimensions:

  • Technical challenges that involve building applications to make existing open aid and development-related data more useful. Proposed topics include building a data viewer compatible with multilingual data, creating a publishing tool suitable for use by mid-sized Canadian non-profit organizations, developing and testing applications for open contracting, and taking a deep dive into the procurement data of the World Bank Group. There is room for challenges proposed by the community. Proposals should be submitted through the event website no later than January 8th. Challenges will be published prior to the event, along with key datasets and other related information, to enable participants to prepare for the event.
  • Policy discussions on how open data and open government can enable development results. This would include the use of big data in development programming, the innovative ways in which data can be mapped and visualized for development, and the impact of open data on developing countries.

The international aid transparency community will be encouraged to take promising tools and ideas from the event forward for further research and development.

An overview of the draft program is attached. The event will be in English and French, with interpretation provided in the plenary sessions and panel discussions.

We invite you to register, at no cost, at this website as soon as possible and no later than January 10. A message confirming your registration and providing additional information about the venue and accommodation will be sent to confirmed participants. Please wait for this confirmation before making any travel arrangements. Participants are asked to make their own accommodation arrangements. A limited number of guest rooms will be available to event participants at a preferential rate.

To find out more about the Open Data for Development Challenge, please go to DFATD’s website.

The Importance of Open Data Critiques – thoughts and context

Over at the Programmable City website Rob Kitchin has a thoughtful blog post on open data critiques. It is very much worth reading and wider discussion. Specifically, there are two things worth noting. First, it is important for the open data community – and advocates in particular – to acknowledge the responsibility we have in debates about open data. Second, I'd like to examine some of the critiques raised and discuss those I think misfire and those that deserve deeper dives.

Open Data as Dominant Discourse

During my 2011 keynote at Open Government Data camp I talked about how the open data movement was at an inflection point:

For years we have been on the outside, yelling that open data matters. But now we are being invited inside.

Two years later the transition is more than complete. If you have any doubts, consider this picture:

OD as DC

Once you have these people talking about things like a G8 Open Data Charter, you are no longer on the fringes. Not even remotely.

It also means understanding the challenges around open data has never been more important. We – open data advocates – are now complicit in what many of the above (mostly) men decide to do around open data. Hence the importance of Rob's post. Previously those with power were dismissive of open data – you had to scream to get their attention. Today, those same actors want to act now and go far. Point them (or the institutions they represent) in the wrong direction and/or frame an issue incorrectly and you could have a serious problem on your hands. Consequently, the responsibility of advocates has never been greater. This is even more the case as open data has spread. Local variations matter. What works in Vancouver may not always be appropriate in Nairobi or London.

I shouldn’t have to say this but I will, because it matters so much: Read the critiques. They matter. They will make you better, smarter, and above all, more responsible.

The Four Critiques – a break down

Reading the critiques and agreeing with them is, of course, not the same thing. Rob cites four critiques of open data: funding and sustainability, politics of the benign and empowering the empowered, utility and usability, and neoliberalisation and marketisation of public services. Some of these I think miss the real concerns and risks around open data, others represent genuine concerns that everyone should have at the forefront of their thinking. Let me briefly touch on each one.

Funding and sustainability

This one strikes me as the least effective criticism. Outside the World Bank I've not heard of many examples where governments effectively sell their data to make money. I would be very interested in examples to the contrary – it would make for a great list and would enlighten the discussion – although not, I suspect, in ways that would make either side of the discussion happy.

The little research that has been done into this subject suggests that charging for government data almost never yields much money, and often actually serves as a loss-creating mechanism. Indeed, a 2001 KPMG study of Canadian geospatial data found government almost never made money from data sales if purchases by other levels of government were not included. Again in Canada, Statistics Canada argued for years that it couldn't "afford" to make its data open (free) as it needed the revenue. However, it turned out that the annual sum generated by these sales was around $2 million. This is hardly a major contributor to its bottom line. And of course, this does not count the money that had to go towards salaries and systems for tracking buyers and users, chasing down invoices, etc…

The disappointing line in the critique however was this:

de Vries et al. (2011) reported that the average apps developer made only $3,000 per year from apps sales, with 80 percent of paid Android apps being downloaded fewer than 100 times.  In addition, they noted that even successful apps, such as MyCityWay which had been downloaded 40 million times, were not yet generating profits.

Ugh. First, apps are not what is going to make open data interesting or sexy. I suspect they will make up maybe 5% of the ecosystem. The real value is going to be in analysis and enhancing other services. It may also be in the costs it eliminates (and thus capital and time it frees up, not in the companies it creates), something I outlined in Don’t Measure the Growth, Measure the Destruction.

Moreover, this is the internet. The average doesn't mean anything. The average webpage probably gets 2 page views per day. That hardly means there aren't lots of very successful webpages. The distribution is not a bell curve, it's a long tail, so it is hard to see what the average tells us other than that the cost of experimentation is very, very low. It tells us very little about whether there are, or will be, successful uses of open data.
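To make the long-tail point concrete, here is a small simulation (entirely hypothetical numbers, not the de Vries data): draw app download counts from a Pareto distribution and compare the mean with the median.

```python
import random

random.seed(42)

# Draw 10,000 hypothetical app download counts from a Pareto (long-tail)
# distribution: most apps get almost nothing, a few get enormous numbers.
downloads = [int(random.paretovariate(1.2)) for _ in range(10_000)]

mean = sum(downloads) / len(downloads)
median = sorted(downloads)[len(downloads) // 2]

print("mean:  ", round(mean, 1))
print("median:", median)
```

In a distribution like this the mean sits well above the typical (median) app, pulled up by a handful of outliers – which is why "the average developer made $3,000" tells us little about either the many near-zero apps or the few runaway successes.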

Politics of the benign and empowering the empowered

This is the most important critique and it needs to be engaged. There are definitely cases where data can serve to further marginalize at-risk communities. In addition, there are data sets that, for reasons of security and privacy, should not be made open. I'm not interested in publishing the locations of women's shelters or, worse, the list of families taking refuge in them. Nor do I believe that open data will always serve to challenge the status quo or create greater equality. Even at its most reductionist – if one believes that information is power, then a greater ability to access and make use of information makes one more powerful – this means the creation of new information will create winners and losers.

There are, however, two things that give me some hope in this space. The first is that, when it comes to open data, the axis of competition among providers usually centers around accessibility. For example, the Socrata platform (a provider of open data portals to governments) invests heavily in creating tools that make government data accessible and usable to the broadest possible audience. This is not a claim that all communities are being engaged (far from it) or that a great deal more work cannot be done, but there is a desire to show greater use, which drives some data providers to try to find ways to engage new communities.

The second is that if we want to create a data-literate society – and I think we do, for reasons of good citizenship, social justice and economic competitiveness – we need the data first for people to learn and play with. One of my most popular blog posts is Learning from Libraries: The Literacy Challenge of Open Data, in which I point out that one of the best ways to help people become data literate is to give them more interesting data to play with. My point is that we didn't build libraries after everyone knew how to read; we built them beforehand with the goal of having them as a place that could facilitate learning and education. Of course libraries also often have strong teaching components, and we definitely need more of this. Figuring out whom to engage, and how to do so most effectively, is something I'm deeply interested in.

There are also things that often depress me. I struggle to think of technologies that did not empower the empowered – at least initially. From the cell phone to the car to the printing press to open source software, all these inventions have helped billions of people, but they did not distribute themselves evenly, especially at first. So the question cannot be reduced to "will open data empower the empowered?" but rather "to what degree, and where, and with whom?" I've seen plenty of evidence where data has enabled small groups of people to protect their communities or make more transparent the impact (or lack thereof) of a government regulation. Open data expands the number of people who can use government information for their own ends – this, I believe, is a good thing – but that does not mean we shouldn't be constantly looking for ways to ensure that it does not reinforce structural inequity. Achieving perfect distribution of the benefits of a new technology, or even public policy, is almost impossible. So we cannot make perfect the enemy of the good. However, that does not change the fact that there are real risks – and responsibilities as advocates – that need to be considered here. This is an issue that will need to be constantly engaged.

Utility and Usability

Some of the issues around usability I've addressed above in the accessibility piece – for some portals (that genuinely want users) the axis of evolution is pointed in the right direction, with governments and companies (like Socrata) trying to embed more tools on their websites to make the data more usable.

I also agree with the central concern (not a critique) of this section, which is that rather than creating a virtuous circle, poorly thought out and launched open data portals will create negative "doom loops" in which poor-quality data begets little interest, which begets less data. However, the concern, in my mind, focuses on too narrow a problem.

One of the big reasons I've been an advocate of open data was a desire not just to help citizens, non-profits and companies gain access to information that could help them with their missions, but to change the way government deals with its data so that it can share it internally more effectively. I often cite a public servant I know who had a summer intern spend three weeks surfing the national statistical agency website to find data they knew existed but could not find because of terrible design and search. A poor open data site is not just a sign that the public can't access or effectively use government data; it usually suggests that the government's own employees can't access or effectively use their own data. This is often deeply frustrating to many public servants.

Thus, the most important outcome created by the open data movement may have been making governments realize that data represents an asset class of which they have had little understanding (outside, sadly, the intelligence sector, which has been all too aware of this) and for which they have had little policy and governance (outside, say, the GIS space and some personal records categories). Getting governments to think about data as a platform (yes, I'm a fan of government as a platform for external use, but above all for internal use) is, in my mind, one way we can both enable public servants to get better access to information while simultaneously attacking the huge vendors (like SAP and Oracle) whose $100-million implementations often silo off data, rarely produce the results promised and are so obnoxiously expensive it boggles the mind (Clay Johnson has some wonderful examples of the roughly 50% of large IT projects that fail).

The key to all this is that open data can't be something you slap on top of a big IT stack. I try to explain this in It's the Icing Not the Cake, another popular blog post, about why Washington DC was able to launch an effective open data program so quickly (one which was, apparently, so effective at bringing transparency to procurement data that the subsequent mayor rolled it back). The point is that governments need to start thinking in terms of platforms if – over the long term – open data is going to work. And they need to start thinking of themselves as the primary consumers of the data being served on those platforms. Steve Yegge's brilliant and sharp-witted rant on how Google doesn't get platforms is an absolute must-read in this regard for any government official – the good news is you are not alone in not finding this easy. Google struggles with it as well.

My main point: let's not play at the edges and merely define this challenge as one of usability. It is a much, much bigger problem than that. It is a big, deep, culture-changing BHAG problem that needs tackling. If we get it wrong, then the big government vendors and the inertia of bureaucracy win. If we get it right, we could potentially save taxpayers millions while enabling a more nimble, effective and responsive government.

Neoliberalisation and Marketisation of Government

If you have not read Jo Bates' article "Co-optation and contestation in the shaping of the UK's Open Government Data Initiative," I highly recommend it. There are a number of arguments in the article I'm not sure I agree with (and feel are softened by her conclusion – so do read it all first). For example, the notion that open data has been co-opted into an "ideologically framed mould that champions the superiority of markets over social provision" strikes me as lacking nuance. One of the things open data can do is create public recognition of a publicly held data set and of the need to protect it against being privatized. Of course, what I suspect is that both things could be true simultaneously – there can be increased recognition of the importance of a public asset while also recognizing the increased social goods and market potential in leveraging said asset.

However, there is one thing Bates is absolutely correct about. Open data does not come into an empty playing field. It will be used by actors – on both the left and right – to advance their cause. So I too am uncomfortable with those who believe open data is going to somehow depoliticize government or politics – indeed, I made a similar argument in a piece in Slate on the politics of data. As I try to point out there, you can only create a perverse, gerrymandered electoral district that looks like this…

gerrymandered in chicago

… if you've got pretty good demographic data about target communities you want to engage (or avoid). Data – and even open data – doesn't magically make things better. There are instances where open data can, I believe, create positive outcomes by shifting incentives in appropriate ways… but similarly, it can help all sorts of actors find ways to satisfy their own goals, which may not be aligned with your – or even society at large's – goals.

This makes voices like Bates' deeply important, since they will challenge those of us interested in open data to be constantly evaluating the language we use, the coalitions we form and the priorities we set, in ways that I think are profoundly important. Indeed, if you get to the end of Bates' article, there is a list of recommendations that I don't think anyone I work with around open data would find objectionable – quite the opposite: they would agree they are completely critical.

Summary

I'm so grateful to Rob for posting this piece. It has helped me put into words some thoughts I've had, both about the open data criticisms and about the important role the critiques play. I try hard to be a critical advocate of open data – one who engages the risks and challenges posed by open data. I'm not perfect, and balancing these two goals – advocacy with a critical view – is not easy, but I hope this sheds some light on the ways I'm trying to balance them and possibly helps others do more of it as well.

The promise and challenges of open government – Toronto Star OpEd

As some readers may know, it was recently announced that I've been asked by Ontario Premier Wynne and Government Services Minister John Milloy to be part of the Government of Ontario's task force on Open Government.

The task force will look at best practices around the world as well as engage a number of stakeholders and conduct a series of public consultations across Ontario to make a number of recommendations around opening up the Ontario government.

I have an opinion piece in the Toronto Star today titled The Promise and Challenges of Open Government where I try (in a few words) to outline some of the challenges the task force faces as well as some of the opportunities I hope it can capitalize on.

The promise and challenges of open government

Last week, Premier Kathleen Wynne announced the launch of Ontario’s Open Government initiative, including an engagement task force (upon which I sit).

The premier's announcement comes on the heels of a number of "open government" initiatives launched in recent years. President Barack Obama's first act in 2009 was to sign the Memorandum on Transparency and Open Government. Since then, numerous city, state and provincial governments across North America have been finding new ways to share information. Internationally, 60 countries belong to the Open Government Partnership, a coalition of states and non-profits that seeks to improve accountability, transparency, technology and innovation, and citizen participation.

Some of this is, to be blunt, mere fad. But there is a real sense among many politicians and the public that governments need to find new ways to be more responsive to a growing and more diverse set of citizen needs, while improving accountability.

Technology has certainly been – in part – a driver, if only because it shifts expectations. Today a Google search takes about 30 milliseconds, with many users searching for mere minutes before locating what they are looking for. In contrast, access to information requests can take weeks or months to complete. In an age of computers, government processes often seem more informed by the photocopier – clinging to complex systems for sorting, copying and sharing information – than by computer systems that make it easy to share information by design.

There is also growing recognition that government data and information can empower people both inside and outside government. In British Columbia, the province's open data portal is widely used by students – many of whom previously used U.S. data as it was the only free source. Now the province benefits from an emerging workforce that uses local data while studying everything from the environment to demography to education. Meanwhile the largest users of B.C.'s open data portal are public servants, who are able to research and create policy while drawing on better information, all without endless meetings to ask for permission to use other departments' data. The savings from fewer meetings alone are likely significant.

The benefits of better leveraging government data can affect us all. Take the relatively mundane but important issue of transit. Every day hundreds of thousands of Ontarians check Google Maps or locally developed applications for transit information. The accumulated minutes not spent waiting for transit have likely saved citizens millions of hours. Few probably realize, however, that it is because local governments "opened" transit data that it has become so accessible on our computers and phones.
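The "opened" transit data in question is typically published in the General Transit Feed Specification (GTFS) – a zip of plain CSV files that applications like Google Maps consume. A minimal sketch of reading one of those files (the column layout follows GTFS; the sample rows are invented):

```python
import csv
import io

# stops.txt is the GTFS table listing every transit stop with its
# coordinates. The rows below are invented sample data in the real
# GTFS column layout.
stops_txt = """stop_id,stop_name,stop_lat,stop_lon
1001,Main St at 1st Ave,43.6510,-79.3470
1002,Main St at 2nd Ave,43.6525,-79.3501
"""

stops = list(csv.DictReader(io.StringIO(stops_txt)))
for stop in stops:
    print(stop["stop_id"], stop["stop_name"], stop["stop_lat"], stop["stop_lon"])
```

Because every agency publishes the same file layout, one application can serve riders in hundreds of cities – which is much of the point of opening the data in a common standard.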

Finally, there are a number of new ways to think about how to “talk” to Ontarians. It is possible that traditional public consultations could be improved. But there is also an opportunity to think more broadly about how the government interacts with citizens. Projects like Wikipedia demonstrate how many small contributions can create powerful resources and public assets. Could such a model apply to government?

All of these opportunities are exciting – and the province is right to explore them. But important policy questions remain. For example: how do we safeguard the data government collects to minimize political interference? The country lost a critical resource when the federal government destroyed the reliability of the long form census by making it voluntary. If crowdsourcing and other new forms of public engagement can be adopted for government, how do we manage privacy concerns and preserve equality of opportunity? And how will such changes affect public representation? Canada’s political system has been marked by increasing centralization of power over the past several decades – will new technologies and approaches further this trend? Or could they be shaped to arrest it? These are not simple questions.

It is also easy to dismiss these efforts. This will be neither the first nor the last time people talk about open government. Indeed, there is a wonderfully cynical episode of Yes, Minister from 1980 titled "Open Government." More recently, various revelations about surveillance and national governments' desire to snoop on our every email and phone call reveal much about what is both opaque and to be feared about our governments. Such cynicism is both healthy and necessary. It is also a reason why we should demand more.

Open government is not something we will ever fully achieve. But I do hope that it can serve as an objective and a constantly critical lens for thinking about what we should demand. I can’t speak for the other panelists of the task force, but that will be how I approach my work.

David Eaves is a public policy entrepreneur, open government activist and negotiation expert. He is a member of the Ontario government’s new Engagement Task Force.

Government Procurement Reform – It matters

Earlier this week I posted a slidecast on my talk to Canada’s Access to Information Commissioners about how, as they do their work, they need to look deeper into the government “stack.”

My core argument was that decisions about what information gets made accessible are no longer best managed at the end of a policy development or program delivery process, but rather should be embedded in it. This means monkeying around and ensuring there is capacity to export government information and data from the tools (e.g. software) government uses every day. Logically, this means monkeying around in procurement policy (see slide below), since that is where the specs for the tools public servants use get set. Trying to bake "access" into processes after the software has been chosen is, well, often an expensive nightmare.

Gov stack

Privately, one participant from a police force came up to me afterward and said that I was simply guiding people to another problem – procurement. He is right. I am. Almost everyone I talk to in government feels that procurement is broken. I've said as much myself in the past. Clay Johnson is someone who has thought about this more than most; here he is below at the Code for America Summit with a great slide (and talk) about how the current government procurement regime rewards all the wrong behaviours and, often, all the wrong players.

Clay Risk profile

So yes, I'm pushing the RTI and open data community to think about procurement on purpose. Procurement is borked. Badly. Not just from a wasted-tax-dollars perspective, or even just from a service delivery perspective, but also because it doesn't serve the goals of transparency well. Quite the opposite. More importantly, it isn't going to get fixed until more people start pointing out that it is broken and start contributing to solving this major bottleneck of a problem.

I highly, highly recommend reading Clay Johnson’s and Harper Reed’s opinion piece in today’s New York Times about procurement titled Why the Government Never Gets Tech Right.

All of this becomes more important if the White House (and other governments at all levels) has any hope of executing on its digital strategy (image below). There is going to be a giant effort to digitize much of what governments do, and a huge number of opportunities for finding efficiencies and improving services are going to come from this. However, if all of this depends on multi-million (or worse, 10- or 100-million) dollar systems and websites we are, to put it frankly, screwed. The future of government isn't to be (continue to be?) taken over by some massive SAP implementation that is so rigid and controlled it gives governments almost no opportunity to innovate. And yet this is the future our procurement policies steer us toward. A future with only a tiny handful of possible vendors, a high risk of project failure and highly rigid and frail systems that are expensive to adapt.

Worse there is no easy path here. I don’t see anyone doing procurement right. So we are going to have to dive into a thorny, tough problem. However, the more governments that try to tackle it in radical ways, the faster we can learn some new and interesting lessons.

Open Data WH

The Real News Story about the Relaunch of data.gc.ca

As many of my open data friends know, yesterday the government launched its new open data portal to great fanfare. While there is much to talk about there – something I will dive into tomorrow – that was not the only thing that happened yesterday.

Indeed, I did a lot of media yesterday between flights and only after it was over did I notice that virtually all the questions focused on the relaunch of data.gc.ca. Yet it is increasingly clear to me that the much, much bigger story of yesterday was the Prime Minister announcing that Canada would adopt the Open Data Charter.

In other words, Canada just announced that it is moving towards making all government data open by default. Moreover, it even made commitments to make specific “high value” data sets open in the next couple of years.

As an aside, I don't think the Prime Minister's office has ever mentioned open data before – as far as I can remember – so that was interesting in and of itself. But what is still more interesting is what the Prime Minister committed Canada to. The open data charter commits the government to making data open by default, as well as to four other principles:

  • Quality and Quantity
  • Useable by All
  • Releasing Data for Improved Governance
  • Releasing Data for Innovation

In some ways Canada has effectively agreed to implement the equivalent of the Presidential Executive Order on Open Data the White House announced last month (and that I analyzed in this blog post). Indeed, the charter is more aggressive than the executive order, since it goes on to lay out the need to open up not just future data, but also current "high value" data sets. Included among these are data sets the Open Knowledge Foundation has been seeking to get opened via its open data census, as well as some data sets I and many others have argued should be made open, such as the company/business register. Other suggested high-value data sets include data on crime, school performance, energy and environmental pollution levels, energy consumption, government contracts, national budgets, health prescription data and many, many others. Also included on the list… postcodes – something we are presently struggling with here in Canada.

But the charter wasn’t all the government committed to. The final G8 communique contained many interesting tidbits that again, highlighted commitments to open up data and adhere to international data schemas.

Among these were:

  • Corporate Registry Data: There was a very interesting section on “Transparency of companies and legal arrangements” which is essentially on sharing data about who owns companies. As an advisory board member to OpenCorporates, this was music to my ears. However, the federal government already does this, the much, much bigger problem is with the provinces, like BC and Quebec that make it difficult or expensive to access this data.
  • Extractive Industries Transparency Initiative: A commitment that “Canada will launch consultations with stakeholders across Canada with a view to developing an equivalent mandatory reporting regime for extractive companies within the next two years.” This is something I fought to get included in our OGP commitment two years ago but failed to achieve. Again, I’m thrilled to see this appear in the communique and look forward to the government’s action.
  • International Aid Transparency Initiative (IATI) and Busan Common Standard on Aid Transparency: A commitment to make aid data more transparent and downloadable by 2015. Indeed, with all the G8 countries agreeing to take this step, it may be possible to get greater transparency around who is spending what money where on aid. This could help identify duplication and support assessments of effectiveness. Given how precious aid dollars are, this is a very welcome development. (h/t Michael Roberts of Acclar.org)
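What makes the IATI commitment workable is that IATI data is XML published against a shared schema, so activities from different donors can be compared and aggregated. A minimal sketch of reading one activity (the element names follow the IATI standard; the values are invented):

```python
import xml.etree.ElementTree as ET

# A tiny, invented IATI activity using real IATI element names.
sample = """<iati-activities version="2.03">
  <iati-activity>
    <iati-identifier>CA-3-EXAMPLE-001</iati-identifier>
    <title><narrative>Clean water project</narrative></title>
    <recipient-country code="KE"/>
  </iati-activity>
</iati-activities>"""

root = ET.fromstring(sample)
for activity in root.findall("iati-activity"):
    ident = activity.findtext("iati-identifier")
    title = activity.findtext("title/narrative")
    country = activity.find("recipient-country").get("code")
    print(ident, title, country)
```

Because every publisher uses the same element names, spotting duplication – two donors funding the same activity in the same country – becomes a simple join rather than a manual reconciliation exercise.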

So, lots of commitments, some on the vaguer side (the open data charter) but some very explicit and precise. And that is the real story of yesterday: not that the country has a new open data portal, but that a lot more data is likely going to be put into that portal over the next 2-5 years. And a tsunami of data could end up in it over the next 10-25 years. Indeed, so much data that I suspect a portal will no longer be a logical way to share it all.

And therein lies the deeper business and government story in all this. As I mentioned in my analysis of the White House Executive Order that made open data the default, the big change here is in procurement. If implemented, this could have a dramatic impact on vendors and suppliers of equipment and computers that collect and store data for the government. Many vendors try to find ways to make their data difficult to export and share so as to lock the government into their solution. Again, if (and this is a big if) the charter is implemented, it will hopefully require a lot of companies to rethink what they offer to government. This is a potentially huge story, as it could disrupt incumbents and lead either to big reductions in the costs of procurement (if done right) or to big increases and the establishment of the same, or new, impossible-to-work-with incumbents (if done incorrectly).

There is potentially a tremendous amount at stake in how the government handles the procurement side of all this, because whether it realizes it or not, it may have just completely shaken up the IT industry that serves it.


Postscript: One thing I found interesting about the G8 communique was how many times commitments about open data and open data sets occurred in sections that had nothing to do with open data. It will be interesting to see if that trend continues at the next G8 meeting. Indeed, I wouldn’t be surprised if a specific open data section disappears and these references instead just become part of various issue-related commitments.


The South -> North Innovation Path in Government: An Example?

I’ve always felt that a lot of innovation happens where resources are scarcest. Scarcity forces us to think differently, to be efficient and to question traditional (more expensive) models.

This is why I’m always interested to see how local governments in developing economies are handling various problems. There is always an (enormous) risk that these governments will be lured into doing things the way they have been done in developed economies (hello SAP!). Sometimes this makes sense, but often newer, disruptive and cheaper ways of accomplishing the goal have emerged in the interim.

What I think is really interesting is when a trend started in the global south migrates to the global north. I think I may have just spotted one example.

The other week the City of Boston announced its City Hall to Go trucks – mobile vans that, like food trucks, will drive around the city and be at various civic events available to deliver citizen services on the go! See the video and “menu” below.

 

[Image: City Hall to Go menu]

This is really cool. In Vancouver we have a huge number of highly successful food carts. It is not hard to imagine an experiment like this here as well – particularly in underserved neighborhoods or at the numerous public festivals and food markets that take place across the city.

But, as the title of this post suggests, Boston is not the first city to do this. This United Nations report points out how the state government of Bahia started to do something similar in the mid 90s in the state capital of Salvador.

In 1994 the Government of Bahia hosted the first of several annual technology fairs in the state capital, Salvador. A few government services were offered there, using new ICT systems (e.g., issuing identification cards). The service was far more efficient and well-received by the public. The idea was then raised: Why not deliver services this way on a regular basis?

…A Mobile Documents SAC also was developed to reach the most remote and deprived communities in Bahia. This Mobile SAC is a large, 18-wheel truck equipped with air-conditioning, TV set, toilets, and a covered waiting area. Inside the truck, four basic citizenship services are provided: issuance of birth certificates, identification card, labor identification card, and criminal record verification.

I feel very much like I’ve read about smaller trucks delivering services in other cities in Brazil as well – I believe one community in Brazil had mobile carts with computers on them that toured neighborhoods so citizens could more effectively participate in online petitions and crowdsourcing projects being run by the local government.

I’m not sure if the success of these projects in developing economy cities influenced the thinking in Boston – if yes, that is interesting. If not, it is still interesting: it suggests that the thinking and logic behind this type of innovation is occurring in several cities simultaneously, even when those cities have markedly different levels of GDP per capita and internet access (among many other things). My hope is that those in government will be more and more willing to look at how their counterparts elsewhere in the world – no matter where – are doing things. Money is tight for governments everywhere, so good ideas may be most likely to come from those who feel the burden of costs the greatest.

Proactive Disclosure – An Example of Doing it Wrong from Shared Service Canada

Just got flagged about this precious example of doing proactive disclosure wrong.

So here is a Shared Services Canada website dedicated to the Roundtable on Information Technology Infrastructure. Obviously this is a topic of real interest to me – I write a fair bit about delivering (or failing to deliver) government services online effectively. I think it is great that Shared Services Canada is reaching out to the private sector to try to learn lessons. Sadly, some of the links on the site didn’t work for me, specifically the important-sounding: Summary of Discussions: Shared Services Canada Information and Communications Technology Sector Engagement Process.

But that is not the best part. Take a look at the website below. In one glance, the entirety of the challenge of rethinking communications and government transparency is nicely summed up.

[Image: screenshot of the Shared Services Canada roundtable page]

Apparently, if you want a copy of the presentation the Minister made to the committee you have to request it.

That’s odd since, really, the cost of making it downloadable is essentially zero, while the cost of emailing someone and having them get it back to you is, well, a colossal waste of my time and that public servant’s. (Indeed, to demonstrate this to the government, I hope that every one of my readers requests this document.)

There are, in my mind, two explanations for this. The first, more ominous one, is that someone wants to create barriers to getting this document. Maybe that is the case – who knows.

The second, less ominous, but in some ways more depressing answer is that this is simply standard protocol, or worse, that no one involved in this site has the know-how or access rights to upload the document.

Note added 6 mins after posting: There is also a third reason, more innocuous than reasons one and two: that the government cannot post the document unless it is in both official languages. And since this presentation is (likely) only available in English, it cannot be posted. This actually feels the most likely, so I will be teeing up a whole new post shortly on bilingualism and transparency. The number of times I’m told a document or data set can’t be proactively shared because of language issues is frustratingly frequent. I’ve spoken to the Language Commissioner about this and believe more dialogue is required. Bilingualism cannot be an excuse for a poor experience, or worse, opaque government.

In either case, it is a sad outcome. Either our government is maliciously trying to make it difficult for Canadians to get information (true of most governments) or it doesn’t know how to share it.

Of course, you may be saying… but David – who cares if there is an added step to getting this document that is slightly inconvenient? Well, let me remind you: THIS IS SHARED SERVICES CANADA AND IT IS ABOUT A COMMITTEE FOCUSED ON DELIVERING ONLINE SERVICES (INTERNALLY AND EXTERNALLY) MORE EFFECTIVELY. If there were one place where you wanted to show you were responsive, proactive and reducing the transaction costs to citizens… the kind of approach you were going to use to make all government services more efficient and effective… this would be it.

The icing on the cake? There is that beautiful “transparency” button right below the text that talks about how the government is interested in proactive disclosure (see screenshot below). I love the text here – this is exactly what I want my government to be doing.

And yet this experience, while I’m sure it conforms to the letter of the policy, feels like it violates pretty much everything about the spirit of proactive disclosure. This is, after all, a document that has already been made public… and now we are requiring citizens to request it.

We have a lot of work to do.

Re-Architecting the City by Changing the Timelines and Making it Disappear

A couple of weeks ago I was asked by one of the cities near where I live to sit on an advisory board around the creation of their Digital Government strategy. For me the meeting was good, since I felt that a cohort of us on the advisory board were really pushing the city into a place of discomfort (something you want an advisory board to do, in certain ways). My sense is a big part of that conversation had to do with a subtle gap between the city staff and some of the participants over what a digital strategy should deal with.

Gord Ross (of Open Roads) – a friend and very smart guy – and I were debriefing afterwards about where and why the friction was arising.

We had been pushing the city hard on its need to iterate more and use data to drive decisions. This was echoed by some of the more internet-oriented members of the board. But at one point I got healthy pushback from one of the city staff. How, they asked, can I iterate when I’ve got 10-to-60-year timelines that I need to plan around? I simply cannot iterate when some of the investments I’m making are that long-term.

Gord raised Stewart Brand’s building layers as a metaphor, which I think sums up the differing views nicely.

Brand presents his basic argument in an early chapter, “Shearing Layers,” which argues that any building is actually a hierarchy of pieces, each of which inherently changes at different rates. In his business-consulting manner, he calls these the “Six S’s” (borrowed in part from British architect and historian F. Duffy’s “Four S’s” of capital investment in buildings).

The Site is eternal; the Structure is good for 30 to 300 years (“but few buildings make it past 60, for other reasons”); the Skin now changes every 15 to 20 years due to both weathering and fashion; the Services (wiring, plumbing, kitchen appliances, heating and cooling) change every seven to 15 years, perhaps faster in more technological settings; Space Planning, the interior partitioning and pedestrian flow, changes every two or three years in offices and lasts perhaps 30 years in the most stable homes; and the innermost layers of Stuff (furnishings) change continually.

My sense is the city staff are trying to figure out what the structure, skin and services layers should be for a digital plan, whereas a lot of us in the internet/tech world live occasionally in the services layer but mostly in the space planning and stuff layers, where the time horizons are WAY shorter. It’s not that we have to think that way, it is just that we have become accustomed to thinking that way… doubly so since so much of what works on the internet isn’t really “planned”, it is emergent. As a result, I found this metaphor useful for trying to understand how we can end up talking past one another.
It also goes to the heart of what I was trying to convey to the staff: that I think there are a number of assumptions governments make about what has been a 10- or 50-year lifecycle versus what that lifecycle could be in the future.

In other words, a digital strategy could allow some things to “phase change” from being, say, in the skin or service layer to operating on the faster timeline, lower capital cost and increased flexibility of the space planning layer. This could have big implications for how the city works. If you are buying software or hardware on the expectation that you will only have to do it every 15 years, your design parameters and expectations will be very different than if it is designed for 5 years. It also has big implications for the systems that you connect to or build around that software. If you accept that the software will constantly be changing, easy integration becomes a necessary feature. If you think you will have things for decades then, to a certain degree, stability and rigidity are a byproduct.

This is why, if the choice is between placing a 30-year bet (e.g. architecting something to be in the skin or services layer) and placing a 5-year bet (architecting it to be in the space planning or stuff layer), you should put as much as possible in the latter. If you re-read my post on the US government’s Digital Government strategy, this is functionally what I think they are trying to do. By unbundling the data from the application, they are pushing the data up to the services layer of the metaphor, while pushing the applications built upon it down to the space planning and stuff layers.

This is not to say that nothing should be long term, or that everything long term is bad. That is not what I hope to convey. Rather, by being strategic about what we place where, we can foster really effective platforms (services) that can last for decades (think data) while giving ourselves a lot more flexibility around what gets built around them (think applications, programs, etc…).
The Goal
The reason you want to do all this is that you actually want to give the city the flexibility to a) compete in a global marketplace and b) make itself invisible to its citizens. I hinted at this goal the other day at the end of my piece in TechPresident on the UK’s digital government strategy.
On the competitive front, I suspect that across Asia and Africa some 200 cities, and maybe a lot more, are going to get brand new infrastructure over the coming 100 years. Heck, some of these cities are even being built from scratch. If you want your city to compete in that environment, you’d better be able to offer new and constantly improving services in order to keep up. If not, others may create efficiencies and discover improvements that give them structural advantages in the competition for talent and other resources.
But the other reason is that this kind of flexibility is, I think, critical to making (what Gord now has me referring to as the big “C”) City disappear. I like my government services best when they blend into my environment. If you live a privileged Western World existence… how often do you think about electricity? Only when you flick the switch and it doesn’t work. That’s how I suspect most people want government to work: seamless, reliable, designed into their lives, but not in the way of their lives. But more importantly, I want the “City” to be invisible so that it doesn’t get in the way of my ability to enjoy, contribute to, and be part of the (lower case) city – the city that we all belong to. The “city” as that messy, idea-swapping, cosmopolitan, wealth- and energy-generating, problematic space, the organism humans create wherever they gather in large numbers. I’d rather be writing this blog post on a WordPress installation that does a lot of things well but invisibly than monkeying around with scripts, plugins or some crazy server language I don’t want to know. Likewise, the less time I spend on “the City,” and the more seamlessly it works, the more time I can spend focused on “the city,” doing the things that make life more interesting and hopefully better for myself and the world.
Sorry for the rambling post. But digesting a lot of thoughts. Hope there were some tasty pieces in that for you. Also, opaque blog post title eh? Okay bed time now.

The UK's Digital Government Strategy – Worth a Peek

I’ve got a piece up on TechPresident about the UK Government’s Digital Strategy which was released today.

The strategy (and my piece!) are worth checking out. They are saying a lot of the right things – useful stuff for anyone in an industry or sector that has been conservative vis-a-vis online services (I’m looking at you, governments and banking).

As I note in the piece… there is reason we should expect better:

The second is that the report is relatively frank, as far as government reports go. The website that introduces the three reports is emblazoned with an enormous title: “Digital services so good that people prefer to use them.” It is a refreshing title that amounts to a confession I’d like to see from more governments: “sorry, we’ve been doing it wrong.” And the report isn’t shy about backing that statement up with facts. It notes that while the proportion of Internet users who shop online grew from 74 percent in 2005 to 86 percent in 2011, only 54 percent of UK adults have used a government service online. Many of those have only used one.

Of course the real test will come with execution. The BC Government, the White House and others have written good reports on digital government, but rolling it out is the tricky part. The UK Government has pretty good cred as far as I’m concerned, but I’ll be watching.

You can read the piece here – hope you enjoy!

Playing with Budget Cutbacks: On a Government 2.0 Response, Wikileaks & Analog Denial of Service Attacks

Reflecting on yesterday’s case study in broken government, I had a couple of additional thoughts that were fun to explore but simply did not make sense to include in the original post.

A Government 2.0 Response

Yesterday’s piece was all about how Treasury Board’s new rules are likely to increase the velocity of paperwork at a cost far greater than the savings from eliminating excess travel.

One commentator noted a more Gov 2.0-type solution that I’d been mulling over myself. Why not simply treat the government travel problem as a big data problem? Surely there are tools that would allow you to look at government travel in aggregate, maybe mash it up against GEDS data (job title and department information), and quickly identify outliers and other high-risk travel worthy of closer inspection. I’m not talking about people who travel a lot (that wouldn’t be helpful) but rather people who engage in unusual travel that is hard to reconcile with their role.
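As a sketch of what that kind of analysis might look like – and to be clear, the records, job titles and dollar figures below are entirely hypothetical, and GEDS only supplies the directory data, not travel records – one could flag any trip whose cost is a statistical outlier relative to trips by people with the same job title:

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical trip records, imagined as already joined against
# directory data (e.g. GEDS job titles) for each traveller.
trips = [
    {"employee": "A", "title": "Policy Analyst", "cost": 800},
    {"employee": "B", "title": "Policy Analyst", "cost": 950},
    {"employee": "C", "title": "Policy Analyst", "cost": 900},
    {"employee": "D", "title": "Policy Analyst", "cost": 850},
    {"employee": "E", "title": "Policy Analyst", "cost": 7200},
    {"employee": "F", "title": "Regional Inspector", "cost": 2100},
    {"employee": "G", "title": "Regional Inspector", "cost": 1900},
    {"employee": "H", "title": "Regional Inspector", "cost": 2300},
]

def flag_unusual_trips(trips, z_threshold=3.0):
    """Flag trips whose cost is unusually high for that job title."""
    by_title = defaultdict(list)
    for t in trips:
        by_title[t["title"]].append(t["cost"])

    flagged = []
    for t in trips:
        peers = list(by_title[t["title"]])
        peers.remove(t["cost"])  # compare against peers, not oneself
        if len(peers) < 3:
            continue  # too few comparable trips to judge
        mu, sigma = mean(peers), stdev(peers)
        if sigma > 0 and (t["cost"] - mu) / sigma > z_threshold:
            flagged.append(t)
    return flagged

for t in flag_unusual_trips(trips):
    print(t["employee"], t["title"], t["cost"])  # flags E's $7,200 trip
```

A real analysis would of course need richer features than cost alone (destination, frequency, timing), but the point stands: this targets the anomalies rather than punishing all travel equally.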

While I’m confident that many public servants would find such an approach discomforting, it would be entirely within the purview of their employer to engage in such analysis. It would also be far more effective, targeted and (I suspect, over time) a better deterrent than the kind of blanket policy I wrote about yesterday, which is just as (if not more) likely to eliminate necessary travel as unnecessary travel. Of course, if you just want to eliminate travel because you think any face-to-face, group or in-person learning is simply not worth the expense – then the latter approach is probably more effective.

Wikileaks and Treasury Board

Re-reading yesterday’s post, I had a faint twinge of familiarity. I suddenly realized that my analysis of the impact of the travel restriction policy on government parallels the goal that drove Assange to create wikileaks. If you’ve not read the Zunguzungu blog post exploring Assange’s writings about the “theory of change” of wikileaks, I cannot encourage you enough to go and read it. At its core lies a simple assessment: that wikileaks is trying to shut down the “conspiracy of the state” by making it harder for information to be transmitted effectively within the state. Of course, restricting travel is not nearly the same as making it impossible for public servants to communicate, but it does compromise the ability to coordinate and plan effectively – as such, the essay is illuminating in thinking about how these types of policies impact not the hierarchy of an organization, but the hidden and open networks (the secret government) that help make the organization function.

Read this extract for a taste:

This is however, not where Assange’s reasoning leads him. He decides, instead, that the most effective way to attack this kind of organization would be to make “leaks” a fundamental part of the conspiracy’s  information environment. Which is why the point is not that particular leaks are specifically effective. Wikileaks does not leak something like the “Collateral Murder” video as a way of putting an end to that particular military tactic; that would be to target a specific leg of the hydra even as it grows two more. Instead, the idea is that increasing the porousness of the conspiracy’s information system will impede its functioning, that the conspiracy will turn against itself in self-defense, clamping down on its own information flows in ways that will then impede its own cognitive function. You destroy the conspiracy, in other words, by making it so paranoid of itself that it can no longer conspire:

This is obviously a totally different context – but it is interesting to see that one way to alter an organization is to change the way in which information flows around it. This was not – I suspect – the primary goal of the Treasury Board directive (it was a cost-driven measure), but the above paragraph is an example of the unintended consequences. Less communication means the ability of the organization to function could be compromised.

Bureaucratic Directives as an Analog Denial of Service Attack

There is, of course, another more radical way of thinking about the Treasury Board directive. One of the key points I tried to make yesterday was that the directive was likely to increase the velocity of bureaucratic paperwork and tie up a larger amount of junior and, more precious still, senior resource time, all while actually allowing less work to be done.

Now if a government department were a computer, and I were able to make it send more requests that slowed its CPU (decision-making capacity) and thus made other functions harder to perform – and in extreme cases actually prevented any work from happening – that would be something pretty similar to a Denial of Service attack.
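To push the analogy with a toy model (the numbers here are invented purely for illustration): if a fixed pool of senior sign-off capacity is consumed by per-task approval requests, raising the approvals required per task starves the actual work, exactly like requests saturating a CPU.

```python
def tasks_completed(approval_capacity, tasks_requested, approvals_per_task):
    """Toy model: a fixed pool of senior sign-offs is consumed by
    approval requests; only fully approved tasks get done."""
    approvals_available = approval_capacity
    done = 0
    for _ in range(tasks_requested):
        if approvals_available >= approvals_per_task:
            approvals_available -= approvals_per_task
            done += 1
    return done

# Same capacity, same demand; only the per-task overhead changes.
print(tasks_completed(100, 80, 1))  # 80 tasks done
print(tasks_completed(100, 80, 4))  # 25 tasks done
```

Quadrupling the approval overhead doesn’t trim the work by a sliver – it cuts throughput by more than two thirds, which is the “denial of service” effect in miniature.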

Again, I’m not claiming that this was the intent, but it is a fun and interesting lens by which to look at the problem. More to explore here, I’m sure.

Hopefully this has bent a few minds and helped people see the world differently.