Category Archives: technology

If you are away from your cell phone… Awesome South African Online Ad

Was doing some research for a story I am writing over at TechPresident, which had me visiting the site of Mxit, a social network built largely for mobile phones and used by urban youths in South Africa.

Check out the landing page for the site (note the red circle):

[Image: Mxit landing page, with the mobile-only notice circled in red]

So, where I grew up, “Never Let the Conversation End” meant porting the “conversation” over to a mobile device. In South Africa, for these young people, it is the opposite. Here’s a zoom-in, for those who couldn’t read it…

It’s not that we needed more evidence that internet access in emerging economies would arrive not through one laptop per child or telecentres, but via cellphones. All that said, it is still very striking when you see a manifestation of that logical conclusion spread out right in front of you.

Today there are over 500 million cell phones in Africa and, according to the Guardian, that number is growing fast. As the percentage of smartphones increases as well, I expect more of these moments that seem foreign to me, someone who started out on the non-mobile internet.

Unstructured Thinking on Open Data: A response to Tom Slee

Apologies for any typos – I’d like to look this over more, but I’ve got to get to other work.

Tom Slee has a very well-written blog post offering a critical perspective on open data. I encourage you to go read it – but also to dive into the comments below it, which reflect one of the finer discussions I’ve seen below a blog post and articulate many of the critiques I would have had of Tom’s post, in ways more articulate than I would have managed (and frankly, how often have you heard someone say that about comments?).

I start with all this because a) I think Tom and the team at Crooked Timber should be congratulated for fostering a fantastic online environment, and b) I’m about to dive in and disagree with Tom, which risks being confused with not liking or respecting him – definitely not the case. I don’t know him, so I can’t speak to his character, but I sense nothing but honest and good intentions in his post, and I have a lot of respect for his mind. What I particularly respect is the calm and good-natured way he responds to comments – especially when he concedes that a line of reasoning was flawed.

This is particularly refreshing given that Tom’s original piece on this subject – that the open data movement was a joke, of which this piece is a much more refined version – was fairly harsh and dismissive in tone. That early piece lays bare some of the underlying assumptions also embedded in his newer piece. Tom has a very clear political perspective. He believes in a world where big companies are bad and smaller organizations are good, even if that comes at the expense of a more effective or efficient service. Indeed, as one commenter notes (and I agree), Tom seems uncomfortable with the profit motive altogether. I don’t judge Tom for his perspective, but his concerns aren’t purely about open data – they are about a broader political agenda which open data may, or may not, end up serving. And to his credit, I think Tom is pretty clear about this. Open data is not a problem in and of itself to him; it is only a problem if it can be used to support an agenda that he believes trumps all others – enhancing freer and more open markets and/or putting the state’s role in certain activities at risk.

There is, however, no discussion of the cost of closed data – or of the powerful interests, mirror images of the open data movement’s free-market doppelgängers, that like to keep it that way. No acknowledgement of the enormous inequalities embedded in the status quo, where government data is controlled by government agents and, more often than not, sold to those who can pay for it. I have no doubt that open data will create new winners and losers – but let’s not pretend the status quo doesn’t support many winners and create big losers either. Our starting point is not neutral. This sentiment came out in a rather unfortunately terse statement from one commenter that, if one strips the emotion out of it, raises a good point:

Um, so you don’t like open data, huh? At the very best, your analysis is radically incomplete for it doesn’t include any of the costs imposed by closed data. Anyone who has had to interact with the land record bureaucracy in places like India will tell you about the costs in time, bribes, lost work hours it takes to navigate a closed data environment. Also, there are plenty of stories in the Indian media of how farmers won disputes with authorities using GIS data and exposed land-grabs (because the data is open).

I remember Aneesh Chopra sharing a similar story (sorry, I couldn’t find a better link). What I find frustrating is that open data advocates get accused of being techno-utopians, praising technology when things work and blaming officials when things go wrong… but Slee seems to be doing the same in reverse. When people use bribery to hijack a GIS process to expropriate land, I blame corruption, not open data. And when the same GIS system allows a different set of poor farmers to securitize their land, get loans and stop using loan sharks, I praise the officials who implemented it and the legal system for being effective (an important prerequisite for open data). Slee claims techno-utopians fetishize open data; that’s true, some of them do (and I have been occasionally guilty). But he fetishizes the evils of private interests (and by extension, open data).

The more subtle and interesting approach Slee takes is to equate open data – and in particular standards – with cultural creative products.

To maintain cultural diversity in the face of winner-take-all markets, governments in smaller countries have designed a toolbox of interventions. The contents include production subsidies, broadcast quotas, spending rules, national ownership, and competition policy. In general, such measures have received support from those with a left-leaning outlook.

The reason to do this is that it allows Slee to link open data to a topic I sense he really dislikes – globalization – and to explicitly connect open data to industrial policy which, from what I can gather, he believes should protect local developers from companies like Google. This leads to fairly interesting ideas like: “if Google is not going to pay or negotiate with all those transit agencies (#40) then that’s fine by me: perhaps that will let some of those apps being developed by hobbyists at hackathons gain some usage within their own transit area.” It is worth noting that a hobbyist doesn’t, by definition, make a sustaining wage from their work, defeating the cultural products/local economy link. I’m personally aware of this as I’ve had a number of hobby open data projects.

But what’s worse is that the scenario he describes is unlikely to emerge. I’m not sure citizens, or their transit authorities, will want to rely on a hobbyist for a service I know I (and many others) see as critical. More likely, one or two companies will become the app maker of choice for most transit authorities (so Google doesn’t win, just some other new big company), or many transit authorities will build something in house that is, frankly, of highly varied quality, since this is not their core competency. This world isn’t hard to imagine, since it is the world we lived in before Google Maps started sharing transit results.

Moreover, Slee’s vision will probably not reward local firms or punish large companies – it will just reward different ones. Indeed, it is part of their strategy. Apple just announced that it won’t support the General Transit Feed Specification in its new iOS 6 maps and will instead push users to an app. This should be great news for hobbyist developers. The likely outcome, however – as Clay Johnson notes – is that transit authorities have less incentive to do open data, since they can force users to use “their” app. Hard to see how local hobbyists benefit. In any of the scenarios outlined above it is harder still to see how citizens benefit. Again, our transit future could look a lot like 2005. Indeed, one need only look at a city like Vienna to see the future. So yes, Google hurts, but to whose benefit? And it is worth noting that with smartphone penetration in the Western world increasing – higher among Blacks and Hispanics than whites, and rising fastest among those making less than $30K – it is not clear to me that it is the wealthy and privileged who are paying the price as they drive home.

But from my reading of Slee’s article this is still a good outcome, since local creators are always better than those who live far away. Indeed, he considers the price small, calling it a mere loss of consumer efficiency and not a loss of civic improvement. I couldn’t disagree more. If figuring out how to catch the bus becomes harder, it has a material impact on my sense of the civic infrastructure, and definitely on my capacity to utilize it. Indeed, when it comes to real-time data like that in GTFS version two, research tells us it actually attracts more people to the bus.

But what’s frustrating is that the choice Slee presents is false. Yes, open data can help foster bigger players. Google is a giant. But doing open data (and more importantly, standardizing it) has also enabled local and vibrant competitors to emerge much more easily. Some are hobbyists, others are companies. Indeed, the total range of innovation that is possible because of the standard is amazing. I would strongly encourage you to read David Turner’s rebuttals in the comments (such as this one) – they are strong, especially around standardization, and don’t receive the attention I think they warrant.
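Part of why the GTFS standard lowers the barrier to entry so dramatically is that a feed is just a handful of plain CSV files that any hobbyist can parse in an afternoon. A minimal sketch in Python (the stop data below is invented for illustration, and the distance measure is a deliberately crude approximation):

```python
import csv
import io
import math

# A GTFS feed is plain CSV -- this stops.txt snippet is invented for illustration.
STOPS_TXT = """\
stop_id,stop_name,stop_lat,stop_lon
1001,Main St @ 1st Ave,49.2820,-123.1200
1002,Main St @ 5th Ave,49.2770,-123.1005
1003,Broadway @ Granville,49.2635,-123.1385
"""

def load_stops(text):
    """Parse the contents of a GTFS stops.txt file into a list of dicts."""
    stops = []
    for row in csv.DictReader(io.StringIO(text)):
        row["stop_lat"] = float(row["stop_lat"])
        row["stop_lon"] = float(row["stop_lon"])
        stops.append(row)
    return stops

def nearest_stop(stops, lat, lon):
    """Return the stop closest to (lat, lon), using a crude
    equirectangular approximation (fine at city scale)."""
    def dist(s):
        dlat = s["stop_lat"] - lat
        dlon = (s["stop_lon"] - lon) * math.cos(math.radians(lat))
        return math.hypot(dlat, dlon)
    return min(stops, key=dist)

stops = load_stops(STOPS_TXT)
closest = nearest_stop(stops, 49.2810, -123.1190)
print(closest["stop_name"])
```

That’s roughly the core of every “where’s my nearest bus stop” app, which is exactly the point: once the data is standardized, the hard part is already done.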

Again, I strongly encourage you to go read Slee’s piece. It is filled with important insights that open data advocates need to digest. His opening story around the Map Kibera project and the Dalit’s claim is an important case study in how not to engage in public policy or run a technology project in the developing world. The caveat, however, is that these lessons would be true whether the data in those projects was going to be open or closed. His broader point – that there are corporate, free-market players seeking to exploit the open data movement – should be understood and grappled with as well. However, I’d invite him to consider the reverse too. There are powerful corporate interests that benefit from closed data. He has his doppelgängers too.

I also want to note that I’m comfortable with many of Slee’s critiques of open data because I both share and don’t share his political goals. I definitely want to live in a world where we strive for equality of opportunity and where monopolies are hard to form or stay entrenched. I’m also concerned about the internet’s capacity to foster big winners that could become uncontrollable. I’m less interested, however, in a world that seeks to support middlemen who extract value while contributing very little. So I’m looking for places where open data can help serve both these goals. That may mean that some data never gets made open – and I’m comfortable with that. But more importantly, my sense is that – in many circumstances – the right way to deal with these problems is to keep the barriers to entry for new entrants as low as possible.

Help OpenNorth Raise 10K to Improve Democracy and Engagement thru Tech

Some of you may know that I sit on the board of directors of OpenNorth – a cool little non-profit that is building tools for citizens, governments and journalists to improve participation and, sometimes, just make it a little bit easier to be a citizen. Here’s a great example of a simple tool they created that others are starting to use – Represent – a database that allows you to quickly figure out all the elected officials that serve the place where you are currently standing.
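For the curious, Represent exposes this lookup as a simple web API: you hand it a latitude/longitude point and it returns the elected officials whose districts contain that point. A rough sketch of building such a query (the endpoint layout here is from my memory of the API and may have changed – treat it as illustrative, not authoritative):

```python
from urllib.parse import urlencode

# Base endpoint as I recall it from the Represent API -- an assumption,
# not authoritative documentation; check the OpenNorth docs before relying on it.
BASE = "https://represent.opennorth.ca/representatives/"

def representatives_url(lat, lon):
    """Build the query URL that asks Represent for every elected official
    whose district contains the given point."""
    return BASE + "?" + urlencode({"point": f"{lat},{lon}"})

# A point in downtown Montreal; fetching this URL would return JSON listing
# the city councillor, MNA, MP, etc. for that spot.
print(representatives_url(45.5017, -73.5673))
```

The elegance of the design is that one request answers the question citizens actually have (“who represents me, right here?”) without them needing to know which riding, ward or district they live in.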

As a humble non-profit, OpenNorth runs on a shoestring, with a lot of volunteer participation. With that in mind we’d like to raise $10,000 this Canada Day. I’ve already donated $100.

The reason?

To sponsor our next project – Citizen Writes – which, inspired by the successful Parliament Watch in Germany, would allow citizens to publicly ask questions both of candidates during elections and of representatives in office. The German site has, since 2004, posed over 140,000 questions from everyday citizens, of which 80% have been answered by politicians. More importantly, such a tool could empower all backbenchers, rebalancing power that is increasingly centralized in Canada.

I encourage you to check out our other projects too – I think you’ll find we are up to all sorts of goodness.

You can read more at the OpenNorth blog, or donate by going here.

Should we Start a Government as Platform Business Association?

I have an idea.

I want to suggest starting a community of disruptive software companies that are trying to sell products to local and regional governments. I know we can make cities better, more participatory, more accessible, to say nothing of saving them money. But to be effective I think we need a common message – an association that conveys why this disruption is in government’s interest, and how it will help them.

Here’s why.

Last year I, along with some friends, incorporated a small company with an innovative approach to messaging that helps cities be smarter, communicate better and serve their citizens more effectively. Where we deploy, citizens love us. It’s a blast to do. (You can read more about our company here – if you work for a local or regional government, I’d love to talk to you.)

We also don’t think we are alone. Indeed, we like to think we are part of a new breed of start-ups – companies we respect like SeeClickFix, Azavea and Citizenvestor – whose DNA is influenced by the likes of Tim O’Reilly and others who talk about government as a platform. Companies that believe there are hyper-low-cost ways to make local government and services more transparent, enable citizen participation and facilitate still more innovation.

Happily, we’ve had a number of early successes: several cities have signed on with us as paying customers. This is no small feat in the municipal space – governments, especially local governments, tend to be risk averse. Selling software as a service (SaaS) in a product category that previously didn’t exist can be a challenge. But it pales in comparison to the two real challenges we confront:

1)   Too Cheap to Buy

For many cities we are, weirdly, too cheap to buy. Our solution tends to cost a couple of thousand dollars a year. I’m talking four digits. In many municipalities, this price breaks the procurement model, which is designed for large purchases like heavy equipment or a large IT implementation. We’re too expensive for petty cash, too cheap for a formal process. I’ve even had a couple of experiences where a city spent significantly more money in staff time talking to and evaluating us than it would have cost to simply deploy us for a year and try us out. We need a smarter context for talking to procurement specifically, and local government in general. That might be easier in a herd.

2)   Marketing

As a company that tries to keep its product as cheap as possible, we have a limited budget to invest in marketing and sales. We could charge more to pay for that overhead, but we’d prefer to be cheaper, mostly because we don’t believe taxpayers should pay for the parts of our business that don’t really give them value. In short, we need a better way of letting cities know we exist – one that is cheap and allows our product to reflect its value, not an advertising budget.

As I look at our peer group of companies, I have to believe they share similar challenges. So why don’t we band together? A group of small companies could potentially do a virtual trade show that not only could attract more clients than any of us could on our own, but would attract the right clients: local governments that are hungry for next generation online services.

So who would I imagine being part of this association? I don’t think the criteria are complex, so here are some basic ideas that come to mind:

  • Software focused
  • Disruptively low or hyperlow-cost: here I imagine the cost is under 35 cents per citizen per year.
  • Following a SaaS or open source model
  • Low barriers to entry: any operational data your system creates is open and available to staff and, if requested by the city, to citizens as well
  • Citizen-centric: In addition to open data, your service should, whenever relevant or possible, make it as easy as possible for citizens to use, engage or participate.

Is it the Government as Platform Business Association? Or the Gov 2.0 Software Association? Or maybe the League of Awesomely Lean Gov Start-Ups. I don’t know. But I can imagine a shared branded site; maybe we pool money to do some joint marketing. I love the idea of a package deal – get Recollect, SeeClickFix and OpenTreeMap bundled at a discount! Maybe there is even a little logo that companies who meet the criteria and participate in the group could paste on their website (no worries, I’m not a designer and am not attached to the mock-up below).

[Image: mock-up of a Gov 2.0 business association logo]

The larger point here is that the next generation of civic start-ups – companies that can do software much, much cheaper while enhancing the experience for city staff and residents – has an education challenge on its hands. Cities need to learn that radically small, lean solutions to some problems are emerging. I’m not sure this is the right answer. I know this proposal creates a lot of unanswered questions, but it is an idea I wanted to throw out there.

If you have a company that fits the bill I’d love to hear from you. And if you work for a local or regional government and think this would be helpful, I’d love to hear about that as well.

The End of the World: The State vs. the Internet

Last weekend at FooCamp, I co-hosted a session titled “The End of the World: Will the Internet Destroy the State, or Will the State Destroy the Internet?” What follows are the ideas I opened the session with, plus some additional thoughts I’ve had and that others shared during the conversation. To avoid confusion, I’d like to clarify: a) I don’t claim these questions have never been raised before; I mostly hope this framing can generate useful thought and debate; and b) I don’t believe these are the only two or three possible outcomes; it was just an interesting way of framing some poles so as to generate good conversation.

Introduction

A while back, I thought I saw a tweet from Evgeny Morozov that said something to the effect: “You don’t just go from printing press to Renaissance to iPad; there are revolutions and wars in between you can’t ignore.” Since I can’t find the tweet, maybe he didn’t say it or I imagined it… but it sparked a line of thinking.

Technology and Change

Most often, when people think of the printing press, they think of its impact on the Catholic Church – how it enabled Martin Luther’s complaints to go viral and how the localization of the Bible cut out the need for a middleman – the priest – to connect and engage with God. But if the printing press undermined the Catholic Church, it had the opposite impact on the state. To be fair, heads of state took a beating (see the French Revolution et al.), but the state itself was nimbler and made good use of the technology. Indeed, it is worth noting that the modern notion of the nation state was not conceivable without the printing press. The press transformed the state – scaling up its capacity to command loyalty from citizens and mobilize resources, which, in turn, had an impact on how states related to (and fought with) one another.

In his seminal book Imagined Communities, Benedict Anderson outlined how the printing press allowed the state to standardize language and history. In other words, someone growing up in Marseilles 100 years before the printing press probably had a very different sense of history and spoke a markedly different dialect of French than someone living in Paris during the same period. But the printing press (and more specifically, those who controlled it) allowed a dominant discourse to emerge (in this case, likely the Parisian one). Think standardized dictionaries, school textbooks and curricula, to say nothing of history and entertainment. This caused people who might never have met to share a common imagined history, language and discourse. Do not underestimate the impact this had on people’s identity. As this wonderful quote from the book states: “Ultimately it is this fraternity that makes it possible, over the past two centuries, for so many millions of people, not so much to kill, as willingly to die for such limited imaginings.” In other words, states could now fully dispense with feudal middle managers and harness the power of larger swaths of population directly – a population that might never actually meet, but could nonetheless feel connected to one another. The printing press thus helped create the modern nation state by providing a form of tribalism at scale: what we now call nationalism. This was, in turn, an important ingredient for the wars that dominated the late 19th and early 20th century – think World War I and World War II. This isn’t to say without the printing press, you don’t get war – we know that isn’t true – but the type of total war between 20th century nation states does have a direct line to the printing press.

So yes, the techno-utopian world of: printing press -> Renaissance -> iPad is not particularly accurate.

What you do get is: printing press -> Renaissance -> state evolution -> destabilization of international order -> significant bloodshed -> re-stabilization of international system -> iPad.

I raise all this because, if this is the impact the printing press had on the state, it raises a new question: What will be the impact of the internet on the state? Will the internet be a technology the state can harness to extract more loyalty from its citizens… or will the internet destroy the imagined communities that make the state possible, replacing it with a more nimble, disruptive organization better able to survive the internet era?

Some Scenarios

Note: again, these scenarios aren’t absolutes or the only possibilities; they are designed to raise questions and provoke thinking.

The State Destroys the Internet

One possibility is that the state is as adaptive as capitalism. I’m always amazed at how capitalism has evolved over the centuries. From mercantilism to free market to social market to state capitalism, as a meme it readily adapts to new environments. Perhaps the state is the same – sufficiently flexible to adapt to new conditions. Consequently, one can imagine the state grabbing sufficient control of the internet to turn it into a tool that at best enhances – and at worst doesn’t threaten – citizens’ connection to it. Iran, with its attempt to build a state-managed internal network that will allow it to closely monitor its citizens’ every move, is a scary example of the former. China – with its great firewall – may be an example of the latter. But one need not pick on non-Western states.

And a networked world will provide states – especially democratic ones – with lots of reasons to seize greater control of their citizens’ lives. From organized crime to terrorism to identity theft, governments find lots of reasons to monitor their citizens. This is to say nothing of advanced persistent threats, which create a state of continual online warfare – a sort of modern-day phoney (phishy?) war – between China, the United States, Iran and others. This may be the ultimate justification.

Indeed, as a result of these threats, the United States already has an extensive system for using the internet to monitor its own citizens and even my own country – Canada – tried to pass a law last year to significantly ramp up the monitoring of citizens online. The UK, of course, has just proposed a law whose monitoring provisions would make any authoritarian government squeal with glee. And just last week we found out that the UK government is preparing to cut a blank check for internet service providers to pay for installing the monitoring systems to record what its citizens do online.

Have no doubts, this is about the state trying to ensure the internet serves – or at least doesn’t threaten – its interests.

This is, sadly, the easiest future to imagine, since it conforms with the world we already know – one where states are ascendant. However, this future represents, in many ways, a linear projection – and our world, especially our networked world, rarely behaves in a linear fashion. So we should be careful about confusing familiarity with probability.

The Internet Destroys the State

Another possibility is that the internet undermines our connection with the state. Online we become increasingly engaged with epistemic communities – be it social, like someone’s World of Warcraft guild, or professional, such as an association with a scientific community. Meanwhile, in the physical world, local communities – possibly at the regional level – become ascendant. In both cases, regulations and rules created by the state feel increasingly like an impediment to conducting our day to day lives, commerce and broader goals. Frustration flares, and increasingly someone in Florida feels less and less connection with someone in Washington state – and the common sense of identity, the imagined community, created by the state begins to erode.

This is, of course, hard for many people to imagine – especially Americans. But for many people in the world – including Canadians – the unity of the state is not a carefree assumption. There have been three referenda on breaking up Canada in my lifetime. More to the point, this process probably wouldn’t start in places where the state is strongest (such as in North America); rather, it would start in places where it is weakest. Think Somalia, Egypt (at the moment) or Belgium (which has basically functioned for two years without a government and no one seemed to really notice). Maybe this isn’t a world with no state, but a world with lots of little states (which, to a certain degree, breaks our mold of what we imagine the state to be), or maybe some new organizing mechanism, one which leverages local community identities but can co-exist with a network of diffuse but important transnational identities. Or maybe the organizing unit gets bigger, so that greater resources can be called upon to manage new, network-based threats.

I, like most people, find this world harder to imagine. This is because so many of our assumptions suddenly disappear. If not the state, then what? Who or what protects and manages the internet infrastructure? What about other types of threats – corporate interests, organized crime and cyber-crime, etc.? This is true paradigm-shifting stuff (apologies for use of the word), and frankly, I still find myself too stuck in my Newtonian world and its rules to imagine, or even know, what quantum mechanics will be like. Again, I want to separate imagining the future from its probability. The two are not always connected, and this is why thinking about this future, as uncomfortable and alienating as it may be, is probably an important exercise.

The Internet Rewards the Corporation

One of the big assumptions I often find in people who write or talk about the internet is that the individual is the fundamental unit of analysis. There are good reasons for this – using social media, an individual’s capacity to be disruptive has generally increased. And, as Clay Shirky has outlined, the need for coordinating institutions and managers has greatly diminished. Indeed, Shirky’s blog post on the collapse of complex business models is (in addition to being a wonderful piece) a fantastic description of how a disruptive technology can undermine the capacity of larger, complex players in a system and benefit smaller, simpler stakeholders. Of course, the smaller stakeholder in our system may not be the individual – it may be an actor that is smaller and nimbler than the state, that can foster an imagined community, and that can adopt various forms of marshaling resources, from self-organization to hierarchical management. Maybe it is the corporation.

During the conversation at FooCamp, Tim O’Reilly pressed this point to great effect. It could be that the corporation is actually the entity best positioned to adapt to the internet age: small enough to leverage networks, big enough to generate a community that is actually loyal and engaged.

Indeed, it is easy to imagine a feedback loop that accelerates the ascendance of the corporation. If our imagined communities of nation states cannot withstand a world of multiple narratives and so become weaker, corporations would benefit not just from a greater capacity to adapt; the great counterbalance to their power – state regulation and borders – might simultaneously erode. A world where more and more power – through information, money and human capital – gets concentrated in corporations is not hard to imagine. Indeed, there are many who believe this is already our world. Of course, if corporate conflicts – particularly those across sectors – cannot be mediated peacefully in the places (generally government bodies) where they are today, then corporations may turn much more aggressive. The need to be bigger, to marshal more resources, to have a security division to defend corporate interests, could lead to corporations growing into entities we can barely imagine today. It’s a scary future, but not one that hasn’t been imagined several times in sci-fi novels, and not one I would put beyond the realm of imagination.

The End of the World

The larger point of all this is that new technologies do change the way we imagine our communities. A second and third order impact of the printing press was its critical role in creating the modern nation-state. The bigger question is, what will be the second and third order impacts of the internet – on our communities (real and imagined), our identity and where power gets concentrated?

As different as the outcomes above are, they share one important thing in common: none represents the status quo. In each case, the nature of the state, and its relationship with citizens, shifts. Consequently, I find it hard to imagine a future where the internet does not continue to put real strain on how we organize ourselves, and in turn on the systems we have built to manage this organization. As more and more of those institutions – including potentially the state itself – come under strain, it could very likely push systems – like the international state system – that are presently stable into a place of instability. It is worth noting that after the printing press, one of the first real nation states – France – wreaked havoc on Europe for almost half a century, using its enhanced resources to conquer pretty much everything in its path.

While I am fascinated by technology and believe it can be harnessed to do good, I like to think that I am not – as Evgeny labels them – a techno-utopian. We need to remember that, looking back on our history, the second and third order effects of some technologies can be highly destabilizing, which carries with it real risks of generating significant bloodshed and conflict. Hence the title of this blog post and the FooCamp session: The End of the World.

This is not a call for a renewed Luddite manifesto. Quite the opposite – we are on a treadmill we cannot get off. Our technologies have improved our lives, but they also create new problems that, very often, social innovations and other technologies will be needed to solve. Rather, I raise this because I believe it is important that still more people – particularly those in the valley and other technology hubs (and not just military strategists) – think critically about the potential second and third order effects of the internet, the web and the tools they are creating, so that they can contribute to the thinking around potential technological, social and institutional responses that could mitigate the worst outcomes.

I hope this helps prompt further thinking and discussion.

 

Open Postal Codes: A Public Response to Canada Post on how they undermine the public good

Earlier this week the Ottawa Citizen ran a story in which I’m quoted about a fight between Treasury Board and Canada Post officials over making postal code data open. Treasury Board officials would love to add it to data.gc.ca while Canada Post officials are, to put it mildly, deeply opposed.

This is, of course, unsurprising, since Canada Post recently launched a frivolous lawsuit against a software developer who is – quite legally – recreating the postal code data set. For those new to this issue, I blogged about why postal codes matter, and covered the weakness (and incompetence) of Canada Post’s legal case, here.

But this new Ottawa Citizen story had me rolling my eyes anew – especially after reading the quotes and text from Canada Post’s spokesperson. This is in no way an attack on the spokesperson, who I’m sure is a nice person. It is an attack on their employer, whose position, sadly, is in opposition to the public interest not just because of the outcome it generates but because of the way it treats citizens. Let me break down Canada Post’s public statement line by line, in order to spell out how it undermines the public interest, public debate and accountability.

Keeping the information up-to-date is one of the main reasons why Canada Post needs to charge for it, said Anick Losier, a spokeswoman for the crown corporation, in an interview earlier this year. There are more than 250,000 new addresses and more than a million address changes every year and they need the revenue generated from selling the data to help keep the information up-to-date.

So what is interesting about this is that – as far as I understand – it is not Canada Post that actually generates most of this data. It is local governments that are responsible for creating address data and, ironically, they are required to share it for free with Canada Post. So Canada Post’s data set is itself built on data that it receives for free. It would be interesting for cities to suddenly claim that they needed to engage in “cost-recovery” as well and start charging Canada Post. At some point you recognize that a public asset is a public asset and that it is best leveraged when widely adopted – something Canada Post’s “cost-recovery” prevents. Indeed, what Canada Post is essentially saying is that it is okay for it to leverage the work of other governments for free, but it isn’t okay for the public to leverage its works for free. Ah, the irony.

“We need to ensure accuracy of the data just because if the data’s inaccurate it comes into the system and it adds more costs,” she said.

“We all want to make sure these addresses are maintained.”

So, of course, do I. That said, the statement makes it sound like there is a gap between Canada Post – which is interested in the accuracy of the data – and everyone else – who isn’t. I can tell you, as someone who has engaged with non-profits and companies that make use of public data, no one is more concerned about accuracy of data than those who reuse it. That’s because when you make use of public data and share the results with the public or customers, they blame you, not the government source from which you got the data, for any problems or mistakes. So invariably one thing that happens when you make data open is that you actually have more stakeholders with strong interests in ensuring the data is accurate.

But there is also something subtly misleading about Canada Post’s statement. At the moment, the only reason there is inaccurate data out there is because people are trying to find cheaper ways of creating the postal code data set and so are willing to tolerate less accurate data in order to not have to pay Canada Post. If (and that is a big if) Canada Post’s main concern was accuracy, then making the data open would be the best protection, as it would eliminate less accurate versions of postal code data. Indeed, this suggests a failure to understand economics. Canada Post states that other parts of its business become more expensive when postal code data is inaccurate. That would suggest that providing free data might help reduce those costs – incenting people to create inaccurate postal code data by charging for it may be hurting Canada Post more than anyone else. But we can’t assess that, for reasons I outline below. And ultimately, I suspect Canada Post’s main interest is not accuracy – it is cost recovery – but that doesn’t sound nearly as good as talking about accuracy or quality, so they try to shoehorn those ideas into their argument.

She said the data are sold on a “cost-recovery” basis but declined to make available the amount of revenue it brings in or the amount of money it costs the Crown corporation to maintain the data.

This is my favourite part. Basically, a crown corporation, whose assets belong to the public, won’t reveal the cost of a process over which it has a monopoly. Let’s be really clear. This is not like other parts of their business where there are competitive risks in releasing information – Canada Post is a monopoly provider. Instead, we are being patronized and essentially asked to buzz off. There is no accountability and there is no reason why they couldn’t give us these numbers. Indeed, the total disdain for the public is so appalling it reminds me of why I opted out of junk mail and moved my bills to email and auto-pay ages ago.

This matters because the “cost-recovery” issue goes to the heart of the debate. As I noted above, Canada Post gets the underlying address data for free. That said, there is no doubt that it then adds some value to the data by adding postal codes. The question is, should that value best be recouped through cost-recovery at this point in the value chain, or at later stages through additional economic activity (and thus greater tax revenue)? This debate would be easier to have if we knew the scope of the costs. Does creating postal code data cost Canada Post $100,000 a year? A million? 10 million? We don’t know and they won’t tell us. There are real economic benefits to be had in a digital economy where postal code data is open, but Canada Post prevents us from having a meaningful debate since we can’t find out the tradeoffs.

In addition, it also means that we can’t assess whether there are disruptive ways in which postal code data could be generated vastly more efficiently. Canada Post has no incentive (quite the opposite, actually) to generate this data more efficiently and therefore make the “cost-recovery” much, much lower. It may be that creating postal code data really is a $100,000 a year problem, with the right person and software working on it.

So in the end, a government owned Crown Corporation refuses to not only do something that might help spur Canada’s digital economy – make postal code data open – it refuses to even engage in a legitimate public policy debate. For an organization that is fighting to find its way in the 21st century it is a pretty ominous sign.

* As an aside, in the Citizen article it says that I’m an open government activist who is working with the federal government on the website’s development. The first part – on activism – is true. The latter half, that I work on the open government website’s development, is not. The confusion may arise from the fact that I sit on the Treasury Board’s Open Government Advisory Panel, for which I’m not paid, but am asked for feedback, criticism and suggestions – like making postal code data open – about the government’s open government and open data initiatives.

Control Your Content: Why SurveyMonkey Should Add a “Download Your Answers” Button

Let me start by saying, I really like SurveyMonkey.

By this I mean, I like SurveyMonkey specifically, but I also like online surveys in general. They are easy to ignore if I’m uninterested in the topic but – when the topic is relevant – it is a great, simple service that allows me to share feedback, comments and opinions with whoever wants to solicit them.

Increasingly however, I find people and organizations are putting up more demanding surveys – surveys that necessitate thoughtful, and even occasionally long form, responses.

Take for example the Canadian Government. It used an online survey tool during its consultation on open government and open data. The experience was pretty good. Rather than a clunky government website, there was a relatively easy form to fill out. Better still, since the form was long, it was wonderful that you could save your answers and come back to it later! This mattered since some of the form’s questions prompted me to write lengthy (and hopefully insightful) responses.

But therein lies the rub. In the jargon of the social media world I was “creating content.” This wasn’t just about clicking boxes. I was writing. And surprisingly many of my answers were causing me to develop new ideas. I was excited! I wanted to take the content I had created and turn it into a blog post.

Sadly, most survey tools make it very, very hard for you to capture the content you’ve created. It feels like it would be relatively easy to have a “download my answers” button at the end of a survey. I mean, if I’ve taken 10-120 minutes to complete a survey or public consultation shouldn’t we make it easy for me to keep a record of my responses? Instead, I’ve got to copy and paste the questions, and my answers, into a text document as I go. And of course, I’d better decide that I want to do that before I start since some survey tools don’t allow you to go back and see previous answers.
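To make it concrete, here is a rough sketch of what such a feature could amount to. The survey content, function name and file format below are my own inventions for illustration – this is not anything SurveyMonkey actually offers:

```python
# Hypothetical sketch: what a "download my answers" feature could produce.
# The survey structure and plain-text format are illustrative assumptions.

def export_answers(survey_title, qa_pairs, path):
    """Write question/answer pairs to a plain-text file the respondent keeps."""
    lines = [survey_title, "=" * len(survey_title), ""]
    for i, (question, answer) in enumerate(qa_pairs, start=1):
        lines.append(f"Q{i}. {question}")
        lines.append(f"    {answer}")
        lines.append("")
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))

export_answers(
    "Open Government Consultation",
    [("What data should be opened first?", "Postal code data."),
     ("Why?", "It underpins countless location-based services.")],
    "my_answers.txt",
)
```

Something this simple would spare respondents the copy-and-paste ritual entirely.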

I ultimately did convert my answers into a blog post (you can see it here), but there was about 20 minutes of cutting, pasting, figuring things out, and reformatting. And there was some content (like semantic differential questions – where you rate statements) that was simply too hard to replicate.

There are, of course, other uses too. I had a similar experience last week after being invited to complete a survey posted by the Open Government Partnership Steering Committee on its Independent Reporting Mechanism. About halfway through filling it out some colleagues suggested we compare answers to better understand one another’s advice. A download-your-answers tool would have converted a 15-minute task into a 10-second one. All to access content I created.

I’m not claiming this is the be all, end all of online survey features, but it is the kind of simple thing a survey company can do that will cause some users to really fall in love with the service. To its credit, SurveyMonkey was at least willing to acknowledge the feedback – just what you’d hope for from a company that specializes in soliciting opinion online! With luck, maybe the idea will go somewhere.


Lessons from Michigan's "Innovation Fund" for Government Software

So it was with great interest that several weeks ago a reader emailed me this news article coming out of Michigan. Turns out the state recently approved a $2.5 million innovation fund that will be disbursed in $100,000 to $300,000 chunks to fund about 10 projects. As Government Technology reports:

The $2.5 million innovation fund was approved by the state Legislature in Michigan’s 2012 budget. The fund was made formal this week in a directive from Gov. Rick Snyder. The fund will be overseen by a five-person board that includes Michigan Department of Technology, Management and Budget (DTMB) Director John Nixon and state CIO David Behen.

There are lessons in this for other governments thinking about how to spur greater innovation in government while also reducing the cost of software.

First up: the idea of an innovation fund – particularly one that is designed to support software that works for multiple governments – is a laudable one. As I’ve written before, many governments overpay for software. I shudder to think of how many towns and counties in Michigan alone are paying to have the exact same software developed for them independently. Rather than writing the same piece of software over and over again for each town, getting a single version that is usable by 80% (or heck, even just 25%) of cities and counties would be a big win. We have to find a way to get governments innovating faster, and getting them back in the driver’s seat on the software they need (as opposed to adapting stuff made for private companies) would be a fantastic start.

Going from this vision – of getting something that works in multiple cities – to reality is not easy. Read the Executive Directive more closely. What’s particularly interesting (from my reading) is the flexibility of the program:

In addition to the Innovation Fund and Investment Board, the plan may include a full range of public, private, and non-profit collaborative innovation strategies, including resource sharing…

There is good news and bad news here.

The bad news is that all this money could end up as loans to mom and pop software shops that serve a single city or jurisdiction, because they were never designed from the beginning to be usable across multiple jurisdictions. In other words, the innovation fund could go to fund a bunch of vendors who already exist and who, at best, do okay or, at worst, do mediocre work and, in either case, will never be disruptive and blow up the marketplace with something that is both radically helpful and radically low cost.

What makes me particularly nervous about the directive is that there is no reference to an open source license. If a government is going to directly fund the development of software, I think it should be open source; otherwise, taxpayers are acting as venture capitalists to develop software that they are also going to pay licenses to use. In other words, they’re absorbing the risk of a VC in order to have the limited rights of being a client; that doesn’t seem right. An open source requirement would be the surest way to ensure an ROI on the program’s money. It assures that Michigan governments that want access to what gets developed can use it at the lowest possible cost. (To be clear, I’ve no problem with private vendors – I am one – but their software can be closed because they (should) be absorbing the risk of developing it themselves. If the government is giving out grants to develop software for government use, the resulting software should be licensed open.)

Which brings us to the good. My interest in the line of the executive directive cited above was piqued by the reference to public and non-profit “collaborative innovation strategies.” I read that and I immediately think of one of my favourite organizations: Kuali.

Many readers have heard me talk about Kuali, an organization in which a group of universities collectively set the specs for a piece of software they all need and then share in the costs of developing it. I’m a big believer that this model could work for local and even state level governments. This is particularly true for the enterprise management software packages (like financial management), for which cities usually buy over-engineered, feature-rich bloatware from organizations like SAP. The savings in all this could be significant, particularly for the middle-sized cities for whom this type of software is overkill.

My real hope is that this is the goal of this fund – to help provide some seed capital to start 10 Kuali-like projects. Indeed, I have no idea if the governor and his CIO’s staff have heard of or talked to the Kuali team before signing this directive, but if they haven’t, they should now. (Note: It’s only a 5 hour drive from the capital, Lansing, Michigan to the home of Kuali in Bloomington, Indiana).

So, if you are a state, provincial or national government and you are thinking about replicating Michigan’s directive – what should you do? Here’s my advice:

  • Require that all the code created by any projects you fund be open source. This doesn’t mean anyone can control the specs – that can still reside in the hands of a small group of players, but it does mean that a variety of companies can get involved in implementation so that there is still competition and innovation. This was the genius of Kuali – in the space of a few months, 10 different companies emerged that serviced Kuali software – in other words, the universities created an entire industry niche that served them and their specific needs exclusively. Genius.
  • Only fund projects that have at least 3 jurisdictions signed up. Very few enterprise open source projects start off with a single entity. Normally they are spec’ed out with several players involved. This is because if just one player is driving the development, they will rationally always choose to take shortcuts that will work for them, but cut down on the likelihood the software will work for others. If, from the beginning, you have to balance lots of different needs, you end up architecting your solution to be flexible enough to work in a diverse range of environments. You need that if your software is going to work for several different governments.
  • Don’t provide the funds, provide matching funds. One way to ensure governments have skin in the game and will actually help develop software is to make them help pay for the development. If a city or government agency is devoting $100,000 towards helping develop a software solution, you’d better believe they are going to try to make it work. If the State of Michigan is paying for something that may work, maybe they’ll contribute and be helpful, or maybe they’ll sit back and see what happens. Ensure they do the former and not the latter – make sure the other parties have skin in the game.
  • Don’t just provide funds for development – provide funds to set up the organization that will coordinate the various participating governments and companies, set out the specs, and project manage the development. Again, to understand what that is like – just fork Kuali’s governance and institutional structure.
  • Ignore government agencies or jurisdictions that believe they are a special unique flower. One of the geniuses of Kuali is that they abstracted the process/workflow layer. That way universities could quickly and easily customize the software so that it worked for how their university does its thing. This was possible not because the universities recognized they were each a unique and special flower but because they recognized that for many areas (like library or financial management) their needs are virtually identical. Find partners that look for similarities, not those who are busy trying to argue they are different.

There is of course more, but I’ll stop there. I’m excited for Michigan. This innovation fund has real promise. I just hope that it gets used to be disruptive, and not to simply fund a few slow and steady (and stodgy) software incumbents that aren’t going to shake up the market and help change the way we do government procurement. We don’t need to spend $2.5 million to get software that is marginally better (or not even). Governments already spend billions every year for that. If we are going to spend a few million to innovate, let’s do it to be truly disruptive.

Public Policy: The Big Opportunity For Health Record Data

A few weeks ago Colin Hansen – a politician in the governing party in British Columbia (BC) – penned an op-ed in the Vancouver Sun entitled Unlocking our data to save lives. It’s a paper both the current government and opposition should read, as it is filled with some very promising ideas.

In it, he notes that BC has one of the best collections of health data anywhere in the world and that data mining these records could yield patterns – like longitudinal adverse effects when drugs are combined, or correlations between diseases – that could save billions as well as improve health care outcomes.

He recommends that the province find ways to share this data with researchers and academics in ways that ensure the privacy of individuals is preserved. While I agree with the idea, one thing we’ve learned in the last 5 years is that, as good as academics are, the wider public is often much better at identifying patterns in large data sets. So I think we should think bolder. Much, much bolder.

Two years ago California-based Heritage Provider Network, a company that runs hospitals, launched a $3 million predictive health contest that will reward the team who, in three years, creates the algorithm that best predicts how many days a patient will spend in a hospital in the next year. Heritage believes that armed with such an algorithm, they can create strategies to reach patients before emergencies occur and thus reduce the number of hospital stays. As they put it: “This will result in increasing the health of patients while decreasing the cost of care.”

Of course, the algorithm that Heritage acquires through this contest will be proprietary. They will own it and can choose whom to share it with. But a similar contest run by BC (or, say, the VA in the United States) could create a public asset. Why would we care if others made their healthcare systems more efficient, as long as we got to as well? We could create a public good, as opposed to Heritage’s private asset. More importantly, we need not offer a prize of $3 million. Several contests with prizes of $10,000 would likely yield a number of exciting results. Thus, for very little money, we might help revolutionize BC’s – and possibly Canada’s, and even the world’s – healthcare systems. It is an exciting opportunity.
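To give a flavour of the kind of model such a contest solicits, here is a toy sketch. The patient data and the one-variable least-squares fit below are entirely made up for illustration – real contest entries use far richer features and far more sophisticated models:

```python
# Toy illustration: predict next year's hospital days from prior-year claim
# counts, using a hand-rolled one-feature ordinary least squares fit.
# All numbers here are fabricated for the sketch.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# prior-year claim counts -> days in hospital the following year (made up)
claims = [0, 1, 2, 4, 6, 9]
days = [0, 0, 1, 2, 3, 5]

slope, intercept = fit_line(claims, days)
predicted = slope * 5 + intercept  # estimated days for a patient with 5 claims
```

The point is not the model, which here is trivial, but that a public data set would let thousands of people compete to build a far better one.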

Of course, the big concern in all of this is privacy. The Globe and Mail featured an article in response to Hansen’s op-ed (shockingly but unsurprisingly, it failed to link back to it – why do newspapers behave that way?) that focused heavily on the privacy concerns but was pretty vague about the details. At no point was a specific concern from the privacy commissioner raised or cited. For example, the article could have talked about the real concern in this space: what is called de-anonymization. This is when an analyst takes records – like health records – that have been anonymized to protect individuals’ identities and uses alternative sources to figure out whose records belong to whom. In the cases where this occurs it is usually only a handful of people whose records are identified, but even such limited de-anonymization is unacceptable. You can read more on this here.

As far as I can tell, no one has de-anonymized the Heritage Health Prize data. But we can take even more precautions. I recently connected with Rob James – a local epidemiologist who is excited about how opening up anonymized health care records could save lives and money. He shared with me an approach taken by the US Census Bureau that goes even further than anonymization. As outlined in this (highly technical) research paper by Jennifer C. Huckett and Michael D. Larsen, the approach involves creating a parallel data set that has none of the features of the original but maintains the relationships between the data points. Since it is often the relationships, not the data itself, that matter, a great deal of research can take place with much lower risks. As Rob points out, there is a reasonably mature academic literature on these types of privacy-protecting strategies.
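To illustrate the core idea in a deliberately simplified form – the method Huckett and Larsen describe is far more sophisticated, and the health records below are fabricated – one can release synthetic records drawn from a distribution fitted to the original, so the relationships survive but no real record is ever published:

```python
# Simplified sketch of synthetic-data release: fit the mean and covariance of
# two variables, then sample brand-new records from that fitted distribution.
# The (age, annual hospital days) pairs are invented for illustration.
import math
import random

def fit_gaussian(records):
    """Fit mean and covariance of a list of (x, y) pairs."""
    n = len(records)
    mx = sum(r[0] for r in records) / n
    my = sum(r[1] for r in records) / n
    cxx = sum((r[0] - mx) ** 2 for r in records) / n
    cyy = sum((r[1] - my) ** 2 for r in records) / n
    cxy = sum((r[0] - mx) * (r[1] - my) for r in records) / n
    return (mx, my), (cxx, cxy, cyy)

def sample_synthetic(mean, cov, n, rng):
    """Draw n synthetic (x, y) pairs with the fitted mean and covariance."""
    (mx, my), (cxx, cxy, cyy) = mean, cov
    # Cholesky factor of the 2x2 covariance matrix
    l11 = math.sqrt(cxx)
    l21 = cxy / l11
    l22 = math.sqrt(max(cyy - l21 ** 2, 0.0))
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append((mx + l11 * z1, my + l21 * z1 + l22 * z2))
    return out

real = [(30, 0), (45, 1), (60, 3), (70, 5), (80, 8)]  # fabricated records
mean, cov = fit_gaussian(real)
synthetic = sample_synthetic(mean, cov, 1000, random.Random(42))
```

None of the synthetic records corresponds to a real patient, yet a researcher can still see that hospital days rise with age – which is exactly the kind of relationship analysts need.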

The simple fact is, healthcare spending in Canada is on the rise. In many provinces it will eclipse 50% of all spending in the next few years. This path is unsustainable. Spending in the US is even worse. We need to get smarter and more efficient. Data mining is perhaps the most straightforward and accessible strategy at our disposal.

So the question is this: does BC want to be a leader in healthcare research and outcomes in an area the whole world is going to be interested in? The foundation – creating a high value data set – is already in place. The unknown is whether we can foster a policy infrastructure and public mandate that allows us to think and act in big ways. It would be great if government officials, the privacy commissioner and some civil liberties representatives started a dialogue to find some common ground. The benefits to British Columbians – and potentially to a much wider population – could be enormous, both in money and, more importantly, in lives saved.

Canada Post’s War on the 21st Century, Innovation & Productivity

The other week Canada Post announced it was suing Geocoder.ca – an alternative provider of postal code data. It’s a depressing statement on the status of the digital economy in Canada for a variety of reasons. The three that stand out are:

1) The Canadian Government has launched an open government initiative with a strong emphasis on open data and innovation. Guess which data set is most requested by the public: postal code data.

2) This case risks calling into question the government’s commitment to (and understanding of) digital innovation, and

3) it is an indication – given the flimsiness of the case – of how little crown corporations understand the law (or worse, how willing they are to use taxpayer-funded litigation to bully others irrespective of the law).

Let me break down the situation into three parts: 1) why this case matters to the digital economy (and why you should care), 2) why the case is flimsy (along with some depressingly hilarious facts), and 3) what the Government could be doing about it, but isn’t.

Why this case matters.

So… funny thing the humble postal code. One would have thought that, in a digital era, the lowly postal code would have lost its meaning.

The interesting truth, however, is that the lowly postal code has, in many ways, never been more important. For better or worse, postal codes have become a core piece of data for the analog and especially the digital economy. These simple, easy-to-remember, six-character codes let a company, political party, or non-profit figure out what neighborhood, MP riding or city you are in. And once we know where you are, there are all sorts of services the internet can offer you: is that game you wanted available anywhere near you? Who are your elected representatives (and how did they vote on that bill)? What social services are near you? Postal codes are, quite simply, one of the easiest ways to identify where we are, so that governments, companies and others can better serve us. For example, after speaking to Geocoder.ca founder Ervin Ruci, it turns out that federal government ministries are a major client of his, with dozens of different departments using his service, including… the Ministry of Justice.
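A tiny sketch shows why this data is such a useful key: the first three characters of a Canadian postal code (the “forward sortation area”) already narrow someone down to a neighbourhood-sized region. The three-entry mapping below is an invented sample for illustration, not Canada Post’s licensed database:

```python
# Illustrative lookup from a postal code's forward sortation area (its first
# three characters) to a region. A real service would hold ~1,600 FSAs.
FSA_REGIONS = {
    "V5K": "Vancouver (North Hastings-Sunrise)",
    "K1A": "Ottawa (federal government)",
    "M5V": "Toronto (downtown south)",
}

def region_for(postal_code):
    """Return a region name for a Canadian postal code, if we know its FSA."""
    fsa = postal_code.replace(" ", "").upper()[:3]
    return FSA_REGIONS.get(fsa, "unknown")

region_for("k1a 0a6")  # K1A is the FSA reserved for the federal government
```

Every “find your MP”, “nearest store” or “services near you” feature is, at bottom, a lookup like this one – which is why access to the underlying data matters so much.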

Given how important postal code data is – it enables companies, non-profits and governments to be more efficient and productive (and thus competitive) – one would think government would want to make it as widely available as possible. This is, of course, what several governments do.

But not Canada. Here postal code data is managed by Canada Post, which charges, I’m told, between $5,000 and $50,000 for access to the postal code database (depending on what you want). This means, in theory, every business (or government entity at the local, provincial or federal level) in Canada that wants to use postal code information to figure out where its customers are located must pay this fee, which, of course, it passes along to its customers. Worse, for others the fee is simply not affordable. Non-profits, charities and, of course, small businesses and start-ups either choose to be less efficient, or test their business models in a jurisdiction where this type of data is easier to access.

Why this case is flimsy

Of course, because postal codes are so important, Geocoder came up with an innovative solution to the problem. Rather than copy Canada Post’s postal code database (which would have violated Canada Post’s terms of use) they did something ingenious… they got lots of people to help them manually recreate the data set. (There is a brief description of how here.) As the Canadian Internet Policy and Public Interest Clinic (CIPPIC) brilliantly argues in its defense of Geocoder: “The Copyright Act confers on copyright owners only limited rights in respect of particular works: it confers no monopoly on classes of works (only limited rights in respect of specific original works of authorship), nor any protection against independent creation. The Plaintiff (Canada Post) improperly seeks to use the Copyright Act to craft patent-like rights against competition from independently created postal code databases.”

And, of course, there are even deeper problems with Canada Post’s claims:

The first is that an address – including the postal code – is a fact. And facts cannot be copyrighted. And, of course, if Canada Post won, we’d all be hooped, since writing a postal code down on, say… an envelope would violate Canada Post’s copyright.

The second was pointed out to me by a mailing list contributor who happened to work for a city. He pointed out that it is local governments that frequently create the address data and then share it with Canada Post. Can you imagine if cities tried to copyright their address data? The claim is laughable. Canada Post claims that it must charge for the data to recoup the cost of creating it, but the data it gets from cities it gets for free – creating postal code data should not be an expensive proposition.

But most importantly… NONE OF THIS SHOULD MATTER. In a world where our government is pushing an open data strategy, the economic merits of making one of the most important data sets open should stand on their own, regardless of the fact that the law is also on our side.

There is also a bonus fourth element, pointed out by James McKinney, which makes for fun reading in the CIPPIC defense:

“Contrary to the Plaintiff’s (Canada Post’s) assertion at paragraph 11 of the Statement of Claim that ‘Her Majesty’s copyright to the CPC Database was transferred to Canada Post’ under section 63 of the Canada Post Corporation, no section 63 of the current Canada Post Corporation Act  even exists. Neither does the Act that came into force in 1981 transfer such title.”

You can read the Canada Post Act on the Ministry of Justice’s website here and – as everyone except, apparently, Canada Post’s lawyers has observed – it has only 62 sections.

What Can Be Done.

Speaking of The Canada Post Act, while there is no section 63, there is a section 22, which appears under the header “Directives” and, intriguingly, reads:

22. (1) In the exercise of its powers and the performance of its duties, the Corporation shall comply with such directives as the Minister may give to it.

In other words… the government can compel Canada Post to make its postal code data open. Sections 22(3), (4) and (5) suggest that the government may have to compensate Canada Post for the cost of implementing such a directive, but it is not clear that it must do so. Besides, it will be interesting to see how much money is actually at stake. As an aside, if Canada were to explore privatizing Canada Post, separating out the postal code function and folding it back into government would be a logical decision, since you would want all players in the space (a private Canada Post, FedEx, Purolator, etc.) to be able to use a single postal code system.

Either way, the government cannot claim that Canada Post’s crown corporation status prevents it from compelling the organization to apply an open license to its postal code data. The law is very clear that it can.

What appears to be increasingly obvious is that the era of closed postal code data is coming to an end. It may end via a slow, expensive and wasteful lawsuit that costs Canada Post, Canadian taxpayers and CIPPIC resources and energy they can ill afford, or it can end quickly through a Ministerial directive.

Let’s hope the latter prevails.

Indeed, the postal code has arguably become the system for physically organizing our society. Everything from the census to urban planning to figuring out where to build a Tim Horton’s or Starbucks will often use postal code data as the way to organize information about who we are and where we live. Indeed, it is the humble postal code that frequently allows all these organizations – from governments to non-profits to companies – to be efficient about locating people and allocating resources. Oh. And it also really helps for quickly shipping the stuff you bought online.

It would be nice to live in a country that really understood how to support a digital economy. Sadly, last week, I was once again reminded of how frustrating it is to try to be a 21st century company in Canada.
