
Government Procurement Reform – It matters

Earlier this week I posted a slidecast on my talk to Canada’s Access to Information Commissioners about how, as they do their work, they need to look deeper into the government “stack.”

My core argument was that decisions about what information gets made accessible are no longer best managed at the end of a policy development or program delivery process but rather should be embedded in it. This means monkeying around in the tools (e.g. software) government uses every day and ensuring there is capacity to export government information and data from them. Logically, this also means monkeying around in procurement policy (see slide below) since that is where the specs for the tools public servants use get set. Trying to bake “access” into processes after the software has been chosen is, well, often an expensive nightmare.

Gov stack

Privately, one participant from a police force came up to me afterward and said that I was simply guiding people to another problem – procurement. He is right. I am. Almost everyone I talk to in government feels like procurement is broken. I’ve said as much myself in the past. Clay Johnson is someone who has thought about this more than most; here he is below at the Code for America Summit with a great slide (and talk) about how the current government procurement regime rewards all the wrong behaviours and, often, all the wrong players.

Clay Risk profile

So yes, I’m pushing the RTI and open data community to think about procurement on purpose. Procurement is borked. Badly. Not just from a wasted-tax-dollars perspective, or even just from a service delivery perspective, but also because it doesn’t serve the goals of transparency well. Quite the opposite. More importantly, it isn’t going to get fixed until more people start pointing out that it is broken and start contributing to solving this major bottleneck of a problem.

I highly, highly recommend reading Clay Johnson’s and Harper Reed’s opinion piece in today’s New York Times about procurement, titled “Why the Government Never Gets Tech Right.”

All of this becomes more important if the White House – and other governments at all levels – are to have any hope of executing on their digital strategies (image below). There is going to be a giant effort to digitize much of what governments do, and a huge number of opportunities for finding efficiencies and improving services are going to come from it. However, if all of this depends on multi-million (or, worse, 10- or 100-million) dollar systems and websites we are, to put it frankly, screwed. The future of government isn’t to be (continue to be?) taken over by some massive SAP implementation that is so rigid and controlled it gives governments almost no opportunity to innovate. And yet this is the future our procurement policies steer us toward: a future with only a tiny handful of possible vendors, a high risk of project failure, and highly rigid and frail systems that are expensive to adapt.

Worse, there is no easy path here. I don’t see anyone doing procurement right. So we are going to have to dive into a thorny, tough problem. However, the more governments that try to tackle it in radical ways, the faster we can learn some new and interesting lessons.

Open Data WH

Why Journalists Should Support Putting Access to Information Requests Online Immediately

Here’s a headline you don’t often expect to see: “Open-Government Laws Fuel Hedge-Fund Profits.”

It’s a fascinating article that opens with a story about SAC Capital Advisors LP – a hedge fund. Last December SAC Capital used the US Freedom of Information Act (FOIA) to request preliminary results on a Vertex Pharmaceuticals drug being tested by the US Food and Drug Administration. The request revealed there were no “adverse event reports,” increasing the odds the drug might be approved. SAC Capital used this information – according to the Wall Street Journal – to snatch up 15,000 shares and 25,000 options of Vertex. In December – when the request was made – the stock traded around $40. Eight months later it peaked at $89 and still trades today at around $75. Thus, clever use of a government access to information request potentially netted the company a cool ROI of 100% in 9 months and a profit of roughly 1.2 million dollars (assuming they sold around $80).
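For the curious, here is a rough back-of-envelope sketch of the share side of that trade, using the approximate prices reported above. It is only a sketch: the option gains are left as an assumption since the article does not disclose strikes or premiums.

```python
# Back-of-envelope sketch of the SAC/Vertex trade described above.
# Prices are the approximate figures from the WSJ story; the option side is
# left as an assumption because strikes and premiums aren't disclosed.

shares = 15_000
buy_price = 40.0    # approximate price in December, when the request was made
sell_price = 80.0   # assumed exit price, per the article

share_gain = shares * (sell_price - buy_price)
roi = (sell_price - buy_price) / buy_price

print(f"Gain on the shares alone: ${share_gain:,.0f}")       # ~$600,000
print(f"ROI on the shares: {roi:.0%} over roughly 9 months")  # ~100%

# The balance of the article's ~$1.2M estimate would have to come from the
# 25,000 options, whose payoff depends on the undisclosed strike prices.
```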

This is an interesting story. And I fear it says a lot about the future of access to information laws.

This is because it contrasts sharply with the vision of access to information the media likes to portray: namely, that access requests are a tool used mainly by hardened journalists trying to uncover dirt about a government. This is absolutely the case… and an important use case. But it is not the only use of access laws. Nor was it the only intended use. Indeed, it is not even the main use of the law.

In my work on open data I frequently get pulled into conversations about access to information laws and their future. I find these conversations are aggressively dominated by media representatives (e.g. reporters) who dislike alternative views. Indeed, the one-sided nature of the conversation – with some journalists simply assuming they are the main and privileged interpreters of the public interest around access laws – is deeply unhealthy. Access to information laws are an important piece of legislation. Improving and sustaining them requires a coalition of actors (particularly including citizens), not just journalists. Telling others that their interests are secondary is not a great way to build an effective coalition. Worse, I fear the dominance of a single group means the conversation is often shaped by a narrow view of the legislation and with a specific set of (media company) interests in mind.

For example, many governments – including government agencies in my own province of British Columbia – have posted responses to many access to information requests publicly. This enrages (and I use that word specifically) many journalists who see it as a threat. How can they get a scoop if anyone can see government responses to their requests at the same time? This has led journalists to demand – sometimes successfully – that the requestor have exclusive access to government responses for a period of time. Oy vey. This is dangerous.

For certain types of stories I can see how complete transparency of request responses could destroy a scoop. But most stories – particularly investigative stories – require sources and context and understanding. Such advantages, I suspect, are hard to replicate and are the real source of competitive advantage (and if they aren’t… shouldn’t they be?).

It also assumes that a savvy public – and the media community – would not be able to figure out who consistently makes the right requests and reward them accordingly. But let’s put issues of a reputation economy and the complexity of reporting on a story aside.

First, it is worth noting that it is actually in the public interest to have more reporters cover a story and share a piece of news – especially about the government. Second, access to information laws were not created to give specific journalists scoops – they were designed to maximize the public’s capacity to access government information. Protecting a media company’s business model is not the role of access laws. It isn’t even in the spirit of the law.

Third, and worst, this entire debate fails to discuss the risks of such an approach. Which brings me back to the Wall Street Journal article.

I have, for years, warned that if public publication of access to information request results is delayed so that one party (say, a journalist) has exclusive access for a period of time, then the system will also be used by others in pursuit of interests that might not be in the public good. Specifically, it creates a strong incentive for companies and investors to start mining government to get “exclusive” rights to government information they can put to use in advancing their agenda – making money.

As the SAC Capital case outlined above underscores, information is power. And if you have exclusive access to that information, you have an advantage over others. That advantage may be a scoop on a government spending scandal, but it can also be a stock tip about a company whose drug is going to clear a regulatory hurdle, or an indication that a juicy government contract is about to be signed, or that a weapons technology is likely to be shelved by the defence department. In other words – and this is what I have pointed out to my journalist friends – exclusivity in access to information risks transforming the whole system into a giant insider information generation machine. Great for journalists? Maybe. (I have my doubts – see above.) But great for companies? The Wall Street Journal article shows us it already is. Exclusivity would make it worse.

Indeed, in the United States the private sector is already an enormous generator of access requests. One company that serves as a clearing house for requests accounts for 10% of requests on its own:

The precise number of requests from investors is impossible to tally because many come from third-party organizations that send requests on behalf of undisclosed clients—a thriving industry unto itself. One of them, FOI Services Inc., accounted for about 10% of the 50,000 information requests sent to the FDA during the period examined by the Journal. Marlene Bobka, a senior vice president at Washington-based FOI Services, says a “huge, huge reason people use our firm is to blind their requests.”

Imagine what would happen if those making requests had formal exclusive rights to the results. The secondary market in government information could become huge. And again, not in a way that advances the public interest.

In fact, given the above-quoted paragraph, I’m puzzled that journalists don’t demand that every access to information request be made public immediately. All told, the resources of the private sector (to say nothing of the tens of thousands of requests made by citizens or NGOs) dwarf those of media companies. Private companies may soon be making – or already are making – significantly more requests than journalists ever could. Free-riding on their work could probably be a full-time job and a successful career for at least a dozen data journalists. In addition, not duplicating this work frees up media companies’ capacity to focus on the most important problems that are in the public good.

All of this is to say… I fear for a world where many of the journalists I know – by demanding changes that are in their narrow self-interest – could help create a system that, as far as I can tell, could be deeply adverse to the public interest.

I’m sure I’m about to get yelled at (again). But when it comes to access to information requests, we are probably going to be better off in a world where they are truly digitized. That means requests can be made online (something that is somewhat arriving in Canada) and – equally importantly – where results are also published online for all to see. At the very minimum, it is a conversation that is worth having.

New Zealand: The World’s Lab for Progressive Tech Legislation?

Cross posted with TechPresident.

One of the nice advantages of having a large world with lots of diverse states is the range of experiments it offers us. Countries (or regions within them) can try out ideas, and if they work, others can copy them!

For example, in the world of drug policy, Portugal effectively decriminalized virtually all drugs. The result has been dramatic. And much of it positive. The changes include a 17% decline in HIV diagnoses amongst drug users and a drop in drug use among adolescents (13-15 yrs). For those interested, you can read more about this in a fantastic report by the Cato Institute written by Glenn Greenwald back in 2009, before he started exposing the unconstitutional and dangerous activities of the NSA. Now, some 15 years later, there have been increasing demands to decriminalize and even legalize drugs, especially in Latin America. But even the United States is changing, with both the states of Washington and Colorado opting to legalize marijuana. The lessons of Portugal have helped make the case, not by penetrating the public’s imagination per se, but by showing policy elites that decriminalization not only works but saves lives and saves money. Little Portugal may one day be remembered for changing the world.

I wonder if we might see a similar paper written about New Zealand ten years from now about technology policy. It may be that a number of Kiwis will counter the arguments in this post by exposing all the reasons why I’m wrong (which I’d welcome!) but at a glance, New Zealand would probably be the place I’d send a public servant or politician wanting to know more about how to do technology policy right.

So why is that?

First, for those who missed it, this summer New Zealand banned software patents. This is a stunning and entirely sensible accomplishment. Software patents, and the legal morass and drag on innovation they create, are an enormous problem. The idea that Amazon can patent “1-click” (i.e. the idea that you pre-store someone’s credit card information so they can buy an item with a single click) is, well, a joke. This is a grand innovation that should be protected for years?

And yet, I can’t think of a single other OECD member country that is likely to pass similar legislation. This means that it will be up to New Zealand to show that the software world will survive just fine without patents and that the economy will not suddenly explode into flames. I also struggle to think of an OECD country where one of the most significant industry groups – the Institute of IT Professionals – would not only support such a measure but help push its passage:

The nearly unanimous passage of the Bill was also greeted by Institute of IT Professionals (IITP) chief executive Paul Matthews, who congratulated [Commerce Minister] Foss for listening to the IT industry and ensuring that software patents were excluded.

Did I mention that the bill passed almost unanimously?

Second, New Zealanders are further up the learning curve around their government’s – and foreign governments’ – dangerous willingness to illegally surveil them online.

The arrest of Kim Dotcom over MegaUpload has sparked some investigations into how closely the country’s police and intelligence services follow the law. (For an excellent timeline of the Kim Dotcom saga, check out this link.) This is because Kim Dotcom was illegally spied on by New Zealand’s intelligence services and police force, at the behest of the United States, which is now seeking to extradite him. The arrest and subsequent fallout have piqued public interest and led to investigations, including the Kitteridge report (PDF), which revealed that “as many as 88 individuals have been unlawfully spied on” by the country’s Government Communications Security Bureau.

I suspect the Snowden documents and subsequent furor surprised New Zealanders less than many of their counterparts in other countries, since they were less a bombshell than another data point on a trend line.

I don’t want to overplay the impact of the Kim Dotcom scandal. It has not, as far as I can tell, led to a complete overhaul of the rules that govern intelligence gathering and online security. That said, I suspect it has created a political climate that may be more (healthily) distrustful of government intelligence services and the intelligence services of the United States. As a result, it is likely that politicians have been more sensitive to this matter for a year or two longer than elsewhere, and that public servants are more accustomed to assessing policies through the lens of their impact on the rights and privacy of citizens than in many other countries.

Finally (and this is somewhat related to the first point), New Zealand has, from what I can tell, a remarkably strong open source community. I’m not sure why this is the case, but I suspect that people like Nat Torkington – an open source and open data advocate in New Zealand – and others like him play a role in it. More interestingly, this community has had influence across the political spectrum. The centre-left Labour Party deserves much of the credit for the patent reform, while the centre-right New Zealand National Party has embraced open data. The country was among the first to embrace open source as a viable option when procuring software, and in 2003 the government developed an official open source policy to help clear the path for greater use of open source software. This contrasts sharply with my experience in Canada where, as late as 2008, open source was still seen by many government officials as a dangerous (some might say cancerous?) option that needed to be banned and/or killed.

All this is to say that both outside government (e.g. in civil society and the private sector) and within it, there is greater expertise around thinking about open source solutions, and so an ability to ask different questions about intellectual property and definitions of the public good. While I recognize that this exists in many countries now, it has existed longer in New Zealand than in most, which suggests that it enjoys greater acceptance in senior ranks and that there is greater experience in thinking about and engaging these perspectives.

I share all this for two reasons:

First, I would keep my eye on New Zealand. This is clearly a place where something is happening in a way that may not be possible in other OECD countries. The small size of its economy (and so its relative lack of importance to the major proprietary software vendors), combined with sufficient policy agreement among both the public and elites, enables the country to overcome the internal and external lobbying and pressure that would likely sink similar initiatives elsewhere. And while New Zealand’s influence may be limited, don’t underestimate the power of example. Portugal also has limited influence, but its example has helped show the world that the US-led narrative on the “war on drugs” can be countered. In many ways this is often how it has to happen. Innovation, particularly in policy, often comes from the margins.

Second, if a policy maker, public servant or politician comes to me and asks me who to talk to around digital policy, I increasingly find myself looking at New Zealand as the place that is the most compelling. I have similar advice for PhD students. Indeed, if what I’m arguing is true, we need research to describe, better than I have, the conditions that lead to this outcome as well as the impact these policies are having on the economy, government and society. Sadly, I have no names to give to those I suggest this idea to, but I figure they’ll find someone in the government to talk to, since, as a bonus to all this, I’ve always found New Zealanders to be exceedingly friendly.

So keep an eye on New Zealand: it could be the place where some of the most progressive technology policies first get experimented with. It would be a shame if no one noticed.

(Again, if some New Zealanders want to tell me I’m wrong, please do. Obviously, you know your country better than I do.)

Beyond Property Rights: Thinking About Moral Definitions of Openness

“The more you move to the right the more radical you are. Because everywhere on the left you actually have to educate people about the law, which is currently unfair to the user, before you even introduce them to the alternatives. You aren’t even challenging the injustice in the law! On the right you are operating at a level that is liberated from identity and accountability. You are hacking identity.” – Sunil Abraham

I have a new piece up on TechPresident titled: Beyond Property Rights: Thinking About Moral Definitions of Openness.

This piece, as well as the really fun map I recreated, is based on a conversation with Sunil Abraham (@sunil_abraham), the Executive Director of the Centre for Internet and Society in Bangalore.

If you find this map interesting… check the piece out here.

map of open


Some thoughts on the relaunched data.gc.ca

Yesterday, I talked about what I thought was the real story that got missed in the fanfare surrounding the relaunch of data.gc.ca. Today I’ll talk about the new data.gc.ca itself.

Before I begin, there is an important disclaimer to share (to be open!). Earlier this year Treasury Board asked me to chair five public consultations across Canada to gather feedback on both its open data program and data.gc.ca in particular. As such, I solicited people’s suggestions on how data.gc.ca could be improved – as well as shared my own – but I was not involved in the creation of data.gc.ca. Indeed, the first time I saw the site was on Tuesday when it launched. My role was merely to gather feedback. For those curious, you can read the report I wrote here.

There is, I’m happy to say, much to commend about the new open data portal. Of course, aesthetically, it is much easier on the eye, but this is really trivial compared to a number of other changes.

The most important shift relates to the desire of the site to foster community. Users can now register with the site as well as rate and comment on data sets. There are also places like the Developers’ Corner, which contains documentation that potential users might find helpful, and a sort of app store where government agencies and citizens can post applications they have created. This shift mirrors the evolution of data.gov, data.gov.uk and DataBC, which started out as data repositories but then sought to foster and nurture a community of data users. The critical piece here is that simply creating the functionality will probably not be sufficient; in the US, UK and BC it has required dedicated community managers/engagers to help foster such a community. At present it is unclear if that exists behind the website at data.gc.ca.

The other two noteworthy improvements to the site are an improved search and the availability of APIs. While not perfect, the improved search is nonetheless helpful, as previously it was basically impossible to find anything on the site. Today, search for “border time” and a border wait time data set is the top result. However, search for “border wait times” and “Biogeochemical exploration using Douglas-fir tree tops in the Mabel Lake area, southern British Columbia (NTS 82L09 and 10)” becomes the top hit, with the actual border wait time data set pushed down to fifth. That said, the search is still a vast improvement, and this alone could be a boon to policy wonks, researchers and developers who elect to make use of the site.

The introduction of APIs is another interesting development. For the uninitiated, an API (application programming interface) provides continuous access to updated data, so rather than downloading a file, it is more like plugging into a socket that delivers data instead of electricity. The aforementioned border wait time data set is a fantastic example. It is less a “data set” than a “data stream,” providing the most recent border wait times, like what you would see on the big signs across the highway as you approach the border. By providing it through the open data site, it becomes possible, for example, for Google Maps to scan this data set daily, understand how border wait times fluctuate and incorporate these delays into its predicted travel times. Indeed, it could even query the API in real time and tell you how long it will take to drive from Vancouver to Seattle, with border delays taken into account. The opportunity for developers and, equally intriguing, government employees and contractors to build applications atop these APIs is, in my mind, quite exciting. It is a much, much cheaper and more flexible approach than how a lot of government software is currently built.
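To make that concrete, here is a minimal sketch of the kind of client a developer might write against such a data stream. The endpoint URL and field names are hypothetical placeholders, not the actual data.gc.ca interface:

```python
from typing import Optional

import requests  # third-party HTTP client

# Hypothetical endpoint and field names; the real data.gc.ca API will differ.
BORDER_WAIT_URL = "https://data.gc.ca/api/border-wait-times.json"

def current_wait(crossing_name: str) -> Optional[int]:
    """Return the posted wait (in minutes) for a named crossing, if present."""
    response = requests.get(BORDER_WAIT_URL, timeout=10)
    response.raise_for_status()
    for crossing in response.json().get("crossings", []):
        if crossing.get("name") == crossing_name:
            return crossing.get("wait_minutes")
    return None

if __name__ == "__main__":
    wait = current_wait("Pacific Highway")
    if wait is not None:
        print(f"Current wait at Pacific Highway: {wait} minutes")
```

A routing service could poll something like this every few minutes and fold the delays into its travel time estimates – exactly the kind of reuse a file download makes awkward and an API makes trivial.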

I also welcome the addition of the ability to search Access to Information (ATIP) request summaries. That said, I’d like there to be more than just the summaries – the actual responses would be nice, particularly given that ATIP requests likely represent information people have identified as important. In addition, the tool for exploring government expenditures is interesting, but it is, weirdly, more notable because, as far as I can tell, none of the data displayed in the tool can be downloaded, meaning it is not very open.

Finally, I will briefly note that the license is another welcome change. For more on that I recommend checking out Teresa Scassa’s blog post on it. Contrary to my above disclaimer I have been more active on this side of things, and hope to have more to share on that another time.

I’m sure, as I and others explore the site in the coming days we will discover more to like and dislike about it, but it is a helpful step forward and another signal that open data is, slowly, being baked into the public service as a core service.


The Real News Story about the Relaunch of data.gc.ca

As many of my open data friends know, yesterday the government launched its new open data portal to great fanfare. While there is much to talk about there – something I will dive into tomorrow – that was not the only thing that happened yesterday.

Indeed, I did a lot of media yesterday between flights, and only after it was over did I notice that virtually all the questions focused on the relaunch of data.gc.ca. Yet, for me, the much, much bigger story of the portal relaunch was the Prime Minister announcing that Canada would adopt the Open Data Charter.

In other words, Canada just announced that it is moving towards making all government data open by default. Moreover, it even made commitments to make specific “high value” data sets open in the next couple of years.

As an aside, I don’t think the Prime Minister’s office has ever mentioned open data – as far as I can remember – so that was interesting in and of itself. But what is still more interesting is what the Prime Minister committed Canada to. The Open Data Charter commits the government to making data open by default, as well as to four other principles:

  • Quality and Quantity
  • Useable by All
  • Releasing Data for Improved Governance
  • Releasing Data for Innovation

In some ways Canada has effectively agreed to implement the equivalent of the Presidential Executive Order on Open Data the White House announced last month (and that I analyzed in this blog post). Indeed, the charter is more aggressive than the executive order since it goes on to lay out the need to open up not just future data, but also current “high value” data sets. Included among these are data sets the Open Knowledge Foundation has been seeking to get opened via its open data census, as well as some data sets I and many others have argued should be made open, such as the company/business register. Other suggested high value data sets include data on crime, school performance, energy and environment pollution levels, energy consumption, government contracts, national budgets, health prescription data and many, many others. Also included on the list… postcodes – something we are presently struggling with here in Canada.

But the charter wasn’t all the government committed to. The final G8 communique contained many interesting tidbits that, again, highlighted commitments to open up data and adhere to international data schemas.

Among these were:

  • Corporate Registry Data: There was a very interesting section on “Transparency of companies and legal arrangements,” which is essentially about sharing data about who owns companies. As an advisory board member to OpenCorporates, this was music to my ears. However, the federal government already does this; the much, much bigger problem is with the provinces, like BC and Quebec, that make it difficult or expensive to access this data.
  • Extractive Industries Transparency Initiative: A commitment that “Canada will launch consultations with stakeholders across Canada with a view to developing an equivalent mandatory reporting regime for extractive companies within the next two years.” This is something I fought to get included in our OGP commitments two years ago but did not succeed at. Again, I’m thrilled to see it appear in the communique and look forward to the government’s action.
  • International Aid Transparency Initiative (IATI) and Busan Common Standard on Aid Transparency: A commitment to make aid data more transparent and downloadable by 2015. Indeed, with all the G8 countries agreeing to take this step, it may be possible to get greater transparency around who is spending what money, and where, on aid. This could help identify duplication as well as support assessments of effectiveness. Given how precious aid dollars are, this is a very welcome development. (h/t Michael Roberts of Acclar.org)

So lots of commitments, some on the more vague side (the Open Data Charter) but some very explicit and precise. And that is the real story of yesterday: not that the country has a new open data portal, but that a lot more data is likely going to get put into that portal over the next 2-5 years. And a tsunami of data could end up in it over the next 10-25 years. Indeed, so much data that I suspect a portal will no longer be a logical way to share it all.

And therein lies the deeper business and government story in all this. As I mentioned in my analysis of the White House Executive Order that made open data the default, the big change here is in procurement. If implemented, this could have a dramatic impact on vendors and suppliers of equipment and computers that collect and store data for the government. Many vendors try to find ways to make their data difficult to export and share so as to lock the government in to their solution. Again, if (and this is a big if) the charter is implemented, it will hopefully require a lot of companies to rethink what they offer to government. This is a potentially huge story, as it could disrupt incumbents and lead to either big reductions in the costs of procurement (if done right) or big increases and the establishment of the same, or new, impossible-to-work-with incumbents (if done incorrectly).

There is potentially a tremendous amount at stake in how the government handles the procurement side of all this, because whether it realizes it or not, it may have just completely shaken up the IT industry that serves it.


Postscript: One thing I found interesting about the G8 communique was how many times commitments about open data and open data sets occurred in sections that had nothing to do with open data. It will be interesting to see if that trend continues at the next G8 meeting. Indeed, I wouldn’t be surprised if a specific open data section disappears and these references instead just become part of various issue-related commitments.


Policy-Making in a Big Data World

For those interested I appeared on The Agenda with Steve Paikin the other week talking about Big Data and policy making.

There was a good discussion with a good cast of characters (not counting myself).

There is so much to dive into in this space. There are, obviously, the dangers of thinking that data can solve all our problems, but I think the reverse is also true: there is actually a real shortage of capacity within government (as in the private sector, where these skills are highly sought after and well compensated) to think critically about and effectively analyze data. Indeed, sadly, one of the few places in government that seems to understand and have the resources to work in this space is the security/intelligence apparatus.
It’s a great example of the growing stresses I think governments and their employees are going to be facing. One I hope we find ways to manage.

The Value of Open Data – Don’t Measure Growth, Measure Destruction

Alexander Howard – who, in my mind, is the best guy covering the Gov 2.0 space – pinged me the other night to ask “What’s the best evidence of open data leading to economic outcomes that you’ve seen?”

I’d like to hack the question because – I suspect – many people will be looking to measure “economic outcomes” in ways that are too narrow to be helpful. For example, if you are wondering what the big companies are going to be that come out of the open data movement and/or what big savings are going to be found by government via sifting through the data, I think you are probably looking for the wrong indicators.

Why? Part of it is because the number of “big” examples is going to be small.

It’s not that I think there won’t be any. For example, several years ago I blogged about how FOIed (or, in Canada, ATIPed) data that should have been open helped find $3.2B in evaded tax revenues channeled through illegal charities. It’s just that this is probably not where the wins will initially take place.

This is in part because most data for which there was likely to be an obvious and large economic impact (e.g. spawning a big company or saving a government millions) will have already been analyzed or sold by governments before the open data movement came along. On the analysis side of the question – if you were very confident a data set could yield tens or hundreds of millions in savings… well… you were probably willing to pay SAS or some other analytics firm $30-100K to analyze it. And you were probably willing to pay SAP a couple of million (a year?) to set up the infrastructure to just gather the data.

Meanwhile, on the “private sector company” side of the equation – if that data had value, there were probably eager buyers. In Canada, for example, census data – useful for planning where to locate stores or how to engage in marketing and advertising effectively – was sold because the private sector made it clear it was willing to pay to gain access to it. (Sadly, this was bad news for academics, non-profits and everybody else, for whom it should have been free, as it was in the US.)

So my point is that a great deal of the (again) obvious low-hanging fruit has probably been picked long before the open data movement showed up, because governments – or companies – were willing to invest some modest amounts to create the benefits that picking that fruit would yield.

This is not to say I don’t think there are diamonds in the rough out there – data sets that will reveal significant savings – but I doubt they will be obvious or easy finds. Nor do I think that billion-dollar companies are going to spring up around open data sets overnight since – by definition – open data has low barriers to entry for any company that adds value to it. One should remember it took Red Hat two decades to become a billion-dollar company. Impressive, but it is still tiny compared to many of its rivals.

And that is my main point.

The real impact of open data will likely not be in the economic wealth it generates, but rather in its destructive power. I think the real impact of open data is going to be in the value it destroys and so in the capital it frees up to do other things. Much like Red Hat is a fraction of the size of Microsoft, open data is going to enable new players to disrupt established data players.

What do I mean by this?

Take SeeClickFix. Here is a company that, leveraging the Open311 standard, is able to provide many cities with a 311 solution that works pretty much out of the box. Twenty years ago, this was a $10 million+ problem for a major city to solve, and wasn’t even something a small city could consider adopting – it was just prohibitively expensive. Today, SeeClickFix takes what was a 7 or 8 digit problem and makes it a 5 or 6 digit problem. Indeed, I suspect SeeClickFix almost works better in a small to mid-sized government that doesn’t have complex work order software and so can just use SeeClickFix as a general solution. For this part of the market, it has crushed the cost out of implementing a solution.
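To give a sense of why a shared standard like Open311 collapses the cost so dramatically, here is a minimal sketch of what a client integration can look like. The field names follow the public GeoReport v2 spec, but the city URL, API key and service code are hypothetical placeholders:

```python
import requests

# Hypothetical Open311 (GeoReport v2) endpoint for an imaginary city.
BASE_URL = "https://open311.example-city.gov/api/v2"

def list_services():
    """Fetch the service types (pothole, graffiti, ...) the city accepts."""
    resp = requests.get(f"{BASE_URL}/services.json", timeout=10)
    resp.raise_for_status()
    return resp.json()

def report_pothole(lat, lon, description, api_key="YOUR_KEY"):
    """File a service request and return the city's tracking id."""
    resp = requests.post(
        f"{BASE_URL}/requests.json",
        data={
            "api_key": api_key,
            "service_code": "pothole",  # placeholder; real codes come from list_services()
            "lat": lat,
            "long": lon,
            "description": description,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()[0].get("service_request_id")
```

Because every Open311-compliant city exposes the same interface, roughly these same few dozen lines work against any of them – and that shared standard, not any one vendor’s cleverness, is where the cost collapse comes from.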

Another example, and one I’m most excited about: look at CKAN and Socrata. Most people believe these are open data portal solutions. That is a mistake. These are data management companies that happen to have made “sharing” (or “openness”) a core design feature. You know who does data management? SAP. What Socrata and CKAN offer is a way to store, access, share and engage with data previously gathered and held by companies like SAP at a fraction of the cost. A SAP implementation is a 7 or 8 (or, god forbid, 9) digit problem. And many city IT managers complain that doing anything with data stored in SAP takes time and money. CKAN and Socrata may have only a fraction of the features, but they are dead simple to use, and they make it dead simple to extract and share data. More importantly, they make these costly 7 and 8 digit problems potentially become cheap 5 or 6 digit problems.
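To illustrate the “dead simple to extract and share” point, here is a small sketch against the CKAN Action API – the interface CKAN-based portals expose. The portal URL and search term below are placeholders:

```python
import requests

# Any CKAN-based portal exposes the same Action API; this URL is a placeholder.
PORTAL = "https://demo.ckan.org"

def search_datasets(query, rows=5):
    """Search a CKAN portal and return (title, resource formats) pairs."""
    resp = requests.get(
        f"{PORTAL}/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()["result"]["results"]
    return [
        (ds["title"], [r.get("format") for r in ds.get("resources", [])])
        for ds in results
    ]

if __name__ == "__main__":
    for title, formats in search_datasets("border wait times"):
        print(title, formats)
```

That is essentially the whole integration: one HTTP call to find data, another to download it. Compare that with what it takes to get the same information out of a typical enterprise data warehouse.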

On the analysis side, again, I do hope there will be big wins – but what I really think open data is going to do is lower the cost of creating lots of small wins – crazy numbers of tiny efficiencies. If SAP and SAS were about solving the 5 problems that could create tens of millions in operational savings for governments and companies, then Socrata, CKAN and the open data movement are about finding the 1,000 problems for which you can save between $20,000 and $1M each. For example, when you look at the work that Michael Flowers is doing in NYC, his analytics team is going to transform New York City’s budget. They aren’t finding $30 million in operational savings, but they are generating a steady stream of very solid 6 to low 7 digit savings, project after project. (This is to say nothing of the lives they help save with their work on ambulances and fire safety inspections.) Cumulatively, over time, these savings are going to add up to a lot. But there probably isn’t going to be a big bang. Rather, we are getting into the long tail of savings. Lots and lots of small stuff… that is going to add up to a very big number, while no one is looking.

So when I look at open data, yes, I think there is economic value. Lots and lots of economic value. Hell, tons of it.

But it isn’t necessarily going to happen in a big bang, and it may take place in the creative destruction it fosters and so the capital it frees up to spend on other things. That may make it harder to measure (I’m hoping some economist much smarter than me is going to tell me I’m wrong about that), but that’s what I think the change will look like.

Don’t look for the big bang, and don’t measure the growth in spending or new jobs. Rather let’s try to measure the destruction and cumulative impact of a thousand tiny wins. Cause that is where I think we’ll see it most.

Postscript: Apologies again for any typos – it’s late and I’m just desperate to get this out while it is burning in my brain. And thank you Alex for forcing me to put into words something I’ve been thinking about saying for months.


Canada Post and the War on Open Data, Innovation & Common Sense (continued, sadly)

Almost exactly a year ago I wrote a blog post on Canada Post’s War on the 21st Century, Innovation & Productivity. In it I highlighted how Canada Post launched a lawsuit against a company – Geocoder.ca – that recreates the postal code database via crowdsourcing. Canada Post’s case was never strong, but then, that was not their goal. As a large, taxpayer-backed company, the point wasn’t to be right; it was to use the law as a way to financially bankrupt a small innovator.

This case matters – especially to small start-ups and non-profits. Open North – a non-profit on whose board of directors I sit – recently explored what it would cost to use Canada Post’s postal code database on represent.opennorth.ca, a website that helps identify the elected officials who serve a given address. The cost? $9,000 a year – nowhere near what it could afford.

But that’s not all. There are several non-profits that use Represent to help inform donors and other users of their websites about which elected officials represent the geographies where they advocate for change. The licensing cost if you include all of these non-profits and academic groups? $50,000 a year.

This is not a trivial sum, and it is very significant for non-profits and academics. It is also a window into why Canada Post is trying to sue Geocoder.ca – which offers a version of its database for… free. That a private company can offer a similar service at a fraction of the cost (or for nothing) is, of course, a threat.

Sadly, I wish I could report good news on the one-year anniversary of the case. Indeed, I should be able to!

This is because what should have been the most important development was the Federal Court of Appeal making it even more clear that data cannot be copyrighted. This probably made it clear to Canada Post’s lawyers that they were not going to win, and made it even more obvious to those of us in the public that the lawsuit against Geocoder.ca – which has not been dropped – was completely frivolous.

Sadly, Canada Post’s reaction to this erosion of its position was not to back off, but to double down. Recognizing that it likely won’t win a copyright case over postal code data, it has decided:

a) to assert that it holds a trademark on the words “postal code”

b) to name Ervin Ruci – the operator of Geocoder.ca – as a defendant in the case, as opposed to just his company.

The second part shows just how vindictive Canada Post’s lawyers are, and reveals the true nature of this lawsuit. This is not about protecting trademark. This is about sending a message about legal costs and fees. This is a predatory lawsuit, funded by you, the tax payer.

But part a) is also sad. Having seen the writing on the wall around its capacity to win the case around data, Canada Post has suddenly decided – 88 years after it first started using “Postal Zones” and 43 years after it started using “Postal Codes” – to assert a trademark on the term? (You can read more on the history of postal codes in Canada here.)

Moreover, the legal implications if Canada Post actually won the case would be fascinating. It is unclear that anyone would be allowed to solicit anybody’s postal code – at least if they mentioned the term “postal code” – on any form or website without Canada Post’s express permission. It leads one to ask: does the federal government have Canada Post’s express permission to solicit postal code information on tax forms? On passport renewal forms? On any form it has ever published? Because if not, then – if I understand Canada Post’s claim correctly – it is in violation of Canada Post’s trademark.

Given the current government’s goal of increasing the use of government data and spurring innovation, will it finally intervene in what is an absurd case that Canada Post cannot win – a case that is using taxpayer dollars to snuff out innovators, that increases the cost for academics doing geospatially oriented social research, and that creates a great deal of uncertainty about how anyone online – non-profits, companies, academics or governments – can use postal codes?

I know of no other country in the world that has to deal with this kind of behaviour from its postal service. The United Kingdom compelled its postal service to make postal code information public years ago. In Canada, we handle the same situation by letting a taxpayer-subsidized monopoly hire expensive lawyers to launch frivolous lawsuits against innovators who are not breaking the law.

That is pretty telling.

You can read more about this, and see the legal documents, on Ervin Ruci’s blog. The story has also received good coverage at canada.com.

How Car2Go ruins Car2Go

So let me start by saying, in theory, I LOVE Car2Go. The service has helped prevent me from buying a car and has been indispensable in opening up more of Vancouver to me.

For those not familiar with Car2Go, it is a car sharing service where the cars can be parked virtually anywhere in the city, so when you need one, you just use a special card and PIN to access it, drive it to where you want to go and then log out of the car, leaving it for the next person to use. All this at the affordable rate of 38 cents a minute. It’s genius.

So what’s the problem?

Well, in practice, I’m having an increasingly poor experience with Car2Go, particularly when I’m most in need of the service. What’s worse, the reasons are entirely within the control of Car2Go, and specifically how it designed its app, its workflow and its security. My hope is there are lessons here for designers and anyone who is thinking about online services, particularly in the mobile space.

Let me explain.

First, understand that Car2Go’s brand is built around convenience. Remember, the use case is that, at almost any time, you can find a car near you, access it, and get to where you want to go. Car2Go is not for people planning to use a car hours ahead (you don’t really want to be paying 38 cents a minute to “hold” a car for 3 hours until you need it. That would cost you $68!). Indeed, the price point is designed to discourage long-term use and encourage short, convenient trips. As a result, ease of access is central to the service and the brand promise.

In theory here is what the process should look like.

  1. Fire up the Car2Go app on your smart phone and geolocate yourself
  2. Locate the nearest car (see screen shot to right)
  3. Reserve it (this allows you to lock the car down for 15 minutes)
  4. Walk to your car, access it using your Car2Go card and PIN
  5. Drive off!

Here is the problem. The process now regularly breaks down for me at step 3. At first blush, this may not seem like a big deal… I mean, if the car is only a few blocks away why not just walk over and grab it?

Alas, I do. But often when you really want a car, someone else does too! This is even more the case when, say… it’s raining, or it’s the end of the business day. Indeed, many of the times when you would really like that car are times when someone else might also really want it. So being able to lock it down is important. Because if you can’t…? Well, the other week I walked 12 blocks in the rain trying to get to 4 different Car2Go cars that I could see in the app but couldn’t reserve. Why four? Because by the time I got to each of them, they were gone, scooped by another user. After 30 minutes of walking around and getting wet, I gave up, abandoned my appointment (very suboptimal) and went home. This is not the first time this has happened.

The impact is that Car2Go is increasingly not a service I see myself relying on. Yes, I keep using it, but I no longer think of it as a service I can count on if I just cushion in a little extra time. It’s just… kind of reliable, because the split between really frustrating outcomes and total delight is starting to be 40/60, and that’s not good.

But here is the killer part. Car2Go could fix this problem in a day. Tops.

The reason I can’t reserve a car is because the Car2Go app forces you to log back in every once in a while. Why? I don’t know. Even if someone stole my phone and used it to reserve a car, it would be useless. Let’s say they managed to also steal my wallet and so had my Car2Go card. Even now it doesn’t help them, since without my PIN they couldn’t turn the car on. So having some rogue person with access to a user’s account isn’t exactly putting Car2Go in any danger.

So maybe you’re thinking… well, just remember your password, David! So here’s a big confession.

I WISH I COULD.

But Car2Go has these insanely stupid, deeply unsafe password rules that require you to have at least one number, one letter and a capitalized letter (or a special character – god knows if I remember their rules) in your password. Since the multitude of default passwords I use don’t conform to their rules, I can never remember what my password is, leaving me locked out of my Car2Go app. And trust me, when you are late for a meeting, it’s raining and you’re getting soaked, the last thing you want to be doing is going through a password reset process on webpages built for desktop browsers that takes 10 to 15 minutes to navigate and complete. Many a curse word has been directed at Car2Go in such moments.

What’s worse is there is evidence that shows that not only do these password rules create super crappy user experiences like the one I described above, they also make user accounts less secure. Indeed, check out this Wired article on passwords and the tension between convenience and effectiveness:

Security specialists – and many websites – prompt us to use a combination of letters, numbers, and characters when selecting passwords. This results in suggestions to use passwords like “Pn3L!x8@H”, to cite a recent Wired article. But sorry, guys, you’re wrong: Unless that kind of password has some profound meaning for a user (and then he or she may need other help than password help), then guess what? We. Will. Forget. It.

It gets worse. Because you will forget it, you’ll do something both logical and stupid. YOU’LL WRITE IT DOWN. Probably somewhere that will be easy to access. LIKE IN YOUR PHONE’S ADDRESS BOOK.

Stupid password rules don’t make users create smarter passwords. They make them do dumb things that often make their accounts less secure.
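As a rough illustration of why composition rules buy so little, here is a back-of-envelope entropy comparison. It assumes randomly chosen passwords, which is generous to the short one – real users forced into composition rules tend to pick far more predictable strings:

```python
import math

def entropy_bits(choices_per_symbol, length):
    """Entropy in bits of a secret built from `length` independent random picks."""
    return length * math.log2(choices_per_symbol)

# A 9-character password drawn from ~72 symbols (upper, lower, digits, a few
# specials) -- the kind of string composition rules push people toward.
composition_rule_pw = entropy_bits(72, 9)

# A 5-word passphrase drawn from a 7,776-word diceware-style list: far easier
# to remember, no rules needed.
passphrase = entropy_bits(7776, 5)

print(f"'Pn3L!x8@H'-style password: ~{composition_rule_pw:.0f} bits")  # ~56 bits
print(f"five random common words:   ~{passphrase:.0f} bits")           # ~65 bits
```

In other words, a memorable passphrase can match or beat the “mixed characters” password without driving anyone to write it down in their phone’s address book.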

The result? Car2Go’s design and workflow create a process that degrades the user experience, all in an effort (I’m guessing) to foster security, but that in reality likely causes a number of Car2Go users to make terrible decisions and leave their accounts more vulnerable.

So if you are creating an online service, I hope this cautionary tale about design, workflow and password authentication rules is helpful. Get them wrong and you can really screw up your product.

So please, don’t do to your service what Car2Go has done to theirs. As a potential user of your product, that would make me sad.