Monthly Archives: February 2011

Articles I'm Digesting: Feb 28th, 2011

Been a while since I’ve done one of these. A surprising amount of reading is getting done in my life despite a hectic schedule. In addition to the articles below, I recently finished Shirky’s Cognitive Surplus (solid read) and am almost done with Kevin Kelly’s What Technology Wants, which is blowing my mind. More on both soon, I hope.

Why Blogs (Still) Aren’t Dead…No Matter What You’ve Heard by Kimberly Turner

I got to this via Mathew Ingram of GigaOM. A few months ago there was some talk about the decline of blogs. You could almost hear the newspaper people rubbing their hands with glee. Turns out it was all bogus. This article outlines some great stats on the issue, lays out where things are at, and explains why the rumor got started. The sooner everyone, from the newspaper writer to the professional blogger to the amateur blogger to the everyday twitterer, accepts that they are all on the same continuum and actually support one another, the happier I suspect we’re all going to be.

The Inside Story of How Facebook Responded to Tunisian Hacks by Alexis Madrigal

Totally fascinating and fairly self-explanatory:

By January 5, it was clear that an entire country’s worth of passwords were in the process of being stolen right in the midst of the greatest political upheaval in two decades. Sullivan and his team decided they needed a country-level solution — and fast…

…At Facebook, Sullivan’s team decided to take an apolitical approach to the problem. This was simply a hack that required a technical response. “At its core, from our standpoint, it’s a security issue around passwords and making sure that we protect the integrity of passwords and accounts,” he said. “It was very much a black and white security issue and less of a political issue.”

That’s pretty much the stand I’d like a software service to take.

Work on Stuff that Matters: First Principles by Tim O’Reilly

Basically, some good touchstones for work, and life, from someone I’ve got a ton of respect for.

Love and Hate on Twitter by Jeff Clark

Awesome visualizations of the use of the words love and hate on Twitter. It is amazing that Justin Bieber always turns up high. More interesting is how brands and politicians get ranked.

The Neoformix blog is just fantastic. For hockey fans, be sure to check out this post.

Lazy Journalist Revealer. This. Is. Awesome.

Everybody keeps thinking that transparency and improved access to content is something that is only going to affect government or, maybe, some corporations.

I’ve tried to argue differently in places like this blog post and in the chapter Taylor and I wrote for The New Journalist.

Here’s a wonderful example of how new tools could start to lay bare the poor performance of many newspapers at actually reporting news rather than simply regurgitating press releases.

Check out the site – called Churnalism.com – that allows you to compare any UK news story against a database of UK press releases. Brilliant!
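
To give a flavour of how such a tool might work under the hood, here is a toy sketch that scores how much of an article is lifted from a press release by counting shared word five-grams. This is my own illustration of the general technique, not Churnalism.com’s actual algorithm:

```python
# Toy churnalism detector: score how much of a news story is lifted
# verbatim from a press release by comparing overlapping word
# five-grams. My own illustration, not Churnalism.com's algorithm.

def shingles(text: str, n: int = 5) -> set:
    """Return the set of n-word sequences (shingles) in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def churn_score(article: str, press_release: str) -> float:
    """Fraction of the article's shingles found verbatim in the
    press release; 1.0 means the article was copied wholesale."""
    a = shingles(article)
    return len(a & shingles(press_release)) / len(a) if a else 0.0

release = ("Acme Corp today announced record profits "
           "driven by strong demand for its widgets")
article = ("In a statement, Acme Corp today announced record profits "
           "driven by strong demand for its widgets across all divisions")
print(f"churn score: {churn_score(article, release):.0%}")
```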

Wish we had one of these here in North America.

Found this via the Future Journalism Project, which also links to a story on the Guardian website.

Saving Healthcare Billions: Let's fork the VA's Electronic Health Records System

Alternative title for this post: How our Government’s fear of Open Source Software is costing us Billions.

So, I’ve been meaning to blog this for several months now.

Back in November I remember coming across this great, but very short, interview in the Globe and Mail with Ken Kizer. Who, you might ask, is Ken Kizer? He’s a former Naval officer and emergency medicine physician who became the US Veterans Affairs undersecretary for health in 1994.

While the list of changes he made is startling and impressive, what particularly caught my attention is that he accomplished what the Government of Ontario failed to do with $1 billion in spending: implementing an electronic medical record system that works. And, let’s be clear, it not only works, it is saving lives and controlling costs.

And while the VA has spent millions in time and energy developing that code, what is amazing is that it has all been open sourced, so the cost of leveraging it is relatively low. Indeed, today Ken Kizer heads up a company that implements the VA’s now open source solution – called VistA – in hospitals in the US. Consider this extract from his interview:

You have headed a company that promoted “open-source” software for EHR, instead of a pricier proprietary system. Why do you think open source is better?

I believe the solution to health-care information technology lies in the open-source world that basically gives away the code. That is then adapted to local circumstances. With the proprietary model, you are always going back to the vendor for changes, and they decide whether to do them and how much they will cost. In Europe, open source EHR software is zooming. It’s the most widely deployed EHR system in the world, but not here.

Sometimes I wonder, do any Canadian governments ever look at simply forking VistA and creating a Canadian version?

I wonder all the more after reading a Fortune Magazine article on the changes achieved in the VA during this period. The story is impressive, and VistA played a key role. Indeed, during Kizer’s tenure:

  • The VA saw the number of patients it treats almost double, from 2.9 million in 1996 to 5.4 million in 2006.
  • Customer satisfaction ratings within the VA system exceeded those of private health care providers during many of those years.
  • All this has been achieved while the cost per patient has held steady at roughly $5,000. In contrast, the rest of the US medical system saw costs rise 60 percent, to $6,300.
  • And perhaps most importantly, in a time of crisis the new system proved critical: while Hurricane Katrina destroyed untold numbers of civilians’ (paper) healthcare records, VistA ensured that the health records of veterans in the impacted areas could be called upon in a heartbeat.

This is a story that any Canadian province would be proud to tell its citizens. It would be fascinating to see some of the smaller provinces begin to jointly fund e-health open source software initiatives, particularly one to create an electronic healthcare record system. Rather than relying on a single vendor with its coterie of expensive consultants, a variety of vendors, all serving the same platform, could emerge, helping keep costs down.

It’s the kind of solution that seems custom built for Canada’s healthcare system. Funny how it took a US government agency to show us how to make it a reality.

Sharing Critical Information with the Public: Lessons for Governments

Increasingly, governments are looking for new and more impactful ways to communicate with citizens. There is a slow but growing awareness that traditional outreach vehicles, such as TV stories and newspaper advertisements, are either not reaching a significant portion of the population or are having little impact on raising awareness of a given issue.

The exciting thing about this is that there is some real innovation taking place in governments as they grapple with this challenge. This blog post will look at one example from Canada and talk about why the innovation pioneered to date – while a worthy effort – falls far short of its potential. Specifically, I’m going to talk about how when governments share data, even when they use new technologies, they remain stuck in a government-centric approach that limits effectiveness. The real impact of new technology won’t come until governments begin to think more radically in terms of citizen-centric approaches.

The dilemma around reaching citizens is probably felt most acutely in areas where there is a greater sense of urgency around the information – like, say, issues relating to health and safety. Consequently, in Canada, it is perhaps not surprising that some of the more innovative outreach work has been pioneered by the national agency responsible for many of these issues, Health Canada.

The most cutting-edge stuff I’ve seen is an effort by Health Canada to share advisories from Health Canada, Transport Canada and the Canadian Food Inspection Agency via three vehicles: an RSS feed; a mobile app available for Blackberry, iPhone and Android; and a widget that anyone can install into their blog.

I think all of these are interesting ideas that have much to commend them. It is great to see information of a similar type, from three different agencies, being shared through a single vehicle – this is definitely a step forward from a user’s perspective. It’s also nice to see the government experiment with different vehicles for delivery (mobile and other parties’ websites).

But from a citizen-centric perspective, all these innovations share a common problem: they don’t fundamentally change the citizen’s experience with this information. In other words, they are simply efforts to find new ways to “broadcast” the information. As a result, I predict that these initiatives, as currently structured, will have minimal impact. There are two reasons why:

The problem isn’t about access: These tools are predicated on the idea that the problem in conveying this information is access to it. It isn’t. The truth is, people don’t care. We can debate whether they should care, but the fact of the matter is, they don’t. Most people won’t pay attention to a product recall until someone dies. In this regard these tools are simply the modern-day version of newspaper ads, which, historically, very few people actually paid attention to. We just couldn’t measure it, so we pretended people read them.

The content misses the mark: Scratch a little deeper on these tools and you’ll notice something. They are all, in essence, press releases. All of these tools – the RSS feed, blog widget and mobile apps – are simply designed to deliver a marginally repackaged press release. Given that people tuned out newspaper ads, pushing those ads onto them via another device will likely have a limited impact.

As a result, I suspect that those likely to pay attention to these innovations are probably those who were already paying attention. This is okay and even laudable: there is a small segment of people for whom these applications reduce the transaction costs of access. However, with regard to expanding the number of Canadians reached by this information or changing behaviour in a broader sense, these tools have limited impact. To be blunt, no one is checking a mobile application before they buy a product, nor reading these types of widgets on a blog, nor subscribing to an RSS feed of recalls and safety warnings. Those who are, are either being paid to do so (it is a requirement of their job) or are fairly obsessive.

In short, this is a government-centric solution: it seeks to share information the government has, in a context that makes sense to government. It is not citizen-centric: it does not share the information in a form that matters to citizens or relevant parties, in a context that makes sense to them.

Again, I want to state that while I draw this conclusion, I still applaud the people at Health Canada. At least they are trying to do something innovative and creative with their data and information.

So what would a citizen-centric approach look like? Interestingly, it would involve not trying to reach out to citizens directly.

People are wrestling with a tsunami of information. We can’t simply broadcast more information at them, nor can we expect them to consult a resource every time they are going to make a purchase.

What would make this data far more useful would be to structure it so that others could incorporate it into software and applications that could shape people’s behaviors and/or deliver the information in the right context.

Take this warning, for example: “CERTAIN FOOD HOUSE BRAND TAHINI OF SESAME MAY CONTAIN SALMONELLA BACTERIA”, posted on Monday by the Canadian Food Inspection Agency. There is a ton of useful information in this press release, including things like:

The geography impacted: Quebec

The product name and size and, better still, the UPC and lot codes:

Product            Size      UPC               Lot codes
Tahini of Sesame   400 gr    6 210431 486128   Pro: 02/11/2010, Exp: 01/11/2012
Tahini of Sesame   1000 gr   6 210431 486302   Pro: 02/11/2010, Exp: 01/11/2012
Premium Halawa     400 gr    6 210431 466120   Pro: 02/11/2010, Exp: 01/11/2012
Premium Halawa     1000 gr   6 210431 466304   Pro: 02/11/2010, Exp: 01/11/2012

However, all this information is buried in the text, so it is hard to parse and reuse.

If the data were structured and easily machine-readable (available as an API, ideally, but even as a structured spreadsheet), here’s what I could imagine happening:

  1. Retailers could connect the bar code scanners they use on their shop floors to this data stream. If a cashier swiped this product at a checkout counter they would be immediately notified and the purchase would be blocked (see the sketch after this list). This we could do today, and it would be, in my mind, of high value – reducing the time and cost it takes to notify retailers, as well as potentially saving lives.
  2. Mobile applications like RedLaser, which people use to scan bar codes and compare product prices, could use this data to notify users that the product they are looking at has been recalled. Apps like RedLaser still have a small user base, but they are growing. Probably not a game changer, but at least context sensitive.
  3. I could install a widget in my browser that, every time I’m on a website displaying that UPC and/or lot code, would notify me that the product has been recalled and I should not buy it. Here the potential is significant, especially as people buy more and more goods over the web.
  4. As we move towards having “smart” refrigerators that scan the RFID chips on products to determine what is in the fridge, mine could simply notify me via a text message that I need to throw out my jar of Tahini of Sesame. This is a next-generation use, but the government would be pushing private sector innovation in the space by providing the necessary and useful data. Every retailer is going to want to sell a “smart” fridge that doubles as a “safe” fridge, telling you when you’ve got a recalled item in it.
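
As a rough sketch of the first idea, here is what a point-of-sale check against a structured version of this recall could look like. The JSON layout and field names are my own invention for illustration – Health Canada published no such machine-readable schema:

```python
import json

# A structured rendering of the CFIA advisory above. The schema is
# hypothetical -- invented here to show what machine-readable recall
# data could enable.
RECALL_FEED = json.loads("""
[
  {"product": "Tahini of Sesame", "size": "400 gr",
   "upc": "6210431486128", "region": "Quebec", "hazard": "Salmonella"},
  {"product": "Tahini of Sesame", "size": "1000 gr",
   "upc": "6210431486302", "region": "Quebec", "hazard": "Salmonella"},
  {"product": "Premium Halawa", "size": "400 gr",
   "upc": "6210431466120", "region": "Quebec", "hazard": "Salmonella"},
  {"product": "Premium Halawa", "size": "1000 gr",
   "upc": "6210431466304", "region": "Quebec", "hazard": "Salmonella"}
]
""")

# Index the feed by UPC so every barcode swipe is a constant-time lookup.
RECALLED = {r["upc"]: r for r in RECALL_FEED}

def check_scan(upc: str) -> str:
    """Called by the till for each scanned barcode."""
    r = RECALLED.get(upc)
    if r:
        return f"BLOCK SALE: {r['product']} {r['size']} recalled ({r['hazard']})"
    return "OK"

print(check_scan("6210431486128"))  # recalled tahini -> blocked
print(check_scan("0064200116473"))  # any other product -> OK
```

The same lookup could just as easily sit behind a browser widget or a RedLaser-style app; the point is that the structured data, not the press release, is what makes all of these reuses possible.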

These are all far more citizen-centric, since they don’t require citizens to think, act or pay attention. In short, they aren’t broadcast-oriented; they feel customized, filtering information and delivering it where citizens need it, when they need it, sometimes without them even needing to know. (This is the same argument I made in my post How Yelp Could Help Save Millions in Healthcare Costs.) The most exciting thing about this is that Health Canada already has all the data to do this; it’s just a question of restructuring it so it is of greater use to various consumers of the data – from retailers, to app developers, to appliance manufacturers. This should not cost that much. (Health Canada, I know a guy…)

Another advantage of this approach is that it gets the government out of the business of trying to determine the best and most helpful way to share information. This appears to be a problem the UK government is also interested in solving: Richard A. sent me this excellent link, in which a UK government agency appeals to the country’s developers to help imagine how it can better share information not unlike that being broadcast by Health Canada.

However, at the end of the day even the British example runs into the same problem – the belief that the information is most helpfully shared through an app. The real benefit of this type of information (and open data in general) won’t come when you can create a single application with it, but when you can embed the information into systems and processes so that it notifies the right person at the right time.

That’s the challenge: abandoning a broadcast mentality and making things available for multiple contexts and easily embeddable. It’s a big culture shift, but for any government interested in truly exploring citizen-centric approaches, it’s the key to success.

The State of Open Data in Canada: The Year of the License

Open Data is now an established fact in a growing list of Canadian cities. Vancouver, Toronto, Edmonton and Ottawa have established portals; Montreal, Calgary, Hamilton and some other cities are looking into launching their own; and a few provinces are rumored to be exploring open data portals as well.

This is great news and a significant accomplishment. While at the national level Canada is falling further behind leaders such as England, the United States, Australia and New Zealand, at the local and potentially provincial/state level Canada could position itself as an international leader.

There is, however, one main obstacle: our licenses.

The current challenge:

So far most open data portals have adopted what has been termed the Vancouver License (created by Vancouver for its open data portal and subsequently adopted, with occasional minor changes, by virtually every other jurisdiction).

The Vancouver license, however, suffers from a number of significant defects. As someone who was involved in its creation, I can say these “bugs” were a necessary tradeoff: if we had held out for a perfect license that satisfied all stakeholders, I suspect we’d still be arguing about it and there would be no open data or data portal with the Vancouver license. Today, thanks in part to the existence of these portals, our politicians’, policy makers’ and government lawyers’ understanding of this issue has expanded. This, combined with a growing number of complaints about the licenses from non-profits and businesses interested in using open data, has fostered growing interest in adjusting it.

This is encouraging, and we must capitalize on the moment. I wish to be clear: until Canadian governments get the licensing issue right, Open Data cannot advance in this country. Open Data released by governments will not enjoy significant reuse, undermining one of the main reasons for doing Open Data in the first place.

There are a few things everyone agrees a new license needs to cover. It must establish that there is no warranty on the data and that the government cannot be held liable for any reuse. So let’s focus on the parts that governments most often get wrong.

Here are three things a new license needs to get right.

1. No Attribution


Nascar Jeff Gordon #24 by Dan Raustadt licensed CC-NC-ND

We need a license that does not require attribution. First, attribution gets messy fast: all those cities’ logos crammed onto a map, on a mobile phone? It’s fine when you are using data from one or two cities, but what happens when you start using data from 10 different governments, or 50? Pretty soon you’ll have NASCAR apps that look ugly and are unusable.

More importantly, the goal of open data isn’t to create free advertising for governments; it’s to support innovation and reuse. These are different goals, and I think we agree on which one is more important.

Finally, what government is actually going to police this part of the license? Don’t demand what you aren’t going to enforce – and no government should waste precious resources paying someone to scour the internet for websites and apps that fail to attribute.

2. No Share-Alike

Where the Vancouver license falls down on share-alike is in this clause:

If you distribute or provide access to these datasets to any other person, whether in original or modified form, you agree to include a copy of, or this Uniform Resource Locator (URL) for, these Terms of Use and to ensure they agree to and are bound by them but without introducing any further restrictions of any kind.

The last phrase is particularly problematic, as it makes the Vancouver license “viral”: any new data created through a mash-up involving Vancouver-licensed data must also use the Vancouver license. This pretty much eliminates any private sector use of the data, since a company will want to license any new dataset it creates in a manner appropriate to its business model. It also has a chilling effect on those who would like to use the data but need to keep the resulting work private, or restricted to a limited group of people. Richard Weait has an unfortunately named blog post that provides an excellent example of this problem.

Any new license should not be viral, so as to encourage a variety of reuses of the data.

3. Standardized

The whole point of Open Data is to encourage the reuse of a public asset, so anything a government does that impedes this reuse will hamper innovation and undermine the very purpose of the initiative. Indeed, the open data movement has, in large part, come to life because one traditional impediment to using data has disappeared: data can now usually be downloaded in open formats that anyone can use. The barriers to use have declined, so more and more people are interested.

But the other barrier to reuse is legal. If licenses are not easily understood, then individuals and businesses will not reuse data, even when it is easily downloadable from a government’s website. Building a business or a new non-profit activity on a public asset to which your rights are unclear is simply not viable for many organizations. This is why every government should want its license to be easily understood – lowering the barriers to access means both making data downloadable and reducing the legal barriers.

Most importantly, it is also why it would be ideal to have a single license for the whole country, as this would significantly reduce transaction and legal costs for all players. This is why I’ve been encouraging Canada’s leading cities to adopt a single common license.

So, there are two ways of doing this.

The easiest is for Canadian governments to align themselves with one of the international standardized open data licenses that already exist. There are a variety out there. My preference is the Open Data Commons Public Domain Dedication and License (PDDL), although Open Data Commons also publishes the Open Database License (ODC-ODbL) and the Attribution License (ODC-By). There is also the Creative Commons CC-0 license, which Creative Commons suggests for open data. (I actually recommend against all of these except the PDDL for governments, but more on that later.)

These licenses have several advantages.

First, standardized licenses are generally well understood. This means people don’t have to educate themselves on the specifics of dozens of different licenses.

Second, they are stable. Because these licenses are managed by independent authorities and used by many people, they evolve cautiously and balance the interests of consumers and sharers of data or information.

Third, these licenses balance interests responsibly. Their creators have thought through all the issues that pertain to open data, giving both consumers and distributors of data comfort in knowing that they have a license that will work.

A second option is for governments in Canada to align around a self-generated common license. Indeed, this is one area where the Federal Government could show some (presently lacking) leadership – although GeoGratis does have a very good license. This, for example, appears to be happening in the UK, where the national government has created an Open Government Licence.

My hope is that, before the year is out, jurisdictions in Canada begin to move towards a common license, or begin adopting one of the standard licenses.

Specifically, it would be great to see various Canadian jurisdictions either:

a) Adopt the PDDL (like the City of Surrey, BC). There are some references to European data rights in the PDDL, but these have no meaning in Canada and should not be an obstacle – they may even reassure foreign consumers of Canadian data. The PDDL is the most open and forward-looking license.

b) Adopt the UK government’s Open Government Licence. This license is the best created by any government to date (with the exception of simply making the data public domain, which, of course, is far more ideal).

c) Use a modified version of the Geogratis license that adjusts the “3.0 PROTECTION AND ACKNOWLEDGEMENT OF SOURCE” clause to prevent the NASCAR effect from taking place.

What I hope does not happen is that:

a) More and more jurisdictions continue to use the Vancouver License. There are better options, and adopting one is an opportunity for any jurisdiction launching an open data policy to leapfrog the current leaders in the space.

b) Jurisdictions adopt a Creative Commons license. Creative Commons was created to license copyrighted material. Since data cannot be copyrighted, using Creative Commons risks confusing the public about the inherent rights they have to data. This is, in part, a philosophical argument, but it matters, especially for governments. We – and our governments especially – cannot allow people to begin to believe that data can be copyrighted.

c) There is no change to the current licenses being used, or a new license that goes against the attributes described above – like the Open Database License (ODC-ODbL) – is adopted.

Let’s hope we make progress on this front in 2011.

Open Knowledge Foundation Open Data Advocate

My colleagues over at the Open Knowledge Foundation have been thinking about recruiting an Open Data Advocate, someone who can coordinate a number of their activities in the open data space. I offered to think about what the role should entail and how that person could be effective. Consequently, in the interests of transparency, fleshing out my thinking and seeing what feedback there might be (feel free to comment openly, or email me personally if you wish to keep it private), I’m laying out my thinking below.

Context

These are exciting times for open government data advocates. Over the past few years a number of countries, cities and international organizations have launched open data portals and implemented open data policies, and many, many more are contemplating joining the fray. What makes this exciting is that some established players (e.g. the United States, the UK, the World Bank) continue to push forward and will, I suspect, be refining and augmenting their services in the coming months. At the same time there are still a number of laggards (e.g. Canada federally, Southern Europe, Asia) where mobilizing local communities, engaging with public servants and providing policy support is still the order of the day.

This makes the role of an Open Data Advocate complex. Obviously, helping pull the laggards along is an important task. Alternatively (or in addition), they may also need to be thinking longer term: where is open data going, and what will second and third generation open data portals need to look like (and what policy infrastructure will be needed to support them)?

These are two different goals, and choosing between them – or balancing them – will not be easy.

Key Challenges

Some of the key challenges spring quite obviously from that context, but there are other challenges I believe are looming as well. So what do I suspect are the key challenges around open data over the next 1-5 years?

  1. Getting the laggards up and running
  2. Getting governments to use standardized licenses that are truly open (be it the PDDL, CC-0 or one of the other available licenses out there)
  3. Cultivating/fostering an eco-system of external data users
  4. Cultivating/fostering an eco-system of internal government users (and vendors) for open data (this is what will really make open data sustainable)
  5. Pushing jurisdictions and vendors towards adopting standard structures for similar types of data (e.g. wouldn’t it be nice if restaurant inspection data from different jurisdictions were structured similarly? See the sketch after this list.)
  6. Raising awareness about abuses of, and the politicization of, data. (e.g. this story about crime data out of New York which has not received nearly enough press)
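
To make the fifth point concrete, here is a sketch of what a shared structure for restaurant inspection data might look like. The field names and records are my own invention for illustration – no such cross-jurisdiction standard existed at the time of writing:

```python
# Hypothetical common schema for restaurant inspection records.
# Field names are invented for illustration; no such standard
# existed across jurisdictions at the time of writing.
from dataclasses import dataclass

@dataclass
class Inspection:
    jurisdiction: str        # e.g. "Vancouver", "Toronto"
    premises_name: str
    address: str
    date: str                # ISO 8601
    result: str              # "pass" | "conditional" | "fail"
    critical_violations: int

# The same application logic works on records from any city that
# publishes data in the shared structure.
records = [
    Inspection("Vancouver", "Example Diner", "123 Main St", "2011-02-01", "pass", 0),
    Inspection("Toronto", "Sample Bistro", "456 King St", "2011-02-03", "fail", 3),
]
failing = [r.premises_name for r in records if r.result == "fail"]
print(failing)
```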

The Tasks/Leverage Points

There are some basic things that the role will require, including:

  1. Overseeing the Working Group on Open Government Data
  2. Managing opengovernmentdata.org
  3. Helping organize the Open Government Data Camp 2011, 2012 and beyond

But what the role will really have to do is figure out the key leverage points that can begin to shift the key challenges listed above in the right direction. The above-mentioned tasks may be helpful in doing that… but they may not be. Success is going to be determined by figuring out how to shift systems (government, vendor, non-profit, etc.) to advance the cause of open data. This will be no small task.

My sense is that some of these leverage points might include:

  1. Organizing open data hackathons – ideally ones that begin to involve key vendors (both to encourage API development and to get them using open data)
  2. Leveraging assets like Civic Commons to get open data policies up online so that jurisdictions entertaining the issue can copy them
  3. Building open data communities in key countries around the world – particularly countries such as Brasil and India, where a combination of solid democratic institutions and a sizable developer community could help trigger changes with ramifications beyond their borders (I suspect there are also some key smaller countries – need to think more on that)
  4. I’m sure this list could be enhanced…

Metrics/Deliverables

Obviously, resolving the above-defined challenges in 1-5 years is probably not realistic. Indeed, resolving many of those issues is probably impossible – it will be a case of ensuring that each time we peel back one layer of the onion we are well positioned to tackle the next layer.

Given this, at a high level, some key metrics by which the Open Knowledge Foundation could evaluate the person in this role might include:

  • Number of open data portals worldwide (number using CKAN?)
  • Number of groups, individuals and cities participating in open data hackathons
  • Number of applications/uses of open data
  • Awareness of CKAN and its mission among the public, developers, government officials and the media
  • Number of government vendors offering open data as part of their solution

Additional deliverables could include:

  • Running two Global OpenData Hackathons a year?
  • Developing an OKFN consulting arm specializing in open data services/implementation
  • Create “open data policy in a box” support materials for implementing an open data strategy in government
  • Develop a global network of OKFN chapters to push their local and national governments and share best practices
  • Run opendata bootcamps for public servants and/or activists
  • Create a local open data hackathon in a box kit (to enable local events)
  • Create a local “how to be an open data activist” site
  • Conduct some research on the benefits of open data to advance the policy debate
  • Create a stronger feedback loop on CKAN’s benefits and weaknesses
  • Create a vehicle to connect VCs and/or money with open-data-driven companies and app developers (or at least assess what barriers remain to using open data in business processes).

Okay, I’ll stop there, but if you have thoughts please send them or comment below. Hope this stimulates some thinking among fellow open data geeks.

The problem with the UBB debate

I reley dont wan to say this, but I have to now.

This debate isso esey!

- Axman13

Really, it is.

The back and forth for and against UBB has – for me – so sadly missed the mark on the real issue that it is beyond frustrating. It’s been nice to see a voice or two like Michael Geist begin to note the real issue – lack of competition – but by and large we continue to have the wrong debate and, more importantly, blame the wrong people.

This really struck home with me while reading David Beers’s piece in the Globe. The first half is fantastic, demanding greater transparency into the cost of delivering bandwidth and of developing a network; this is indeed needed. Sadly, the second half completely lost me: it makes no argument, notably doesn’t call for foreign investment (though he wants the telcos’ oligarchy broken up, so it’s unclear where he thinks that competition is going to come from) and, worse, blames the CRTC for the crisis.

It all prompted me – reluctantly – to outline what I think are the three problems with the debate we’ve been having, from least to most important.

1. It’s not about UBB; it’s about cost.

Are people mad about UBB? Maybe. However, I’m willing to wager that most people who have signed the petition about UBB don’t know what UBB is, or what it means. What they do know is that their Internet Service Provider has been charging a lot for internet access. Too much. And they are tired of it.

A more basic question is: do people hate the telcos? And the answer is yes. I know I dislike them all. Intensely. (You should see my cell phone bill; my American friends laugh at me as it is 4x theirs). The recent decision has simply allowed for that frustration to boil over. It has become yet another example of how a telecommunication oligarchy is allowed to charge whatever it wants for service that is substandard to what is often found elsewhere in the world. Of course Canadians are angry. But I suspect they were angry before UBB.

So, if getting gouged is the issue, the problem with making the debate about UBB is that we risk taking our eye off the real issue – the cost of getting online. Even if the CRTC reverses its decision, we will still be stuck with some of the highest rates for internet access in the world. This is the real issue and should be the focus of the debate.

2. If the real issue is about price, the real solution is competition.

Here Geist’s piece, along with the Globe editorial, is worth reading. Geist states clearly that the root of the problem is a lack of competition. It may be that UBB – especially in a world of WiMax or other high-speed wireless solutions – could become the most effective way to charge for access and encourage investment. Why would we want to forestall such a business model from emerging?

I’m hoping, and am seeing hints, that this part of the debate is beginning to come to the fore, but so long as the focus is on banning UBB, and not on increasing competition, we’ll be stuck having the wrong conversation.

3. The real enemy is not the CRTC; it’s the Minister of Industry.

This, of course, is both the most problematic part of this debate and the most ironic. The opponents of UBB have made the wrong people the enemy. While people may not agree with the CRTC’s decision, the real problem is not of its making. It can only act within the regulatory regime it has been given.

The truth of the matter is, after 40 years of the internet, Canada has no digital economy strategy. Given that it is 2011, this is a stunning fact. Of course, we’ve been promised one, but we’ve heard next to nothing about it since the consultation closed. Indeed – and I hope that I’ll be corrected here – we haven’t even heard when it will land.

The point, to make it clear, is that this is not a crisis of regulatory oversight. This is a crisis of policy mismanagement. So the real people to blame are the politicians – in particular the Industry Minister, who is in charge of this file. But since those in opposition to UBB have made it their goal to scream at the CRTC, the government has been all too happy to play along and scream at the CRTC as well. Indeed, the biggest irony of all is that this has allowed the government to take a populist approach and look responsive to a crisis for which it is ultimately responsible.

P.S. Left-wing bonus reason

So if you are a left-leaning anti-UBB advocate – particularly one who publishes opinion pieces – the most ironic part of this debate is that you are the PMO’s favourite person. Not only have you deflected blame for this crisis away from the government and onto the CRTC, you’ve created the perfect conditions for the government to demand an overhaul (or simply the resignation) of key people on the CRTC board.

The only reson this is ironic is beacuase Konrad W. von Finckenstein (head of the CRTC) may be the main reason why Sun Media’s Category 1 application for its “Fox News North” channel was denied. There is probably nothing the PMO would like more than to brush Kinchenstein aside and reshape the CRTC so as to make this plan a reality.

Wouldn’t it be ironic if a left-leaning coalition enabled the Harper PMO and Sun Media to create their Fox News North?

And who said Canadian politics was boring?

P.P.S. If you haven’t figured out the spelling mistake easter eggs, I’ll make it more obvious: click here. It’s an example of why internet bandwidth consumption is climbing at double digits.