Category Archives: technology

The problem with the UBB debate

I reley dont wan to say this, but I have to now.

This debate isso esey!

– Axman13

Really, it is.

The back and forth for and against UBB has – for me – so missed the mark on the real issue that it is beyond frustrating. It’s been nice to see a voice or two like Michael Geist begin to note the real issue – a lack of competition – but by and large, we continue to have the wrong debate and, more importantly, blame the wrong people.

This really struck home with me while reading David Beers’s piece in the Globe. The first half is fantastic, demanding greater transparency into the cost of delivering bandwidth and of developing a network; this is indeed needed. Sadly, the second half completely lost me: it makes no argument, notably doesn’t call for foreign investment (though he wants the telcos’ oligarchy broken up, so it’s unclear where he thinks that competition will come from) and, worse, blames the CRTC for the crisis.

It all prompted me – reluctantly – to outline what I think are the three problems with the debate we’ve been having, from least to most important.

1. It’s not about UBB; it’s about cost.

Are people mad about UBB? Maybe. However, I’m willing to wager that most people who have signed the petition about UBB don’t know what UBB is, or what it means. What they do know is that their Internet Service Provider has been charging a lot for internet access. Too much. And they are tired of it.

A more basic question is: do people hate the telcos? And the answer is yes. I know I dislike them all. Intensely. (You should see my cell phone bill; my American friends laugh at me as it is 4x theirs.) The recent decision has simply allowed that frustration to boil over. It has become yet another example of how a telecommunications oligarchy is allowed to charge whatever it wants for service that is substandard compared to what is often found elsewhere in the world. Of course Canadians are angry. But I suspect they were angry before UBB.

So, if getting gouged is the issue, the problem with making the debate about UBB is that we risk taking our eye off the real issue – the cost of getting online. Even if the CRTC reverses its decision, we will still be stuck with some of the highest rates for internet access in the world. This is the real issue and should be the focus of the debate.

2. If the real issue is about price, the real solution is competition.

Here Geist’s piece, along with the Globe editorial, is worth reading. Geist states clearly that the root of the problem is a lack of competition. It may be that UBB – especially in a world of WiMAX or other high-speed wireless solutions – could become the most effective way to charge for access and encourage investment. Why would we want to forestall such a business model from emerging?

I’m hoping, and am seeing hints, that this part of the debate is beginning to come to the fore, but so long as the focus is on banning UBB, and not on increasing competition, we’ll be stuck having the wrong conversation.

3. The real enemy is not the CRTC; it’s the Minister of Industry.

This, of course, is both the most problematic part of this debate and the most ironic. The opponents of UBB have made the wrong people the enemy. While people may not agree with the CRTC’s decision, the real problem is not of its making. The CRTC can only act within the regulatory regime it has been given.

The truth of the matter is that, after 40 years of the internet, Canada has no digital economy strategy. Given that it is 2011, this is a stunning fact. Of course, we’ve been promised one, but we’ve heard next to nothing about it since the consultation closed. Indeed – and I hope that I’ll be corrected here – we haven’t even heard when it will land.

The point, to make it clear, is that this is not a crisis of regulatory oversight. This is a crisis of policy mismanagement. So the real people to blame are the politicians – and in particular the Industry Minister, who is in charge of this file. But since those in opposition to UBB have made it their goal to scream at the CRTC, the government has been all too happy to play along and scream at it as well. Indeed, the biggest irony of it all is that this has allowed the government to take a populist approach and look responsive to a crisis that it is ultimately responsible for.

P.S. Left-wing bonus reason

So if you are a left-leaning anti-UBB advocate – particularly one who publishes opinion pieces – the most ironic part of this debate is that you are the PMO’s favourite person. Not only have you deflected blame for this crisis away from the government and onto the CRTC, you’ve created the perfect conditions for the government to demand an overhaul (or simply the resignation) of key people on the CRTC board.

The only reason this is ironic is because Konrad W. von Finckenstein (head of the CRTC) may be the main reason why Sun Media’s Category 1 application for its “Fox News North” channel was denied. There is probably nothing the PMO would like more than to brush von Finckenstein aside and be able to reshape the CRTC so as to make this plan a reality.

Wouldn’t it be ironic if a left-leaning coalition enabled the Harper PMO and Sun Media to create their Fox News North?

And who said Canadian politics was boring?

P.P.S. If you haven’t figured out the spelling mistake easter eggs, I’ll make it more obvious: click here. It’s an example of why internet bandwidth consumption is climbing at double digits.

Why the CRTC was right on Usage-Based Billing

Up here in Canada (and I say that in the identity sense, since at the moment I’m in Santa Clara at the Strata Conference) a lot of fuss has been made about the CRTC’s decision to approve usage-based billing. So much fuss, in fact, that it appears the government is going to overturn it.

One thing that has bothered me about these complaints is that they have generally come from people who also seem to oppose internet service providers throttling internet access. It’s unclear to me that you can have it both ways – you can’t (responsibly) be against both internet throttling and usage-based billing. As much as I wish it were the case, there is not unlimited internet access in Canada. At some point this genuinely is a scarce resource, and if you give people unlimited access at a fixed price, at some point the system is going to collapse…

Indeed, what really concerns me is the incentive structure forbidding usage-based billing creates. There is a finite market for broadband access in Canada so the capital for increasing capacity can’t come exclusively from signing up new users. If you make it so that fixed bills are the only way to bill customers then what incentives do internet providers have to improve capacity? At best they will be incented only to provide a minimally viable service. I mean, why build out when you won’t be able to get a return on investment for the extra capacity?

I’d prefer to have an internet provider market where the players are building out their network in order to meet the needs of the most demanding users who are willing to pay for the extra bandwidth. Why? Because it will ensure that capacity keeps increasing as the large players continue to fight to meet the needs of that market. This means there is a financial incentive to increase bandwidth – which is ultimately what you want the incentives to be.

Besides, if – like me – you happen to believe that roads should be tolled, then it’s unclear why you shouldn’t also feel that consumers of large quantities of bandwidth should pay more than someone who barely consumes any at all. Why should low-bandwidth users subsidize high-bandwidth users (or worse, why should innovative services be made useless because the alternative is throttling)?

I want to be clear: none of this is to say that we shouldn’t regulate the ISP business or that we should treat the internet service providers as trustworthy. We are still in an oligarchy, something their behaviour reminds me of every day. I agree that the ISPs’ demands are in part an effort to make services like Netflix – which threaten the ISPs’ other business, cable TV – less attractive. So, if we are going to engage in usage-based billing, then I’d expect a few things, including:

  • a generous baseline of fixed-fee internet usage per month. (In an ideal world I’d actually say a basic amount should be free, as I believe access to the internet, like access to books in a library, is increasingly becoming a necessary basic service of our society.)
  • let’s have REAL usage-based billing. This means, let’s do usage-based billing that will make us more efficient. Charge me more at peak times, less during off-peak times, the way electricity companies do. That way I’ll bittorrent my files at night when it costs next to nothing, and be smarter about consumption during peak hours.
  • real transparency into how much the ISPs are investing into increasing their capacity.
  • bandwidth from certain IP addresses – like Parliament, Provincial Legislatures and City Halls – should be unlimited. No one should be eating into their fixed-price limit or charged extra while engaging in their most basic democratic rights (so unlimited CSPAN video watching).
  • your network must now be neutral. One reason I like usage-based billing is that it destroys a major argument used to justify traffic shaping – that the network can’t handle the demand. Well, now you get rewarded for high demand – so satisfy it! If consumer advocates can’t oppose both usage-based billing and throttling, then telcos and cable companies can’t have both either.
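To make the time-of-day billing idea above concrete, here is a minimal sketch of what such a scheme might look like. Everything here is hypothetical – the rates, the baseline, and the function name are invented purely for illustration, not drawn from any ISP’s actual pricing:

```python
# Hypothetical time-of-day usage-based billing, analogous to
# peak/off-peak electricity pricing. All numbers are invented.

PEAK_RATE_PER_GB = 0.50      # daytime/evening, when the network is busy
OFF_PEAK_RATE_PER_GB = 0.05  # overnight, when capacity sits idle
FREE_BASELINE_GB = 25        # a generous fixed-fee baseline

def monthly_bill(peak_gb, off_peak_gb):
    """Return the month's overage charge in dollars.

    The free baseline is applied to peak usage first, since that is
    the traffic that actually strains the network; any baseline left
    over offsets off-peak usage.
    """
    billable_peak = max(0, peak_gb - FREE_BASELINE_GB)
    remaining_baseline = max(0, FREE_BASELINE_GB - peak_gb)
    billable_off_peak = max(0, off_peak_gb - remaining_baseline)
    return (billable_peak * PEAK_RATE_PER_GB
            + billable_off_peak * OFF_PEAK_RATE_PER_GB)

# A user who shifts big downloads to night pays far less than one
# who consumes the same total during peak hours:
print(monthly_bill(peak_gb=40, off_peak_gb=100))  # 15 GB peak + 100 GB off-peak
print(monthly_bill(peak_gb=140, off_peak_gb=0))   # 115 GB peak
```

The design choice worth noticing is exactly the one in the bullet: because off-peak bytes are nearly free, the pricing itself nudges heavy users to shift their consumption, which smooths demand without any throttling.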

I can imagine that this post will make some of my colleagues upset. Please fire away, tell me how I’ve got it all wrong. But please make sure that you’ve got an answer that addresses some of the concerns raised here. If you’ve been against throttling (and you know who you are), explain to me how we can (sustainably) have both zero throttling and unlimited fixed-fee internet access. In a world where online video is taking off, I’m just not sure I see it. Unless, of course, we think Google is going to provide the answer.

Finally, if you haven’t read it, Richard French has a very thoughtful piece in the Globe and Mail entitled Second-Guessing the CRTC Comes at a Price – check it out. It certainly helped reaffirm some of my own thinking.

Egypt: Connected to revolution

This piece is cross-posted from the Opinion Page of the Toronto Star which was kind enough to publish it this morning.

Over the weekend something profound happened. The Egyptian government, confronted with growing public unrest, attempted to disconnect itself. It shut down its cellular and telephone networks and unplugged from the Internet.

It was a startling recognition of this single most powerful force driving change in our world: connectivity. Our world is increasingly divided between the connected and the disconnected, between open and closed. This could be the dominant struggle of the 21st century and it forces us to face important questions about our principles and the world we want to live in.

Why does connectivity matter? Because it allows for free association and self-expression, both of which can allow powerful narratives to emerge in a society beyond the control of any elite.

In Egypt, the protests do not appear driven by some organized cabal. The Muslim Brotherhood — so long held up as the dangerous alternative to the regime — was caught flat-footed by the protests. The National Coalition for Change, headed by Nobel laureate Mohamed ElBaradei, seems to have emerged as the protesters’ leader, not their instigator.

Instead, Egypt may simply have reached a tipping point. Its citizens, having witnessed the events in Tunisia, came to realize they were no longer atomized and uncoordinated in the face of a police state. They could self-organize, connect with one another, share stories and videos, organize meetings and protests. In short, they could tell their own narratives to one another, outside the government’s control.

These stories can be powerful.

In Egypt, a video of an unknown protester being shot and carried away has generated a significant viewership. In Iran, the video of Neda Agha-Soltan dying from a gunshot wound transformed her into a symbol. In Tunisia, videos of protestors being shot also helped mobilize the public.

Indeed, as the family of Mohamed Bouazizi — the man who, by setting himself on fire out of frustration with local authorities, triggered the Tunisian protests — noted to an Al Jazeera reporter, people are protesting with “a rock in one hand, a cellphone in the other.”

This is what makes movements like this so hard to fight. There is no opposition group to blame, no subversive leadership to decapitate, no central broadcast authority to shut down. The only way to stop the protests is to eliminate the participants’ capacity to self-organize. During the Green Revolution in Iran, that meant shutting down some key websites; in Egypt, it appears to mean shutting down all communication.

Of course, this state of affairs cannot continue indefinitely. Too much of the Egyptian economy depends on people being able to connect. The network that makes possible a modern economy also makes possible a popular uprising.

At some point Egypt will have to decide: disconnect forever like North Korea, or reconnect and confront the reality of the connected world.

For those of us who believe in freedom, individuality, self-expression and democracy, connectivity is among our most powerful tools because it makes possible alternative narratives.

From East Germany to the Philippines, Iran to Tunisia, connectivity has played a key role in helping people organize against governments that would deny them their rights. It’s a tool democracies have often used, from broadcasts like Radio Free Europe during the Cold War to the U.S. government’s request that Twitter not conduct a planned upgrade to its website that would have disrupted its service during the recent Iranian Green Revolution.

But if we believe in openness, we must accept its full consequences. Our own governments have a desire to disconnect us from one another when they deem the information to be too dangerous.

Today most U.S. government departments, and some Canadian ministries, still deny their employees access to WikiLeaks documents, disconnecting them from information that is widely available to the general public.

More darkly, the government pressured companies such as Amazon and Paypal to not offer their services to WikiLeaks — much like the Iranian government tried to disrupt Twitter’s service and the Tunisian government attempted to hijack Facebook’s. Nor is connectivity a panacea. In Iran, the regime uses photos and videos from social networks and websites to track down protestors. Connectivity does not guarantee freedom; it is simply a necessary ingredient.

The events in Egypt are a testament to the opportunity of the times we live in. Connectivity is changing our world, making us more powerful individually and collectively. But ultimately, if we wish to champion freedom and openness abroad — to serve as the best possible example for countries like Egypt — we must be prepared to do so at home.

David Eaves is a Vancouver-based public policy entrepreneur and adviser on open government and open data. He blogs at eaves.ca

How Yelp Could Help Save Millions in Health Care Costs

Okay, before I dive in, a few things.

1) Sorry for the lack of posts last week. Life’s been hectic. Between Code for America, a number of projects and a few articles I’m trying to get through, the blogging slipped. Sorry.

2) I’m presenting on Open Data and Open Government to the Canadian Parliament Access to Information, Privacy and Ethics Committee today – more on that later this week

3) I’m excited about this post

When it comes to opening up government data, many of us focus on governments: we cajole, we pressure, we try to persuade them to open up their data. It’s an approach we will continue to have to take for a great deal of the data our tax dollars pay to collect and that governments continue to not share. There is, however, another model.

Consider transit data. This data is sought after, intensely useful, and probably the category of data most experimented with by developers. Why is this? Because it has been standardized. Why has it been standardized? Because local governments (responding to citizen demand) have been desperate to get their transit data integrated with Google Maps.

It turns out that to get your transit data into Google Maps, Google insists you submit the data in a single structured format – something that has come to be known as the General Transit Feed Specification (GTFS). The great thing about the GTFS is that it isn’t just Google that can use it. Anyone can play with data converted into the GTFS. Better still, because the data structure is standardized, an application someone develops, or analysis they conduct, can be ported to other cities that share their transit data in the GTFS format (like, say, my home town of Vancouver).
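A rough illustration of why the standard matters: a GTFS feed is essentially a zip of plain CSV files (stops.txt, routes.txt, trips.txt and so on), so a few lines of generic code can read any city’s feed. The two stops below are made up, but the column names (stop_id, stop_name, stop_lat, stop_lon) come from the actual specification:

```python
import csv
import io

# A tiny, invented stops.txt in the standard GTFS layout. Any city's
# real feed uses the same column names - which is the whole point:
# code written against one feed ports to every other one.
STOPS_TXT = """stop_id,stop_name,stop_lat,stop_lon
S1,Main St at Broadway,49.2820,-123.1171
S2,Granville at Georgia,49.2837,-123.1207
"""

def load_stops(text):
    """Parse a GTFS stops.txt into a list of dicts with float coordinates."""
    stops = []
    for row in csv.DictReader(io.StringIO(text)):
        row["stop_lat"] = float(row["stop_lat"])
        row["stop_lon"] = float(row["stop_lon"])
        stops.append(row)
    return stops

stops = load_stops(STOPS_TXT)
print(len(stops), stops[0]["stop_name"])  # 2 Main St at Broadway
```

The same `load_stops` would work unchanged on Vancouver’s feed, Portland’s, or any other city that publishes GTFS – that portability is what a shared specification buys you.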

In short, what we have here is a powerful model both for creating open data and standardizing this data across thousands of jurisdictions.

So what does this have to do with Yelp! and Health Care Costs?

For those not in the know, Yelp! is a location-based rating service for mobile phones. I’m particularly a fan of its restaurant locator: it will show you what is nearby and how it has been rated by other users. Handy stuff.

But think bigger.

Most cities in North America inspect restaurants for health violations. This is important stuff. Restaurants with more violations are more likely to transmit diseases and food-borne illnesses, give people food poisoning and god knows what else. Sadly, in most cases the results of these inspections are posted in the most useless place imaginable: the local authority’s website.

I’m willing to wager almost anything that the only time anyone visits a food inspection website is after they have gotten food poisoning. Why? Because they want to know if the jerks have already been cited.

No one checks these agencies’ websites before choosing a restaurant. Consequently, one of the biggest benefits of the inspection data – shifting market demand to more sanitary options – is lost. And of course, there is real evidence that restaurants will improve their sanitation, and that people will discriminate against restaurants with poor ratings from inspectors, when the data is conveniently available. Indeed, in the book Full Disclosure: The Perils and Promise of Transparency, Fung, Graham and Weil note that after Los Angeles required restaurants to post food inspection results, “Researchers found significant effects in the form of revenue increases for restaurants with high grades and revenue decreases for C-graded (poorly rated) restaurants.” More importantly, the study Fung, Graham and Weil reference also suggested that making the rating system public positively impacted healthcare costs. Again, after inspection results in Los Angeles were posted on restaurant doors (not on some never-visited website), the county experienced a reduction in emergency room visits, the most expensive point of contact in the system. As the study notes, these were:

an 18.6 percent decline in 1998 (the first year of program operation), a 4.8 percent decline in 1999, and a 5.4 percent decline in 2000. This pattern was not observed in the rest of the state.

This is a stunning result.

So now imagine that, rather than just offering contributor-generated reviews of restaurants, Yelp! actually shared real food inspection data! Think of the impact this would have on the restaurant industry. Suddenly, everyone with a mobile phone and Yelp! (it’s free) could make an informed decision not just about the quality of a restaurant’s food, but also about its sanitation. Think of the millions (100s of millions?) that could be saved in the United States alone.

All that needs to happen is a simple first step: Yelp! needs to approach one major city – say New York, or San Francisco – and work with them to develop a sensible way to share food inspection data. This is what happened with Google Maps and the GTFS: it all started with one city. Once Yelp! develops the feed, call it something generic, like the General Restaurant Inspection Data Feed (GRIDF), and tell the world you are looking for other cities to share their data in that format. If they do, you promise to include it in your platform. I’m willing to bet anything that once one major city has it, other cities will start to clamour to get their food inspection data shared in the GRIDF format. What makes it better still is that it wouldn’t just be Yelp! that could use the data. Any restaurant review website or phone app could use it – be it Urbanspoon or the New York Times.
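The GRIDF proposed here doesn’t exist, but to sketch what the idea amounts to: if every city published inspection results in one agreed format with a shared restaurant identifier, a review app could join them to its own listings with a trivial lookup. Everything below – the schema, the field names, the records – is invented for illustration:

```python
# Hypothetical GRIDF-style inspection records, keyed by a shared
# restaurant identifier. The schema is entirely made up.
inspections = [
    {"restaurant_id": "r-101", "grade": "A", "violations": 0, "inspected": "2011-01-12"},
    {"restaurant_id": "r-102", "grade": "C", "violations": 7, "inspected": "2011-01-19"},
]

# The review platform's own listings (also invented sample data).
listings = [
    {"restaurant_id": "r-101", "name": "Example Bistro", "stars": 4.5},
    {"restaurant_id": "r-102", "name": "Sample Diner", "stars": 4.0},
]

def annotate_with_grades(listings, inspections):
    """Attach each listing's inspection grade, or None if the city
    hasn't published data for that restaurant."""
    by_id = {i["restaurant_id"]: i for i in inspections}
    annotated = []
    for biz in listings:
        merged = dict(biz)
        inspection = by_id.get(biz["restaurant_id"])
        merged["grade"] = inspection["grade"] if inspection else None
        annotated.append(merged)
    return annotated

for biz in annotate_with_grades(listings, inspections):
    print(biz["name"], biz["stars"], biz["grade"])
```

The point of the sketch is how little engineering stands between the data and the consumer once the format is standardized: the hard part is the one-city agreement, not the code.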

The opportunity here is huge. It’s also a win for everyone: Consumers, Health Insurers, Hospitals, Yelp!, Restaurant Inspection Agencies, even responsible Restaurant Owners. It would also be a huge win for Government as platform and open data. Hey Yelp. Call me if you are interested.

The Next International Open Data Hack Day – initial thoughts

Yesterday I got to meet up with Edward Ocampo-Gooding and Mary Beth Baker in Ottawa, and we started talking about the next international open data hackathon: when might be a good time to do it, what might it look like, etc…

One idea is to set a theme that might help inspire people and serve as something to weave the events together in a stronger way. Edward proposed the theme of moms and, since Mother’s Day falls in May in many, many, many countries, it seemed like a nice suggestion.

It also has a few nice benefits:

  • it gets us away from an exclusive focus on government and might get people in the headspace of creating applications with tangible uses – something almost everyone can relate to
  • many people have moms! so getting into the shoes of a mom and imagining what might be interesting, engaging and/or helpful shouldn’t be impossible
  • it might engage new people in the open data movement and in the local events

In addition, another suggestion was the idea of focusing on a few projects that have already been specced out in advance – much like Random Hacks of Kindness does with its hackathons. I think this could be fruitful to explore.

Finally, regarding timelines, I’m thinking May. It works thematically (if that theme gets used). More importantly, however, it’s far enough out to plan, near enough to be tangible, and sets a nice pace of two global hackathons a year, which feels sufficiently ambitious for a group of volunteers, doesn’t crowd out/compete with other hackathons or local events, and seems like a good check-in timeline for volunteer-driven projects… Also, it might give people a chance to use ScraperWiki in the interim to get data together for projects they want to work on.

Thoughts on all this? Please blog, post a comment below, or if you are feeling shy, drop me a note (david at eaves.ca or @daeaves with hashtag #odhd on twitter). I’ve also created a page on the Open Data Day wiki to discuss this if people are more comfortable with that.

What I’m doing at Code for America

For the last two weeks – and for much of January – I’ve been in San Francisco helping out with Code for America. What’s Code for America? Think Teach for America, but rather than deploying people into classrooms to help provide positive experiences for students and teachers while attempting to shift the culture of school districts, Code for America has fellows work with cities to help develop reusable code to save cities money, make local government as accessible as your favorite website, and help shift the government’s culture around technology.

The whole affair is powered by a group of 20 amazing fellows and an equally awesome staff that has been working for months to make it all come together. My role – in comparison – is relatively minor: I head up the Code for America Institute – a month-long educational program the fellows go through when they first arrive. I wanted to write about what I’ve been trying to do, both because of the openness ideals of Code for America and to share any lessons for others who might attempt a similar effort.

First, to understand what I’m doing, you have to understand the goal. On the surface, to an outsider, the Code for America change process might look something like this:

  1. Get together some crazy talented computer programmers (hackers, if you want to make the government folks nervous)
  2. Unleash them on a partner city with a specific need
  3. Take resulting output and share across cities

Which, of course, would mistakenly frame the problem as technical. However, Code for America is not about technology. It’s about culture change. The goal is to rethink and reimagine government as better, faster, cheaper and adaptive. It’s about helping government think of the ways its culture can embrace being a platform – open and highly responsive.

I’m helping (I think) because I’ve enjoyed some success in getting governments to think differently. I’m not a computer developer, and at their core these successes were never technology problems. The challenge is understanding how the system works, identifying the leverage points for making change, developing partners and collaborating to engage those leverage points, and doing whatever it takes to ensure it all comes together.

So this is the message and the concept the speakers are trying to impart to the fellows. Or, in other words, my job is to help unleash the already vibrant change agents within the 20 awesome fellows and make them effective in the government context.

So what have we done so far?

We’ve focused on three areas:

1) Understand Government: Some of the fellows are new to government, so we’ve had presentations from local government experts like Jay Nath, Ed Reiskin and Peter Koht, as well as the Mayor of Tucson’s chief of staff (to give a political perspective). And of course, Tim O’Reilly has spoken about how he thinks government must evolve in the 21st century. The goal: understand the system, and understand and respect the actors within it.

2) Initiate & Influence: Whether it is launching your own business (Eric Ries on startups), starting a project (Luke Closs on Vantrash), understanding what happens when two cultures come together (Caterina Fake on Yahoo buying Flickr), or myself on negotiating, influence and collaboration – our main challenges will not be technical; they will be systems-based and social. If we are to build projects and systems that are successful and sustainable, we need to ask the right questions and engage with these systems respectfully as we try to shift them.

3) Plan & Focus: Finally, we’ve had experts in planning and organizing. People like Allen Gunn (Gunner) and the folks from Cooper Design, who’ve helped the fellows think about what they want, where they are going, and what they want to achieve. Know thyself, be prepared, have a plan.

The last two weeks will continue to pick up these themes but also give the fellows more time to (a) prepare for the work they will be doing with their partner cities; and (b) learn from one another. We’re halfway through the institute at this point, and I’m hoping the experience has been a rich – if sometimes overwhelming – one. Hopefully I’ll have an update again at the end of the month.

Honourable Mention! The Mozilla Visualization Challenge Update

Really pleased to share that Diederik and I earned an honourable mention for our submission to the Mozilla Open Data Competition.

For those who missed it – and who find opendata, open source and visualization interesting – you can read a description of and see images from our submission to the competition in this blog post I wrote a month ago.

A Response to an Ottawa Gov 2.0 Skeptic

So many, many months ago a Peter R. posted this comment (segments copied below) under a post I’d written titled Prediction: The Digital Economy Strategy Will Fail if it Isn’t Drafted Collaboratively on GCPEDIA. At first blush Peter’s response felt aggressive. I flipped him an email to say hi and he responded in a very friendly manner. I’ve been meaning to respond for months but life’s been busy. However, over the break (and my quest to hit inbox zero) I finally carved out some time. My fear is that this late response will sound like a counter-attack – it isn’t intended as such, but rather as an effort to respond to a genuine question. I thought it would be valuable to post, as many of the points may resonate with supporters and detractors of Gov 2.0 alike. Here are my responses to the various charges:

The momentum, the energy and the excitement behind collaborative/networked/web 2.0/etc is only matched by the momentum, the energy and the excitement that was behind making money off of leveraged debt instruments in the US.

Agreed, there is a great deal of energy and excitement behind collaborative networks, although I don’t think it is sparked – as insinuated – by something analogous to bogus debt instruments. People are excited because of the tangible results created by sharing and/or co-production networks: Wikipedia, Mozilla, Flickr, Google Search and Google Translate (your results improve based on users’ data) and Ushahidi inspire people because of the tremendous results they achieve with a smaller footprint of resources. I think the question of what these types of networks and this type of collaboration mean for government is an important one – and as people experiment there will be failures – but to equate the entire concept of Gov 2.0, and the above-cited organizations, tools and websites, with financial instruments that repackaged subprime mortgages is, in my mind, fairly problematic.

David, the onus lies squarely with you to prove that policymakers across government are are incapable of finding good policy solutions WITHOUT letting everyone and his twitting brother chime in their two cents.

Actually, the onus doesn’t lie squarely with me. This is a silly statement. In fact, for those of us who believe in collaborative technologies such as GCPEDIA or Yammer, this sentence is the single most revealing point in Peter’s entire comment. I invite everyone and anyone to add to my rebuttal, or to Peter’s argument. Even those who argue against me would be proving my point – tools like blogs and GCPEDIA allow ideas and issues to be debated with a greater number of people and a wider set of perspectives. The whole notion that any thought or solution lies solely with one person is the type of thinking that leads to bad government (and pretty much bad anything). I personally believe that the best ideas emerge when they are debated and contested – honed by having flaws exposed and repaired. Moreover, this has never been more important than today, when more and more issues cross ministerial divides. Indeed, the very fact that we are having this discussion on my blog, and that Peter deemed it worthy of comment, is a powerful counterpoint to this statement.

Equally important, I never said policymakers across government are incapable of finding good policy solutions. This is a serious misinterpretation of what I said. I did say that the Digital Economy Strategy would fail (and I’ll happily revise and soften that to say it will likely not be meaningful) unless written on GCPEDIA. I still believe this. I don’t believe you can have people writing policy about how to manage an economy who are outside of, and/or don’t understand, the tools of that economy. I actually think our public servants can find great policy solutions – if we let them. In fact, most public servants I know spend most of their time trying to track down public servants in other ministries or groups to consult them about the policy they are drafting. In short, they spend all their time trying to network, but using the tools of the late 20th century (like email), the mid 20th century (the telephone), or the mid 3rd century BC (the meeting) to do it. I just want to give them more efficient tools – digital tools, like those we use in a digital economy – so they can do what they are already doing.

For the last 8 years I’ve worked in government, I can tell you with absolute certainty (and credibility!) that good policy emerges from sound research and strategic instrument choice. Often (select) public consultations are required, but sometimes none at all. Take 3 simple and highly successful policy applications: seat belt laws, carbon tax, banking regulation. Small groups of policymakers have developed policies (or laws, regs, etc) to brilliant effect… sans web 2.0. So why do we need gcpedia now?

Because the world expects you to do more, faster and with less. I find this logic deeply concerning coming from a public servant. No doubt governments developed policies to brilliant effect before the wide adoption of the computer, or even the telephone. Should we get rid of those too? An increasing number of the world’s major corporations are setting up internal wikis and collaboration platforms, social networking sites, and even microblogging services like Yammer to foster internal collaboration. These things help us do research and develop ideas faster and, I think, better. The question isn’t why we need GCPEDIA now. The question is why we aren’t investing to make GCPEDIA a better platform. The rest of the world is.

I’ll put this another way: tons of excellent policy solutions are waiting in the shared drives of bureaucrats across all governments.

I agree. Let’s at least put them on a wiki where more people can read them, leverage them and, hopefully, implement them. Sitting on a great idea that only three other people in the entire public service have read isn’t a recipe for getting it adopted. Nor is it a good use of Canadian tax dollars. Socializing it is. Hence, social media.

Politics — being what it is — doesn’t generate progressive solutions for various ideological reasons (applies equally to ndp, lib, con). First, tell us what a “failed” digital economy strategy (DES) looks like. Second, tell us what components need to be included in the DES for it to be successful. Third, show us why gcpedia/wikis offer the only viable means to accumulate the necessary policy ingredients.

For the last part – see my earlier post and above. As for what a failed digital economy strategy looks like: it will be one that is irrelevant, one that goes ignored by the majority of people who actually work in the digital economy. Of course, an irrelevant strategy would still be better than a truly bad one, which, I suspect, is also a real risk based on the proceedings of Canada 3.0. (That conference seemed to be about how to save analog businesses that are about to be destroyed by the digital economy – a theme I’ve tackled in one of my favourite posts.) And of course I have other metrics that matter to me. All that said, after meeting the public servant in charge of the process at Canada 3.0, I was really, really encouraged – she is very smart and gets it.

She also found the idea of writing the policy on GCPEDIA intriguing. I have my doubts that that is how things are proceeding, but it gives me hope.

Canada's Secret Open Data Strategy?

Be prepared for the most boring opening sentence to an intriguing blog post.

The other night, I was, as one is wont to do, reading through a random Organisation for Economic Co-operation and Development report entitled Towards Recovery and Partnership with Citizens: The Call for Innovative and Open Government. The report was, in fact, a summary of the recent Ministerial Meeting of the OECD’s Public Governance Committee.

Naturally, I flipped to the section authored by Canada and, imagine the interest with which I read the following:

The Government of Canada currently makes a significant amount of open data available through various departmental websites. Fall 2010 will see the launch of a new portal to provide one-stop access to federal data sets by providing a “single-window” to government data. In addition to providing a common “front door” to government data, a searchable catalogue of available data, and one-touch data downloading, it will also encourage users to develop applications that re-use and combine government data to make it useful in new and unanticipated ways, creating new value for Canadians. Canada is also exploring the development of open data policies to regularise the publication of open data across government. The Government of Canada is also working on a strategy, with engagement and input from across the public service, developing short and longer-term strategies to fully incorporate Web 2.0 across the government.

In addition, Canada’s proactive disclosure initiatives represent an ongoing contribution to open and transparent government. These initiatives include the posting of travel and hospitality expenses, government contracts, and grants and contribution funding exceeding pre-set thresholds. Subsequent phases will involve the alignment of proactive disclosure activities with those of the Access to Information Act, which gives citizens the right to access information in federal government records.

Lots of interesting things are packed into these two paragraphs, something I’m sure readers concerned with open data, open government and proactive disclosure would agree with. So let’s look at the good, the bad and the ugly of all of this, in that order.

The Good

Naturally, the first sentence is debatable. I don’t think Canada makes a significant amount of its data available at all. Indeed, across every government website there are probably no more than 400 data sets available in machine-readable format. That’s less than the city of Washington, DC. It’s about (indeed, less than) 1% of what Britain or the United States disclose. But okay, let’s put that unfortunate fact aside.

The good and really interesting thing here is that the Government is stating that it was going to launch an open data portal. This means the government is thinking seriously about open data. This means – in all likelihood – policies are being written, people are being consulted (internally), processes are being thought through. This is good news.

It is equally good news that the government is developing a strategy for deploying Web 2.0 technologies across the government. I hope this will happen quickly, as I’m hearing that in many departments these tools are still not embraced and, quite often, are banned outright. Of course, using social media tools to talk with the public is actually the wrong focus (since the communications groups will own it all and likely won’t get it right for quite a while); the real hope lies in being allowed to use the tools internally.

The Bad

On the open data front, the bad is that the portal has not launched. We are now definitely past the fall of 2010 and, for whatever reason, there is no Canadian federal open data portal. This may mean that the policy (despite being announced publicly in the above document) is in peril, or that it is simply delayed. Innumerable things can delay a project like this (especially on the open data front). Hopefully whatever the problem is, it can be overcome. More importantly, let us hope the government does something sensible around licensing and uses the PDDL rather than some other license.

The Ugly

Possibly the heart-stopping moment in this brief comes in the last paragraph, where the government talks about posting travel and hospitality expenses. While these are often posted (such as here), they are almost never published in machine-readable format, and so have to be scraped in order to be organized, mashed up or compared across departments. Worse still, these files are scattered across literally hundreds of government websites and so are virtually impossible to track down. This guy has done just that; but of course, now that he has the data, it is more easily navigable but no more open than before. In addition, it takes him weeks (if not months) to do, something the government could fix rather simply.
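To see why “posted but not machine-readable” is such a burden, here is a minimal sketch of what scraping just one such disclosure page involves, using only Python’s standard library. The page markup, column names and figures below are entirely invented for illustration; real departmental pages each use their own (inconsistent) structure, which is exactly why doing this at scale takes weeks.

```python
# Hypothetical example: turning one scraped HTML expense table into
# machine-readable records. Real disclosure pages vary wildly, so a
# parser like this must be re-tuned for nearly every department.
from html.parser import HTMLParser

SAMPLE_PAGE = """
<table>
  <tr><th>Name</th><th>Purpose</th><th>Total</th></tr>
  <tr><td>J. Smith</td><td>Conference travel</td><td>$1,245.50</td></tr>
  <tr><td>A. Tremblay</td><td>Working lunch</td><td>$86.20</td></tr>
</table>
"""

class ExpenseTableParser(HTMLParser):
    """Collects the text of each <td>/<th> cell, grouped by <tr>."""
    def __init__(self):
        super().__init__()
        self.rows = []        # completed rows
        self._row = []        # cells of the row currently being read
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        # Only keep text that appears inside a table cell.
        if self._in_cell and data.strip():
            self._row.append(data.strip())

def parse_expenses(html):
    """Return the table as a list of dicts keyed by the header row."""
    parser = ExpenseTableParser()
    parser.feed(html)
    header, *body = parser.rows
    return [dict(zip(header, row)) for row in body]

records = parse_expenses(SAMPLE_PAGE)
```

Multiply this by hundreds of departmental sites, each with its own markup, and the weeks-of-work claim becomes obvious. If the same tables were simply published as CSV alongside the web pages, the scraper would be unnecessary.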

The government should be lauded for trying to make this information public. But if this is their notion of proactive disclosure and open data, then we are in for a bumpy, ugly ride.

One Simple Step to Figure out if your Government "gets" Information Technology

Chris Moore has a good post up on his blog at the moment that asks “Will Canadian Cities ever be Strategic?” In it (and it is very much worth reading) he hits on a theme I’ve focused on in many of my talks to government, but that I think is also relevant to citizens who care about how technologically sophisticated their government is (which is a metric of how relevant you think your government is going to be in a few years).

If you want to know how serious your government or ministry is about technology there is a first simple step you can take: look at the org chart.

Locate the Chief Information Officer (CIO) or Chief Technology Officer (CTO). Simple question: Is he or she part of the senior executive team?

If yes – there is hope that your government (or the ministry you are looking at) may have some strategic vision for IT.

If no – and this would put you in the bucket with about 80% of local governments, and provincial/federal ministries – then your government almost certainly does not have a strong vision, and it isn’t going to be getting one any time soon.

As Chris also notes, and as I’ve been banging away at (such as during my keynote to MISA Ontario), in most governments in Canada the CIO/CTO does not sit at the executive table. At the federal level they are frequently Directors General (or lower), not Assistant Deputy Minister-level roles. At the local level, they often report into someone at the C-level.

This is insanity.

I’m trying to think of a Fortune 500 company – particularly one that operates in the knowledge economy – with this type of reporting structure, and I can’t. The business of government is about managing information… to better regulate, manage resources and/or deliver services. You can’t talk about how to do that without the CIO/CTO being part of the conversation.

But that’s what happens almost every single day in many government orgs.

Sadly, it gets worse.

In most organizations the CIO/CTO reports into the Chief Financial Officer (CFO). This really tells you what the organization thinks of technology: It is a cost centre that needs to be contained.

We aren’t going to reinvent government when the person in charge of the infrastructure – on which most of the work gets done, the services get delivered, and pretty much everything else the org does depends – isn’t even part of the management team or of the organization’s strategic conversations.

It’s a sad state of affairs, and indicative of why our governments are so slow to engage with new technology.