Category Archives: public policy

Why the CRTC was right on Usage-Based Billing

Up here in Canada (and I say that in the identity sense, since at the moment I'm in Santa Clara at the Strata Conference) a lot of fuss has been made about the CRTC's decision approving usage-based billing. So much fuss, in fact, that it appears the government is going to overturn it.

One thing that has bothered me about these complaints is that they have generally come from people who also seem to oppose internet service providers throttling internet access. It's unclear to me that you can have it both ways – you can't (responsibly) be against both internet throttling and usage-based billing. As much as I wish it were the case, there is no unlimited internet access in Canada. At some point this genuinely is a scarce resource, and if you give people unlimited access at a fixed price, at some point the system is going to collapse…

Indeed, what really concerns me is the incentive structure forbidding usage-based billing creates. There is a finite market for broadband access in Canada so the capital for increasing capacity can’t come exclusively from signing up new users. If you make it so that fixed bills are the only way to bill customers then what incentives do internet providers have to improve capacity? At best they will be incented only to provide a minimally viable service. I mean, why build out when you won’t be able to get a return on investment for the extra capacity?

I’d prefer to have an internet provider market where the players are building out their network in order to meet the needs of the most demanding users who are willing to pay for the extra bandwidth. Why? Because it will ensure that capacity keeps increasing as the large players continue to fight to meet the needs of that market. This means there is a financial incentive to increase bandwidth – which is ultimately what you want the incentives to be.

Besides, if – like me – you happen to believe that roads should be tolled, then it's unclear why you shouldn't also feel that consumers of large quantities of bandwidth should pay more than someone who barely consumes any at all. Why should low-bandwidth users subsidize high-bandwidth users (or worse, why should innovative services be rendered useless because the only other solution is throttling)?

I want to be clear. All of this isn't to say that we shouldn't regulate the ISP business or that we should treat internet service providers as trustworthy. We are still in an oligopoly, something their behaviour reminds me of every day. I agree that the ISPs' demands are in part an effort to make services like Netflix – which threaten the ISPs' other business, cable TV – less attractive. So, if we are going to engage in usage-based billing then I'd expect a few things, including:

  • a generous baseline of fixed-fee internet usage per month. (In an ideal world I'd actually say a basic amount should be free – as I believe access to the internet, like access to books in a library, is increasingly becoming a necessary basic service of our society)
  • let's have REAL usage-based billing. This means usage-based billing that will make us more efficient. Charge me more at peak times and less during off-peak times, the way electricity companies do. That way I'll bittorrent my files at night when it costs next to nothing, and be smarter about consumption during peak hours.
  • real transparency into how much the ISPs are investing into increasing their capacity.
  • bandwidth from certain IP addresses – like Parliament, Provincial Legislatures and City Halls – should be unlimited. No one should be eating into their fixed-price limit or be charged extra while engaging in their most basic democratic rights (so unlimited CSPAN video watching)
  • your network must now be neutral. One reason I like usage-based billing is that it destroys a major argument used to justify traffic shaping – that the network can't handle the demand. Well, now you get rewarded for high demand – so satisfy it! If consumer advocates can't oppose both usage-based billing and throttling, then telcos and cable companies can't have both either.
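To make the electricity analogy in the list above concrete, here's a toy sketch of what time-of-use billing looks like in code. The rates and the peak window are numbers I've invented purely for illustration – a real ISP tariff would look different:

```python
# Toy time-of-use billing. The rates and the peak window here are
# invented for illustration; a real tariff would come from the ISP.
PEAK_RATE = 0.50      # dollars per GB, 8am to 11pm
OFF_PEAK_RATE = 0.05  # dollars per GB, overnight

def bill(usage_by_hour):
    """usage_by_hour maps hour of day (0-23) to GB transferred."""
    total = 0.0
    for hour, gb in usage_by_hour.items():
        rate = PEAK_RATE if 8 <= hour < 23 else OFF_PEAK_RATE
        total += gb * rate
    return round(total, 2)

# Shifting a 10 GB download from 8pm to 3am drops its cost from $5.00 to $0.50.
print(bill({20: 10}), bill({3: 10}))
```

The whole point of a scheme like this is the price signal: the overnight bittorrenting I describe above becomes the cheap, rational choice.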

I can imagine that this post will make some of my colleagues upset. Please fire away; tell me how I've got it all wrong. But please make sure that you've got an answer that addresses some of the concerns raised here. If you've been against throttling (and you know who you are), explain to me how we can (sustainably) have both zero throttling and unlimited fixed-fee internet access. In a world where online video is taking off, I'm just not sure I see it. Unless, of course, we think Google is going to provide the answer.

Finally, if you haven't read it, Richard French has a very thoughtful piece in the Globe and Mail entitled Second-Guessing the CRTC Comes at a Price – check it out. It certainly helped reaffirm some of my own thinking.

How Yelp Could Help Save Millions in Health Care Costs

Okay, before I dive in, a few things.

1) Sorry for the lack of posts last week. Life’s been hectic. Between Code for America, a number of projects and a few articles I’m trying to get through, the blogging slipped. Sorry.

2) I'm presenting on Open Data and Open Government to the Canadian Parliament's Access to Information, Privacy and Ethics Committee today – more on that later this week.

3) I'm excited about this post.

When it comes to opening up government data, many of us focus on governments: we cajole, we pressure, we try to persuade them to open up their data. It's an approach we will continue to have to take for a great deal of the data our tax dollars pay to collect and that governments continue to not share. There is, however, another model.

Consider transit data. This data is sought after, intensely useful, and probably the category of data most experimented with by developers. Why is this? Because it has been standardized. Why has it been standardized? Because local governments (responding to citizen demand) have been desperate to get their transit data integrated with Google Maps (see image).

It turns out that to get your transit data into Google Maps, Google insists that you submit the data in a single structured format – something that has come to be known as the General Transit Feed Specification (GTFS). The great thing about the GTFS is that it isn't just Google that can use it. Anyone can play with data converted into the GTFS. Better still, because the data structure is standardized, an application someone develops, or analysis they conduct, can be ported to other cities that share their transit data in the GTFS format (like, say, my home town of Vancouver).

In short, what we have here is a powerful model both for creating open data and standardizing this data across thousands of jurisdictions.
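For the curious, part of why anyone can play with a GTFS feed is that it's just a zip of plain CSV tables. Here's a minimal sketch of pulling stop locations out of a feed's stops.txt – the stop_id, stop_name, stop_lat and stop_lon fields come from the GTFS spec itself, though the sample rows below are made up:

```python
import csv
import io

def parse_stops(stops_csv_text):
    """Return {stop_id: (name, lat, lon)} from the content of a GTFS stops.txt."""
    stops = {}
    for row in csv.DictReader(io.StringIO(stops_csv_text)):
        stops[row["stop_id"]] = (
            row["stop_name"],
            float(row["stop_lat"]),
            float(row["stop_lon"]),
        )
    return stops

# Invented sample rows in the real GTFS stops.txt layout:
sample = """stop_id,stop_name,stop_lat,stop_lon
1,Main St @ Broadway,49.2606,-123.1005
2,Granville @ Georgia,49.2832,-123.1187
"""
print(parse_stops(sample)["1"][0])  # Main St @ Broadway
```

Twenty-odd lines of code and you have every stop in a city as structured data – which is exactly why so many transit apps exist.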

So what does this have to do with Yelp! and Health Care Costs?

For those not in the know, Yelp! is a location-based rating service for mobile phones. I'm a particular fan of its restaurant locator: it will show you what is nearby and how it has been rated by other users. Handy stuff.

But think bigger.

Most cities in North America inspect restaurants for health violations. This is important stuff. Restaurants with more violations are more likely to transmit diseases and food-borne illnesses, give people food poisoning and god knows what else. Sadly, in most cases the results of these tests are posted in the most useless place imaginable: the local authority's website.

I’m willing to wager almost anything that the only time anyone visits a food inspection website is after they have been food poisoned. Why? Because they want to know if the jerks have already been cited.

No one checks these agencies' websites before choosing a restaurant. Consequently, one of the biggest benefits of the inspection data – shifting market demand to more sanitary options – is lost. And of course, there is real evidence showing that restaurants will improve their sanitation, and that people will discriminate against restaurants that get poor ratings from inspectors, when the data is conveniently available. Indeed, in the book Full Disclosure: The Perils and Promise of Transparency, Fung, Graham and Weil noted that after Los Angeles required restaurants to post food inspection results, "Researchers found significant effects in the form of revenue increases for restaurants with high grades and revenue decreases for C-graded (poorly rated) restaurants." More importantly, the study Fung, Graham and Weil reference also suggested that making the rating system public positively impacted healthcare costs. Again, after inspection results in Los Angeles were posted on restaurant doors (not on some never-visited website), the county experienced a reduction in emergency room visits, the most expensive point of contact in the system. As the study notes, these were:

an 18.6 percent decline in 1998 (the first year of program operation), a 4.8 percent decline in 1999, and a 5.4 percent decline in 2000. This pattern was not observed in the rest of the state.

This is a stunning result.

So, now imagine that rather than just giving contributor-generated reviews of restaurants, Yelp! actually shared real food inspection data! Think of the impact this would have on the restaurant industry. Suddenly, everyone with a mobile phone and Yelp! (it's free) could make an informed decision not just about the quality of a restaurant's food, but also about its sanitation. Think of the millions (100s of millions?) that could be saved in the United States alone.

All that needs to happen is a simple first step: Yelp! needs to approach one major city – say a New York, or a San Francisco – and work with them to develop a sensible way to share food inspection data. This is what happened with Google Maps and the GTFS: it all started with one city. Once Yelp! develops the feed, call it something generic, like the General Restaurant Inspection Data Feed (GRIDF), and tell the world you are looking for other cities to share their data in that format. If they do, you promise to include it in your platform. I'm willing to bet anything that once one major city has it, other cities will start to clamor to get their food inspection data shared in the GRIDF format. What makes it better still is that it wouldn't just be Yelp! that could use the data. Any restaurant review website or phone app could use it – be it Urban Spoon or the New York Times.
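To be clear, GRIDF is a name I just invented, so any example of it is necessarily hypothetical. But the point is how little structure such a feed would need – something as simple as this sketch (field names and sample rows invented) would do:

```python
import csv
import io

# A hypothetical GRIDF-style feed. The format, field names and rows below
# are all invented here, just to show how simple a shared feed could be.
sample_feed = """restaurant_id,name,inspection_date,score,critical_violations
101,Joe's Diner,2011-01-15,92,0
102,Harbour Grill,2011-01-20,61,3
"""

def inspections(feed_text):
    """Parse a GRIDF-style CSV feed into a list of inspection records."""
    return list(csv.DictReader(io.StringIO(feed_text)))

# A review app could then flag anything below some score threshold:
flagged = [r["name"] for r in inspections(sample_feed) if int(r["score"]) < 70]
print(flagged)  # ['Harbour Grill']
```

Any city that can export a spreadsheet could publish this, and any review site could consume it.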

The opportunity here is huge. It’s also a win for everyone: Consumers, Health Insurers, Hospitals, Yelp!, Restaurant Inspection Agencies, even responsible Restaurant Owners. It would also be a huge win for Government as platform and open data. Hey Yelp. Call me if you are interested.

What I’m doing at Code for America

For the last two weeks – and for much of January – I’m in San Francisco helping out with Code for America. What’s Code for America? Think Teach for America, but rather than deploying people into classrooms to help provide positive experiences for students and teachers while attempting to shift the culture of school districts, Code for America has fellows work with cities to help develop reusable code to save cities money, make local government as accessible as your favorite website, and help shift the government’s culture around technology.

The whole affair is powered by a group of 20 amazing fellows and an equally awesome staff that has been working for months to make it all come together. My role – in comparison – is relatively minor: I head up the Code for America Institute, a month-long educational program the fellows go through when they first arrive. I wanted to write about what I've been trying to do, both because of the openness ideals of Code for America and to share any lessons for others who might attempt a similar effort.

First, to understand what I’m doing, you have to understand the goal. On the surface, to an outsider, the Code for America change process might look something like this:

  1. Get together some crazy talented computer programmers (hackers, if you want to make the government folks nervous)
  2. Unleash them on a partner city with a specific need
  3. Take the resulting output and share it across cities

Which, of course, would mistakenly frame the problem as technical. However, Code for America is not about technology. It's about culture change. The goal is to rethink and reimagine government as better, faster, cheaper and more adaptive. It's about helping government think of the ways its culture can embrace being a platform – open and highly responsive.

I'm helping (I think) because I've enjoyed some success in getting governments to think differently. I'm not a computer developer, and at their core these successes were never technology problems. The challenge is understanding how the system works, identifying the leverage points for making change, developing partners and collaborating to engage those leverage points, and doing whatever it takes to ensure it all comes together.

So this is the message and the concept the speakers are trying to impart to the fellows. Or, in other words, my job is to help unleash the already vibrant change agents within the 20 awesome fellows and make them effective in the government context.

So what have we done so far?

We’ve focused on three areas:

1) Understand Government: Some of the fellows are new to government, so we've had presentations from local government experts like Jay Nath, Ed Reiskin and Peter Koht, as well as the Mayor of Tucson's chief of staff (to give a political perspective). And of course, Tim O'Reilly has spoken about how he thinks government must evolve in the 21st century. The goal: understand the system, and understand and respect the actors within it.

2) Initiate & Influence: Whether it is launching your own business (Eric Ries on startups), starting a project (Luke Closs on Vantrash), understanding what happens when two cultures come together (Caterina Fake on Yahoo buying Flickr), or myself on negotiating, influence and collaboration, our main challenges will not be technical; they will be systems-based and social. If we are to build projects and systems that are successful and sustainable, we need to ask the right questions and engage with these systems respectfully as we try to shift them.

3) Plan & Focus: Finally, we’ve had experts in planning and organizing. People like Allen Gunn (Gunner) and the folks from Cooper Design, who’ve helped the fellows think about what they want, where they are going, and what they want to achieve. Know thyself, be prepared, have a plan.

The last two weeks will continue to pick up these themes but also give the fellows more time to (a) prepare for the work they will be doing with their partner cities; and (b) learn from one another. We're halfway through the institute at this point and I'm hoping the experience has been a rich – if sometimes overwhelming – one. Hopefully I'll have an update again at the end of the month.

One Simple Step to Figure out if your Government "gets" Information Technology

Chris Moore has a good post up on his blog at the moment that asks "Will Canadian Cities ever be Strategic?" In it (and it is very much worth reading) he hits on a theme I've focused on in many of my talks to government, but one that I think is also relevant to citizens who care about how technologically sophisticated their government is (which is a metric of how relevant you think your government is going to be in a few years).

If you want to know how serious your government or ministry is about technology there is a first simple step you can take: look at the org chart.

Locate the Chief Information Officer (CIO) or Chief Technology Officer (CTO). Simple question: Is he or she part of the senior executive team?

If yes – there is hope that your government (or the ministry you are looking at) may have some strategic vision for IT.

If no – and this would put you in the bucket with about 80% of local governments, and provincial/federal ministries – then your government almost certainly does not have a strong vision, and it isn’t going to be getting one any time soon.

As Chris also notes, and as I've been banging away at (such as during my keynote to MISA Ontario), in most governments in Canada the CIO/CTO does not sit at the executive table. At the federal level they are frequently Directors General (or lower), not Assistant Deputy Minister-level roles. At the local level, they often report to someone at the C-level.

This is insanity.

I'm hard-pressed to think of a Fortune 500 company – particularly one that operates in the knowledge economy – with this type of reporting structure. The business of government is about managing information… to better regulate, manage resources and/or deliver services. You can't talk about how to do that without the CIO/CTO being part of the conversation.

But that's what happens almost every single day in many government organizations.

Sadly, it gets worse.

In most organizations the CIO/CTO reports into the Chief Financial Officer (CFO). This really tells you what the organization thinks of technology: It is a cost centre that needs to be contained.

We aren't going to reinvent government when the person in charge of the infrastructure – upon which most of the work is done, the services are delivered, and pretty much everything else the organization does depends – isn't even part of the management team or part of the organization's strategic conversations.

It's a sad state of affairs, and indicative of why our governments are so slow to engage with new technology.

An Open Data Inspired Holiday Gift to Montrealers

It turns out that Santa, with the help of two terribly clever elves over at Montreal Ouvert, has created an Open Data inspired present for Montrealers.

What, you must ask, could it be?

It’s PatinerMontreal.ca

It’s a genius little website created by two Montreal developers – James McKinney and Dan Mireault – that scrapes the City of Montreal’s data on ice rink status to display the location and condition of all the outdoor ice rinks in the city.

What more could a winter-bound Montrealer ask for? Well… actually… how about being able to download it as an Android app to use on your smart phone? Yes, you can do that too, thanks to another Montreal software developer: Mudar Noufal.

Here’s a screen shot of the slick web version (more on the project below the fold)

Creating this unbelievably useful application was no small feat. It turns out that the City of Montreal publishes the state of the outdoor hockey rinks every day in PDF format. While it is nice that the city puts this information up on the web, sharing it via PDF is probably the most inaccessible way of meeting that goal. To create this site, the developers have to "scrape" the data out of these PDF files every day. Creating the software to do this is not only tedious, it can also be frustrating and laborious. This data was created with tax dollars and encourages the use of city assets; making it difficult to access is unnecessary and counterproductive.
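To give a flavour of what that daily scrape involves: once a PDF has been dumped to plain text (with a tool like pdftotext), you still have to pattern-match the rink statuses back out. A rough sketch – the line layout and status words here are invented, not Montreal's actual format:

```python
import re

# Invented layout: "<rink name>   <status>", columns separated by 2+ spaces.
LINE = re.compile(r"^(?P<rink>.+?)\s{2,}(?P<status>Open|Closed|Being prepared)$")

def parse_rinks(dumped_text):
    """Map rink name -> status from text dumped out of a daily PDF."""
    rinks = {}
    for line in dumped_text.splitlines():
        m = LINE.match(line.strip())
        if m:
            rinks[m.group("rink")] = m.group("status")
    return rinks

sample = """Parc La Fontaine    Open
Parc Jarry    Closed
"""
print(parse_rinks(sample))
```

And this is the easy version: real PDF extraction shifts columns around from day to day, which is exactly why the work is so tedious compared to consuming a proper machine-readable feed.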

This matters because, if you can get at the data, the things you can create (like PatinerMontreal.ca) can be gorgeous and far superior to anything the city offers. The City's PDFs convey a lot of information in a difficult-to-decipher format: text. Visualizing this information and making it searchable allows the user to quickly see where rinks are located in the city, what types of rinks (skating versus hockey) are located where, and the status of said rinks (newly iced or not).

My hope – and the hope of Montreal Ouvert – is that projects like this show the City of Montreal (and other cities across Canada) the power of getting data out of PDFs and shared in a machine-readable format on an open data portal. If Montreal had an open data portal (like Vancouver, Nanaimo, Edmonton, Toronto, Ottawa, and others), this application would have been much easier to create, and Montrealers would enjoy the benefit of being able to better use the services their tax dollars work so hard to create.

Congratulations to James, Dan and Mudar on such a fantastic project.

Happy Holidays to Montreal Ouvert.

Happy Holidays Montreal. Hope you enjoy (and use) this gift.

The False choice: Bilingualism vs. Open Government (and accountability)

Last week a disturbing headline crossed my computer screen:

B.C. RCMP zaps old news releases from its website

2,500 releases deleted because they weren’t translated into French

1) The worst of all possible outcomes

This is a terrible outcome for accountability and open government. When we erase history we diminish accountability and erode our capacity to learn. As of today, Canadians have a poorer sense of what the RCMP has stood for, what it has claimed and what it has tried to do in British Columbia.

Consider this. The Vancouver Airport is a bilingual designated detachment. As of today, all press releases that were not translated have been pulled down. This means that any press release related to the national scandal that erupted after Robert Dziekański – the Polish immigrant who was tasered five times by the RCMP – is no longer online. Given the RCMP's shockingly poor performance in managing (and telling the truth about) this issue, this concerns me.

Indeed, I can't imagine anyone thinks this is a good idea.

The BC RCMP does not appear to think it is a good idea. Consider their press officer’s line: “We didn’t have a choice, we weren’t compliant.”

I don’t think there are any BC residents who believe they are better served by this policy.

Nor do I think my fellow francophone citizens believe they are better served by this decision. Now no one – francophone or anglophone – can find these press releases online. (More on this below.)

I would be appalled if a similar outcome occurred in Quebec or a francophone community in Manitoba. If the RCMP pulled down all French press releases because they didn’t happen to have English translations, I’d be outraged – even if I didn’t speak French.

That's because the one thing worse than not having the document in both official languages is not having access to the document at all. (And having it hidden in some binder in a barracks that I have to call or visit doesn't even hint at being accessible in the 21st century.)

Indeed, I’m willing to bet almost anything that Graham Fraser, the Official Languages Commissioner – who is himself a former journalist – would be deeply troubled by this decision.

2) Guided by Yesterday, Not Preparing for Tomorrow

Of course, what should really anger the Official Languages Commissioner is an attempt to pit open and accountable government against bilingualism. This is a false choice.

I suspect that the current narrative in government is that translating these documents is too expensive. If one relies on government translators, this is probably true. The point is, we no longer have to.

My friend and colleague Luke C. pinged me after I tweeted this story saying “I’d help them automate translating those news releases into french using myGengo. Would be easy.”

Yes, myGengo would make it cheap at 5 cents a word (or 15 if you really want overkill). But even smarter would be to approach Google. Google Translate – especially between French and English – has become shockingly good. Perfect… no. Of course, this is what the smart and practical people on the ground at the RCMP were doing, until the higher-ups got scared by a French CBC story that was critical of the practice – a practice that was ended even though it did not violate any policies.

The problem is there isn't going to be more money for translation – not in a world of multi-billion dollar deficits and in a province that boasts 63,000 French speakers. But Google Translate? It is going to keep getting better and better. Indeed, the more it translates, the better it gets. If the RCMP (or Canadian government) started putting more documents through Google Translate and correcting them, it would become still more accurate. The best part is… it's free. I'm willing to bet that if you ran all 2,500 of the press releases through Google Translate right now, 99% of them would come out legible and of a standard good enough to share (again, not perfect, but serviceable). Perhaps the CBC won't be perfectly happy. But I'm not sure the current outcome makes them happy either. And at least we'll be building a future in which they will be happy tomorrow.
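For the technically inclined: Google exposes Translate as a simple REST API (v2 at the time of writing), so bulk-translating 2,500 press releases is a loop, not a project. Here's a sketch of building the request for one release – treat the details as approximate and check Google's own documentation before relying on them:

```python
import urllib.parse

# Google Translate API v2 endpoint (per Google's docs; verify before use).
TRANSLATE_ENDPOINT = "https://www.googleapis.com/language/translate/v2"

def build_translate_url(text, api_key, source="en", target="fr"):
    """Construct the GET request URL to translate one press release."""
    params = urllib.parse.urlencode(
        {"key": api_key, "q": text, "source": source, "target": target}
    )
    return TRANSLATE_ENDPOINT + "?" + params

# Fetching this URL (e.g. with urllib.request) returns JSON whose
# data.translations[0].translatedText field holds the machine translation,
# ready for a human to post-edit rather than start from scratch.
url = build_translate_url("The suspect was arrested without incident.", "API_KEY")
```

Loop that over every release, have a human correct the output, and you have a bilingual archive for a fraction of the cost of traditional translation.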

The point here is that this decision reaffirms a false binary, one based on 20th-century assumptions, where translations were expensive and laborious. It holds us back and makes our government less effective and more expensive. Worse, it ignores an option that embraces a world of possibilities – the reality of tomorrow. By continuing to automatically translate these documents today we'd continue to learn how to use and integrate this technology now, and push it to get better, faster. Such a choice would serve the interests of both open and accountable government and bilingualism.

Sadly, no one at the head office of the RCMP – or in the federal government – appears to have that vision. So today we are a little more language, information and government poor.

Three asides:

1) I find it fascinating that the media can get mailed a press release that isn't translated, but the public is not allowed to access it on a website until it is. This is a really interesting form of discrimination – one that supports a specific business model and has zero grounding in the law, and indeed may even be illegal, given that the media has no special status in Canadian law.

2) Still more fascinating is how the RCMP appears to be completely unresponsive to news stories about inappropriate behaviour in its ranks – like, say, the illegal funding of false research to defend the war on drugs – but one story about language politics causes the organization to change practices that aren't even in violation of its policies. It is sad to see still more evidence that the RCMP is one of the most broken agencies in the federal government.

3) Thank you to Vancouver Sun Reporter Chad Skelton for updating me on the Google aspect of this story.

The best moment in Canadian democracy in 2010?: the census debate

Over at Samara, my friend Alison Loat is asking people to answer the question “What was the best moment in Canadian democracy in 2010?” In what I think was a good decision, they’ve defined the terms pretty broadly, stating:

The moment could be one that took place inside or outside of Parliament or other legislative chambers.  It could have happened at the federal, provincial, territorial or municipal level.  It could include any number of things, such as an election with a historic turnout, a stimulating public debate, a rally or protest, a critical piece of news analysis, the creation of a new digital application, or an important Parliamentary motion or decision.

If you've got an idea I encourage you to head over there, write it up and submit it! The Samara people are great and are up to good work, so definitely worth checking out.

I've got one answer to the question myself – what follows is my write-up. I think I may even have one more in me… but here's my first effort:

The Census Debate as Canada’s 2010 democratic moment.

In a functioning democracy disagreement is necessary and healthy. But at its core there must be some basic agreement – some shared understanding of who we are, as a people and as a society. This shared understanding not only provides the basic facts that must inform our debates but also forms the basis of the shared identity that keeps us together even when we disagree.

This is why the census is so important, and why it is my choice for the best moment in Canadian democracy for 2010. The census binds us together by creating a shared understanding of who we are. Even the most marginalized Canadians stand up and are counted and thus can be reflected and heard in our national discourse.

That’s why at a time when Canadian political coverage tries to cleave the country’s citizens into different, competing groups – rural versus urban, French versus English, left versus right – I think the best moment in Canadian Democracy was seeing over 500 groups including all levels of government, non-profits from across the country, business organizations, rural communities, and virtually all the major religious organizations come together and challenge the government with one voice.

What a great democratic moment that so many organizations, which often disagree on so many issues, can collectively agree on a core shared interest: that a functioning democracy and an effective government are built on a foundation of basic information about who we are. Even more so when the government tried to make the decision in secret, announcing it quietly on a Friday, during a long weekend in the middle of summer.

The decision and the process surrounding it may be one of the year’s darkest moments for Canadian democracy but the country’s reaction was definitely one of our brightest.

London, UK, Open Government Data Camp Keynote – Nov 18, 2010

Here is my opening keynote to the Open Government Data Camp held in London earlier this year (2010) on Nov 18th. Pasted below is the day two keynote by the always excellent Tom Steinberg of mysociety.org.

Hope these are thought provoking for novice and veteran tech and open government types, as well as those just curious about how our world is changing.

My speech gets all very serious after the initial 12-year-old boy moment. It was a great audience at the camp – really wonderful to have such an engaged group of people.

David Eaves from Open Knowledge Foundation on Vimeo.

Tim Berners-Lee, Prof. Nigel Shadbolt & Tom Steinberg from Open Knowledge Foundation on Vimeo.

An Open Letter on Open Government to the Access to Information, Privacy & Ethics Parliamentary Committee

The other week I received an invitation from the Canadian Standing Parliamentary Committee on Access to Information, Privacy & Ethics to come and testify about open government and open data on February 1st.

The Committee has talked a great deal about its efforts to engage in a study of open government, and since February 1st is quite a ways away and I'd like to be helpful before my testimony, I thought I'd draft up some thoughts and suggestions for the committee's strategy. I know these are unsolicited, but I hope they are helpful and, if not, that they at least spark some helpful thoughts.

1. Establish a common understanding of the current state of affairs

First off, the biggest risk at the moment is that the Committee's work might actually slow down the government's efforts to launch an open data strategy. The Committee's work, and the drafting of its report, is bound to take several months; it would be a shame if the government were to hold back launching any initiatives in anticipation of this report.

Consequently, my hope is that the committee, at its earliest possible convenience, request to speak to the Chief Information Officer of the Government of Canada to get an update on the current status of any open government and open data initiatives, should they exist. This would a) create a common understanding of the current state of affairs for both committee members and witnesses; b) allow subsequent testimony and recommendations to take into consideration the work already done; and c) allow the committee to structure its work so as not to slow down any efforts that might already be underway.

2. Transform the Committee into a Government 2.0 Taskforce – similar to the Australian effort

Frankly, my favourite approach in this space has been the British one. Two governments – one Labour, one Conservative – have aggressively pursued an open data and open government strategy. This would be my hope for Canada. However, it does not appear that this is presently the case. So another model should be adopted. Fortunately, such a model exists.

Last year, under the leadership of Nicholas Gruen, the Australian government launched a Government 2.0 Taskforce, on whose International Reference Group I had the pleasure of serving. The Australian Taskforce was non-partisan and was made up of policy and technical experts and entrepreneurs from government, business, academia, and cultural institutions. More importantly, the overwhelming majority of its recommendations were adopted.

To replicate its success in Canada I believe the Committee should copy the best parts of the Australian taskforce. The topic of Canadians’ access to their government is of central importance to all Canadians – to non-profits, to business interests, to public servants and, of course, to everyday citizens. Rather than non-partisan, I would suggest that a Canadian taskforce should be pan-partisan – which the Committee already is. However, like the Australian Taskforce, it should include a number of policy and technical experts from outside government. This full committee would thus represent both a political cross-section and substantive knowledge in the emerging field of government 2.0. It could thus, as a whole, effectively and quickly draft recommendations to Parliament.

Best of all, because of step #1, this work could proceed in parallel to any projects (if any) already initiated by the government and possibly even inform such work by providing interim updates.

I concede such an approach may be too radical, but I hope it is at least a starting point for an interesting approach.

3. Lead by Example

There is one arena where politicians need not wait on the government to make plans: Parliament itself. Over the past year, in conversations with the Parliamentary IT staff as well as the Speaker of the House, I have worked to have Parliament make more data about its own operations open. Starting in January, the Parliamentary website will begin releasing the Hansard in XML – this will make it much easier for software developers like the creators of Openparliament.ca and howdtheyvote.ca to run their sites, and for students, researchers and reporters to search and analyze our country’s most important public discussions. In short, by making the Hansard more accessible, the Speaker and his IT staff are making Parliament more accessible. But this is only the beginning of what parliamentarians could do to make for a truly Open Parliament. The House and Senate’s schedules and agendas, along with committee calendars, should all be open. So too should both chambers’ seating arrangements. Members’ photos and bios should be shared with an unrestricted license, as should the videos of Parliament.
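To give a sense of why structured XML matters so much more than web pages or PDFs, here is a minimal sketch of how a developer might pull speakers and their remarks out of a Hansard-style feed. The element names below are invented for illustration only; the real schema the Parliamentary IT staff adopt may well differ.

```python
# Hypothetical example: the element names (<hansard>, <intervention>,
# <speaker>, <text>) are invented, not the actual Hansard schema.
import xml.etree.ElementTree as ET

sample = """<hansard>
  <intervention>
    <speaker>Hon. Member A</speaker>
    <text>Mr. Speaker, I rise today on the question of open data.</text>
  </intervention>
  <intervention>
    <speaker>Hon. Member B</speaker>
    <text>I thank the member for the question.</text>
  </intervention>
</hansard>"""

# Parse the feed and extract each speaker with what they said.
root = ET.fromstring(sample)
for intervention in root.findall("intervention"):
    speaker = intervention.findtext("speaker")
    text = intervention.findtext("text")
    print(f"{speaker}: {text}")
```

The point is that once the data is structured, searching, indexing, or attributing every remark in a debate becomes a few lines of code rather than a screen-scraping project.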

Leadership in this space would send a powerful message to both the government and the public service that Canada’s politicians are serious about making government more open and accessible to those who elect it. In addition, it could also influence provincial legislatures and even municipal governments, prompting them to do the same and so enhance our democracy at every level.

4. Finally, understand your task: You are creating a Knowledge Government for a Knowledge Society

One reason I advise the Committee to take on external members is because, laudably, many admit this topic is new to them. But I also want the committee members to understand the gravity of their task. Open Government, Open Data and/or Government 2.0 are important first steps in a much larger project.

What you are really wrestling with here is what government is going to look like in a knowledge economy and a knowledge society. How is it going to function with knowledge workers as employees? And, most importantly, how is it going to engage with knowledge citizens, many of whom can and want to make real contributions beyond the taxes they pay, and don’t need government to self-organize?

In short, what is a knowledge based government going to look like?

At the centre of that question is how we manage and share information: the basic building block of a knowledge-driven society.

Look around, and you can see how the digital world is transforming how we do everything. Few of us can imagine living today without access to the internet and the abundance of information it brings us. Indeed, we have already become so used to the internet that we forget how radically it has changed whole swaths of our life and economy, from the travel and music industries to the post, to political fund-raising, and to journalism.

If today our government still broadly looks and feels like an institution shaped by the printing press, it is because, well, it is. Deputy Ministers and Ministers still receive giant briefing binders filled with paper. This is a reflection of how we deal with information and knowledge in government: we move it around (for good reasons) in silos, operating as though networks, advanced search, and other innovations don’t exist (even though they already do).

How our government deals with information is at the heart of your task. I’m not saying you have to re-invent government or dismantle all the silos and ministries. Quite the contrary, I believe small changes can be made that will yield significant benefits, efficiencies and savings while enhancing our democracy. But you will be confronting decades, if not centuries, of tradition, culture and process in an institution that is about to go through the biggest change since the invention of the printing press. You don’t have to do it all, but even some small first steps will not come easily. I share this because I want you going into the task with eyes wide open.

At the very least we aren’t going first: our cousins across the Atlantic, across the Pacific and across our southern border have already taken the plunge. But this should add urgency to our task. We cannot afford to stand by while others renew their democratic institutions while simultaneously enhancing an emerging and critical pillar of a new knowledge economy and knowledge society.

Wikileaks and the coming conflict between closed and open

I’ve been thinking about wikileaks ever since the story broke. Most of the stories – like those written by good friends Taylor Owen and Scott Gilmore – are pieces very much worth reading, but I think they miss the point about wikileaks, or assess it on their own terms, and thus fail to understand what wikileaks is actually about and what it is trying to do. We need to be clear in our understanding, and thus about the choices we are about to confront.

However, before you read anything I write, there are smarter people out there – two in particular – who have said things that I’m not reading anywhere else. The first is Jay Rosen (key excerpt below), whose 15-minute late-night Pressthink video on the subject is brilliant. The second is zunguzungu, whose piece Julian Assange and the Computer Conspiracy; “To destroy this invisible government” (key excerpt further below) is a cool and calculated dissection of wikileaks’ goals and intentions. I’ve some thoughts below, but these two pieces are, in my mind, the most important things you can read on the subject, and they strongly inform my own piece (much, much further below). I know that this is all very long, and that many of you won’t have the patience, but I hope that what I’ve written and shared below is compelling enough to hold your attention. I certainly think it is important enough.

Jay Rosen:

While we have what purports to be a “watchdog press” we also have, laid out in front of us, the clear record of the watchdog press’s failure to do what is says it can do, which is to provide a check on power when it tries to conceal its deeds and its purpose. So I think it is a mistake to reckon with Wikileaks without including in the frame the spectacular failures of the watchdog press over the last 10, 20, 40 years, but especially recently. And so, without this legitimacy crisis in mainstream American journalism, the leakers might not be so inclined to trust Julian Assange and a shadowy organization like Wikileaks. When the United States is able to go to war behind a phony case, when something like that happens and the Congress is fooled and a fake case is presented to the United Nations and war follows and 100,000s of people die and the stated rationale turns out to be false, the legitimacy crisis extends from the Bush government itself to the American state as a whole and the American press and the international system because all of them failed at one of the most important things that government by consent can do: which is reason giving. I think these kind of huge cataclysmic events within the legitimacy regime lie in the background of the Wikileaks case, because if wasn’t for those things Wikileaks wouldn’t have the supporters it has, the leakers wouldn’t collaborate the way that they do and the moral force behind exposing what this government is doing just wouldn’t be there.

This is one of the things that makes it hard for our journalists to grapple with Wikileaks. On the one hand they are getting amazing revelations. I mean the diplomatic cables tell stories of what it is like to be inside the government and inside international diplomacy that anyone who tries to understand government would want to know. And so it is easy to understand why the big news organizations like the New York Times and The Guardian are collaborating with Wikileaks. On the other hand they are very nervous about it because it doesn’t obey the laws of the state and it isn’t a creature of a given nation and it is inserting itself in-between the sources and the press. But I think the main reason why Wikileaks causes so much insecurity with our journalists is because they haven’t fully faced the fact that the watchdog press they treasure so much died under George W. Bush. It failed. And instead of rushing to analyze this failure and prevent it from happening ever again – instead of a truth and reconciliation commission-style effort that could look at “how did this happen” – mostly what our journalists did, with a few exceptions, is they moved on to the next story. The watchdog press died. And what we have is Wikileaks instead. Is that good or is that bad? I don’t know, because I’m still trying to understand what it is.

Zunguzungu:

But, to summarize, he (Assange) begins by describing a state like the US as essentially an authoritarian conspiracy, and then reasons that the practical strategy for combating that conspiracy is to degrade its ability to conspire, to hinder its ability to “think” as a conspiratorial mind. The metaphor of a computing network is mostly implicit, but utterly crucial: he seeks to oppose the power of the state by treating it like a computer and tossing sand in its diodes…

…The more secretive or unjust an organization is, the more leaks induce fear and paranoia in its leadership and planning coterie. This must result in minimization of efficient internal communications mechanisms (an increase in cognitive “secrecy tax”) and consequent system-wide cognitive decline resulting in decreased ability to hold onto power as the environment demands adaption. Hence in a world where leaking is easy, secretive or unjust systems are nonlinearly hit relative to open, just systems. Since unjust systems, by their nature induce opponents, and in many places barely have the upper hand, mass leaking leaves them exquisitely vulnerable to those who seek to replace them with more open forms of governance.

– zunguzungu

Almost all the media about wikileaks has, to date, focused on the revelations about what our government actually thinks versus what it states publicly. The bigger the gap between internal truth and external positions, the bigger the story.

This is, of course, interesting stuff. But less discussed and more interesting is our collective reaction to wikileaks. Wikileaks is drawing a line, exposing a fissure in the open community between those who believe in overturning current “system(s)” (government and international) and those who believe that the current system can function but simply needs greater transparency and reform.

This is why placing pieces like Taylor Owen and Scott Gilmore‘s against zunguzungu’s is so interesting. Ultimately both Owen and Gilmore believe in the core of the current system – Scott explicitly so, arguing how secrecy in the current system allows for human rights injustices to be tackled. Implicit in this, of course, is the message that this is how they should be tackled. Consequently they both see wikileaks as a failure, as they (correctly) argue that its radical transparency will lead to more closed and ineffective governments. Assange would likely counter that Scott’s efforts address symptoms and not causes, and may even reinforce the international structures that help foster human rights abuses. Consequently Assange’s core value of transparency, which at a basic level Owen and Gilmore would normally identify with, becomes a problem.

This is interesting. Owen and Scott believe in reform; they want the world to be a better place and fight (hard) to make it so. I love them both for it. But they aren’t up for a complete assault on the world’s core operating rules and structures. In a way this ultimately groups them (and possibly me – this is not a critique of Scott and Taylor, whose concerns I think are well founded) on the same side of a dividing line as people like Tom Flanagan (the former adviser to the Canadian Prime Minister who half-jokingly called for Assange to be assassinated) and Joe Lieberman (who called on companies that host material related to wikileaks to sever their ties with them). I want to be clear: they do not believe Assange should be assassinated. But they (and possibly myself) do seem to agree that his tactics are a direct threat to the functioning of a system that, I think they are arguing, needs to be reformed but preserved – and so see wikileaks as counterproductive.

My point here is that I want to make explicit the choices wikileaks is forcing us to make. Status quo with incremental non-structural reform versus whole hog structural change. Owen and Gilmore can label wikileaks a failure but in accepting that analysis we have to recognize that they view it from a position that believes in incremental reform. This means you believe in some other vehicle. And here, I think we have some tough questions to ask ourselves. What indeed is that vehicle?

This is why I think Jay Rosen’s piece is so damn important. One of the key ingredients for change has been the existence of the “watchdog” press. But, as he puts it (repeated from above):

I think it is a mistake to reckon with Wikileaks without including in the frame the spectacular failures of the watchdog press over the last 10, 20, 40 years, but especially recently. And so, without this legitimacy crisis in mainstream American journalism, the leakers might not be so inclined to trust Julian Assange and a shadowy organization like Wikileaks. When the United States is able to go to war behind a phony case, when something like that happens and the Congress is fooled and a fake case is presented to the United Nations and war follows and 100,000s of people die and the stated rationale turns out to be false, the legitimacy crisis extends from the Bush government itself to the American state as a whole and the American press and the international system because all of them failed at one of the most important things that government by consent can do: which is reason giving.

The logical conclusion of Rosen’s thesis is a direct challenge to those of us who are privileged enough to benefit from the current system. As ugly and imperfect as the current system may be, Lieberman, Flanagan, Owen and Gilmore and, to be explicit, myself, benefit from that system. We benefit from the status quo. Significantly. Dismantling the world we know carries with it significant risks, both for global stability and personally. So if we believe that Assange has the wrong strategy and tactics, we need to make the case – to ourselves, to his supporters, to those who leak to wikileaks and to those on the short end of the stick in the international system – for how it is that reform will work and how it is that secrecy and power will be managed for the public good.

In this regard the release of wikileaks documents is not a terrorist event, but it is as much an attack on the international system as 9/11 was. It is a clear effort to destabilize and paralyze the international system. It also comes at a time when confidence in our institutions is sliding – indeed, Rosen argues that this eroding confidence feeds wikileaks.

So what matters is how we react. To carry forward (the dangerous) 9/11 analogy, we cannot repeat the mistakes of the Bush administration. Then, our response corrupted the very system we sought to defend, further eroded confidence in institutions that needed support and strengthened our enemies – we attacked human rights, civil liberties and freedom of speech, and prosecuted a war that took 100,000s of innocent lives on the premise of manufactured evidence.

Consequently, our response to the current crisis can’t be to close up governments and increase secrecy. This will strengthen the hands of those who run wikileaks and cause more public servants and citizens to fear these institutions and look for alternatives… many of whom will side with wikileaks and help impede the capacity of the most important institutions in our society to respond to everyday challenges.

As a believer in open government and open data, I think the only working option for us is to do the opposite: to continue to open up these institutions as the only acceptable and viable path to making them more credible. This is not to say that ALL information should be made open. Any institution needs some private place to debate ideas and test unpopular theses. But at the moment our governments – more through design and evolution than conspiracy – enjoy far more privacy and secrecy than they need. Having a real and meaningful debate about how to change that is our best response. In my country, I don’t see that debate happening. In the United States, I see it moving forward, but now it has more urgency. Needless to say, I think all of this gives new weight to the testimony I’ll be making before the parliamentary Standing Committee on Access to Information, Privacy and Ethics.

I still hope the emerging conflict between open and closed can be won without having to resort to the types of tactics adopted by wikileaks. But for those of us who believe that, we had better start making the case persuasively. The responses of people like Flanagan and Lieberman remind me of Bush after 9/11: “you are either with us, or with the terrorists.” Whether intentionally or unintentionally, an analogous response will create a world in which power and information are further removed from the public, and will lead to the type of destabilizing change Assange wants.

I’m bound to write more on this – especially around wikileaks, open data and transparency, which I think some authors unhelpfully conflate – but this post is already long enough, and I’m sure most people haven’t even read this far.