Tag Archives: free culture

Beautiful Maps – Open Street Map in Water Colours

You know, you really never know what the web is going to throw at you next. The great people over at Stamen Design (if you’ve never heard of Stamen you are really missing out – they are probably the best data visualization company I know) have created a watercolor version of OpenStreetMap.

Why?

Because they can.

It’s a wonderful example of how, with the web, you can build on what others have done. Pictured below is my home town of Vancouver – I suggest zooming out a little as the city really comes into focus when you can see more of its geography.

Some Bonus Awesomeness Facts about all this Stamen goodness:

  • Stamen has a number of Creative Commons licensed map templates that you can use here (and links to GitHub repos) – see the sketch after this list
  • Stamen housed Code for America in its early days. So they don’t just make cool stuff. They pitch in and help out with cool stuff too.
  • Former Code for America fellow Michael Evans works there now.
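If you want to drop these tiles into your own project, here is a minimal sketch using Python’s folium library. The tile URL is the one Stamen publicized for the watercolor layer at the time of writing (hosting may have moved since), and the coordinates simply centre the map on Vancouver – treat both as assumptions to verify.

```python
import folium

# Render an interactive map of Vancouver using Stamen's watercolor tiles.
# The tile URL below is the one Stamen documented when this was written;
# hosting may have changed since, so verify it before relying on it.
WATERCOLOR_TILES = "http://tile.stamen.com/watercolor/{z}/{x}/{y}.jpg"
ATTRIBUTION = (
    "Map tiles by Stamen Design (CC BY 3.0). "
    "Map data by OpenStreetMap contributors (ODbL)."
)

vancouver = folium.Map(
    location=[49.2827, -123.1207],  # downtown Vancouver
    zoom_start=11,                  # zoomed out a little, as suggested above
    tiles=WATERCOLOR_TILES,
    attr=ATTRIBUTION,
)
vancouver.save("vancouver_watercolor.html")  # open the file in a browser
```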


International Open Data Hackathon 2011: Better Tools, More Data, Bigger Fun

Last year, with only a month of notice, a small group of passionate people announced we’d like to do an international open data hackathon and invited the world to participate.

We were thinking small but fun. Maybe 5 or 6 cities.

We got it wrong.

In the end people from over 75 cities around the world offered to host an event. Better still, we definitely heard back from people in over 40 of them. It was an exciting day.

Last week, after tracking down a few of the city organizers’ email addresses, I asked them if we should do it again. Every one of them came back and said: yes.

So it is official. This time we have two months’ notice. December 3rd will be Open Data Day.

I want to be clear, our goal isn’t to be bigger this year. That might be nice if it happens. But maybe we’ll only have 6-7 cities. I don’t know. What I do want is for people to have fun, to learn, and to engage those who are still wrestling with the opportunities around open data. There is a world of possibilities out there. Can we seize on some of them?

Why.

Great question.

First off, we’ve got more data. Thanks to more and more enlightened governments in more and more places, there’s a greater amount of data to play with. Whether it is Switzerland, Kenya, or Chicago, there’s never been more data available to use.

Second, we’ve got better tools. With a number of governments using Socrata there are more APIs out there for us to leverage. ScraperWiki has gotten better, and new tools like BuzzData, TheDataHub and Google’s Fusion Tables are emerging every day.
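To give a sense of what “leveraging” those APIs looks like, here is a minimal sketch that pulls a few rows from a Socrata (SODA) endpoint. The domain and dataset id are placeholders, not a real dataset – substitute those of any Socrata-backed portal.

```python
import requests

# Hypothetical Socrata (SODA) endpoint: the domain and dataset id below are
# placeholders, not a real dataset -- swap in a real portal and its "4x4" id.
DOMAIN = "data.example.gov"
DATASET_ID = "abcd-1234"
url = f"https://{DOMAIN}/resource/{DATASET_ID}.json"

# SODA endpoints accept simple query parameters such as $limit and $where.
rows = requests.get(url, params={"$limit": 5}, timeout=30).json()
for row in rows:
    print(row)
```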

And finally, there is growing interest in making “openness” a core part of how we measure governments. Open data has a role to play in driving this debate. Done right, we could make the first Saturday in December “Open Data Day” – a chance to explain, demo and invite to play the policy makers, citizens, businesses and non-profits who don’t yet understand the potential. Let’s raise the world’s data literacy and have some fun. I can’t think of a better way than with another global open data hackathon – a maker’s-fair-like opportunity for people to celebrate open data by creating visualizations, writing up analyses, building apps or doing whatever they want with data.

Of course, like last time, hopefully we can make the world a little better as well. (more on that coming soon)

How.

The basic premise for the event is simple, relying on five basic principles.

1. Together. It can be as big or as small, as long or as short, as you’d like it, but we’ll be doing it together on Saturday, December 3rd, 2011.

2. It should be open. Around the world I’ve seen hackathons filled with different types of people, exchanging ideas, trying out new technologies and starting new projects. Let’s be open to new ideas and new people. Chris Thorpe in the UK has done amazing work getting young and diverse groups hacking. I love Nat Torkington’s words on the subject. Our movement is stronger when it is broader.

3. Anyone can organize a local event. If you are keen to help organize one in your city, or just to participate, add your name to the relevant city on this wiki page. Wherever possible, try to keep it to one per city; let’s build some community and get new people together. Which city or cities you share with is up to you, as is how you do it. But let’s share.

4. You can work on anything that involves open data. That could be a local or global app, a visualization, proposing a standard for common data sets, or scraping data from a government website to make it available to others on BuzzData.

It would be great to have a few projects people can work on around the world – building stuff that is core infrastructure for future projects. That’s why I’m hoping someone in each country will create a local version of MySociety’s MapIt web service for their country (a sketch of querying MapIt follows after this list). It will give us one common project, and raise the profile of a great organization and a great project.

We also hope to be working with Random Hacks of Kindness, who’ve always been so supportive, ideally supplying data that they will need to run their applications.

5. Let’s share ideas across cities on the day. Each city’s hackathon should do at least one demo, brainstorm, or proposal that it shares in an interactive way with members of a hackathon in at least one other city. This could be via video stream, Skype, or chat… anything, but let’s get to know one another and share the cool projects or ideas we are hacking on. There are some significant challenges to making this work: timezones, languages, culture, technology… but who cares, we are problem solvers, let’s figure out a way to make it work.

Like last year, let’s not try to boil the ocean. Let’s have a bunch of events, where people care enough to organize them, and try to link them together with a simple short connection/presentation. Above all let’s raise some awareness, build something and have some fun.
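For anyone who wants a concrete starting point on the MapIt idea in principle 4, here is a minimal sketch that asks MySociety’s UK instance which administrative areas cover a point. The endpoint and response fields follow MapIt’s public documentation as I understand it, so treat them as assumptions to check against whichever instance you stand up.

```python
import requests

# Ask MySociety's UK MapIt instance which administrative areas cover a point.
# MapIt's point lookup takes an SRID plus "longitude,latitude"; the endpoint
# and response fields used here are assumptions to verify against the docs.
MAPIT = "https://mapit.mysociety.org"
lon, lat = -0.1278, 51.5074  # central London

resp = requests.get(f"{MAPIT}/point/4326/{lon},{lat}", timeout=30)
resp.raise_for_status()

# The response maps area ids to area records (name, type, etc.).
for area in resp.json().values():
    print(area.get("name"), "-", area.get("type_name"))
```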

What next?

1. If you are interested, sign up on the wiki. We’ll move to something more substantive once we have the numbers.

2. Reach out and connect with others in your city on the wiki. Start thinking about the logistics. And be inclusive. If someone new shows up, let them help too.

3. Share with me your thoughts. What’s got you excited about it? If you love this idea, let me know, and blog/tweet/status update about it. Conversely, tell me what’s wrong with any or all of the above. What’s got you worried? I want to feel positive about this, but I also want to know how we can make it better.

4. Localization. If there is bandwidth locally, I’d love for people to translate this blog post and repost it locally. (Let me know, as I’ll try cross-posting it here, or at least linking to it.) It is important that this not be an English-language-only event.

5. If people want a place to chat with others about this, feel free to post comments below. Also, the Open Knowledge Foundation’s Open Data Day mailing list will be the place where people can share news and help one another out.

Once again, I hope this will sound like fun to a few committed people. Let me know what you think.

The Geopolitics of the Open Government Partnership: the beginning of Open vs. Closed

Aside from one or two notable exceptions, there hasn’t been a ton of press about the Open Government Partnership (OGP). This is hardly surprising. The press likes to talk about corruption and bad government; people getting together to talk about actually addressing these things is far less sexy.

But even where good coverage exists, analysts and journalists are, I think, misunderstanding the nature of the partnership and its broader implications should it take hold. Presently it is generally seen as a do-good project, one that will help fight corruption and hopefully lead to some better governance (both of which I hope will be true). However, the Open Government Partnership isn’t just about doing good; it has real strategic and geopolitical purposes.

In fact, the OGP is, in part, about a 21st century containment strategy.

For those unfamiliar with 20th century containment, a brief refresher. Containment refers to a strategy outlined by a US diplomat – George Kennan – who, while posted in Moscow, wrote the famous Long Telegram, in which he outlined the need for a more aggressive policy to deal with an expansionist post-WWII Soviet Union. He argued that such a policy would need to isolate the USSR politically and strategically, in part by positioning the United States as an example in the world that other countries would want to work with. While discussions of “containment” often focus on its military aspects and the eventual arms race, it was equally influential in prompting the ideological battle between the USA and USSR as they sought to demonstrate whose “system” was superior.

So I repeat: the OGP is part of a 21st century containment policy. And I’d go further: it is an effort to forge a new axis around which America specifically, and a broader democratic camp more generally, may seek to organize allies and rally its camp. It abandons the now outdated free-market/democratic vs. state-controlled/communist axis in favour of a more subtle, but more appropriate, open vs. closed.

The former axis makes little sense in a world where authoritarian governments often embrace (quasi) free markets to stay in power, and even have some of the basic trappings of a democracy. The Open Government Partnership is part of an effort to redefine and shift the goal posts around what makes for a free-market democracy. Elections and a marketplace clearly no longer suffice; the OGP essentially sets a new bar at which a state must (in theory) allow itself to be transparent enough to provide its citizens with information (and thus power). In short: a state can’t simply have some of the trappings of a democracy, it must be democratic and open.

But that also leaves the larger question. Who is being contained? To find out that answer take a look at the list of OGP participants. And then consider who isn’t, and likely never could be, invited to the party.

OGP members: Albania, Azerbaijan, Brazil, Bulgaria, Canada, Chile, Colombia, Croatia, Czech Republic, Dominican Republic, El Salvador, Estonia, Georgia, Ghana, Guatemala, Honduras, Indonesia, Israel, Italy, Jordan, Kenya, Korea, Latvia, Liberia, Lithuania, Macedonia, Malta, Mexico, Moldova, Mongolia, Montenegro, Netherlands, Norway, Peru, Philippines, Romania, Slovak Republic, South Africa, Spain, Sweden, Tanzania, Turkey, Ukraine, United Kingdom, United States, Uruguay

Notably absent: China, Iran, Russia, Saudi Arabia (indeed, much of the Middle East), Pakistan

*India is not part of the OGP but was involved in much of the initial work, and while it has withdrawn (for domestic political reasons) I suspect it will stay involved tangentially.

So first, what you have here is a group of countries that are broadly democratic. Indeed, if you were going to have a democratic caucus in the United Nations, it might look something like this (there are some players in that list that are struggling, but for them the OGP is another opportunity to consolidate and reinforce the gains they’ve made as well as push for new ones).

In this regard, the OGP should be seen as an effort by the United States and some allies to find some common ground as well as a philosophical touch point that not only separates them from rivals, but makes their camp more attractive to deal with. It’s no trivial coincidence that on the day of the OGP launch the President announced that the United States’ first fulfilled commitment would be its decision to join the Extractive Industries Transparency Initiative (EITI). The EITI commits American oil, gas and mining companies to disclose payments made to foreign governments, which would make corruption much more difficult.

This is America essentially signalling to African people and their leaders – do business with us, and we will help prevent corruption in your country. We will let you know if officials get paid off by our corporations. The obvious counter point to this is… the Chinese won’t.

It’s also why Brazil is a co-chair, and the idea was prompted during a meeting with India. This is an effort to bring the most important BRIC countries into the fold.

But even outside the BRICs, the second thing you’ll notice about the list is the number of Latin American, and in particular African, countries included. Between the OGP, the fact that the UK is making government transparency a criterion for its foreign aid, and the fact that the World Bank is increasingly moving in the same direction, the forces for “open” are laying out one path for development and aid in Africa. One that rewards governance and – ideally – creates opportunities for African citizens. Again, the obvious counterpoint is… the Chinese won’t.

It may sound hard to believe, but the OGP is much more than a simple pact designed to make heads of state look good. I believe it has real geopolitical aims and may be the first overt, ideological salvo in what I believe will be the geopolitical axis of Open versus Closed. This is about finding ways to compete for the hearts and minds of the world in a way that China, Russia, Iran and others simply cannot. And, while I agree we can debate the “openness” of the various signing countries, I like the idea of a world in which states compete to be more open. We could do worse.

Access to Information is Fatally Broken… You Just Don’t Know it Yet

I’ve been doing a lot of thinking about access to information, and am working on a longer analysis, but in the short term I wanted to share two graphs – graphs that outline why Access to Information (Freedom of Information in the United States) is unsustainable and will, eventually, need to be radically rethought.

First, this analysis is made possible by the enormous generosity of the Canadian Federal Information Commissioner’s Office, which several weeks ago sent me a tremendous amount of useful data regarding access to information requests over the past 15 years at the Treasury Board Secretariat (TBS).

The first figure I created shows both the absolute number of Access to Information requests (ATIP) since 1996 as well as the running year-on-year percentage increase. The dotted line represents the average percentage increase over this time. As you can see, the number of ATIP requests has almost tripled in this time period. This is very significant growth – the kind you’d want to see in a well-run company. Alas, for those processing ATIP requests, I suspect it represents a significant headache.

That’s because, of course, such growth is likely unmanageable. It might be manageable if, say, the cost of handling each request were dropping rapidly. If such efficiencies were being wrested out of the system of routing and sorting requests then we could simply ignore the chart above. Sadly, as the next chart I created demonstrates, this is not the case.

[Figure: ATIP costs]

In fact the cost of managing these transactions has not tripled. It has more than quadrupled. This means that not only is the number of transactions increasing at about 8% a year, the cost of fulfilling each of those transactions is itself rising at a rate above inflation.

Now remember, I’m not even talking about the effectiveness of ATIP. I’m not talking about how quickly requests are turned around (as the Information Commissioner has discussed, it is broadly getting worse), nor am I discussing whether less information is being restricted (it isn’t; things are getting worse). These are important – and difficult to assess – metrics.

I am, instead, merely looking at the economics of ATIP and the situation looks grim. Basically two interrelated problems threaten the current system.

1) As the number of ATIP requests increases, the manpower required to answer them also appears to be increasing. At some point the hours required to fulfill all requests sent to a ministry will equal the total hours of manpower at that ministry’s disposal. Yes, that day may be far off, but the day when it hits some meaningful percentage – say 1%, 3% or 5% of total hours worked at Treasury Board – may not be that far off. That’s a significant drag on efficiency. I recall talking to a foreign service officer who mentioned that during the Afghan prisoner scandal an entire department of foreign service officers – some 60 people in all – were working full time on assessing access to information requests. That’s an enormous amount of time, energy and money.

2) Even more problematic than the number of work hours is the cost. According to the data I received, Access to Information requests cost the Treasury Board $47,196,030 last year. Yes, that’s 47 with a “million” behind it. And remember, this is just one ministry. Multiply that by 25 (let’s pretend that’s the number of ministries; there are actually many more, but I’m trying to be really conservative with my assumptions) and it means last year the government may have spent over $1.175 billion fulfilling ATIP requests. That is a staggering number. And it’s growing.
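For anyone who wants to check the arithmetic, here is the back-of-envelope calculation in a few lines of Python; the ministry count and the ~8% growth rate are the deliberately rough assumptions described above, not official figures.

```python
# Back-of-envelope check of the figures above (rough assumptions, not official data).
tbs_cost = 47_196_030   # reported ATIP cost at the Treasury Board last year
ministries = 25         # deliberately conservative count of ministries
print(f"Rough government-wide cost: ${tbs_cost * ministries:,.0f}")  # roughly $1.18 billion

# Sanity check: ~8% average annual growth compounds to roughly a tripling over ~14 years.
growth_rate = 0.08
years = 14
print(f"Growth over {years} years at {growth_rate:.0%}/yr: {(1 + growth_rate) ** years:.2f}x")
```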

Transparency, apparently, is very, very expensive. At some point, it risks becoming too expensive.

Indeed, ATIP reminds me of healthcare. It’s completely unsustainable, and absolutely necessary.

To be clear, I’m not saying we should get rid of ATIP. That, I believe, would be folly. It is and remains a powerful tool for holding government accountable. Nor do I believe that requesters should pay for ATIP requests as a way to offset costs (like BC Ferries does) – this creates a barrier that punishes the most marginalized and threatened, while enabling only the wealthy or well-financed to hold government accountable.

I do think it suggests that governments need to radically rethink how they manage ATIP. More importantly, I think it suggests that government needs to rethink how it manages information. Open data and digital documents are all part of a strategy that, I hope, can lighten the load. I’ve also felt that if/as governments move their work onto online platforms like GCPEDIA, we should simply make non-classified pages open to the public on something like a five-year timeline. This could also help reduce requests.

I’ve more ideas, but at its core we need a system rethink. ATIP is broken. You may not know it yet, but it is. The question is, what are we going to do before it goes off the cliff? Can we invent something new and better in time?

Lessons from fashion's free culture: Johanna Blakley on TED.com

This TEDx talk by Johanna Blakley is pure gold (thank you Jonathan Brun for passing it along). It’s a wonderful dissection – all while using the fashion industry as a case study – of how patents and licenses are not only unnecessary for innovation but can actually impede it.

What I found particularly fascinating is Johanna’s claim that long ago the US courts decided that clothing was “too utilitarian” to have copyright and patents applied to it. Of course, we could say that of a number of industries today – the software industry coming to mind right off the bat (can anyone imagine a world without software?).

The presentation seems to confirm another thought I’ve held – weaker copyright and patent protections do not reduce or eliminate people’s incentive to innovate. Quite the opposite. They both liberate innovation and increase its rate, as people are able to copy and reuse one another’s work. In addition, they make brands stronger, not weaker. In a world where anybody can copy anybody, innovation and the capacity to execute matter. Indeed, they are the only things that matter.

It would be nice if, here in Canada, the Ministers of Heritage (James Moore) and Industry (Tony Clement) would watch and learn from this video – and the feedback they received from ordinary Canadians. If we want industries as vibrant and profitable as the fashion industry, it may require us to think a little differently about copyright reform.

Minister Moore and the Myth of Market Forces

Last week was a bad week for the government on the copyright front. The government recently tabled legislation to reform copyright, and the man in charge of the file, Heritage Minister James Moore, gave a speech at the International Chamber of Commerce in which he decried those who questioned the bill as “radical extremists.” The comment was a none-too-veiled attack on people like University of Ottawa Professor Michael Geist who have championed reasonable copyright reform and who, like many Canadians, are concerned about some aspects of the proposed bill.

Unfortunately for the Minister, things got worse from there.

First, the Minister denied making the comment in messages to two different individuals who inquired about it:

Still worse, the Minister got into a online debate with Cory Doctorow, a bestselling writer (he won the Ontario White Pine Award for best book last year and his current novel For the Win is on the Canadian bestseller lists) and the type of person whose interests the Heritage Minister is supposed to engage and advocate on behalf of, not get into fights with.

In a confusing 140 character back and forth that lasted a few minutes, the minister oddly defended Apple and insulted Google (I’ve captured the whole debate here thanks to the excellent people at bettween). But unnoticed in the debate is an astonishing fact: the Minister seems unaware of both the task at hand and the implications of the legislation.

The following innocuous tweet summed up his position:

Indeed, in the Minister’s 22 tweets in the conversation he uses the term “market forces” six times and the theme of “letting the market or consumers decide” is in over half his tweets.

I too believe that consumers should choose what they want. But if the Minister were a true free market advocate he wouldn’t believe in copyright reform. Indeed, he wouldn’t believe in copyright at all. In a true free market, there’d be no copyright legislation because the market would decide how to deal with intellectual property.

Copyright law exists in order to regulate and shape a market because we don’t think market forces work. In short, the Minister’s legislation is creating the marketplace. Normally I would celebrate his claims of being in favour of “letting consumers decide,” but since this legislation will determine what those choices will and won’t be, the Twitter debate should leave Canadians concerned: the legislation limits consumer choices long before products reach the shelves.

Indeed, as Doctorow points out, the proposed legislation actually kills concepts created by the marketplace – like Creative Commons – that give creators control over how their works can be shared and re-used:

But advocates like Cory Doctorow and Michael Geist aren’t just concerned about the Minister’s internal contradictions in defending his own legislation. They have practical concerns that the bill narrows the choice for both consumers and creators.

Specifically, they are concerned with the legislation’s handling of what are called “digital locks.” Digital locks are software embedded into a DVD of your favourite movie or a music file you buy from iTunes that prevents you from making a copy. Previously it was legal for you to make a backup copy of your favourite tape or CD, but with a digital lock, this not only becomes practically more difficult, it becomes illegal.

Cory Doctorow outlines his concerns with digital locks in this excellent blog post:

They [digital locks] transfer power to technology firms at the expense of copyright holders. The proposed Canadian rules on digital locks mirror the US version in that they ban breaking a digital lock for virtually any reason. So even if you’re trying to do something legal (say, ripping a CD to put it on your MP3 player), you’re still on the wrong side of the law if you break a digital lock to do it.

But it gets worse. Digital locks don’t just harm content consumers (the very people Minister Moore says he is trying to provide with “choice”); they harm content creators even more:

Here’s what that means for creators: if Apple, or Microsoft, or Google, or TiVo, or any other tech company happens to sell my works with a digital lock, only they can give you permission to take the digital lock off. The person who created the work and the company that published it have no say in the matter.

So that’s Minister Moore’s version of “author’s rights” — any tech company that happens to load my books on their device or in their software ends up usurping my copyrights. I may have written the book, sweated over it, poured my heart into it — but all my rights are as nothing alongside the rights that Apple, Microsoft, Sony and the other DRM tech-giants get merely by assembling some electronics in a Chinese sweatshop.

That’s the “creativity” that the new Canadian copyright law rewards: writing an ebook reader, designing a tablet, building a phone. Those “creators” get more say in the destiny of Canadian artists’ copyrights than the artists themselves.

In short, the digital lock provisions reward neither consumers nor creators. Instead, they give the greatest rights and rewards to the one group of people in the equation whose rights are least important: distributors.

That a Heritage Minister doesn’t understand this is troubling. That he would accuse those who seek to point out this fact and raise awareness of it of being “radical extremists” is scandalous. Canadians have entrusted this person with the responsibility for creating a marketplace that rewards creativity, content creation and innovation while protecting the rights of consumers. At the moment, we have a minister who shuts out the very two groups he claims to protect while wrapping himself in a false cloak of the “free market.” It is an ominous start for the debate over copyright reform and the minister has only himself to blame.

Canada's Digital Economy Strategy: Two quick actions you can take

For those interested – or better still, up till now uninterested – in Canada’s digital economy strategy I wanted to write a quick post about some things you can do to help ensure the country moves in the right direction.

First, there are a few proposals on the digital economy strategy consultation website that could do with your vote. If you have time I encourage you to go and read them and, if swayed, to vote for them. They include:

  • Open Access to Canada’s Public Sector Information and Data – Essentially calling for open data at the federal level
  • Government Use and Participation in Open Source – A call for government to save taxpayers’ money by engaging with and leveraging the opportunity of open source software
  • Improved access to publicly-funded data – I’m actually on the fence on this one. I agree that data from publicly funded research should be made available; however, this is not open government data, and I fear that the government will adopt this recommendation and then claim that it does “open data” like the UK and the US. This option would, in fact, be something far, far short of such a claim. Indeed, the first option above is broader and encompasses this recommendation.

Second, go read Michael Geist’s piece Opening Up Canada’s Digital Economy Strategy. It is bang on and I hope to write something shortly that builds upon it.

Finally, and this is on a completely different tack, but if you are up for “clicking your mouse for change,” please also consider joining the Facebook group I recently created that encourages people to opt out of receiving the Yellow Pages. It gives instructions on what to do, and the more people who join, the bigger the message it sends to Yellow Pages – and the people who advertise in them – that this wasteful medium is no longer of interest to consumers (and never gets used anyway).

Learning from Libraries: The Literacy Challenge of Open Data

We didn’t build libraries for a literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have public policy literate citizens, we build them so that citizens may become literate in public policy.

Yesterday, in a brilliant article on The Guardian website, Charles Arthur argued that a global flood of government data is being opened up to the public (sadly, not in Canada) and that we are going to need an army of people to make it understandable.

I agree. We need a data-literate citizenry, not just a small elite of hackers and policy wonks. And the best way to cultivate that broad-based literacy is not to release in small or measured quantities, but to flood us with data. To provide thousands of niches that will interest people in learning, playing and working with open data. But more than this we also need to think about cultivating communities where citizens can exchange ideas as well as involve educators to help provide support and increase people’s ability to move up the learning curve.

Interestingly, this is not new territory.  We have a model for how to make this happen – one from which we can draw lessons or foresee problems. What model? Consider a process similar in scale and scope that happened just over a century ago: the library revolution.

In the late 19th and early 20th century, governments and philanthropists across the western world suddenly became obsessed with building libraries – lots of them. Everything from large ones like the New York Main Library to small ones like the thousands of tiny, one-room county libraries that dot the countryside. Big or small, these institutions quickly became treasured and important parts of any city or town. At the core of this project was the belief that literate citizens would be both more productive and more effective citizens.

But like open data, this project was not without controversy. It is worth noting that at the time some people argued libraries were dangerous: they could spread subversive ideas – especially about sexuality and politics – and giving citizens access to knowledge out of context would render them dangerous to themselves and society at large. Remember, ideas are a dangerous thing. And libraries are full of them.

Cora McAndrews Moellendick, a Master’s of Library Studies student who, drawing on the work of Geller, sums up the challenge beautifully:

…for a period of time, censorship was a key responsibility of the librarian, along with trying to persuade the public that reading was not frivolous or harmful… many were concerned that this money could have been used elsewhere to better serve people. Lord Rodenberry claimed that “reading would destroy independent thinking.” Librarians were also coming under attack because they could not prove that libraries were having any impact on reducing crime, improving happiness, or assisting economic growth, areas of keen importance during this period… (Geller, 1984)

Today when I talk to public servants, think tank leaders and others, most grasp the benefit of “open data” – of having the government share the data it collects. A few, however, talk about the problem of just handing data over to the public. Some question whether the activity is “frivolous or harmful.” They ask “what will people do with the data?”, “They might misunderstand it” or “They might misuse it.” Ultimately they argue we can only release this data “in context”. Data, after all, is a dangerous thing. And governments produce a lot of it.

As in the 19th century, these arguments must not prevail. Indeed, we must do the exact opposite. Charges of “frivolousness” or a desire to ensure data is only released “in context” are code to obstruct or shape data portals to ensure that they only support what public institutions or politicians deem “acceptable”. Again, we need a flood of data, not only because it is good for democracy and government, but because it increases the likelihood of more people taking interest and becoming literate.

It is worth remembering: We didn’t build libraries for an already literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have a data or public policy literate citizenry, we build them so that citizens may become literate in data, visualization, coding and public policy.

This is why coders in cities like Vancouver and Ottawa come together for open data hackathons, to share ideas and skills on how to use and engage with open data.

But smart governments should not only rely on small groups of developers to make use of open data. Forward-looking governments – those that want an engaged citizenry, a 21st-century workforce and a creative, knowledge-based economy in their jurisdiction – will reach out to universities, colleges and schools and encourage them to get their students using, visualizing, writing about and generally engaging with open data. Not only to help others understand its significance, but to foster a sense of empowerment and sense of opportunity among a generation that could create the public policy hacks that will save lives, make public resources more efficient and effective and make communities more livable and fun. The recent paper published by the University of British Columbia students who used open data to analyze graffiti trends in Vancouver is a perfect early example of this phenomenon.

When we think of libraries, we often just think of a building with books. But 19th century libraries mattered not only because they had books, but because they offered literacy programs, book clubs, and other resources to help citizens become literate and thus more engaged and productive. Open data catalogs need to learn the same lesson. While they won’t require the same centralized and costly approach as the 19th century, governments that help foster communities around open data, that encourage their school systems to use it as a basis for teaching, and that support their citizens’ efforts to write and suggest their own public policy ideas will, I suspect, benefit from happier and more engaged citizens, along with better services and stronger economies.

So what is your government/university/community doing to create its citizen army of open data analysts?

Canada 3.0 & The Collapse of Complex Business Models

If you haven’t already, I strongly encourage everyone to go read Clay Shirky’s The Collapse of Complex Business Models. I just read it while finishing up this piece and it articulates much of what underpins it in the usual brilliant Shirky manner.

I’ve been reflecting a lot on Canada 3.0 (think SXSWi meets government and big business) since the conference’s end. I want to open by saying there were a number of positive highlights. I came away with renewed respect and confidence in the CRTC. My sense is net neutrality and other core internet issues are well understood and respected by the people I spoke with. Moreover, I was encouraged by what some public servants had to say regarding their vision for Canada’s digital economy. In many corners there were some key people who seemed to understand what policy, legal and physical infrastructure needs to be in place to ensure Canada’s future success.

But these moments aside, the more I reflect on the conference the more troubled I feel. I can’t claim to have attended every session but I did attend a number and my main conclusion is striking: Canada 3.0 was not a conference primarily about Canada’s digital future. Canada 3.0 was a conference about Canada’s digital commercial future. Worse, this meant the conference failed on two levels. Firstly, it failed because people weren’t trying to imagine a digital future that would serve Canadians as creators, citizens and contributors to the internet and what this would mean to commerce, democracy and technology. Instead, my sense was that the digital future largely being contemplated was one where Canadians consumed services over the internet. This, frankly, is the least important and interesting part of the internet. Designing a digital strategy for companies is very different than designing one for Canadians.

But, secondly, even when judged in commercial terms, the conference, in my mind, failed. This is not because the wrong people were there, or that the organizers and participants were not well-intentioned. Far from it. Many good and many necessary people were in attendance (at least as much as one could expect when hosting it in Stratford).

No, the conference’s main problem was that at the core of many conversations lay an untested assumption: that we can manage the transition of broadcast media (by this I mean movies, books, newspapers & magazines, television) as well as other industries from (a) a broadcast economy to (b) a networked/digital economy. Consequently, the central business and policy challenge becomes how we help these businesses survive this transition period and get “b” happening ASAP so that the new business models work.

But the key assumption is that the institutions – private and public – that were relevant in the broadcast economy can transition. Or that the future will allow for a media industry that we could even recognize. While I’m open to the possibility that some entities may make it, I’m more convinced that most will not. Indeed, it isn’t even clear that a single traditional business model, even radically adapted, can adjust to a network world.

What no one wants to suggest is that we may not be managing a transition. We may be managing death.

The result: a conference that doesn’t let those who have let go of the past roam freely. Instead they must lug around all the old modes like a ball and chain.

Indeed, one case in point was listening to managers of the Government of Canada’s multimedia fund share how, to get funding, a creator would need to partner with a traditional broadcaster. To be clear, if you want to kill content, give it to a broadcaster: they’ll play it once or twice, then put it in a vault and no one will ever see it again. Furthermore, a broadcaster has all the infrastructure, processes and overhead that make them unworkable and unprofitable in the online era. Why saddle someone new with all this? Ultimately this is a program designed to create failures and, worse, pollute the minds of emerging multimedia artists with all sorts of broadcast baggage. All in the belief that it will help bridge the transition. It won’t.

The ugly truth is that just like the big horse-buggy makers didn’t survive the transition to the automobile, and many of the creators of large complex mainframe computers didn’t survive the arrival of the personal computer, our traditional media environment is loaded with the walking dead. Letting them control the conversation, influence policy and shape the agenda is akin to asking horse-drawn carriage makers to write the rules for the automobile era. But this is exactly what we are doing. The copyright law, the pillar of this next economy, is being written not by the PMO, but by the losers of the last economy. Expect it to slow our development down dramatically.

And that’s why Canada 3.0 isn’t about planning for 3.0 at all. More like trying to save 1.0.

Open Government interview and panel on TVO's The Agenda with Steve Paikin

My interview on TVO’s The Agenda with Steve Paikin has been uploaded to YouTube (BTW, it is fantastic that The Agenda has a YouTube channel where it posts all its interviews. Kudos!). If you live outside Ontario, or were wrapped up in the Senators-Pens playoff game that was on at the same time (which obviously we destroyed in the ratings), I thought I’d throw it up here as a post in case it is of interest. The first clip is a one-on-one interview between myself and Paikin. The second clip is the discussion panel that occurred afterward with myself, senior editor of Reason magazine Katherine Mangu-Ward, American Prospect executive editor Mark Schmitt and the Sunlight Foundation’s Policy Director John Wonderlich.

Hope you enjoy!

One on one interview with Paikin:

Panel Discussion: