Category Archives: free culture

Lessons from fashion's free culture: Johanna Blakley on TED.com

This TEDx talk by Johanna Blakley is pure gold (thank you Jonathan Brun for passing it along). It’s a wonderful dissection – all while using the fashion industry as a case study – of how patents and licenses are not only unnecessary for innovation but can actually impede it.

What I found particularly fascinating is Johanna’s claim that long ago the US courts decided that clothing was “too utilitarian” to have copyright and patents applied to it. Of course, we could say that of a number of industries today – the software industry coming to mind right off the bat (can anyone imagine a world without software?).

The presentation seems to confirm another thought I’ve held – weaker copyright and patent protections do not reduce or eliminate people’s incentive to innovate. Quite the opposite. They both liberate innovation and increase its rate, as people are able to copy and reuse one another’s work. In addition, weaker protections make brands stronger, not weaker. In a world where anybody can copy anybody, innovation and the capacity to execute matter. Indeed, they are the only things that matter.

It would be nice if, here in Canada, the Ministers of Heritage (James Moore) and Industry (Tony Clement) would watch and learn from this video – and the feedback they received from ordinary Canadians. If we want industries as vibrant and profitable as the fashion industry, it may require us to think a little differently about copyright reform.

Three stories of change from the International Open Data Hackathon

Over the past few weeks people have been in touch with me about what happened in their cities during the open data hackathon. I wanted to share some of their stories so that people can see the potential of the event.

Here are a few that really struck me:

If you get a moment, read Ton Zijlstra’s blog post about the open data hackathon in Enschede, in the Netherlands. It pretty much sums up everything we wanted to have happen during the hackathon:

  • Data sets released: Because of the hackathon the City of Enschede got motivated and released 25 data sets for participants to work with. This alone made me happy, as it was a big part of why we wanted to do the hackathon – get governments to act!
  • Good cross section of participation: Local entrepreneurs, students and civil servants, including a civil servant with an IT background on hand all day to help out and a departmental head dropping by to see what was going on
  • Education: Interested government officials from neighboring cities dropped by to learn more
  • Tangible Outputs: As a result of the hackathon’s efforts two prototypes were built: a map overview of all building permit requests and the underlying plans (wish we had this in Vancouver), and a map overview of local business bankruptcies
  • Connectivity: They had a video session with the groups in Helsinki and Vienna to share lessons about the event and show off the prototypes.

Meanwhile, from Bangalore, I got this email from the local organizer Vasanth B:

We have not found a place to host our app yet. Unfortunate as it may seem. We are hoping to get it up in another 3 days. wanted to thank you for coming up with this novel concept. We all are convinced that open data is crucial and hence we will create a website which will be a one stop place to get the data of our country’s parliament!
I will send you the link of our site soon. Once again thanks to this event, we learned a lot and hope to be part of this in the coming days.
It’s great to see two things here:
  • Civic Engagement: Here is a group of developers that hadn’t thought much about Open Data but became interested because of the event and have developed a passion for using their skills to help make democratic information more available.
  • Tangible Outcome: They created an app that allows you to see public statements made by the leaders of India’s political parties at the national and state level. (Demo can be seen here)
And in Thailand, Keng organized an amazing hackathon in two weeks. Here one of the big outputs was scraping Thailand’s House of Representatives website. What was great about this output:
  • Created Open Data: In many jurisdictions there is little machine-readable open data available. The great thing about the work of the Bangkok team is that they have now made it possible for others to create applications using data from Thailand’s House of Representatives.
  • Learned new skills/tools: After the hackathon Keng sent the creators of ScraperWiki a really nice note explaining how great a tool it was. The fact that a bunch of people got familiar with ScraperWiki is itself a big win, as each time someone uses it they create more open data for others to leverage. Indeed, Justin Houk, who participated in the Open Data Hackathon on the other side of the world in Portland, Oregon, has written a great blog post explaining why they used ScraperWiki.
Finally, in Oxford, Tim Davies has an excellent recap of what occurred at the hackathon there, with a number of great lessons learned. Again, here is some of what I loved:
  • Civic Engagement: As with Enschede, developers mainly worked on things that they thought would make their community better. Hackathons are about getting people involved in and better understanding their community.
  • More tangible outcomes(!): See Tim’s list…
I also got a great email from Iain Emsley who described exactly why Open Data can lead to public engagement.
I started on playing with Arts Council of England funding data from this region for last year but we got so enthused that a few of us downloaded the entire dataset of 5 years worth of funding! Anyhow, just thought I’d ping you with the URL of the stuff that we started playing with and I went off and started redeveloping.

Glad you organised it and looking forward to future days. I’m even thinking of trying to organise a literature hackday now…

Again, these are not all the events that happened; there was lots more activity. These are just some highlights that I read and wanted to share.

To see a list of many of the artifacts produced during the hackathon take a look at the Open Data Hackathon wiki.

An Open Data Inspired Holiday Gift to Montrealers

It turns out that Santa, with the help of two terribly clever elves over at Montreal Ouvert, has created an Open Data inspired present for Montrealers.

What, you must ask, could it be?

It’s PatinerMontreal.ca

It’s a genius little website created by two Montreal developers – James McKinney and Dan Mireault – that scrapes the City of Montreal’s data on ice rink status to display the location and condition of all the outdoor ice rinks in the city.

What more could a winter-bound Montrealer ask for? Well… actually… how about being able to download it as an Android app to use on your smartphone? Yes, you can do that too thanks to another Montreal software developer: Mudar Noufal.

Here’s a screen shot of the slick web version (more on the project below the fold)

Creating this unbelievably useful application was no small feat. It turns out that the City of Montreal publishes the state of the outdoor hockey rinks every day in PDF format. While it is nice that the city puts this information up on the web, sharing it via PDF is probably the most inaccessible way of meeting this goal. To create this site the developers have to “scrape” the data out of these PDF files every day. Writing the software to do this is not only tedious, it can also be frustrating and laborious. This data was created with tax dollars and encourages the use of city assets; making it difficult to access is unnecessary and counterproductive.
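
For the technically curious, here is a rough sketch of what that daily scrape might involve. It is a minimal illustration only: the URL, the PDF layout and the condition labels are invented placeholders (the city’s real PDFs differ, and this is not the PatinerMontreal.ca code), but the overall flow – download the PDF, extract the text, parse it with a pattern – is the kind of work the developers had to automate.

```python
import re

import pdfplumber
import requests

# Placeholder URL -- the real City of Montreal rink-conditions PDF lives elsewhere.
PDF_URL = "https://example.org/patinoires/conditions.pdf"


def fetch_rink_conditions(url: str = PDF_URL) -> list:
    """Download the day's PDF and pull out (rink name, condition) pairs."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    with open("conditions.pdf", "wb") as f:
        f.write(response.content)

    rinks = []
    with pdfplumber.open("conditions.pdf") as pdf:
        for page in pdf.pages:
            text = page.extract_text() or ""
            for line in text.splitlines():
                # Invented line format: "Parc La Fontaine    Excellent"
                match = re.match(r"^(.+?)\s{2,}(Excellent|Good|Poor)\s*$", line)
                if match:
                    rinks.append({"name": match.group(1), "condition": match.group(2)})
    return rinks


if __name__ == "__main__":
    for rink in fetch_rink_conditions():
        print(rink["name"], "->", rink["condition"])
```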

Making the data accessible matters because, if you can get it, the things you can create (like PatinerMontreal.ca) can be gorgeous and far superior to anything the city offers. The City’s PDFs convey a lot of information in a difficult-to-decipher format – text. Visualizing this information and making it searchable allows the user to quickly see where rinks are located in the city, what types of rinks (skating versus hockey) are located where, and the status of said rinks (newly iced or not).
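
To give a sense of how little code it takes to go from parsed data to something citizens can actually use, here is a minimal sketch using the folium mapping library. The rink records and coordinates below are invented placeholders and this is not the actual PatinerMontreal.ca implementation; it simply illustrates the kind of map view that becomes trivial once the data is accessible.

```python
import folium

# Invented sample records -- in practice these would come from the daily scrape.
rinks = [
    {"name": "Parc La Fontaine", "lat": 45.5225, "lon": -73.5695, "condition": "Excellent"},
    {"name": "Parc Jarry", "lat": 45.5340, "lon": -73.6280, "condition": "Good"},
]

# Centre the map on Montreal and drop one marker per rink.
rink_map = folium.Map(location=[45.51, -73.59], zoom_start=12)
for rink in rinks:
    folium.Marker(
        [rink["lat"], rink["lon"]],
        popup="{}: {}".format(rink["name"], rink["condition"]),
    ).add_to(rink_map)

rink_map.save("patinoires.html")  # open the HTML file in a browser to explore the rinks
```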

My hope – and the hope of Montreal Ouvert – is that projects like this show the City of Montreal (and other cities across Canada) the power of getting data out of PDFs and sharing it in a machine-readable format on an open data portal. If Montreal had an Open Data portal (like Vancouver, Nanaimo, Edmonton, Toronto, Ottawa, and others), this application would have been much easier to create and Montrealers would enjoy the benefit of being able to better use the services their tax dollars work so hard to create.

Congratulations to James, Dan and Mudar on such a fantastic project.

Happy Holidays to Montreal Ouvert.

Happy Holidays Montreal. Hope you enjoy (and use) this gift.

Open Data planning session at BarCamp Vancouver

With the International Open Data Hackathon a little more than two weeks away, a lot has happened.

On the organizing wiki, people in over 50 cities, in 21 countries and on 4 continents, have offered to organize local events. Open data sets that people can use have been posted to a specially created page, a few nascent app ideas have been shared, as has advice on how to run a hackathon. (On Twitter, the event hashtag is #odhd.)

In Vancouver, the local BarCamp will be taking place this weekend. I’m not in town; however, Aaron Gladders, a local hacker with a ton of experience working with and opening up data sets, contacted me to let me know he’d like to run a planning session for the hackathon at BarCamp. If you’re in Vancouver I hope you can attend.

Why? Because this is a great opportunity. And it has lessons for the hackathons around the world.

I love it because it means people can share ideas and projects they would like to hack on, recruit others, hear feedback about challenges, obstacles and alternative approaches, and think about all of this for two weeks before the hackathon. A planning session also has an even bigger benefit: it means more people are likely to arrive on the day with something specific ready to work on. I want the hackathons to be social. But they can’t be exclusively so. It is important that we actually try to create some real products that are useful to us and/or our fellow citizens.

For those elsewhere in the world who are also thinking about December 4th I hope that some of us will start reaching out to one another and thinking about how we will spend the day. A few thoughts on this:

1. Take a look at the data sets that are out there before Dec 4th. People have been putting together a pretty good list here.

2. Localization. I think some of the best wins will be around localizing successful apps from other places. For example, I’ve been encouraging the team in Bangalore to consider localizing Michael Mulley’s OpenParliament.ca application (the source code for which is here). If you have an application you think others might want to localize, add it to the application page on the wiki. If there is an app out there you’d like to localize, write its author/developer team. Ask them if they might be willing to share the code.

3. Get together with 2-3 friends and come up with a plan. What do you want to accomplish on the 4th?

4. If you are looking for a project, let people know on the wiki, and leave a Twitter handle or some other way for people with ideas to contact you before the 4th.

Okay, that’s it for now. I’m really excited about how much progress we’ve made in a few short weeks. Ideally, at the end of the 4th I’d love for some cities to be able to showcase some apps to the world that they’ve created. We have an opportunity to show the media, politicians, public servants, our fellow citizens, but most importantly, each other, just what is possible with open data.

Rethinking Wikipedia contributions rates

About a year ago, news stories began to surface that Wikipedia was losing more contributors than it was gaining. These stories were based on the research of Felipe Ortega, who had downloaded and analyzed the data of millions of contributors.

This is a question of importance to all of us. Crowdsourcing has been a powerful and disruptive force, socially and economically, in the short history of the web. Organizations like Wikipedia and Mozilla (at the large end of the scale) and millions of much smaller examples have destroyed old business models, spawned new industries and redefined our ideas about how we can work together. Understanding how these communities grow and evolve is of paramount importance.

In response to Ortega’s research, the Wikimedia Foundation posted a response on its blog that challenged the methodology and offered some clarity:

First, it’s important to note that Dr. Ortega’s study of editing patterns defines as an editor anyone who has made a single edit, however experimental. This results in a total count of three million editors across all languages.  In our own analytics, we choose to define editors as people who have made at least 5 edits. By our narrower definition, just under a million people can be counted as editors across all languages combined.  Both numbers include both active and inactive editors.  It’s not yet clear how the patterns observed in Dr. Ortega’s analysis could change if focused only on editors who have moved past initial experimentation.
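
As a toy illustration of how much this definitional choice matters, the snippet below counts “editors” two ways: everyone with at least one edit versus everyone with at least five. The edit log here is invented; the real analysis runs over dumps covering millions of contributors.

```python
from collections import Counter

# Invented edit log: each entry is one edit attributed to a username.
edit_log = [
    "alice", "alice", "alice", "alice", "alice", "alice",  # committed editor
    "bob", "bob",                                          # experimented briefly
    "carol",                                               # single test edit
    "dave", "dave", "dave", "dave", "dave",
]

edits_per_user = Counter(edit_log)

editors_one_edit = sum(1 for n in edits_per_user.values() if n >= 1)
editors_five_edits = sum(1 for n in edits_per_user.values() if n >= 5)

print("Editors (>= 1 edit):", editors_one_edit)     # 4 -- the broader definition
print("Editors (>= 5 edits):", editors_five_edits)  # 2 -- the narrower definition
```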

This is actually quite fair. But the specifics are less interesting than the overall trend described by the Wikimedia Foundation. It’s worth noting that no open source or peer production project can grow infinitely. There is (a) a finite number of people in the world and (b) a finite amount of work that any system can absorb. At some point participation must stabilize. I’ve tried to illustrate this trend in the graphic below.

[Figure: Open Source Lifecycle – project participation over time, with a maintenance threshold line]

As luck would have it, my friend Diederik Van Liere was recently hired by the Wikimedia Foundation to help them get a better understanding of editor patterns on Wikipedia – how many editors are joining and leaving the community at any given moment, and over time.

I’ve been thinking about Diederik’s research, and three things come to mind when I look at the above chart:

1. The question isn’t how do you ensure continued growth, nor is it always how do you stop decline. It’s about ensuring the continuity of the project.

Rapid growth should probably be expected of an open source or peer production project in the early stage that has LOTS of buzz around it (like Wikipedia was back in 2005). There’s lots of work to be done (so many articles HAVEN’T been written).

Decline may also be reasonable after the initial burst. I suspect many open source projects lose developers after the product moves out of beta. Indeed, some research Diederik and I have done on the Firefox community suggests this is the case.

Consequently, it might be worth inverting his research question. In addition to figuring out participation rates, figure out the minimum critical mass of contributors needed to sustain the project. For example, how many editors does Wikipedia need to, at a minimum, (a) prevent vandals from destroying the current article inventory and/or, at a maximum, (b) sustain an article update and growth rate that supports the current rate of traffic (which notably continues to grow significantly)? The purpose of Wikipedia is not to have many or few editors; it is to maintain the world’s most comprehensive and accurate encyclopedia.

I’ve represented this minimum critical mass in the graphic above with a “maintenance threshold” line. Figuring out the metric for that feels like it may be more important than participation rates on their own, as such a metric could form the basis for a dashboard that would tell you a lot about the health of the project.

2. There might be an interesting equation describing participation rates

Another thing that struck me was that each open source project may have a participation quotient: a number that describes the amount of participation required to sustain a given unit of work in the project. For example, in Wikipedia, it may be that every new page that is added needs 0.000001 new editors in order to be sustained. If page growth exceeds editor growth (or the community shrinks), at a certain point the project’s size outstrips the capacity of the community to sustain it. I can think of a few variables that might help ascertain this quotient – and I accept it wouldn’t be a fixed number. Change the technologies or rules around participation and you might increase the effectiveness of a given participant (lowering the quotient) or make it harder to sustain work (raising the quotient). Indeed, the trend of a participation quotient would itself be interesting to monitor… projects will have to keep finding innovative ways to hold it constant even as the project’s article archive or code base gets more complex.
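
To make points 1 and 2 a little more concrete, here is a back-of-envelope sketch of how a “participation quotient” and the “maintenance threshold” it implies might be wired into a simple health metric. Every number below is invented for illustration; estimating the quotient empirically is the actual research problem.

```python
def maintenance_threshold(articles: int, quotient: float) -> float:
    """Editors needed to sustain the current article inventory."""
    return articles * quotient


def health_ratio(active_editors: int, articles: int, quotient: float) -> float:
    """Above 1.0 the community can sustain the project; below 1.0 it is slipping."""
    return active_editors / maintenance_threshold(articles, quotient)


# Invented figures, purely for illustration.
articles = 3_500_000    # current article inventory
quotient = 0.001        # hypothetical: one active editor can sustain ~1,000 articles
active_editors = 3_000  # editors active in a given month

print("Maintenance threshold:", maintenance_threshold(articles, quotient))          # 3500.0
print("Health ratio:", round(health_ratio(active_editors, articles, quotient), 2))  # 0.86
```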

3. Finding a test case – study a wiki or open source project in the decline phase

One thing about open source projects is that they rarely die. Indeed, there are lots of open source projects out there that are walking zombies: a small, dedicated community struggles to keep a code base intact and functioning that is much too large for it to manage. My sense is that peer production/open source projects can collapse (would MySpace count as an example?) but they rarely collapse and die.

Diederik suggested that maybe one should study a wiki or open source project that has died. The fact that they rarely do is actually a good thing from a research perspective, as it means the infrastructure (and thus the data about the history of participation) is often still intact – ready to be downloaded and analyzed. By finding such a community we might be able to (a) ascertain what the project’s “maintenance threshold” was at its peak, (b) see how its “participation quotient” evolved (or didn’t evolve) over time and, most importantly, (c) see if there are subtle clues or actions that could serve as predictors of decline or collapse. Obviously, in some cases these might be exogenous forces (e.g. new technologies or processes made the project obsolete) but these could probably be controlled for.

Anyways, hopefully there is lots here for metric geeks and community managers to chew on. These are only some preliminary thoughts so I hope to flesh them out some more with friends.

Minister Moore and the Myth of Market Forces

Last week was a bad week for the government on the copyright front. The government recently tabled legislation to reform copyright, and the man in charge of the file, Heritage Minister James Moore, gave a speech at the International Chamber of Commerce in which he decried those who questioned the bill as “radical extremists.” The comment was a none-too-veiled attack on people like University of Ottawa professor Michael Geist, who have championed reasonable copyright reform and who, like many Canadians, are concerned about some aspects of the proposed bill.

Unfortunately for the Minister, things got worse from there.

First, the Minister denied making the comment in messages to two different individuals who inquired about it.

Still worse, the Minister got into an online debate with Cory Doctorow, a bestselling writer (he won the Ontario White Pine Award for best book last year and his current novel For the Win is on the Canadian bestseller lists) and the type of person whose interests the Heritage Minister is supposed to engage and advocate on behalf of, not get into fights with.

In a confusing 140-character back-and-forth that lasted a few minutes, the minister oddly defended Apple and insulted Google (I’ve captured the whole debate here thanks to the excellent people at bettween). But unnoticed in the debate is an astonishing fact: the Minister seems unaware of both the task at hand and the implications of the legislation.

The following innocuous tweet summed up his position:

Indeed, in the Minister’s 22 tweets in the conversation he uses the term “market forces” six times and the theme of “letting the market or consumers decide” is in over half his tweets.

I too believe that consumers should choose what they want. But if the Minister were a true free market advocate he wouldn’t believe in copyright reform. Indeed, he wouldn’t believe in copyright at all. In a true free market, there’d be no copyright legislation because the market would decide how to deal with intellectual property.

Copyright law exists in order to regulate and shape a market because we don’t think market forces work. In short, the Minister’s legislation is creating the marketplace. Normally I would celebrate his claims of being in favour of “letting consumers decide,” but this legislation will determine what those choices will and won’t be. That is why the Twitter debate should leave Canadians concerned: the legislation limits consumer choices long before products reach the shelves.

Indeed, as Doctorow points out, the proposed legislation actually kills concepts created by the marketplace – like Creative Commons – that give creators control over how their works can be shared and re-used:

But advocates like Cory Doctorow and Michael Geist aren’t just concerned about the Minister’s internal contradictions in defending his own legislation. They have practical concerns that the bill narrows the choice for both consumers and creators.

Specifically, they are concerned with the legislation’s handling of what are called “digital locks.” Digital locks are software embedded into a DVD of your favourite movie or a music file you buy from iTunes that prevents you from making a copy. Previously it was legal for you to make a backup copy of your favourite tape or CD, but with a digital lock, this not only becomes practically more difficult, it becomes illegal.

Cory Doctorow outlines his concerns with digital locks in this excellent blog post:

They [digital locks] transfer power to technology firms at the expense of copyright holders. The proposed Canadian rules on digital locks mirror the US version in that they ban breaking a digital lock for virtually any reason. So even if you’re trying to do something legal (say, ripping a CD to put it on your MP3 player), you’re still on the wrong side of the law if you break a digital lock to do it.

But it gets worse. Digital locks don’t just harm content consumers (the very people Minister Moore says he is trying to provide with “choice”); they harm content creators even more:

Here’s what that means for creators: if Apple, or Microsoft, or Google, or TiVo, or any other tech company happens to sell my works with a digital lock, only they can give you permission to take the digital lock off. The person who created the work and the company that published it have no say in the matter.

So that’s Minister Moore’s version of “author’s rights” — any tech company that happens to load my books on their device or in their software ends up usurping my copyrights. I may have written the book, sweated over it, poured my heart into it — but all my rights are as nothing alongside the rights that Apple, Microsoft, Sony and the other DRM tech-giants get merely by assembling some electronics in a Chinese sweatshop.

That’s the “creativity” that the new Canadian copyright law rewards: writing an ebook reader, designing a tablet, building a phone. Those “creators” get more say in the destiny of Canadian artists’ copyrights than the artists themselves.

In short, the digital lock provisions reward neither consumers nor creators. Instead, they give the greatest rights and rewards to the one group of people in the equation whose rights are least important: distributors.

That a Heritage Minister doesn’t understand this is troubling. That he would accuse those who seek to point out this fact and raise awareness of it of being “radical extremists” is scandalous. Canadians have entrusted this person with the responsibility for creating a marketplace that rewards creativity, content creation and innovation while protecting the rights of consumers. At the moment, we have a minister who shuts out the very two groups he claims to protect while wrapping himself in a false cloak of the “free market.” It is an ominous start for the debate over copyright reform, and the minister has only himself to blame.

Canada's Digital Economy Strategy: Two quick actions you can take

For those interested – or better still, up till now uninterested – in Canada’s digital economy strategy I wanted to write a quick post about some things you can do to help ensure the country moves in the right direction.

First, there are a few proposals on the digital economy strategy consultation website that could do with your vote. If you have time I encourage you to go and read them and, if swayed, to vote for them. They include:

  • Open Access to Canada’s Public Sector Information and Data – Essentially calling for open data at the federal level
  • Government Use and Participation in Open Source – A call for government to save taxpayers money by engaging with and leveraging the opportunity of open source software
  • Improved access to publicly-funded data – I’m actually on the fence on this one. I agree that data from publicly funded research should be made available; however, this is not open government data, and I fear that the government will adopt this recommendation and then claim that it does “open data” like the UK and the US. This option would, in fact, fall far, far short of such a claim. Indeed, the first option above is broader and encompasses this recommendation.

Second, go read Michael Geist’s piece Opening Up Canada’s Digital Economy Strategy. It is bang on and I hope to write something shortly that builds upon it.

Finally, and this is on a completely different tack, but if you are up for “clicking your mouse for change,” please also consider joining the Facebook group I recently created that encourages people to opt out of receiving the Yellow Pages. It gives instructions on what to do, and the more people who join, the bigger the message it sends to Yellow Pages – and the people that advertise in them – that this wasteful medium is no longer of interest to consumers (and never gets used anyways).

Learning from Libraries: The Literacy Challenge of Open Data

We didn’t build libraries for a literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have public policy literate citizens, we build them so that citizens may become literate in public policy.

Yesterday, in a brilliant article on The Guardian website, Charles Arthur argued that a global flood of government data is being opened up to the public (sadly, not in Canada) and that we are going to need an army of people to make it understandable.

I agree. We need a data-literate citizenry, not just a small elite of hackers and policy wonks. And the best way to cultivate that broad-based literacy is not to release in small or measured quantities, but to flood us with data. To provide thousands of niches that will interest people in learning, playing and working with open data. But more than this we also need to think about cultivating communities where citizens can exchange ideas as well as involve educators to help provide support and increase people’s ability to move up the learning curve.

Interestingly, this is not new territory. We have a model for how to make this happen – one from which we can draw lessons or foresee problems. What model? Consider a process similar in scale and scope that happened just over a century ago: the library revolution.

In the late 19th and early 20th century, governments and philanthropists across the western world suddenly became obsessed with building libraries – lots of them. Everything from large ones like the New York Main Library to small ones like the thousands of tiny, one-room county libraries that dot the countryside. Big or small, these institutions quickly became treasured and important parts of any city or town. At the core of this project was the belief that literate citizens would be both more productive and more effective citizens.

But like open data, this project was not without controversy. It is worth noting that at the time some people argued that libraries were dangerous: that they could spread subversive ideas – especially about sexuality and politics – and that giving citizens access to knowledge out of context would render them dangerous to themselves and society at large. Remember, ideas are a dangerous thing. And libraries are full of them.

Cora McAndrews Moellendick, a Master’s of Library Studies student who draws on the work of Geller, sums up the challenge beautifully:

…for a period of time, censorship was a key responsibility of the librarian, along with trying to persuade the public that reading was not frivolous or harmful… many were concerned that this money could have been used elsewhere to better serve people. Lord Rodenberry claimed that “reading would destroy independent thinking.” Librarians were also coming under attack because they could not prove that libraries were having any impact on reducing crime, improving happiness, or assisting economic growth, areas of keen importance during this period… (Geller, 1984)

Today when I talk to public servants, think tank leaders and others, most grasp the benefit of “open data” – of having the government share the data it collects. A few, however, talk about the problem of just handing data over to the public. Some question whether the activity is “frivolous or harmful.” They ask, “What will people do with the data?”, “They might misunderstand it,” or “They might misuse it.” Ultimately they argue we can only release this data “in context.” Data, after all, is a dangerous thing. And governments produce a lot of it.

As in the 19th century, these arguments must not prevail. Indeed, we must do the exact opposite. Charges of “frivolousness” or a desire to ensure data is only released “in context” are code to obstruct or shape data portals to ensure that they only support what public institutions or politicians deem “acceptable”. Again, we need a flood of data, not only because it is good for democracy and government, but because it increases the likelihood of more people taking interest and becoming literate.

It is worth remembering: We didn’t build libraries for an already literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have a data or public policy literate citizenry, we build them so that citizens may become literate in data, visualization, coding and public policy.

This is why coders in cities like Vancouver and Ottawa come together for open data hackathons, to share ideas and skills on how to use and engage with open data.

But smart governments should not only rely on small groups of developers to make use of open data. Forward-looking governments – those that want an engaged citizenry, a 21st-century workforce and a creative, knowledge-based economy in their jurisdiction – will reach out to universities, colleges and schools and encourage them to get their students using, visualizing, writing about and generally engaging with open data. Not only to help others understand its significance, but to foster a sense of empowerment and sense of opportunity among a generation that could create the public policy hacks that will save lives, make public resources more efficient and effective and make communities more livable and fun. The recent paper published by the University of British Columbia students who used open data to analyze graffiti trends in Vancouver is a perfect early example of this phenomenon.

When we think of libraries, we often just think of a building with books. But 19th-century libraries mattered not only because they had books, but because they offered literacy programs, book clubs, and other resources to help citizens become literate and thus more engaged and productive. Open data catalogs need to learn the same lesson. While they won’t require the same centralized and costly approach as the 19th century, governments that help foster communities around open data, that encourage their school system to use it as a basis for teaching, and that support their citizens’ efforts to write and suggest their own public policy ideas will, I suspect, benefit from happier and more engaged citizens, along with better services and stronger economies.

So what is your government/university/community doing to create its citizen army of open data analysts?

Mick Jagger & why copyright doesn't always help artists

I recently read this wonderful interview with Mick Jagger on the BBC website which had this fantastic extract about the impact of the internet on the music industry. What I love about this interview is that Mick Jagger is, of course, about as old a legend as you can find in the music industry.

…I’m talking about the internet.

But that’s just one facet of the technology of music. Music has been aligned with technology for a long time. The model of records and record selling is a very complex subject and quite boring, to be honest.

But your view is valid because you have a huge catalogue, which is worth a lot of money, and you’ve been in the business a long time, so you have perspective.

Well, it’s all changed in the last couple of years. We’ve gone through a period where everyone downloaded everything for nothing and we’ve gone into a grey period where it’s much easier to pay for things – assuming you’ve got any money.

Are you quite relaxed about it?

I am quite relaxed about it. But, you know, it is a massive change and it does alter the fact that people don’t make as much money out of records.

But I have a take on that – people only made money out of records for a very, very small time. When The Rolling Stones started out, we didn’t make any money out of records because record companies wouldn’t pay you! They didn’t pay anyone!

Then, there was a small period from 1970 to 1997, where people did get paid, and they got paid very handsomely and everyone made money. But now that period has gone.

So if you look at the history of recorded music from 1900 to now, there was a 25 year period where artists did very well, but the rest of the time they didn’t.

So what does this have to do with copyright? Well, remember, the record labels and other content distributors (not creators!) keep saying that artists will starve unless there is copyright. But understand that for the entire 110-year period that Mick Jagger is referencing there was copyright… and yet artists were paid to record LPs and records for only a small fraction (less than a quarter) of that period. During the rest of the time, the way they made money was by performing. There is nothing about a stronger copyright regime that ensures artists (the creators!) will receive more money or compensation.

So when the record labels say that without stricter copyright legislation artists will suffer, what they really mean is that one specific business model – one that requires distributors and that they happen to do well by – will suffer. Artists, who traditionally never received much from the labels (and even during this 25-year period only a tiny few profited handsomely), have no guarantee that with stricter copyright they will see more revenue. No, rather, the distributors will simply own their content for longer and have greater control over its use.

This country is about to go into a dark, dark place with the new copyright legislation. I suspect we will end up stalled for 30 years and cultural innovation will shift to other parts of the world where creativity, remix culture and forms of artistic expression are kept more free.

Again, as Lessig says:

  • Creativity and innovation always builds on the past.
  • The past always tries to control the creativity that builds upon it.
  • Free societies enable the future by limiting this power of the past.
  • Ours is less and less a free society.

Welcome to copyright reform. A Canada where the past controls the creativity that gets built upon it.

Canada 3.0 & The Collapse of Complex Business Models

If you haven’t already, I strongly encourage everyone to go read Clay Shirky’s The Collapse of Complex Business Models. I just read it while finishing up this piece and it articulates much of what underpins it in the usual brilliant Shirky manner.

I’ve been reflecting a lot on Canada 3.0 (think SXSWi meets government and big business) since the conference’s end. I want to open by saying there were a number of positive highlights. I came away with renewed respect and confidence in the CRTC. My sense is net neutrality and other core internet issues are well understood and respected by the people I spoke with. Moreover, I was encouraged by what some public servants had to say regarding their vision for Canada’s digital economy. In many corners there were some key people who seemed to understand what policy, legal and physical infrastructure needs to be in place to ensure Canada’s future success.

But these moments aside, the more I reflect on the conference the more troubled I feel. I can’t claim to have attended every session but I did attend a number and my main conclusion is striking: Canada 3.0 was not a conference primarily about Canada’s digital future. Canada 3.0 was a conference about Canada’s digital commercial future. Worse, this meant the conference failed on two levels. Firstly, it failed because people weren’t trying to imagine a digital future that would serve Canadians as creators, citizens and contributors to the internet and what this would mean to commerce, democracy and technology. Instead, my sense was that the digital future largely being contemplated was one where Canadians consumed services over the internet. This, frankly, is the least important and interesting part of the internet. Designing a digital strategy for companies is very different than designing one for Canadians.

But, secondly, even when judged in commercial terms, the conference, in my mind, failed. This is not because the wrong people were there, or that the organizers and participants were not well-intentioned. Far from it. Many good and many necessary people were in attendance (at least as one could expect when hosting it in Stratford).

No, the conference’s main problem was that at the core of many conversations lay an untested assumption: that we can manage the transition of broadcast media (by this I mean movies, books, newspapers & magazines, television) as well as other industries from (a) a broadcast economy to (b) a networked/digital economy. Consequently, the central business and policy challenge becomes how we help these businesses survive this transition period and get “b” happening asap so that the new business models work.

But the key assumption is that the institutions – private and public – that were relevant in the broadcast economy can transition. Or that the future will allow for a media industry that we could even recognize. While I’m open to the possibility that some entities may make it, I’m more convinced that most will not. Indeed, it isn’t even clear that a single traditional business model, even radically adapted, can adjust to a network world.

What no one wants to suggest is that we may not be managing a transition. We may be managing death.

The result: a conference that doesn’t let those who have let go of the past roam freely. Instead they must lug around all the old modes like a ball and chain.

Indeed, one case in point was listening to managers of the Government of Canada’s multimedia fund share how, to get funding, a creator would need to partner with a traditional broadcaster. To be clear: if you want to kill content, give it to a broadcaster. They’ll play it once or twice, then put it in a vault and no one will ever see it again. Furthermore, a broadcaster has all the infrastructure, processes and overhead that make them unworkable and unprofitable in the online era. Why saddle someone new with all this? Ultimately this is a program designed to create failures and, worse, pollute the minds of emerging multimedia artists with all sorts of broadcast baggage. All in the belief that it will help bridge the transition. It won’t.

The ugly truth is that just as the big horse-buggy makers didn’t survive the transition to the automobile, and many of the makers of large, complex mainframe computers didn’t survive the arrival of the personal computer, our traditional media environment is loaded with the walking dead. Letting them control the conversation, influence policy and shape the agenda is akin to asking horse-drawn carriage makers to write the rules for the automobile era. But this is exactly what we are doing. The copyright law, the pillar of this next economy, is being written not by the PMO, but by the losers of the last economy. Expect it to slow our development down dramatically.

And that’s why Canada 3.0 isn’t about planning for 3.0 at all. More like trying to save 1.0.