Social Media and Rioters

My friend Alexandra Samuel penned a piece titled “After a Loss in Vancouver, Troubling Signals of Citizen Surveillance” over at the Harvard Business Review. The piece highlights her concern with the number of people willing to engage in citizen surveillance.

As she states:

It’s one thing to take pictures as part of the process of telling your story, or as part of your (paid or unpaid) work as a citizen journalist. It’s another thing entirely to take and post pictures and videos with the explicit intention of identifying illegal (or potentially illegal) activity. At that moment you are no longer engaging in citizen journalism; you’re engaging in citizen surveillance.

And I don’t think we want to live in a society that turns social media into a form of crowdsourced surveillance. When social media users embrace Twitter, Facebook, YouTube and blogs as channels for curating, identifying and pursuing criminals, that is exactly what they are moving toward.

I encourage you to read the piece, though I’m not sure I agree with much of it, on two levels.

First, I want to steer away from good versus bad and right versus wrong. Social media isn’t going to create only good outcomes or only bad outcomes; it is going to create both (something I know Alex acknowledges). This technology will, like previous technologies, reset what normal means. In the new world we are becoming more powerful “sensors” in our society. We can enable others to know what is going on around us, good and bad. To believe that we won’t share, and that others won’t use our shared information to inform their decisions, is simply not logical. As dBarefoot points out in the comments, there is a lot of social good that can come from surveillance. In the end, you can’t post videos of human rights injustices without also being able to post videos of people at abortion clinics; you can’t post videos of officials taking bribes without also being able to post videos of people smoking drugs at a party. The alternative – a society where people are not permitted to share – strikes me as even more dangerous than a society where we can share but where one element of that sharing ends up being used as surveillance. My suspicion is that we may end up regulating some uses – there will be some things people cannot share online (footage of visits to abortion clinics may end up being one of those) – but I’m not confident of even this.

But I suspect that in a few decades my children will be stunned that I grew up in a world with no mutual surveillance – that we tolerated the risks of a world where mutual surveillance didn’t exist. They may wonder, at a basic level, how we felt safe at night or in certain circumstances (I really recommend David Brin’s science fiction, especially Earth, in which he explores this idea). I can also imagine they will find the idea of total anonymity, of having an untraceable past, at once eerie, frightening and intriguing. Having grown up with social media, their world will be different: some of the things we feel are bad, they will like, and vice versa.

Another issue missing from Alex’s piece is the role of the state. It is one thing for people to post pictures of each other; it is another matter entirely how, and if, the state does the same. As many tweeters noted, this isn’t 1994 (the last time there were riots in Vancouver). What social media is going to do is make the enforcement of law, and the role of the state, a much trickier subject. Ultimately, the state cannot ignore photos of rioters engaged in illegal acts. So the question isn’t so much what we are going to share; it is what we should allow the state to do, and not do, with the information we create. The state’s monopoly on violence gives it a unique role, one that will need to be managed carefully. This monopoly, combined with a world of perfect (or at least, a lot more) information, will, I imagine, necessitate a state and justice system that looks very, very different from the one we have right now if we are to protect civil liberties as we presently understand them. (I suspect I’ll be writing more about this.)

But I think the place where I disagree the most with Alex is in the last paragraph:

What social media is for — or what it can be for, if we use it to its fullest potential — is to create community. And there is nothing that will erode community faster, both online and off, than creating a society of mutual surveillance.

Here, Alex confuses the society she’d like to live in with what social media enables. I see nothing to suggest that mutual surveillance will erode community; indeed, I think it has already demonstrated the opposite. Mutual surveillance fosters lots of communities – from communities that track human rights abuses, to communities that track abortion providers, to communities that track disabled-parking violators. Surveillance builds communities. It may be that, in many cases, those communities pursue the marginalization of another community or the termination of a specific behaviour, but that does not make them any less a part of our society’s fabric. It may not create communities everyone likes, but it can create community. What matters here is not whether we can monitor one another, but what ends up happening with the information we generate – which is why I think we’ll want to think harder and harder about what we allow the state to do, and what we permit others to do, with it.

How GitHub Saved Open Source

For a long time I’ve been thinking about just how much GitHub has revolutionized open source. Yes, it has made managing the code base significantly easier, but its real impact has likely been on the social aspects of managing open source. GitHub has rebooted the innovation cycle in open source while simultaneously raising the bar for good community management.

The irony may be that it has done this by making it easy to do the one thing many people thought would kill open source: forking. I remember talking to friends who – before GitHub launched – felt that forking, while a necessary check on any project, was also its biggest threat and so needed to be managed carefully.

Today, nothing could feel further from the truth. By collapsing the transaction costs around forking, GitHub hasn’t killed open source. It has saved it.

The false fear of forking

The concern with forking – as it was always explained to me – was that it would splinter a community, potentially to the degree that none of the emerging groups would have the necessary critical mass to carry the project forward. Yes, it was necessary that forking be allowed – but only as a last resort to manage the worst excesses of bad community leadership. And forking was messy stuff, even emotionally painful and exhausting: while sometimes it was mutually agreed upon and cordial, many feared it would usually be preceded by ugly infighting and nastiness, culminating in a (sometimes) angry rejection and an (almost) political act of forming a new community.

Forking = Innovation Accelerated

Maybe forking was an almost political act – when it was hard to do. But once anyone could do it, anytime, anywhere, the dynamics changed. I believe open source projects work best when contributors are able to engage in low-transaction-cost cooperation and high-transaction-cost collaboration is minimized. The genius of open source is that it does not require a group to debate every issue and work on problems collectively; quite the opposite. It works best when architected so that individuals or functioning sub-groups can grab a part of the whole, take it away, play with it, bring the solution back, and fit it back into the larger project.

In this world innovation isn’t driven by getting lots of people to work together simultaneously, compromising, negotiating solutions, and waiting on others to complete their part. Such a process can be slow and, worse, can allow promising ideas to be killed by early criticism or be watered down before they reach their potential. What people often need is a private place where their idea can be nursed – an innovation cycle driven by enabling people to work on the same problem in isolation, and then bring working solutions back to the group to be debated. (Yes, this is a simplification, but I think the general idea stands.)

And this is why GitHub was such a godsend. Yes, it made managing the code base easier, but what it really did was empower contributors. It took something everyone thought would kill open source projects – forking – and made it a powerful tool of experimentation and play. Now, rather than just play with a small part of the code base, you could play with the entire thing. My strong suspicion is that this has rebooted the innovation cycle for many open source projects. Having lots of people innovating in the safety of their private repositories has produced more new ideas than ever before.
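
To make the shift concrete, here is a minimal sketch of the fork-and-pull cycle GitHub made cheap, driven from Python via subprocess. The upstream project, fork URL and branch name are hypothetical stand-ins, and the fork itself is created with a single click in the GitHub web interface.

```python
# A sketch of the low-cost experiment loop described above, assuming you have
# already clicked "Fork" on a (hypothetical) upstream project on GitHub.
import subprocess

def git(*args, cwd=None):
    """Run a git command and raise if it fails."""
    subprocess.run(["git", *args], cwd=cwd, check=True)

# 1. Clone your fork -- your own full copy of the entire code base.
git("clone", "https://github.com/you/project.git")  # hypothetical fork URL

# 2. Experiment on a branch in private; nobody has to approve or even notice.
git("checkout", "-b", "wild-idea", cwd="project")
with open("project/EXPERIMENT.md", "w") as f:
    f.write("Notes on a different approach.\n")
git("add", "EXPERIMENT.md", cwd="project")
git("commit", "-m", "Sketch a different approach", cwd="project")

# 3. Only if the experiment pans out do you push it back to your fork and open
#    a pull request, bringing a *working* solution to the community for debate.
git("push", "-u", "origin", "wild-idea", cwd="project")
```

The specific commands matter less than the fact that every step happens in your own copy, at essentially zero social cost to the upstream project.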

Forking = Better Community Management

I also suspect that eliminating the transaction costs around forking has improved open source in another important way. It has made open source project leads more accountable to the communities they manage.

Why?

Before GitHub, the transaction costs around forking were higher. Setting up a new repository, firing up a bug-tracking system and creating all the other necessary infrastructure wasn’t impossible, but neither was it simple. As a result, I suspect it usually only made sense if you could motivate a group of contributors to fork with you – there needed to be a deep grievance to justify all that effort. In short, the barriers to forking were high. That meant project leaders had a lot of leeway in how they engaged with their community before the “threat” of forking became real. The high transaction cost of forking created a cushion for lazy, bad, or incompetent open source leadership and community management.

But collapse the transaction costs of forking and the cost of a parallel project emerging also drops significantly. This is not to claim that the cost of forking is zero – but I suspect that open source community leaders now have to be much more sensitive to the needs, demands, wishes and contributions of their communities. More importantly, I suspect this has been good for open source in general.

If the Prime Minister Wants Accountable Healthcare, let's make it Transparent too

Over at the Beyond the Commons blog Aaron Wherry has a series of quotes from recent speeches on healthcare by Canadian Prime Minister Stephen Harper in which the one constant keyword is… accountability.

Who can blame him?

Take everyone promising to limit growth to a still unsustainable 6% (gulp) and throw in some dubiously costly projects ($1 billion spent on e-health records in Ontario when an open source solution – VistA – could likely have been implemented at a fraction of the cost) and the obvious question is… what is the country going to do about healthcare costs?

I don’t want to claim that open data can solve the problem. It can’t. There isn’t going to be a single solution. But I think it could help spread best practices, improve customer choice and service, and possibly yield other benefits.

Anyone who’s been around me for the last month knows about my restaurant inspection open data example (which could also yield healthcare savings), but I think we can go bigger. A federal government that is serious about accountability in healthcare needs to build a system where that accountability isn’t just between the provinces and the feds; it needs to be between the healthcare system and its users: us.

The feds usually attach provisions to their healthcare dollars; the one I’d like to see added is an open data provision – one where provinces and hospitals are required to track and make open a whole set of performance data, in machine-readable formats, in a common national standard, that anyone in Canada (or around the world) can download and access.

Some of the data I’d love to see mandated to be tracked and shared includes:

  • Emergency Room wait times – in real time.
  • Wait times, by hospital, for a variety of operations
  • All budget data, down to the hospital or even unit level – let’s allow the public to do a cost-per-patient analysis for every unit in the country
  • Survival rates for various surgeries (obviously controversial, since some hospitals with the lowest rates are actually the best because they get the hardest cases – but let’s trust the public with the data)
  • Inspection data – especially if we launched something akin to the Institute for Healthcare Improvement’s Protecting 5 Million Lives Campaign
  • I’m confident there is much, much more…

I can imagine a slew of services and analyses emerging from this data – if nothing else, a citizenry that is better informed about the true state of its healthcare system. Even something as simple as being able to check ER wait times at all the hospitals near you, so you can drive to the one where the wait is shortest. That would be nice.
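
A sketch of how little code such a service would need once the data existed: the feed URL and field names below are pure assumptions (no such national feed exists today), but the whole “app” fits in a dozen lines.

```python
# Fetch a hypothetical machine-readable feed of real-time ER wait times and
# print the shortest waits in your city. The endpoint and JSON fields are
# invented for illustration.
import json
from urllib.request import urlopen

FEED_URL = "https://data.example.ca/er-wait-times.json"  # hypothetical national feed

with urlopen(FEED_URL) as response:
    hospitals = json.load(response)  # assumed: [{"name", "city", "wait_minutes"}, ...]

nearby = [h for h in hospitals if h["city"] == "Vancouver"]
for h in sorted(nearby, key=lambda h: h["wait_minutes"]):
    print(f"{h['name']}: {h['wait_minutes']} minutes")
```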

Of course, if the Prime Minister wants to go beyond accountability and think about how data could directly reduce costs, he might take a look at one initiative launched south of the border.

If he did, he might be persuaded to demand that the provinces share a set of anonymized patient records to see if academics or others in the country might be able to build better models for how we should manage healthcare costs. In January of this year I witnessed the launch of the $3 million Heritage Health Prize at the O’Reilly Strata Conference in San Diego. It is a stunningly ambitious but realistic effort. As the press release notes:

Contestants in the challenge will be provided with a data set consisting of the de-identified medical records of 100,000 patients from the 2008 calendar year. Contestants will then be required to create a predictive algorithm to predict who was hospitalized during the 2009 calendar year. HPN will award the $3 million prize (more than twice what is paid for the Nobel Prize in medicine) to the first participant or team that passes the required level of predictive accuracy. In addition, there will be milestone prizes along the way, which will be awarded to teams leading the competition at various points in time.

In essence, Heritage Health is doing for patient management what Netflix (through the $1M Netflix Prize) did for movie selection: crowdsourcing the problem to get better results.
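
To give a flavour of what contestants are actually asked to build, here is a toy sketch of the task: train a model on one year’s de-identified claims features to predict hospitalization the next year. The file name and columns are hypothetical, and real entries use far more sophisticated methods than this.

```python
# A toy predictive model in the spirit of the prize: 2008 features in,
# probability of 2009 hospitalization out. Data file and columns are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

claims = pd.read_csv("claims_2008.csv")  # hypothetical de-identified records
X = claims[["age", "num_claims", "num_chronic_conditions", "days_in_hospital_2008"]]
y = claims["hospitalized_2009"]          # 1 if the patient was admitted the next year

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```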

The problem is, any algorithm developed by the winners of the Heritage Health Prize will belong to… Heritage Health. This means the innovation cannot benefit Canadians (or anyone else). So why not launch a prize of our own? We have more data, I suspect our data is better (not limited to a single state), and we could place the winning algorithm in the public domain so that it can benefit all of humanity. If Canadian data helped find efficiencies that lowered healthcare costs and improved healthcare outcomes for everyone in the world… it could be the biggest contribution to global healthcare by Canada since Frederick Banting discovered insulin and rescued diabetics everywhere.

Of course, open data, and sharing (even anonymized) patient data would be a radical experiment for government, something new, bold and different. But 6% growth is itself unsustainable and Canadians need to see that their government can do something bold, new and innovative. These initiatives would fit the bill.

How the War on Drugs Destabilized the Global Economy

This is truly, truly fantastic. If you haven’t already, read this stunning story from the Guardian: How a big US bank laundered billions from Mexico’s murderous drug gangs. It is, in essence, a chronicle of the dark and sordid side of banking, and of how one US bank – Wachovia – essentially allowed Mexican drug cartels to launder a whopping $378B.

But this, interestingly, is just the tip of the iceberg. It turns out that Mexican money may have been the only thing holding the US financial system together. Check out the following, especially the last paragraph – it is that stunning:

More shocking, and more important, the bank was sanctioned for failing to apply the proper anti-laundering strictures to the transfer of $378.4bn – a sum equivalent to one-third of Mexico’s gross national product – into dollar accounts from so-called casas de cambio (CDCs) in Mexico, currency exchange houses with which the bank did business.

“Wachovia’s blatant disregard for our banking laws gave international cocaine cartels a virtual carte blanche to finance their operations,” said Jeffrey Sloman, the federal prosecutor. Yet the total fine was less than 2% of the bank’s $12.3bn profit for 2009. On 24 March 2010, Wells Fargo stock traded at $30.86 – up 1% on the week of the court settlement.

The conclusion to the case was only the tip of an iceberg, demonstrating the role of the “legal” banking sector in swilling hundreds of billions of dollars – the blood money from the murderous drug trade in Mexico and other places in the world – around their global operations, now bailed out by the taxpayer.

At the height of the 2008 banking crisis, Antonio Maria Costa, then head of the United Nations office on drugs and crime, said he had evidence to suggest the proceeds from drugs and crime were “the only liquid investment capital” available to banks on the brink of collapse. “Inter-bank loans were funded by money that originated from the drugs trade,” he said. “There were signs that some banks were rescued that way.”

But the more interesting part, which picks up on the above quote from Antonio Maria Costa, lies deeper in the story:

“In April and May 2007, Wachovia – as a result of increasing interest and pressure from the US attorney’s office – began to close its relationship with some of the casas de cambio.”

and, a paragraph later…

“In July 2007, all of Wachovia’s remaining 10 Mexican casa de cambio clients operating through London suddenly stopped doing so.”

In other words, from April through July, with increasing intensity, Wachovia got out of the drug money laundering business. Of course, this also just happens to be the exact same time that the liquidity crisis started hitting US banks, prompting “The Bank Run We Knew So Little About.”

This is not to say that the financial crisis was caused by drug money – it wasn’t. All those crazy mortgages and masses of consumer debt created a house of cards that was already teetering. But it could be that the sudden loss of access to vast billions in Latin American drug money tipped the system over the edge.

I say this because here in Canada we have a government that not only does not believe in harm reduction as an effective way to deal with the drug problem, but intends to pursue a prison-focused, US-style approach to crime that even the most ardent US conservatives are calling a failure. And why does this matter? I mention the story above because it is worth noting the size, scope and complexity of the problem we face. This is a structural, systemic problem, not something that is going to be solved by throwing an additional 1,000 or even 100,000 people in jail. $378B. Through one bank. One third of Mexico’s GDP. And that’s all just pure profit. That’s probably 80 times more than we spend on fighting the war on drugs every year. Through one bank.

And, as US authorities appear to have demonstrated, it may be that the only thing more expensive than losing the war on drugs is winning a major battle – since apparently that can throw the entire global financial system into disarray. So if we think that upping the amount we spend on this war by $1B or even $10B is going to make a lick of difference, we’ve got another thing coming. But I suppose, in the meantime, it will secure a few votes.

Lost Open Data Opportunities

Sometimes even my home town of Vancouver gets it wrong.

Reading Chad Skelton’s blog (which I read regularly and recommend to my fellow Vancouverites), I was reminded of the great work he did creating an interactive visualization of the city’s parking tickets as part of a series on parking in Vancouver. Indeed, it is worth noting that the entire series was powered by data supplied by the city. Sadly, it just wasn’t (and still isn’t) open data. Quite the opposite: it was data that had to be wrestled out of the city, with enormous difficulty, via an FOI (ATIP) request.

In the same blog post Chad recounts how he struggled to get the parking data from the city:

Indeed, the last major FOI request I made to the city was for its parking-ticket data. I had to fight the city tooth and nail to get them to cough up the information in the format I wanted it in (for months their FOI coordinator claimed, falsely, that she couldn’t provide the records in spreadsheet format). Then, when the parking ticket series finally ran, I got an email from the head of parking enforcement. He was wondering how he could get reprints of the series — he thought it was so good he wanted to hand it out to new parking enforcement officers during their training.

What is really frustrating about this paragraph is the last sentence. Obviously the people who find the most value in this analysis and tool are the city staff who manage parking infractions. So here is someone who, for free(!), provides an analysis and some stories that they now use to train new officers – and he had to fight to get the data. The city would have been poorer without Chad’s story and analysis. And yet it fought him. Worse, an important player in the civic space (and an open data ally) is left frustrated by the city.

There are, of course, other uses I could imagine for this data. I could imagine the data embedded in an application (ideally one like Washington DC’s Park IT DC – which lets you find parking meters on a map, see whether they are available, and check local car crime rates for the area) so that you can assess the risk of getting a ticket if you choose not to pay. This feels like the worst-case scenario for the city, and frankly, it doesn’t feel that bad and would probably not affect people’s behaviour that much. But there may be other important uses of this data – it may correlate in interesting and unpredictable ways with other events – connections that, if made and shared, might actually allow the city to deploy its enforcement officers more efficiently and effectively.
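
As a rough sketch of how little code that “worst case” application would actually need once the data were open (the CSV file and its columns are hypothetical stand-ins for the data Chad had to pry loose via FOI):

```python
# Count tickets per block from a hypothetical export of the city's
# parking-ticket data, as a crude proxy for the risk of being ticketed there.
import csv
from collections import Counter

tickets_by_block = Counter()
with open("parking_tickets.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns include "block"
        tickets_by_block[row["block"]] += 1

# The ten most heavily ticketed blocks in the city.
for block, count in tickets_by_block.most_common(10):
    print(f"{block}: {count} tickets")
```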

Of course, we won’t know what those could be, since the data isn’t shared – but this is exactly the kind of thing Vancouver should be doing, given the existence of its open data portal. And all governments should take note. There is a cost to not sharing data: lost opportunities, lost insights and value, lost allies and networks of people interested in contributing to your success. It’s all our loss.

Birthday, technology adoption and my happiness

Yesterday I was reminded that I have great friends – friends who are far better to me than I deserve. You see, yesterday was my birthday and I was overwhelmed by the number of well-wishers who sent me a little note. I’m so, so lucky – something I should never forget.

It was also an illustrative guide to technology adoption and to how technology is and isn’t impacting my life.

I was struck by the way people got in touch with me. I’m a heavy Twitter user and so I don’t spend a lot of time on Facebook, but yesterday was a huge reminder of how much in the minority I am. While I received maybe 10 mentions or DMs wishing me a happy birthday via Twitter (all deeply appreciated), I received somewhere around 100 wall postings and/or Facebook messages. Good old email came in at around 15-20 messages. Facebook is simply just big. Huge, even. I know that on an intellectual level, but it is great to have these visceral reminders every once in a while. They hit home much harder.

Of course, the results are not a perfect metric of adoption. One thing Facebook has going for it that email and Twitter don’t is that it reminds you of your friends’ birthdays on its landing page. This is just plain smart on Facebook’s part. But it is also interesting that knowing this fact had no impact on how happy or grateful I was to get messages from people. The fact that technology reminded people – that they weren’t simply remembering on their own – didn’t matter a lick in how happy I was to hear from them. Indeed, it was wonderful to hear from people – such as old high school friends – I haven’t seen or heard from in ages.

All of this is to say, I continue to read about how social media sites, and social networks specifically, are creating more superficial connections and reducing the quality or intensity of who counts as a “friend.” My birthday was a great reminder of how ridiculous this talk is. My close friends still reached out, and I got to spend a great day on the weekend with a number of them. Facebook has not displaced them. What it has done, however, is keep me connected with people who can’t always be close to me, whether because of the constraints of geography or simply the passage of time. Ultimately, these technologies don’t create binary choices between having close, intimate friends or lots of weak ties; they are complementary. My close friends who move away can stay connected to me, and those with whom I form “loose” ties can migrate into strong ties.

In both cases – for those I get to see frequently and those I don’t – I’m grateful to have them in my life, and the fact that Facebook, Twitter and email make this easier has, frankly, made my life richer.

The Review I want to Read of "What Technology Wants"

A few weeks ago I finished “What Technology Wants” by Kevin Kelly. For those unfamiliar with Kelly (as I was), he was one of the co-founders of Wired magazine and sits on the board of the Long Now Foundation.

What Technology Wants is a fascinating read – both attracting and repulsing me on several occasions. Often I find book reading to be a fairly binary experience: either I already (explicitly or intuitively) broadly agree with the thesis and the book is an exercise in validation and greater evidence, or I disagree, and the book pushes me to re-evaluate assumptions I hold. Rarer is a book that does both at the same time.

For example, Kelly’s breakdown of the universe as a series of systems for moving around information completely resonated with me. From DNA, to language, to the written word, our world keeps getting filled with systems that transmit, share and remix more information, faster. The way Kelly paints this universe is fascinating and thought-provoking. In contrast, his determinist view of technology – that we are pre-ordained to make the next discovery and that, from a technological point of view, our history is already written and is just waiting to unwind – ran counter to so many of my values (I am a strong believer in free will). It was as if the tech tree from a game like Civilization actually got it all right: technology had to be discovered in a preset order, and if we rewound the clock of history, this aspect of it would (more or less) play out the same.

The tech tree in Civilization always bothered me on a basic level – it challenged the notion that someone smart enough, with enough vision and imagination, could have, in a parallel universe, created a completely different technology tree in our history. I mean, Leonardo da Vinci drafted plans for helicopters, guns and tanks (among other things) in the 15th century. And yet Kelly’s case is so compelling, and made with the simplest of arguments: no inventor ever sits around assuming no one else will make the same discovery – quite the opposite, inventors know that a parallel discovery is inevitable, just a matter of time, and usually not that much time.

Indeed, Kelly convinces me that the era of the unique idea, or the singular discovery, may be over – that, in fact, the whole thing was just an illusion created by the limits of time, space and capacity. Previously, it took time for ideas to spread, so they could appear to come from a single source; but in a world of instant communication, we increasingly see ideas spring up simultaneously everywhere – an interesting point given the arguments over patents and copyright.

But what I’d really like to read is a feminist critique of What Technology Wants (if someone knows of one, please post it or send it to me). It’s not that I think Kelly is sexist (there is nothing to suggest this is the case); it is just that the book reads like much of what comes out of the technology space – which, sadly, tends to be dominated by men. Indeed, looking at the end of the book, Kelly thanks 49 thinkers and authors who took time to help him enhance his thesis, and the list is impressive, including names such as Richard Dawkins, Chris Anderson, David Brin, and Paul Hawken. But I couldn’t help but notice that only 2 of the 49 were obviously women (there may be, at most, 4 women on the list). What Technology Wants is a great read, and I think, for me, the experience will be richer once I see how some other perspectives wrap their heads around its ideas.

Individualism in the networked world

Evolving thought:

One of the large challenges of the 21st century is going to be reconciling our increasingly networked world with traditional notions of individualism.

The more I look at a networked world – not in some geopolitical sense but as a day-to-day experience for everyone – the more it appears that many of the core elements of liberal individualism are going to be challenged. Authorship is a great example of this dynamic playing out – yes, Wikipedia makes it impossible to identify who an author is, but even tweets, blogs and all forms of digital media confuse who the original author of a work is. Moreover, we may no longer live in a world of unique individual thought. As Kevin Kelly so remarkably documents in What Technology Wants by looking at patent submissions and scientific papers, it is increasingly apparent that technologies are being simultaneously discovered everywhere; the notion of attributing something to an individual may be at best difficult, at worst impossibly random.

And of course networked systems disproportionately reward hubs. Hubs in a network attract more traffic (ideas/money/anything) and may therefore appear to many others in the network as the source of these ideas as they are shared out. I, for example, get to hear more about open data, or technology and government, than many other people; as a result my thinking gets pushed further and faster, allowing me in turn to share more ideas that are of interest and attract still more connections. I benefit not simply from inherent individual abilities, but from the structure of, and my location in, a network.
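
A toy simulation makes the structural point: if each new connection attaches preferentially to nodes that are already well connected, a handful of hubs end up carrying most of the traffic. This is an illustration of the general dynamic only, not a model of any particular social network.

```python
# Preferential attachment in a few lines: new nodes connect to existing nodes
# with probability proportional to their current number of connections.
import random

random.seed(1)
degrees = [1, 1]    # start with two connected nodes
edge_ends = [0, 1]  # each node id appears once per connection it has

for new_node in range(2, 2000):
    target = random.choice(edge_ends)  # picking an edge end = preferential attachment
    degrees.append(1)                  # the new node arrives with one connection
    degrees[target] += 1
    edge_ends += [new_node, target]

top = sorted(range(len(degrees)), key=lambda n: degrees[n], reverse=True)[:5]
share = sum(degrees[n] for n in top) / sum(degrees)
print(f"Top 5 of {len(degrees)} nodes hold {share:.0%} of all connections")
```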

Of course, socialist collectivism is going to be challenged as well, in some different ways, but I think that may be less traumatic for our political systems than a direct challenge to individualism – something many centrist and right-leaning parties may struggle with.

This is all still half-formed – more a mental note to myself. More thinking/research on this needed. Open to ideas, articles, etc…

How to Unsuck Canada’s Internet – creating the right incentives

This week at the Mesh conference in Toronto (where I’ll be talking open data), the always thoughtful Jesse Brown, of TVO’s Search Engine, will be running a session titled How to Unsuck Canada’s Internet.

As part of the lead up to the session he asked me if I could write him a sentence or two about my thoughts on how to unsuck our internet. In his words:

The idea is to take a practical approach to fixing Canada's lousy Internet (policies/infrastructure/open data/culture – interpret the suck as you will).

So my first thought is that we should prevent anyone who owns any telecommunications infrastructure from owning content. Period. Delivery mechanisms should compete with delivery mechanisms and content should compete with content. But don’t let them mix, because it screws up all the incentives.

A second thought would be to allocate the freed-up broadcast spectrum to new internet providers (which is really what all the cell phone providers are about to become anyway). I actually suspect we may be only five years away from this problem becoming moot in the main urban areas. Once our internet access is freed from cables and the last mile, all bets are off. That won’t help rural areas, but it may end up transforming urban access and costs. Just as cities clustered around seaports and key nodes along trade networks, cities (and workers) will cluster around better telecommunications access.

But the longer thought comes from some reflections on the timely recent release of OpenMedia.ca/CIPPIC’s second submission to the CRTC’s proceedings on usage-based billing (UBB), which I think is actually fairly well aligned with the piece I wrote back in February titled Why the CRTC was right about User Based Billing (please read the piece and the comments below it before freaking out).

Here, I think our goal shouldn’t be punitive (that will only encourage the telcos to do “just enough” to comply). What we need to do is get the incentives right (which is, again, why they shouldn’t be allowed to own content, but I digress).

An important part of getting the incentives right is understanding what the actual constraints on internet access are. One of the main problems is that people often get confused about what is scarce and what is abundant when talking about the internet. I think what everyone realizes is that content is abundant. There are probably over a trillion web pages out there, billions of videos and god knows what else. There is no scarcity there.

This is why any description of access that uses an image like the one below will, in my mind, fail.

Charging per byte shouldn’t be permitted if the pipe has infinite capacity (or at least it wouldn’t make sense in a truly competitive market). What should happen is that companies would be able to charge the cost of the infrastructure plus a reasonable rate of return.

But while the pipe may have infinite capacity over time, at any given moment it does not. The issue isn’t how many bytes you consume; it’s the capacity to deliver those bytes in a given moment when you have lots of competing users. This is why it isn’t where the data is coming from or going to that matters, but rather how much of it is in the pipe at a given moment. What matters is not the cable, but its cross-section.

A cable that is empty or only at 40% capacity should deliver rip-roaring internet to anyone who wants it. My understanding is that the problem arises when the cable is at 100% capacity or more. Then users start crowding each other out and performance (for everyone) suffers.
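
A back-of-the-envelope illustration of the distinction (all figures invented): total bytes moved over a month tells you nothing, while instantaneous demand against the shared link’s capacity tells you everything.

```python
# Compare instantaneous demand against a shared link's capacity. Under 100%
# everyone gets full speed; over 100% means queuing and crowding out.
LINK_CAPACITY_MBPS = 1000  # hypothetical shared neighbourhood link

def utilization(active_users, avg_demand_mbps):
    return active_users * avg_demand_mbps / LINK_CAPACITY_MBPS

# 3 a.m.: a few heavy downloaders moving lots of bytes -- the link is mostly idle.
print(f"Off-peak: {utilization(40, 5):.0%} utilized")
# 8 p.m.: hundreds of lighter users at once -- the same pipe is now congested.
print(f"Peak:     {utilization(300, 4):.0%} utilized")
```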

Indeed, this is where the OpenMedia/CIPPIC document left me confused. On the one hand, they correctly argue that the internet’s content is not a limited resource (such as natural gas). But they also seem to be arguing that network capacity is not a finite resource (sections 21 and 22), while at the same time accepting that there may be constraints on capacity during peak hours (sections 27 and 30, where they seem to accept that off-peak users should not be subsidizing peak-time users, and again in the conclusion, where they state: “As noted in far greater detail above, ISP provisioning costs are driven primarily by peak period usage.” If you have peak period usage then, by definition, you have scarcity). The last two points seem to be in conflict. Network capacity cannot be both infinite and constrained during peak hours. Can it?

Now, it may be that there is more network capacity in Canada than there is demand – even at peak times – at which point any modicum of sympathy I might have felt for the telcos disappears immediately. However, if there is a peak consumption period that does stress the network’s capacity, I’d be relatively comfortable adopting a pricing mechanism that allocates the “scarce” slice of the broadband pie. Maybe there are users – especially many BitTorrenters – whose activities are not time-sensitive. Having a system in place that encourages them to download during off-peak hours would create a network that was better utilized.
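
Here is a minimal sketch of what such a mechanism could look like; the rates and the peak window are invented purely to show the incentive at work.

```python
# Time-of-use billing: only traffic that lands in the congested window is
# charged, so bulk transfers have a real reason to move off-peak.
PEAK_HOURS = range(17, 23)   # hypothetical 5 p.m. to 11 p.m. window
PEAK_RATE_PER_GB = 0.50      # hypothetical $/GB during peak
OFF_PEAK_RATE_PER_GB = 0.00  # off-peak usage rides on spare capacity

def monthly_bill(usage_by_hour):
    """usage_by_hour maps hour of day -> GB transferred in that hour over the month."""
    return sum(
        gb * (PEAK_RATE_PER_GB if hour in PEAK_HOURS else OFF_PEAK_RATE_PER_GB)
        for hour, gb in usage_by_hour.items()
    )

print(monthly_bill({20: 200}))  # 200 GB moved at 8 p.m. -> 100.0
print(monthly_bill({3: 200}))   # the same 200 GB at 3 a.m. -> 0.0
```

The same 200 GB costs $100 if it all moves at 8 p.m. and nothing if it moves at 3 a.m. – exactly the nudge that would push non-urgent traffic off-peak.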

So the OpenMedia piece seems to be open to the idea of peak-usage pricing (which was what I was getting at in my UBB piece), so I think we are actually aligned (which is good, since I like the people at OpenMedia.ca).

The question is: does this create the right incentives for the telcos to invest more in capacity? My hope would be yes – that competition would cause users to migrate to networks that provided high speeds and competitive low and/or peak-usage-time fees. But I’m open to the possibility that it wouldn’t. It’s a complicated problem and I don’t pretend to think that I’ve solved it in one blog post. Just trying to work it through in my head.