This week at the Mesh conference in Toronto (where I’ll be talking about Open Data) the always thoughtful Jesse Brown, of TVO’s Search Engine, will be running a session titled How to Unsuck Canada’s Internet.
As part of the lead up to the session he asked me if I could write him a sentence or two about my thoughts on how to unsuck our internet. In his words:
The idea is to take a practical approach to fixing Canada's lousy Internet (policies/infrastructure/open data/culture- interpret the suck as you will).
So my first thought is that we should prevent anyone who owns any telecommunications infrastructure from owning content. Period. Delivery mechanisms should compete with delivery mechanisms and content should compete with content. But don’t let them mix, because it screws up all the incentives.
A second thought would be to allocate the freed-up broadcast spectrum to new internet providers (which is really what all the cell phone providers are about to become anyway). I’m actually deeply confident that we may be 5 years away from this problem becoming moot in the main urban areas. Once our internet access is freed from cables and the last mile, all bets are off. That won’t help rural areas, but it may end up transforming urban access and costs. Just like cities clustered around seaports and key nodes along trade networks, cities (and workers) will cluster around better telecommunication access.
But the longer thought comes from some reflections on the timely recent release of OpenMedia.ca/CIPPIC’s second submission to the CRTC’s proceedings on usage-based billing (UBB), which I think is actually fairly aligned with the piece I wrote back in February titled Why the CRTC was right about User Based Billing (please read the piece and the comments below before freaking out).
Here, I think our goal shouldn’t be punitive (that will only encourage the telcos to do “just enough” to comply). What we need to do is get the incentives right (which is, again, why they shouldn’t be allowed to own content, but I digress).
An important part of getting the incentives right is understanding what the actual constraints on internet access are. One of the main problems is that people often get confused about what is scarce and what is abundant when talking about the internet. I think what everyone realizes is that content is abundant. There are probably over a trillion websites out there, billions of videos and god knows what else. There is no scarcity there.
This is why any description of access that uses an image like the one below will, in my mind, fail.
Charging per byte shouldn’t be permitted if the pipe has infinite capacity (or at least it wouldn’t make sense in a truly competitive market). What should happen is that companies would be able to charge the cost of the infrastructure plus a reasonable rate of return.
But while the pipe may have infinite capacity over time, at any given moment it does not. The issue isn’t how many bytes you consume; it’s the capacity to deliver those bytes in a given moment when you have lots of competing users. This is why it isn’t where the data is coming from or going to that matters, but rather how much of it is in the pipe at a given moment. What matters is not the cable, but its cross-section.
A cable that is empty or only at 40% capacity should deliver rip-roaring internet to anyone who wants it. My understanding is that the problem is when the cable is at 100% or more capacity. Then users start crowding each other out and performance (for everyone) suffers.
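The crowding-out dynamic can be sketched as a toy model (the link size and per-user demand below are purely illustrative numbers, not real network figures):

```python
def per_user_speed(link_mbps: float, demand_mbps_each: float, users: int) -> float:
    """Toy congestion model: below capacity everyone gets full speed;
    above capacity everyone is squeezed down to an equal share."""
    total_demand = users * demand_mbps_each
    if total_demand <= link_mbps:
        return demand_mbps_each      # uncongested: the pipe isn't the constraint
    return link_mbps / users         # congested: everyone's performance suffers

# 100 users streaming 5 Mbps each on a 1,000 Mbps link: no congestion.
print(per_user_speed(1000, 5, 100))   # 5 Mbps each
# 400 users wanting the same: demand (2,000 Mbps) exceeds the pipe.
print(per_user_speed(1000, 5, 400))   # 2.5 Mbps each
```

The point the model makes is the one above: the same cable is either abundant or scarce depending entirely on how many users are pushing bytes through it at that moment.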
Indeed, this is where the OpenMedia/CIPPIC document left me confused. On the one hand they correctly argue that the internet’s content is not a limited resource (such as natural gas). But they seem to be arguing that network capacity is not a finite resource (sections 21 and 22) while at the same time accepting that there may be constraints on capacity during peak hours (sections 27 and 30, where they seem to accept that off-peak users should not be subsidizing peak-time users, and again in the conclusion, where they state “As noted in far greater detail above, ISP provisioning costs are driven primarily by peak period usage.” If you have peak period usage then, by definition, you have scarcity). These last two points seem to be in conflict. Network capacity cannot be both infinite and constrained during peak hours. Can it?
Now, it may be that there is more network capacity in Canada than there is demand – even at peak times – at which point any modicum of sympathy I might have felt for the telcos disappears immediately. However, if there is a peak consumption period that does stress the network’s capacity, I’d be relatively comfortable adopting a pricing mechanism that allocates the scarce peak-time bandwidth. Maybe there are users – especially many BitTorrenters – whose activities are not time sensitive. Having a system in place that encourages them to BitTorrent during off-peak hours would create a network that was better utilized.
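The time-shifting incentive can be sketched as a toy tariff (the peak window and both rates here are hypothetical illustrations, not actual CRTC or ISP figures):

```python
PEAK_HOURS = range(17, 23)   # assume 5pm-11pm is the congested window
PEAK_RATE = 0.50             # hypothetical $/GB when capacity is scarce
OFF_PEAK_RATE = 0.00         # off-peak traffic rides on otherwise-idle capacity

def transfer_cost(gigabytes: float, hour: int) -> float:
    """Price a transfer by *when* it happens, not just how many bytes it is."""
    rate = PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE
    return gigabytes * rate

# A 10 GB torrent scheduled for 3am costs nothing; the same 10 GB at 8pm
# competes for scarce peak capacity and pays for it.
print(transfer_cost(10, 3))    # 0.0
print(transfer_cost(10, 20))   # 5.0
```

Under a scheme like this the non-time-sensitive user has a direct reason to shift load off-peak, which is the better-utilized network the paragraph above is after.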
So the OpenMedia piece seems to be open to the idea of peak usage pricing (which was what I was getting at in my UBB piece) so I think we are actually aligned (which is good since I like the people at OpenMedia.ca).
The question is, does this create the right incentives for the telcos to invest more in capacity? My hope would be yes: that competition would cause users to migrate to networks that provided high speeds and competitive low and/or peak usage time fees. But I’m open to the possibility that it wouldn’t. It’s a complicated problem and I don’t pretend to think that I’ve solved it in one blog post. Just trying to work it through in my head.
Maybe there should be a cap, but on the telcos — perhaps they should be required (as a consumer protection measure) to provision a network that can provide X% of the marketed capacity to all of their customers simultaneously, N% of the time, and be prohibited (capped!) from signing up new customers if they run over, until they build out more capacity. This would help counter the “Up To Z Megabits per second!” advertising, which can be very misleading. I would suggest numbers in the range of 10-20% for X and 99+% for N.
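The X% check in that proposal is simple to state precisely. A minimal sketch, with an illustrative 15% for X and made-up link and plan sizes:

```python
def can_sign_new_customer(provisioned_mbps: float,
                          customers: int,
                          marketed_mbps: float,
                          x_fraction: float = 0.15) -> bool:
    """True if adding one more customer still leaves x_fraction of the
    marketed speed available to every customer simultaneously."""
    required = (customers + 1) * marketed_mbps * x_fraction
    return provisioned_mbps >= required

# A 10 Gbps link marketed as "up to 50 Mbps", X = 15%:
print(can_sign_new_customer(10_000, 1332, 50))   # True  - room for one more
print(can_sign_new_customer(10_000, 1333, 50))   # False - capped until build-out
```

(The N%-of-the-time condition would need measurement over time rather than a one-shot check, but the sign-up cap itself reduces to this inequality.)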
In your argument, there are a lot of similarities between data delivery and water delivery. We’re moving towards metered water (which likewise is, for our purposes, “infinite” in supply, but limited at any given moment in the ability to deliver X amount per second) and metered roads, so why not metered data?
At my folks’ condo in Toronto, the building managers figured out some math about the building’s water usage, and now on all their water-using appliances they have little reminder stickers about when’s the “best (cheapest)” time to use them. Their building can also separate “core” water usage (for the whole building) from per-suite averages and tell which suites are not participating in the water-usage optimization program. They don’t yet, but are debating having variable fees based on who’s participating in their water-costs-saving program and who’s not. The interesting tidbit is that not only has the building saved about 30%/month on water costs since starting this, overall consumption is down too, by some 15%, according to my mum.
That feels like something that could be done by telcos around data too. There are people who simply do not care about “peak” usage and just want to stream all the time, and those willing to shift usage to off-peak times. Charge those two groups differently.
So far as I can judge, if in fact there is a shortage of bandwidth (something that has not been demonstrated to my satisfaction) then (a) it’s artificial, and (b) it could be addressed at a fraction of the cost consumers are being charged for extra use.
Where I perceive the greatest internet bandwidth bottlenecks to exist is at the point of service – on a hotel wireless network, for example, or conference centre internet. Once you’ve made it past the first mile, there is no network congestion to speak of. This is easily verified (and has been determined to my satisfaction) using traceroute statistics.
I actually don’t have a problem with UBB so long as the markup is reasonable. Various stories, which are hard to find, indicated that the ISPs are paying about 3 cents per GB of traffic. Charging $5 per GB overage is beyond punitive.
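The markup implied by those two numbers is worth making explicit (the 3-cent figure is the commenter’s estimate, not a verified cost):

```python
wholesale_cost_per_gb = 0.03   # ~3 cents/GB, the commenter's estimate
overage_charge_per_gb = 5.00   # the $5/GB overage fee cited above

markup = overage_charge_per_gb / wholesale_cost_per_gb
print(f"Markup: roughly {markup:.0f}x cost")   # roughly 167x cost
```

If the underlying cost estimate is even in the right ballpark, a two-orders-of-magnitude markup is hard to square with cost recovery plus a reasonable rate of return.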