
Apps for Climate Action Update – Lessons and some new sexy data

Okay, so I’ll be the first to say that the Apps4Climate Action data catalog has not always been the easiest to navigate, and some of the data sets have not been machine readable, or even data at all.

That, however, is starting to change.

Indeed, the good news is threefold.

First, the data catalog has been tweaked and has better search and an improved capacity to sort out non-machine readable data sets. A great example of a government starting to think like the web, iterating and learning as the program progresses.

Second, and more importantly, new and better data sets are starting to be added to the catalog. Most recently, the Community Energy and Emissions Inventories were released in an Excel format. This data shows carbon emissions for all sorts of activities and infrastructure at a very granular level. Want to compare the GHG emissions of a duplex in Vancouver versus a duplex in Prince George? Now you can.

Moreover, this is the first time any government has released this type of data at all, not to mention making it machine readable. So not only have the app possibilities (how green is your neighborhood, rate my city, calculate my GHG emissions) all become much more realizable, but any app using this data will be among the first in the world.
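To make the possibility concrete, here is a minimal sketch of the duplex-versus-duplex comparison the inventory enables. The column names and figures below are hypothetical placeholders, not the real schema of the released spreadsheet, and I've used an inline CSV excerpt rather than the actual Excel file:

```python
import csv
import io

# Hypothetical excerpt of a community emissions inventory exported to CSV.
# Real column names and values will differ -- this only illustrates the
# kind of comparison granular, machine-readable data makes possible.
SAMPLE = """community,building_type,tonnes_co2e
Vancouver,duplex,4.1
Vancouver,detached,6.8
Prince George,duplex,6.3
Prince George,detached,9.5
"""

def emissions_by_type(raw):
    """Index annual emissions by (community, building type)."""
    table = {}
    for row in csv.DictReader(io.StringIO(raw)):
        key = (row["community"], row["building_type"])
        table[key] = float(row["tonnes_co2e"])
    return table

table = emissions_by_type(SAMPLE)
van = table[("Vancouver", "duplex")]
pg = table[("Prince George", "duplex")]
print(f"Duplex GHG: Vancouver {van} t vs Prince George {pg} t CO2e")
```

A "how green is your neighborhood" app is essentially this lookup, repeated across every community and building type in the catalog.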

Finally, probably one of the most positive outcomes of the app competition to date is largely hidden from the public. The fact that members of the public have been asking for better data or even for data sets at all(!) has made a number of public servants realize the value of making this information public.

Prior to the competition, making data public was a compliance problem: something you did while figuring no one would ever look at or read it. Now, for a growing number of public servants, it is an innovation opportunity. Someone may take what the government produces and do something interesting with it. Even if they don’t, someone is nonetheless taking an interest in your work – something that has rewards in and of itself. This, of course, doesn’t mean that things will improve overnight, but it does help advance the goal of getting government to share more machine-readable data.

Better still, the government is reaching out to stakeholders in the development community and soliciting advice on how to improve the site and the program, all in a cost-effective manner.

So even within the Apps4Climate Action project we see some of the changes the promise of Government 2.0 holds for us:

  • Feedback from community participants driving the project to adapt
  • Iterations of development conducted “on the fly” during a project or program
  • Successes and failures quickly resulting in improvements (release of more data, a better website)
  • Shifting culture around disclosure and cross sector innovation
  • All on a timeline that can be measured in weeks

Once this project is over I’ll write more on it, but wanted to update people, especially given some of the new data sets that have become available.

And if you are a developer or someone who would like to do a cool visualization with the data, check out the Apps4Climate Action website or drop me an email, happy to talk you through your idea.

How to Engage with Social Media: An Example

The other week I wrote a blog post titled Canadian Governments: How to Waste millions online ($30M and Counting) in which I argued that OpenID should be the cornerstone of the government’s online identification system. The post generated a lot of online discussion, much of which was of very high quality and deeply thoughtful. On occasion, comments can enhance and even exceed a post’s initial value, and I’d argue this is one of these cases – something that is always a joy when it happens.

There was however, one comment that struck me as particularly important, not only because it was thoughtful, but because the type of comment is so rare. This is because it came from a government official. In this case, from Dave Nikolejsin, the CIO of the Government of British Columbia.

Everything about Mr. Nikolejsin’s comment deserves to be studied and understood by those in the public and private sector seeking to understand how to engage the public online. His comment is a perfect case of how and why governments should allow public servants to comment on blogs that tackle issues they are themselves addressing.

What makes Mr. Nikolejsin’s comment (which I’ve reprinted below) so effective? Let me break out the key components:

  1. It’s curious: Given the nature of my blog post, a respondent could easily have gone on the offensive and merely countered claims they disagreed with. Instead, Mr. Nikolejsin remains open and curious about the ideas in the post and its claims. This makes readers and other commentators less likely to attack and more likely to engage and seek to understand.
  2. It seeks to return to first principles: The comment is effective because it is concise and it tackles the specific issues raised by the post. But part of what really makes it shine is how it seeks to identify first principles by talking about different approaches to online IDs. Rather than ending up arguing about solutions, the comment engages readers to identify what assumptions they may or may not have in common with one another. This won’t necessarily make people more likely to agree, but they’ll end up debating the right thing (goals, assumptions) rather than the wrong thing (specific solutions).
  3. It links to further readings: Rather than try to explain everything in his response, the comment instead links to relevant work. This keeps the comment shorter and more readable, while also providing those who care about this issue (like me) with resources to learn more.
  4. It solicits feedback: “I really encourage you to take a look at the education link and tell me what you think.” Frequently, comments simply retort points in the original post they disagree with. This can reinforce the sense that the two parties are in opposition. Mr. Nikolejsin and I actually agree far more than we disagree: we both want a secure, cost-effective, and user-friendly online ID management system for government. By asking for feedback he implicitly recognizes this and is asking me to be a partner, not an antagonist.
  5. It is light: One thing about the web is that it is deeply human. Overly formal statements look canned and cause people to tune out. This comment is intelligent and serious in its content, but remains light and human in its style. I get the sense a human wrote this, not a communications department. People like engaging with humans. They don’t like engaging with communications departments.
  6. Community feedback: The comment has already sparked a number of responses containing supportive thoughts, suggestions and questions, including some from people working in municipalities, experts in the field and citizen users. It’s actually a pretty decent group of people there – the kind a government would want to engage.

In short, this is a comment that sought to engage. And I can tell you, it has been deeply, deeply successful. I know that some of what I wrote might have been difficult to read but after reading Mr. Nikolejsin’s comments, I’m much more likely to bend over backwards to help him out. Isn’t this what any government would want of its citizens?

Now, am I suggesting that governments should respond to every blog post out there? Definitely not. But there were a number of good comments on this post, and the caliber of the readers showing up made commenting on it likely worthwhile.

I’ve a number of thoughts on the comment that I hope to post shortly. But first, I wanted to repost the comment, which you can also read in the original post’s thread here.

Dave Nikolejsin <dave.nikolejsin@gov.bc.ca> (unregistered) wrote: Thanks for this post David – I think it’s excellent that this debate is happening, but I do need to set the record straight on what we here in BC are doing (and not doing).

First and foremost, you certainly got my attention with the title of your post! I was reading with interest to see who in Canada was wasting $30M – imagine my surprise when I saw it was me! Since I know that we’ve only spent about 1% of that so far I asked Ian what exactly it was he presented at the MISA conference you mentioned (Ian works for me). While we would certainly like someone to give us $30M, we are not sure where you got the idea we currently have such plans.

That said I would like to tell you what we are up to and really encourage the debate that your post started. I personally think that figuring out how we will get some sort of Identity layer on the Internet is one of the most important (and vexing) issues of our day. First, just to be clear, we have absolutely nothing against OpenID. I think it has a place in the solution set we need, but as others have noted we do have some issues using foreign authentication services to access government services here in BC simply because we have legislation against any personal info related to gov services crossing the border. I do like Jeff’s thinking about whom in Canada can/will issue OpenID’s here. It is worth thinking about a key difference we see emerging between us and the USA. In Canada it seems that Government’s will issue on line identity claims just like we issue the paper/plastic documents we all use to prove our Identities (driver’s licenses, birth certificates, passports, SIN’s, etc.). In the USA it seems that claims will be issued by the private sector (PayPal, Google, Equifax, banks, etc.). I’m not sure why this is, but perhaps it speaks to some combination of culture, role of government, trust, and the debacle that REALID has become.

Another issue I see with OpenID relates to the level of assurance you get with an OpenID. As you will know if you look at the pilots that are underway in US Gov, or look at what you can access with an OpenID right now, they are all pretty safe. In other words “good enough” assurance of who you are is ok, and if someone (either the OpenID site or the relying site) makes a mistake it’s no big deal. For much of what government does this is actually an acceptable level of assurance. We just need a “good enough” sense of who you are, and we need to know it’s the same person who was on the site before. However, we ALSO need to solve the MUCH harder problem of HIGH ASSURANCE on-line transactions. All Government’s want to put very high-value services on-line like allowing people access to their personal health information, their kids report cards, driver’s license renewals, even voting some day, and to do these things we have to REALLY be sure who’s on the other end of the Internet. In order to do that someone (we think government) needs to vouch (on-line) that you are really you. The key to our ability to do so is not technology, or picking one solution over the other, the key is the ID proofing experience that happens BEFORE the tech is applied. It’s worth noting that even the OpenID guys are starting to think about OpenID v.Next (http://self-issued.info/?p=256) because they agree with the assurance level limitation of the current implementation of OpenID. And OpenID v.Next will not be backward compatible with OpenID.

Think about it – why is the Driver’s License the best, most accepted form of ID in the “paper” world. It’s because they have the best ID proofing practices. They bring you to a counter, check your foundation documents (birth cert., Card Card, etc.), take your picture and digitally compare it to all the other pictures in the database to make sure you don’t have another DL under another name, etc. Here in BC we have a similar set of processes (minus the picture) under our Personal BCeID service (https://www.bceid.ca/register/personal/). We are now working on “claims enabling” BCeID and doing all the architecture and standards work necessary to make this work for our services. Take a look at this work here (http://www.cio.gov.bc.ca/cio/idim/index.page?).

I really encourage you to take a look at the education link and tell me what you think. Also, the standards package is getting very strong feedback from vendors and standards groups like the ICF, OIX, OASIS and Kantara folks. This is really early days and we are really trying to make sure we get it right – and spend the minimum by tracking to Internet standards and solutions wherever possible.

Sorry for the long post, but like I said – this is important stuff (at least to me!) Keep the fires burning!

Thanks – Dave.

Saving Millions: Why Cities should Fork the Kuali Foundation

For those interested in my writing on open source, municipal issues and technology, I want to be blunt: I consider this to be one of the most important posts I’ll write this year.

A few months ago I wrote an article and blog post about “Muniforge,” an idea based on a speech I’d given at a conference in 2009 in which I advocated that cities with common needs should band together and co-develop software to reduce procurement costs and better meet requirements. I continued to believe in the idea, but recognized that cultural barriers would likely make it difficult to realize.

Last month that all changed. While at Northern Voice I ended up talking to Jens Haeusser, an IT strategist at the University of British Columbia, and confirmed something I’d long suspected: that some people much smarter than me had already had the same idea and had made it a reality… not among cities but among academic institutions.

The result? The Kuali foundation. “…A growing community of universities, colleges, businesses, and other organizations that have partnered to build and sustain open-source administrative software for higher education, by higher education.”

In other words for the past 5 years over 35 universities in the United States, Canada, Australia and South Africa have been successfully co-developing software.

For cities everywhere interested in controlling spending or reducing costs, this should be an earth-shattering revelation – a wake-up call – for several reasons:

  • First, a viable working model for muniforge has existed for 5 years and has been a demonstrable success, both in creating high quality software and in saving the participating institutions significant money. Devising a methodology to calculate how much a city could save by co-developing software with an open source license is probably very, very easy.
  • Second, what is also great about universities is that they suffer from many of the challenges of cities. Both have conservative bureaucracies, limited budgets, and significant legacy systems. In addition, neither has IT as a core competency, and both are frequently concerned with licenses, liability and “owning” intellectual property.
  • Third, and possibly the best part: the Kuali Foundation has already addressed all the critical obstacles to such an endeavour and has developed the licensing agreements, policies, decision-making structures, and workflow processes necessary for success. Moreover, all of this legal, policy and work infrastructure is itself available to be copied. For free. Right now.
  • Fourth, the Kuali foundation is not a bunch of free-software hippies that depend on the kindness of strangers to patch their software (a stereotype that really must end). Quite the opposite. The Kuali foundation has helped spawn 10 different companies that specialize in implementing and supporting (through SLAs) the software the foundation develops. In other words, the universities have created a group of competing firms dedicated to serving their niche market. Think about that. Rather than deal with vendors who specialize in serving large multinationals and who’ve tweaked their software to (somewhat) work for cities, the foundation has fostered competing service providers (to say it again) within the higher education niche.
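On the first point, the savings argument is simple arithmetic. Here is a back-of-envelope sketch comparing each city licensing commercial software on its own against a group of cities splitting one shared build, Kuali-style. Every dollar figure below is a hypothetical placeholder, not real procurement data:

```python
# Back-of-envelope comparison: N cities each licensing commercial software
# on their own versus co-developing one shared open-source system.
# All figures are hypothetical placeholders for illustration only.

def solo_cost(license_fee, annual_support, years):
    """Each city pays the full license plus yearly support by itself."""
    return license_fee + annual_support * years

def shared_cost(dev_cost, annual_maintenance, years, n_cities):
    """Co-developing cities split one build and its upkeep evenly."""
    return (dev_cost + annual_maintenance * years) / n_cities

cities = 20
solo = solo_cost(license_fee=500_000, annual_support=100_000, years=10)
shared = shared_cost(dev_cost=3_000_000, annual_maintenance=400_000,
                     years=10, n_cities=cities)
print(f"Per-city cost over 10 years: solo ${solo:,.0f} vs shared ${shared:,.0f}")
print(f"Savings across {cities} cities: ${(solo - shared) * cities:,.0f}")
```

Even with generous assumptions about the shared build’s cost, the per-city number drops fast as more cities join, because development is paid once while licenses are paid everywhere.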

As a result, I believe a group of forward-thinking cities – perhaps starting with those in North America – should fork the Kuali Foundation. That is, they should copy Kuali’s bylaws, its structure, its licenses and pretty much everything else – possibly even the source code for some of its projects – and create a Kuali for cities. Call it Muniforge, or Communiforge or CivicHub or whatever… but create it.

We can radically reduce the cost of software to cities, improve support by creating the right market incentives to foster companies whose interests are directly aligned with cities, and create better software that meets cities’ unique needs. The question is… will we? All that is required is for CIOs to begin networking and for a few to discover some common needs. One idea I have immediately: the City of Nanaimo could apply the Kuali modified Apache license to the council monitoring software package it developed in house and upload it to GitHub. That would be a great start – one that could collectively save cities millions.

If you are a city CIO/CTO/Technology Director and are interested in this idea, please check out these links:

The Kuali Foundation homepage

Open Source Collaboration in Higher Education: Guidelines and Report of the Licensing and Policy Framework Summit for Software Sharing in Higher Education by Brad Wheeler and Daniel Greenstein (key architects behind Kuali)

Open Source 2010: Reflections on 2007 by Brad Wheeler (a must read, lots of great tips in here)

Heck, I suggest looking at all of Brad Wheeler’s articles and presentations.

Another overview article on Kuali by University Business

Phillip Ashlock of Open Plans has an overview article of where some cities are heading re open source.

And again, my original article on Muniforge.

If you aren’t already, consider reading the OpenSF blog – these guys are leaders and one way or another will be part of the mix.

Also, if you’re on twitter, consider following Jay Nath and Philip Ashlock.

Canadian Governments: How to waste millions online ($30M and counting)

Back from DC and Toronto I’m feeling recharged and reinvigorated. The Gov 2.0 expo was fantastic, it was great to meet colleagues from around the world in person. The FCM AGM was equally exciting with a great turnout for our session on Government 2.0 and lots of engagement from the attendees.

So, now that I’m in a good mood, it’s only natural that I’m suddenly burning up about some awesomely poor decisions being made at the provincial level that may also be in the process of being made at the federal level.

Last year at the BC Chapter of the Municipal Information Systems Association conference I stumbled, by chance, into a session run by the British Columbia government about a single login system it was creating for government websites. I get that this sounds mundane, but check this out: it would mean that if you live in BC you’ll have a single login name and password when accessing any provincial government service. Convenient! Better still, the government was telling the municipalities that this system (still in development) could work for their websites too. So only one user name and password to access any government service in BC! It all sounds like $30 million (the number I think they quoted) well spent.

So what could be wrong with this…?

How about the fact that such a system already exists. For free.

Yes, OpenID, is a system that has been created to do just this. It’s free and licensed for use by anyone. Better still, it’s been adopted by a number of small institutions such as Google, Yahoo, AOL, PayPal, and Verisign and… none other than the US government which recently began a pilot adoption of it.
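For the curious, the core trick that makes OpenID work is nothing exotic: the identity provider and the relying site agree on a shared key, and the provider signs its “this really is alice” assertion with it. Here is a minimal sketch of that signed-assertion check, loosely modeled on OpenID 2.0’s key-value signing; the field names, key handling and example values are simplified illustrations, not a spec-complete implementation:

```python
import base64
import hashlib
import hmac

# Sketch of OpenID-style assertion signing: the identity provider signs
# the asserted fields with an HMAC key shared with the relying party
# during "association". Field names loosely follow the OpenID 2.0 spec;
# the key and values here are illustrative placeholders.

def kv_form(fields, signed):
    """Serialize the signed fields in key:value form."""
    return "".join(f"{name}:{fields[name]}\n" for name in signed.split(","))

def sign(fields, signed, mac_key):
    """HMAC-SHA256 signature over the key:value serialization."""
    digest = hmac.new(mac_key, kv_form(fields, signed).encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

mac_key = b"shared-association-secret"  # negotiated when the sites associate
assertion = {
    "op_endpoint": "https://provider.example/op",
    "claimed_id": "https://alice.example/",
    "response_nonce": "2010-06-05T12:00:00Zabc123",
}
signed = "op_endpoint,claimed_id,response_nonce"
signature = sign(assertion, signed, mac_key)

# The relying party recomputes the signature and compares in constant time;
# it never needs to see the user's password at all.
assert hmac.compare_digest(signature, sign(assertion, signed, mac_key))
print("assertion verified")
```

The point isn’t the dozen lines of HMAC; it’s that this protocol is already specified, audited and deployed by the big providers, so nobody needs to pay $30 million to reinvent it.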

So let me ask you: Do you think the login system designed by the BC government is going to be more, or less, secure than an open-source system that enjoys the support of Google, Yahoo, AOL, PayPal, Verisign and the US Government? Moreover, do we think that the security concerns these organizations have regarding their clients and citizens are less strict than those of the BC government?

I suspect not.

But that isn’t going to prevent us from sinking millions into a system that will be less secure and will cost millions more to sustain over the coming decades (since we’ll be the only ones using it… we’ll have to have uniquely trained people to sustain it!).

Of course, it gets worse. While the BC government is designing its own system, rumour has it that the Federal Government is looking into replacing Epass, its own aging website login system which, by the by, does not work with Firefox, a web browser used by a quarter of all Canadians. Of course, I’m willing to bet almost anything that no one is even contemplating using OpenID. Instead, we will sink tens of millions of dollars (if not more…) into a system. Of course, what’s $100 million of taxpayer dollars…

Oh, and today’s my birthday! And despite the tone of this post I’m actually in a really good mood and have had a great time with friends, family and loved ones. And where will I be today…? At 30,000 ft flying to Ottawa for GovCamp Canada. Isn’t that appropriate? :)

Open Data: An Example of the Long Tail of Public Policy at Work

As many readers know, Vancouver passed what has locally been termed the Open3 motion a year ago and has had an open data portal up and running for several months.

Around the world, much of the energy around open data initiatives has focused on the development of applications like Vancouver’s Vantrash, Washington DC’s Stumble Safely or Toronto’s Childcare locator. But the other use of data portals is to actually better understand and analyze phenomena in a city – all of which can potentially lead to a broader diversity of perspectives, better public policy, and a more informed public and/or decision makers.

I was thus pleased to find out about another example of what I’ve been calling the Long Tail of Public Policy when I received an email from Victor Ngo, a student at the University of British Columbia who just completed his 2nd year in the Human Geography program with an Urban Studies focus (He’s also a co-op student looking for a summer job – nudge to the City of Vancouver).

It turns out that last month, he and two classmates did a project on graffiti occurrence and its relationship to land use, crime rates, and socio-economic variables. As Victor shared with me:

It was a group project I did with two other members in March/April. It was for an introductory GIS class and given our knowledge, our analysis was certainly not as robust and refined as it could have been. But having been responsible for GIS analysis part of the project, I’m proud of what we accomplished.

The “Graffiti sites” shapefile was very instrumental to my project. I’m a big fan of the site and I’ll be using it more in the future as I continue my studies.

So here we have University students in Vancouver using real city data to work on projects that could provide some insights, all while learning. This is another small example of why open data matters. This is the future of public policy development. Today Victor may be a student, less certain about the quality of his work (don’t underestimate yourself, Victor) but tomorrow he could be working for government, a think tank, a consulting firm, an insurance company or a citizen advocacy group. But wherever he is, the open data portal will be a resource he will want to turn to.

With Victor’s permission I’ve uploaded his report, Graffiti in the Urban Everyday – Comparing Graffiti Occurrence with Crime Rates, Land Use, and Socio-Economic Indicators in Vancouver, to my site so anyone can download it. Victor has said he’d love to get people’s feedback on it.

And what was the main drawback of using the open data? There wasn’t enough of it.

…one thing I would have liked was better crime statistics, in particular, the data for the actual location of crime occurrence. It would have certainly made our analysis more refined. The weekly Crime Maps that the VPD publishes is an example of what I mean:

http://vancouver.ca/police/CrimeMaps/index.htm

You’re able to see the actual location where the crime was committed. We had to tabulate data from summary tables found at:

http://vancouver.ca/police/organization/planning-research-audit/neighbourhood-statistics.html

To translate: essentially the city releases this information in a non-machine-readable format, meaning that citizens, public servants at other levels of government and (I’m willing to wager) City of Vancouver public servants outside the police department have to recreate the data in a digital format. What a colossal waste of time and energy. Why not just share the data in a structured digital way? The city already makes it public, why not make it useful as well? This is what Washington DC (search crime) and San Francisco have done.
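The re-keying work in question is mechanical, which is exactly why it’s so wasteful to make every user repeat it. Here is a sketch of the chore: taking figures transcribed by hand from a published summary table and flattening them into machine-readable CSV. The neighbourhood names and counts are hypothetical placeholders, not actual VPD statistics:

```python
import csv
import io

# Sketch of the busywork the post describes: figures manually re-keyed
# from a published summary table, then flattened into structured CSV.
# Neighbourhood names and counts are hypothetical placeholders.
summary_rows = [
    ("Downtown", {"theft_from_vehicle": 412, "mischief": 198}),
    ("Kitsilano", {"theft_from_vehicle": 121, "mischief": 64}),
]

def to_csv(rows):
    """Emit one CSV record per neighbourhood/offence pair."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["neighbourhood", "offence", "count"])
    for neighbourhood, counts in rows:
        for offence, count in sorted(counts.items()):
            writer.writerow([neighbourhood, offence, count])
    return buf.getvalue()

print(to_csv(summary_rows))
```

Had the city published a file like this output in the first place, every student, researcher and public servant downstream could skip the transcription step entirely.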

I hope that more apps get created in Vancouver, but as a public policy geek, I’m also hoping that more reports like these (and the one Bing Thom architects published on the future of Vancouver also using data from the open data catalog) get published. Ultimately, more people learning, thinking, writing and seeking solutions to our challenges will create a smarter, more vibrant and more successful city. Isn’t that what you’d want your city government (or any government, really…) to do?

Mick Jagger & why copyright doesn't always help artists

I recently read this wonderful interview with Mick Jagger on the BBC website which had this fantastic extract about the impact of the internet on the music industry. What I love about this interview is that Mick Jagger is, of course, about as old a legend as you can find in the music industry.

…I’m talking about the internet.

But that’s just one facet of the technology of music. Music has been aligned with technology for a long time. The model of records and record selling is a very complex subject and quite boring, to be honest.

But your view is valid because you have a huge catalogue, which is worth a lot of money, and you’ve been in the business a long time, so you have perspective.

Well, it’s all changed in the last couple of years. We’ve gone through a period where everyone downloaded everything for nothing, and we’ve gone into a grey period where it’s much easier to pay for things – assuming you’ve got any money.

Are you quite relaxed about it?

I am quite relaxed about it. But, you know, it is a massive change and it does alter the fact that people don’t make as much money out of records.

But I have a take on that – people only made money out of records for a very, very small time. When The Rolling Stones started out, we didn’t make any money out of records because record companies wouldn’t pay you! They didn’t pay anyone!

Then, there was a small period from 1970 to 1997, where people did get paid, and they got paid very handsomely and everyone made money. But now that period has gone.

So if you look at the history of recorded music from 1900 to now, there was a 25 year period where artists did very well, but the rest of the time they didn’t.

So what does this have to do with copyright? Well, remember, the record labels and other content distributors (not creators!) keep saying how artists will starve unless there is copyright. But understand that for the entire 110-year period that Mick Jagger is referencing there was copyright… and yet artists were paid to record LPs and records for only a small fraction (less than a quarter) of that period. During the rest of the time, the way they made money was by performing. There is nothing about a stronger copyright regime that ensures artists (the creators!) will receive more money or compensation.

So when the record labels say that without stricter copyright legislation artists will suffer, what they really mean to say is one specific business model – one that requires distributors and that they happen to do well by – will suffer. Artists, who traditionally never received much from the labels (and even during this 25 year period only a tiny few profited handsomely) have no guarantees that with stricter copyright they will see more revenue. No, rather, the distributors will simply own their content for longer and have greater control over its use.

This country is about to go into a dark, dark place with the new copyright legislation. I suspect we will end up stalled for 30 years and cultural innovation will shift to other parts of the world where creativity, remix culture and forms of artistic expression are kept more free.

Again, as Lessig says:

  • Creativity and innovation always builds on the past.
  • The past always tries to control the creativity that builds upon it.
  • Free societies enable the future by limiting this power of the past.
  • Ours is less and less a free society.

Welcome to copyright reform. A Canada where the past controls the creativity that gets built upon it.

Canada 3.0 & The Collapse of Complex Business Models

If you haven’t already, I strongly encourage everyone to go read Clay Shirky’s The Collapse of Complex Business Models. I just read it while finishing up this piece and it articulates much of what underpins it in the usual brilliant Shirky manner.

I’ve been reflecting a lot on Canada 3.0 (think SXSWi meets government and big business) since the conference’s end. I want to open by saying there were a number of positive highlights. I came away with renewed respect and confidence in the CRTC. My sense is net neutrality and other core internet issues are well understood and respected by the people I spoke with. Moreover, I was encouraged by what some public servants had to say regarding their vision for Canada’s digital economy. In many corners there were some key people who seemed to understand what policy, legal and physical infrastructure needs to be in place to ensure Canada’s future success.

But these moments aside, the more I reflect on the conference the more troubled I feel. I can’t claim to have attended every session but I did attend a number and my main conclusion is striking: Canada 3.0 was not a conference primarily about Canada’s digital future. Canada 3.0 was a conference about Canada’s digital commercial future. Worse, this meant the conference failed on two levels. Firstly, it failed because people weren’t trying to imagine a digital future that would serve Canadians as creators, citizens and contributors to the internet and what this would mean to commerce, democracy and technology. Instead, my sense was that the digital future largely being contemplated was one where Canadians consumed services over the internet. This, frankly, is the least important and interesting part of the internet. Designing a digital strategy for companies is very different than designing one for Canadians.

But, secondly, even when judged in commercial terms, the conference, in my mind, failed. This is not because the wrong people were there, or that the organizers and participants were not well-intentioned. Far from it. Many good and many necessary people were in attendance (at least as one could expect when hosting it in Stratford).

No, the conference’s main problem was that at the core of many conversations lay an untested assumption: that we can manage the transition of broadcast media (by this I mean movies, books, newspapers & magazines, television) as well as other industries from (a) a broadcast economy to (b) a networked/digital economy. Consequently, the central business and policy challenge becomes how to help these businesses survive this transitionary period and get (b) happening asap so that the new business models work.

But the key assumption is that the institutions – private and public – that were relevant in the broadcast economy can transition. Or that the future will allow for a media industry that we could even recognize. While I’m open to the possibility that some entities may make it, I’m more convinced that most will not. Indeed, it isn’t even clear that a single traditional business model, even radically adapted, can adjust to a network world.

What no one wants to suggest is that we may not be managing a transition. We may be managing death.

The result: a conference that doesn’t let those who have let go of the past roam freely. Instead they must lug around all the old modes like a ball and chain.

Indeed, one case in point was listening to managers of the Government of Canada's multimedia fund share how, to get funding, a creator would need to partner with a traditional broadcaster. To be clear, if you want to kill content, give it to a broadcaster: they'll play it once or twice, then put it in a vault, and no one will ever see it again. Furthermore, a broadcaster has all the infrastructure, processes and overhead that make them unworkable and unprofitable in the online era. Why saddle someone new with all this? Ultimately this is a program designed to create failures and, worse, to pollute the minds of emerging multimedia artists with all sorts of broadcast baggage. All in the belief that it will help bridge the transition. It won't.

The ugly truth is that just as the big horse-buggy makers didn't survive the transition to the automobile, and many makers of large complex mainframe computers didn't survive the arrival of the personal computer, our traditional media environment is loaded with the walking dead. Letting them control the conversation, influence policy and shape the agenda is akin to asking horse-drawn carriage makers to write the rules for the automobile era. But this is exactly what we are doing. Copyright law, the pillar of this next economy, is being written not by the PMO but by the losers of the last economy. Expect it to slow our development dramatically.

And that’s why Canada 3.0 isn’t about planning for 3.0 at all. More like trying to save 1.0.

Banned Blogs

So I’m fed up. I’m tired of hearing about fantastic blogs written by fantastic people that are banned by different federal departments of the Canadian public service.

Banned you say? Isn’t that a little dramatic?

No! I mean banned.

The IT departments of several federal government ministries block certain websites that are deemed to have inappropriate or non-work-related content. Typically these include sites like Facebook, Gmail and, of course, various porn sites (a list of well-known mainstream sites that are blocked can be found here).

I've known for a while that my site – eaves.ca – is blocked by several departments, and it hasn't bothered me (I've always felt that blocking someone only increases people's interest in them). But as the whispers about the number of blocked blogs grow, I find the practice puzzling and disturbing. One might think this is limited to casual or personal blogs, but many of the blogs I hear about are on public policy or the public service. They may even contain interesting insights that could help public servants. They are not sites that contain pornographic material, games or other content that could be construed as leisure (as enjoyable as I know reading my blog is…).

So, in an effort to get a better grasp of the scope and depth of the problem, I'd like your help to put together a list. On eaves.ca I've created a new page – entitled "Banned Blogs" – that lists blogs and the Canadian federal government ministries that ban them. If you are a public servant and you know of a blog that is blocked from your computer, please send me a note. If you know a public servant, ask them to check their favourite blogs. If you know of a site that is blocked, you can send me an email, a tweet, or an anonymous comment on this blog, and I'll add it to the list. It would be fantastic to get a sense of who is blocked and by which departments. Maybe we'll even knock some sense into some IT policies.

Maybe.

(Post script: Douglas B. has some great suggestions about how to deal with blocked sites and lists some of the ancient policies that could help public servants fight this trend).
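For public servants who want to check a batch of favourite blogs in one go rather than visiting each by hand, the checking step could even be scripted. The sketch below is purely illustrative (the `check_sites` helper and the URL list are my own invention, not part of any departmental tooling): it tries to fetch each URL and reports which ones are unreachable, which on a filtered network is a reasonable first hint that a site is blocked.

```python
# Minimal sketch: probe a list of blog URLs and flag the ones that fail.
# The helper name and URLs are hypothetical; a fetch failure only *suggests*
# filtering -- a site could also simply be down.
from urllib.request import urlopen
from urllib.error import URLError


def check_sites(urls, fetch=None, timeout=5):
    """Map each URL to True (reachable) or False (blocked or unreachable).

    `fetch` is injectable so the logic can be exercised without network
    access; by default it performs a real HTTP request via urllib.
    """
    if fetch is None:
        def fetch(url):
            urlopen(url, timeout=timeout)  # raises on any failure
    results = {}
    for url in urls:
        try:
            fetch(url)
            results[url] = True
        except (URLError, OSError):
            results[url] = False
    return results


if __name__ == "__main__":
    blogs = ["http://eaves.ca", "http://example.org"]  # illustrative list
    for url, ok in check_sites(blogs).items():
        print(f"{url}: {'reachable' if ok else 'BLOCKED or unreachable'}")
```

Run from a departmental workstation, the "BLOCKED or unreachable" entries would be the candidates worth reporting for the list.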

Digital Economy Strategy: Why we risk asking the wrong question

Far better an approximate answer to the right question, than the exact answer to the wrong question, which can always be made precise….

John Tukey

I've always admired Paul Erdos, the wandering mathematician whom I first learned about by reading his obituary in the Economist back in 1996 (and later learned was a friend and frequent house guest of my grandfather's). What I remember best about that Economist obituary was one of his students describing how his genius lay not in his capacity to produce mathematical proofs, but in his ability to ask the right question – the question that set events in motion so that the proof could be found at all.

It is with that idea in mind that I turn to the Canada 3.0 conference here in Stratford, Ontario, where I've been invited to take part in a meeting with industry types and policy leaders to talk about what Canada must do to become a leading digital nation by 2017. The intent is to build on last year's Stratford Declaration and develop an action plan.

So what do I think we need to do? First, I think we need to ask the right question.

I think we need to stop talking about digital as the future.

This whole conversation isn't about being a digital country. It isn't about a future where everything is going to be digitized. That isn't the challenge. It is already happening. It's done. It's over. Canada is already well on its way to becoming digital. Anyone who uses MS Word to write a document is digital. I've been submitting papers using a word processor since high school (this comes from a place of privilege, something I'll loop back to). Worse, talking about digital means talking about technology like servers or standards, or about the business models of Bell, Google, music producers and all the other things that don't matter.

The dirty truth is that Canada's digital future isn't about digital. What is special isn't that everything is being digitized. It's that everything is being connected. The web isn't interesting because you can read it on a computer screen. It is special because of hyperlinks – because information is connected to other information (again, something the newspapers have yet to figure out). So this is a conversation about connectivity. It is about the policy and legal structure needed when me, you, information and places – when everything, everywhere – is connected to everything else, persistently. That's the big change.

So if a digital economy strategy is really a networked economy strategy, and what makes a networked economy work better is stronger and more effective connectivity, then the challenge isn't what happens when something shifts from physical to digital. It is how we promote the connectivity of everything to everything in a fair manner. How do we make ourselves the most networked country in physical, legal and policy terms? This is the challenge.

Viewed in this frame, we do indeed have some serious challenges, and when it comes to connectivity we are already far behind many others if we want to be a global leader by 2017. So what are the key issues limiting or preventing connectivity, and what consequences of a networked economy do we need to be worried about? How about:

  • Expensive and poor broadband and mobile access (in both remote and urban communities)
  • Throttling and threats to Net Neutrality
  • Using copyright as a vehicle to limit the connectivity of information (ACTA) or threaten people's right to connect
  • Using copyright as a vehicle to protect business models built on limiting people's capacity to connect to innovations and ideas
  • Governments that don't connect their employees to one another and to the public
  • It's also about connective rights: the individual's right to limit connectivity to protect privacy, and the right to freely associate and disassociate

So what are the three things we need to start thinking about immediately?

If connectivity is the source of innovation, wealth and prosperity then how do we ensure that Canadians are the most connected citizens in the world?

1)    A net-neutral broadband and mobile marketplace where the costs of access are the lowest in the world.

That would be a source of enormous competitive advantage and a critical stepping stone to ensuring access to education and an innovation-fuelled economy. Sadly, we have work to do. Take, for example, the fact that we have the worst cell phone penetration rates in the developed world. This at a time when cellphone internet access is overtaking desktop internet access.

But more importantly, I was lucky to be able to use a word processor 20 years ago. Today, not having access to the internet is tantamount to preventing a child from being able to go to the library, or worse, preventing them from learning to read. Affordable access is not a rural or urban issue. It’s a rights and basic education issue.

Equally important is that the network remain a neutral platform upon which anyone can innovate. The country that allows its networks to grant (or sell) certain companies or individuals special privileges is one that will quickly fall behind the innovation curve. New companies and business models inevitably displace established players. If those established players are allowed to snuff out new ideas before they mature, then there will be no new players. No innovation. No new jobs. No competitive advantage.

2)    A copyright regime that enables the distribution of ideas and the creation of new culture.

Here I am in Stratford, Ontario, home of the Stratford Shakespeare Festival, one of the biggest open source festivals in the country. Every year the city celebrates plays that, because they are in the public domain, can be remixed, re-interpreted, and used without anyone’s permission to create new derivative cultural works (as well as bring joy and economic prosperity to untold people). A copyright regime that overly impedes the connectivity of works to one another (no fair use!) or the connectivity of people to ideas is one that will limit innovation in Canada.

A networked economy is not just one that connects people to a network. That is a broadcast economy. A networked economy is one that allows people to connect works together to create new works. Copyright should protect creators of content, but it should do so to benefit the creators, not support vast industries that market, sell, and repackage these works long after the original creator is dead. As Lawrence Lessig so eloquently put it:

  • Creativity and innovation always builds on the past.
  • The past always tries to control the creativity that builds upon it.
  • Free societies enable the future by limiting this power of the past.
  • Ours is less and less a free society.

A networked economy limits the past to enable the future.

3)    A government that uses a networked approach to creating a strategy for a connected economy.

An agrarian economy was managed using papyrus; an industrial economy was managed via printing presses, typewriters and carbon-copy paper. A digital economy strategy and its policies were created in Microsoft Word and over email. A networked economy can and will only be successfully managed and regulated when those trying to regulate it stop using siloed, industrial modes of production and instead start thinking and organizing like a network. Not to ring an old bell, but today that means drafting the policy, from beginning to end, on GCPEDIA, the only platform where federal public servants can actually organize as a network.

Managing an industrial economy would have been impossible using hand-written papyrus, not just because the tools could not have handled the volume and complexity of the work, but because the underlying forms of thinking and organizing shaped by that tool are so different from how an industrial economy works.

I'm going to predict it right now: until a digital economy strategy is drafted using online, internally connected tools like wikis, it will fail. I say this not because the people working on it will not be intelligent, but because they won't be thinking in a connected way. It will be like horse-and-buggy users trying to devise what a policy framework for cars should look like. It will suck, and terrible, terrible decisions will be made.

In summary, these are the three things I think the federal government needs to be focused on if we are going to create a digital economy strategy that positions us to be leaders by 2017. This is the infrastructure that needs to be in place to ensure that we maximize our capacity to connect each other and our work and reap the benefits of that network.

The Dangers of Being a Platform

Andrew P. sent me this article Apple vs. the Web: The Case for Staying Out of Steve Jobs’s Walled Garden that makes a strong case for your media company to not develop (or at least not bet the bank on) an iPhone App as the way out of trouble.

Few companies actually know how to manage being a platform for an ecosystem, and Apple is definitely not one of them. Remember, this is a company that's never played well with others and has a deeply disturbing control freakishness to it. Much like Canadians are willing to tolerate the annoying traits of the federal NDP, consumers and developers were willing to tolerate these annoying traits as long as Apple was merely influencing the marketplace but not shaping it. As Apple's influence grows, so too do the rumblings about its behaviour. People say nice things about Apple's products. I don't hear people say nice things about Apple. This is stage one of any decline.

Here, history could be instructive. Look back at another, much more maligned company with a reputation for not playing well with others: Microsoft. Last year, I wrote this piece about how their inability to partner helped contribute to their relative decline. In short, after Microsoft kicked around and bullied those who succeeded on its platform, people caught the message and stopped. Today Apple thrives because people elect to innovate on its platform – because it has been interesting, fun and, to a much, much lesser degree, profitable. Take away the "interesting" and "fun", and/or offer up even a relatively interesting competing platform, and that equation changes.

Heck, even from an end user's perspective, the deal Apple made with me is breaking. Their brand is built around great design and fun (think of all those cute, fun ads). They still have great design, but increasingly, when I think of Apple and the letter F, the word that pops into my head isn't "fun"… it's "fascism." Personally, I'm fairly confident my next phone will not be an iPhone. I like the phone, but I find the idea of Steve Jobs controlling what I do and how I do it simply too freaky. And I don't even own a multi-million-dollar media empire.

So being a platform is hard. It isn't a license to just print money or run roughshod over whomever you want. It is about managing a social contract with all the developers and content creators, as well as all the end users and consumers. That is an enormous responsibility. Indeed, it is one so great that we rarely entrust it to a single organization that isn't the government. Those seeking to create platforms – Apple and Facebook especially (and Google and Microsoft to a lesser extent) – would all do well to remember that fact.

Oh, and if you're part of a media company, don't expect to be saved by some hot new gizmo. Check out this fantastic piece by John Yemma, the Editor of The Christian Science Monitor:

So here’s my position: There is no future in a paywall. No salvation in digital razzle dazzle.

There is, however, a bold future in relevant content.

That’s right. Apple won’t save you. Facebook doesn’t even want to save you. Indeed, there is only one place online where the social contract is clear. And that’s the one you can create with your readers by producing great content. On the web.