Category Archives: technology

CBC: A Case Study in what happens when the Lawyers take over

Like many other people, I’ve been following the virtual meltdown at the CBC over its new iCopyright rules. For a great summary of the back and forth I strongly encourage you to check out Jesse Brown’s blog. In short, the CBC’s terms of use seemed to suggest that no one was allowed to report or reprint excerpts of CBC pieces without the CBC’s express permission. This, as Cameron McMaster noted, actually runs counter to Canadian copyright law.

And yes, the CBC has been moving quickly and relatively transparently to address this matter, and hopefully clearer rules – ones consistent with Canadian law – will emerge. That said, even as it tries, the organization will still have a lot of work to do to persuade its readers it isn’t from Mars when it comes to understanding the internet. Consider this devastating line from the CBC’s spokesperson in response to the outcry.

You’ll also still be able to post links to CBC.ca content on blogs, Facebook pages, Twitter or other online media at no charge and will continue to offer free RSS stories for websites (found here).

Really? I’m still allowed to link to the CBC? How is this even under discussion? Who charges people to link to their site? How is that even possible?

Well, if you think that that is weird, it gets weirder. Dig a little deeper and you’ll find what appears to have so far gone unnoticed in the current debate over the CBC’s bizarre terms of use. On the CBC’s Reuse and Permissions FAQ page the second question and answer reads as follows:

Can we link to your site?
We encourage people to link to us. However, we ask that you read our Terms of Use, which outline the conditions by which external sites may link to ours.

So what are the CBC’s terms of use for linking to their site? Well, this is when the Lawyers really take over:

While CBC/Radio Canada encourages links to the Web site, it does not wish to be linked to or from any third-party web site which (i) contains, posts or transmits any unlawful, threatening, abusive, libellous, defamatory, obscene, vulgar, pornographic, profane or indecent information of any kind, including, without limitation, any content constituting or encouraging conduct that would constitute a criminal offense, give rise to civil liability or otherwise violate any local, state, provincial, national or international law, regulation which may be damaging or detrimental to the activities, operations, credibility or integrity of CBC/Radio Canada or which contains, posts or transmits any material or information of any kind which promotes racism, bigotry, hatred or physical harm of any kind against any group or individual, could be harmful to minors, harasses or advocates harassment of another person, provides material that exploits people under the age of 18 in a sexual or violent manner, provides instructional information about illegal activities, including, without limitation, the making or buying of illegal weapons; or (ii) contains, posts or transmits any information, software or other material which violates or infringes upon the rights of others, including material which is an invasion of privacy or publicity rights, or which is protected by copyright, trademark or other proprietary rights. CBC/Radio Canada reserves the right to prohibit or refuse to accept any link to the Web site, including, without limitation, any link which contains or makes available any content or information of the foregoing nature, at any time. You agree to remove any link you may have to the Web site upon the request of CBC/Radio Canada.

This sounds all legal and proper. And hey, I don’t want bigots or child molesters linking to my site either. But that doesn’t mean I can legally prevent them.

The CBC’s terms of use use language suggesting the CBC has the right to prevent you, or anyone else, from linking to its website. From a practical, business strategy and legal perspective, this is completely baffling.

To my mind, this is akin to the CBC claiming it can prevent you from telling people its address or giving them directions to its buildings. Put another way, the CBC is claiming dominion over every website in the world, asserting that it may dictate whether or not any of them can link to its site.

I have my suspicions that there is nothing in Canadian law to support the CBC’s position. If anyone knows of a law or decision that would support the CBC’s terms of use please do send me a note or comment below.

Otherwise, I hope the CBC will also edit this part of its Terms of Use and its Reuse and Permissions FAQ page. We need the organization to join the 21st century.

More Open Data Apps hit Vancouver

Since the launch of Vancouver’s open data portal a lot of the talk has focused on independent or small groups of programmers hacking together free applications for citizens to use. Obviously I’ve talked a lot about (and have been involved in) Vantrash and have been a big fan of the Amazon.ca/Vancouver Public Library Greasemonkey script created by Steve Tannock.

But independent hackers aren’t the only ones who’ve been interested. Shortly after the launch of the city’s Open Data Portal, Microsoft launched an Open Data App Competition for developers at the Microsoft Canadian Development Centre just outside Vancouver in Richmond, British Columbia. On Wednesday I had the pleasure of being invited to the complex to eat free pizza and, better still, serve as a guest judge during the final presentations.

So here are 5 more applications that have been developed using the city’s open data. (Some are still being tweaked and refined, but the goal is to have them looking shiny and ready by the Olympics.)

Gold

MoBuddy by Thomas Wei: Possibly the most ambitious of the projects, MoBuddy enables you to connect with friends and visitors during the Olympics to plan and share experiences through mobile social networking, including Facebook.

Silver

Vancouver Parking by Igor Babichev: Probably the most immediately useful app for Vancouverites, Vancouver Parking helps you plan your trip by using your computer in advance to find parking spots and identify time restrictions, parking durations and costs… It even knows which spots won’t be available during the Olympics. After the Olympics are over, it will be interesting to see whether other hackers want to help advance this app. I think a mobile or text-message-enabled version might be interesting.

Bronze (tie):

Free Finders by Avi Brenner: Another app that could be quite useful to Vancouver residents and visiting tourists, Free Finders uses your Facebook connection to find free events and services across the city. Lots of potential here for some local newspapers to pick up this app and run with it.

eVanTivity by Johannes Stockmann: A great play on creativity and Vancouver, eVanTivity enables you to find City and social events and add in user-defined data feeds. Once the Olympics are over I’ve got some serious ideas about how this app could help Vancouver’s Arts & Cultural sector.

Honourable Mention:

MapWay by Xinyang Qiu: Offers a way to find City of Vancouver facilities and Olympic events in Bing Maps as well as create a series of customized maps that combine city data with your own.

More interestingly, in addition to being available to use, each of these applications can be downloaded, hacked on, remixed and tinkered with under an open source license (the GNU GPL, I believe) once the Olympics are over. The source code will be available at Microsoft’s CodePlex.

In short, it is great to see a large company like Microsoft take an active interest in Vancouver’s Open Data and try to find some ways to give back to the community – particularly by using an open source license. I’d also like to give a shout out to Mark Gayler (especially) as well as Dennis Pilarinos and Barbara Berg for making the competition possible and, of course, to all the coders at the Development Centre who volunteered their time and energy to create these apps. These are interesting times for a company like Microsoft, so I’d also like to give a shout out to David Crow, who’s been working hard to get important people inside the organization comfortable with the idea of open source and open to experimenting with it.

The Real-time Politician – It's about filters (and being unfiltered)

The other day Mathew Ingram, in response to articles about the president’s one-year anniversary, asked What Are the Implications of a Real-Time, Connected President? More specifically:

Is a real-time connected president more likely to think for himself and look outside the usual Washington circles for ideas or input, or is being connected just a giant distraction for someone who is supposed to be leading the nation?

The policy implications of a real-time, connected president could be interestingly different around, say, copyright law, net neutrality and a myriad of other modern issues a pre-internet president might not get.

But in response to Mathew’s specific question I think the connected president (or politician) has more ways to fail, but if they manage their filters correctly, could also be much, much smarter.

Let me explain why.

The entire infrastructure around a politician is about filtering. As odd as it may be for some readers to hear, politicians do almost nothing but work with information. Indeed, they are overwhelmed with the stuff. Theirs is among the first jobs to confront the signal-to-noise problem: how do you distinguish important information (signal) from unimportant information (noise)? Ever notice how, when you talk to many politicians (particularly ones you don’t know), they listen but aren’t really absorbing what you say? It is because they have people telling them “what matters” about 9-14 hours out of every day. And each issue they get approached about is “the most important.”

Moreover, most politicians have marginal influence at best (even the president can only change so much, particularly without Congress’s help). So that glazed look… it’s not that they don’t care, they are just overwhelmed and don’t know how to prioritize you.

To deal with all this information (not to mention, for politicians like the President, all the decisions), politicians have evolved filters. These filters are staffers. This is why, in many instances, advisers are so deeply powerful – the elected officials they serve are often completely dependent on them to filter out all the noise (irrelevant information) and feed them the factual and political information they need to know (the relevant information) and not much else (like, say, context). A good constituency office staffer knows who in the riding absolutely needs to be called versus who is the time-suck that would never vote for you anyway.  A good policy adviser can provide a briefing note that filters out the misinformation and presents the core message or choice the politician must communicate or make.

Previous new communication technologies either didn’t disrupt this filter mechanism because they were purely broadcast (think radio or television), or had limited effect because they only widened the circle of people the politician could consult in a narrow fashion (telephone or telegraph). The internet, however, does two things. First, it allows you to communicate, in an unfiltered manner, with millions of people, who can in turn communicate back to you. Second, it allows one to access a vast swath of information – much of which is itself already filtered.

The implication of the first shift has been widely talked about. I think politicians are still grappling with this opportunity, but Facebook, Twitter and even email all allow politicians to access their supporters and constituents in interesting ways. They also allow constituents to easily self-organize to give you feedback, be it positive or, as Obama experienced when his own supporters organized on my.barackobama.com to protest his vote in favour of the Foreign Intelligence Surveillance Act, “corrective.” In this regard, politicians are going to need a whole new set of filters – ones able to identify which 2,000-person Facebook group might swell into a 220,000-person group in three weeks.

But the really interesting shift is in the relationship between politicians and their advisers. And here we’ve already seen that shift.

The fact is that most technologies have allowed politicians – particularly those with executive authority – to further centralize that authority. The telegraph, and then telephone allowed politicians to have more direct contact with more people. This gave them the opportunity to micromanage their affairs rather than delegate to officials (think Nixon with the telephone and the details he would get into or the ever centralizing authority of the Canadian Prime Minister’s Office since Trudeau).

For networked politicians the temptation to reach out and micromanage a greater array of staffers – or even to be consulted directly on a greater number of smaller decisions – is enormous. At some point in a networked world, the flow of information, the quantity of decisions and the number of relationships will simply become overwhelming. This is how these technologies can cause filter breakdown and ultimately paralyze the decision-making process (a problem Canada’s present Prime Minister has wrestled with).

And this is why the situation will be so interesting. A networked world increases the power of both politicians and their advisers. As connected politicians have to deal with much more information, the need for filters, and thus the role of advisers, actually becomes more important. At the same time, however, the politician’s capacity to go around those filters – to access the opinions of outsiders, particularly those whom the masses have filtered as credible – also increases. So, in some ways politicians are more autonomous: less dependent on, or more able to challenge, their advisers. (This is somewhat the picture painted in the Washington Post article about Obama.)

My sense is that networked politicians have a difficult road in front of them. Finding the right balance between trusting one’s advisers, managing decisions at the appropriate level and knowing when to listen to outsiders will require more discipline than ever before. Networks and modern communication technology make it much, much easier (and more tempting) to do too much of any of these.

On the flip side however, if a politician can stay disciplined, they may be able to demand better work from their advisers and engage in a greater swath of issues effectively.

Facebook Activists: Engaged, Voting and Older

Today I have the following article on the Globe and Mail website. Interestingly, it seems some of the opposition leaders are beginning to take an interest in the Facebook group – Liberal leader Michael Ignatieff announced yesterday that he will be doing an online townhall on proroguing parliament on his Facebook page. It will be interesting to see how this goes and whether political parties can get comfortable with a two-way medium where they can’t control the message.

Facebook Activists: Engaged, Voting and Older

Over the last few weeks a number of pundits have been unsure how to react to the sudden rise of the Facebook group Canadians Against Proroguing Parliament. Conservative politicians attempted to label the more than 200,000-strong group as part of “the chattering classes,” and political pundits have questioned whether online protests even have meaning or weight.

What is more likely is that few politicians or pundits have actually spent time on the Facebook group and fewer still have tried to understand who its members are and what they believe. Recently Pierre Killeen, an Ottawa-based online public engagement strategist, conducted a survey of the group’s membership in partnership with the Rideau Institute.

Over 340 members of the anti-prorogation Facebook group shared their views and while not a scientific survey, it does provide a window into the group’s makeup and the motivations of its members. Some of the results will surprise both pundits and politicians:

Older than expected

To begin, contrary to the view that Facebook is entirely youth driven, just under half of those who completed the survey were 45 years of age or older. Thirty-four per cent were aged 31 to 44 and 16 per cent answered that they were aged 18 to 30. Not a single person who opted to take the survey was aged 12 to 18.

They vote

Perhaps the most interesting part of the survey was that 96 per cent of participants said they voted in the last federal election. Survey respondents frequently overstate their voting history (people wish to sound more responsible than they are), so this result should be regarded with some skepticism. It nonetheless suggests group members are more likely to vote than the general population. (Sixty per cent of Canadians voted in the last federal election.)

New to, but believers in, online activism

Over half of the members surveyed (55 per cent) said this was the first time they had joined a politically oriented Facebook group. Another 33 per cent indicated they had previously joined only two to four Facebook groups with political themes. Interestingly, 75 per cent of respondents believe the group “will make a difference” while 22 per cent were unsure.

Democracy and accountability are the key issues

Lastly, when asked why they joined, just over half (53 per cent) of respondents indicated it was because “proroguing parliament is undemocratic” and another 33 per cent said it was because “Parliament needs to investigate the Afghan detainee matter.”

Again, it is worth noting that this survey is not scientific, but is our best window to date into who has joined Canadians Against Proroguing Parliament.

And what should people take away from all this? The Facebook group matters for reasons beyond those I initially outlined for The Globe. The fact that this is the first time a majority of those surveyed have joined a politically oriented online campaign suggests such groups may serve as an on-ramp to greater activism and awareness.

More importantly, however, if the survey results are even remotely representative, then the members of the Facebook group vote. Any time 200,000 citizens say an issue will affect their vote, politicians should not discount them so hastily.

Finally, given that Canadians Against Proroguing Parliament has signed up more than twice as many Facebook members as all the political leaders combined (Conservatives 29,616; Liberals 28,898; NDP 27,713; Bloc 4,020; a collective total of 90,247 fans), this is a constituency whose impact may be better monitored in the voting booth than on the street.

David Eaves is a public-policy entrepreneur, open government activist and negotiation expert based in Vancouver

The Internet as Surveillance Tool

There is a deliciously ironic, pathetically sad and deeply frightening story coming out of France this week.

On January 1st France’s new (and controversial) law, the Haute Autorité pour la Diffusion des Œuvres et la Protection des Droits sur Internet, otherwise known by its abbreviation Hadopi, came into effect. The law makes it illegal to download copyright-protected works and uses a “three-strikes” system of enforcement. The first two times an individual illegally downloads copyrighted content (knowingly or unknowingly) they receive a warning. Upon the third infraction the entire household has its internet access permanently cut off and is added to a blacklist. To restore internet access, the household’s computers must be outfitted with special monitoring software that tracks everything the computer does and every website it visits.
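(As an aside, the enforcement rules described above are simple enough to sketch in a few lines of code. This is purely an illustration of the policy’s logic as I’ve described it – all the names here are hypothetical, and it models no real system:)

```python
# Illustrative sketch of a "three-strikes" policy like the one described above.
# All class and method names are invented for illustration.

class ThreeStrikes:
    def __init__(self):
        self.strikes = {}       # household -> number of recorded infractions
        self.blacklist = set()  # households whose internet access is cut off

    def record_infraction(self, household: str) -> str:
        """Record one infraction and return the action taken."""
        count = self.strikes.get(household, 0) + 1
        self.strikes[household] = count
        if count <= 2:
            return "warning"        # first two infractions: a warning only
        self.blacklist.add(household)
        return "disconnected"       # third strike: access cut, household blacklisted

    def restore_access(self, household: str, monitoring_installed: bool) -> bool:
        """Access comes back only if monitoring software is installed."""
        if household in self.blacklist and monitoring_installed:
            self.blacklist.remove(household)
            return True
        return False
```

Note the sting in the tail: the only path off the blacklist runs through the monitoring software, which is precisely why the scheme doubles as a surveillance mechanism.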

Over at FontFeed, Yves Peters chronicles how the French agency charged with enforcing the legislation, also named Hadopi, illegally used a copyrighted font, without the permission of its owner, in its logo design. Worse, once caught, the organization tried to cover up this fact by lying to the public. I imagine fonts and internet law are probably not your thing, but the story really is worth reading (and is beautifully told).

But as sad, funny and ironic as the story is, it is also deeply scary. Hadopi, which is intended to prevent the illegal downloading of copyrighted materials, couldn’t even launch without (innocently or not) breaking the law. They, however, are above the law. There will be no repercussions for the organization and no threat that its internet access will be cut off.

The story for French internet users will, however, be quite different. Over the next few months I wouldn’t be surprised if tens or even hundreds of thousands of French citizens (or their children, or someone else in their home) inadvertently download copyrighted material illegally and, in order to keep their internet access, are forced to let the French government monitor everything they do on their computers. In short, Hadopi will functionally become a system of mass surveillance – a tool enabling the French government to monitor the online activities of more and more of its citizens. Indeed, it is conceivable that after a few years a significant number, possibly even a majority, of French computers could be monitored. Forget Google. In France, the government is the Big Brother you need to worry about.

Internet users in other countries should also be concerned. “Three-strikes” provisions like those adopted by France have allegedly been discussed during the negotiations of ACTA, an international anti-counterfeiting treaty being secretly negotiated between a number of developed countries.

Suddenly copyright becomes a vehicle to justify the government’s right to know everything you do online. To ensure some of your online activities don’t violate copyright, all online activities will need to be monitored. France, and possibly your country soon too, will thus transform the internet – the greatest single vehicle for free thought and expression – into a giant wiretap.

(Oh, and just in case you thought the French misunderstanding of the internet couldn’t get any worse, it does. Read this story from The Economist. How one country can be so backward is hard to imagine.)

The Supreme Court of Canada: There are no journalists, only citizens

I’ll confess some confusion over the slant taken by several newspapers and media outfits regarding yesterday’s Supreme Court decision on defences against libel claims.

For those new to this story: yesterday the Supreme Court of Canada ruled that a libel claim can be defeated even when the facts or allegations made turn out to be false (e.g. I don’t owe you money if I say something nasty and untrue about you) as long as the story was in the public interest and I met a certain standard in trying to ascertain the truth. In short, my intentions, not my output, are what matter most. This new line of defence has a fancy new name to go with it: the defence of responsible communication.

Boring, and esoteric? Hardly.

Notice how it isn’t called “the defence of responsible journalism”? (Although, ahem, someone should let CTV know.) This story matters because it demonstrates that the law is finally beginning to grasp what the internet means for our democracy and society.

Sadly, the Globe, the CBC, the National Post and CTV (indeed everyone with the exception of Colby Cosh at Maclean’s) all framed the decision as being about journalism and journalists.

It isn’t.

This is about all of us – our rights and responsibilities in a democracy in the internet age. Indeed, as if to hammer home this point, the justices went out of their way in their decision to essentially say: there is no such thing as “a journalist” in the legal sense.

A second preliminary question is what the new defence should be called.  In arguments before us, the defence was referred to as the responsible journalism test.  This has the value of capturing the essence of the defence in succinct style.  However, the traditional media are rapidly being complemented by new ways of communicating on matters of public interest, many of them online, which do not involve journalists.  These new disseminators of news and information should, absent good reasons for exclusion, be subject to the same laws as established media outlets.  I agree with Lord Hoffmann that the new defence is “available to anyone who publishes material of public interest in any medium” [paragraph 96]

and earlier they went even further:

The press and others engaged in public communication on matters of public interest, like bloggers, must act carefully, having regard to the injury to reputation that a false statement can cause. [paragraph 62]

If you are going to say “blogger” you might as well say “citizen.” All the more so when “publishing material of public interest in any medium” includes blogs, Twitter, an SMS text message, a YouTube video… mediums through which anyone can publish and broadcast.

Rather than being about journalism, this case was about freedom of expression and about laying a legal framework for a post-journalism world. Traditional journalists benefit as well (which is nice – and there will still be demand for their services) but the decision is much broader and further-reaching than them. At its core, this is about what one citizen can say about another citizen, be that in the Globe, on the CBC, on my blog, or anywhere. And rather than celebrate or confer any unique status upon journalists, it does the opposite. The ruling acknowledges that we are all now journalists and that we need a legal regime that recognizes this reality.

I suspect some journalists will protest this post. But the ruling reflects reality. The notion of journalists as a professional class has always been problematic. There are no standards to guide the profession and no professional college to supervise members (as there is with the legal or medical professions). Some institutions take on the role of standard setting themselves (read: journalism schools and media outlets) but they have no enforcement capacity, and ultimately this is not a self-regulated profession. Rather, it has always been regulated by the courts. Technology has just made that more evident, and now the courts have too. Today, when speaking of others, we are all a little better protected, and we also carry the burden of behaving a little more responsibly.

MuniForge: Creating municipalities that work like the web

Last month I published the following article in the Municipal Information Systems Association’s journal, Municipal Interface. The article was behind a firewall, so now that the month has gone by I’m throwing it up here. Basically, it makes the case that if governments applied open source licenses to the software they developed (or paid to develop), they could save hundreds of millions, or more likely billions, of dollars a year. I’ve already received a couple of emails about it from municipal IT professionals across the country.

MuniForge: Creating Municipalities that Work like the Web

Introduction

This past May the City of Vancouver passed what is now referred to as “Open 3”. This motion states that the City will use open standards for managing its information, treat open source and proprietary software equally during the procurement cycle, and apply open source licenses to software the city creates.

While a great deal of media attention has focused on the citizen engagement potential of open data, the implications of the second half of the motion – the part relating to open source software – have gone relatively unnoticed. This is all the more surprising since last year the Mayor of Toronto also promised his city would apply an open source license to software it creates. This means that two of Canada’s largest municipalities are set to apply open source licenses to software they create in house. Consequently, the source code and the software itself will be available for free under a license that permits users to use, change, improve and redistribute it in modified or unmodified forms.

If capitalized upon, these announcements could herald a revolution in how cities procure and develop software. Rather than thousands of small municipalities collectively spending billions of dollars to each reinvent the wheel, the open sourcing of municipal software could weave Canada’s municipal IT departments into one giant network in which expertise and specialized talent drive up quality and security to the benefit of all, while simultaneously collapsing the costs of development and support. Most interestingly, while this shift will benefit larger cities, its impact could be most dramatic and positive among the country’s smaller cities (those with populations under 200K). What is needed to make it happen is a central platform where the source code and documentation for software that cities wish to share can be uploaded and collaborated on. In short, Canada needs a SourceForge or, better, a GitHub for municipal software.

The cost

For the last two hundred years one feature has dominated the landscape for the majority of municipalities in Canada: isolation. In a country as vast and sparsely populated as ours, villages, towns and cities have often found themselves alone. For citizens, the railway, the telegraph, and then the highway and telecommunications systems eroded that isolation, but in the operations of cities it remains a dominant feature. Most Canadian municipalities are highly effective, but ultimately self-contained, islands. Municipal IT departments are no different. One municipality’s IT department rarely talks to another’s, particularly if they are not neighbours.

The result is that in many cities across Canada, IT solutions are developed in one of two ways.

The first is the procurement model. Thankfully, when the product is off the shelf or easily customized, deployment can occur quickly; this, however, is rarely the case. More often, large software firms and expensive consultants are needed to deploy such solutions, frequently leaving them beyond the means of many smaller cities. Moreover, from an economic development perspective, the dollars spent on these deployments often flow out of the community to companies and consultants based elsewhere. On the flip side, local, smaller firms, if they exist at all, tend to be untested and frequently lack the expertise and competition necessary to provide a reliable and affordable product. Finally, regardless of the firm’s size, most solutions are proprietary and so lock a city in, in perpetuity. This not only holds the city hostage to the supplier, it eliminates future competition and, worse, should the provider go out of business, saddles the city with an unsupported system that will be painful and expensive to upgrade out of.

The second option is to develop in-house. For smaller cities with limited IT departments this option can be challenging, but is often still cheaper than hiring an external vendor. Here the challenge is that any solution is limited by the skills and talents of the City’s IT staff. A small city, with even a gifted IT staff of 2-5 people will be challenged to effectively build and roll out all the IT infrastructure city staff and citizens need. Moreover, keeping pace with security concerns, new technologies and new services poses additional challenges.

In both cases the IT services a city can develop and support for staff and citizens are limited by either the skills and capacity of its team or the size of its procurement budget. In short, the collective purchasing power, development capacity and technical expertise of Canada’s municipal IT departments are lost because we remain isolated from one another. With each city IT department acting like an island, the result is enormous constraint and waste. Software is frequently recreated hundreds of times over as each small city creates its own service or purchases its own license.

The opportunity

It need not be this way. Rather than a patchwork of isolated islands, Canada’s municipal IT departments could be a vast interconnected network.

If even two small communities in Canada applied an open source license to a piece of software they were producing, allowed anyone to download it and documented it well, the cost savings would be significant. Rather than two entities creating what is functionally the same piece of software, the cost would be shared. Once available, other cities could download it and write patches to integrate the software with their own hardware and software infrastructure. These patches would also be open source, making it easier for still more cities to use the software. The more cities participate in identifying bugs, supplying patches and writing documentation, the lower the costs for everyone become. This is how Linus Torvalds started a community whose operating system – Linux – became world class. It is the same process by which Apache came to dominate web servers, and the same approach Mozilla used to create Firefox, a web browser whose market share now rivals that of Internet Explorer. The opportunity to save municipalities millions, if not billions, in software licensing and development costs every year is real and tangible.

What would such a network look like and how hard would it be to create? I suspect that two pieces would need to be in place to begin growing a nascent network.

First, and foremost, there needs to be a handful of small projects. Often the most successful open source projects are those that start collaboratively; that way the processes and culture are, from the get-go, geared towards collaboration and sharing. This is also why smaller cities are the perfect place to start collaborating on open source projects. The world's large cities are happy to explore new models, but they are too rich, too big and too invested in their current systems to drive change. The big cities can afford Accenture. Small cities are not only more nimble, they have the most to gain: by working together and using open source they can provide a level of service comparable to that of the big cities, at a fraction of the cost. An even simpler first step would be to ensure that when contractors sign on to create new software for a city, they agree that the final product will be available under an open source license.

Second, MISA, or another body, should create a SourceForge clone for hosting open source municipal software projects. SourceForge is an American open source software development website that provides services to help people build and share software with coders around the world. It presently hosts more than 230,000 software projects and has over 2 million registered users. SourceForge operates as a sort of marketplace for software initiatives: a place where one can locate software of interest and then download it and/or join the community improving it.

A SourceForge clone – say, Muniforge – would be a repository of software that municipalities across the country could download and use for free. It would also be the platform upon which collaboration around developing, patching and documenting would take place. Muniforge could also offer tips, tools and learning materials for those new to the open source space on how to effectively lead, participate and work within an open source community. That said, if MISA wanted to keep costs even lower, it wouldn't need to create a SourceForge clone at all; it could simply use the actual SourceForge website and lobby the company to create a new "municipal" category.

And herein lies the second great opportunity of such a platform: it could completely restructure the government software business in Canada. At the moment Canadian municipalities must choose between competing proprietary systems that lock them into a specific vendor. Worse still, they must pay for both the software development and the ongoing support. A Muniforge would allow for a new type of vendor modeled after Red Hat – the company that offers support to users who adopt its version of the free, open source Linux operating system. While vendors couldn't sell software found on Muniforge, they could offer support for it. Cities would then have the benefit of outsourcing support without having to pay for the development of a custom, proprietary software system. Moreover, if they are not happy with their support they can always bring it in-house, or ask a competing company to provide it. Since the software is open source, nothing prevents several companies from supporting the same piece of software – enhancing service, increasing competition and driving down prices.

There is one final, global benefit to this approach to software development. Over time, a Muniforge could come to host all of the software necessary to run a modern municipality. This has dramatic implications for cities in the developing world. Thanks to rapid urbanization, many towns and villages in Asia and Africa will be tomorrow's cities and megacities. With only a fraction of the resources, these cities will need to offer the services that are commonplace in Canada today. With Muniforge they could download all the software infrastructure they need for free, allowing precious resources to go towards other critical pieces of infrastructure such as sewers and drinking water. Moreover, a Muniforge would encourage small local IT support organizations to develop in those cities, providing jobs and fostering IT innovation where it is needed most. Better still, over time, patches and solutions would flow the other way, as more and more cities help improve the code base of projects found on Muniforge.

Conclusion

The internet has demonstrated that new, low-cost models of software development exist. Open source development has shown how loosely connected networks of coders and users from across a country, or even around the world, can create world-class software that rivals and even outperforms software created by the largest proprietary developers. This is the logic of the web: broader participation, better software, lower costs.

The question cities across Canada need to ask themselves is: do we want to remain isolated islands, or do we want to work like the web, collaborating to offer better services, more quickly and at lower cost? If even a few cities choose the latter, an infrastructure to enable collaboration can be put in place at virtually no cost, while the potential benefits – and the opportunity to restructure the government software industry – would be significant. Island or network: which do we want to be?

The Three Laws of Open Data

[The following is a Japanese Translation of this post – I’ll be publishing a different language each day this week.]

Over the past few years I have become deeply involved in open government work. Specifically, I have advocated for making the data governments publish openly available so that any citizen can make use of it. My interest grew out of my writing on how open information, technology and generational change are transforming government.
Earlier this year I began advising the Mayor and Council of the City of Vancouver on adopting the open motion (known among staff as Open3), and helped create the City of Vancouver's open data portal, the first in Canada. More recently, the Australian Government asked me to sit on the International Reference Group for its Government 2.0 Taskforce.

Open government work is broad, but my recent work on open data has pushed me to distill the essence of what we actually need and are asking for. The result was shared with attendees during a panel discussion at the Right to Know Week event on transparency in the digital age, hosted by Canada's Office of the Information Commissioner:

The Three Laws of Open Government Data

  1. If it cannot be spidered or indexed, it cannot be used.
  2. If it is not in a format that can be read, it cannot be utilized.
  3. If the legal framework does not grant permission, it cannot be used.

To illustrate, (1) means, at its most basic: if Google (or any other search engine) cannot find it, most citizens will never be able to find it either. So the data should be open to every search engine spider that can crawl it.

Once the data is found, (2) says it must be usable: you need to be able to pull it out or download it in a useful form (for example an API, a subscription feed or a document), so you can mash it up with Google Maps or other datasets, analyze it in OpenOffice, or convert it into whatever standard or program you need.
People who cannot freely use the data are excluded from the debate.

Finally, even once the data can be found and used, (3) raises the question of copyright: you need the legal permission to share what you have made, mobilize other people, and offer new services or point out interesting facts. The information needs a license that permits free use; ideally there would be no license at all, and the best government data is data untouched by copyright.
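To make the second law concrete: "readable format" means a few lines of code can consume the data directly. The following is a minimal sketch – the bus-stop dataset and its field names are invented for illustration, not any city's actual schema – that parses a portal-style CSV and converts it into GeoJSON for use on a web map:

```python
import csv
import io

# Hypothetical CSV as a city's open data portal might publish it:
# an open, machine-readable format that any script or spreadsheet can parse.
raw = """stop_id,name,lat,lon
1,Main & Broadway,49.2625,-123.1007
2,Granville & 8th,49.2644,-123.1386
"""

# Because the format is open, reuse takes a few lines: parse the file and
# reshape it into whatever structure a mash-up (e.g. a web map) needs.
stops = list(csv.DictReader(io.StringIO(raw)))
geojson = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [float(s["lon"]), float(s["lat"])]},
            "properties": {"name": s["name"]},
        }
        for s in stops
    ],
}
```

Had the same data been published as a PDF table, none of this would be possible without brittle scraping, which is the difference the second law is pointing at.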

Finding, using and sharing data: that is what we want.

Of course, a quick internet search shows that others have been thinking along the same lines. There is an excellent set of 8 principles of open government data that is perhaps better suited to conversations at the CIO level and below, but when talking to politicians (or deputy ministers, secretaries of government and chief executives) I have found the three laws above to be the basics: the essentials that need to be remembered.

This Japanese translation was made possible thanks to the generous volunteer work of Tosh Nagashima at the Space-Time Research company in Australia. The team there was amazing in providing a number of translations – I am very much in their debt.

Three Rules for Open Government Data

The following is a Dutch Translation of this post – I’ll be publishing a different language each day this week.

Over the past few years I have become increasingly involved in the open government movement, and in particular I advocate for open data: making the information that government collects and creates available so that citizens can analyze it, use it and reuse it for new purposes. My interest in this subject follows from the writing and work I have done on how technology, open systems and generational change will transform government. Earlier this year I began advising the Mayor and Council of the City of Vancouver on adopting the Open Motion (also called Open3) and developing Vancouver's open data portal, the first municipal open data portal in Canada. Recently the Australian Government asked me to take part in the International Reference Group for its Government 2.0 Taskforce.

The open government movement is of course quite broad, but in my more recent work I have tried to distill the core of open data out of this wider movement. What do we really need, and are we actually asking for it? During the Conference for Parliamentarians: Transparency in the Digital Age – a "Right to Know Week" panel discussion organized by the Office of the Information Commissioner – I introduced three rules for open government data.

Three Rules for Open Government Data

  1. If data cannot be found or made searchable, it does not exist.
  2. If data is not available in an open, machine-readable form, it will not invite citizens to engage with it.
  3. If there is no legal framework permitting reuse of the data, it will not empower citizens.

A brief explanation. Rule (1) really means: can I find it? If Google (or any other search engine) cannot find information, then for most citizens that information does not exist. So it is crucial to ensure the data is optimized to be indexed by all kinds of search engines.

Once I have found the data, rule (2) is about making it usable: I must be able to play with it. That means being able to download the data in a simple, usable format (such as an API, an RSS feed or a documented file). Citizens need data they can mash up with Google Maps or other websites, analyze in OpenOffice, or convert into a file format or program of their own choosing. Citizens who cannot play with information are citizens who cannot take part in the debate.

Finally, even when I can find the data and play with it, rule (3) stresses that I need a legal framework that allows me to share what I have made, to invite and organize other citizens to participate, to build a new service around the data, or simply to highlight interesting facts. This means the license attached to the information and data should impose as few restrictions on use as possible; ideally, government data would be released into the public domain. The best government data and information is that which is not protected by copyright. Datasets licensed in a way that prevents citizens from sharing their work silence citizens and amount to censorship.

Search, play, and share. That is what we want.

A quick search of the internet shows that other people have been thinking about this subject too. There is an excellent piece on 8 principles of open government data that is more detailed, and perhaps even better, certainly for discussions at the CIO level. But when talking to politicians (or senior civil servants and CEOs), and like the people present at the conference the other month, I found the simplicity of three more important: three simple rules that everyone can easily remember.

This Dutch translation was made possible due to the generous work of Diederik van Lieree.

Making Open Source Communities (and Open Cities) More Efficient

My friend Diederik and I are starting to work more closely with some open source projects on how to help "open" communities (be they software projects or cities) become more efficient.

One of the claims of open source is that, given many eyes, all bugs are shallow. However, this claim only holds if there is a mechanism for registering and tackling the bugs. If a thousand people point out a problem, one may find oneself overwhelmed with reports – some critical, some duplicates, and some not problems at all but mistakes, misunderstandings or feature requests. Indeed, in recent conversations with open source community leaders, one of the biggest challenges and time sinks in a project is sorting through bugs and identifying those that are both legitimate and "new." Cities, particularly those with 311 systems that function much like the bug-tracking software used by open source projects, face a similar challenge: they have to ensure that each new complaint is both legitimate and genuinely "new," not a duplicate (e.g. are there two potholes at Broadway and 8th, or have two people called in about the same pothole?).
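The deduplication step a 311 system faces can be sketched in a few lines. This is an illustrative sketch only – the field names and the 25-metre radius are assumptions, not any city's actual 311 schema: two reports in the same category within a short distance of each other are treated as one underlying problem.

```python
import math

def distance_m(a, b):
    """Approximate ground distance in metres between two (lat, lon) points,
    using a flat-earth approximation that is fine at city scale."""
    lat = math.radians((a[0] + b[0]) / 2)
    dx = (b[1] - a[1]) * 111_320 * math.cos(lat)  # metres per degree longitude
    dy = (b[0] - a[0]) * 110_540                  # metres per degree latitude
    return math.hypot(dx, dy)

def flag_duplicates(reports, radius_m=25):
    """Mark each report as new or as a likely duplicate of an earlier one.

    reports: list of dicts with 'id', 'category', 'lat', 'lon'.
    """
    seen, out = [], []
    for r in reports:
        dup_of = next((s["id"] for s in seen
                       if s["category"] == r["category"]
                       and distance_m((s["lat"], s["lon"]),
                                      (r["lat"], r["lon"])) <= radius_m), None)
        if dup_of is None:
            seen.append(r)  # first sighting of this problem
        out.append({**r, "duplicate_of": dup_of})
    return out
```

A real system would add time windows and fuzzier matching, but even this crude pass shows how much of the triage is mechanical.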

The other month Diederik published the graph below, which used bug submission data for Mozilla Firefox tracked in Bugzilla to demonstrate that, over time, bug submitters on average become more efficient (blue line). What is interesting is that despite the improved average quality, the variability in the efficacy of individual bug submitters remained high (red line). The graph makes it appear as though variability increases as submitters become more experienced, but this is not the case: towards the left there are simply many more bug submitters, and they average each other out, creating the illusion of less variability. As you move to the right, the number of bug submitters with those levels of experience becomes quite small – sometimes only 1-2 per data point – so the variability simply becomes more apparent.
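The averaging effect behind that illusion is easy to reproduce. A rough sketch of the grouping such a graph requires – the data shape here is illustrative, not Bugzilla's actual schema: each submission is an (experience level, accepted) pair, and the sample size `n` is exactly what tells you whether a point's spread is meaningful or just noise from 1-2 submitters.

```python
from collections import defaultdict
from statistics import mean, pstdev

def efficiency_by_experience(submissions):
    """Group submissions by the submitter's experience level and compute the
    mean acceptance rate, its spread, and the sample size at each level.

    submissions: list of (experience_level, accepted) tuples, where
    accepted is 1 if the community ultimately fixed the bug, else 0.
    """
    buckets = defaultdict(list)
    for level, accepted in submissions:
        buckets[level].append(accepted)
    return {
        level: {"mean": mean(vals),      # the blue line
                "spread": pstdev(vals),  # the red line
                "n": len(vals)}          # small n => unreliable spread
        for level, vals in sorted(buckets.items())
    }
```

Plotting `mean` against `level` while ignoring `n` is precisely what makes the right-hand side of such a graph look wilder than it is.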

Consequently, the group encircled by the purple oval is very experienced and yet continues to submit bugs the community ultimately chooses to ignore or deems not worth fixing. Sorting through, testing and evaluating these bugs sucks up precious time and resources.

We are presently looking at more data to assess whether we can build a profile of what makes a bug submitter fall into this group (as opposed to being "average" or exceedingly effective). If one could screen for such bug submitters, a community might be able to better educate them and/or provide more effective tools, and thus improve their performance. In more radical cases – if the net cost of their participation were too great – one could even screen them out of the bug submission process. Improving the performance of this purple-oval group by even 25% would yield a significant improvement in the average (blue line). We look forward to talking and sharing more about this in the near future.
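As a sketch of what such screening might look like – the thresholds and field names below are purely hypothetical, not a profile we have validated – one could flag submitters whose experience is high but whose acceptance rate stays low, and route them to better documentation or tooling rather than the triage queue:

```python
def flag_for_outreach(submitter_stats, min_experience=50, max_accept_rate=0.2):
    """Return submitters who are experienced yet whose bugs are rarely
    accepted: the 'purple oval' group worth educating or tooling up.

    submitter_stats: dict of name -> (bugs_filed, bugs_accepted).
    Thresholds are illustrative; a real project would tune them to its data.
    """
    return sorted(
        name for name, (filed, accepted) in submitter_stats.items()
        if filed >= min_experience and accepted / filed <= max_accept_rate
    )
```

Newcomers are deliberately excluded by the experience floor, since their low acceptance rates are expected and usually improve on their own.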

As a secondary point, I feel it is important to note that we are still in the early days of the open source development model. My sense is that there are still improvements – largely through more effective community management – that can yield dramatic (as opposed to incremental) boosts in productivity for open source projects. This separates them again from proprietary models, which – as far as I can tell – can at the moment hope, at best, for incremental improvements in productivity. Thus, for those evaluating the costs of open versus closed processes, it is worth considering that the two approaches may be (and, in my estimation, are) evolving at very different rates.

(If someone from a city government is reading this and you have data regarding 311 reports, we would be interested in analyzing your data to see if similar results bear out – plus it may enable us to help you manage your call volume more effectively.)