Category Archives: technology

When good companies go bad – How Nokia Siemens helped Iran monitor its citizens

Last week my friend Diederik wrote a blog titled “Twittering to End Dictatorship: Ensuring the Future of Web-based Social Movements” in which he expressed his concern that (Western) corporations might facilitate oppressive regimes in wiretapping and spying on their citizens.

Now it appears that his concerns have turned out to be true. As he points out more recently on his blog:

  • The Wall Street Journal reports that Nokia and Siemens have supplied Iran with deep packet inspection technology to develop “one of the world’s most sophisticated mechanisms for controlling and censoring the Internet, allowing it to examine the content of individual online communications on a massive scale”. The Washington Post also reported this.
  • Siemens has not just sold the “Intelligence Platform” to Iran, but to a total of 60 countries. Siemens calls it “lawful interception”, but in countries with oppressive regimes everything that the government does is lawful.
  • The New York Times reports that China is requiring Internet censor software to be installed on all computers starting from July 1st.

Of course, being Nordic, the Nokia Siemens joint venture which developed and sold the monitoring centre to Iran has a strict code of ethics on its website that addresses issues of human rights, censorship and torture. In theory this should have guided their decision about selling equipment to Iran – obviously it did not.

So Diederik and his friends have started a petition to enable people to voice their concern over the failure of Nokia Siemens to adhere to its own code of conduct by selling advanced technology that helps the government of Iran control its citizens. I hope it takes off…

Open Source Journalism at the Guardian

A few months ago I wrote a piece called the Death of Journalism which talked about how – even if they find a new revenue model – newspapers are in trouble because they are fundamentally opaque institutions. This built on a piece Taylor Owen wrote called Missing the Link about why newspapers don’t understand (or effectively use) the internet.

Today Nicolas T. sent me this great link that puts some of the ideas found in both pieces into practice. Apparently, in the wake of the MP expense scandal in the United Kingdom, the Guardian has obtained 700,000 documents of MPs’ expenses and set out to identify individual claims. Most MPs probably imagined they could hide their expenses in the sea of data, for what newspaper could devote the resources to searching through them all?

No newspaper could, if by “newspaper” you mean only its staff and not its community of readers. The Guardian, interestingly, has taken on this larger community definition and has crowd sourced the problem by asking its readers to download and read one or a few documents and report back any relevant information.

What makes this exciting is that it is one example of how – by being transparent and leveraging the interest and wisdom of their readership – newspapers and media outlets can do better, more in-depth, cheaper and more effective journalism. Think of it. First, what was once an impossible journalistic endeavor is now possible. Second, a level of accountability previously unimaginable has been created. And third, a constituency of traditional (and possibly new) Guardian readers has been engaged – likely increasing their loyalty.

Indeed, in effect the Guardian has deputized its readers to be micro-journalists. This is the best example to date of a traditional (or mainstream) media institution warming to (or even embracing) at least a limited concept of “the citizen journalist.” I suspect that as institutions find ways to leverage readers and citizen journalists, the lines between journalist and reader will increasingly blur. Actually they will have to.

Why is that?

Because for the Guardian model to work, they had to strike an agreement (a bargain, as Clay Shirky calls it) with their community. I don’t think anyone would have been satisfied to do this work and then simply hand it back to the Guardian without the right to access their work, or the work of other micro-journalists. Indeed, following the open source model, the Guardian has posted the results for every document read and analyzed. This means that the “raw data” and analysis are available to anyone. Any one of these micro-journalists can now, in turn, read the assemblage of document reviews and write their own story about the MPs’ expenses. Indeed, I’m willing to wager that some of the most interesting stories about these 700,000 pages will not be written by staff of the Guardian but by other parties assessing the raw data.

So what does that make the Guardian? Is it a repository, a community coordinator, an editorial service…? And what does it make those who write those stories who aren’t employed by the Guardian? Caring about, or getting caught up in, these terms and definitions is interesting, but ultimately it doesn’t matter. The fact is that journalism is being reinvented, and this is one compelling demonstration of how the new model can tackle problems the old one couldn’t even contemplate.

Will Firefox’s JetPack let us soar too high?

Recently Mozilla introduced Jetpack, a Firefox add-on that makes it possible to post-process webpages within the web browser. For the non-techies out there, this means that one can now create small software programs that, if installed, can alter a webpage’s content by changing, adding or removing parts of it before it is displayed on your computer screen.

For the more technically minded, this post-processing of web pages is made possible because JetPack plugins have access to the Document Object Model (DOM). Since the DOM describes the structure and content of a web page, the software can manipulate the webpage’s content after the page is received from the web server but before it is displayed to the user. As a result static web pages, even the ones you do not control, can become dynamic mashups.
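For the more technically minded still, the kind of post-processing described above can be sketched in a few lines. This is an illustration of the idea only, not actual JetPack code: a real JetPack script would manipulate the browser’s live `document`, whereas this sketch uses a tiny hand-rolled node type so it stands alone.

```typescript
// A DOM-like tree node: tag, its text, and child nodes.
interface PageNode {
  tag: string;
  text: string;
  children: PageNode[];
}

// Recursively rewrite every text fragment that matches `pattern` –
// the same shape of operation a JetPack plugin could perform on a
// page after it arrives from the server but before it is displayed.
function rewriteContent(node: PageNode, pattern: RegExp, replacement: string): void {
  node.text = node.text.replace(pattern, replacement);
  for (const child of node.children) {
    rewriteContent(child, pattern, replacement);
  }
}

// Example: annotate every mention of "breaking news" on a static page.
const page: PageNode = {
  tag: "body",
  text: "",
  children: [
    { tag: "h1", text: "Breaking news: markets rally", children: [] },
    { tag: "p", text: "More breaking news coverage below.", children: [] },
  ],
};

rewriteContent(page, /breaking news/gi, "so-called breaking news");
// page.children[0].text is now "so-called breaking news: markets rally"
```

The point is that the owner of the page never consented to, and cannot see, this rewriting: it happens entirely on the reader’s machine.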

This may seem insignificant but it has dramatic implications. For example, imagine a JetPack plugin that overlays a website – say BarrackObama.com or FoxNews.com – with text bubbles that counter-spin a story when your mouse hovers over it. The next Republican nominee could encourage supporters to download such a hypothetical plugin and then direct them to Obama’s website, where each story could be re-spun and links for donating money to the Republican campaign could be proffered. They would, in short, dynamically use Obama’s webpage and content as a way to generate money and support. TPM could create a similar JetPack plugin for the FoxNews website, doing something similar to the title and body text of articles that were false or misleading.

Such plugins would have a dramatic impact on the web experience. First, they would lower costs for staying informed. Being informed would cease to be a matter of spending time searching for alternative sources, but a matter of installing the appropriate JetPack plugin. Second, every site would now be “hijackable” in that, with the right plugin a community could evolve that would alter its content without the permission of the site owner/author. On the flip side, it could also provide site owners with powerful community engagement tools: think open source editing of newspapers, open source editing of magazines, open source editing of television channels.

The ultimate conclusion, however, is that JetPack continues to tilt power away from website creators toward viewers. Webpage owners will have still less control over how their websites get viewed, used and understood. Effectively, anyone who can persuade people to download their JetPack plugin can reappropriate a website – be it BarrackObama.com, FoxNews.com, eBay, or even little old eaves.ca – for their own purposes without the permission of the website owner. How the web ecosystem, and website developers in particular, react to this loss of control will be interesting. Such speculation is difficult. Perhaps there will be no reaction. But one threat is that certain websites will place content within proprietary systems like Flash, where it would be more difficult for JetPack to alter their contents. More difficult to imagine, but worth discussing, is that some sites might simply not permit Firefox browsers to view them.

In the interim, a few obstacles need to be overcome before JetPack realizes its full potential. First, only a relatively small community of technically minded people can currently develop JetPack add-ons; however, once Jetpack becomes an integral part of the Firefox browser this community will grow. Second, at present installing a JetPack plugin triggers a stern security warning that will likely scare many casual users away. Mozilla has hinted at developing a trusted-friends system to help users determine whether a plugin is safe; such trust systems will probably be necessary to make JetPack a mainstream technology. If such a community can be built, and a system for sorting trusted from untrustworthy plugins can be developed, then Jetpack might redefine our web experience.

We are in for some interesting times with the launch of Firefox 3.5 and new technologies like JetPack around the corner!

Jetpack is available at jetpack.mozillalabs.com

Diederik van Liere helped write this post and likes to think the world is one big network.

10,000 hours and The Coming Online Talent Explosion

I’m about halfway through Gladwell’s Outliers: The Story of Success and, if his thesis and the research it is based on are valid, I think we are in for some exciting times in the online writing world.

Gladwell talks about how it takes about 10,000 hours to achieve mastery in an area, subject or practice. Referencing a study of musicians that sought to determine how many “natural” talents there were, Gladwell notes that:

“The curious thing about Ericsson’s study is that he and his colleagues couldn’t find any “naturals” – musicians who could float effortlessly to the top while practicing a fraction of the time that their peers did. Nor could they find “grinds”, people who worked harder than everyone else and yet just didn’t have what it takes to break into the top ranks. Their research suggested that once you have enough ability to get into a top music school, the thing that distinguishes one performer from another is how hard he or she works. That’s it. What’s more, the people at the very top don’t just work much harder than everyone else. They work much, much harder.”(H/T Tim Finin)

How much harder?

“In those first few years everyone practiced roughly the same amount, about two or three hours a week. But around the age of 8 real differences started to emerge. The students who would end up as the best in their class began to practice more than everyone else: 6 hours a week by age 9, 8 hours a week by age 12, 16 hours a week by age 14, and up and up until by the age of 20 they were practicing – that is, purposefully and single-mindedly playing their instruments with the intent to get better – well over 30 hours a week. In fact, by the age of 20 the elite performers had totaled 10,000 hours of practice over the course of their lives; by contrast, the merely good students had totaled 8,000 hours, and the future music teachers had totaled just over 4,000 hours.”

He then cites example after example of this trend. 10,000 hours – usually attained only after about 10 years – is a magic number.

Well, two years ago my friend Taylor and I wrote this piece about the 10th anniversary of blogging. Since the blogosphere is only about 12 years old there are not that many people who’ve been blogging for 10 years – moreover, the scant few who have are most likely those who work in, or are deeply interested in, Information Technology. If Gladwell is correct it means that virtually all bloggers (myself included, at only 3.5 years), and especially those without an IT background, are likely well short of the 10,000 hour mastery threshold.

This is exciting news. It means that despite the already huge number of great blogs and bloggers we are probably only experiencing a fraction of what is to come. Given blogging’s exponential growth I’d wager that the world is about 2–5 years away from an explosion in writing talent. Today all sorts of people who would never have previously written are writing blogs. Many are terrible, some are good, and fewer still are excellent. But what is important is that they are gaining experience and learning. With more people reaching that 10,000 hour mark, more talented people will also reach it – consequently, we should see more gifted writers. It is possible their talent will be restricted to blogs – but perhaps not. As these writers get more recognized, some will shift to books, or magazines, or whatever new medium exists by then.

All in all, the first half of the 21st century could be one of the greatest for writers – and as a result, for readers thereafter too. The internet’s writing renaissance could be upon us soon.

The Open Cities Blog on the Creative Exchange

Excited to let everyone know that I’ll be blogging at the Creative Exchange on Open Cities. I’ll continue to blog here 4 times a week and the pieces I post there I’ll cross-post here as well.

It’s an opportunity to talk about how openness and transparency can/will change our cities to a wider audience.

Wish me luck. Here was my first post.

Creating Open Cities

Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an “architecture of participation,” and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.

Tim O’Reilly

To the popular press “hacker” means someone who breaks into computers. Among programmers it means a good programmer. But the two meanings are connected. To programmers, “hackers” connotes mastery in the most literal sense: someone who can make a computer do what he wants – whether the computer wants to or not.

Paul Graham, Hackers & Painters

Welcome to the Open Cities blog on CCE. My name is David Eaves and I’ve been writing, speaking, and thinking about openness, citizen engagement and public policy for a number of years. Most recently, I worked to help push forward the City of Vancouver motion that requires the city to share more data, adopt open standards, and treat open source and proprietary software equally.

Cities have always been platforms – geographic and legal platforms upon which people collaborate to create enterprises, exchange ideas, educate themselves, celebrate their culture, start families, found communities, and raise children. Today the power of information technology is extending this platform, granting us new ways to collaborate and be creative. As Clay Shirky notes in Here Comes Everybody, this new (dis)order is powerful. For the meaning and operation of cities, it will be transformative.

How transformative? The change created by information technology is driving what will perhaps be seen as the greatest citizen-led renewal of urban spaces in our history. Indeed, I believe it may even be creating a new type of city, one whose governance models, economies and notions of citizenship are still emerging, but different from their predecessors. These new cities are Open Cities: cities that, like the network of web 2.0, are architected for participation and so allow individuals to create self-organized solutions and allow governments to tap into the long-tail of public policy.

And just in the nick of time. To succeed in the 21st century, cities will have to simultaneously thrive in a global economy, adapt to climate change, integrate a tsunami of rural and/or foreign migrants, as well as deal with innumerable other challenges and opportunities. These issues go far beyond the capacity and scope of almost any government – not to mention the all-too-often under-resourced City Hall.

Open Cities address this capacity shortfall by drawing on the social capital of their citizens. Online, city dwellers are hacking the virtual manifestation of their city which, in turn, is giving them the power to shape the physical space. Google Transit, DIYcity, and Apps for Democracy are great urban hacks; they allow cities to work for citizens in ways that were previously impossible. And this is only the beginning.

Still more exciting, hacking is a positive-sum game. The more people hack their city – not in the misunderstood popular-press sense of breaking into computers, but in the (sometimes artful, sometimes amateur) sense of making a system (read: city) work for their benefit – the more useful data and services they create and remix. Ultimately, Open Cities will be increasingly vibrant and safe because they are hackable. This will allow their citizens to unleash their creativity, foster new services, find conveniences and efficiencies, notice safety problems, and build communities.

In short, the cities that harness the collective ingenuity, creativity, and energy of their citizenry will thrive. Those that don’t – those that remain closed – won’t. And this divide – open vs. closed – could become the new dividing line of our age. It is through this lens that this blog will look at the challenges and opportunities facing cities, their citizens, and institutions. Let’s see who’s open, how they’re getting open, and what it will all mean.

Why collaborative skills matter in open source

For the past several years now I’ve been talking about how community management – broadly defined as enhancing a community’s collaborative skills, establishing and modeling behaviour/culture, and embedding development tools and communications mediums with prompts that “nudge” us towards collaborative behaviour – is imperative to the success of open source communities. (For those interested, my FSOSS 2008 talk on the subject has been slidecast here, and is on Google Video here.)

Re-reading Shirky’s latest book, Here Comes Everybody, has re-affirmed my thinking. Indeed, it’s made me more aggressive. Why? Consider these two paragraphs:

This ability of the traditional management structure to simplify coordination helps answer one of the most famous questions in all of economics: If markets are such a good idea, why do we have organizations at all? Why can’t all exchanges of value happen in the market? This question originally was posed by Ronald Coase in 1937 in his famous paper “The Nature of the Firm,” wherein he also offered the first coherent explanation of the value of hierarchical organization. Coase realized that workers could simply contract with one another, selling their labor, and buying the labor of others in turn, in a market, without needing any managerial oversight. However, a completely open market for labor, reasoned Coase, would underperform labor in firms because of the transaction costs, and in particular the costs of discovering the options and making and enforcing agreements among the participating parties. The more people are involved in a given task, the more potential agreements need to be negotiated to do anything, and the greater the transaction costs…

And later, Shirky essentially describes the thesis of his book:

But what if transaction costs don’t fall moderately? What if they collapse? This scenario is harder to predict from Coase’s original work, as it used to be purely academic. Now it’s not, because it is happening, or rather it has already happened, and we’re starting to see the results.

My conclusion: the lower the transaction costs, the more the playing field will favour self-organizing systems like open source communities and the less it will favour large proprietary producers.

This is why open source communities should (and do) work collectively to reduce transaction costs among their members. Enabling the further collapse of transaction costs tilts the landscape in our favour. Sometimes this can be done in the way we architect the software. Indeed, this is why, in Firefox, Add-Ons are so powerful. The Add-On functionality dramatically reduces transaction costs by creating a dependable and predictable platform, essentially allowing coders to work in isolation from one another (the difference between collaborative and cooperative work). This strategy has been among the most successful. It is important and should be pursued, but it cannot help collapse transaction costs for all parts of a project – especially the base code.
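The platform argument can be sketched abstractly. This is a hypothetical illustration of the pattern, not Mozilla’s actual Add-On API: the platform fixes one stable contract, and each add-on author codes against that contract without ever needing to negotiate with any other author – the coordination (and its transaction costs) lives in the platform, not between contributors.

```typescript
// The one stable contract every add-on author codes against.
interface AddOn {
  name: string;
  // Each add-on transforms page content independently of every other add-on.
  transform(content: string): string;
}

// The platform composes registered add-ons; no add-on needs to know
// that any other add-on exists.
class Platform {
  private addOns: AddOn[] = [];

  register(addOn: AddOn): void {
    this.addOns.push(addOn);
  }

  render(content: string): string {
    return this.addOns.reduce((text, a) => a.transform(text), content);
  }
}

// Two authors who have never coordinated with each other:
const platform = new Platform();
platform.register({ name: "shout", transform: (s) => s.toUpperCase() });
platform.register({ name: "banner", transform: (s) => `*** ${s} ***` });

const output = platform.render("hello web");
// output: "*** HELLO WEB ***"
```

The negotiation that Coase identified as a transaction cost is paid once, in the design of the `AddOn` interface, rather than pairwise between every contributor.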

But what more can be done? There are likely diminishing returns to re-architecting the software and in finding new, easier ways, to connect developers to one another. The areas I think offer real promise include:

  • fostering cultures within open source communities that reward collaborative (low transaction cost) behaviour,
  • promoting leaders who model collaborative (low transaction cost) behaviour
  • developing tools and communications mediums/methods that prompt participants to improve the signal-to-noise ratio, minimize misunderstandings, limit unnecessary conflict, and help resolve differences quickly and effectively (the idea being that all of these outcomes lower transaction costs).

This is why I continue to think about how to port over the ideas, theories and tools from the negotiation/collaboration field, into the open source space.

For open source communities, eliminating transaction costs is a source of strategic advantage – one that we should find ways to exploit ruthlessly.

Neo-Progressivism watch: online collectivism as the 3rd way that works

Just finished reading Kevin Kelly’s piece The New Socialism: Global Collectivist Society Is Coming Online in Wired Magazine. It talks about the same themes Taylor and I were trying to surface in Progressivism’s End, and I suspect we agree with Kelly in many regards.

Taylor and I talked about how the left (now old left) killed progressive politics and how progressive politics is re-emerging in new forms (I had wanted to use Mozilla as a mini-case, but came to it too late). Kelly’s piece deals less with the past and focuses exclusively on the nascent politics that is emerging in the online space:

We’re not talking about your grandfather’s socialism. In fact, there is a long list of past movements this new socialism is not. It is not class warfare. It is not anti-American; indeed, digital socialism may be the newest American innovation. While old-school socialism was an arm of the state, digital socialism is socialism without the state. This new brand of socialism currently operates in the realm of culture and economics, rather than government—for now.

When masses of people who own the means of production work toward a common goal and share their products in common, when they contribute labor without wages and enjoy the fruits free of charge, it’s not unreasonable to call that socialism.

Maybe. I think the socialism label takes the argument a bit far. Kelly’s piece portrays open source and collective online projects as disconnected from capitalism. Certainly in the case of open source, this is a strained argument. While motivations vary, many people who fund and contribute to Firefox do so because having an open browser allows the web – and all the commerce conducted on it – to be open and competitive. Same with Linux: between 75% and 90% of contributors are paid by their employers to contribute. As Amanda McPherson, director of marketing at the Linux Foundation, notes: “They’re not the guys in the basements, the hobbyists.” Consequently, many open-source projects are about preserving an open platform so that value can shift to another part of the system. It is about allowing for better, more efficient and more open markets – not about ending them.

Still more difficult to believe is Kelly’s assertion that “The more we benefit from such collaboration, the more open we become to socialist institutions in government.” If there is one political philosophy that is emerging among the online coders and hackers it isn’t socialism – it is libertarianism. I see no evidence that socialism is making a comeback – this is where Kelly’s use of the term hurts him the most. If we are seeing anything it is the re-emergence of the values of progressive politics: a desire for meritocracy, openness, transparency, efficiency and equality of opportunity. The means of achieving this is shifting, but not back towards socialism of any form.

One area where I strongly agree with Kelly is that neo-progressivism (or, as he prefers, the new socialism) is strongly pragmatic:

On the face of it, one might expect a lot of political posturing from folks who are constructing an alternative to capitalism and corporatism. But the coders, hackers, and programmers who design sharing tools don’t think of themselves as revolutionaries. No new political party is being organized in conference rooms—at least, not in the US. (In Sweden, the Pirate Party formed on a platform of file-sharing. It won a paltry 0.63 percent of votes in the 2006 national election.)

Indeed, the leaders of the new socialism are extremely pragmatic. A survey of 2,784 open source developers explored their motivations. The most common was “to learn and develop new skills.” That’s practical. One academic put it this way (paraphrasing): The major reason for working on free stuff is to improve my own damn software. Basically, overt politics is not practical enough.

As we wrote in an early draft of Progressivism’s End:

Having lost, or never gained, hope in either partisan politics or the political institutions that underlie the modern state, much of this generation has tuned out. Driven by outcomes, neo-progressives are tired of the malaise of New Deal institutions. Believing, but with a healthy dose of skepticism, in both the regulatory capacity of the state and the effectiveness of the market economy, they are put off by the absolutism of both the right and left. And, valuing pragmatism over ideology, they are embarrassed by partisan bickering.

The simple fact is that in a world that moves quickly, it is easier than ever to quickly ascertain what works and what does not. This gives pragmatists a real advantage over theoretically driven ideologues who have a model of the world they want reality to conform to. Kelly may be right that, at some point, this neo-progressive (or new-socialist) movement will get political. But I suspect that will only be the case if their modes of production are threatened (hence the copyright wars). Otherwise I suspect they will simply (continue to) ignore the political whenever possible – why get politicians involved if you can achieve results without them?

How the Mighty Fall vs. The Black Swan

I’ve almost finished listening to Nassim Nicholas Taleb’s The Black Swan, a book about large-impact, hard-to-predict, rare events that lie beyond the realm of normal expectations. At the same time, Tim O’Reilly caused me to stumble upon this article previewing Jim Collins‘ (author of Good to Great and Built to Last) new book “How the Mighty Fall.”

In some ways the two authors could not be more different. Taleb writes in a harsh, sarcastic, cutting tone that heaps scorn on many of the world’s finest minds as well as, one senses, the book’s readers. His harshest barbs are reserved for academics, whom he often sees as being too interested in theory to help with real-world problems. I’ve never seen Taleb in person or on video, but after listening to The Black Swan I can’t help but see him as a lethal and angry intellectual street fighter, mad at a world that didn’t notice his brilliance earlier.

Collins, in contrast, reads like a classic business academic writer who has gone mainstream. He never offends, and his tone is never harsh – he seems like the archetype West Coast business school professor: smart, driven and direct, but slightly geeky in that friendly way and not overly intense (hence West Coast).

But while their styles (and, I hypothesize, personalities) are dramatically different, they overlap in some curious and interesting ways. Both are concerned with business issues and both are writing about outliers. Taleb is concerned with the outlying events that can completely alter one’s world. Collins is concerned with outlier companies – those that experience impressive and continuous success. And while I’m sure there are lots of areas where the two would disagree, it is interesting to focus on where the two almost completely overlap.

The first appears where Collins talks about the first symptom of a company going into decline: Hubris Born of Success:

“The best leaders we’ve studied never presume they’ve reached ultimate understanding of all the factors that brought them success. For one thing, they retain a somewhat irrational fear that perhaps their success stems in large part from fortuitous circumstance. Suppose you discount your own success (“We might have been just really lucky/were in the right place at the right time/have been living off momentum/have been operating without serious competition”) and thereby worry incessantly about how to make yourself stronger and better-positioned for the day your good luck runs out. What’s the downside if you’re wrong? Minimal: If you’re wrong, you’ll just be that much stronger by virtue of your disciplined approach. But suppose instead you succumb to hubris and attribute success to your own superior qualities (“We deserve success because we’re so good/so smart/so innovative/so amazing”). What’s the downside if you’re wrong? Significant. You just might find yourself surprised and unprepared when you wake up to discover your vulnerabilities too late.”

This whole paragraph sounds like a friendly version of Taleb. Praising leaders who don’t claim to understand the full complexity of their world, their business or even their own success? Classic Taleb.

More interesting, however, is the emphasis on luck. Taleb regularly argues that luck is (at a minimum) underestimated, and more often ignored outright, as a factor in a business’s success. No CEO wants to stand up and say: yes, we became a $10B company not just because we were good, but because we were lucky – it doesn’t exactly send a positive message to shareholders (nor does it justify their enormous bonus). But Collins not only agrees that luck is a factor; he argues that good companies admit to themselves that luck was a factor.

In hockey you hear people say you’ve got to be good to be lucky and lucky to be good. The point is, if you work hard, bounces will eventually come your way and you’ve got to be good enough to pounce on them and make those opportunities count. Begin to think you don’t need luck, and you stop seeing the opportunities and begin to believe you are inherently better than everyone else. Fact is, you’re not. You’ve got to work. Hard. And hope for some luck. Even then, you probably never become Google.

The second interesting place of overlap is in Collins’ discussion of how companies begin to deny that they are at risk or in peril.

“Bill Gore, founder of W.L. Gore & Associates, articulated a helpful concept for decision-making and risk-taking, what he called the “waterline” principle. Think of being on a ship, and imagine that any decision gone bad will blow a hole in the side of the ship. If you blow a hole above the waterline (where the ship won’t take on water and possibly sink), you can patch the hole, learn from the experience, and sail on. But if you blow a hole below the waterline, you can find yourself facing gushers of water pouring in, pulling you toward the ocean floor. And if it’s a big enough hole, you might go down really fast, just like some of the financial firm catastrophes of 2008. To be clear, great enterprises do make big bets, but they avoid big bets that could blow holes below the waterline.”

In The Black Swan, Taleb has an entire piece on assessing risk which parallels this quote. He notes that too often business people – and, in particular, financial types – focus on predicting the likelihood of an event, even when the prediction model is deeply flawed or essentially meaningless. Since assessing the likelihood of an event is often impossible, Taleb argues it becomes much more important to ascertain the likely magnitude of its impact. So avoid doing things, or exposing yourself to risks, that, if they go wrong, will blow out your hull. Indeed, The Black Swan is essentially a 250-page book on this paragraph.
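The waterline idea can be made concrete with a toy calculation (the numbers here are my own invention, not from Collins or Taleb): two bets can look comparable through the lens of probability alone, yet one merely dents the hull while the other sinks the ship.

```python
# Toy illustration of the "waterline" principle: probability of failure
# alone tells you little -- the magnitude of the loss is what matters.
# (Illustrative numbers only; not drawn from either book.)

def expected_loss(prob_of_failure, loss_if_failure):
    """The classic expected-value view of risk."""
    return prob_of_failure * loss_if_failure

# An "above the waterline" bet: fails often, but the hole is patchable.
small_bet = expected_loss(0.20, 1_000_000)

# A "below the waterline" bet: fails rarely, but the loss sinks the firm.
big_bet = expected_loss(0.01, 500_000_000)

print(small_bet)  # 200000.0 -- survivable even when it happens
print(big_bet)    # 5000000.0 -- and if the 1% chance hits, the firm is gone
```

The expected-loss numbers understate the asymmetry: the first bet can be absorbed and learned from many times over, while a single occurrence of the second is fatal, which is exactly why Taleb argues magnitude, not probability, should drive the decision.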

Open Cities: Popularity lessons for municipal politicians

Last Thursday I posted the Vancouver City motion that is being introduced today.

Prior to posting the motion, several of my friends wondered whether open data, open cities and open source were niche issues – ones that wouldn't attract the attention or care of the media, not to mention citizens. I'm not sure that this is, as of yet, a mainstream issue, but there is a clear, vocal, engaged and growing constituency – one that is surprisingly broad – supporting it.

For politicians (who are often looking for media attention), open advocates (who are looking for ways to get politicians' attention) and others (who care about, and want access to, some of the data the city collects), there are real wins to be had by putting forward a motion such as this.

To begin with, let’s look at the media and broader attention this motion has garnered to date:

First, a search of Twitter for the terms Vancouver and Open shows hundreds upon hundreds of tweets from around the world celebrating the proposal. Over the weekend, I tried to track down the various tweets relating to the motion; they number at least 500, and possibly exceed 1,000. Most interesting of all, some tweets came from people saying they wished they lived in Vancouver.

As an aside, have no doubt: City Hall sees this initiative in part as an effort to attract and retain talent. Paul Graham, who created a multimillion-dollar software company and is now a venture capitalist, summed it up best: "Great [programmers] also generally insist on using open source software. Not just because it's better, but because it gives them more control… This is part of what makes them good: when something's broken, they need to fix it. You want them to feel this way about the software they're writing for you." Vancouver is not broken – but it could always be improved, and Twitter confirms a suspicion I have: that programmers, and creative workers in all industries, are attracted to places that are open because openness allows them to participate in improving where they live. Having a city that is attractive to great software programmers is a strategic imperative for Vancouver. Where there are great software programmers, there will be big software companies and startups.

Blogs, of course, have also started to get active. A sampling includes locals in the tech sector, such as David Asher of Mozilla and Duane Nickull of Adobe, as well as others interested in geographic data. Academics and public thinkers also took note on their blogs.

Then, the online tech magazines began to write about it too. ReadWriteWeb wrote this piece, ZDnet had this piece and my original blog post went to orange on Slashdot (a popular tech news aggregator).

Of course, traditional media was in the mix too. The Straight's tech blog was onto the story very early with this piece; a national newspaper, the Globe and Mail, had this piece by Frances Bula (which has an unfortunately sensationalist title that has nothing to do with the content); and finally, today, the Vancouver Sun published this piece.

Still more interesting will be the number of supportive letters and emails, and the diversity of their sources. I've already heard of supportive letters coming from local technology companies, large international tech companies, local gardening groups and a real estate consulting firm. Each of these diverse actors sees ways to use city data to help lower the costs of their business, conduct better analysis or facilitate their charitable work.

In short, issues surrounding the open city – open data, open source software and open standards – are less and less restricted to the domain of a few technology enthusiasts. There is a growing and increasingly vocal constituency in support.

Update May 20th, 2009 more media links:

The libertarian Western Standard wrote this positive piece (apparently this is the City of Vancouver's only good initiative).

I did a CBC radio interview on the afternoon of May 19th during the show On the Coast with Stephen Quinn. The CBC also published this piece on its news site.

Vancouver enters the age of the open city

A few hours ago, Vancouver’s city government posted the agenda to a council meeting next week in which this motion will be read:

MOTION ON NOTICE

Open Data, Open Standards and Open Source
MOVER: Councillor Andrea Reimer
SECONDER: Councillor

WHEREAS the City of Vancouver is committed to bringing the community into City Hall by engaging citizens, and soliciting their ideas, input and creative energy;

WHEREAS municipalities across Canada have an opportunity to dramatically lower their costs by collectively sharing and supporting software they use and create;

WHEREAS the total value of public data is maximized when provided for free or where necessary only a minimal cost of distribution;

WHEREAS when data is shared freely, citizens are enabled to use and re-purpose it to help create a more economically vibrant and environmentally sustainable city;

WHEREAS Vancouver needs to look for opportunities for creating economic activity and partnership with the creative tech sector;

WHEREAS the adoption of open standards improves transparency, access to city information by citizens and businesses and improved coordination and efficiencies across municipal boundaries and with federal and provincial partners;

WHEREAS the Integrated Cadastral Information Society (ICIS) is a not-for-profit society created as a partnership between local government, provincial government and major utility companies in British Columbia to share and integrate spatial data to which 94% of BC local governments are members but Vancouver is not;

WHEREAS digital innovation can enhance citizen communications, support the brand of the city as creative and innovative, improve service delivery, support citizens to self-organize and solve their own problems, and create a stronger sense of civic engagement, community, and pride;

WHEREAS the City of Vancouver has incredible resources of data and information, and has recently been awarded the Best City Archive of the World.

THEREFORE BE IT RESOLVED THAT the City of Vancouver endorses the principles of:

  • Open and Accessible Data – the City of Vancouver will freely share with citizens, businesses and other jurisdictions the greatest amount of data possible while respecting privacy and security concerns;
  • Open Standards – the City of Vancouver will move as quickly as possible to adopt prevailing open standards for data, documents, maps, and other formats of media;
  • Open Source Software – the City of Vancouver, when replacing existing software or considering new applications, will place open source software on an equal footing with commercial systems during procurement cycles; and

BE IT FURTHER RESOLVED THAT in pursuit of open data the City of Vancouver will:

  • Identify immediate opportunities to distribute more of its data;
  • Index, publish and syndicate its data to the internet using prevailing open standards, interfaces and formats;
  • Develop appropriate agreements to share its data with the Integrated Cadastral Information Society (ICIS) and encourage the ICIS to in turn share its data with the public at large
  • Develop a plan to digitize and freely distribute suitable archival data to the public;
  • Ensure that data supplied to the City by third parties (developers, contractors, consultants) are unlicensed, in a prevailing open standard format, and not copyrighted except if otherwise prevented by legal considerations;
  • License any software applications developed by the City of Vancouver such that they may be used by other municipalities, businesses, and the public without restriction.

BE IT FINALLY RESOLVED THAT the City Manager be tasked with developing an action plan for implementation of the above.

A number of us have been working hard to get this motion into place. While several cities, like Portland, Washington DC and Toronto, have pursued some of the ideas outlined in this motion, none has codified them or been as comprehensive and explicit in its intentions.

I certainly see this motion as the cornerstone of transforming Vancouver into an open city – or, as my friend Surman puts it, a city that thinks like the web.

At a high level, the goal behind this motion is to enable citizens to create, grow and control the virtual manifestation of their city so that they can in turn better influence the real physical city.

In practice, I believe this motion will foster several outcomes, including:

1. New services and applications: As data is opened up, shared and has APIs published for it, our citizen coders will create web-based applications that will make their lives – and the lives of other citizens – easier, more efficient and more pleasant.

2. Tapping into the long tail of public policy analysis: As more and more Vancouverites look over the city's data, maps and other information, citizens will notice inefficiencies, problems and other issues whose resolution could save money, improve services and generally make for a stronger, better city.

3. Creating new businesses and attracting talent: As the city shares more data and uses more open source software, new businesses that create services out of this data and that support this software will spring up. More generally, I think this motion could, over time, attract talent to Vancouver. Paul Graham once said that great programmers want great tools and interesting challenges. We are giving them both: the challenge of improving the community in which they live, and the tools and data to help make it better.
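To make the first outcome above a little more tangible, here is a sketch of the kind of thing a citizen coder could build in a few lines once the city syndicates its data. The feed format, field names and values below are entirely hypothetical – no city API existed when this was written:

```python
import json

# Hypothetical sample of what an open-data feed from the city might
# return (say, drinking fountain locations). The schema and values
# here are invented purely for illustration.
feed = json.loads("""
[
  {"name": "Fountain A", "lat": 49.2827, "lon": -123.1207},
  {"name": "Fountain B", "lat": 49.2606, "lon": -123.2460}
]
""")

def nearest(features, lat, lon):
    """Return the feature closest to (lat, lon).

    Uses a crude squared-difference comparison, which is fine for
    ranking points within a single city.
    """
    return min(
        features,
        key=lambda f: (f["lat"] - lat) ** 2 + (f["lon"] - lon) ** 2,
    )

# A "find the nearest fountain to me" app reduces to one call:
print(nearest(feed, 49.28, -123.12)["name"])  # Fountain A
```

The point is less this particular toy than the pattern: once data is published in an open, machine-readable format, applications like this become an afternoon's work rather than a procurement project.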

For those interested in appearing before City Council to support this motion, details can be found here. The council meeting is this Tuesday, May 19th at 2pm PST. You can also watch the proceedings live.

For those interested in writing a letter in support of the motion, send your letter here.