
Vancouver's Open Data Portal: Use it, or Lose it.

As some of you saw yesterday via Twitter, Vancouver has launched a beta version of its open data portal. This is a major milestone for Vancouver on several levels. It is a testament to our politicians, who had the vision and foresight to embrace this idea. It is a tribute to the city’s staff who have worked unbelievably hard to make this project come alive so quickly. It is a triumph for those of us who advocate and have been working with the city to move us towards open government and government as platform. Finally, it represents an enormous opportunity for coders and citizens alike, and it is to this group that I’d like to address this blog post.

The Data Portal represents an opportunity for citizens, especially citizen coders, to help create a City that Thinks Like the Web: a city that enables citizens to create and access collective knowledge and information to create new services, suggest new ideas, and identify critical bugs in the infrastructure and services, among a million other possibilities.

But open data is only part of the puzzle. Yes, our data is now beginning to be set free. But we have to use it.

If not, we’ll risk losing it.

I wish I could say that the city will share data no matter what and that political support will continue forever. But the fact is, municipal resources are limited. While the potential of open data is enormous, we need more than potential; we need some wins. More importantly, we need an active and engaged community working to make Vancouver better, more efficient, and more interesting because of our open data. We need to show politicians and public servants in Vancouver, but also in Edmonton, Calgary, Ottawa, Toronto, Montreal, Nanaimo, Moncton and other places across the country that citizens want access to data, and that if we get it, we will help their city (or province, or country) come alive in new and inventive ways.

Back in June, shortly after City Council passed Open3 (the nickname for the Open Motion), I gave this presentation to both City staff and at Open Web Vancouver. In it I described how “the bargain” Clay Shirky says exists on every successful web 2.0 site also exists in cities that want to think like the web.

In our case the bargain is simple: On one side, the city agrees to share as much data as it possibly can, in open formats, as quickly as it can. On the other side, the community – and in particular citizen coders – must make that data come alive in applications, websites and analysis. The city has taken the first step in fulfilling its side of the bargain. (And yes, we need to keep adding more data; there is work to be done.) Now it is time to activate the other half of the bargain. If we don’t, we put the deal at risk.

So what can you do?

First, you can code up an app, or find ways to help those who are building one. Indeed, there is going to be a Hackathon tomorrow evening at the Vancouver Archives to do just this. A number of projects are already underway that you can join – or start one yourself! I will be there, and I encourage you to swing by too.

Second, if you’d like to build an application, but the dataset you need is currently not available, then complete the city’s Open Data priority survey!

Third, come add ideas, resources and projects to the Vancouver Open Data Wiki.

I’m enormously excited to see what evolves next. As many of you know, I’ve been advising the Mayor’s Office on open data and open government for several months now – and through my work with them and with city staff, I’ve been deeply impressed by the energy and commitment that I’ve seen. As far as I know, only three major cities have created data portals such as this, and to do this in three months is incredible. Over the next few days I’m going to share some more thoughts on what the Open Data portal means for Vancouver. If you get a chance I hope you’ll send me your thoughts as well, or post some to your own blog if you have one.

Many eyes make all bugs shallow – especially when the eyes get smarter

[Please bear with me for the next 24 hours – I’m moving to a new hosting company, and there may be some glitches here and there to work out. Thanks.]

My friend Diederik has been writing a number of cool posts over at his blog Network-labs.org. For example he has an awesome jetpack plugin that predicts the likelihood a bug will get fixed and a plugin that allows users to rate how badly a bug is being flamed (to spot counterproductive behaviours).

But he recently published a very cool post that uses data from Mozilla bug submissions over the past decade to demonstrate that bug submitters become more proficient over time, though there are outliers who are a “drag” on the system. More importantly, I believe his data can be interpreted in another way: that, with some minor investment (particularly some simpler vetting screens placed before Bugzilla), bug submitters could learn faster.

For example, a landing screen that asks you if you’ve ever submitted a bug before might take newbies to a different page where the bugzilla process is explained in greater detail, the fact that this is not a support site is outlined, and some models of good “submissions” are shared (along with some words of encouragement). By segmenting newbies we might ease the work burden on those who have to vet the bugs.

I think this also has implications for cities and 311 systems (systems that essentially allow citizens to report problems – or bugs – with city infrastructure or services). There will inevitably be people who become heavy users of 311 – the goal is not to get frustrated with them, but to move them up a learning curve of submitting helpful and useful information as quickly as possible. Ignoring them will not work. Strategies for engagement, including understanding what is motivating them and why they continue to use the system in an ineffective way, could help save the city resources and help foster a larger army of citizens who use their spare social capital to make the city a better and more resilient space.

SXSWi Panel: Fostering Collaborative Open Source Communities

Yesterday I saw this academic journal article and was reminded of how an individual’s behaviour can negatively impact a group’s productivity. In the article “Overlooked but not untouched: How incivility reduces onlookers’ performance on routine and creative tasks,” in Organizational Behavior and Human Decision Processes (109: 29-44), Amir Erez describes how even just witnessing rudeness diminished creativity, increased one’s own negative behaviour, and damaged productivity and short-term memory.

This is a perfect example of why I believe we need open source communities to foster collaborative cultures that nudge people to engage in positive and constructive ways.

With an eye to discussing this further, I’ve put together a presentation proposal for SXSWi in which I’d like to build on my FSOSS presentation (which has logged over 15,000 views since going up on SlideShare.net two years ago) on how the skills and tools from the field of negotiation and collaboration can help improve community management and productivity in open source communities. If this sounds at all interesting to you, I hope you’ll consider going to the SXSWi Panel Picker website and voting for this panel.

Since FSOSS 2008 I’ve done more research and work in the area and so have more examples to share from the open source space. In addition, I’ve been working with Diederik van Liere at the University of Toronto’s business school to gather data on how behaviour impacts an open source community’s effectiveness.


Fostering Collaborative Open Source Communities


Community management is a core competency of open source. So what skills, tools and culture can facilitate and enable collaboration? Drawing from negotiation theory, David shares what open source project participants can do to foster sustainable and effective collaborative communities where conflict is productive, not soul-sucking or time consuming.

Questions Answered:

  1. What skills does an open source project leader need?
  2. How do I minimize destructive conversations?
  3. How can I encourage participation in my open source project?
  4. How do I enable members of my open source community to work together better?
  5. What is negotiation theory?
  6. Someone is being a troll in my discussion group. What do I do?
  7. How can I attract more users to my open source project?
  8. How can I make my open source project contributors more effective?
  9. I don’t like arguing with people; what should I do?
  10. I think I may be abrasive; what should I do?


Community / Online Community, Open Source, Self-Help / Self-Improvement, User Experience

How to Engage Citizens on a Municipal Website…

Sometimes it’s nice to be small. The City of Nanaimo has been pushing the envelope on open data and open government for a number of years now.

Recently, I was directed to their new Council Agendas and Minutes webpage. I recommend you check it out.

Here’s why.

At first blush the site seems normal. There is the standard video of the council meeting (cue cheesy local cable access public service announcement), but the meeting minutes underneath are actually broken down to the second, and by clicking on them you can jump straight to that moment in the meeting.

As anyone who’s ever attended a City Council meeting (or the legislature, or parliament) knows, the 80/20 rule is basically always in effect. About 80% of the time the proceedings are dead boring, and about 20% (often much less) of the time they are exciting or, more importantly, pertinent to you. One challenge with getting citizens engaged at the local level is that they often encounter a signal-to-noise problem: the “noise” (issues a given citizen doesn’t care about) drowns out the “signal” (the relatively fewer issues they do care about).

The City of Nanaimo’s website helps address this problem. It enables citizens to find what matters to them without having to watch or scroll through a long and dry council meeting. Better still, they are given a number of options by which to share that relevant moment with friends, neighbours, allies or colleagues via twitter, facebook, delicious or any number of other social media tools.
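Under the hood the trick is small: since each minute item is stamped to the second, a shareable link is just the meeting video’s URL plus a seconds offset. A minimal sketch of the idea — the URL, timestamp format and query parameter here are invented for illustration, not Nanaimo’s actual implementation:

```python
# Hypothetical: turn a minute item's "HH:MM:SS" timestamp into a link that
# jumps straight to that moment in the council-meeting video.
VIDEO_URL = "https://example.org/council/2009-06-08"  # placeholder URL

def jump_link(timestamp: str) -> str:
    """Convert an HH:MM:SS timestamp into a seconds-offset deep link."""
    h, m, s = (int(part) for part in timestamp.split(":"))
    return f"{VIDEO_URL}?t={h * 3600 + m * 60 + s}"

print(jump_link("01:12:30"))  # https://example.org/council/2009-06-08?t=4350
```

Any sharing button then only needs to pass this link along to twitter, facebook or delicious.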

One might be wondering: can my city afford such a whiz-bang setup?

Excellent question.

Nanaimo’s modest size (it has 78,692 citizens) suggests a modest IT budget. So I asked Chris McLuckie, a City of Nanaimo public servant who worked on the project. He informed me that the system was built in-house by him and another city staff member, that it uses off-the-shelf hardware and software and so cost under $2,000, and that it took two weeks to code up.

Two weeks?

No million dollar contract? No 8 month timeline? No expensive new software?

No. Instead, if you’re smart, you might find a couple of local creative citizen-hackers to put something together in no time at all.

And what’s more: because Chris and the City of Nanaimo want to help more cities learn how to think like the web, I bet that if the IT director of any city (or legislative body) asked nicely, they would just give them the code.

So how open is your city? And if it isn’t, does it have $2,000 lying around to change that?

A Case Study in Open Government: The Burrard Bridge Trial


On Monday, July 13th the City of Vancouver began the Burrard Bridge lane trial. For those unfamiliar with the trial, the Burrard Bridge is a six-lane bridge that connects the downtown core of Vancouver with one of the city’s major suburban (but still relatively dense) neighbourhoods.

Historically bikers and pedestrians have shared the narrow sidewalks on either side of the bridge. This has resulted in a number of dangerous accidents (the Burrard bridge has more cyclist accidents than any other bridge in the city) and deters cyclists from using the bridge. During the trial the three vehicle lanes headed into downtown have remained unchanged. However, one lane headed out of downtown has been converted to a protected cycling lane.

Pre-trial: cyclists and pedestrians share a narrow sidewalk

Present: southbound cyclists, northbound cyclists, and pedestrians each have their own sidewalk or lane.

A Case Study in Open Government

So what does this have to do with open government?

To assess the trial’s impact, the city began measuring traffic, cycling, and pedestrian levels two weeks before the trial started and has continued to measure them ever since. Traditionally, the data generated by a trial like this would be kept hidden from the public until a report is presented to council to determine whether the trial should be made permanent.

Interestingly, however, the City of Vancouver has opted to share the raw data on a regular basis, as well as blog about the trial and give citizens an opportunity to leave comments and feedback. Indeed, the whole Burrard Street Lane Trial website – including its twitter account and facebook page – is a well organized affair. Unsurprisingly, the data shows that the number of people cycling over the bridge has increased significantly.

The real story here isn’t about whether the Burrard Bridge Lane Trial becomes permanent or not. It’s about the process. For perhaps the first time in the history of the city, citizens and interested groups can conduct their own analysis of the trial’s significance, in real time, using credible data. Better yet, the analysis won’t be limited to what public servants think. Anyone in the city – or in the world, for that matter – can take this data and mash it up with other data sets or simply analyze it as is. A debate grounded in fact, not emotions or anecdotes, can now take place.

This means cycling advocates or commuter/car advocacy groups can mash the data up with other data sets or take a crack at explaining why the trial is good or bad. I, for example, would love to see whether the members of the cycling community who created this website might create a site that measures the reduction in carbon emissions made possible by the trial. Or whether anti-cycle-lane advocates can mash the data up with traffic reports to show whether commuting times have increased.

Regardless of the outcome however, the process, created by an open government, has ensured that Vancouver’s citizens are better equipped to see what is actually happening, to make suggestions for improvement and to explain to their fellow citizens the significance of the trial. That is the essence of what Open Government allows – it enables anyone who wants to become more engaged in their community by giving them more and better information.

Making it better

As great as the City’s website is, it could be better. To begin with, there is no RSS feed on the blog, so you’ve actually got to go to the website to get updates.

Much more important, there is no way for citizens to subscribe to or download the raw data. An RSS feed or XML feed for the data would allow other websites to automatically get updates. Creating such a feed would cost the city nothing and would vastly enhance the ability of news organizations and interested citizens to re-use, re-mix and re-purpose the data.
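To give a sense of how little work such a feed would be, here is roughly what generating one from the trial’s daily counts could look like. The numbers and field names below are invented for the sketch; the real values would come from the city’s measurements:

```python
import xml.etree.ElementTree as ET

# Invented sample data -- real counts would come from the city's measurements.
daily_counts = [
    {"date": "2009-07-13", "cyclists": 4200, "vehicles": 28500},
    {"date": "2009-07-14", "cyclists": 4550, "vehicles": 28100},
]

def build_feed(counts):
    """Build a minimal RSS 2.0 feed with one item per day of count data."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Burrard Bridge Trial: daily counts"
    for row in counts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = f"Counts for {row['date']}"
        ET.SubElement(item, "description").text = (
            f"cyclists={row['cyclists']} vehicles={row['vehicles']}")
    return ET.tostring(rss, encoding="unicode")

print(build_feed(daily_counts))
```

Any news site or citizen mashup could then poll the feed and re-chart the numbers automatically.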

A final note: for full disclosure, it should be known that I sit on the executive of Vision Vancouver, the political party that proposed and made possible the Burrard Bridge Lane Trial.

How to predict the "Fixability" of a Bugzilla Submission

My friend Diederik van Liere has written a very, very cool Jetpack add-on that calculates the probability that a bug report will result in a fixed bug.

The skinny on it is that Diederik’s app bases its prediction on the bug reporter’s experience, their past success rate, the presence of a stack trace and whether the bug reporter is a Mozilla affiliate. These variables appear to be strong and positive predictors of whether a bug will be fixed. The add-on can be downloaded here and its underlying methodology is explained in this blog post.
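Diederik’s post explains the actual methodology; purely as a sketch of how these variables might combine, imagine a simple logistic model. The weights below are invented for illustration — they are not the add-on’s real coefficients:

```python
import math

# Invented, illustrative weights -- not the add-on's real coefficients.
WEIGHTS = {
    "reports_filed": 0.02,       # reporter's experience: bugs previously filed
    "past_fix_rate": 2.0,        # fraction of their past bugs that got fixed
    "has_stack_trace": 1.0,      # 1 if the report includes a stack trace
    "is_mozilla_affiliate": 0.8, # 1 if the reporter is affiliated with Mozilla
}
BIAS = -2.5

def fix_probability(report: dict) -> float:
    """Logistic score: P(fixed) = 1 / (1 + e^-(bias + sum of w_i * x_i))."""
    z = BIAS + sum(WEIGHTS[k] * report.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

novice = {"reports_filed": 1, "past_fix_rate": 0.0,
          "has_stack_trace": 0, "is_mozilla_affiliate": 0}
veteran = {"reports_filed": 120, "past_fix_rate": 0.7,
           "has_stack_trace": 1, "is_mozilla_affiliate": 1}
print(fix_probability(novice) < fix_probability(veteran))  # True
```

The point of the sketch is only that experience, track record, a stack trace and affiliation all push the score in one direction, which is what the add-on’s methodology post reports.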

One way the add-on could be helpful is that it would enable the mozilla community to focus its resources on the most promising bug reports. Volunteer coders with limited time who want to show up and take ownership of a specific bug would probably find this add-on handy, as it would help them spend their precious volunteer time on bugs that are likely well thought through, documented effectively, and submitted by someone who will be accessible and able to provide input if necessary.

The danger, of course, is that a tool like this might further reinforce (what I imagine is) a power-law-like distribution of bug submitters. The add-on would allow those who are already the most effective bug submitters to get still more attention, while first-time submitters, or those who are still learning, may not receive sufficient attention (coaching, feedback, support) to improve. Indeed, one powerful way the tool might be used (and which I’m about to talk to Diederik about) is to determine whether there are classes of bug submitters who are least likely to be successful. If we can find some common traits among them, it might be possible to identify ways to better support them and/or enable them to contribute to the community more effectively. Suddenly a group of people who have expressed interest but have been inadvertently marginalized could be brought more effectively into the community. Such a group might be the lowest-hanging fruit in finding the next million mozillians.

Structurelessness, feminism and open: what open advocates can learn from second wave feminists

Just finished reading feminist activist Jo Freeman’s article, written in 1970, called The Tyranny of Structurelessness. She argues there is no such thing as a structureless group, and that structurelessness tends to be invoked to cover up or obscure — and cannot eliminate — the role, nature, ownership and use of power within a group.

The article is worth reading, especially for advocates of open (open source and openspace/unconference). Occasionally I hear advocates of open source — and more frequently organizers of unconferences/openspaces — argue that because of the open, unstructured nature of the process, they are more democratic than alternatives. Freeman’s article serves as a useful critique of the limits of open, as well as a reminder that open groups, organizations and processes are neither structureless nor inherently democratic. Claiming either is at best problematic; at worst it places the sustainability of the organization or effort in jeopardy. Moreover, recognizing this reality doesn’t make being open less powerful or useful, but it does allow us to think critically and have honest conversations about what structures we do want and how we should manage power.

It’s worth recognizing that Freeman wrote this article because she did want feminist organizations to be more democratic (whereas I do not believe open source or unconferences need to be democratic), but this does not make her observations less salient. For example, Freeman’s article opens with an attack on the very notion of structurelessness:

“…to strive for a ‘structureless’ group is as useful and as deceptive, as to aim at an ‘objective’ news story, ‘value-free’ social science or a ‘free’ economy. A ‘laissez-faire’ group is about as realistic as a ‘laissez-faire’ society; the idea becomes a smokescreen for the strong or the lucky to establish unquestioned hegemony over others. This hegemony can easily be established because the idea of ‘structurelessness’ does not prevent the formation of informal structures, but only formal ones.”

This is an important recognition of fact, one that challenges the perspective held by many “open” advocates. In many respects, unconferences and some open source projects are reactions to the challenges and limitations of structure — a move away from top-heavy governance that limits creativity, stifles action and slows the flow of information. I have personally (and on many occasions) been frustrated by the effect that the structure of government bureaucracies can have on new ideas. I have seen how, despite a clear path for how to move an idea to action, the process nonetheless ends up snuffing the idea out before it can be acted upon — or deforms it to the point of uselessness.

But I have also experienced the inverse. I’ve personally experienced the struggle of trying to engage/penetrate an open source community. Who I should talk to, how to present my ideas, where to present them — all often have rules (of which, within Mozilla, I was usually informed by friends on the inside — while occasionally I discovered the rules awkwardly, after grossly violating them). Most open source communities I know of — such as Mozilla or Canada25 —  never claimed (thankfully) to be democratic, but there is an important lesson here. Recognizing the dangers of too much (or rather the wrong) structure is important. But that should not blind us to the other risk — the danger outlined above by Freeman for feminists in 1970: that in our zeal to avoid bad structure, we open advocates begin to pretend that there is no structure, or no need for structure. This is simply never the case. No matter what, a group structure exists, be it informally or formally. The question is rather how we can design a flexible structure that meets our needs and enables those whom we want to participate, to participate easily.

The danger is real. I’ve been to unconferences where there are those who have felt like insiders and others who have known they were outsiders. The same risk – I imagine – exists for open source projects. This isn’t a problem in and of itself – unless those who become insiders start to be chosen not solely on account of their competence or contribution, but because of their similarities, shared interests, or affableness to the current set of insiders. Indeed, in this regard Freeman talks very intelligently about “elites”:

“Elites are not conspiracies. Seldom does a small group of people get together and try to take over a larger group for its own ends. Elites are nothing more and nothing less than a group of friends who also happen to participate in the same political activities. They would probably maintain their friendship whether or not they were involved in political activities; they would probably be involved in political activities whether or not they maintained their friendships. It is the coincidence of these two phenomena which creates elites in any groups and makes them so difficult to break.”

This is something I have witnessed both within an open source community and at an unconference. And it is not bad per se. One wants the organizers and contributors in open projects to align themselves with the values of the project. At the same time, however, it becomes easy for us to create proxies for shared values — for example, older people don’t get unconferences, so we don’t ask them, or we gloss over their offers to help organize. Those who disagree with us become labelled trolls. Those who disagree sharply (and ineffectively) are labelled crazy, evil or stupid (or assumed to be suffering from Asperger’s syndrome). The challenge here is twofold. First, we need to recognize that while we all strive to be meritocratic when engaging and involving people, we are often predisposed toward those who act, talk and think like us. For those interested in participation (or, for example, finding the next million mozillians) this is of real interest. If an open source community or an unconference does want to grow (and I’m not saying this should always be a goal), it will probably have to grow beyond its current contributor base. This likely means letting in people who are unlike those already participating.

The second challenge isn’t to make open source communities more democratic (as Freeman wished for the feminist movement) but to ensure that we recognize that there is power, we acknowledge which individuals hold it, and we make clear how they are held accountable and how that power is transitioned.  This can even be by dictate — but my sense is that whatever the structure, it needs to be widely understood by those involved so they can choose, at a minimum, to opt out (or fork) if they do not agree. As Freeman notes, acting like there is no power, no elite or no structure does not abolish power. “All it does is abdicate the right to demand that those who do exercise power and influence be responsible for it.”

In this regard a few thoughts about structure come to mind:

  1. Clarity around what creates power and influence. Too often participants may not know what allows one to have influence in an open setting. Be clear. If, in an open source community, code is king, state it. And then re-state it. If, in an unconference, having a baseline of knowledge on the conference subject is required, state it. Make it as clear as possible to participants what is valued and never pretend otherwise.
  2. Be clear on who holds what authority, why, and how they are accountable. Again, authority does not have to be derived democratically, but it should be as transparent as possible. “The bargain” about how a group is being governed should be as clear to new contributors and participants as possible so that they know what they are signing up for. If that structure is not open to change except by an elite, be honest about it.
  3. Consider encoding ideas 1 and 2 into a social contract that makes “the bargain” completely clear. Knowing how to behave is itself not unimportant. One problem with the “code is king” slogan is that it says nothing about behaviour. By this metric a complete jerk who contributes great code (but possibly turns dozens if not hundreds of other coders off of the project) could become more valued than a less effective contributor who helps new coders become more effective contributors. Codifying and enforcing a minimum rule-set allows a common space to exist.
  4. Facilitate an exit. One of the great things about unconferences and open source is the ability to vote with one’s feet and/or fork. This means those who disagree with the elite (or just the group in general) can create an alternative structure or strike up a new conversation. But ensure that the possibility for this alternative actually exists. I’ve been to unconferences where there was not enough space to create a new conversation – and so dominating conveners tortured the participants with what interested them, not the group. And while many open source projects can be forked, practically doing so is sometimes difficult. But forking – either an open source project or a conference conversation – is an important safety valve on a project. It empowers participants by forcing elites to constantly ensure that community members (and not just the elites) are engaged or risk losing them. I suspect that it is often those who are most committed (a good thing) but feel they do not have another choice (a bad thing) who come to act like resentful trolls, disrupting the community’s work.

Again, to be clear, I’m using Freeman’s piece to highlight that even in “open” systems there are structures and power that need to be managed. I’m not arguing for unconferences or open source communities to be democratic or given greater structure or governance. I believe in open, in transparency, and in the lightest structures possible for a task. But I also believe that, as advocates of open, we must constantly test ourselves and our assumptions, as well as acknowledge and debate practices and ideas that can help us be more effective.

How Open Data even makes Garbage collection sexier, easier and cheaper

So presently the City of Vancouver only shares its garbage schedule (which it divides into north and south) as a PDF file. This is a pity, as it means that no one can build any apps around it. Imagine a website or iPhone app that mashed up google maps with a constantly up-to-date city garbage pickup schedule. With such an application one could:

  • Simply punch in your address and find out your garbage zone (no more guessing if you live on the border of a zone)
  • Get an email or SMS notification 15 minutes, 1 hour, 12 hours (whenever!) before pick up
  • Download the garbage schedule into your calendar using XML, HTML or ICAL
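The calendar idea in particular is almost trivial once the schedule exists as data rather than a PDF. A rough sketch of generating an iCal feed for a zone — the zone names and pickup days here are made up for the example:

```python
from datetime import date, timedelta

# Made-up zones and pickup weekdays -- the real schedule would come from
# an open data feed, not this hard-coded sample (Mon=0 ... Sun=6).
ZONE_PICKUP_WEEKDAY = {"north-blue": 0, "south-red": 3}

def ical_for_zone(zone: str, start: date, weeks: int = 4) -> str:
    """Build a minimal iCalendar feed of upcoming pickup days for a zone."""
    weekday = ZONE_PICKUP_WEEKDAY[zone]
    first = start + timedelta(days=(weekday - start.weekday()) % 7)
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//garbage-demo//EN"]
    for week in range(weeks):
        day = first + timedelta(weeks=week)
        lines += ["BEGIN:VEVENT",
                  f"UID:{zone}-{day.isoformat()}@example.org",
                  f"DTSTART;VALUE=DATE:{day.strftime('%Y%m%d')}",
                  f"SUMMARY:Garbage pickup ({zone})",
                  "END:VEVENT"]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)

print(ical_for_zone("north-blue", date(2009, 8, 1)).count("BEGIN:VEVENT"))  # 4
```

Email or SMS reminders are then just a matter of reading the same schedule a few hours ahead of each event.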


Let’s face it, everyone could do with a garbage day reminder. I can imagine there are thousands of Vancouverites who’d sign up for an email notification.

Maybe this all seems very simple – nice, but unimportant. But this is about more than just creating convenience. What are the implications of such an application?

For citizens:

  • Let’s face it: for many of us, it would be convenient to get a reminder of when the garbage is going to be picked up
  • It would improve citizens’ appreciation for city services
  • It might also increase the likelihood that citizens will recycle, as the notification would enable them to better plan and prepare for garbage pickup

For the city & taxpayers

  • Every day hundreds of Vancouverites forget to put their garbage out for pickup. If garbage isn’t picked up, it piles up. This creates several new risks and costs for the city, including an increased likelihood of rodents and health hazards, and an increased risk that some residents will dispose of their garbage inappropriately or illegally
  • When garbage is not put out, expensive city assets (the garbage truck, fuel, the crew) all pass by unused. This means taxpayers’ money is not used as efficiently as it could be.
  • Moreover, when garbage isn’t put out, there will be twice as much at that house the next week. Multiply that by hundreds of houses and very quickly the city must deploy extra garbage trucks to deal with the unusually large volume of garbage – costing city taxpayers.

What would be the total value/savings of such an application? Hard to gauge. But add up the time saved by citizens better able to plan around garbage pickup, a small reduction in illegal garbage disposal, a small increase in recycling, and a slight increase in pickup efficiency, and I’m sure the total value would be in the low millions, with direct savings for the city in the low hundreds of thousands of dollars per year. That is not insignificant – especially if all the city had to do was free the data and allow an intrepid hacker to code up the website.

Of course it doesn’t end there. A reliable source tells me the city collects far more data about garbage collection. For example when the driver can’t pick up the garbage can, they make an entry on their device as to why (e.g., it is under a tree). This entry is sent via a wireless connection back to a City database, and includes the highly precise coordinates of the truck at that moment. Then when a resident calls in to find out why the crew did not pick up the garbage can from their residence, the operator can bring up the address on a map and pinpoint the problem.

Such data could also be surfaced on the website via an option like “Why didn’t my garbage get picked up?” By sharing this data the city could reduce its call volume (and thus costs). With open data, the possibilities, savings and conveniences are endless.
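A self-serve version of that phone call would be a thin layer over the drivers’ skip reports. A hypothetical sketch — the field names and sample data are invented:

```python
# Invented sample of the drivers' skip reports described above.
skip_reports = {
    "123 Main St": {"reason": "bin blocked by a parked car",
                    "logged_at": "2009-08-04T09:14:00",
                    "truck_position": (49.2638, -123.1003)},
}

def why_not_picked_up(address: str) -> str:
    """Answer 'Why didn't my garbage get picked up?' from the skip reports."""
    report = skip_reports.get(address)
    if report is None:
        return "No skip report found for this address."
    return f"Skipped at {report['logged_at']}: {report['reason']}"

print(why_not_picked_up("123 Main St"))
```

Each answered lookup is one call the city’s operators never have to take.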

ChangeCamp Vancouver

This weekend ChangeCamp comes to Vancouver. If you are interested definitely sign up early.

I’ll be there of course. But better still Shari Wallace (Director of IT for the City of Vancouver) and I will be running a session together from 3-4 pm to brainstorm what data the City of Vancouver should prioritize on opening up. It’s an opportunity for coders to suggest what might help them build the local apps they’ve always wanted to build.

That, and numerous other sessions, will try to help us dive deeper into The Long Tail of Public Policy.

So what is ChangeCamp and where will it be?

Saturday June 20th 2009 |  8:30 am – 5:30 pm

555 Seymour Street, Vancouver, BC (BCIT Downtown Campus)

$20 in advance | $25 at the door

Vancouver ChangeCamp is a participatory web-enabled face-to-face event that brings together citizens, technologists, designers, academics, social entrepreneurs, policy wonks, political players, change-makers and government employees to answer these questions:

  • How can we help government become more open and responsive?
  • How do we as citizens organize to get better outcomes ourselves?

The event is a partly structured unconference. One track of the conference will introduce the kinds of projects that harness new ideas and tools for social change. Other tracks at the conference will be participant-driven, with the agenda created collaboratively at the start of the event, allowing participants to share their experiences and expertise.

Hope to see you there!

Will Firefox’s JetPack let us soar too high?

Recently Mozilla introduced Jetpack, a Firefox add-on that makes it possible to post-process webpages within the web browser. For the non-techies out there, this means one can now create small software programs that, if installed, can alter a webpage’s content by changing, adding or removing parts of it before it is displayed on your screen.

For the more technically minded, this post-processing of web pages is possible because JetPack plugins have access to the Document Object Model (DOM). Since the DOM describes the structure and content of a web page, a plugin can manipulate the page’s content after it is received from the web server but before it is displayed to the user. As a result, static web pages – even ones you do not control – can become dynamic mashups.
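To make the idea concrete, here is a minimal sketch of DOM post-processing in the spirit of a Jetpack plugin – the actual Jetpack API is not reproduced here, and the `respin` function and its replacement table are invented for illustration. The core move is: walk the received page’s text and rewrite it before the user reads it.

```javascript
// A pure rewriting step: replace phrases according to a lookup table.
function respin(text, replacements) {
  let out = text;
  for (const [from, to] of Object.entries(replacements)) {
    out = out.split(from).join(to); // replace every occurrence
  }
  return out;
}

// In a browser context, the same step applied over the page's text nodes.
// (Guarded so the sketch also runs outside a browser.)
function respinPage(replacements) {
  if (typeof document === "undefined") return;
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  let node;
  while ((node = walker.nextNode())) {
    node.nodeValue = respin(node.nodeValue, replacements);
  }
}

// Standalone demonstration of the rewriting step:
console.log(respin("a bold new plan", { "bold new": "controversial" }));
// → "a controversial plan"
```

A real plugin would do something richer – attach hover bubbles rather than silently rewriting text – but even this few lines shows how completely a page’s message can be changed on arrival.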

This may seem insignificant, but it has dramatic implications. For example, imagine a JetPack plugin that overlays a website – say BarackObama.com or FoxNews.com – with text bubbles that counter-spin a story when your mouse hovers over it. The next Republican nominee could encourage supporters to download such a hypothetical plugin and then direct them to Obama’s website, where each story could be re-spun and links for donating to the Republican campaign could be proffered. They would, in short, dynamically use Obama’s webpage and content to generate money and support. TPM could create a similar JetPack plugin for the FoxNews website that does much the same to the titles and body text of false or misleading articles.

Such plugins would have a dramatic impact on the web experience. First, they would lower the cost of staying informed: being informed would cease to be a matter of spending time searching for alternative sources and become a matter of installing the appropriate JetPack plugin. Second, every site would become “hijackable” in that, with the right plugin, a community could evolve that alters the site’s content without the permission of its owner/author. On the flip side, such plugins could also provide site owners with powerful community-engagement tools: think open source editing of newspapers, magazines, even television channels.

The ultimate conclusion, however, is that JetPack continues to tilt power away from website creators and toward viewers. Webpage owners will have still less control over how their websites get viewed, used and understood. Effectively, anyone who can persuade people to download their JetPack plugin can reappropriate a website – be it BarackObama.com, FoxNews.com, eBay, or even little old eaves.ca – for their own purposes, without the permission of the website owner. How the web ecosystem, and website developers in particular, will react to this loss of control is interesting to speculate about, though difficult. Perhaps there will be no reaction. But one threat is that certain websites will place content within proprietary systems like Flash, where it would be harder for JetPack to alter. Harder to imagine, but worth discussing, is that some sites might simply refuse to serve Firefox browsers at all.

In the interim, two obstacles need to be overcome before JetPack realizes its full potential. First, only a relatively small community of technically minded people can currently develop JetPack add-ons, though once Jetpack becomes an integral part of the Firefox browser this community will grow. Second, at present installing a JetPack plugin triggers a stern security warning that will likely scare many casual users away. Mozilla has hinted at developing a trusted-friends system to help users determine whether a plugin is safe, and such trust systems will probably be necessary to make JetPack a mainstream technology. If that developer community can be built, and a system for sorting trusted from untrustworthy plugins can be developed, then Jetpack might redefine our web experience.

We are in for some interesting times with the launch of Firefox 3.5 and new technologies like JetPack around the corner!

Jetpack is available at jetpack.mozillalabs.com

Diederik van Liere helped write this post and likes to think the world is one big network.