Category Archives: open source

How to predict the "Fixability" of a Bugzilla Submission

My friend Diederik van Liere has written a very, very cool Jetpack add-on that calculates the probability that a bug report will result in a fixed bug.

The skinny on it is that Diederik’s app bases its prediction on the bug reporter’s experience, their past success rate, the presence of a stack trace and whether the bug reporter is a Mozilla affiliate. These variables appear to be strong and positive predictors of whether a bug will be fixed. The add-on can be downloaded here and its underlying methodology is explained in this blog post.
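As a rough illustration of the kind of model described above, here is a minimal scoring sketch. The feature names, weights and logistic form are my own assumptions for illustration – they are not Diederik’s actual model.

```python
# Hypothetical sketch of a "fixability" predictor. The features mirror
# those named above (reporter experience, past success rate, stack
# trace, Mozilla affiliation); the weights are invented for illustration.
import math

WEIGHTS = {
    "reporter_bug_count": 0.02,   # reporter's experience (bugs filed)
    "past_fix_rate": 2.5,         # fraction of their past bugs fixed
    "has_stack_trace": 1.2,       # 1 if a stack trace is attached
    "is_mozilla_affiliate": 0.8,  # 1 if the reporter is affiliated
}
BIAS = -2.0  # baseline log-odds for a bare, first-time report

def fixability(report: dict) -> float:
    """Return an estimated probability (0-1) that the bug gets fixed."""
    score = BIAS + sum(WEIGHTS[k] * report.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))  # logistic link

report = {
    "reporter_bug_count": 40,
    "past_fix_rate": 0.6,
    "has_stack_trace": 1,
    "is_mozilla_affiliate": 0,
}
print(round(fixability(report), 2))
```

Because every weight is positive, the sketch reproduces the qualitative claim in the blog post: more experience, a better track record, a stack trace and an affiliation each push the predicted probability up.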

One way the add-on could be helpful is that it would enable the Mozilla community to focus its resources on the most promising bug reports. Volunteer coders with limited time who want to show up and take ownership of a specific bug would probably find this add-on handy, as it would help them spend their precious volunteer time on bugs that are likely well thought through, documented effectively and submitted by someone who will be accessible and able to provide them with input if necessary.

The danger, of course, is that a tool like this might further reinforce (what I imagine is) a power-law-like distribution of bug submitters. The add-on would allow those who are already the most effective bug submitters to get still more attention, while first-time submitters, or those who are still learning, may not receive sufficient attention (coaching, feedback, support) to improve. Indeed, one powerful way the tool might be used (and which I’m about to talk to Diederik about) is to determine whether there are classes of bug submitters who are least likely to be successful. If we can find some common traits among them, it might be possible to identify ways to better support them and/or enable them to contribute to the community more effectively. Suddenly a group of people who have expressed interest but have been inadvertently marginalized could be brought more effectively into the community. Such a group might be the lowest-hanging fruit in finding the next million Mozillians.

Structurelessness, feminism and open: what open advocates can learn from second wave feminists

Just finished reading feminist activist Jo Freeman’s article, written in 1970, called The Tyranny of Structurelessness. She argues there is no such thing as a structureless group, and that structurelessness tends to be invoked to cover up or obscure — and cannot eliminate — the role, nature, ownership and use of power within a group.

The article is worth reading, especially for advocates of open (open source and openspace/unconference). Occasionally I hear advocates of open source – and more frequently organizers of unconferences/openspaces – argue that because of the open, unstructured nature of the process, they are more democratic than the alternatives. Freeman’s article serves as a useful critique of the limits of open, as well as a reminder that open groups, organizations and processes are neither structureless nor inherently democratic. Claiming either is at best problematic; at worst it places the sustainability of the organization or effort in jeopardy. Moreover, recognizing this reality doesn’t make being open less powerful or useful, but it does allow us to think critically and have honest conversations about what structures we do want and how we should manage power.

It’s worth recognizing that Freeman wrote this article because she did want feminist organizations to be more democratic (whereas I do not believe open source or unconferences need to be democratic), but this does not make her observations less salient. For example, Freeman’s article opens with an attack on the very notion of structurelessness:

“…to strive for a ‘structureless’ group is as useful and as deceptive, as to aim at an ‘objective’ news story, ‘value-free’ social science or a ‘free’ economy. A ‘laissez-faire’ group is about as realistic as a ‘laissez-faire’ society; the idea becomes a smokescreen for the strong or the lucky to establish unquestioned hegemony over others. This hegemony can easily be established because the idea of ‘structurelessness’ does not prevent the formation of informal structures, but only formal ones.”

This is an important recognition of fact, one that challenges the perspective held by many “open” advocates. In many respects, unconferences and some open source projects are reactions to the challenges and limitations of structure — a move away from top-heavy governance that limits creativity, stifles action and slows the flow of information. I have personally (and on many occasions) been frustrated by the effect that the structure of government bureaucracies can have on new ideas. I have seen how, despite a clear path for how to move an idea to action, the process nonetheless ends up snuffing the idea out before it can be acted upon — or deforms it to the point of uselessness.

But I have also experienced the inverse. I’ve personally experienced the struggle of trying to engage with (or penetrate) an open source community. Whom to talk to, how to present my ideas, where to present them – all often have rules (of which, within Mozilla, I was usually informed by friends on the inside – though occasionally I discovered the rules awkwardly, after grossly violating them). Most open source communities I know of – such as Mozilla or Canada25 – never claimed (thankfully) to be democratic, but there is an important lesson here. Recognizing the dangers of too much (or rather the wrong) structure is important. But that should not blind us to the other risk – the danger outlined above by Freeman for feminists in 1970: that in our zeal to avoid bad structure, we open advocates begin to pretend that there is no structure, or no need for structure. This is simply never the case. No matter what, a group structure exists, be it informal or formal. The question is rather how we can design a flexible structure that meets our needs and enables those whom we want to participate, to participate easily.

The danger is real. I’ve been to unconferences where there are those who have felt like insiders and others who have known they were outsiders. The same risk – I imagine – exists for open source projects. This isn’t a problem in and of itself – unless those who become insiders start to be chosen not solely on account of their competence or contribution, but because of their similarities, shared interests, or affability to the current set of insiders. Indeed, in this regard Freeman talks very intelligently about “elites”:

“Elites are not conspiracies. Seldom does a small group of people get together and try to take over a larger group for its own ends. Elites are nothing more and nothing less than a group of friends who also happen to participate in the same political activities. They would probably maintain their friendship whether or not they were involved in political activities; they would probably be involved in political activities whether or not they maintained their friendships. It is the coincidence of these two phenomena which creates elites in any groups and makes them so difficult to break.”

This is something I have witnessed both within an open source community and at an unconference. And this is not bad per se. One wants the organizers and contributors in open projects to align themselves with the values of the project. At the same time, however, it becomes easy for us to create proxies for shared values – for example, older people don’t get unconferences, so we don’t ask them, or gloss over their offers to help organize. Those who disagree with us become labelled trolls. Those who disagree sharply (and ineffectively) are labelled crazy, evil or stupid (or assumed to be suffering from Asperger’s syndrome). The challenge here is twofold. First, we need to recognize that while we all strive to be meritocratic when engaging and involving people, we are often predisposed to those who act, talk and think like us. For those interested in participation (or, for example, finding the next million Mozillians) this is of real interest. If an open source community or an unconference does want to grow (and I’m not saying this should always be a goal), it will probably have to grow beyond its current contributor base. This likely means letting in people who are unlike those already participating.

The second challenge isn’t to make open source communities more democratic (as Freeman wished for the feminist movement) but to ensure that we recognize that there is power, we acknowledge which individuals hold it, and we make clear how they are held accountable and how that power is transitioned.  This can even be by dictate — but my sense is that whatever the structure, it needs to be widely understood by those involved so they can choose, at a minimum, to opt out (or fork) if they do not agree. As Freeman notes, acting like there is no power, no elite or no structure does not abolish power. “All it does is abdicate the right to demand that those who do exercise power and influence be responsible for it.”

In this regard a few thoughts about structure come to mind:

  1. Clarity around what creates power and influence. Too often participants may not know what allows one to have influence in an open setting. Be clear. If, in an open source community, code is king, state it. And then re-state it. If, in an unconference, having a baseline of knowledge on the conference subject is required, state it. Make it as clear as possible to participants what is valued and never pretend otherwise.
  2. Be clear on who holds what authority, why, and how they are accountable. Again, authority does not have to be derived democratically, but it should be as transparent as possible. “The bargain” about how a group is being governed should be as clear as possible to new contributors and participants, so that they know what they are signing up for. If that structure is not open to change except by an elite, be honest about it.
  3. Consider encoding ideas 1 and 2 into a social contract that makes “the bargain” completely clear. Knowing how to behave is itself not unimportant. One problem with the “code is king” slogan is that it says nothing about behaviour. By this metric a complete jerk who contributes great code (but possibly turns dozens if not hundreds of other coders off the project) could become more valued than a less effective contributor who helps new coders become more effective contributors. Codifying and enforcing a minimum rule-set allows a common space to exist.
  4. Facilitate an exit. One of the great things about unconferences and open source is the ability to vote with one’s feet and/or fork. This means those who disagree with the elite (or just the group in general) can create an alternative structure or strike up a new conversation. But ensure that the possibility for this alternative actually exists. I’ve been to unconferences where there was not enough space to create a new conversation – and so dominating conveners tortured the participants with what interested them, not the group. And while many open source projects can be forked, practically doing so is sometimes difficult. But forking – either an open source project or a conference conversation – is an important safety valve on a project. It empowers participants by forcing elites to constantly ensure that community members (and not just the elites) are engaged or risk losing them. I suspect that it is often those who are most committed (a good thing) but feel they do not have another choice (a bad thing) who come to act like resentful trolls, disrupting the community’s work.

Again, to be clear, I’m using Freeman’s piece to highlight that even in “open” systems there are structures and power that need to be managed. I’m not arguing for unconferences or open source communities to be democratic or given greater structure or governance. I believe in open, in transparency and in the lightest structures possible for a task. But I also believe that, as advocates of open, we must constantly be testing ourselves and our assumptions, as well as acknowledging and debating practices and ideas that can help us be more effective.

Open Source Journalism at the Guardian

A few months ago I wrote a piece called the Death of Journalism which talked about how – even if they find a new revenue model – newspapers are in trouble because they are fundamentally opaque institutions. This built on a piece Taylor Owen wrote called Missing the Link about why newspapers don’t understand (or effectively use) the internet.

Today Nicolas T. sent me this great link that puts some of the ideas found in both pieces into practice. Apparently, in the wake of the MP expense scandal in the United Kingdom, the Guardian has obtained 700,000 documents detailing MPs’ expenses and must now identify the individual claims within them. Most MPs probably imagined they could hide their expenses in the sea of data, for what newspaper could devote the resources to searching through them all?

No newspaper could, if by “newspaper” you mean only its staff and not its community of readers. The Guardian, interestingly, has taken on this larger community definition and has crowd sourced the problem by asking its readers to download and read one or a few documents and report back any relevant information.

What makes this exciting is that it is one example of how – by being transparent and leveraging the interest and wisdom of their readership – newspapers and media outlets can do better, more in-depth, cheaper and more effective journalism. Think of it. First, what was once an impossible journalistic endeavor is now possible. Second, a level of accountability previously unimaginable has been created. And third, a constituency of traditional (and possibly new) Guardian readers has been engaged – likely increasing their loyalty.

Indeed, in effect the Guardian has deputized its readers as micro-journalists. This is the best example to date of a traditional (or mainstream) media institution warming to (or even embracing) at least a limited concept of “the citizen journalist.” I suspect that as institutions find ways to leverage readers and citizen journalists, the lines between journalist and reader will increasingly blur. Actually, they will have to.

Why is that?

Because for the Guardian model to work, they had to strike an agreement (a bargain, as Clay Shirky calls it) with their community. I don’t think anyone would have been satisfied to do this work and then simply hand it back to the Guardian without the right to access their work, or the work of other micro-journalists. Indeed, following the open source model, the Guardian has posted the results for every document read and analyzed. This means that the “raw data” and analysis are available to anyone. Any of these micro-journalists can now, in turn, read the assemblage of document reviews and write their own story about the MPs’ expenses. Indeed, I’m willing to wager that some of the most interesting stories about these 700,000 pages will not be written by staff of the Guardian but by other parties assessing the raw data.

So what does that make the Guardian? Is it a repository, a community coordinator, an editorial service…? And what does it make those who write those stories who aren’t employed by the Guardian? Caring about, or getting caught up in, these terms and definitions is interesting, but ultimately it doesn’t matter. The fact is that journalism is being reinvented, and this is one compelling example of how the new model can tackle problems the old one couldn’t even contemplate.

ChangeCamp Vancouver

This weekend ChangeCamp comes to Vancouver. If you are interested definitely sign up early.

I’ll be there of course. But better still Shari Wallace (Director of IT for the City of Vancouver) and I will be running a session together from 3-4 pm to brainstorm what data the City of Vancouver should prioritize on opening up. It’s an opportunity for coders to suggest what might help them build the local apps they’ve always wanted to build.

That, and numerous other sessions, will try to help us dive deeper into The Long Tail of Public Policy.

So what is ChangeCamp and where will it be?

Saturday June 20th 2009 |  8:30 am – 5:30 pm

555 Seymour Street, Vancouver, BC (BCIT Downtown Campus)

$20 in advance | $25 at the door

Vancouver ChangeCamp is a participatory web-enabled face-to-face event that brings together citizens, technologists, designers, academics, social entrepreneurs, policy wonks, political players, change-makers and government employees to answer these questions:

  • How can we help government become more open and responsive?
  • How do we as citizens organize to get better outcomes ourselves?

The event is a partly structured unconference. One track of the conference will introduce the kinds of projects that harness new ideas and tools for social change. Other tracks at the conference will be participant-driven, with the agenda created collaboratively at the start of the event, allowing participants to share their experiences and expertise.

Hope to see you there!

Will Firefox’s JetPack let us soar too high?

Recently Mozilla introduced Jetpack, a Firefox add-on that makes it possible to post-process webpages within the web browser. For the non-techies out there, this means that one can now create small software programs that, if installed, can alter a webpage’s content by changing, adding or removing parts of it before it is displayed on your computer screen.

For the more technically minded, this post-processing of web pages is made possible because JetPack plugins have access to the Document Object Model (DOM). Since the DOM describes the structure and content of a web page, the software can manipulate the webpage’s content after the page is received from the web server but before it is displayed to the user. As a result, static web pages, even ones you do not control, can become dynamic mashups.
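Real Jetpack plugins are JavaScript operating on the live DOM; purely as a language-neutral sketch of the "rewrite the page between server and screen" idea, here is the same concept in Python using only the standard library. The tag names and injected markup are invented for illustration.

```python
# Conceptual sketch of post-processing a page between "fetch" and
# "display": the rewriter copies a page through unchanged, except that
# it injects an annotation the page author never wrote into each
# headline. (A real Jetpack plugin would do this in JavaScript on the
# browser's live DOM, not on the raw HTML text.)
from html.parser import HTMLParser

class AnnotatingRewriter(HTMLParser):
    """Pass HTML through, appending an annotation inside each <h1>."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.in_h1 = False  # simplistic flag; nested tags would reset it

    def handle_starttag(self, tag, attrs):
        self.out.append(self.get_starttag_text())
        self.in_h1 = (tag == "h1")

    def handle_endtag(self, tag):
        if tag == "h1" and self.in_h1:
            # Inject content before closing the headline.
            self.out.append(' <span class="counterspin">[annotated]</span>')
            self.in_h1 = False
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

def rewrite(html: str) -> str:
    r = AnnotatingRewriter()
    r.feed(html)
    return "".join(r.out)

page = "<html><body><h1>Top story</h1><p>Body text.</p></body></html>"
print(rewrite(page))
```

The point of the sketch is the asymmetry it demonstrates: the page author controls what the server sends, but everything after that – including what actually reaches the reader’s eyes – is up to whatever code runs in the browser.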

This may seem insignificant but it has dramatic implications. For example, imagine a JetPack plugin that overlays a website – say BarackObama.com or FoxNews.com – with text bubbles that counter-spin a story when your mouse hovers over it. The next Republican nominee could encourage supporters to download such a hypothetical plugin and then direct them to Obama’s website, where each story could be re-spun and links for donating to the Republican campaign could be proffered. They would, in short, dynamically use Obama’s webpage and content as a way to generate money and support. TPM could create a similar Jetpack plugin for the FoxNews website, doing the same to the title and body text of articles that were false or misleading.

Such plugins would have a dramatic impact on the web experience. First, they would lower costs for staying informed. Being informed would cease to be a matter of spending time searching for alternative sources, but a matter of installing the appropriate JetPack plugin. Second, every site would now be “hijackable” in that, with the right plugin a community could evolve that would alter its content without the permission of the site owner/author. On the flip side, it could also provide site owners with powerful community engagement tools: think open source editing of newspapers, open source editing of magazines, open source editing of television channels.

The ultimate conclusion, however, is that JetPack continues to tilt power away from website creators toward viewers. Webpage owners will have still less control over how their websites get viewed, used and understood. Effectively, anyone who can persuade people to download their JetPack plugin can reappropriate a website – be it BarackObama.com, FoxNews.com, eBay, or even little old eaves.ca – for their own purposes without the permission of the website owner. How the web ecosystem, and website developers in particular, react to this loss of control will be interesting. Such speculation is difficult. Perhaps there will be no reaction. But one threat is that certain websites will place content within proprietary systems like Flash, where it would be more difficult for JetPack to alter their contents. More difficult to imagine, but worth discussing, is that some sites might simply not permit Firefox browsers to view their site.

In the interim, two obstacles need to be overcome before JetPack realizes its full potential. First, only a relatively small community of technically minded people can currently develop JetPack add-ons. However, once Jetpack becomes an integral part of the Firefox browser this community will grow. Second, at present installing a JetPack plugin triggers a stern security warning that will likely scare many casual users away. Mozilla has hinted at developing a trusted-friends system to help users determine whether a plug-in is safe. Such trust systems will probably be necessary to make JetPack a mainstream technology. If such a community can be built, and a system for sorting out trusted and untrustworthy plugins can be developed, then Jetpack might redefine our web experience.

We are in for some interesting times with the launch of Firefox 3.5 and new technologies like JetPack around the corner!

Jetpack is available at jetpack.mozillalabs.com

Diederik van Liere helped write this post and likes to think the world is one big network.

The Open Cities Blog on the Creative Exchange

Excited to let everyone know that I’ll be blogging at the Creative Exchange on Open Cities. I’ll continue to blog here 4 times a week and the pieces I post there I’ll cross-post here as well.

It’s an opportunity to talk to a wider audience about how openness and transparency can (and will) change our cities.

Wish me luck. Here was my first post.

Creating Open Cities

Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an “architecture of participation,” and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.

Tim O’Reilly

To the popular press “hacker” means someone who breaks into computers. Among programmers it means a good programmer. But the two meanings are connected. To programmers, “hackers” connotes mastery in the most literal sense: someone who can make a computer do what he wants – whether the computer wants to or not.

Paul Graham, Hackers & Painters

Welcome to the Open Cities blog on CCE. My name is David Eaves and I’ve been writing, speaking, and thinking about open, citizen engagement and public policy for a number of years. Most recently, I worked to help push forward the City of Vancouver motion that requires the city to share more data, adopt open standards, and treat open source and proprietary software equally.

Cities have always been platforms – geographic and legal platforms upon which people collaborate to create enterprises, exchange ideas, educate themselves, celebrate their culture, start families, found communities, and raise children. Today the power of information technology is extending this platform, granting us new ways to collaborate and be creative. As Clay Shirky notes in Here Comes Everybody, this new (dis)order is powerful. For the meaning and operation of cities, it will be transformative.

How transformative? The change created by information technology is driving what will perhaps be seen as the greatest citizen-led renewal of urban spaces in our history. Indeed, I believe it may even be creating a new type of city, one whose governance models, economies and notions of citizenship are still emerging, but different from their predecessors. These new cities are Open Cities: cities that, like the network of web 2.0, are architected for participation and so allow individuals to create self-organized solutions and allow governments to tap into the long-tail of public policy.

And just in the nick of time. To succeed in the 21st century, cities will have to simultaneously thrive in a global economy, adapt to climate change, integrate a tsunami of rural and/or foreign migrants, as well as deal with innumerable other challenges and opportunities. These issues go far beyond the capacity and scope of almost any government – not to mention the all-too-often under-resourced City Hall.

Open Cities address this capacity shortfall by drawing on the social capital of their citizens. Online, city dwellers are hacking the virtual manifestation of their city, which, in turn, is giving them the power to shape the physical space. Google Transit, DIYcity and Apps for Democracy are great urban hacks: they allow cities to work for citizens in ways that were previously impossible. And this is only the beginning.

Still more exciting, hacking is a positive-sum game. The more people hack their city – not in the popular-press sense of breaking into computers, but in the (sometimes artful, sometimes amateur) sense of making a system (read: city) work for their benefit – the more useful data and services they create and remix. Ultimately, Open Cities will be increasingly vibrant and safe because they are hackable. This will allow their citizens to unleash their creativity, foster new services, find conveniences and efficiencies, notice safety problems, and build communities.

In short, the cities that harness the collective ingenuity, creativity, and energy of their citizenry will thrive. Those that don’t – those that remain closed – won’t. And this divide – open vs. closed – could become the new dividing line of our age. It is through this lens that this blog will look at the challenges and opportunities facing cities, their citizens, and their institutions. Let’s see who’s open, how they’re getting open, and what it will all mean.

Open data in local education: broader lessons for government, citizens and NGOs

Last month I read a couple of news stories about a provincial government ministry in Canada that was forced to become less transparent.

Forced?

Yes, this was not a voluntary move. A specific group of people pressured the government, wanting it to remove data it had made public as well as make it harder for the public to repurpose and make use of the data. So what happened? And what lessons should governments, NGOs and citizens take away from this incident?

The story revolves around the Ontario Ministry of Education, which earlier this year created a website that mashed up performance data (e.g. literacy and math scores) with demographic information (e.g. percentage of pupils from low-income households and percentage of gifted students). The real problem – according to a group representing teachers, parents and stakeholders – occurred when the Ministry enabled a feature that allowed the website’s users to compare schools to one another.

The group, called People for Education, protested that the government was encouraging a “shopping-mentality” in the public school system.

Of course, many parents already shop for schools. I remember, as a kid, hearing about how houses on one side of a street, but within the catchment area of my high school, were more expensive than houses on the other side of the same street, within the catchment area of another school. At present, however, this type of shopping is reserved for the wealthy and connected (i.e. the privileged). Preventing people from comparing schools online won’t eliminate or even discourage this activity; it will simply privilege those who are already able to do it, further reinforcing inequity.

The real problem, however, is that the skills and analysis involved in school shopping are the same as those required to assess, and be engaged in, the performance of one’s local school. Parents, and taxpayers in general, have a right to know their children’s school’s performance – especially in comparison to similar schools. If parents don’t have information to analyze and compare, how can they know what systemic issues they should ask their children’s teachers about? More importantly, how can they know what issues to press their local school board about?

Ironically, People for Education states on its “About Us” page that it works towards a vision of a strong public education system by a) doing research; b) providing clear, accessible information to the public and c) engaging people to become actively involved in education issues in their own community.

And yet, asking the government to remove the comparison feature runs counter to all three of its activities. Limiting how the Ministry’s data can be used (and as we’ll see later, suggesting that this data shouldn’t be shared):

  • prevents parents, and other analysts such as professors or politicians, from doing their own research
  • runs counter to the goal of providing clear and accessible information to the public. Indeed, it makes information harder to access.
  • makes it harder for parents to know how they should get involved and what issues they should champion to improve their local school

What is interesting about this story is that it reveals the core values and underlying motivations of the different actors. In this case People for Education – which I believe to be a well-intentioned and positive contributor to the issue of education – is nonetheless revealed to have a conservative side.

It fears a world where citizens and parents are equipped with information and knowledge about schools. On the one hand it may fear the types of behaviours this could foster (such as school shopping). However, it may also fear a weakening of its monopoly as “expert” and advocate on educational issues. If parents can look at the data directly, and form their own analysis and conclusions, they may find that they don’t agree with People for Education. Open data would allow those it represents to self-organize, challenging the hierarchy and authority of the organization.

For whatever reason, we see an NGO bending over backwards to advocate for an outcome that runs directly counter to the very vision and activities it was founded to serve. More ironically, this results in some paradoxical messaging: an organization that champions Ontario’s school system is essentially arguing that it doesn’t trust the products of that system – the citizens of Ontario – to use the information and tools provided by the Ministry that was responsible for their education. It is an unsustainable position – particularly for a group that was originally founded as a bottom-up, grass-roots organization.

So what lessons are there here?

For government:

A key mistake made by the Ontario Ministry of Education is that it didn’t open up the data enough. While the website allowed users to look at school performance data, they could only do this on the Ministry’s website using the Ministry’s tools and interface. Had the data been available via an API or in a downloadable format, someone else could have taken the data and created the system for comparing schools. People for Education was mostly upset that the Ministry’s website encouraged a “shopping-mentality.” Had the Ministry simply shared the data, then People for Education could have built their own interface using criteria and tools they thought relevant. The Fraser Institute or a multitude of other organizations could have built their own as well, and people could have used the tools and websites they found most useful and relevant. Let People for Education go head to head with the Fraser Institute and whoever else. This is not a battle the government need fight.

Lesson: Always provide the data – a goal that is hard to argue against – but sometimes, leave it to others to conduct the analysis. A marketplace of ideas will emerge, and citizens can choose what works best for them.
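The lesson can be made concrete. Once raw records are published in a machine-readable format, anyone can build their own comparison interface around whatever criteria they think matter. The schema and figures below are invented for illustration; they are not the Ministry’s actual data:

```python
# Sketch: once a ministry publishes raw, machine-readable data, any
# group can build its own comparison tool. The column names and
# numbers are hypothetical, not the Ontario ministry's real schema.
import csv
import io

RAW = """school,literacy_rate,math_score,low_income_pct
Maple PS,0.81,71,22
Cedar PS,0.74,68,35
Birch PS,0.88,77,12
"""

def load(csv_text: str) -> list[dict]:
    """Parse published CSV into a list of per-school records."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def compare(rows: list[dict], metric: str) -> list[dict]:
    """One possible 'interface': rank schools on a chosen metric.
    A different group could rank on a different metric, or weight
    several together - the raw data supports all of them."""
    return sorted(rows, key=lambda r: float(r[metric]), reverse=True)

rows = load(RAW)
for r in compare(rows, "literacy_rate"):
    print(r["school"], r["literacy_rate"])
```

The design point is that `compare` lives outside the government: the Ministry only publishes `RAW`, and People for Education, the Fraser Institute, or anyone else chooses the metric and presentation.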

For NGOs in general:

First, understand what open data means for your cause. One of the news articles had this highly disturbing quote from the Executive Director of People for Education:

Among her complaints about the type of information available, Ms. Kidder took issue with the ministry’s contention the Web site merely consolidated information already available to the public. “You can’t walk into your child’s school and say ‘What’s the average income of parents at this school?’ ” she said. “It’s not true at all [that this is public information].”

This is a shocking statement. In actuality, all the data assembled by the Ministry is publicly available. It was just that, until now, it had remained scattered and isolated. Just because it was hard to find (and thus reserved for an elite few) or located on school property (and thus easy for parents to locate) does not mean it didn’t exist or wasn’t public.

Lesson: Transparency is the new objectivity. People increasingly don’t trust anyone – governments, the media, or even NGOs. They want to see the analysis themselves, not take your word for it. Be prepared for this world.

Second, be careful about taking positions that will deny your supporters – and those you represent – tools with which to educate themselves. Organizations that are perceived as trying to constrain the flow of information so as to retain influence and control risk imploding. I won’t repeat this lesson in detail but Clay Shirky’s case study about the Vatican, written up in Here Comes Everybody, is a powerful example.

For educators in particular:

In the past, educators have been deeply concerned about ranking systems. This is understandable: ranking systems are often a blunt tool, and comparing apples to oranges can be foolish – but then, sometimes it is helpful. The challenge is knowing when it is helpful and ensuring it is used accordingly.

The fact is, ranking is an outcome of data. The two simply cannot be separated. The moment there is data, there is ranking. A ranking by school size, number of teachers, or amount of gym equipment is no different from a ranking of class sizes, literacy rates, disciplinary trends, or graduation rates. What matters is not the rank, but the conclusions, meaning and significance we apply to these rankings. Here, the role for groups like People for Education could be profound.

This is because we can’t be in favour of transparency and accessible information on the one hand and against ranking on the other. The two come hand in hand. What we can be opposed to are poor ranking systems.

Every profession gets assessed, and teaching should be no different. The challenge is that there is much more to teaching than what gets reflected in the data collected. This means that groups like People for Education shouldn’t be against transparency and open data – they should be trying to complexify and nuance the discussion. Once the data is publicly available anyone can create a ranking system of their own choosing – but this gives us an opportunity to have a public discussion about it. One way to do this is to create one’s own tools for measuring schools. You don’t like the Ministry of Education’s system? Create your own. Use it to talk to parents about the right questions to ask and to promote qualitative ways to evaluate their children’s schools’ performance. It’s an open world. But that doesn’t mean it needs to be feared – it is rife with opportunity.
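The “create your own” idea is concrete: with the data in hand, a different ranking system is just a different set of weights over the same metrics. A toy sketch – the schools, metrics and weights below are invented for illustration only:

```python
# Hypothetical per-school metrics, normalized to a 0-1 scale; the names
# and numbers are invented for illustration, not real data.
SCHOOLS = {
    "Maplewood": {"literacy": 0.81, "graduation": 0.90, "arts": 0.40},
    "Riverside": {"literacy": 0.77, "graduation": 0.85, "arts": 0.95},
    "Lakeview":  {"literacy": 0.85, "graduation": 0.88, "arts": 0.30},
}

def rank(schools, weights):
    """Score each school as a weighted sum of its metrics, best first."""
    def score(metrics):
        return sum(weights[k] * metrics[k] for k in weights)
    return sorted(schools, key=lambda name: score(schools[name]), reverse=True)

# An academics-focused group and a whole-child-focused group weight the
# same public data differently -- and arrive at different "best" schools.
academic_first = rank(SCHOOLS, {"literacy": 0.6, "graduation": 0.4, "arts": 0.0})
whole_child = rank(SCHOOLS, {"literacy": 0.3, "graduation": 0.3, "arts": 0.4})
```

Same data, different values, different rankings – which is exactly why the interesting public conversation is about the weights, not about whether the data should exist.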

why collaborative skills matter in open source

For the past several years now I’ve been talking about how community management – broadly defined as enhancing a community’s collaborative skills, establishing and modeling behaviour/culture, and embedding development tools and communications mediums with prompts that “nudge” us towards collaborative behaviour – is imperative to the success of open source communities. (For those interested, my FSOSS 2008 talk on the subject has been slidecasted here, and is on Google Video here.)

Re-reading Shirky’s latest book, Here Comes Everybody, has re-affirmed my thinking. Indeed, it’s made me more aggressive. Why? Consider these two paragraphs:

This ability of the traditional management structure to simplify coordination helps answer one of the most famous questions in all of economics: If markets are such a good idea, why do we have organizations at all? Why can’t all exchanges of value happen in the market? This question originally was posed by Ronald Coase in 1937 in his famous paper “The Nature of the Firm,” wherein he also offered the first coherent explanation of the value of hierarchical organization. Coase realized that workers could simply contract with one another, selling their labor, and buying the labor of others in turn, in a market, without needing any managerial oversight. However, a completely open market for labor, reasoned Coase, would underperform labor in firms because of the transaction costs, and in particular the costs of discovering the options and making and enforcing agreements among the participating parties. The more people are involved in a given task, the more potential agreements need to be negotiated to do anything, and the greater the transaction costs…

And later, Shirky essentially describes the thesis of his book:

But what if transaction costs don’t fall moderately? What if they collapse? This scenario is harder to predict from Coase’s original work, as it used to be purely academic. Now it’s not, because it is happening, or rather it has already happened, and we’re starting to see the results.

My conclusion: the lower the transaction costs, the more the playing field will favour self-organizing systems like open source communities, and the less it will favour large proprietary producers.

This is why open source communities should (and do) work collectively to reduce transaction costs among their members. Enabling the further collapse of transaction costs tilts the landscape in our favour. Sometimes this can be done in the way we architect the software. Indeed, this is why, in Firefox, Add-Ons are so powerful. The Add-On functionality dramatically reduces transaction costs by creating a dependable and predictable platform, essentially allowing coders to work in isolation from one another (the difference between collaborative vs. cooperative work). This strategy has been among the most successful. It is important and should be pursued, but it cannot help collapse transaction costs for all parts of a project – especially the base code.
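The architectural principle here – a stable, documented extension point that lets contributors build without coordinating with the core team or with each other – can be sketched generically. This is an illustrative toy interface, not Firefox’s actual Add-On API:

```python
class AddOn:
    """The minimal contract every add-on implements. Because the host
    promises this interface stays stable, add-on authors never need to
    negotiate with the core team -- the transaction cost of contributing
    is just the cost of writing the add-on itself."""
    name = "unnamed"

    def on_page_load(self, url):
        raise NotImplementedError

class Host:
    """The platform: registers add-ons and only ever calls them through
    the stable contract."""
    def __init__(self):
        self.addons = []

    def register(self, addon):
        self.addons.append(addon)

    def load_page(self, url):
        return [addon.on_page_load(url) for addon in self.addons]

# Two add-ons written by authors who have never spoken to each other.
class Logger(AddOn):
    name = "logger"
    def on_page_load(self, url):
        return f"logged {url}"

class Blocker(AddOn):
    name = "blocker"
    def on_page_load(self, url):
        return "blocked" if "ads." in url else "allowed"

host = Host()
host.register(Logger())
host.register(Blocker())
results = host.load_page("http://ads.example.com")
```

Neither add-on knows the other exists, yet both compose cleanly – the coordination work has been paid once, up front, in the design of the platform rather than over and over in negotiations between contributors.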

But what more can be done? There are likely diminishing returns to re-architecting the software and to finding new, easier ways to connect developers to one another. The areas I think offer real promise include:

  • fostering cultures within open source communities that reward collaborative (low transaction cost) behaviour,
  • promoting leaders who model collaborative (low transaction cost) behaviour, and
  • developing tools and communications mediums/methods that prompt participants to improve the signal-to-noise ratio, minimize misunderstandings, limit unnecessary conflict, and help resolve differences quickly and effectively (the idea being that all of these outcomes lower transaction costs).

This is why I continue to think about how to port over the ideas, theories and tools from the negotiation/collaboration field, into the open source space.

For open source communities, eliminating transaction costs is a source of strategic advantage – one that we should find ways to exploit ruthlessly.

Neo-Progressivism watch: online collectivism as the 3rd way that works

Just finished reading Kevin Kelly’s piece The New Socialism: Global Collectivist Society Is Coming Online in Wired Magazine. It talks about the same themes Taylor and I were trying to surface in Progressivism’s End, and I suspect we agree with Kelly in many regards.

Taylor and I talked about how the left (now old left) killed progressive politics and how progressive politics is re-emerging in new forms (I had wanted to use Mozilla as a mini-case, but came to it too late). Kelly’s piece deals less with the past and focuses exclusively on the nascent politics that is emerging in the online space:

We’re not talking about your grandfather’s socialism. In fact, there is a long list of past movements this new socialism is not. It is not class warfare. It is not anti-American; indeed, digital socialism may be the newest American innovation. While old-school socialism was an arm of the state, digital socialism is socialism without the state. This new brand of socialism currently operates in the realm of culture and economics, rather than government—for now.

When masses of people who own the means of production work toward a common goal and share their products in common, when they contribute labor without wages and enjoy the fruits free of charge, it’s not unreasonable to call that socialism.

Maybe. I think the socialism label takes the argument a bit far. Kelly’s piece portrays open source and collective online projects as disconnected from capitalism. Certainly in the case of open source, this is a strained argument. While motivations vary, many people who fund and contribute to Firefox do so because having an open browser allows the web – and all the commerce conducted on it – to be open and competitive. Same with Linux: between 75% and 90% of contributors are paid by their employers to contribute. As Amanda McPherson, director of marketing at the Linux Foundation, notes: “They’re not the guys in the basements, the hobbyists.” Consequently, many open-source projects are about preserving an open platform so that value can shift to another part of the system. It is about allowing for better, more efficient and more open markets – not about ending them.

Still more difficult to believe is Kelly’s assertion that “The more we benefit from such collaboration, the more open we become to socialist institutions in government.” If there is one political philosophy that is emerging among the online coders and hackers it isn’t socialism – it is libertarianism. I see no evidence that socialism is making a comeback – this is where Kelly’s use of the term hurts him the most. If we are seeing anything it is the re-emergence of the values of progressive politics: a desire for meritocracy, openness, transparency, efficiency and equality of opportunity. The means of achieving this is shifting, but not back towards socialism of any form.

One area where I strongly agree with Kelly is that neo-progressivism (or, as he prefers, the new socialism) is strongly pragmatic:

On the face of it, one might expect a lot of political posturing from folks who are constructing an alternative to capitalism and corporatism. But the coders, hackers, and programmers who design sharing tools don’t think of themselves as revolutionaries. No new political party is being organized in conference rooms—at least, not in the US. (In Sweden, the Pirate Party formed on a platform of file-sharing. It won a paltry 0.63 percent of votes in the 2006 national election.)

Indeed, the leaders of the new socialism are extremely pragmatic. A survey of 2,784 open source developers explored their motivations. The most common was “to learn and develop new skills.” That’s practical. One academic put it this way (paraphrasing): The major reason for working on free stuff is to improve my own damn software. Basically, overt politics is not practical enough.

As we wrote in an early draft of Progressivism’s End:

Having lost, or never gained, hope in either partisan politics or the political institutions that underlie the modern state, much of this generation has tuned out. Driven by outcomes, neo-progressives are tired of the malaise of New Deal institutions. Believing, but with a healthy dose of skepticism, in both the regulatory capacity of the state and the effectiveness of the market economy, they are put off by the absolutism of both the right and left. And, valuing pragmatism over ideology, they are embarrassed by partisan bickering.

The simple fact is that in a world that moves quickly, it is easier than ever to ascertain what works and what does not. This gives pragmatists a real advantage over theoretically driven ideologues who have a model of the world they want reality to conform to. Kelly may be right that, at some point, this neo-progressive (or new-socialist) movement will get political. But I suspect that will only be the case if their modes of production are threatened (hence the copyright wars). I suspect they will simply (continue to) ignore the political whenever possible – why get politicians involved if you can achieve results without them?

Open Cities: Popularity lessons for municipal politicians

Last Thursday I posted the Vancouver City motion that is being introduced today.

Prior to posting the motion, several of my friends wondered if the subjects of open data, open cities and open source were niche issues – ones that wouldn’t attract the attention or care of the media, not to mention citizens. I’m not sure that this is, as of yet, a mainstream issue, but there is a clear, vocal, engaged and growing constituency – one that is surprisingly broad – supporting it.

For politicians (who are often looking for media attention), open advocates (who are looking for ways to get politicians’ attention) and others (who usually care about, and want access to, some of the data the city collects), there are real wins to be had by putting forward a motion such as this.

To begin with, let’s look at the media and broader attention this motion has garnered to date:

First, a search of Twitter for the terms Vancouver and Open shows hundreds upon hundreds of tweets from around the world celebrating the proposal. Over the weekend, I tried to track down the various tweets relating to the motion; they number at least 500, and possibly exceed 1,000. What is most interesting is that some tweets included people saying that they wished they lived in Vancouver.

As an aside, have no doubt: City Hall sees this initiative in part as an effort to attract and retain talent. Paul Graham, who created a multimillion-dollar software company and is now a venture capitalist, summed it up best: “Great [programmers] also generally insist on using open source software. Not just because it’s better, but because it gives them more control… This is part of what makes them good: when something’s broken, they need to fix it. You want them to feel this way about the software they’re writing for you.” Vancouver is not broken – but it could always be improved, and Twitter confirms a suspicion I have: that programmers and creative workers in all industries are attracted to places that are open, because openness allows them to participate in improving where they live. Having a city that is attractive to great software programmers is a strategic imperative for Vancouver. Where there are great software programmers, there will be big software companies and start-ups.

Blogs, of course, have also started to get active. A sampling includes locals in the tech sector, such as David Asher of Mozilla and Duane Nickull of Adobe, as well as others interested in geographic data. Academics and public thinkers also took note on their blogs.

Then, the online tech magazines began to write about it too. ReadWriteWeb wrote this piece, ZDnet had this piece and my original blog post went to orange on Slashdot (a popular tech news aggregator).

Of course, traditional media was in the mix too. The Straight’s tech blog was onto the story very early with this piece; a national newspaper, the Globe and Mail, had this piece by Frances Bula (which has an unfortunate sensationalist title that has nothing to do with the content); and finally, today, the Vancouver Sun published this piece.

Still more interesting will be to see the number of supportive letters/emails and the diversity of their sources. I’ve already heard supportive letters coming from local technology companies, large international tech companies, local gardening groups and a real estate consulting firm. Each of these diverse actors sees ways to use city data to help lower the costs of their business, conduct better analysis or facilitate their charitable work.

In short, issues surrounding the open city – open data, open source software and open standards – are less and less restricted to the domain of a few technology enthusiasts. There is a growing and increasingly vocal constituency in support.

Update May 20th, 2009 more media links:

The libertarian Western Standard wrote this positive piece (apparently this is the City of Vancouver’s only good initiative).

I did a CBC radio interview on the afternoon of May 19th during the show On the Coast with Stephen Quinn. The CBC also published this piece on its news site.