Tag Archives: collaborative networks

Improving the tools of open source

It is really important to recognize that free software and open source spring not just from a set of licenses but from a set of practices and often those practices are embodied in the tools that we use. We think through the tools that we use and if you give people different tools they think differently.

– Tim O’Reilly, O’Reilly Radar update, OSCON 2007 (min 9:16 of 22:03)

For those coming to the Free Software and Open Source Symposium at Seneca College, and for those who are not, I wanted to riff off of O’Reilly because he is speaking precisely to something that I hope Dan Mosedale and I are going to dive into during our discussion.

The key is that while the four freedoms and the licenses are important, they are not the sum total of open source. Open source communities work because of the tools and practices we’ve developed. More importantly – as Tim points out – these tools shape our behaviour. Consequently, we should never treat the tools or practices in open source as assumptions, but rather as things that must be questioned and whose benefits and limitations must be understood. It is also why we must envision and innovate new tools.

This is why I blog and write on community management and collaboration in open source. I am trying to imagine ways to port over the ideas developed at the Harvard Negotiation practice into the open source space. I see a set of practices and tools that I believe could further enable, grow and foster effective communities. I believe it is a small, but important, piece of enabling the next generation of open source communities.

I know Dan enjoyed the presentation from last year and has some of his own thinking on this subject – with luck some interesting new insights will emerge which I promise to blog about.

Government Networks – Easy or Hard?

At the IPAC conference last week I did a panel on creating government networks. Prior to my contribution, fellow panelist Dana Richardson, an ADM with the Government of Ontario, presented on her experience with creating inter-government networks. Her examples were interesting and insightful. More interesting still was her conclusion: creating networks is difficult.

Networked Snail - a metaphor for government

What makes this answer interesting is not whether it is correct (I’m not sure it is) but how it is a window into the problematic manner by which governments engage in network-based activities.

While I have not studied Richardson’s examples I nonetheless have a hypothesis: these networks were difficult to create because they were between institutions. Consequently, those being networked together weren’t connecting because they saw value in the network but because someone in their organization (likely their boss) felt it was important for them to be connected. In short, network participation was contrived and mandated.

This runs counter to what usually makes for an effective network. Facebook, MySpace, the internet, fax machines, etc… these networks became successful not because someone ordered people to participate in them but because various individuals saw value in joining them and gauged their level of participation and activity accordingly. As more people joined, the more people found there was someone within the network with whom they wanted to connect – so they joined too.

This is because, often, a critical ingredient to successful networks is freedom of association. Motivated individuals are the best judges of what is interesting, useful and important to them. Consequently, when freedom of association exists, people will gravitate towards, and even form, epistemic communities with others who share, or can give them, the knowledge and experience they value.

I concede that you could be ordered to join a network, discover its utility, and then use it ever more. But in this day and age, when creating networks is getting easier and easier, people who want to self-organize increasingly can and do. This means the obvious networks are already emerging/have already emerged. This brings us back to the problem. The reason mandated networks don’t work is that their participants either don’t know how to work together or don’t see the value in doing so. For governments (and really, any large organization), I suspect both are at play. Indeed, there is probably a significant gap between the number of people who are genuinely interested in their field of work (and so who join and participate in communities related to their work), and the number of people on payroll working for the organization in that field.

This isn’t to say mandated networks can’t be created or aren’t important. However, described this way, Richardson’s statement becomes correct: they are hard to create. Consequently, you’d better be sure a network is important enough to justify creating it.

More interestingly, however, you might find that you can essentially create these networks without mandating them… just give your people the tools to find each other rather than forcing them together. You won’t get anywhere close to 100% participation, but those who see value in talking and working together will connect.

And if nobody does… maybe it is because they don’t see the value in it. If that is the case – all the networking in the world isn’t going to help. In all likelihood, you are probably asking the wrong question. Instead of: “how do we create a network for these people” try asking “why don’t they see the value in networking with one another.” Answer that, and I suspect you’ll change the equation.

more on segmenting open source communities

I wanted to follow up on yesterday’s post about the topology and segmentation of open source communities with one or two additional comments.

My friend Rahul R. reminded me of the fact that one critical reason we segment is to more effectively allocate time and resources. In a bell-shaped vision of the open source community (to get this you really have to read yesterday’s post) it would make sense to allocate the bulk of time to the centre (or average) user. But in a power law distribution, with a massive majority of community participants poorly networked to the project, a community may face an important dilemma: consolidate and focus on current community members, or invest in enabling a group so massive it may seem impossible to have an impact.

But as I reflect on it, this segmentation may create a false choice.

Concentrating on the more connected and active members may not be beneficial. A community needs to cultivate a pipeline of future users. Focusing on current community leaders at the expense of future community leaders will damage the project’s long-term viability. More importantly, as discussed yesterday, consolidating and insulating this group may actually create barriers to entry for new community members by saturating the current key members’ relationship capacity.

The reverse, however – concentrating on a mass of passive users and trying to transform them into more active community members – is a daunting task (especially when you are considering a user base of 20-30 million, or even just a beta tester community of 100,000 people). While I think there are a number of exciting things that one can and should do to tackle this segment, it can, and does, feel overwhelming. How can you have an impact?

The key may be to leverage the super users (or super-nodes – those who are more likely to be connected to people throughout the community) to create a culture that is more inclusive and participatory. Over the long term, successful open source communities will be those capable of not only drawing in new members, but networking them with key operators, decision makers and influencers so that new branches of the community are seeded.

I suspect this does not necessarily occur on its own. It requires an explicit strategy, supported by training, all of which must be aligned with the community’s values. This will be especially true as newer entrants will have a more diverse background (and set of goals and values) than the original community members. Possibly the most effective way to achieve this is to imbue the super-nodes within the community with a degree of openness to diversity, and a capacity for relationship cultivation and management, so as to create an open culture that functions well even with a diverse community.

So let’s segment the community – but let’s also use that segmentation to build skills, awareness, etc… in each segment that allows it to contribute to a strategy that transcends each individual segment. For an open source community, I would suggest that, at a minimum, this means offering some training around relationship management, dispute resolution, facilitation and mediation to its super-nodes – e.g. the people most directly shaping the community’s culture.

Open Source Communities – Mapping and Segmentation

I’ve just finished “Linked” by Albert-Laszlo Barabasi (review to come shortly) and the number of applications of his thesis is startling.

A New Map for Open Source Communities

The first that jumps to mind is how nicely the book’s main point provides a map that explains both the growth and structure of open source communities. Most people likely assume that networks (such as an open source community) are randomly organized – with lots of people in the network connected to lots of other people. Most likely, these connections would be haphazard, randomly created (perhaps when two people meet or work together) and fairly evenly distributed.

[Image: bell curve of node connections in a random network, from Linked]

If an open-source community was organized randomly and experienced random growth, participants would join the community and over time connect with others and generate new relationships. Some participants would opt to create more relationships (perhaps because they volunteer more time), others would create fewer relationships (perhaps because they volunteer less time and/or stick to working with the same group of people). Over time, the law of averages should balance out active and non-active users.

New entrants would be less active in part because they possess fewer relationships in the community (let’s say 1 or 2 connections). However, these new entrants would eventually become more active as they made new relationships and became more connected. As a result they would join the large pool of average community members who would possess an average number of connections (say 10 other people) and who might be relatively active. Finally, at the other extreme we would find veterans and/or super active members: a small band of relatively well-connected members who know a great many people (say 60 or even 80 people).

Map out the above described community and you get a bell curve (taken from the book Linked). A few users (nodes) with few links, a few better connected than the average, and the bulk of the community in the middle, with most people possessing more or less the same number of links and contributing more or less the same amount as everyone else. Makes sense, right?

Or maybe not. People involved in open-source communities will probably remark that their community’s participation levels do not look like this. This, according to Barabasi, should not surprise us. Many networks aren’t structured this way. The rules that govern the growth and structure of many networks – rules that create what Barabasi terms “scale-free networks” – create something that looks, and acts, very differently.

In the above graph we can talk about the average user (or node) with confidence. And this makes sense… most of us assume that there is such a thing as an average user (in the case of open-source movements, it’s probably a “he,” with a comp-sci background, and an avid Simpsons fan). But in reality, most networks don’t have an average node (or user). Instead they are shaped by what is called a “power law distribution.” This means that there is no “average” peak, but a gradually diminishing curve with many, many, many small nodes coexisting with a few extremely large nodes.

[Image: power law distribution of node connections, from Linked]

In an open source community this would mean that there are a few (indeed very few, in relation to the community’s size) power users and a large number of less active or more passive users.
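
For the technically inclined: the contrast is easy to see in a quick simulation. Below is a minimal sketch (Python, standard library only; all numbers are invented for illustration) that grows a community of 10,000 members under each rule. Under random attachment, the best-connected member ends up only modestly above the average – the bell curve. Under preferential attachment – where newcomers tend to link to already well-connected members – a handful of members accumulate hundreds of links while most have only a few: the power law.

```python
import random
from collections import Counter

def random_network(n, links_per_node):
    """Random attachment: each newcomer links to uniformly chosen earlier members."""
    degree = Counter({0: 0})
    for node in range(1, n):
        for _ in range(min(links_per_node, node)):
            partner = random.randrange(node)  # any earlier member, equally likely
            degree[node] += 1
            degree[partner] += 1
    return degree

def preferential_network(n, links_per_node):
    """Preferential attachment: newcomers tend to link to well-connected members."""
    degree = Counter({0: 0})
    endpoints = [0]  # each member appears here once per link, so sampling is degree-weighted
    for node in range(1, n):
        for _ in range(min(links_per_node, node)):
            partner = random.choice(endpoints)
            degree[node] += 1
            degree[partner] += 1
            endpoints += [node, partner]
    return degree

for name, build in (("random", random_network), ("preferential", preferential_network)):
    degrees = build(10_000, 2)
    avg = sum(degrees.values()) / len(degrees)
    print(f"{name:>12}: average links {avg:.1f}, best-connected member {max(degrees.values())}")
```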

Applying this description to the Firefox community we should find the bulk of users at the extreme left. People who – for example – are like me. They use Firefox and have maybe even registered a bug or two on Firefox’s Bugzilla webpage. I don’t know many people in the community and I’m not all that active. To my right are more active members, people who probably do more – maybe beta test or even code – and who are better connected in the community. At the very extreme end are the super-users (or super-nodes). These are people who contribute daily or are like Mike Shaver (bio, blog) and Mike Beltzner (bio, blog): paid employees of the Mozilla Corporation with deep connections into the community.

Indeed, Beltzner’s presentation on the Firefox community (blog post here, presentation here and relevant slides posted below) lists a hierarchy of participation levels that appears to mirror a power law distribution.

[Slide: hierarchy of participation levels in the Firefox community, from Beltzner’s presentation]

I think we can presume that those at the beginning of the slide set (e.g. Beltzner, the 40-member Mozilla Dev Team and the 100 Daily Contributors) are significantly more active and connected within the community than the Nightly Testers, Beta Testers and Daily Users. So the Firefox community (or network) may be more accurately described by a power law distribution.

Implications for Community Management

So what does this mean for open source communities? If Barabasi’s theory of networks can be applied to open source communities, there are at least three issues/ideas worth noting:

1. Scaling could be a problem

If open source communities do indeed look like “scale-free networks” then it may be harder than previously assumed to cultivate (and capitalize on) a large community. Denser “nodes” (e.g. highly networked and engaged participants) may not emerge. Indeed, the existence of a few “hyper-nodes” (super-users) may actually prevent new super-users (i.e. new leaders, heavy participants) from arising since new relationships will tend to gravitate towards existing hubs.

Paradoxically, the problem may be made worse by the fact that most humans can only maintain a limited number of relationships at any given time. According to Barabasi, new users (or nodes) entering the community (or network) will generally attempt to forge relationships with hub-like individuals (this is, of course, where the information and decision-making resides). However, if these hubs are already saturated with relationships, then these new users will have a hard time forging the critical relationships that will solidify their connection with the community.

Indeed, I’ve heard of this problem manifesting itself in open source communities. Those central to the project (the hyper-nodes) constantly rely on the same trusted people over and over again. As a result the relationships between these individuals get denser while the opportunities for forging new relationships (by proving yourself capable at a task) with critical hubs diminish.
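
To make the saturation effect concrete, here is a hypothetical extension of the earlier sketch: attachment is still preferential, but each member refuses new links once they reach a fixed relationship capacity (150 below, a nod to Dunbar-style limits – the numbers are invented, not measured). Because newcomers gravitate toward exactly the hubs most likely to be full, some share of their connection attempts simply bounce.

```python
import random
from collections import Counter

def capped_preferential(n, links_per_node, capacity):
    """Preferential attachment where saturated hubs turn newcomers away."""
    degree = Counter({0: 0})
    endpoints = [0]   # degree-weighted sampling pool, as before
    bounced = 0
    for node in range(1, n):
        for _ in range(min(links_per_node, node)):
            partner = random.choice(endpoints)
            if degree[partner] >= capacity:
                bounced += 1  # the hub the newcomer gravitated to is saturated
                continue
            degree[node] += 1
            degree[partner] += 1
            endpoints += [node, partner]
    return degree, bounced

degree, bounced = capped_preferential(10_000, 2, capacity=150)
print(f"connection attempts that bounced off saturated hubs: {bounced}")
```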

2. Segmentation model

Under a bell-shaped curve model of networks it made little sense to devote resources and energy to supporting and helping those who participate least because they made up a small proportion of the community. Time and energy would be devoted to enabling the average participant since they represented the bulk of the community’s participants.

A power law distribution radically alters the makeup of the community. Relatively speaking, there is an incredibly vast number of users/participants who are only passively and/or loosely connected to the community compared to the tiny cohort of active members. Indeed, as Beltzner’s slides point out: 100,000 beta testers and 20-30 million users vs. 100 daily contributors and 1,000 regular contributors.

The million-dollar question is how do we move people up the food chain? How do we convert users into beta testers, and contributors into daily contributors? Or, as Barabasi might put it: how do we increase node density generally and the number of super-nodes specifically? Obviously Mozilla and others already do this, but segmenting the community – perhaps into the groups laid out by Beltzner – and providing each segment with tools to not only perform well at that level, but that enable them to migrate up the network hierarchy, is essential. One way to accomplish this would be to have more people contributing to a given task; however, another possibility (one I argue for in an earlier blog post) is to simply open source more aspects of the project, including items such as marketing, strategy, etc…
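
As a rough illustration of what segmentation might look like in practice, here is a hypothetical sketch that buckets members into tiers by contribution count. The tier labels loosely mirror Beltzner’s hierarchy, but the thresholds and sample data are invented – a real community would derive its cut-offs from actual contribution data. The value of the exercise is that once members are segmented, each tier can be offered tools appropriate to it, and movement between tiers can actually be measured.

```python
from collections import Counter

# Hypothetical tier thresholds (minimum contributions in some period).
TIERS = [
    ("daily contributor", 200),
    ("regular contributor", 20),
    ("beta tester / occasional", 1),
    ("passive user", 0),
]

def tier_for(contributions):
    """Return the first (highest) tier whose minimum the member meets."""
    for label, minimum in TIERS:
        if contributions >= minimum:
            return label

# Made-up sample data: contributions per member over some period.
members = {"alice": 412, "bob": 35, "carol": 3, "dave": 0}
print(Counter(tier_for(c) for c in members.values()))
```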

3. Grease the network’s nodes

Finally, another way to overcome the potential scaling problem of open source is to improve the capacity of hubs to handle relationships, thereby enabling them to a) handle more and/or b) foster new relationships more effectively. This is part of what I was highlighting in my post about relationship management as the core competency of open source projects.

Conclusion

This post attempts to provide a more nuanced topology of open source communities by describing them as scale-free networks. The goal is not to assert that there is some limit to the potential of open source communities but instead to flag and describe possible structural limitations so as to begin a discussion on how they can be addressed and overcome. My hope is that others will find this post interesting and use its premise to brainstorm ideas for how we can improve these incredible communities.

As a final note, given the late hour, I’m confident there may be a typo or two in the text, possibly even a flawed argument. Please don’t hesitate to point either out. I’d be deeply appreciative. If this was an interesting read you may find – in addition to the aforementioned post on community management – this post on collaboration vs cooperation in open source communities to be interesting.

Open Cities – A Success…

Finally beginning to relax after a hectic week of speeches, work and helping out with the Open Cities unconference.

Open Cities was dynamite – it attracted an interesting cross-section of people from the arts, publishing, IT, non-profit and policy sectors (to name a few). This was my first unconference and so the most interesting takeaway was seeing how an openly conducted (un)conference – one with virtually no agenda or predetermined speakers – can work so well. Indeed, it worked better than most conferences I’ve been to. (Of course, it helps when it is being expertly facilitated by someone like Misha G.)

Here’s a picture chart of the agenda coming together mid-morning (thank you to enigmatix1 for the photos).

There was no shortage of panels convened by the participants. I know Mark K. is working on getting details from each of them up on the Open Cities wiki as quickly as possible. Hopefully these can be organized more succinctly in the near future (did I just volunteer myself?).

There were several conversations I enjoyed – I hope to share more on them over the coming days – but I wanted to start with the idea of helping grow the Torontopedia. The conversation was prompted by several people asking why Toronto does not have its own wiki (it does). Fortunately, Himy S. – who is creating the aforementioned Torontopedia – was on hand to share in the conversation.

A Toronto wiki – particularly one that leverages Google Maps’ functionality – could provide an endless array of interesting content. Indeed the conversation about what information could be on such a wiki forked many times over. Two ideas seemed particularly interesting:

The first idea revolved around getting the city’s history up on a wiki. This seemed like an interesting starting point. Such information, geographically plotted using Google Maps, would be a treasure trove for tourists, students and interested citizens. More importantly, there is a huge base of public domain content, hidden away in the city’s archives, that could kick-start such a wiki. The ramp-up costs could be kept remarkably low. The software is open source and the servers would not be that expensive. I’m sure an army of volunteer citizens would emerge to help transfer the images, stories and other media online. Indeed I’d wager a $100,000 grant from the Trillium Foundation, in connection with the City Archives, Historica and/or the Dominion Institute, as well as some local historical societies, could bring the necessary pieces together. What a small price to pay to give citizens unrestricted access to, and the opportunity to add to, the stories and history of their city.

The interesting part about such a wiki is that it wouldn’t have to be limited to historical data. Using tags, any information about the city could be submitted. As a result, the second idea for the wiki was to get development applications and proposals online so citizens can learn about how or if their neighborhoods will be changing and how they have evolved.

Over the course of this discussion I was stunned to learn that a great deal of this information is kept hidden by what – in comparison to Vancouver at least – is a shockingly secretive City Hall. In Vancouver, development applications are searchable online, printed out on giant billboards (see photo) and posted on the relevant buildings. According to one participant, Toronto has no such requirements! To learn anything about a development proposal you must first learn that it exists (unclear how this happens) and then go down to City Hall to look at a physical copy of the proposal (it isn’t online?). Oh, and you are forbidden to photocopy or photograph any documents. Heaven forbid people learn about how their neighbourhood might change…

Clearly a wiki won’t solve this problem in its entirety – as long as Toronto City Hall refuses to open up access to its development applications. However, collecting the combined knowledge of citizens on a given development will help citizens become more informed and hopefully enable them to better participate in decisions about how their neighbourhood will evolve. It may also create pressure on Toronto City Hall to start sharing this information more freely.

To see more photos, go to Flickr and search the tags for “open cities.”

Crisis Management? Try Open Source Public Service

Does anyone still believe that government services can’t be designed to rely on volunteers? Apparently so. We continue to build whole systems so that we don’t have to rely on people (take the bus system for example, it doesn’t rely on constant customer input – indeed I think it actively discourages it).

So I was struck the other day when I stumbled into an unfortunate situation that reminded me of how much one of our most critical support systems relies on ordinary citizens volunteering their time and resources to provide essential information.

Last Sunday night, through the rearview mirror, I witnessed a terrible car accident.

A block behind me, two cars collided at a 90-degree angle – with one car flipping end over end and landing on its roof in the middle of the intersection.

Although it was late in the evening there were at least 20-30 people on the surrounding streets… and within 5 seconds of the crash I saw the soft glow of over 15 cellphone LCD screens light up the night. Within 60 seconds, I could hear the ambulance sirens.

It was a terrible situation, but also an excellent example of how governments already rely on open systems – even to deliver essential, life-saving services. 911 services rely on unpaid, volunteer citizens to take the time and expend the (relatively low) resources to precisely guide emergency responders. It is an interesting counterpoint to government officials who design systems that pointedly avoid citizen feedback. More importantly, if we trust volunteers to provide information to improve an essential service, why don’t we trust them to provide a constant stream of feedback on other government services?

OpenCities and Seneca College

As many of you know I’m deeply interested in Open-Source systems and so was super thrilled when David Humphrey invited me over to Seneca College for a reception at the Centre for Development of Open Technology (CDOT). Who knew such a place existed. And in Toronto no less! There is something in the air around Toronto and open-source systems… why is that?

This is exactly one of the questions those of us planning OpenCities are hoping it will answer… (as our more formal blurb hints at)

What is OpenCities Toronto 2007? Our goal is to gather 80 cool people to ask how do we collaboratively add more open to the urban landscape we share? What happens when people working on open source, public space, open content, mash up art, and open business work together? How do we make Toronto a magnet for people playing with the open meme?

Registration for OpenCities starts today. If you have any questions please feel free to ask in the comment box below, or, drop me an email. I’m doubly pumped since the whole event will be taking place at the Centre for Social Innovation – I can’t imagine a better space. (If you are wondering whether I live in Toronto or Vancouver, I don’t blame you – I sometimes wonder myself.)

Don't Ban Facebook – Op-ed in today's G&M

You can download the op-ed here.

The Globe and Mail published an op-ed I wrote today on why the government shouldn’t ban Facebook, but hire it.

The point is that Web 2.0 technologies, properly used, can improve communication and coordination across large organizations and communities. If the government must ban Facebook then it should also hire it to provide a similar service across its various ministries. If not, it risks sending a strong message that it wants its employees to stay in their little boxes.

One thing I didn’t get into in the op-ed is the message this action sends to prospective (younger) employees. Such a ban is a great example of how the government sees its role as manager. Essentially, the public service is telling its employees: “we don’t trust that you will do your job and will waste your (and our) time doing (what we think are) frivolous things.” Who wants to work in an environment where their own boss doesn’t trust them? Does that sound like a learning environment? Does it sound like a fun environment?

Probably not.

—–

Facebook Revisited

DAVID EAVES
SPECIAL TO GLOBE AND MAIL
MAY 17, 2007 AT 12:38 AM EDT

Today’s federal and provincial governments talk a good game about public-service renewal, reducing hierarchy, and improving inter-ministry co-operation. But actions speak louder than words, and our bureaucracies’ instincts for secrecy and control still dominate their culture and frame their understanding of technology.

Last week, these instincts revealed themselves again when several public-service bureaucracies — including Parliament Hill and the Ontario Public Service — banned access to Facebook.

To public-service executives, Facebook may appear to be little more than a silly distraction. But it needn’t be. Indeed, it could be the very opposite. These technology platforms increasingly serve as a common space, even a community, a place where public servants could connect, exchange ideas and update one another on their work. Currently, the public service has a different way of achieving those goals: It’s called meetings, or worse, e-mail. Sadly, as anyone who works in a large organization knows, those two activities can quickly consume a day, pulling one away from actual work. Facebook may “waste time” but it pales in comparison to the time spent in redundant meetings and answering a never-ending stream of e-mails.

An inspired public service shouldn’t ban Facebook, it should hire it.

A government-run Facebook, one that allowed public servants to list their interests, current area of work, past experiences, contact information and current status, would be indispensable. It would allow public servants across ministries to search out and engage counterparts with specialized knowledge, relevant interests or similar responsibilities. Moreover, it would allow public servants to set up networks, where people from different departments, but working on a similar issue, could keep one another abreast of their work.

In contrast, today’s public servants often find themselves unaware of, and unable to connect with, colleagues in other ministries or other levels of government who work on similar issues. This is not because their masters don’t want them to connect (although this is sometimes the case) but because they lack the technology to identify one another. As a result, public servants drafting policy on interconnected issues — such as the Environment Canada employee working on riverbed erosion and the Fisheries and Oceans employee working on spawning salmon — may not even know the other exists.

One goal of public-sector renewal is to enable better co-operation. Ian Green, the Public Policy Forum chair of Public Service Governance, noted in an on-line Globe and Mail commentary (Ensuring Our Public Service Is A Force For Good In The Lives Of Canadians — May 8) that governments face “increasingly complex and cross-cutting issues … such as environmental and health policy.” If improving co-ordination and the flow of information within and across government ministries is a central challenge, then Facebook isn’t a distraction, it’s an opportunity.

Better still, implementing such a project would be cheap and simple. After all, the computer code that runs Facebook has already been written. More importantly, it works, and, as the government is all too aware, government employees like using it. Why not ask Facebook to create a government version? No expensive scaling or customization would be required. More importantly, by government-IT standards, it would be inexpensive.

It would certainly be an improvement over current government online directories. Anyone familiar with the federal government’s Electronic Directory Services (GEDS) knows it cannot conduct searches based on interests, knowledge or experience. Indeed, searches are only permissible by name, title, telephone and department. Ironically, if you knew any of that information, you probably wouldn’t need the search engine to begin with.

Retired public servants still talk of a time when ministries were smaller, located within walking distance of one another, and where everyone knew everyone else. In their day — 60 years ago — inter-ministerial problems were solved over lunch and coffee in a shared cafeteria or local restaurant. Properly embraced, technologies like Facebook offer an opportunity to recapture the strengths of this era.

By facilitating communication, collaboration and a sense of community, the public services of Canada may discover what their employees already know: Tools like Facebook are the new cafeterias, where challenges are resolved, colleagues are kept up to date, and inter-ministerial co-operation takes place. Sure, ban Facebook if you must. But also hire it. The job of the public services will be easier and Canadians’ interests will be more effectively served.

David Eaves is a frequent speaker and consultant on public policy and negotiation. He recently spoke at the Association of Professional Executives conference on Public Service Renewal.

Messina and Firefox

So I know I’m late to the party but wanted to contribute some thoughts to the Messina debate on Mozilla.

What I find most interesting are not the specifics of the discussion, but the principles being discussed and the manner by which they are being discussed.

Break Messina’s piece down and he is essentially making two assertions:

1. “I don’t understand Mozilla’s strategy” and (unsurprisingly!) here are my ideas
2. “Let the community rule”

The response has been fairly quiet. Some were clearly frustrated. Others saw it as an opportunity to raise their own pet issues. What I haven’t seen (on Planet Mozilla) is a post that really engages Chris’ ideas and says “I don’t agree with Chris on ‘a’ or ‘b’, but he’s right about ‘c.’ ” To be fair, it’s hard to react well to criticism – especially from someone you count on as an ally. When you spend your day fighting billion-dollar beasts you don’t exactly want to spend time and energy defending your rear.

However, the silence risks increasing the gap between Mozilla and those who agree with Chris (which, judging from his blog, may or may not be a fair number of people). I was struck that one commentator said: “I didn’t know somebody could talk like this about Firefox until now.” Such a comment should be a red flag. If the community has some sacred cows or self-censors itself, that’s a bad sign. For this, and other reasons, the thrust of Messina-like rants may have significant implications for the future of Mozilla.

The problem is that as the Mozilla community grows and the choices for where to concentrate resources become less and less ‘obvious,’ community members will increasingly want to be part of the strategic decision-making process. When the objective is clear – build a better open browser – it’s easy to allocate my scarce economic resources towards the project because the aim is obvious (so I either buy in or I don’t). But as success takes on a more nebulous meaning, I need to understand why I should allocate my time and energy. Indeed, I’m more likely to do so if a) I understand the goal and b) I know I can help contribute to deciding what the goal should be.

In this regard Mozilla needs to constantly re-examine how it manages strategy and engages with its community (which I know it does!). Personally, I agree with Messina that Mozilla is not a browser company. Indeed, in a previous (not entirely well formed) post, I argue that Mozilla isn’t even a software company. Mozilla is a community management organization. Consequently its core competency is not coding, but community management. The concern (I think) I share with Messina (if I read between the lines of his rant) is that as Mozilla grows and becomes more successful, the decisions it must make also become increasingly complex and involve higher stakes. The fear is that Mozilla will react by adopting more corporate decision-making processes because a) it’s familiar – everybody knows how this process works – and b) it’s easy – one can consult, but ultimately decisions reside with a few people who trust (or at least employ) one another.

However, if Mozilla is a community management organization then the opposite is true. Mozilla needs a way to treat its strategy like its code: open to editing and contribution. I know it already strives to do this. But what does open strategy 2.0 look like? What does community management 2.0 look like? Can Mozilla make its community integral to its strategy development? I believe that at its core, Mozilla’s success will depend on its capacity to facilitate these discussions (I may even use the dreaded term… dialogues). This may feel time-consuming and onerous, but it pales in comparison to the cost of losing community members (or not attracting them in the first place).

If Mozilla can crack this problem then rants like Messina’s won’t be a threat, they’ll be an opportunity. Or at least he’ll have a place where he can channel them.

New Book Review: Robert Axelrod's "The Evolution of Cooperation"

About 6 weeks ago, during a trip to Ottawa, David Brock urged me (for a second time!) to pick up a copy of Robert Axelrod’s “The Evolution of Cooperation.” As the title suggests it is a book about the conditions under which cooperation might emerge. While I’m willing to concede that this book may not be everyone’s cup of tea, it is still a fine cup I believe many would enjoy. Indeed, given how frustrating and empty game theory felt while I was in grad school, I wish I’d had this book at my desk.

I’ve written a review of the book you can find here. In short, I’m glad I moved it to the top of the batting order – it was completely worth it. Thanks D-Rock.