Tag Archives: community management

What Werewolf teaches us about Trust & Security

After sharing the idea behind this post with Bruce Schneier, I’ve been encouraged to think a little more about what Werewolf can teach us about trust, security and rational choices in communities that are, or are at risk of being, infiltrated by a threat. I’m not a security expert, but I do spend a lot of time thinking about negotiation, collaboration and trust, and so thought I’d pen some thoughts. The more I write below, the more I feel Werewolf could be a fun teaching tool. This is something I hope we can do “research” on at Berkman next week.

For those unfamiliar with Werewolf (also known as mafia), it’s very simple:

At the start of the game each player is secretly assigned a role by a facilitator. Typically there are 3 werewolves (who make up one team) and around 15 villagers, including one seer and one healer (who make up the other team).

Each turn of the game has two alternating phases. The first phase is “night,” during which everyone covers their eyes. The facilitator then “wakes” the werewolves, who agree on a single villager they “murder.” The werewolves then return to sleep. The seer then “wakes” up and points at one sleeping player, and the facilitator informs the seer whether that player is a werewolf or a villager. The seer then goes back to sleep. Finally the healer “wakes” up and selects one person to “heal.” If that person was chosen to be murdered by the werewolves during the night, they are saved and do not die.

The second phase is “day”; this starts with everyone “waking up” (uncovering their eyes). The facilitator identifies who has been murdered (assuming they were not healed). That person is immediately eliminated from the game. The surviving players – i.e. the remaining villagers and the werewolves hidden among them – then debate who among them is a werewolf. The “day” ends with a vote to eliminate a suspect (who is also immediately removed from the game).

Play continues until all of the werewolves have been eliminated, or until the werewolves outnumber the villagers.
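The turn structure above is simple enough to simulate. Here is a minimal sketch (the function names are mine, and the night kill, heal and day vote are purely random – i.e. unstrategic – which is an assumption, not how the game is actually played):

```python
import random

def winner(players):
    """Check the two end conditions: all wolves dead, or wolves outnumber villagers."""
    wolves = sum(r == "werewolf" for r in players)
    if wolves == 0:
        return "villagers"                 # all werewolves eliminated
    if wolves > len(players) - wolves:
        return "werewolves"                # werewolves outnumber villagers
    return None                            # game continues

def play_round(players):
    """One night/day cycle with random (unstrategic) choices."""
    # Night: werewolves murder a villager; the healer saves them only if
    # she happens to have picked the same person.
    villagers = [i for i, r in enumerate(players) if r != "werewolf"]
    victim = random.choice(villagers)
    if random.choice(villagers) != victim:  # healer guessed wrong
        players.pop(victim)
    # Day: the group votes to eliminate one suspect at random.
    players.pop(random.randrange(len(players)))

def simulate(n_wolves=3, n_villagers=15):
    players = ["werewolf"] * n_wolves + ["villager"] * n_villagers
    while winner(players) is None:
        play_round(players)
    return winner(players)
```

Running `simulate()` many times gives a crude baseline win rate for villagers who act at random – the strategies discussed below exist precisely to beat that baseline.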

You can see why Werewolf raises interesting questions about trust systems. Essentially, the game is about whether or not the villagers can figure out who is lying: who is claiming to be a villager but is actually a werewolf. This creates a lot of stress and theatre. With the right people, it is a lot of fun.

There are, however, a number of interesting lessons that come out of Werewolf that make it a fun tool for thinking about trust, organization and cooperation. Many strategies – including some that are ruthless – are quite rational under these conditions. Here are some typical ones:

1. Kill the Newbies

If you are playing Werewolf for the first time and people find out, the village will kill you. For first-time players – and I remember this well – it sucked. It felt deeply unfair… but on further analysis it is also rational.

Villagers have only a few rounds to figure out who the werewolves are, and there are strategies and tactics that greatly improve their odds. The less familiar you are with those strategies, the more you threaten the group’s ability to defeat the werewolves. This makes the calculus for dealing with newbies easy: at best the group is eliminating a werewolf; at worst it is eliminating someone who hurts its odds of winning. Hence, they get eliminated.

I’m assuming that similar behaviour takes place when a network gets compromised. Maybe new nodes are cut off quickly, leaving the established nodes to start testing one another to see if they can be trusted. Of course, the variable could be different; a threat could spark a network to sever connections to all nodes that, say, have outdated firmware. The point is that such activities, while sweeping, unfair and likely punishing many “innocent” members, can feel quite rational to those who are part of the group or network.

2. Noise Can be Helpful

The most important villager is the seer, since they are the only one who can know – with certainty – who is a werewolf and who is a villager. Their challenge is to communicate this information to other villagers without revealing who they are to the werewolves (who would obviously kill them during the next night).

Good seers first ask the facilitator about the person next to them, then the person on their other side, and then slowly work outward (see figure 1 below). If the person next to them is a villager, they can then confide in them (e.g. round 1). Good seers can thus start to build a “chain” of verified villagers (rounds 2-3) who, as a voting bloc, can protect one another and eliminate suspected (or, better, identified) werewolves at the end of each “day.”

Figure 1

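The outward spiral in figure 1 is easy to express as code. A toy sketch (the seat numbering, function names and `roles` list are mine, and the facilitator’s answer is reduced to a simple lookup):

```python
def check_order(n_players, seer, n_nights):
    """Seats the seer asks about: nearest neighbours first, spiralling
    outward one seat per side, one check per night."""
    order = []
    for step in range(1, n_nights + 1):
        order.append((seer + step) % n_players)   # next seat to one side
        order.append((seer - step) % n_players)   # next seat to the other
    return order[:n_nights]

def trust_chain(n_players, seer, n_nights, roles):
    """Verified villagers the seer can safely confide in after n_nights checks."""
    return [seat for seat in check_order(n_players, seer, n_nights)
            if roles[seat] == "villager"]
```

In play the chain grows faster than this sketch suggests, because each verified villager can in turn vouch to their own neighbours.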

This strategy, however, is predicated on the seer being able to safely communicate with those on their left and right. Naturally, werewolves are on the lookout for this behaviour. A player who keeps discreetly talking to those on their left and right makes themselves a pretty obvious target for the werewolves. Thus it is essential during each round that everyone talk to the person on their left and right, regardless of whether they have anything relevant to say. Getting everyone to talk creates noise that anonymizes communication and interferes with the werewolves’ ability to target the seer.

This is a wonderful example of a simple counter-surveillance tactic. Everybody engages in a behaviour so that it is impossible to find the one person doing it who matters. It was doubly interesting for me as I’ve normally seen noise (e.g. unnecessary communication) as a problem – and rarely as a form of counter-power.

Moreover, in a hostile environment, this form of trust building needs to happen discreetly. The werewolves have the benefit of being both anonymous (hidden from the villagers) and highly connected (they know who the other werewolves are). The above strategy focuses on defeating the werewolves by creating a parallel network of villagers who are equally anonymous and highly connected but, over time, greater in number.

3. Structured and Random Stress Tests

The good news for villagers is that many people are terrible liars. Being a werewolf is hard, in part because it is fun. You have knowledge and power. Many people get giddy (literally!). They laugh or smirk or overly compensate by being silent. And some… are liable to say something stupid.

As a result, in the first round players will often insist that everyone introduce themselves and state their role – e.g. “Hi, my name is David Eaves and I’m a villager.” You’d be surprised how many people screw up. On rare occasions people will omit their role, or stumble over it, or pause to think about it. This is a surefire way of getting eliminated. It comes back to lesson 1: with poor information, any signal that you might be a werewolf is probably worth acting on. Werewolf: it’s a harsh, ruthless world.

This may be an interesting example of why ritual and consistency become prized in a community. It is also a caution about the high transaction costs created by low-trust environments (e.g. ones where you worry the person you are talking to is lying). I’ve heard of (and have experienced first hand) border guards employing a form of the above strategy. This includes yelling at someone and intimidating them to the point where they confess to some error. If a small transgression is admitted to, it can be used as leverage to gain larger confessions or to simply remove the person from the network (or, say, deny them entry into the country).

However, I suspect this strategy has diminishing returns. People who haven’t screwed up in the first two rounds probably aren’t going to. I also suspect perpetuating this strategy is something werewolves love, because it is an approach devoid of fact. Ultimately any minor deviation from an undefined “right” answer becomes justification for eliminating people – thus the werewolves can convince villagers to eliminate people for trivial reasons, rather than spend their time looking at who is eliminating whom, and who is coming to whose aid in debate – patterns that are likely more effective at revealing the werewolves.

A note on physical setup

Virtually every time I’ve played Werewolf it has been in a room, with the players sitting around a large table. This has meant that a given player can only talk discreetly with the players to their left and right. I have once played in a living room where people were basically in an unstructured heap.

What’s interesting is that I suspect that unstructured groups aid the werewolves. The seer strategy outlined in section 2 would be much more difficult to execute in a room where people could roam. A group of people that clustered around a single player would quickly become obvious. There are probably strategies that could be devised to overcome this, but they would probably be more complicated to execute, and so would create further challenges for the villagers.

So perhaps some rigidity to the structure of a community or network can go a long way to making it easier to build trust. This feels right to me, but I’m not sure what more to add on this.

All of this is a simple starting point (I’m sure I have few readers left at this point). But it would be fun to think of more ways that Werewolf could be used as a fun teaching tool around networks, trust and power. Definitely interested in hearing more thoughts.

Mozillians: Announcing Community Metrics DashboardCon – January 21, 2014

Please read background below for more info. Here’s the skinny.

What

A one day mini-conference for Mozillians about community metrics and dashboards, held in San Francisco on January 21st and 22nd, 2014 (remote participation possible; originally slated for Vancouver on January 14th).

Update: Apologies for the change of date and location, this event has sparked a lot of interest and so we had to change it so we could manage the number of people.

Why?

It turns out that in the past 2-3 years a number of people across Mozilla have been tinkering with dashboards and metrics in order to assess community contributions, effectiveness, bottlenecks, performance, etc. For some people this is their job (looking at you, Mike Hoye), for others it is something they arrived at by necessity (looking at you, SUMO group), and for others it was just a fun hobby or experiment.

Certainly I (and I believe co-collaborators Liz Henry and Mike Hoye) think metrics in general and dashboards in particular can be powerful tools, not just to understand what is going on in the Mozilla community, but as a way to empower contributors and reduce the friction of participating at Mozilla.

And yet as a community of practice, I’m not sure those interested in converting community metrics into some form of measurable output have ever gathered together. We’ve not exchanged best practices, aligned around a common nomenclature or discussed the impact these dashboards could have on the community, management and other aspects of Mozilla.

Such an exercise, we think, could be productive.

Who

Who should come? Great question. Pretty much anyone who is playing around with metrics around community, participation, or something parallel at Mozilla. If you are interested in participating, please sign up here.

Who is behind this? I’ve outlined more in the background below, but this event is being hosted by myself, Mike Hoye (engineering community manager) and Liz Henry (bugmaster).

Goal

As you’ve probably gathered the goals are to:

  • Get a better understanding of what community metrics and dashboards exist across Mozilla
  • Learn about how such dashboards and metrics are being used to engage, manage or organize communities and/or influence operations
  • Exchange best practices around both the development and the use/application of dashboards and metrics
  • Stretch goal – begin to define some common definitions for metrics that exist across Mozilla to enable portability of metrics across dashboards.

Hope this sounds compelling. Please feel free to email or ping me if you have questions.

—–

Background

I know that my co-collaborators – Mike Hoye and Liz Henry – have their own reasons for ending up here. I, as many readers know, am deeply interested in understanding how open source communities can combine data and analytics with negotiation and management theory to better serve their members. This was the focus of my keynote at OSCON in 2012 (posted below).

For several years I tried, with minimal success, to create some dashboards that might provide an overview of the community’s health as well as diagnose problems that were harming growth. Despite my own limited success, it has been fascinating to see how more and more individuals across Mozilla – some developers, some managers, others just curious observers – have been scraping data they control or can access to create dashboards to better understand what is going on in their part of the community. The fact is, there are probably at least 15 different people running community-oriented dashboards across Mozilla – and almost none of us are talking to one another about it.

At the Mozilla Summit in Toronto, after speaking with Mike Hoye (engineering community manager) and Liz Henry (bugmaster), I proposed that we do a low-key mini-conference to bring together the various Mozilla stakeholders in this space. Each of us would love to know what others at Mozilla are doing with dashboards and to understand how they are being used. We figured if we wanted to learn from others who were creating and using dashboards and community metrics data – they probably do too. So here we are!

In addition to Mozillians, I’d also love to invite an old colleague, Diederik van Liere, who looks at community metrics for the Wikimedia foundation, as his insights might also be valuable to us.

http://www.youtube.com/watch?v=TvteDoRSRr8

Mission Driven Orgs: Don’t Alienate Alumni, Leverage Them (I’m looking at you, Mozilla)

While written for Mozilla, this piece really applies to any mission-driven organization. In addition, if you are media, please don’t claim this is written by Mozilla. I’m a contributor, and Mozilla is at its best when it encourages debate and discussion. This post says nothing about official Mozilla policy, and I’m sure there are Mozillians who will agree and disagree with me.

The Opportunity

Mozilla is an amazing organization. With a far smaller staff than its competitors, and aided by a community of supporters, it not only competes with the Goliaths of Silicon Valley but uses its leverage whenever possible to fight for users’ rights. This makes it simultaneously a world-leading engineering firm and, for most who work there, a mission-driven organization.

That was on full display this weekend at the Mozilla Summit, taking place concurrently in Brussels, Toronto and Santa Clara. Sadly, so was something else. A number of former Mozillians, many of whom have been critical to the organization and community, were not participating. They either weren’t invited, or did not feel welcome. At times, it’s not hard to see why:

[Image: “You chose Facebook”]

Again this is not an official Mozilla response. And that is part of the problem. There has never been much of an official or coordinated approach to dealing with former staff and community members. And it is a terrible, terrible lost opportunity – one that hinders Mozilla from advancing its mission in multiple ways.

The main reason is this: The values we Mozillians care about may be codified in the Mozilla Manifesto, but they don’t reside there. Nor do they reside in a browser, or even in an organization. They reside in us. Mozilla is about creating power by fostering a community of people who believe in and advocate for an open web.

Critically, the more of us there are, the stronger we are. The more likely we will influence others. The more likely we will achieve our mission.

And power is precisely what many of our alumni have in spades. Given Mozilla’s success, its brand, and its global presence, Mozilla’s contributors (both staff and volunteers) are sought-after – from startups to the most influential companies on the web. This means there are Mozillians influencing decisions – often at the most senior levels – at companies that Mozilla wants to influence. Even if these Mozillians only injected 5% of what Mozilla stands for into their day-to-day lives, the web would still be a better place.

So it raises the question: What should Mozilla’s alumni strategy be? Presently, from what I have seen, Mozilla has no such strategy. Often, by accident or neglect, alumni are left feeling guilty about their choice. We let them – and sometimes prompt them to – cut not just their ties with Mozilla but (more importantly) the personal connection they felt to the mission. This at the very moment when they could be some of the most important contributors to our mission. To say nothing of their continuing to contribute their expertise in coding, marketing or any number of other skills they may have.

As a community, we need to accept that as amazing as Mozilla (or any non-profit) is, most people will not spend their entire career there nor volunteer forever. Projects end. Challenges get old. New opportunities present themselves. And yes, people burn out on a mission – which doesn’t mean they no longer believe in it – they are just burned out. So let’s not alienate these people; let’s support them. They could be one of our most important advantages. (I mean, even McKinsey keeps an alumni group, and that is just so they can sell to them… we can offer so much more meaning than that. And they can offer us so much more than that.)

How I would do it

At this point, I think it is too late to start a group and hope people will come. I could be wrong, but I suspect many feel – to varying degrees – alienated. We (Mozilla) will probably have to do more than just reach out a hand.

I would find three of the most respected, most senior Mozillians who have moved on and I’d reach out privately and personally. I’d invite them to lunch individually. And I’d apologize for not staying more connected with them. Maybe it is their fault, maybe it is ours. I don’t care. It’s in our interests to fix this, so let’s look inside ourselves and apologize for our contribution as a way to start down the path.

I’d then ask them if they would be willing to help oversee an alumni group, and if they would reach out to their networks and, with us, bring these Mozillians back into the fold.

There is ample opportunity for such a group. They could be hosted once a year and be shown what Mozilla is up to and what it means for the companies they work for. They could open doors to C-suite offices. They could mentor emerging leaders in our community and they could ask for our advice as they build new products that will impact how people use the web. In short, they could be contributors.

Let’s get smart about cultivating our allies – even those embedded in organizations we don’t completely agree with. Let’s start thinking about how we tap into and keep alive the values that made them Mozillians in the first place, and find ways to help them be effective in promoting those values.

Making Bug Fixing more Efficient (and pleasant) – This Made Me Smile

The other week I was invited down to the Bay Area Drupal Camp (#BadCamp) to give a talk on community management to a side meeting of the 100 or so core Drupal developers.

I gave an hour-long version of my OSCON keynote on the Science of Community Management and had a great time engaging what was clearly a room of smart, caring people who want to do good things, ship great code, and work well with one another. As part of my talk I ran them through some basic negotiation skills – particularly around separating positions (a demand) from interests (the reasons/concerns that created that demand). Positions are challenging to work with, as they tend to lock people into what they are asking, making outcomes either binary or fostering compromises that may make little sense. Interests (which you get at by being curious and asking lots of whys), by contrast, can create the conditions for creative, value-generating outcomes that also strengthen the relationship.

Obviously, understanding the difference is key, but so is acting on it – e.g. asking questions at critical moments to try to open up the dialogue and uncover interests.

Seems like someone was listening during the workshop, since I was just sent this link to a conversation about a tricky Drupal bug (screenshot below).

[Screenshot: Drupal bug-fixing conversation]

I love the questions. This is exactly the type of skill and community norm I think we need to build into more of our bug-tracking environments/communities, which can sometimes be pretty hostile and aggressive – something that I think turns off many potentially good contributors.

Why Banning Anonymous Comments is Bad for Postmedia and Bad for Society

Last night I discovered that my local newspaper – the Vancouver Sun – was going to require users log in with Facebook to comment. It turns out that this will be true of all Postmedia newspapers.

I’m stunned that a newspaper’s ownership would make such a move. Even more so that editors and journalists would support it. We should all be disappointed when the fourth estate is unable to recognize it is dis-empowering those who are most marginalized – especially when there are better alternatives at one’s disposal. (For those interested in this I also recommend reading Mathew Ingram’s post, Anonymity Has Value, In Comments and Elsewhere, from over a year ago.)

So what’s wrong with forcing users to sign in via Facebook to comment?

First, you have to be pretty privileged to believe that forcing people to use their real names will improve comments. Yes, there are a lot of people who use anonymity to troll or say stupid things, but there are also many people who – for very legitimate reasons – don’t want to use their real name.

What supporters of banning anonymity are saying is not just that they oppose trolls (I do too!) but that, for the sake of “accountability,” we must also know the name of a recovering sexual abuse victim who wants to share their personal perspective on a story in the comments. Or that we (and thus also their boss) should get to know the name of an employee who wants to share information about illegal or unethical practices they have seen at work. It also means that a comment you make can sit on a newspaper’s website for ten years, be traced back to your Facebook account, and so be used by a prospective employer to decide if you should get a job.

What ending anonymity is really about is power. Now, those who can comment will (even more so) be disproportionately those who have the income and social security to know they can voice their concern in public, safely. So I’m confident that this move will reduce trolls – but it will also snuff out the voices of those who are most marginalized. And journalists clearly understand the power dynamics of our society and the important role anonymity plays in balancing them – this is why they use anonymous sources to get scoops and dig up stories. So how newspapers as an institution, and journalists as a profession, see narrowing the opportunity for those most marginalized to challenge power and authority in the comments section as being consistent with their mission, I cannot explain.

There are, of course, far better ways of handling comments. The CBC does a quite decent job of letting people vote comments up and down – this means I rarely see the worst trolls, and many thoughtful comments rise to the top. The Globe does an adequate job at this as well. Mechanisms such as these are far less draconian than “outlawing” anonymity and preserve room for those most impacted or marginalized.
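Mechanically, the voting systems described above amount to little more than a score sort plus a hide threshold. A toy sketch (the field names and threshold value are my assumptions, not how the CBC or the Globe actually implement it):

```python
def rank_comments(comments, hide_below=-3):
    """Sort comments by net score, hiding heavily downvoted ones.

    Note that anonymity stays intact: moderation acts on each comment's
    score, not on the commenter's identity."""
    visible = [c for c in comments if c["ups"] - c["downs"] > hide_below]
    return sorted(visible, key=lambda c: c["ups"] - c["downs"], reverse=True)
```

The point of the sketch is that the community does the moderating: trolls sink below the threshold without anyone needing to know who wrote anything.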

But let me go further. Journalists and editors often complain about the comments section being wild. Well, how often do they take even the tiniest bit of energy to engage their commenters? There are plenty of sites that allow anonymous comments with fantastic results – see Flickr or Reddit – but this is because those sites invested in creating norms and engaging their users. When has a journalist or commentator in this country ever decided to invest themselves in engaging their readers and commenters on a regular and ongoing basis in the comments section? While I’m sure there are important exceptions, by and large the answer is almost never. Indeed, I’m always stunned by the number of journalists and commentators I talk to who more or less hold much of their audience in contempt – seeing them as wild. No wonder the comment section has run amok – we can pretend otherwise, but the commenters know you don’t respect them. If newspapers are not happy with their comment sections, they really have no one to blame but themselves. This is, after all, the community they created, the norms they fostered, the result of the investments they made. Sloughing it all off to Facebook not only runs counter to their mission but is also a shirking of responsibility (and business opportunity) of the highest order.

Of course, handing the problem to Facebook won’t solve it either. At last count, over 80 million Facebook accounts were estimated to be fake. Expect that number to go up. And of course, the people most happy to create a fake account are going to be the trolls who want to use it regularly, not the lone commenter who has an important perspective on a story but doesn’t want to tell the world who they are out of fear of social stigma or worse.

What’s worse, Postmedia has now essentially farmed its privacy policy out to Facebook. Presently that means that, in theory, you can’t be anonymous. But what will it mean in the future? Postmedia can’t tell you. They can’t even influence it.

For an organization managing discussions as sensitive as newspapers do, that is a pretty shocking stance to take. Who knows what future decisions about privacy Facebook is going to make. But here’s what I do know: I trusted the National Post a hell of a lot more to manage my comments and identity than I do Facebook, because their missions are totally different. In the end, this could be bad not just for comments, but for Postmedia. Many people are already pretty uncomfortable with Facebook’s policies; I expect more will become so. Even if they don’t comment, I suspect readers will be drawn to sites that engage them more effectively – a newspaper that has outsourced its engagement to Facebook will probably lose out.

I get that Postmedia believes its job of managing comments will become easier because it has outsourced identity management to Facebook – but it has come at a real cost, one that I think is unacceptable for a newspaper. In the end, I think the quality of engagement and of discussion at Postmedia will suffer. That will be bad for it, but it will also be bad for society in general.

And that is sad news for all of us.

Added @ 9:27am PST. Note: Some Postmedia journalists want to make clear that this decision was a corporate one, not theirs.

Community Managers: Expectations, Experience and Culture Matter

Here’s an awesome link to drive home my point from my OSCON keynote on Community Management, particularly the part where I spoke about the importance of managing wait times – the period between when a volunteer/contributor takes an action and when they get feedback on that action.

In my talk I referenced code review wait times. For non-developers: in open source projects, a volunteer (contributor) will often write a patch, which must be reviewed by someone who oversees the project before it gets incorporated into the software’s code base. This is akin to a quality assurance process – say, like if you are baking brownies for the church charity event, the organizer probably wants to see the brownies first, just to make sure they aren’t a disaster. The period between when you write the patch (or make the brownies) and when the project manager reviews it and says it is ok/not ok – that’s the wait time.

The thing is, if you never tell people how long they are going to have to wait, expect them to get unhappy. More importantly, if, while they’re waiting, other contributors come and make negative comments about their contributions, don’t be surprised if they get even more unhappy and become less and less inclined to submit patches (or brownies, or whatever makes your community go round).

In other words, your code base may be important, but expectations, experience and culture matter, probably more. I don’t think anyone believes Drupal is the best CMS ever invented, but its community has pretty good expectations, a great experience and a fantastic culture, so I suspect it kicks the ass of many “technically” better CMSs run by lesser-managed communities.

Because hey, if I’ve come to expect that I have to wait an infinite or undetermined amount of time, if the experience I have interacting with others sucks, and if the culture of the community I’m trying to volunteer with is not positive… guess what. I’m probably going to stop contributing.

This is not rocket science.

And you can see evidence of people who experience this frustration in places around the net. Edd Dumbill sent me this link via hacker news of a frustrated contributor tired of enduring crappy expectations, experience and culture.

Here’s what happens to pull requests in my experience:

  • you first find something that needs fixing
  • you write a test to reproduce the problem
  • you pass the test
  • you push the code to github and wait
  • then you keep waiting
  • then you wait a lot longer (it’s been months now)
  • then some ivory tower asshole (not part of the core team) sitting in a basement finds a reason to comment in a negative way.
  • you respond to the comment
  • more people jump on the negative train and bury your honestly helpful idea in sad faces and unrelated negativity
  • the pull dies because you just don’t give a fuck any more

If this is what your volunteer community – be it software driven, or for poverty, or a religious org, or whatever – is like, you will bleed volunteers.

This is why I keep saying things like code review dashboards matter. I bet if this user could at least see what the average wait time is for code review he’d have been much, much happier. Even if that wait time were a month… at least he’d have known what to expect. Of course improving the experience and community culture are harder problems to solve… but they clearly would have helped as well.
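The metric itself is trivial to compute once you have submission and first-review timestamps for each patch. A minimal sketch (the dict field names are hypothetical, not any real bug tracker’s schema):

```python
from datetime import datetime, timedelta

def average_wait_days(patches):
    """Mean days between a patch being submitted and its first review.

    Patches still awaiting a first review are excluded here; on a real
    dashboard they would warrant their own 'currently waiting' metric."""
    waits = [(p["first_review"] - p["submitted"]) / timedelta(days=1)
             for p in patches if p.get("first_review")]
    return sum(waits) / len(waits) if waits else None
```

Publishing even this one number would let a contributor calibrate their expectations – which, as argued above, is most of the battle.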

Most open source projects have the data to set up such a dashboard, it is just a question of if we will.

Okay, I’m late for an appointment, but really wanted to share that link and write something about it.

NB: Apologies if you’ve already seen this. I accidentally published this as a page, not a post, on August 24th, so it escaped most people’s view.

OSCON Community Management Keynote Video, Slides and some Bonus Material

I want to thank everyone who came to my session and who sent me wonderful feedback on both the keynote and the session. I was thrilled to see ZDNet wrote a piece about the keynote, and to have practitioners, such as Sonya Barry, the Community Manager for Java, write things like this about the longer session:

Wednesday at OSCON we kicked off the morning with the opening plenaries. David Eaves’ talk inspired me to attend his longer session later in the day – Open Source 2.0 – The Science of Community Management. It was packed – in fact the most crowded session I’ve ever seen here. People sharing chairs, sitting on every available spot on the floor, leaning up against the back wall and the doors. Tori did a great writeup of the session, so I won’t rehash, but if you haven’t, you should read it – What does this have to do with the Java Community? Everything. Java’s strength is the community just as much as the technology, and individual project communities are so important to making a project successful and robust.

That post pretty much made my day. It’s why we come to OSCON, to hopefully pass on something helpful, so this conference really felt meaningful to me.

So, to be helpful I wanted to lay out a bunch of the content for those who were and were not there in a single place, plus a fun photo of my little guy – Alec – hanging out at #OSCON.

A Youtube video of the keynote is now up – and I’ve posted my slides here.

In addition, I did an interview in the O’Reilly booth; if it goes up on YouTube, I’ll post it.

There is no video of my longer session, formally titled Open Source 2.0 – The Science of Community Management, but informally titled Three Myths of Open Source Communities, but Jeff Longland helpfully took these notes and I’ll try to rewrite it as a series of blog posts in the near future.

Finally, I earlier linked to some blog posts I’ve written about open source communities, and on open source community management as these are a deeper dive on some of the ideas I shared.

Some other notes about OSCON…

If you didn’t catch Robert “r0ml” Lefkowitz’s talk – How The App Store Killed Free Software, And Why We’re OK With That – do try to track down an audio copy. Contrary to some predictions it was neither trolling nor link bait, but a very thoughtful talk that I did not entirely agree with and that has left me with many, many things to think about (the sign of a great talk).

Jono Bacon, Brian Fitzpatrick and Ben Collins-Sussman are all mensches of the finest type – I’m grateful for their engagement and support given I’m late arriving at a party they all started. While you’re at it, check out Brian and Ben’s new book – Team Geek: A Software Developer’s Guide to Working Well with Others.

Also, if you haven’t watched Tim O’Reilly’s opening keynote, The Clothesline Paradox and the Sharing Economy, take a look. My favourite part is him discussing how we break down the energy sector and claim “solar” provides only a tiny fraction of our energy mix (around the 9-minute mark). Of course, pretty much all energy is solar, from the stuff we count (oil, hydroelectric, etc. – it’s all made possible by solar) to the stuff we don’t count, like growing our food. Loved that.

Oh, and this ignite talk on Cryptic Crosswords by Dan Bentley from OSCON last year remains one of my favourites. I didn’t get to catch his talk this year on why the metric system sucks – but am looking forward to seeing it once it is up on YouTube.

Finally, cause I’m a sucker dad, here’s an early attempt to teach my 7-month-old chess while hitting the OSCON booth hall. As his tweet says “Today I may be a mere pawn, but tomorrow I will be the grandmaster.”

Alec-Chess

Lessons for Open Source Communities: Making Bug Tracking More Efficient

This post is a discussion about making bug tracking in Bugzilla for the Mozilla project more efficient. However, I believe it is applicable to any open source project or even companies or governments running service desks (think 311).

Almost exactly a year ago I wrote a blog post titled Some thoughts on improving Bugzilla, in which I made several suggestions for improving the workflow in Bugzilla. Happily, a number of those ideas have been implemented.

One however, remains outstanding and, I believe, creates an unnecessary amount of triage work as well as a terrible experience for end users. My understanding is that while the bug could not be resolved last year for a few reasons, there is growing interest (exemplified originally in the comment field of my original post) to tackle it once again. This is my attempt at a rallying cry to get that process moving.

For those who are already keen on this idea and don’t want to read anything more below, this refers to bug 444302.

The Challenge: Dealing with Support Requests that Arrive in Bugzilla

I first had this idea last summer while talking to the triage team at the Mozilla Summit. These are the guys who look at the firehose of bugs being submitted to Mozilla every day. They have a finite amount of time, so anything we can do to automate their work is going to help them, and the project, out significantly.

Presently, I’m told that Mozilla gets a huge number of bugs submitted that are not actually bugs, but support issues. This creates several challenges.

First, it means that support related issues, as opposed to real problems with the software, are clogging up the bug tracking system. This increases the amount of noise in the system – making it harder for everyone to find the information they need.

Second, it means the triage team has to spend time filtering bugs that are actually support issues. Not a good use of their time.

Third, it means that users who have real support issues but submit them accidentally through Bugzilla get a terrible experience.

This last one is a real problem. Imagine you are a user, feeling frustrated (and possibly not behaving as your usual rational self – we’ve all been there) because your software is not working the way you expect. You submit a report, a triage person marks it as a support issue (Resolved–Invalid), and you get an email that looks like this:


If I’m already cheesed that my software isn’t doing what I want, getting an email that says “Invalid” and “Verified” is really going to cheese me off. That of course presumes I even know what this email means. More likely, I’ll be thinking that some ancient machine in the bowels of Mozilla, using software created in the late 1990s, received my plea and has, in its 640K confusion, spammed me. (I mean, look at it from a user’s perspective!)

The Proposal: Re-Automating the Process for a better result

Step 1: My sense is that this issue – especially problem #3 – could be resolved by simply creating a new resolution field. I’ve opted to call it “Support” but am happy to name it something else.

This feels like a simple fix and it would quickly move a lot of bugs that are cluttering up bugzilla… out.

Step 2: Query the text of bugs marked “support” against Mozilla’s database. Then insert the results in an email that goes back to the user. I’m imagining something that might look like this:

SUMO-transfer-v2

Such an email has several advantages:

First, if these are users who’ve submitted inappropriate bugs and who really need support, giving them a Bugzilla email isn’t going to help them; they aren’t even going to know how to read it.

Second, there is an opportunity to explain to them where they should go for help – I haven’t done that explicitly enough in this email – but you get the idea.

Third, because we’ve done a query of the Mozilla support database (SUMO), we are able to include some support articles that might resolve their issue.

Fourth, if this really is a bug from a more sophisticated user, we give them a hyperlink back to bugzilla so they can make a note or comment.

What I like about this is it is customized engagement at a low cost. More importantly, it helps unclutter things while also making us more responsive and creating a better experience for users.
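The two steps above could be sketched roughly as follows. The helper names and the knowledge-base search are hypothetical stand-ins, since a real implementation would query SUMO directly:

```python
def compose_support_email(bug_summary, search_kb):
    """Build a friendly reply for a bug triaged as a support request.

    `search_kb` is a stand-in for a real query against the SUMO
    knowledge base; here it just returns (title, url) pairs.
    """
    articles = search_kb(bug_summary)[:3]  # keep the top three matches
    lines = [
        "Thanks for your report! It looks like a support question,",
        "so we've moved it to support.mozilla.com. These articles",
        "may resolve your issue:",
        "",
    ]
    lines += [f"  * {title}: {url}" for title, url in articles]
    lines += [
        "",
        "If this really is a software bug, you can comment on it here:",
        "  https://bugzilla.mozilla.org/show_bug.cgi?id=NNNNNN",
    ]
    return "\n".join(lines)

# Hypothetical knowledge-base search results for demonstration.
def fake_search(query):
    return [("Firefox is slow", "https://support.mozilla.com/kb/slow"),
            ("Clear your cache", "https://support.mozilla.com/kb/cache")]

print(compose_support_email("Firefox runs slowly", fake_search))
```

The point is just that the email is assembled from the user’s own words plus live search results, rather than a canned “INVALID” template.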

Next Steps:

It’s my understanding that this is all pretty doable. After last year’s post there were several helpful comments, including this one from Bugzilla expert Gervase Markham:

The best way to implement this would be a field on SUMO where you paste a bug number, and it reaches out, downloads the Bugzilla information using the Bugzilla API, and creates a new SUMO entry using it. It then goes back and uses the API to automatically resolve the Bugzilla bug – either as SUPPORT, if we have that new resolution, or INVALID, or MOVED (which is a resolution Bugzilla has had in the past for bugs moved elsewhere), or something else.

The SUMO end could then send them a custom email, and it could include hyperlinks to appropriate articles if the SUMO engine thought there were any.
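Gerv’s flow could be sketched against the Bugzilla REST API (assuming the `/rest/bug/<id>` endpoints; authentication and error handling are omitted, and the SUPPORT resolution is still the hypothetical one proposed above):

```python
import json
from urllib.request import Request, urlopen

BUGZILLA_API = "https://bugzilla.mozilla.org/rest"

def fetch_bug(bug_id, opener=urlopen):
    """Download a bug's fields via GET /rest/bug/<id>."""
    req = Request(f"{BUGZILLA_API}/bug/{bug_id}")
    with opener(req) as resp:
        return json.load(resp)["bugs"][0]

def resolve_payload(resolution="INVALID", comment=None):
    """JSON body for a PUT /rest/bug/<id> update that closes the bug."""
    body = {"status": "RESOLVED", "resolution": resolution}
    if comment:
        body["comment"] = {"body": comment}
    return body

# The SUMO side would fetch the bug, create its own entry from the
# summary and description, then PUT this payload back to close it.
print(resolve_payload("INVALID", "Transferred to support.mozilla.com"))
```

Whether the closing resolution is SUPPORT, INVALID or MOVED is exactly the open question from Gerv’s comment.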

And Tyler Downer noted in this comment that there may be a dependency bug (#577561) that would also need resolving:

Gerv, I love you point 3. Exactly what I had in mind, have SUMO pull the relevant data from the bug report (we just need BMO to autodetect firefox version numbers, bug 577561 ;) and then it should have most of the required data. That would save the user so much time and remove a major time barrier. They think “I just filed a bug, now they want me to start a forum thread?” If it does it automatically, the user would be so much better served.

So, if there is interest in doing this, let me know. I’m happy to support any discussion, whether it takes place in the comment stream of the bug, the comments below, or somewhere else that might be helpful (maybe I should dial in on this call?). Regardless, this feels like a quick win, one that would better serve Mozilla users, teach them to go to the right place for support (over time) and improve the Bugzilla workflow. It might be worth implementing even for a bit, and we can assess any positive or negative feedback after 6 months.

Let me know how I can help.

Additional Resources

Bug 444302: Provide a means to migrate support issues that are misfiled as bugs over to the support.mozilla.com forums.

My previous post: Some thoughts on improving Bugzilla. The comments are worth checking out

Mozilla’s Bugzilla Wiki Page

Developing Community Management Metrics and Tools for Mozilla

Background – how we got here

Over the past few years I’ve spent a great deal of time thinking about how we can improve both the efficiency of open source communities and the contributors’ experience. Indeed, this was the focus, in part, of my talk at the Mozilla Summit last summer. For some years Diederik Van Liere – now with the Wikimedia Foundation’s metrics team – and I have played with Bugzilla data a great deal to see if we could extract useful information from it. This led us to engaging closely with some members of the Mozilla Thunderbird team – in particular Dan Mosedale, who immediately saw its potential and became a collaborator. Then, in November, we connected with Daniel Einspanjer of Mozilla Metrics and began to imagine ways to share data that could create opportunities to improve the participation experience.

Yesterday, thanks to some amazing work on the part of the Mozilla Metrics team (listed at the bottom of the post), we started sharing some of this work at the Mozilla all hands. Specifically, Daniel demoed the first of a group of dashboards that describe what is going on in the Mozilla community and that, we hope, can help enable better community management. While these dashboards deal with the Mozilla community in particular, I nonetheless hope they will be of interest to a number of open source communities more generally. (Presently the link is only available to Mozilla staffers until the dashboard goes through security review – see more below, along with screenshots – you can see a screencast here.)

Why – the contributor experience is a key driver for success of open source projects

My own feeling is that within the Mozilla community the products, like Firefox, evolve quickly, but the process by which people work together tends to evolve more slowly. This is a problem. If Mozilla cannot evolve and adopt new approaches with sufficient speed then potential and current contributors may go where the experience is better and, over time, the innovation and release cycle could itself cease to be competitive.

This task is made all the more complicated since Mozilla’s ability to fulfill its mission and compete against larger, better funded competitors depends on its capacity to tap into a large pool of social capital – a corps of paid and unpaid coders whose creativity can foster new features and ideas. Competing at this level requires Mozilla to provide processes and tools that can effectively harness and coordinate that energy at minimal cost to both contributors and the organization.

As I discussed in my Mozilla Summit talk on Community Management, processes that limit the size or potential of our community limit Mozilla. Conversely, making it easier for people to cooperate, collaborate, experiment and play enhances the community’s capacity. Consequently, open source projects should – in my opinion – constantly be looking to reduce or eliminate transactions costs and barriers to cooperation. A good example of this is how Github showed that forking can be a positive social contribution. Yes it made managing the code base easier, but what it really did was empower people. It took something everyone thought would kill open source projects – forking – and made it a powerful tool of experimentation and play.

How – Using data to enable better contributor experience

Unfortunately, it is often hard to quantitatively asses how effectively an open source community manages itself. Our goal is to change that. The hope is that these dashboards – and the data that underlies them – will provide contributors with an enhanced situational awareness of the community so they could improve not just the code base, but the community and its processes. If we can help instigate a faster pace of innovation of change in the processes of Mozilla, then I think this will both make it easier to improve the contributor experience and increase the pace of innovation and change in the software. That’s the hope.

That said, this first effort is a relatively conservative one. We wanted to create a dashboard that would allow us to identify some broader trends in the Mozilla Community, as well as provide tangible, useful data to Module Owners – particularly around identifying contributors who may be participating less frequently.


This dashboard is primarily designed to serve two purposes. The first is to showcase what dashboards could be, with the hope of inspiring Mozilla community members to use it and, more importantly, to build their own. The second is to provide module owners with a reliable tool with which to more effectively manage their part of the community. So what are some of the ways I hope this dashboard might be helpful? One important feature is the ability to sort contributors by staff or volunteer. An open source community’s volunteer contributors should be a treasured resource. One nice thing about this dashboard is that you can not only see just the volunteers, but also get a quick sense of those who haven’t submitted a patch in a while.

In the picture below I de-selected all Mozilla employees so that we are only looking at volunteer contributors. Using this view we can see which volunteers are starting to participate less – note the red circle marked “everything okay?” A good community manager might send these people an email asking if everything is okay. Maybe they are moving on, or maybe they just had a baby (and so are busy with a totally different type of patch – diapers), but maybe they had a bad experience and are frustrated, or a bunch of code is stuck in review. These are things we would want to know, and know quickly, as losing these contributors would be bad. In addition, we can see who the emerging power contributors are – they might be people we want to mentor, or connect with mentors, in order to solidify their positive association with our community and speed up their development. In my view, this should be a core responsibility of community managers, and this dashboard makes it much easier to execute on these opportunities.

main-dasboard-notes
Below you can see how zooming in more closely allows you to see trends for individual contributors over time. Again, sometimes large changes or shifts happen for reasons we know of (they were working on features for a big release and it’s shipped), but when we don’t know the reason, maybe we should pick up the phone or email this person to check that everything is okay.

user-dashboard-notes

Again, if this contributor had a negative experience and was drifting away from the community – wouldn’t we want to know before they silently disappeared and moved on? This is in part the goal.
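The kind of logic such a dashboard encodes can be sketched in a few lines. The per-quarter patch counts and the thresholds below are invented purely for illustration:

```python
# Hypothetical per-contributor patch counts by quarter, volunteers only.
activity = {
    "alice":  [9, 8, 7, 8],   # steady
    "bob":    [12, 6, 2, 0],  # drifting away -- "everything okay?"
    "carol":  [1, 3, 6, 11],  # emerging power contributor
}

def flag_contributors(activity, drop=0.5, growth=2.0):
    """Split contributors into those fading out and those ramping up.

    Compares the most recent quarter to the contributor's earlier
    average; the thresholds are arbitrary illustrative choices.
    """
    fading, rising = [], []
    for name, counts in activity.items():
        earlier = sum(counts[:-1]) / len(counts[:-1])
        latest = counts[-1]
        if latest < earlier * drop:
            fading.append(name)   # worth a friendly check-in email
        elif latest > earlier * growth:
            rising.append(name)   # candidate for mentorship
    return fading, rising

fading, rising = flag_contributors(activity)
print("Everything okay?", fading)   # → ['bob']
print("Emerging:", rising)          # → ['carol']
```

The dashboard presents this visually, but the underlying question is the same: whose trend line warrants a human conversation?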

Some of you may also like the fact that you can dive a little deeper by clicking on a user to see what specific patches that user has worked on (see below).

User-deep-dive1

Again, these are early days. My hope is that other dashboards will provide still more windows into the community and its processes so as to show us where there are bottlenecks and high transaction costs.

Some of the features we’d like to add to this or other dashboards include:

  • a code-review dashboard that would show how long contributors have been waiting for code-review, and how long before their patches get pushed. This could be a powerful way to identify how to streamline processes and make the experience of participating in open source communities better for users.
  • a semantic analysis of bugzilla discussion threads. This could allow us to flag threads that have become unwieldy or where people are behaving inappropriately so that module owners can better moderate or problem solve them
  • a dashboard that, based on your past bugs and some basic attributes (e.g. skillsets) informs newbies and experienced contributors which outstanding bugs could most use their expertise
  • Ultimately I’d like to see at least 3 core dashboards – one for contributors, one for module owners and one for overall projects – emerge, as well as user-generated dashboards developed using Mozilla Metrics data.
  • Access to all the data in Bugzilla so the contributors, module owners, researchers and others can build their own dashboards – they know what they need better than we do
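The skills-matching idea in the list above could be as simple as ranking bugs by keyword overlap. The bug data, tags and skill sets here are entirely hypothetical:

```python
# Illustrative sketch: recommend open bugs whose tags overlap a
# contributor's declared skillset. Data and fields are hypothetical.
open_bugs = [
    {"id": 1, "summary": "Crash in JS garbage collector", "tags": {"js", "gc", "c++"}},
    {"id": 2, "summary": "CSS flexbox layout glitch", "tags": {"css", "layout"}},
    {"id": 3, "summary": "Build fails on mingw", "tags": {"build", "c++"}},
]

def suggest_bugs(skills, bugs, limit=2):
    """Rank bugs by the overlap between their tags and the user's skills."""
    scored = [(len(skills & bug["tags"]), bug) for bug in bugs]
    scored = [(score, bug) for score, bug in scored if score > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [bug["id"] for _, bug in scored[:limit]]

print(suggest_bugs({"c++", "gc"}, open_bugs))  # → [1, 3]
```

A real version would infer skills from a contributor’s past bugs rather than a declared list, but the ranking idea is the same.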

What’s Next – How Do I Get To Access it and how can I contribute

Great questions.

At the moment the dashboard is going through security review which it must complete before being accessible. Our hope is that this will be complete by the end of Q2 (June).

More importantly, we’d love to hear from contributors, developers and other interested users. We have a standing call every other Friday at 9am PST where we discuss development issues with this and the forthcoming code-review dashboard, contributors’ needs and wanted features, as well as use cases. If you are interested in participating on these calls please either let me know, or join the Mozilla Community Metrics Google group.

Again, a huge shout out is deserved by Daniel Einspanjer and the Mozilla Metrics team. Here is a list of contributors, both so people know who they are and in case anyone has questions about specific aspects of the dashboard:
Pedro Alves – Team Lead
Paula Clemente – Dashboard implementor
Nuno Moreira – UX designer
Maria Roldan – Data Extraction
Nelson Sousa – Dashboard implementor

Rethinking Wikipedia contributions rates

About a year ago news stories began to surface that Wikipedia was losing more contributors than it was gaining. These stories were based on the research of Felipe Ortega, who had downloaded and analyzed the data of millions of contributors.

This is a question of importance to all of us. Crowdsourcing has been a powerful and disruptive force, socially and economically, in the short history of the web. Organizations like Wikipedia and Mozilla (at the large end of the scale) and millions of much smaller examples have destroyed old business models, spawned new industries and redefined the idea of how we can work together. Understanding how these communities grow and evolve is of paramount importance.

In response to Ortega’s research Wikipedia posted a response on its blog that challenged the methodology and offered some clarity:

First, it’s important to note that Dr. Ortega’s study of editing patterns defines as an editor anyone who has made a single edit, however experimental. This results in a total count of three million editors across all languages.  In our own analytics, we choose to define editors as people who have made at least 5 edits. By our narrower definition, just under a million people can be counted as editors across all languages combined.  Both numbers include both active and inactive editors.  It’s not yet clear how the patterns observed in Dr. Ortega’s analysis could change if focused only on editors who have moved past initial experimentation.

This is actually quite fair. But the specifics are less interesting than the overall trend described by the Wikimedia Foundation. It’s worth noting that no open source or peer production project can grow infinitely. There is (a) a finite number of people in the world and (b) a finite amount of work that any system can absorb. At some point participation must stabilize. I’ve tried to illustrate this trend in the graphic below.

Open-Source-Lifecyclev2.0021-1024x606

As luck would have it, my friend Diederik Van Liere was recently hired by the Wikimedia Foundation to help them get a better understanding of editor patterns on Wikipedia – how many editors are joining and leaving the community at any given moment, and over time.

I’ve been thinking about Diederik’s research and three things have come to mind to me when I look at the above chart:

1. The question isn’t how do you ensure continued growth, nor is it always how do you stop decline. It’s about ensuring the continuity of the project.

Rapid growth should probably be expected of an open source or peer production project in the early stage that has LOTS of buzz around it (like Wikipedia was back in 2005). There’s lots of work to be done (so many articles HAVEN’T been written).

Decline may also be reasonable after the initial burst. I suspect many open source projects lose developers after the product moves out of beta. Indeed, some research Diederik and I have done on the Firefox community suggests this is the case.

Consequently, it might be worth inverting his research question. In addition to figuring out participation rates, figure out the minimum critical mass of contributors needed to sustain the project. For example, how many editors does Wikipedia need, at a minimum, to (a) prevent vandals from destroying the current article inventory and, at a maximum, to (b) sustain an article update and growth rate that supports the current traffic (which notably continues to grow significantly)? The purpose of Wikipedia is not to have many or few editors; it is to maintain the world’s most comprehensive and accurate encyclopedia.

I’ve represented this minimum critical mass in the graphic above with a “maintenance threshold” line. Figuring out the metric for that feels more important than participation rates alone, as such a metric could form the basis for a dashboard that would tell you a lot about the health of the project.

2. There might be an interesting equation describing participation rates

Another thing that struck me was that each open source project may have a participation quotient: a number that describes the amount of participation required to sustain a given unit of work in the project. For example, in Wikipedia, it may be that every new page that is added needs 0.000001 new editors in order to be sustained. If page growth exceeds editor growth (or the community shrinks), at a certain point the project’s size outstrips the capacity of the community to sustain it. I can think of a few variables that might help ascertain this quotient – and I accept it wouldn’t be a fixed number. Change the technologies or rules around participation and you might increase the effectiveness of a given participant (lowering the quotient), or you might make it harder to sustain work (raising the quotient). Indeed, the trend of a participation quotient would itself be interesting to monitor… projects will have to keep finding innovative ways to hold it constant even as the project’s article archive or code base gets more complex.
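The arithmetic behind such a quotient is simple enough to sketch. The numbers below are purely illustrative, but they show how a quotient turns directly into a minimum community size:

```python
def participation_quotient(active_editors, units_of_work):
    """Editors required per unit of work (e.g. per article)."""
    return active_editors / units_of_work

def min_editors(units_of_work, quotient):
    """Minimum community size implied by a given quotient."""
    return units_of_work * quotient

# A hypothetical wiki with 3,000,000 articles and a quotient of
# 0.0001 editors per article needs a community of at least 300.
print(min_editors(3_000_000, 0.0001))   # → 300.0

# If the community shrinks below that line, the project's size has
# outstripped the community's capacity to sustain it.
print(250 >= min_editors(3_000_000, 0.0001))   # → False
```

Tracking how the measured quotient drifts over time would itself be a health indicator, per the point above.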

3. Finding a test case – study a wiki or open source project in the decline phase

One thing about open source projects is that they rarely die. Indeed, there are lots of open source projects out there that are walking zombies: a small, dedicated community struggles to keep a code base intact and functioning that is much too large for it to manage. My sense is that peer production/open source projects can collapse (would MySpace count as an example?), but they rarely collapse and die.

Diederik suggested that maybe one should study a wiki or open source project that has died. The fact that they rarely die completely is actually a good thing from a research perspective, as it means the infrastructure (and thus the data about the history of participation) is often still intact – ready to be downloaded and analyzed. By finding such a community we might be able to (a) ascertain what the “maintenance threshold” of the project was at its peak, (b) see how its “participation quotient” evolved (or didn’t) over time and, most importantly, (c) see if there are subtle clues or actions that could serve as predictors of decline or collapse. Obviously, in some cases these might be exogenous forces (e.g. new technologies or processes made the project obsolete), but these could probably be controlled for.

Anyways, hopefully there is lots here for metrics geeks and community managers to chew on. These are only some preliminary thoughts, so I hope to flesh them out some more with friends.