Tag Archives: mozilla

Visualizing Firefox Plugins Memory Consumption

A few months ago, Mozilla Labs and the Metrics Team, together with the growing Mozilla Research initiative, launched an Open Data Visualization Competition.

Using data collected from Test Pilot users (people who agreed to share anonymous usage data with Mozilla and to test-pilot new features), Mozilla asked its community to think of creative visual answers to the question: “How do people use Firefox?”

As an open data geek and Mozilla supporter, I found the temptation to try to do something too great. So I teamed up with my old data partner Diederik Van Liere and we set out to create a visualization. Our goals were simple:

  • have fun
  • focus on something interesting
  • create something that would be useful to Firefox developers and/or users
  • advance the cause for creating a Firefox open data portal

What follows is the result.

It turns out that – in our minds – the most interesting data set revolved around plugin memory consumption. Sure, this sounds boring… but plugins (like Adobe Reader, QuickTime or Flash) are an important part of the browser experience – with them we engage in a larger, richer and more diverse set of content. Plugins, however, also impact memory consumption and, consequently, browser performance. Indeed, some plugins can really slow down Firefox (or any browser). If consumers had a better idea of how much performance would be impacted, they might be more selective about which plugins they download, and developers might be more aggressive in trying to make their plugins more efficient.

Presently, if you run Firefox you can go to the Plugin Check page to see if your plugins are up to date. We thought: wouldn’t it be great if that page ALSO showed you memory consumption rates? Maybe something like this (note the Memory Consumption column; it doesn’t exist on the real webpage, and you can see a larger version of this image here):

Firefox data visualization v2

Please understand (and we are quite proud of this): all of the data in this mockup is real. The memory consumption figures are estimates we derived by analyzing the Test Pilot data.

How, you might ask, did we (Diederik) do that?

GEEK OUT EXPLANATION: Well, we (Diederik) built a dataset of about 25,000 different Test Pilot users and parsed the data to see which plugins were installed and how much memory was consumed around the time of initialization. This data was analyzed using ordinary least squares regression, where the dependent variable is memory consumption and the different plugins are the explanatory variables. We only included results that are highly significant.
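For the curious, here is a minimal sketch of that kind of analysis in Python. It assumes a hypothetical flat file with one row per Test Pilot user, a 0/1 indicator column per plugin, and an observed memory figure in megabytes; the file name, column names and p < 0.01 cutoff are illustrative, not the actual Test Pilot schema or our exact threshold.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical export: one row per Test Pilot user, a 0/1 column per plugin,
# and the memory (in MB) observed around browser initialization.
df = pd.read_csv("testpilot_users.csv")

y = df["memory_mb"]                                   # dependent variable: memory consumption
X = sm.add_constant(df.drop(columns=["memory_mb"]))   # plugin indicators plus a baseline intercept

model = sm.OLS(y, X).fit()

# Keep only the highly significant coefficients; each one estimates the
# marginal memory cost (in MB) of having that plugin installed.
significant = model.params[model.pvalues < 0.01].drop("const", errors="ignore")
print(significant.sort_values(ascending=False))
```

The interesting output is the vector of coefficients: each surviving plugin’s coefficient is its estimated marginal memory cost, which is the kind of number that would feed a Memory Consumption column like the one in the mockup.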

The following table shows our total results (you can download a bigger version here).

Plugin_memory_consumption_chart v2

Clearly, not all plugins are created equal.

Our point here isn’t that we have created the definitive way of assessing plugin impact on the browser; our point is that creating a solid methodology for doing so is likely within Mozilla’s grasp. More importantly, doing this could help improve the browsing experience. Indeed, it would probably be even wiser to do something like this for Add-ons, which is where I’m guessing the real lag in the browsing experience is created.

Also, with such a small data set we were only able to calculate the memory usage for a limited number of plugins, generally the more obscure ones. Our methodology required having several data points from people who are and who aren’t using a given plugin, and so with many popular plugins we didn’t have enough data from people who weren’t using them… a problem, however, that would likely be easily solved with access to more data.

Finally, I hope this contest and our submission help make the case for why Mozilla needs an open data portal. Mozilla collects an incredible amount of data that it does not have the resources to analyze internally. Making it available to the community would do for data what Mozilla has done for code – enable others to create value that could affect the product and help advance the open web. I had a great meeting earlier this week with a number of Mozilla people about this issue; I hope that we can continue to make progress.

Let's do an International Open Data Hackathon

Let’s do it.

Last summer, I met Pedro Markun and Daniela Silva at the Mozilla Summit. During the conversation – feeling the drumbeat vibe of the conference – we agreed it would be fun to do an international event. Something that could draw attention to open data.

A few weeks before, I’d met Edward Ocampo-Gooding, Mary Beth Baker and Daniel Beauchamp at GovCamp Ottawa. Fresh from the success of getting the City of Ottawa to see the wisdom of open data and hosting a huge open data hackathon at city hall, they were thinking “let’s do something international.” Yesterday, I tested the idea on the Open Knowledge Foundation’s listserv and a number of great people from around the world wrote back right away and said… “We’re interested.”

This idea has lots of owners, from Brazil to Europe to Canada, and so my gut check tells me, there is interest. So let’s take the next step. Let’s do it.

Why.

Here’s my take on three great reasons why now is a good time for a global open data hackathon:

1) Build on Success: There are a growing number of places that now have open data. My sense is we need to keep innovating with open data – to show governments and the public why it’s serious, why it’s fun, why it makes life better, and above all, why it’s important. Let’s get some great people together with a common passion and see what we can do.

2) Spread the Word: There are many places without open data. Some places have developed communities of open data activists and hackers, others have nascent communities. In either case these communities should know they are not alone, that there is an international community of developers, hackers and advocates who want to show them material and emotional support. They also need to demonstrate, to their governments and the public, why open data matters. I think a global open data hackathon can’t hurt, and could help a whole lot. Let’s see.

3) Make a Better World: Finally, there is a growing amount of global open data thanks to the World Bank’s open data catalog and its Apps for Development competition. The Bank is asking developers to build apps that, using this data (plus any other data you want), will contribute to reaching the Millennium Development Goals by 2015. No matter who, or where, you are in the world, this is a cause I believe we can all support. In addition, for communities with little available open data, the Bank’s catalog might provide at least some data that is of interest.

So with that all said, I think we should propose hosting a global open data hackathon that is simple and decentralized: locally organized, but globally connected.

How.

The basic premises for the event would be simple, relying on 5 basic principles.

1. It will happen on Saturday, Dec 4th (after a fair bit of canvassing of colleagues around the world, this seems to be a date a number of them can make work). It can be as big or as small, as long or as short, as you’d like it.

2. It should be open. Daniel, Mary Beth and Edward have done an amazing job in Ottawa attracting a diverse crowd of people to hackathons, even having whole families come out. Chris Thorpe in the UK has done similarly amazing work getting a young and diverse group hacking. I love Nat Torkington’s words on the subject. Our movement is stronger when it is broader.

3. Anyone can organize a local event. If you are keen, help organize one in your city and/or just participate: add your name to the relevant city on this wiki page. Wherever possible, try to keep it to one per city; let’s build some community and get new people together. Which city or cities you share with is up to you, as is how you do it. But let’s share.

4. You can hack on anything that involves open data. It could be a local app or a global Apps for Development submission; you could scrape data from a government website and make it available in a useful format for others, or create your own catalog of government data.

5. Let’s share ideas across cities on the day. Each city’s hackathon should do at least one demo, brainstorm, proposal, or anything else that it shares in an interactive way with members of a hackathon in at least one other city. This could be via video stream, Skype or chat… anything, but let’s get to know one another and share the cool projects or ideas we are hacking on. There are some significant challenges to making this work: timezones, languages, culture, technology… but who cares, we are problem solvers, let’s figure out a way to make it work.

Again, let’s not try to boil the ocean. Let’s have a bunch of events, where people care enough to organize them, and try to link them together with a simple short connection/presentation. Above all, let’s raise some awareness, build something and have some fun.

What’s next?

1. If you are interested, sign up on the wiki. We’ll move to something more substantive once we have the numbers.

2. Reach out and connect with others in your city on the wiki. Start thinking about the logistics. And be inclusive. If someone new shows up, let them help too.

3. Share with me your thoughts. What’s got you excited about it? If you love this idea, let me know, and blog/tweet/status update about it. Conversely, tell me what’s wrong with any or all of the above. What’s got you worried? I want to feel positive about this, but I also want to know how we can make it better.

4. If there is interest, let’s get a simple website up with a basic logo that anyone can use to show they are part of this. Something like the OpenDataOttawa website comes to mind, but likely simpler still, just laying out the ground rules and providing links to where events are taking place. It might even just be a wiki. I’ve registered opendataday.org; I’m not wedded to it, but it felt like a good start. If anyone wants to help set that up, please let me know. I would love the help.

5. Localization. If there is bandwidth locally, I’d love for people to translate this blog post and repost it locally (let me know, as I’ll try cross-posting it here, or at least link to it). It is important that this not be an English-language-only event.

6. If people want a place to chat with others about this, feel free to post comments below. Also, the Open Knowledge Foundation’s Open Government mailing list is probably a good resource.

Okay, hopefully this sounds fun to a few committed people. Let me know what you think.

Rethinking Wikipedia contribution rates

About a year ago, news stories began to surface that Wikipedia was losing more contributors than it was gaining. These stories were based on the research of Felipe Ortega, who had downloaded and analyzed the data of millions of contributors.

This is a question of importance to all of us. Crowdsourcing has been a powerful and disruptive force, socially and economically, in the short history of the web. Organizations like Wikipedia and Mozilla (at the large end of the scale) and millions of much smaller examples have destroyed old business models, spawned new industries and redefined the idea of how we can work together. Understanding how these communities grow and evolve is of paramount importance.

In response to Ortega’s research Wikipedia posted a response on its blog that challenged the methodology and offered some clarity:

First, it’s important to note that Dr. Ortega’s study of editing patterns defines as an editor anyone who has made a single edit, however experimental. This results in a total count of three million editors across all languages.  In our own analytics, we choose to define editors as people who have made at least 5 edits. By our narrower definition, just under a million people can be counted as editors across all languages combined.  Both numbers include both active and inactive editors.  It’s not yet clear how the patterns observed in Dr. Ortega’s analysis could change if focused only on editors who have moved past initial experimentation.

This is actually quite fair. But the specifics are less interesting than the overall trend described by the Wikimedia Foundation. It’s worth noting that no open source or peer production project can grow infinitely. There is (a) a finite number of people in the world and (b) a finite amount of work that any system can absorb. At some point participation must stabilize. I’ve tried to illustrate this trend in the graphic below.

Open-Source-Lifecyclev2.0021-1024x606

As luck would have it, my friend Diederik Van Liere was recently hired by the Wikimedia Foundation to help them get a better understanding of editor patterns on Wikipedia – how many editors are joining and leaving the community at any given moment, and over time.

I’ve been thinking about Diederik’s research, and three things come to mind when I look at the above chart:

1. The question isn’t how do you ensure continued growth, nor is it always how do you stop decline. It’s about ensuring the continuity of the project.

Rapid growth should probably be expected of an open source or peer production project in its early stages, when there is LOTS of buzz around it (as there was for Wikipedia back in 2005). There’s lots of work to be done (so many articles HAVEN’T been written).

Decline may also be reasonable after the initial burst. I suspect many open source projects lose developers after the product moves out of beta. Indeed, some research Diederik and I have done on the Firefox community suggests this is the case.

Consequently, it might be worth inverting his research question. In addition to figuring out participation rates, figure out the minimum critical mass of contributors needed to sustain the project. For example, how many editors does Wikipedia need to, at a minimum, (a) prevent vandals from destroying the current article inventory and/or, at a maximum, (b) sustain an article update and growth rate that supports the current rate of traffic (which notably continues to grow significantly)? The purpose of Wikipedia is not to have many or few editors; it is to maintain the world’s most comprehensive and accurate encyclopedia.

I’ve represented this minimum critical mass in the graphic above with a “Maintenance threshold” line. Figuring out the metric for that feels like it may be more important than participation rates on their own, as such a metric could form the basis for a dashboard that would tell you a lot about the health of the project.

2. There might be an interesting equation describing participation rates

Another thing that struck me was that each open source project may have a participation quotient: a number that describes the amount of participation required to sustain a given unit of work in the project. For example, in Wikipedia, it may be that every new page that is added needs 0.000001 new editors in order to be sustained. If page growth exceeds editor growth (or the community shrinks), at a certain point the project size outstrips the capacity of the community to sustain it. I can think of a few variables that might help ascertain this quotient – and I accept it wouldn’t be a fixed number. Change the technologies or rules around participation and you might increase the effectiveness of a given participant (lowering the quotient) or you might make it harder to sustain work (raising the quotient). Indeed, the trend of a participation quotient would itself be interesting to monitor… projects will have to continue to find innovative ways to keep it constant even as the project’s article archive or code base gets more complex.
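To make the idea concrete, here is a back-of-envelope sketch; every number in it is invented purely for illustration and is not a measured value for Wikipedia or any other project.

```python
# Editors required per unit of work (e.g. per article) to keep the project healthy.
def participation_quotient(active_editors, units_of_work):
    return active_editors / units_of_work

# Invented snapshot of a hypothetical wiki.
quotient = participation_quotient(active_editors=90_000, units_of_work=3_000_000)

# If the quotient holds constant, projected growth in the article archive tells
# you the minimum community size needed to sustain it (the maintenance threshold).
projected_articles = 4_000_000
maintenance_threshold = quotient * projected_articles

print(f"quotient: {quotient:.6f} editors per article")
print(f"maintenance threshold at 4M articles: {maintenance_threshold:,.0f} editors")
```

The point of tracking a number like this over time is that a rising quotient, or a community falling below the threshold it implies, would show up on a dashboard long before the article archive visibly decayed.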

3. Finding a test case – study a wiki or open source project in the decline phase

One thing about open source projects is that they rarely die. Indeed, there are lots of open source projects out there that are walking zombies: a small, dedicated community struggles to keep a code base intact and functioning that is much too large for it to manage. My sense is that peer production/open source projects can collapse (would MySpace count as an example?) but they rarely collapse and die.

Diederik suggested that maybe one should study a wiki or open source project that has died. The fact that they rarely do is actually a good thing from a research perspective, as it means that the infrastructure (and thus the data about the history of participation) is often still intact – ready to be downloaded and analyzed. By finding such a community we might be able to (a) ascertain what the “maintenance threshold” of the project was at its peak, (b) see how its “participation quotient” evolved (or didn’t evolve) over time and, most importantly, (c) see if there are subtle clues or actions that could serve as predictors of decline or collapse. Obviously, in some cases these might be exogenous forces (e.g. new technologies or processes made the project obsolete) but these could probably be controlled for.

Anyways, hopefully there is lots here for metric geeks and community managers to chew on. These are only some preliminary thoughts so I hope to flesh them out some more with friends.

Rethinking Freedom of Information Requests: from Bugzilla to AccessZilla

Last week I gave a talk at the Conference for Parliamentarians hosted by the Information Commissioner as part of Right to Know Week.

During the panel I noted that, if we are interested in improving response times for Freedom of Information (FOI) requests (or, in Canada, Access to Information (ATIP) requests), why doesn’t the Office of the Information Commissioner use Bugzilla-type software to track requests?

Such a system would have a number of serious advantages, including:

  1. Requests would be public (although the identity of the requester could remain anonymous); this means that if numerous people request the same document, they could bandwagon onto a single request
  2. Requests would be searchable – this would make it easier to find documents already released and requests already completed
  3. You could track performance in real time – you could see how quickly different ministries, individuals, groups, etc… respond to FOI/ATIP requests, you could even sort performance by keywords, requester or time of the year
  4. You could see who specifically is holding up a request

In short such a system would bring a lot of transparency to the process itself and, I suspect, would provide a powerful incentive for ministries and individuals to improve their performance in responding to requests.

For those unfamiliar with Bugzilla, it is an open source software application used by a number of projects to track “bugs” and feature requests in their software. So, for example, if you notice the software has a bug, you register it in Bugzilla, and then, if you are lucky and/or if the bug is really important, some intrepid developer will come along and develop a patch for it. Posted below, for example, is a bug I submitted for Thunderbird, an email client developed by Mozilla. It’s not as intuitive as it could be but you can get the general sense of things: when I submitted the bug (2010-01-09), who developed the patch (David Bienvenu), its current status (Fixed), etc…

ATIPPER

Interestingly, an FOI or ATIP request really isn’t that different than a “bug” in a software program. In many ways, bugzilla is just a complex and collaborative “to do” list manager. I could imagine it wouldn’t be that hard to reskin it so that it could be used to manage and monitor access to information requests. Indeed, I suspect there might even be a community of volunteers who would be willing to work with the Office of the Information Commissioner to help make it happen.

Below I’ve done a mockup of what I think a revamped Bugzilla (renamed AccessZilla) might look like. I’ve put numbers next to some of the features so that I can explain them in detail below.

ATIPPER-OIC1

So what are some of the features I’ve included?

1. Status: Now an ATIP request can be marked with a status; these might be as simple as submitted, in process, under review, fixed and verified fixed (meaning the submitter has confirmed they’ve received it). This alone would allow the Information Commissioner, the submitter, and the public to track how long an individual request (or an aggregate of requests) stays in each part of the process.

2. Keywords: Wouldn’t it be nice to search for other FOI/ATIP requests with similar keywords? Perhaps someone has submitted a request for a document that is similar to your own, but not something you knew existed or had thought of… Keywords could be a powerful way to find government documents.

3. Individual accountability: Now you can see who is monitoring the request on behalf of the Office of the Information Commissioner and who is the ATIP officer within the ministry. If the rules permitted, then potentially the public servants involved in the document might have their names attached here as well (or maybe this option would only be available to those who log on as ATIP officers).

4. Logs: You would be able to see the last time the request was modified. This might include getting the documents ready, expressing concern about privacy or confidentiality, or simply asking for clarification about the request.

5. Related requests: Like keywords, but more sophisticated. Why not have the software look at the words and people involved in the request and suggest other, completed requests that it thinks might be similar in type and therefore of interest to the user? Seems obvious.

6. Simple and reusable resolution: Once the ATIP officer has the documentation, they can simply upload it as an attachment to the request. This way not only can the original user quickly download the document, but any subsequent user who stumbles upon the request during a search can download it as well. Better still, any public servant who has unclassified documents that might relate to the request can simply upload them directly too.

7. Search: This feels pretty obvious… it would certainly make citizens’ lives much easier and is the basic ante for any government that claims to be interested in transparency and accountability.

8. Visualizing it (not shown): The nice thing about all of these features is that the data coming out of them could be visualized. We could generate real-time charts showing average response time by ministry, a list of respondents by speed from slowest to fastest, even something as mundane as most-searched keywords. The point is that with visualizations, a government’s performance around transparency and accountability becomes more accessible to the general public.
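To give a sense of how little structure this would actually require, here is a rough sketch of the kind of record AccessZilla might store and how the point-8 dashboard math could fall out of it; the field names, status values, ministries and dates are assumptions based on the mockup above, not a real schema.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class AccessRequest:
    ministry: str
    status: str            # e.g. submitted, in process, under review, fixed, verified fixed
    keywords: list
    submitted: date
    resolved: date = None  # stays None until the documents have been released

# A handful of invented requests, standing in for the tracker's database.
requests = [
    AccessRequest("Ministry A", "verified fixed", ["procurement"], date(2010, 1, 5), date(2010, 2, 20)),
    AccessRequest("Ministry A", "verified fixed", ["travel"], date(2010, 3, 1), date(2010, 3, 25)),
    AccessRequest("Ministry B", "in process", ["contracts"], date(2010, 4, 2)),
]

# Feature 8: average response time by ministry, counting only completed requests.
days_by_ministry = defaultdict(list)
for r in requests:
    if r.resolved:
        days_by_ministry[r.ministry].append((r.resolved - r.submitted).days)

for ministry, days in sorted(days_by_ministry.items()):
    print(f"{ministry}: {mean(days):.0f} days on average over {len(days)} request(s)")
```

Everything else on the wish list, from keyword search to spotting who is sitting on a request, is essentially a query over records like these.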

It may be that there is much better software out there for doing this (like JIRA); I’m definitely open to suggestions. What I like about Bugzilla is that it can be hosted, it’s free and it’s open source. Mostly, however, software like this creates an opportunity for the Office of the Information Commissioner in Canada, and access to information managers around the world, to alter the incentives for governments to complete FOI/ATIP requests, as well as make it easier for citizens to find out information about their government. It could be a fascinating project to reskin Bugzilla (or some other software platform) to do this. Maybe Information Commissioners from around the world could even pool their funds to sponsor such a reskinning of Bugzilla…

My Mozilla Summit 2010 Talk: Making the Army of Awesome more Awesome

This summer I had the enormous pleasure and privilege of both being at the Mozilla Summit and of being selected to give a lightning talk.

Embedded below is the talk – it’s five minutes so won’t take long to watch and is a short and updated version of my community management presentation. There are tons of people to thank for this talk, Diederik Van Liere, David Ascher and Mike Beltzner come to mind immediately, but there are many others as well. It also builds off a number of posts, including some old gems like this one and this one.

I’ve embedded a YouTube video of it, and the slide deck is a little further down.

Saving Cities Millions: Introducing CivicCommons.com

Last year, after speaking at the MISA West conference I blogged about an idea I’d called Muniforge (It was also published in the Municipal Information Systems Association’s journal Municipal Interface but behind a paywall). The idea was to create a repository like SourceForge that could host open source software code developed by and/or for cities to share with one another. A few months later I followed it up with another post Saving Millions: Why Cities should Fork the Kuali Foundation which chronicled how a coalition of universities have been doing something similar (they call it community source) and have been saving themselves millions of dollars.

Last week at the Gov 2.0 Summit in Washington, DC, my friends over at OpenPlans, with whom I’ve exchanged many thoughts about this idea, along with the City of Washington, DC, brought this idea to life with the launch of Civic Commons. It’s an exciting project that has involved the work of a lot of people: Phillip Ashlock at OpenPlans, who isn’t in the video below, deserves a great deal of congratulations, as does the team over at Code for America, who were also not on the stage.

At the moment Civic Commons is a sort of white pages for open-sourced civic government applications and policies. It doesn’t actually host the software; it just points you to where the licenses and code reside (say, for example, at GitHub). There are lots of great tools out there for collaborating on software that don’t need replicating; instead, Civic Commons is trying to foster community, a place where cities can find projects they’d like to leverage or contribute to.

The video below outlines it all in more detail. If you find it interesting (or want to skip it and get to the action right away) take a look at the CivicCommons.com website; there are already a number of applications being shared and worked on. I’m also thrilled to share that I’ve been asked to be an adviser to Civic Commons, so more on that, and what it means for non-American cities, after the video.

One thing that comes through when looking at this video is the sense that this is a distinctly American project. Nothing could be further from the truth. Indeed, during a planning meeting on Thursday I mentioned that a few Canadian cities have contacted me about software applications they would like to make open source to share with other municipalities, and everyone, especially Bryan Sivak (CIO for Washington, DC), was keen that other countries join and partake in Civic Commons.

It may end up that municipalities in other countries wish to create their own independent project. That is fine (I’m in favour of diverse approaches), but in the interim I’m keen to have some international participation early on so that the processes and issues it raises can be addressed and baked into the project from the start. If you work at a city and are thinking that you’d like to add a project, feel free to contact me, but also don’t be afraid to just go straight to the site and add it directly!

Anyway, just to sum up, I’m over the moon excited about this project and hope it will work out. I’ve been hoping something like this would be launched since writing about Muniforge and am excited to both see it happening and be involved.

Bugzilla – progress made and new thoughts

A few weeks ago I published a post entitled Some Thoughts on Improving Bugzilla. The post got a fair bit of traction and received a large number of supportive comments. But what was best, about the post, about open source, about Mozilla, is that it drew me into a series of conversations with people who wanted to make some of it reality.

Specifically, I’d like to thank Guy Pyrzak over at Bugzilla and Clint Talbert at Mozilla, both of whom spent hours entertaining and conversing about these ideas with me, problem-solving them and, if we are really honest, basically doing all the heavy lifting to transform them from ideas on this blog into real changes.

So I have two things to share: first, an update on progress on the ideas from the last post (which will be this post), and second, some new thoughts about how Mozilla’s instance of Bugzilla could be further improved (which will be my next post).

So here we go…

Update!

1. Simplifying Menus

First up, I made some suggestions around simplifying the Bugzilla landing page. These were pretty cosmetic, but they make the landing page a little less intimidating to a new user and, frankly, nicer for everyone. We are presently drafting the small changes to the code that this would require and getting ready to submit them as a proposal. Status – Yellow.

2. Gather more information about our users (and, while I’m at it, some more simplifying)

Second, I outlined some ideas for streamlining the process of joining bugzilla and on the data we collect about users.

On the first part, which is about the streamlined pages (designed to help ensure that true bug submitters end up in Bugzilla and not those seeking support), here too we will be submitting some new proposed pages shortly. Status – Yellow

On the second part, I suggested that we ask users if English is their second language and that we mark new Bugzilla accounts with a “new” symbol. Guy is coding up an extension to Bugzilla that will do both of these. Once done, I’ll suggest to Mozilla that they include this extension in their instance. Status – Green.

3. Make Life Easier for Users and the Triage Guys

I thought we could make life more efficient for triage and users if we added a status where bugs could be declared “RESOLVED-SUPPORT.” There’s been some reception to this idea. However, the second part of this idea is that once a bug is tagged as such, a script should automatically scan the support database, find articles with a strong word correlation to the bug description, and email the bug submitter links to those pages. Once again, Guy has stepped forward to develop such an extension, which hopefully will be working in the not too distant future. Status – Green.
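To illustrate the kind of word-correlation step involved, here is a minimal sketch using TF-IDF and cosine similarity; the article titles and text are placeholders, and the extension Guy is actually building may work quite differently.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder support articles; in practice these would come from the support database.
support_articles = {
    "Firefox is slow to start": "slow startup profile add-ons plugins memory",
    "How to clear the cache": "clear cache history cookies private data",
    "Recovering lost bookmarks": "bookmarks missing restore backup places",
}

bug_description = "firefox takes forever to start after installing a couple of plugins"

titles = list(support_articles)
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(support_articles.values()) + [bug_description])

# The last row is the bug report; score it against every article and keep the best matches.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
top_matches = sorted(zip(scores, titles), reverse=True)[:3]

for score, title in top_matches:
    print(f"{score:.2f}  {title}")
```

The top few titles are the kind of thing that could be dropped into the auto-generated email as hyperlinks.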

4. Make Bugzilla Celebrate, enhance our brand and build community

But probably the most exciting part is the final suggestion: that we send (at least non-developers) much nicer emails celebrating that the bug they submitted has been patched. It turns out (hardly surprising) that I wasn’t the first person to think that Bugzilla should be able to send HTML emails. Indeed, that feature request was first made back in 2001 and, when I blogged about this the other week, had not been updated since 2006. Once again, Guy has proven to be unbelievably helpful. It turns out that due to some changes to Bugzilla many of the blocks to patching this had disappeared, and so he has been working on the code. Status – Green.

Lots here for many people to be proud of. Hopefully some of these ideas will go live in the not too distant future. That said, there are still many hurdles to clear, and if you are a decision maker on any of these and would like to talk about these ideas, please do not hesitate to contact me.

Creating Open Data Apps: Lessons from Vantrash Creator Luke Closs

Last week, as part of the Apps for Climate Action competition (which is open to anyone in Canada), I interviewed the always awesome Luke Closs. Luke, along with Kevin Jones, created VanTrash, a garbage pick up reminder app that uses open data from the City of Vancouver. In it, Luke shares some of the lessons learned while creating an application using open data.

As the deadline for the Apps for Climate Action competition approaches (August 8th), we thought this might help those who are thinking about throwing their hat in the ring at the last minute.

Some key lessons from Luke:

  • Don’t boil the ocean: Keep it simple – do one thing really, really well.
  • Get a beta up fast: Try to scope something you can get a rough version of working in a day or evening – that is a sure sign that it is doable
  • Beta test: On friends and family. A lot.
  • Keep it fun: do something that develops a skill or lets you explore a technology you’re interested in

Some thoughts on improving Bugzilla

One of the keys to making an open source project work is getting feedback from users and developers about problems (bugs) in the code or system. Mozilla (the organization behind Firefox and Thunderbird) uses Bugzilla, but organizations have developed a variety of systems for dealing with this issue. For example, many cities use 311. I’m going to talk about Bugzilla and Mozilla in this case, but I think the lessons can be applied more broadly for some of my policy geek friends.

So first, some first principles. Why does getting the system right matter? A few reasons come to mind:

  1. Engagement: For many people Bugzilla is their first contact with “the community.” We should want users to have a good experience so they feel some affinity towards us and we should want developers to have a great experience so that they want to deepen their level of participation and engagement.
  2. Efficiency: If you have the wrong or incomplete information it is hard (or impossible) to solve a problem, wasting the precious time of volunteer contributors.

I also concede that these two objectives may not always be congruent. Indeed, at times there may be trade offs between them… but I think there is a lot that can be done to improve both.

I’ve probably got more ideas than can fit (or should fit) into one post so I’m going to unload a few. I’ve got more that relate to the negotiation and empathetic approaches I talked about at the Mozilla Summit.

One additional thought: please feel free to dump all over these. Some changes may not be as simple as I’ve assumed. Others may break or contravene important features I’m not aware of. I’m happy to engage people on these; please do not see them as an end point, but rather a beginning. My main goal with this first batch of suggestions was to find things that felt easier to do, and so could be implemented quickly if there was interest, and that would help reduce transaction costs right away.

1. Simplifying Menus

First, I thought there were some simple changes that could render the interface cleaner and friendlier. It’s pretty text heavy – which is great for advanced users, but less inviting for newer users. More importantly, however, we could streamline things to make it easier for people to onboard.

Take, for example, the landing page of Bugzilla. It is unclear to me why “Open a new Account” should be on this page. Advanced users will know they want to file a bug; novices (who may be on the wrong site and who should be looking for support) might believe they have to open an account to get support. So why not eliminate the option altogether? You are going to get it anyways if you click on “File a bug.”

Bugzilla-landing-page

Current

Bugzilla-landing-page-v2

Proposed

In addition, I got rid of the bottom menu bar (which I don’t think is necessary on this screen given all the features are along the top as well). I also ditched the Release Notes and User Guide for Bugzilla, as I had doubts about whether users were, at this point and on this screen, looking for those things.

2. Gather more information about our users (and, while I’m at it, some more simplifying)

Once you choose to file a bug you get prompted to either log in or create an account. At this point, if you want to create an account, you see the page below. I thought this page was hard to read with the text spanning the whole width; plus, there is some good info we could gather about users at this point (the point at which it feels they are most likely to add to their profile).

Current

Bugzilla-registration-v2

Proposed

A couple of things I like about this proposed screen.

One, if you are a lost user just looking for support, we likely snag you before you fill out a Bugzilla account. My feeling is that Bugzilla is a scary place that most users shouldn’t end up in… we need to give people lots of opportunities to opt for support before diving in, in case that is what they really need.

Second, in this proposed version we tell people to read the Bugzilla guidelines and suggest using an alternate email address before they punch their email into the email field.

In addition, we ask the user for their real name now (as opposed to relying on them to fill it out later). This nudge feels important as the more people with real names on the site, the more I think people will develop relationships with one another. Finally we ask people if English is their second language and if this is their first open source project.

Finally, with the extra data fields we can help flag users as ESL or new, and thus in need of more care, patience and help as they on-ramp (see screenshots below). We could even modify the Bugzilla guidelines to ask people to provide newbies and ESL users with appropriate respect and support.

Bugzilla-Raw1

Current

Bugzilla-New

Proposed


I imagine that your “newbie” status would disappear either when you want (some sort of preference in your profile) or after you’ve engaged in a certain amount of activity.
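As a concrete (and entirely hypothetical) example of the kind of rule I have in mind, the badge logic could be as small as this; the opt-out preference, the 90-day window and the 10-bug threshold are all made up.

```python
from datetime import datetime, timedelta

def show_newbie_badge(account_created, bugs_touched, opted_out=False):
    """Show the badge unless the user opted out, the account is more than
    90 days old, or they have already been active on 10 or more bugs."""
    if opted_out:
        return False
    aged_out = datetime.now() - account_created > timedelta(days=90)
    active_enough = bugs_touched >= 10
    return not (aged_out or active_enough)

# A week-old account with two bugs touched still gets the badge...
print(show_newbie_badge(datetime.now() - timedelta(days=7), bugs_touched=2))    # True
# ...while a 200-day-old account does not.
print(show_newbie_badge(datetime.now() - timedelta(days=200), bugs_touched=2))  # False
```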

3. Make life easier for users and the triage guys

Here is an idea I had while talking with some of the triage guys at the Mozilla Summit.

Let’s suppose that someone submits a bug that isn’t really a bug but a support issue. I’m informed that this happens with a high degree of frequency. Wouldn’t it be nice if, with a click of a mouse, the triage guys could move that bug out of Bugzilla and into a separate database? (Ideally this would be straight into SUMO, but I respect that this might not be easy – so just moving it to a separate database and de-cluttering Bugzilla would be a great first start – the SUMO guys could then create a way to import it.) My sense is that this simply requires creating a new resolution field – I’ve opted to call it “Support” but am happy to name it something else.

Current

Status-v2

Proposed

This feels like a simple fix and it would quickly move a lot of bugs that are cluttering up bugzilla… out. This is important as searches for bugs often return many results that are support oriented, making it harder to find the bugs you are actually searching for. Better still, it would get them somewhere where they could more likely help users (who are probably waiting for us to respond).

Of course, presently bugzilla will auto generate an email that looks like the first one and this isn’t going to help. So what if we did something else?

unresolved

Current

SUMO-transfer-v2

Proposed

Here is the auto-generated email I think we should be sending users whose bugs get sent to SUMO. I’ve proposed a few things.

First, if these are users who’ve submitted inappropriate bugs and who really need support, giving them a Bugzilla email isn’t going to help them; they aren’t even going to know how to read it.

Second, there is an opportunity to explain to them where they should go for help – I haven’t done that explicitly enough in this email – but you get the idea

Third, when the bug gets moved to SUMO it might be possible to do a simple keyword analysis of the bug and, from that, determine the support articles they are most likely looking for. Why don’t we send them the top 3 or 5 as hyperlinks in the email?

Fourth, if this really is a bug from a more sophisticated user, we give them a hyperlink back to bugzilla so they can make a note or comment.

What I like about this is it is customized engagement at a low cost. More importantly, it helps unclutter things while also making us more responsive and creating a better experience for users.

4. Make Bugzilla Celebrate, enhance our brand and build community

Okay, so here’s the thing that really bugs me about bugzilla. If we want to be onramping people and building community, shouldn’t we celebrate people’s successes? At the moment this is the email you get from Bugzilla when a bug you’ve submitted gets patched:

BORING! Here, at the moment of maximum joy, especially for casual or new Bugzilla participants, we do nothing to engage or celebrate.

This is what I think the auto-generated Bugzilla email should look like.

Congrats-v2

Yes, I agree that hard-core community members probably won’t care about these types of bugs, but for more casual participants this is an opportunity to explain how open source and Mozilla work (the graphic) as well as a chance to educate them. I’ve even been more explicit about this by offering links to (a) explain the open web, (b) learn about Mozilla and open source, and (c) donate to the foundation (given this is a moment of pride for many non-developer end users).

Again, I’m not overly attached to this design per se, it would just be nice to have something fun, celebratory and mozillaesque.

Okay, it is super late and I’m on an early flight tomorrow. I would love feedback on all or any of this from those who’ve made it this far. I’ll be sharing more thoughts, especially on empathetic nudges and community management in Bugzilla, ASAP.

Awesome Interactions: More on my Mozilla Summit 2010 Ignite Talk

Last week I had the distinct pleasure of being at the Mozilla Summit.

This is a gathering of about 650ish people from innumerable countries around the world to talk about Mozilla, the future of the open web, and the various Mozilla products (such as Firefox and Thunderbird). As Mozilla is a distributed community of thousands of people from around the world and the summit only takes place every two years, it is, as one participant memorably put it, “an opportunity to engage in two years of pent-up water cooler talk.”

As a follow up for the summit I’ve two things I wanted to share.

First, for those who enjoyed my Ignite talk on community management entitled Making the Army of Awesome More Awesome I’ve uploaded my slides to slideshare. (Not the most flattering picture of me giving the talk, but the only one I could find on flickr…)

I’m hoping, once the summit organizers have taken a much-deserved break, to get video and audio of the presentation, and I’ll create a slidecast and post it to this blog.

Also, if you found the talk engaging, there is a longer version of that talk where I dive a little deeper into some of the theory I mention at the end. It is a talk David Humphrey kindly asked me to give back at FSOSS a few years ago called Community Management: Open Source’s Core Competency.

Second, both in my talk and during the incredible time I had speaking with a number of people in the Mozilla community, I brainstormed a ton of ideas. I’m committed to documenting those and sharing them. Here’s a list of some of them; in the coming weeks I hope to blog on each, and ideally all, of these.

1. Improve the link between Bugzilla and SUMO

2. Auto-generate help topics in the Help pull down menu

3. Ask people when they download Firefox or Thunderbird if they’ll volunteer to do bug confirmation

4. Add “Newbie” to new Bugzilla registers

5. Add “ESL” (English as a second language) to Bugzilla accounts that request it

6. Rethinking data.mozilla.org and fostering a research community

7. Segment Bugzilla submitters into groups that might be engaged differently

8. Reboot Diederik Van Liere’s jet pack add-on that predicts bug patch success

8b. Add on a crowdsourcing app to call out negative language or behaviour

9. Retool questions asked in Bugzilla to “nudge” users to better responses

10. Develop an inquire, paraphrase, acknowledge and advocate crowdsourcing identifier jet pack add-on for Bugzilla

11. Plus more, but let’s start with these…