My friend Diederik van Liere has written a very, very cool Jetpack add-on that calculates the probability that a bug report will result in a fixed bug.
The skinny on it is that Diederik’s app bases its prediction on the bug reporter’s experience, their past success rate, the presence of a stack trace and whether the bug reporter is a Mozilla affiliate. These variables appear to be strong and positive predictors of whether a bug will be fixed. The add-on can be downloaded here and its underlying methodology is explained in this blog post.
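To make the idea concrete, here is a minimal sketch of how a scoring model over those four variables could work. This is purely illustrative: the feature names, weights, and logistic form are my assumptions, not the add-on's actual methodology.

```python
import math

# Hypothetical weights -- illustrative only, not the add-on's fitted model.
WEIGHTS = {
    "reports_filed": 0.02,        # reporter's experience (count of past reports)
    "past_success_rate": 2.5,     # fraction of the reporter's past bugs fixed
    "has_stack_trace": 1.2,       # 1 if a stack trace is attached, else 0
    "is_mozilla_affiliate": 1.5,  # 1 if the reporter is a Mozilla affiliate
}
BIAS = -2.0

def fix_probability(report):
    """Score a bug report with a simple logistic model over its features."""
    z = BIAS + sum(WEIGHTS[k] * report.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# An experienced, affiliated reporter with a stack trace scores far higher
# than a first-time, unaffiliated one -- the pattern the post describes.
veteran = {"reports_filed": 40, "past_success_rate": 0.7,
           "has_stack_trace": 1, "is_mozilla_affiliate": 1}
newcomer = {"reports_filed": 0, "past_success_rate": 0.0,
            "has_stack_trace": 0, "is_mozilla_affiliate": 0}
```

Since all four weights are positive, every variable pushes the predicted probability up, matching the post's claim that they are "strong and positive predictors."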
One way the add-on could be helpful is that it would enable the Mozilla community to focus its resources on the most promising bug reports. Volunteer coders with limited time who want to show up and take ownership of a specific bug would probably find this add-on handy: it would help them spend their precious volunteer time on bugs that are likely well thought through, effectively documented, and submitted by someone who will be accessible and able to provide input if necessary.
The danger, of course, is that a tool like this might further entrench (what I imagine is) a power-law-like distribution of bug submitters. The add-on would allow those who are already the most effective bug submitters to get still more attention, while first-time submitters, or those who are still learning, may not receive sufficient attention (coaching, feedback, support) to improve. Indeed, one powerful way the tool might be used (and which I’m about to talk to Diederik about) is to determine whether there are classes of bug submitters who are least likely to be successful. If we can find some common traits among them, it might be possible to identify ways to better support them and/or enable them to contribute to the community more effectively. Suddenly a group of people who have expressed interest but have been inadvertently marginalized could be brought more effectively into the community. Such a group might be the lowest-hanging fruit in finding the next million Mozillians.
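One way to start that search could look like the sketch below: group past reports by submitter, compute each submitter's fix rate, and surface those below some threshold as candidates for coaching or better documentation. The data layout and the 20% threshold are my assumptions for illustration, not anything from Diederik's analysis.

```python
from collections import defaultdict

def low_success_submitters(reports, threshold=0.2):
    """Given (submitter, was_fixed) records, return the submitters whose
    fix rate falls below `threshold` -- candidates for extra support."""
    fixed = defaultdict(int)
    total = defaultdict(int)
    for submitter, was_fixed in reports:
        total[submitter] += 1
        if was_fixed:
            fixed[submitter] += 1
    return sorted(s for s in total if fixed[s] / total[s] < threshold)

# Toy data: alice's reports all get fixed, bob's sometimes, carol's never.
reports = [("alice", True), ("alice", True), ("bob", False),
           ("bob", False), ("bob", True), ("carol", False)]
```

The next step the post imagines, looking for common traits within the flagged group, would then run on just the submitters this function returns.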
Dating a Bug: Defect Reports are Like Online Dating

Dating a bug? No, I'm not talking about a night out with Jay Underwood's character in the film Uncle Buck; I'm talking about software defects. When you sign on to a dating web site like PlentyofFish, apart from having your retina scarred by the painful design, you are treated to a stream of information about the statistical probability of being contacted if you, say, complete a survey (improve your chances by 230%), add a picture (9x more responses), or take a series of other actions that increase your odds of being noticed. In your post, you note that the Jetpack plug-in could be used to determine which bugs might not be noticed, and therefore not closed. This would be good insight, and could be supplemented with ideas borrowed from Markus Frind (the guy who gave us PlentyofFish). If defect management tools included features that made defect identification and triage a bit more like dating, I think they'd be more successful at two things, which would contribute to better software:

1. Soliciting good bug reports
2. Helping neophytes learn what makes for good bug reports

In many ways, a good defect report is like a scintillating profile on a dating web site. It begins with a cheekily clever headline that is still descriptive. Like functional specifications that are fun to read, interesting defect reports will get fixed more quickly. The defect report should be full of detail, but tersely cogent rather than merely long-winded. A couple of screenshots should be included if they're relevant (if you're on a dating web site for the visually impaired, you might not care if there is a picture, and maybe it's the same if you're reporting a defect with the API). Above all, you should include some way to follow up in case someone wants to ask you questions about it.
Well, I'd be happy to be proved wrong, but from some previous stats I saw (a long time ago, and I don't know where), the majority of bugs that get fixed are filed by the developers who fix them (or at least by people working on the same team).

To a large extent, the planning of development work is not driven by looking at lists of existing bugs – plans are made, and then bugs are filed to implement them. I know that if I file a bug that fits with the ongoing work, or is a regression caused by it, it has a high chance of getting fixed (and this generally applies even if my report is poor – indeed, some developers have a habit of filing terrible bug reports on themselves).

If the goal is to improve bug reports, then analysing the submitter rather than the report seems like the wrong thing to do. Probably a great way of getting your bug fixed is “be Mike Connor” or “be Brendan Eich”, but I guess they don't really want hundreds of people contacting them and asking them to file bug reports.

If a goal is to get a higher proportion of bugs fixed, then the obvious way to do that is either to discourage people from filing bugs on things that aren't development priorities, or to set the development priorities according to bug reports (old reports not filed by the people setting the development priorities…) – neither of those things seems particularly useful.

Having said all that, I guess there may still be something interesting to see from the analysis…