Tag Archives: mozilla

Smarter Ways to Have School Boards Update Parents

Earlier this month the Vancouver School Board (VSB) released an iPhone app that – helpfully – will use push notifications to inform parents about school holidays, parent interviews, and scheduling disruptions such as snow days. The app is okay, if a little clunky to use, and a lot of the data – such as professional days – while helpful in an app, would be even more helpful as an iCal feed parents could subscribe to in their calendars.
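To make the iCal idea concrete, here’s a rough sketch (in Python, with made-up dates and a placeholder example.org domain) of the sort of script a district could run to publish professional days as a feed parents subscribe to once and then forget about:

```python
# Sketch only: generate a minimal .ics feed of key school dates.
# Dates and the domain below are made up for illustration.
from datetime import date

# Hypothetical list of professional (non-instructional) days
PRO_D_DAYS = [
    ("Professional Day", date(2012, 2, 24)),
    ("Spring Break begins", date(2012, 3, 12)),
    ("Professional Day", date(2012, 4, 23)),
]

def make_ics(events):
    """Build a minimal VCALENDAR string that calendar apps can subscribe to."""
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//School District//Key Dates//EN",
    ]
    for summary, day in events:
        stamp = day.strftime("%Y%m%d")
        lines += [
            "BEGIN:VEVENT",
            f"UID:{stamp}-{summary.lower().replace(' ', '-')}@example.org",
            f"DTSTAMP:{stamp}T000000Z",
            f"DTSTART;VALUE=DATE:{stamp}",
            f"SUMMARY:{summary}",
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)

if __name__ == "__main__":
    # Publish this file at a stable URL; parents subscribe to it once.
    with open("key_dates.ics", "w") as f:
        f.write(make_ics(PRO_D_DAYS))
```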

That said, the VSB deserves credit for having the vision of developing an app. Positively, the VSB app team hopes to add new features, such as letting parents know about after school activities like concerts, plays and sporting events.

This is a great innovation and without a doubt, other school boards will want apps of their own. The problem is, this is very likely to lead to an enormous amount of waste and duplication. The last thing citizens want is for every school board to be spending $15-50K developing iPhone apps.

Which leads to a broader opportunity for the Minister of Education.

Were I the Education Minister, I’d have my technology team recreate the specs of the VSB app and issue an RFP for it, but under an open source license and built with PhoneGap so it would work on both iPhone and Android. In addition, I’d ensure it could offer reminders – like we do at recollect.net – so that people could get email or text messages without needing a smartphone at all.

I would then propose the ministry cover 60 percent of the development and yearly upkeep costs. The other 40% would be covered by the school boards interested in joining the project. Thus, assuming the app had a development cost of $40K and a yearly upkeep of $5K, if only one school board signed up it would have to pay $16K for the app (a pretty good deal) and $2K a year in upkeep. But if 5 school districts signed up, each would only pay $3.2K in development costs and $400 a year in upkeep costs. Better still, the more that sign up, the cheaper it gets for each of them. I’d also propose a governance model in which those who contribute money for development would have the right to elect a sub-group to oversee the feature roadmap.
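For anyone who wants to play with the numbers, here is the cost-sharing arithmetic as a tiny script – the 60/40 split and the dollar figures are just the assumptions from this post, not actual ministry numbers:

```python
# Sketch of the cost-sharing arithmetic above, so a school board can see
# what joining would cost. Figures and the 60/40 split are assumptions.
def per_board_cost(dev_cost, yearly_upkeep, boards, ministry_share=0.60):
    board_share = 1.0 - ministry_share
    return {
        "development": dev_cost * board_share / boards,
        "yearly_upkeep": yearly_upkeep * board_share / boards,
    }

for n in (1, 5, 10, 30):
    costs = per_board_cost(40_000, 5_000, n)
    print(f"{n:>2} boards: ${costs['development']:,.0f} dev, "
          f"${costs['yearly_upkeep']:,.0f}/year each")
# 1 board:  $16,000 dev, $2,000/year each
# 5 boards:  $3,200 dev,   $400/year each
```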

Since the code would be open source other provinces, school districts and private schools could also use the app (although not participate in the development roadmap), and any improvements they made to the code base would be shared back to the benefit of BC school districts.

Of course, by signing up to the app project school boards would be committing to ensure their schools shared up-to-date notifications about the relevant information – probably a best practice they should be following anyways. This process change is where the real work lies. However, a simple webform (included in the price) would cover much of the technical side of that problem. Better still, the Ministry of Education could offer its infrastructure for hosting and managing any data the school boards wish to collect and share, further reducing costs and, equally important, ensuring the data was standardized across the participating school boards.

So why should the Ministry of Education care?

First, creating new ways to update parents about important events – like when report cards are issued so that parents know to ask for them – helps improve education outcomes. That should probably be reason enough, but there are other reasons as well.

Second, it would allow the ministry, and the school boards, to collect some new data: professional day dates, average number of snow days, frequency of emergency disruptions, number of parents in a district interested in these types of notifications. Over time, this data could reveal important information about educational outcomes and be helpful.

But the real benefit would be in both cost savings and in enabling less well-resourced school districts to benefit from the technological innovation that wealthier districts will likely pursue if left to their own devices. Given there are 59 English school districts in BC, if even half of them spent $30K developing their own iPhone apps, almost $1M would collectively be spent on software development. By spending $24K, the ministry ensures that this $1M instead gets spent on teachers, resources and schools. Equally important, less tech-savvy or well-equipped school districts would be able to participate and benefit.

Of course, if the Vancouver School Board were smart, it would open source its app, approach the Ministry of Education and offer it as the basis of such a venture. Doing that wouldn’t just put it at the head of the class, it’d help everyone get smarter, faster.

Design Matters: Looking at a Re-themed Bugzilla

I’ll be honest. There was a time when I thought design didn’t matter. To my credit, it was a long time ago… but I used to think that if the tool was good enough, the design wouldn’t matter – people would use it because it was helpful. (This may or may not have influenced some fashion choices earlier in life as well – I’d like to think things have improved – but not everyone may agree they have improved sufficiently.)

Being useful may be sufficient to get something used (take a look at the government website at the bottom of this post – it’s a very useful site). But it’s no excuse for not making things easier to use, especially when you are running an open source community and want to encourage participation and ease people up the learning curve faster.

Hence why I enjoyed recently discovering ActiveState’s implementation of Bugzilla (re-themed by Tara Gibbs). Bugzilla is the software many open source projects use to identify, track and resolve bugs.

Here we have two identical pieces of software (so the “usefulness” is the same) but what makes ActiveState’s version of Bugzilla so nice are a few simple things they’ve done to make it more user friendly (doubly pleased to see them implement some ideas I’d blogged earlier as well – great minds think alike!).

So let’s start with the Mozilla instance of Bugzilla – as this was the one I was used to.

I’ve circled a couple of the key features to zero in on. Let me go through them as I want you to be thinking about them when you look at the ActiveState version:

  • Red circle: Notice that this has a lot of key items in it, but it is lost next to the “search” button, which pulls your attention away
  • Dark orange arrow: The search button! Most often you won’t find a search box located here in an application.
  • Green circle: Tons of useful stuff down here, but arranged in one long horizontal list that makes it hard to find what you’re looking for (and another search box!)
  • Light orange arrow: another log out option… didn’t I see that somewhere else as well?

I want to be clear: the Bugzilla team at Mozilla is awesome. Recently hired, they are trying to do a ton of stuff, and this is not where I’d expect them to start (and they’ve been super responsive to everything I’ve blogged about, so I’m a huge fan). I want to flag this because everyone, from software engineers to government officials, needs to recognize that we rely on good design to make our lives easier, to help with decision fatigue and to streamline our work, every day.

Now check out the ActiveState version of the exact same software, but re-designed.

So, my cartoonish circles and arrows are mucking up the design and aesthetics of both these sites, so please forgive that. (I suggest opening them in adjacent tabs – Mozilla here, ActiveState here – so you can see them uninterrupted.)

So, a few things:

  • Red circle: Now everything to do with the administration of your account is in the very top right-hand corner. This is where Google, Facebook and most websites put this info now – it’s why you expect to find it there!
  • Dark orange arrow: Now the search button is in the top right-hand corner – pretty much the same location it appears in Firefox (and Safari, IE, Chrome, OS X, etc.), and so where users have come to expect it.
  • Green circle: This part really is genius. Did you know there were saved searches in the above version? There are, but the feature didn’t stand out. This theme sorts the user’s options and displays them vertically within a menu: much, much easier to digest quickly.
  • Light orange arrow: Features appear only once! For example, the sign-out and search features no longer appear at both the top and the bottom. This helps reduce clutter and allows the user to find things more quickly.

My point is that a few minor changes can dramatically improve the usability of a website or tool. Is Mozilla’s Bugzilla radically worse than ActiveState’s? No, but I definitely prefer ActiveState’s design. Moreover, when you rely on volunteer contributors and attracting new ones matters to you, this is an important gateway, and so you want it to be as seamless as possible.

What’s interesting is that often it is in the non-profit and government sector that design gets neglected because it is deemed a luxury, or the “substantive” people don’t think design matters and so ignore it.

The results can be disastrous.

I mean, especially if you are in government, you’ve really got to be advocating for better design. Consider the website below. Remember, this may be the most important citizen-facing website in the Canadian government – the one-stop shop for finding every service you need. It is better than most government websites, and yet it is still maddeningly difficult to navigate. Where am I supposed to look??? Eyes… being… pulled… in… so… many… directions…

Personally, I think you could solve 80% of the problem with this page just by getting rid of the left-hand column and putting a search button in the top right-hand corner. But I’m supremely confident that would violate some arcane website design rule the government has, and so that will remain a post for another day…

Open Source Data Journalism – Happening Now at BuzzData

(there is a section on this topic focused on governments below)

A hint of how social data could change journalism

Anyone who’s heard me speak in the last 6 months knows I’m excited about BuzzData. This week, while still in limited-access beta, the site is showing hints of its potential – and it still has only a few hundred users.

First, what is BuzzData? It’s a website that allows data to be easily uploaded and shared among any number of users. (For hackers: it’s essentially GitHub for data, but more social.) It makes it easy for people to copy data sets, tinker with them, share the results back with the original master, and mash them up with other data sets, all while engaging with those who care about that data set.

So, what happened? Why is any of this interesting? And what does it have to do with journalism?

Exactly a month ago Svetlana Kovalyova of Reuters had her article – Food prices to remain high, UN warns – re-published in the Globe and Mail.  The piece essentially outlined that food commodities were getting cheaper because of local conditions in a number of regions.

Someone at the Globe and Mail decided to go a step further and upload the data – the annual food price indices from 1990 to the present – onto the BuzzData site, presumably so they could play around with it. This is nothing complicated – it’s a pretty basic chart. Nonetheless, a dozen or so users started “following” the dataset and, about 11 days ago, one of them, David Joerg, asked:

The article focused on short-term price movements, but what really blew me away is: 1) how the price of all these agricultural commodities has doubled since 2003 and 2) how sugar has more than TRIPLED since 2003. I have to ask, can anyone explain WHY these prices have gone up so much faster than other prices? Is it all about the price of oil?

He then did a simple visualization of the data.

[Chart: David Joerg’s visualization of the food price indices]

In response, someone from the Globe and Mail named Mason answered:

Hi David… did you create your viz based on the data I posted? I can’t answer your question but clearly your visualization brought it to the forefront. Thanks!

But of course, in a process that mirrors what often happens in the open source community, another “follower” of the data showed up and refined the work of the original commentator. In this case, an Alexander Smith noted:

I added some oil price data to this visualization. As you can see the lines for everything except sugar seem to move more or less with the oil. It would be interesting to do a little regression on this and see how close the actual correlation is.

The first thing to note is that Smith has added data, “mashing in” the oil price per barrel. So now the data set has been made richer. In addition, his graph is quite nice, as it makes the correlation more visible than the graph by Joerg, which only referenced the Oil Price Index. It also becomes apparent, looking at this chart, just how much of an outlier sugar really is.

[Chart: Alexander Smith’s visualization of food price indices plotted against the oil price]

Perhaps some regression is required, but Smith’s graph is pretty compelling. What’s more interesting is that the price of oil is not mentioned once in the article as a driver of food commodity prices. So maybe it’s not relevant. But maybe it deserves more investigation – and a significantly better piece, one that would provide better information to the public, could be written in the future. In either case, this discussion, conducted by non-experts simply looking at the data, helped surface some interesting leads.
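For the curious, the “little regression” Smith suggests is a few lines of work once the data is downloaded. The sketch below is hedged: the file name and column headers are my guesses at how the BuzzData set might be laid out, not the actual files:

```python
# Sketch: correlate the food price index against oil prices.
# CSV name and column headers are assumptions for illustration.
import csv
from statistics import correlation, linear_regression  # Python 3.10+

food, oil = [], []
with open("food_and_oil_prices.csv") as f:
    for row in csv.DictReader(f):
        food.append(float(row["food_price_index"]))
        oil.append(float(row["oil_price_usd_per_barrel"]))

r = correlation(oil, food)
fit = linear_regression(oil, food)
print(f"correlation: {r:.2f}")
print(f"food index ~ {fit.slope:.2f} * oil price + {fit.intercept:.1f}")
```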

And therein lies the power of social data.

With even only a handful of users, a deeper, better analysis of the story has taken place. Why? Because people are able to access the data and look at it directly. If you’re a follower of Julian Assange of WikiLeaks, you might call this scientific journalism. Maybe it is, maybe it isn’t, but it certainly is a much more transparent way of doing analysis – and a potential audience builder. Imagine if hundreds or thousands of readers were engaged in the data underlying a story. What would that do to the story? What would that do to journalism? With BuzzData it also becomes less difficult to imagine a data journalist who spends a significant amount of their time in BuzzData, working with a community of engaged pro-ams trying to find hidden meaning in the data they amass.

Obviously, this back and forth isn’t game changing. No smoking gun has been found. But I think it hints at a larger potential, one that it would be very interesting to see unlocked.

More than Journalism – I’m looking at you government

Of course, it isn’t just media companies that should be paying attention. For years I argued that governments – and especially politicians – interested in open data have an unhealthy appetite for applications. They like the idea of sexy apps on smart phones enabling citizens to do cool things. To be clear, I think apps are cool too. I hope in cities and jurisdictions with open data we see more of them.

But open data isn’t just about apps. It’s about the analysis.

Imagine a city’s budget up on BuzzData. Imagine the flow rates of the water or sewage system, or the inventory of trees. Think of how a community of interested and engaged “followers” could supplement that data, analyze it, visualize it. Maybe they would be able to explain it to others better, find savings or potential problems, or develop new forms of risk assessment.

It would certainly make for an interesting discussion. If 100 or even just 5 new analyses were to emerge, maybe none of them would be helpful, or would provide any insights. But I have my doubts. I suspect it would enrich the public debate.

It could be that the analysis would become as sexy as the apps. And that’s an outcome that would warm this policy wonk’s soul.

Lessons for Open Source Communities: Making Bug Tracking More Efficient

This post is a discussion about making bug tracking in Bugzilla for the Mozilla project more efficient. However, I believe it is applicable to any open source project or even companies or governments running service desks (think 311).

Almost exactly a year ago I wrote a blog post titled Some thoughts on improving Bugzilla, in which I made several suggestions for improving the workflow in Bugzilla. Happily, a number of those ideas have been implemented.

One, however, remains outstanding and, I believe, creates an unnecessary amount of triage work as well as a terrible experience for end users. My understanding is that while the bug could not be resolved last year for a few reasons, there is growing interest (exemplified originally in the comment field of my original post) in tackling it once again. This is my attempt at a rallying cry to get that process moving.

For those who are already keen on this idea and don’t want to read anything more below: this refers to bug 444302.

The Challenge: Dealing with Support Requests that Arrive in Bugzilla

I first had this idea last summer while talking to the triage team at the Mozilla Summit. These are the guys who look at the firehose of bugs being submitted to Mozilla every day. They have a finite amount of time, so anything we can do to automate their work is going to help them, and the project, out significantly.

Presently, I’m told that Mozilla gets a huge number of bugs submitted that are not actually bugs, but support issues. This creates several challenges.

First, it means that support related issues, as opposed to real problems with the software, are clogging up the bug tracking system. This increases the amount of noise in the system – making it harder for everyone to find the information they need.

Second, it means the triage team has to spend time filtering bugs that are actually support issues. Not a good use of their time.

Third, it means that users who have real support issues but submit them accidentally through Bugzilla get a terrible experience.

This last one is a real problem. If you are a user, feeling frustrated (and possibly not behaving as your usual rational self – we’ve all been there) because your software is not working the way you expect, and you then submit what a triage person considers a support issue (resolved as INVALID), you get an email that looks like this:


If I’m already cheesed that my software isn’t doing what I want, getting an email that says “Invalid” and “Verified” is really going to cheese me off. That, of course, presumes I even know what this email means. More likely, I’ll be thinking that some ancient machine in the bowels of Mozilla, using software created in the late 1990s, received my plea and has, in its 640K confusion, spammed me. (I mean, look at it… from a user’s perspective!)

The Proposal: Re-Automating the Process for a Better Result

Step 1: My sense is that this issue – especially problem #3 – could be resolved by simply creating a new resolution field. I’ve opted to call it “Support” but am happy to name it something else.

This feels like a simple fix and it would quickly move a lot of bugs that are cluttering up bugzilla… out.

Step 2: Query the text of bugs marked “Support” against Mozilla’s support database (SUMO). Then insert the results in an email that goes back to the user. I’m imagining something that might look like this:

[Mock-up: proposed “SUMO transfer” email sent back to the user]

Such an email has several advantages:

First, if these are users who’ve submitted inappropriate bugs and who really need support, giving them a Bugzilla email isn’t going to help them; they aren’t even going to know how to read it.

Second, there is an opportunity to explain to them where they should go for help – I haven’t done that explicitly enough in this email – but you get the idea.

Third, because we’ve done a query of the Mozilla support database (SUMO), we are able to include some support articles that might resolve their issue.

Fourth, if this really is a bug from a more sophisticated user, we give them a hyperlink back to Bugzilla so they can make a note or comment.

What I like about this is that it is customized engagement at a low cost. More importantly, it helps unclutter things while also making us more responsive and creating a better experience for users.
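To show how little machinery Step 2 actually requires, here is a rough sketch. The Bugzilla query is the kind of call its REST API supports today (assuming the new SUPPORT resolution existed), while the SUMO search endpoint and its response shape are placeholders for whatever the support site actually exposes; the mailer is left as a stub:

```python
# Sketch only: find bugs resolved as SUPPORT, look up related support
# articles, and draft the friendlier email described above.
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest"
SUMO_SEARCH = "https://support.mozilla.org/api/search"  # placeholder endpoint

def bugs_marked_support():
    """Fetch bugs resolved with the (hypothetical) SUPPORT resolution."""
    params = {"resolution": "SUPPORT", "include_fields": "id,summary,creator"}
    return requests.get(f"{BUGZILLA}/bug", params=params).json()["bugs"]

def related_articles(summary, limit=3):
    """Ask the support knowledge base for articles matching the bug summary."""
    hits = requests.get(SUMO_SEARCH, params={"q": summary}).json()["results"]
    return [f"{hit['title']} - {hit['url']}" for hit in hits[:limit]]

def draft_email(bug):
    articles = "\n".join(" * " + a for a in related_articles(bug["summary"]))
    return (
        f"To: {bug['creator']}\n\n"
        f"Thanks for your report: \"{bug['summary']}\".\n"
        "It looks like a support question rather than a bug in the software,\n"
        "so here are some articles that may help, plus a link back to your report:\n\n"
        f"{articles}\n\n"
        f"https://bugzilla.mozilla.org/show_bug.cgi?id={bug['id']}\n"
    )

for bug in bugs_marked_support():
    print(draft_email(bug))  # in practice: hand this off to the mailer
```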

Next Steps:

It’s my understanding that this is all pretty doable. After last year’s post there were several helpful comments, including this one from Bugzilla expert Gervase Markham:

The best way to implement this would be a field on SUMO where you paste a bug number, and it reaches out, downloads the Bugzilla information using the Bugzilla API, and creates a new SUMO entry using it. It then goes back and uses the API to automatically resolve the Bugzilla bug – either as SUPPORT, if we have that new resolution, or INVALID, or MOVED (which is a resolution Bugzilla has had in the past for bugs moved elsewhere), or something else.

The SUMO end could then send them a custom email, and it could include hyperlinks to appropriate articles if the SUMO engine thought there were any.

And Tyler Downer noted in this comment that there may be a dependency bug (#577561) that would also need resolving:

Gerv, I love you point 3. Exactly what I had in mind, have SUMO pull the relevant data from the bug report (we just need BMO to autodetect firefox version numbers, bug 577561 ;) and then it should have most of the required data. That would save the user so much time and remove a major time barrier. They think “I just filed a bug, now they want me to start a forum thread?” If it does it automatically, the user would be so much better served.

So, if there is interest in doing this, let me know. I’m happy to support any discussion, whether it takes place in the comment stream of the bug, the comments below, or somewhere else that might be helpful (maybe I should dial in on this call?). Regardless, this feels like a quick win, one that would better serve Mozilla users, teach them to go to the right place for support (over time) and improve the Bugzilla workflow. It might be worth implementing even for a trial period, with any positive or negative feedback assessed after 6 months.

Let me know how I can help.

Additional Resources

Bug 444302: Provide a means to migrate support issues that are misfiled as bugs over to the support.mozilla.com forums.

My previous post: Some thoughts on improving Bugzilla. The comments are worth checking out

Mozilla’s Bugzilla Wiki Page

Why I’m Struggling with Google+

So it’s been a couple of weeks since Google+ launched and I’ll be honest, I’m really struggling with the service. I wanted to give it a few weeks before writing anything, which has been helpful in letting my thinking mature.

First, before my Google friends get upset, I want to acknowledge that the reason I’m struggling has more to do with me than with Google+. My sense is that Google+ is designed to manage personal networks. In terms of social networking, the priority, like at Facebook, is on a soft version of the word “social” – e.g. making the experience friendly and social, not necessarily efficient.

And I’m less interested in the personal experience than in the learning/professional/exchanging experience. Mark Jones, the global communities editor for Reuters, completely nailed what drives my social networking experience in a recent Economist special on the news industry: “The audience isn’t on Twitter, but the news is on Twitter.” Exactly! That’s why I’m on Twitter – because that’s where the news is. It is where the thought leaders are interacting and engaging one another, which is a very different activity than socializing. And I want to be part of all that: getting intellectually stimulated and engaged – and maybe even, occasionally, shaping ideas.

And that’s what threw me initially about Google+. Because of where I’m coming from, I (like many people) initially focused on sharing updates, which invited comparing Google+ to Twitter, not Facebook. That was a mistake.

But if Google+ is about being social above all else, it is going to be more like Facebook than Twitter. And therein lies the problem. As a directory, I love Facebook. It is great for finding people, checking up on their profile and seeing what they are up to. For some people it is good for socializing. But as a medium for sharing information… I hate Facebook. I so rarely use it, it’s hard to remember the last time I checked my stream intentionally.

So I’m willing to accept that part of the problem is me. But I’m sure I’m not alone, so if you are like me, let me try to further break down why I (and maybe you too) are struggling.

Too much of the wrong information, too little of the right information.

The first problem with Google+ and Facebook is that they have both too much of the wrong information, and too little of the right information.

What do I mean by too much of the wrong? What I love about Twitter is its 140 character limit. Indeed, I’m terrified to read over at Mathew Ingram’s blog that some people are questioning this limit. I agree with Mathew: changing Twitter’s 140 character limit is a dumb idea. Why? For the same reason I thought it made sense back in March of 2009, before Google+ was even a thought:

What I love about Twitter is that it forces writers to be concise. Really concise. This in turn maximizes efficiency for readers. What is it Mark Twain said? “I didn’t have time to write a short letter, so I wrote a long one instead.” Rather than having one, or even thousands, of readers read something that is excessively long, the lone drafter must take the time and energy to make it short. This saves lots of people time and energy. By saying what you’ve got to say in 140 characters, you may work more, but everybody saves.

On the other hand, while I want a constraint over how much information each person can transmit, I want to be able to view my groups (or circles) of people as I please.

Consider the screen shot of TweetDeck below. Look how much information is being displayed in a coherent manner (of my choosing). It takes me maybe, maybe 30-60 seconds to scan all this. In one swoop I see what friends are up to, some of my favourite thought leaders, some columnists I respect… it is super fast and efficient. Even on my phone, switching between these columns is a breeze.

[Screenshot: TweetDeck displaying several Twitter columns at once]

But now look at Google+. There are comments under each item… but I’m not sure I really care to see them. Rather than the efficient stream of content I want, I essentially have a stream of content I didn’t ask for. Worse, I can see, what, maybe 2-5 items per screen, and of course I see multiple circles on a single screen.

[Screenshot: the Google+ stream]

Obviously, some of this is because Google+ doesn’t have any applications to display it in alternative forms. I find the Twitter homepage equally hard to use. So some of this could be fixed if (and hopefully when) Google makes public their Google+ API.

But it can’t solve some underlying problems. Because an item can be almost as long as the author wants, and there can be comments, Google+ doesn’t benefit from Twitter’s 140 character limit. As one friend put it, rather than looking at a stream of content, I’m looking at a blog in which everybody I know is a writer submitting content and in which an indefinite number of comments may appear. I’ll be honest: that’s not really a blog I’m interested in reading. Not because I don’t like the individual authors, but because it’s simply too much information, shared inefficiently.

Management Costs are too high

And herein lies the second problem. The management costs of Google+ are too high.

I get why “circles” can help solve some of the problems outlined above. But, as others have written, it creates a set of management costs that I really can’t be bothered with. Indeed this is the same reason Facebook is essentially broken for me.

One of the great things about Twitter is that it’s simple to manage: follow or don’t follow. I love that I don’t need people’s permission to follow them. At the same time, I understand that this isn’t ideal for managing divergent social groups. A lot of people live lives much more private than mine, or want to be able to share just among small, distinct groups of friends. When I want to do this, I go to email… that’s because the groups in my life are always shifting and it’s simple to just pick the email addresses. Managing circles and keeping track of them feels challenging for personal use. So Google+ ends up taking too much time to manage, which is, of course, also true of Facebook…

Using circles for professional reasons makes way more sense. That is essentially what I’ve got with Twitter lists. The downside here is that re-creating those lists is a huge pain.

And now one unfair reason with some insight attached

Okay, so going to the Google+ website is a pain, and I’m sure it will be fixed. But presently my main Google account is centered around my eaves.ca address, and Google+ won’t work with Google Apps accounts, so I have to keep flipping to a Gmail account I loathe using. That’s annoying but not a deal breaker. The bigger problem is that my Google+ social network is now attached to an email account I don’t use. Worse, it isn’t clear I’ll ever be able to migrate it over.

My Google experience is Balkanizing and it doesn’t feel good.

Indeed, this hits on a larger theme: Early on, I often felt that one of the promises of Google was that it was going to give me more opportunities to tinker (like what Microsoft often offers in its products), but at the same time offer a seamless integrated operating environment (like what Apple, despite or because of their control freak evilness, does so well). But increasingly, I feel the things I use in Google are fractured and disconnected. It’s not the end of the world, but it feels less than what I was hoping for, or what the Google brand promise suggested. But then, this is what everybody says Larry Page is trying to fix.

And finally a bonus fair reason that’s got me ticked

Now I also have a reason for actively disliking Google+.

After scanning my address book and social network, Google+ asked me if I wanted to add Tim O’Reilly to a circle. I follow Tim as a thought leader on Twitter, so naturally I thought: let’s get his thoughts via Google+ as well. It turns out, however, that Tim does not have a Google+ account. Later, when I decided to post something, a default setting I failed to notice sent emails to everyone in my circles without a Google+ account. So now I’m inadvertently spamming Tim O’Reilly who, frankly, doesn’t need to get crap spam emails from me or anyone. I feel bad for him because I suspect I’m not the only one doing it. He’s got 1.5 million followers on Twitter. That could be a lot of spam.

My fault? Definitely in part. But I think there’s a chunk of blame that can be heaped onto a crappy UI that wanted that outcome. In short: uncool, and not really aligned with the Google brand promise.

In the end…

I remember that initially I didn’t get Twitter; after first trying it briefly I gave up for a few months. It was only on the second attempt that it grabbed me and I found the value. Today I’m struggling with Google+, but maybe in a few months it will all crystallize for me.

What I do get is that it is an improvement on Facebook, which seems to be becoming the new AOL – a sort of walled-garden internet that is still connected but doesn’t really want you off in the wilds having fun. Does Google+ risk doing the same to Google? I don’t know. But at least circles are clearly a much better organizing system than anything Facebook has on offer (which I’ve really failed to get into) – far more flexible and easier to set up. But these features, and their benefits, are still not sufficient to overcome the cost of setting it all up and maintaining it…

Ultimately, if everybody moves, I’ll adapt, but I way prefer the simplicity of Twitter. If I had my druthers, I’d just post everything to Twitter and have it auto-post over to Google+ and/or Facebook as well.

But I don’t think that will happen. My guess is that for socially driven users (i.e. the majority of people) the network effects will probably keep them on Facebook. And does Google+ have enough features to pull the more alpha-type user away? I’m not sure. I’m not seeing it yet.

But I hope they try, as a little more competition in the social networking space might be good for everyone, especially when it comes to privacy and crazy end-user agreements.

Using Data to Make Firefox Better: A mini-case study for your organization

I love Mozilla. Any reader of this blog knows it. I believe in its mission, I find the organization totally fascinating and its processes engrossing. So much so that I spend a lot of time thinking about it – and, hopefully, finding ways to contribute.

I’m also a big believer in data. I believe in the power of evidence-based public policy (hence my passion about the long-form census) and in the ability of data to help organizations develop better products, and people make smarter decisions.

Happily, a few months ago I was able to merge these two passions: analyzing data in an effort to help Mozilla understand how to improve Firefox. It was fun. But more importantly, the process says a lot about the potential for innovation open to organizations that cultivate an engaged user community.

So what happened?

In November 2010, Mozilla launched a visualization competition that asked: How do People Use Firefox? As part of the competition, they shared anonymous data collected from Test Pilot users (people who agreed to share anonymous usage data with Mozilla). Working with my friend (and quant genius) Diederik Van Liere, we analyzed the impact of add-on memory consumption on browser performance to find out which add-ons use the most memory and thus are most likely slowing down the browser (and frustrating users!). (You can read about our submission here).
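For a flavour of that analysis, here is a stripped-down sketch of the core step: ranking add-ons by the average memory they consume across sessions. The CSV layout (one row per add-on per session) is an assumption for illustration, not the actual Test Pilot schema:

```python
# Sketch: rank add-ons by average memory use across Test Pilot sessions.
# Column names and file name are assumptions, not the real schema.
import csv
from collections import defaultdict

totals, counts = defaultdict(float), defaultdict(int)
with open("testpilot_addon_memory.csv") as f:
    for row in csv.DictReader(f):
        name = row["addon_name"]
        totals[name] += float(row["memory_kb"])
        counts[name] += 1

averages = {name: totals[name] / counts[name] for name in totals}
worst = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:10]
for name, avg_kb in worst:
    print(f"{name:40s} {avg_kb / 1024:8.1f} MB average")
```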

But doing the analysis wasn’t enough. We wanted Mozilla engineers to know that we thought users should be shown the results – so they could make more informed choices about which add-ons they download. Our hope was to put pressure on add-on developers to make sure they weren’t ruining Firefox for their users. To do that, we visualized the data by making a mock-up of the add-ons website – with our data inserted.

[Mock-up: the Firefox add-ons site with our memory-use data inserted]

For our efforts, we won an honourable mention. But winning a prize is far, far less cool than actually changing behaviour or encouraging change. So last week, during a trip to Mozilla’s offices in Mountain View, I was thrilled when one of the engineers pointed out that the add-ons site now has a page listing the add-ons that most slow down Firefox’s start-up time.

[Screenshot: the “Slow Performing Add-ons” page on the Firefox add-ons site]

(Sidebar: Anyone else find it ironic that “FastestFox: Browse Faster” is #5?)

This is awesome! Better still, in April, Mozilla launched an add-on performance improvement initiative to help reduce the negative impact add-ons can have on Firefox. I have no idea if our submission to the visualization competition helped kick-start this project; I’m sure there were many smart people at Mozilla already thinking about this. Maybe it was already underway? But I like to believe our ideas helped push their thinking – or, at least, validated some of their ideas. And of course, I hope it continues to. I still believe that the above-cited data shouldn’t be hidden on a webpage well off the beaten path, but should be located right next to every add-on. That’s the best way to create the right feedback loops, and is in line with Mozilla’s manifesto – empowering users.

Some lessons (for Mozilla, companies, non-profits and governments)

First lesson. Innovation comes from everywhere. So why aren’t you tapping into it? Diederik and I are all too happy to dedicate some cycles to thinking about ways to make Firefox better. If you run an organization that has a community of interested people larger than your employee base (I’m looking at you, governments), why aren’t you finding targeted ways to engage them, not in endless brainstorming exercises, but in innovation challenges?

Second, get strategic about using data. A lot of people (including myself) talk about open data. Open data is good. But it can’t hurt to be strategic about it as well. I tried to argue for this in the government and healthcare space with this blog post. Data-driven decisions can be made in lots of places; what you need to ask yourself is: What data are you collecting about your product and processes? What, of that data, could you share, to empower your employees, users, suppliers, customers, whoever, to make better decisions? My sense is that the companies (and governments) of the future are going to be those that react both quickly and intelligently to emerging challenges and opportunities. One key to being competitive will be to have better data to inform decisions. (Again, this is the same reason why, over the next two decades, you can expect my country to start making worse and worse decisions about social policy and the economy – they simply won’t know what is going on).

Third, if you are going to share, get a data portal. In fact, Mozilla needs an open data portal (a blog post on that is coming). Mozilla has always relied on volunteer contributors to help write Firefox and submit patches for bugs. The same is true for analyzing its products and processes. An open data portal would enable more people to help find ways to keep Firefox competitive. Of course, this is also true for governments and non-profits (to help find efficiencies and new services) and for companies.

Finally, reward good behaviour. If contributors submit something you end up using… let them know! Maybe the idea Diederik and I submitted never informed anything the add-ons group was doing; maybe it did. But if it did… why not let us know? We are so pumped about the work they are doing, we’d love to hear more about it. Finding out by accident seems like a lost opportunity to engage interested stakeholders. Moreover, back then Diederik was thinking about his next steps – he now works for the Wikimedia Foundation – which made me realize that an innovation challenge can also be a great way to spot talent.

Why not create an Open311 add-on for Ushahidi?

This is not a complicated post. Just a simple idea: Why not create an Open311 add-on for Ushahidi?

So what do I mean by that, and why should we care?

Many readers will be familiar with Ushahidi, a non-profit that develops open source mapping software enabling users to collect and visualize data on interactive maps. Its history is now fairly famous. As the Wikipedia article about it outlines, “Ushahidi.com” (Swahili for “testimony” or “witness”) is a website created in the aftermath of Kenya’s disputed 2007 presidential election that collected eyewitness reports of violence sent in by email and text message and placed them on a Google map. Ushahidi’s mapping software has also proved to be an important resource in a number of crises since the Kenyan election, most notably during the Haitian earthquake. Here is a great two-minute video on how Ushahidi works.

But mapping of this type isn’t only important during emergencies. Indeed, it is essential to the day-to-day operations of many governments, particularly at the local level. While many citizens in developed economies may be unaware of it, their cities are constantly mapping what is going on around them: broken infrastructure such as leaky pipes, water mains, clogged gutters and potholes, along with social issues such as crime and homelessness, and business and liquor license locations. More importantly, citizens are often the source of this information – their complaints are the data that end up driving these maps. The gathering of this data generally falls under the rubric of what are termed 311 systems, since in many cities you can call 311 either to tell the city about a problem (e.g. a noise complaint, a service request or broken infrastructure) or to request information about pretty much any of the city’s activities.

This matters because 311 systems have generally been expensive and cumbersome to run. The beautiful thing about Ushahidi is that:

  1. it works: it has a proven track record of enabling citizens in developing countries to share data, using even the simplest of devices, both with one another and with agencies (like humanitarian organizations)
  2. it scales: Haiti and Kenya are pretty big places, and they generated a fair degree of traffic. Ushahidi handled it.
  3. it is lightweight: Ushahidi’s technical footprint (yep, making that term up right now) is relatively light. The infrastructure required to run it is not overly complicated
  4. it is relatively inexpensive: as a result of (3) it is also relatively cheap to run, being both lightweight and built on a lot of open source software
  5. Oh, and did I mention IT WORKS.

This is pretty much the spec you would want to meet if you were setting up a 311 system in a city with very few resources but an interest in starting to gather data about citizen demands and/or in monitoring infrastructure it has recently invested in. Of course, to transform Ushahidi into a process for mapping 311-type issues you’d need some sort of spec describing what that would look like. Fortunately, Open311 already does just that and is supported by some of the large 311 system providers – such as Lagan and Motorola – as well as some of the disruptors – such as SeeClickFix. Indeed, there is an Open311 API specification that any developer could use as the basis for the add-on to Ushahidi.
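To give a sense of how thin the glue could be, here is a rough sketch of what such an add-on might do with a single report: relay it to a city’s Open311 GeoReport v2 endpoint. The endpoint URL, API key and service code are placeholders, and a real module would live inside Ushahidi’s plugin system rather than a standalone script:

```python
# Sketch: relay an incoming (hypothetical) Ushahidi report as an
# Open311 GeoReport v2 service request. Endpoint/key are placeholders.
import requests

OPEN311_ENDPOINT = "https://city.example.org/open311/v2/requests.json"  # placeholder
API_KEY = "YOUR_API_KEY"
SERVICE_CODE_POTHOLE = "001"  # in practice, discovered via GET /services.json

def relay_report(report):
    """Map a simplified Ushahidi report dict onto a GeoReport v2 POST."""
    payload = {
        "api_key": API_KEY,
        "service_code": SERVICE_CODE_POTHOLE,
        "lat": report["latitude"],
        "long": report["longitude"],
        "description": report["description"],
    }
    resp = requests.post(OPEN311_ENDPOINT, data=payload)
    resp.raise_for_status()
    return resp.json()  # includes the service_request_id to show the citizen

print(relay_report({
    "latitude": -1.2833,
    "longitude": 36.8167,
    "description": "Large pothole on the main road near the market",
}))
```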

Already, I suspect many cities – even those in developing countries – could probably afford SeeClickFix, so there may already be a solution at the right price point in this space. But maybe not – I don’t know. More importantly, an Open311 module for Ushahidi could get local governments, or better still local tech developers in developing economies, interested in and contributing to the Ushahidi code base, further strengthening the project. And while the code would be globally accessible, innovation and implementation could continue to happen at the local level, helping drive the local economy and boosting know-how. The model here, in my mind, is OpenMRS, which has spawned a number of small tech startups across Africa that manage the implementation and servicing of OpenMRS installations at medical clinics in countries across the region.

I think this is a potentially powerful idea for stakeholders in local governments and startups (especially in developing economies) and our friends at Ushahidi. I can see that my friend Philip Ashlock at Open311 had a similar thought a while ago, so the Open311 people are clearly interested. It could be that the right ingredients are already in place to make some magic happen.

Mind. Prepare to be blown away. Big Data, Wikipedia and Government.

Okay, super psyched about this. Back at the Strata Conference in February (in San Diego), I introduced my long-time uber-quant friend and now Wikimedia Foundation data scientist Diederik Van Liere to fellow Gov2.0 thinker Nicholas Gruen (Chairman) and Anthony Goldbloom (Founder and CEO), both of an awesome new company called Kaggle.

As usually happens when awesome people get together… awesomeness ensued. Mind. Be prepared to be blown.

So first, what is Kaggle? It’s a company that helps organizations post their data and run competitions so that it can be scrutinized by the world’s best data scientists in pursuit of a specific goal. Perhaps the most powerful example of a Kaggle competition to date was their HIV prediction competition, in which they asked contestants to use a data set to find markers in the HIV sequence which predict a change in the severity of the infection (as measured by viral load and CD4 counts).

Until Kaggle showed up, the best science to date had a prediction rate of 70% – a feat that had taken years to achieve. In 90 days, contributors to the contest were able to achieve a prediction rate of 77% – a 10% improvement. I’m told that achieving a similar increment had previously taken something close to a decade. (Data geeks can read how the winner did it here and here.)

Diederik and Anthony have created a similar competition, but this time using Wikipedia participation data. As the competition page outlines:

This competition challenges data-mining experts to build a predictive model that predicts the number of edits an editor will make in the five months after the end date of the training dataset. The dataset is randomly sampled from the English Wikipedia dataset from the period January 2001 – August 2010.

The objective of this competition is to quantitively understand what factors determine editing behavior. We hope to be able to answer questions, using these predictive models, why people stop editing or increase their pace of editing.
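To make the task concrete, here is a deliberately naive baseline a contestant might start from: assume each editor’s recent pace simply continues. The file and column names are my assumptions, not the actual competition files:

```python
# Sketch of a naive baseline: predict future edits from recent activity.
# Assumes ISO-formatted timestamps; file/column names are illustrative.
import csv
from collections import defaultdict

recent_edits = defaultdict(int)
with open("training.csv") as f:
    for row in csv.DictReader(f):
        # count each editor's edits in the final months of the training window
        if row["timestamp"] >= "2010-04-01":
            recent_edits[row["user_id"]] += 1

with open("submission.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["user_id", "predicted_edits"])
    for user_id, n in recent_edits.items():
        writer.writerow([user_id, n])  # assume the past rate simply continues
```

Even a trivial baseline like this gives contestants something to beat, which is part of what makes these competitions move so quickly.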

The subject matter is, of course, dear to me, as I’m hoping we can do similar analysis in open source communities – something Diederik and I have tried to theorize about using Wikipedia data and actually do using Bugzilla data.

There is a grand prize of $5000 (along with a few others) and, amazingly, already 15 participants and 7 submissions.

Finally, I hope public policy geeks, government officials and politicians are paying attention. There is power in data, and an opportunity to use it to find efficiencies and opportunities. Most governments probably don’t even know how to approach an organization like Kaggle or run a competition like this, despite (or perhaps because of) how fast, efficient and effective it is.

It shouldn’t be this way.

If you are in government (or any org), check out Kaggle. Watch. Learn. There is huge opportunity here.

12:10pm PST – UPDATE: More Michael Bay-sized awesomeness. Within 36 hours of the Wikipedia challenge being launched, the leading submission had improved on internal Wikimedia Foundation models by 32.4%.

Developing Community Management Metrics and Tools for Mozilla

Background – how we got here

Over the past few years I’ve spent a great deal of time thinking about how we can improve both the efficiency of open source communities and the contributor experience. Indeed, this was the focus, in part, of my talk at the Mozilla Summit last summer. For some years Diederik Van Liere – now with the Wikimedia Foundation’s metrics team – and I have played with Bugzilla data a great deal to see if we could extract useful information from it. This led us to engage closely with some members of the Mozilla Thunderbird team – in particular Dan Mosedale, who immediately saw its potential and became a collaborator. Then, in November, we connected with Daniel Einspanjer of Mozilla Metrics and began to imagine ways to share data that could create opportunities to improve the participation experience.

Yesterday, thanks to some amazing work on the part of the Mozilla Metrics team (listed at the bottom of this post), we started sharing some of this work at the Mozilla all-hands. Specifically, Daniel demoed the first of a group of dashboards that describe what is going on in the Mozilla community and that, we hope, can help enable better community management. While these dashboards deal with the Mozilla community in particular, I nonetheless hope they will be of interest to a number of open source communities more generally. (Presently the link is only available to Mozilla staffers until the dashboard goes through security review – see more below, along with screenshots – but you can see a screencast here.)

Why – the contributor experience is a key driver for success of open source projects

My own feeling is that within the Mozilla community the products, like Firefox, evolve quickly, but the process by which people work together tends to evolve more slowly. This is a problem. If Mozilla cannot evolve and adopt new approaches with sufficient speed then potential and current contributors may go where the experience is better and, over time, the innovation and release cycle could itself cease to be competitive.

This task is made all the more complicated since Mozilla’s ability to fulfill its mission and compete against larger, better funded competitors depends on its capacity to tap into a large pool of social capital – a corps of paid and unpaid coders whose creativity can foster new features and ideas. Competing at this level requires Mozilla to provide processes and tools that can effectively harness and coordinate that energy at minimal cost to both contributors and the organization.

As I discussed in my Mozilla Summit talk on Community Management, processes that limit the size or potential of our community limit Mozilla. Conversely, making it easier for people to cooperate, collaborate, experiment and play enhances the community’s capacity. Consequently, open source projects should – in my opinion – constantly be looking to reduce or eliminate transaction costs and barriers to cooperation. A good example of this is how GitHub showed that forking can be a positive social contribution. Yes, it made managing the code base easier, but what it really did was empower people. It took something everyone thought would kill open source projects – forking – and made it a powerful tool of experimentation and play.

How – Using data to enable better contributor experience

Unfortunately, it is often hard to quantitatively assess how effectively an open source community manages itself. Our goal is to change that. The hope is that these dashboards – and the data that underlies them – will provide contributors with an enhanced situational awareness of the community so they can improve not just the code base, but the community and its processes. If we can help instigate a faster pace of innovation and change in the processes of Mozilla, then I think this will both make it easier to improve the contributor experience and increase the pace of innovation and change in the software. That’s the hope.

That said, this first effort is a relatively conservative one. We wanted to create a dashboard that would allow us to identify some broader trends in the Mozilla Community, as well as provide tangible, useful data to Module Owners – particularly around identifying contributors who may be participating less frequently.


This dashboard is primarily designed to serve two purposes. The first is to showcase what dashboards could be, with the hope of inspiring Mozilla community members to use it and, more importantly, to build their own. The second is to provide module owners with a reliable tool with which to more effectively manage their part of the community. So what are some of the ways I hope this dashboard might be helpful? One important feature is the ability to sort contributors by staff or volunteer. An open source community’s volunteer contributors should be a treasured resource. One nice thing about this dashboard is that you can not only look at just the volunteers, but also get a quick sense of those who haven’t submitted a patch in a while.

In the picture below I de-selected all Mozilla employees, so that we are only looking at volunteer contributors. Using this view we can see which volunteers are starting to participate less – note the red circle marked “everything okay?” A good community manager might send these people an email asking if everything is okay. Maybe they are moving on, or maybe they just had a baby (and so are busy with a totally different type of patch – diapers), but maybe they had a bad experience and are frustrated, or a bunch of code is stuck in review. These are things we would want to know, and know quickly, as losing these contributors would be bad. In addition, we can also see who the emerging power contributors are – they might be people we want to mentor, or connect with mentors, in order to solidify their positive association with our community and speed up their development. In my view, these should be core responsibilities of community managers, and this dashboard makes it much easier to execute on these opportunities.

[Screenshot: the main community dashboard, annotated]
Below you can see how zooming in more closely allows you to see trends for individual contributors over time. Again, sometimes large changes or shifts happen for reasons we know of (they were working on features for a big release and it’s shipped), but where we don’t know the reasons, maybe we should pick up the phone or email this person to check that everything is okay.

[Screenshot: per-contributor trend view, annotated]

Again, if this contributor had a negative experience and was drifting away from the community – wouldn’t we want to know before they silently disappeared and moved on? This is in part the goal.
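For the curious, the underlying logic is simple enough to sketch. This back-of-the-envelope version counts patches per contributor per quarter and flags volunteers whose activity is falling off; the input format is an assumption, and the real dashboard works off the Mozilla Metrics data warehouse rather than a CSV:

```python
# Sketch: flag volunteers whose patch activity dropped sharply last quarter.
# Input format (contributor, is_staff, quarter, patches) is an assumption.
import csv
from collections import defaultdict

history = defaultdict(dict)   # contributor -> {quarter: patch_count}
staff = {}
with open("patch_counts_by_quarter.csv") as f:
    for row in csv.DictReader(f):
        history[row["contributor"]][row["quarter"]] = int(row["patches"])
        staff[row["contributor"]] = row["is_staff"] == "yes"

# assumes at least two quarters of history in the file
quarters = sorted({q for counts in history.values() for q in counts})
last, prev = quarters[-1], quarters[-2]

for person, counts in history.items():
    if staff[person]:
        continue  # focus on volunteer contributors
    before, now = counts.get(prev, 0), counts.get(last, 0)
    if before >= 3 and now <= before / 2:
        print(f"Everything okay? {person}: {before} patches in {prev}, {now} in {last}")
```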

Some of you may also like the fact that you can dive a little deeper by clicking on a user to see what specific patches that user has worked on (see below).

[Screenshot: per-user view showing the specific patches a contributor has worked on]

Again, these are early days. My hope is that other dashboards will provide still more windows into the community and its processes so as to show us where there are bottlenecks and high transaction costs.

Some of the features we’d like to add to this or other dashboards include:

  • a code-review dashboard that would show how long contributors have been waiting for code-review, and how long before their patches get pushed. This could be a powerful way to identify how to streamline processes and make the experience of participating in open source communities better for users.
  • a semantic analysis of bugzilla discussion threads. This could allow us to flag threads that have become unwieldy or where people are behaving inappropriately so that module owners can better moderate or problem solve them
  • a dashboard that, based on your past bugs and some basic attributes (e.g. skillsets) informs newbies and experienced contributors which outstanding bugs could most use their expertise
  • Ultimately I’d like to see at least 3 core dashboards – one for contributors, one for module owners and one for overall projects – emerge, as well as user-generated dashboards developed using Mozilla Metrics data.
  • Access to all the data in Bugzilla so the contributors, module owners, researchers and others can build their own dashboards – they know what they need better than we do

What’s Next – How Do I Get Access to It and How Can I Contribute?

Great questions.

At the moment the dashboard is going through security review which it must complete before being accessible. Our hope is that this will be complete by the end of Q2 (June).

More importantly, we’d love to hear from contributors, developers and other interested users. We have a standing call every other Friday at 9am PST where we discuss development issues with this and the forthcoming code-review dashboard, contributor needs and wanted features, as well as use cases. If you are interested in participating in these calls, please either let me know or join the Mozilla Community Metrics Google group.

Again, a huge shout-out is deserved by Daniel Einspanjer and the Mozilla Metrics team. Here is a list of contributors, both so people know who they are and in case anyone has questions about specific aspects of the dashboard:
Pedro Alves – Team Lead
Paula Clemente – Dashboard implementor
Nuno Moreira – UX designer
Maria Roldan – Data Extraction
Nelson Sousa – Dashboard implementor

Honourable Mention! The Mozilla Visualization Challenge Update

Really pleased to share that Diederik and I earned an honourable mention for our submission to the Mozilla Open Data Competition.

For those who missed it – and who find open data, open source and visualization interesting – you can read a description of, and see images from, our submission to the competition in this blog post I wrote a month ago.