Tag Archives: opensource

Government Procurement Failure: BC Ministry of Education Case Study

Apologies for the lack of posts. I’ve been in business mode – both helping a number of organizations I’m proud of and working on my own business.

For those interested in a frightening tale of inept procurement, poor judgement and downright dirty tactics when it comes to software procurement and government, there is a wonderfully sad and disturbing case study emerging in British Columbia that shows the lengths a government is willing to go to shut out open source alternatives and ensure that large, expensive suppliers win the day.

The story revolves around a pickle that the province of British Columbia found itself in after a previous procurement disaster. The province had bought a student record management system – software that records elementary and secondary students’ grades and other records. Sadly, the system never worked well. For example, student records generally all get entered at the end of the term, so any system must be prepared to manage significant episodic spikes in usage. The original British Columbia Electronic Student Information System (BCeSIS) was not up to the task and frequently crashed and/or locked out teachers.

To make matters worse, after spending $86M over 6 years it was ultimately determined that BCeSIS was unrecoverably flawed and, as the vendor was ending support, a new system needed to be created.

Interestingly, one of the Province's school districts – the District of Saanich – decided it would self-fund an open source project to create an alternative to BCeSIS. Called OpenStudent, the system would have an open source license, would be created by locally paid open source developers, could be implemented in a decentralized way while still meeting the requirements of the province and… would cost a fraction of the price proposed by large government vendors. The Times Colonist has a simple article that covers the launch of OpenStudent here.

Rather than engage Saanich, the province decided to take another swing at hiring a multinational for an IT mega-project. An RFP was issued to which only companies with $100M in sales could apply. Fujitsu was awarded a 12-year contract with costs of up to $9.4M a year.

And here are the kickers:

So in other words, the province sprang some surprise requirements on the District of Saanich that forced it to kill an open source solution that could have saved taxpayers millions and employed British Columbians, all while exempting a multinational from meeting the same requirements. It would appear that the province was essentially engaged in a strategy to kill OpenStudent, likely because any success it enjoyed would have created an ongoing PR challenge for the province and threatened its ongoing contract with Fujitsu.

While I don't believe that any BC government official personally profited from this outcome, it is hard – very hard indeed – not to feel that the procurement system is deeply suspect or, at worst, corrupted. I have no idea if it is possible, but I do hope that these documents can serve as the basis for legal action by the District of Saanich against the Province of British Columbia to recapture some of its lost expenses. The province has clearly used its purchasing power to alter the marketplace and destroy competitors; whether this is in violation of a law, I don't know. I do know, however, that it is in violation of good governance, effective procurement and general ethics. As a result, all BC taxpayers have suffered.

Addendum: It has been suggested to me that one reason the BC government may be so keen to support Fujitsu and destroy competing suppliers is that it needs to generate a certain amount of business for the company in order for it to maintain headcount in the province. Had OpenStudent proved viable and cheaper (it was estimated to cost $7-10 per student versus $20 for Fujitsu's service), Fujitsu might have threatened to scale back operations, which might have hurt service levels for other contracts. It is unclear to me whether this is true. To be clear, I don't hold Fujitsu responsible for anything here – they are just a company trying to sell their product and offer the best service they can. The disaster described above has nothing to do with them (they may or may not offer amazing products, I don't know); rather, it has everything to do with the province using its power to eliminate competition and choice.

New Zealand: The World’s Lab for Progressive Tech Legislation?

Cross posted with TechPresident.

One of the nice advantages of having a large world with lots of diverse states is the range of experiments it offers us. Countries (or regions within them) can try out ideas, and if they work, others can copy them!

For example, in the world of drug policy, Portugal effectively decriminalized virtually all drugs. The result has been dramatic, and much of it positive. Some of the changes include a 17% decline in HIV diagnoses amongst drug users and a drop in drug use among adolescents (13-15 yrs). For those interested, you can read more about this in a fantastic report by the Cato Institute written by Glenn Greenwald back in 2009, before he started exposing the unconstitutional and dangerous activities of the NSA. Now, some 15 years later, there have been increasing demands to decriminalize and even legalize drugs, especially in Latin America. But even the United States is changing, with both the states of Washington and Colorado opting to legalize marijuana. The lessons of Portugal have helped make the case, not by penetrating the public's imagination per se, but by showing policy elites that decriminalization not only works but saves lives and saves money. Little Portugal may one day be remembered for changing the world.

I wonder if we might see a similar paper written about New Zealand ten years from now about technology policy. It may be that a number of Kiwis will counter the arguments in this post by exposing all the reasons why I’m wrong (which I’d welcome!) but at a glance, New Zealand would probably be the place I’d send a public servant or politician wanting to know more about how to do technology policy right.

So why is that?

First, for those who missed it, this summer New Zealand banned software patents. This is a stunning and entirely sensible accomplishment. Software patents, and the legal morass and drag on innovation they create, are an enormous problem. The idea that Amazon can patent “1-click” (e.g. the idea that you pre-store someone’s credit card information so they can buy an item with a single click) is, well, a joke. This is a grand innovation that should be protected for years?

And yet, I can't think of a single other OECD member country that is likely to pass similar legislation. This means that it will be up to New Zealand to show that the software world will survive just fine without patents and the economy will not suddenly explode into flames. I also struggle to think of an OECD country where one of the most significant industry groups – the Institute of IT Professionals – would not only support such a measure but help push its passage:

The nearly unanimous passage of the Bill was also greeted by Institute of IT Professionals (IITP) chief executive Paul Matthews, who congratulated [Commerce Minister] Foss for listening to the IT industry and ensuring that software patents were excluded.

Did I mention that the bill passed almost unanimously?

Second, New Zealanders are further up the learning curve on the dangerous willingness of their government – and foreign governments – to illegally surveil them online.

The arrest of Kim Dotcom over MegaUpload has sparked investigations into how closely the country's police and intelligence services follow the law. (For an excellent timeline of the Kim Dotcom saga, check out this link.) This is because Kim Dotcom was illegally spied on by New Zealand's intelligence services and police force, at the behest of the United States, which is now seeking to extradite him. The arrest and subsequent fallout have piqued public interest and led to investigations, including the Kitteridge report (PDF), which revealed that "as many as 88 individuals have been unlawfully spied on" by the country's Government Communications Security Bureau.

I suspect the Snowden documents and subsequent furor surprised New Zealanders less than many of their counterparts in other countries, since the revelations were less a bombshell than another data point on a trend line.

I don't want to overplay the impact of the Kim Dotcom scandal. It has not, as far as I can tell, led to a complete overhaul of the rules that govern intelligence gathering and online security. That said, I suspect it has created a political climate that may be more (healthily) distrustful of government intelligence services and the intelligence services of the United States. As a result, it is likely that politicians have been more sensitive to this matter for a year or two longer than elsewhere, and that public servants are more accustomed to assessing policies through the lens of their impact on the rights and privacy of citizens than in many other countries.

Finally (and this is somewhat related to the first point), New Zealand has, from what I can tell, a remarkably strong open source community. I'm not sure why this is the case, but I suspect that people like Nat Torkington – an open source and open data advocate in New Zealand – and others like him play a role in it. More interestingly, this community has had influence across the political spectrum. The centre-left Labour Party deserves much of the credit for the patent reform, while the centre-right New Zealand National Party has embraced open data. The country was among the first to embrace open source as a viable option when procuring software, and in 2003 the government developed an official open source policy to help clear the path for greater use of open source software. This contrasts sharply with my experience in Canada where, as late as 2008, open source was still seen by many government officials as a dangerous (some might say cancerous?) option that needed to be banned and/or killed.

All this is to say that both outside government (in civil society and the private sector) and within it, there is greater expertise in thinking about open source solutions, and so an ability to ask different questions about intellectual property and definitions of the public good. While I recognize that this exists in many countries now, it has existed longer in New Zealand than in most, which suggests that it enjoys greater acceptance in senior ranks and that there is greater experience in thinking about and engaging these perspectives.

I share all this for two reasons:

First, I would keep my eye on New Zealand. This is clearly a place where something is happening in a way that may not be possible in other OECD countries. The small size of its economy (and so its relative lack of importance to the major proprietary software vendors), combined with sufficient policy agreement among both the public and elites, enables the country to overcome the internal and external lobbying and pressure that would likely sink similar initiatives elsewhere. And while New Zealand's influence may be limited, don't underestimate the power of example. Portugal also has limited influence, but its example has helped show the world that the US-led narrative on the "war on drugs" can be countered. In many ways this is often how it has to happen. Innovation, particularly in policy, often comes from the margins.

Second, if a policy maker, public servant or politician comes to me and asks me who to talk to around digital policy, I increasingly find myself looking at New Zealand as the place that is the most compelling. I have similar advice for PhD students. Indeed, if what I’m arguing is true, we need research to describe, better than I have, the conditions that lead to this outcome as well as the impact these policies are having on the economy, government and society. Sadly, I have no names to give to those I suggest this idea to, but I figure they’ll find someone in the government to talk to, since, as a bonus to all this, I’ve always found New Zealanders to be exceedingly friendly.

So keep an eye on New Zealand; it could be the place where some of the most progressive technology policies first get experimented with. It would be a shame if no one noticed.

(Again, if some New Zealanders want to tell me I'm wrong, please do. Obviously, you know your country better than I do.)

Making Bug Fixing more Efficient (and pleasant) – This Made Me Smile

The other week I was invited down to the Bay Area Drupal Camp (#BadCamp) to give a talk on community management to a side meeting of the 100 or so core Drupal developers.

I gave an hour-long version of my OSCON keynote on the Science of Community Management and had a great time engaging what was clearly a room of smart, caring people who want to do good things, ship great code, and work well with one another. As part of my talk I ran them through some basic negotiation skills – particularly around separating positions (a demand) from interests (the reasons/concerns that created that demand). Positions are challenging to work with as they tend to lock people into what they are asking and either make outcomes binary or foster compromises that may make little sense, whereas interests (which you get at by being curious and asking lots of whys) can create the conditions for creative, value-generating outcomes that also strengthen the relationship.

Obviously, understanding the difference is key, but so is acting on it, e.g. asking questions at critical moments to try to open up the dialogue and uncover interests.

Seems like someone was listening during the workshop, since I was just sent this link to a conversation about a tricky Drupal bug (screenshot below).

[Screenshot: Drupal-bug-fixing2]

I love the questions. This is exactly the type of skill and community norm I think we need to build into more of our bug tracking environments/communities, which can sometimes be pretty hostile and aggressive – something that I think turns off many potentially good contributors.

Community Managers: Expectations, Experience and Culture Matter

Here's an awesome link to drive home my point from my OSCON keynote on Community Management, particularly the part where I spoke about the importance of managing wait times – the period between when a volunteer/contributor takes an action and when they get feedback on that action.

In my talk I referenced code review wait times. For non-developers: in open source projects, a volunteer (contributor) will often write a patch which must be reviewed by someone who oversees the project before it gets incorporated into the software's code base. This is akin to a quality assurance process – say, like if you are baking brownies for the church charity event, the organizer probably wants to see the brownies first, just to make sure they aren't a disaster. The period between when you write the patch (or make the brownies) and when the project manager reviews them and says they are ok/not ok – that's the wait time.

The thing is, if you never tell people how long they are going to have to wait – expect them to get unhappy. More importantly, if, while they're waiting, other contributors come along and make negative comments about their contributions, don't be surprised if they get even more unhappy and become less and less inclined to submit patches (or brownies, or whatever makes your community go round).

In other words, your code base may be important, but expectations, experience and culture matter, probably more. I don't think anyone believes Drupal is the best CMS ever invented, but its community sets pretty good expectations, offers a great experience and has a fantastic culture, so I suspect it kicks the ass of many "technically" better CMSes run by less well-managed communities.

Because hey, if I've come to expect that I have to wait an infinite or undetermined amount of time, if the experience I have interacting with others sucks, and if the culture of the community I'm trying to volunteer with is not positive… guess what: I'm probably going to stop contributing.

This is not rocket science.

And you can see evidence of people who experience this frustration in places around the net. Edd Dumbill sent me this link, via Hacker News, from a frustrated contributor tired of enduring crappy expectations, experience and culture.

Heres what happens to pull requests in my experience:

  • you first find something that needs fixing
  • you write a test to reproduce the problem
  • you pass the test
  • you push the code to github and wait
  • then you keep waiting
  • then you wait a lot longer (it’s been months now)
  • then some ivory tower asshole (not part of the core team) sitting in a basement finds a reason to comment in a negative way.
  • you respond to the comment
  • more people jump on the negative train and burry your honestly helpful idea in sad faces and unrelated negativity
  • the pull dies because you just don’t give a fuck any more

If this is what your volunteer community – be it software driven, or for poverty, or a religious org, or whatever – is like, you will bleed volunteers.

This is why I keep saying things like code review dashboards matter. I bet if this user could at least see what the average wait time is for code review he’d have been much, much happier. Even if that wait time were a month… at least he’d have known what to expect. Of course improving the experience and community culture are harder problems to solve… but they clearly would have helped as well.

Most open source projects have the data to set up such a dashboard; it is just a question of whether we will.
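For the technically inclined, here is a minimal sketch of what such a dashboard could start from, assuming a GitHub-hosted project and the public GitHub REST API; the repository name is a placeholder, so treat this as an illustration rather than a finished tool:

```python
# Rough sketch: estimate how long contributors wait for their pull requests
# to be dealt with, using GitHub's public REST API.
# "someorg/someproject" is a placeholder; point it at a real repository.
from datetime import datetime
import requests

REPO = "someorg/someproject"  # hypothetical repository
resp = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    params={"state": "closed", "per_page": 100},
)
resp.raise_for_status()

def parse(ts):
    # GitHub timestamps look like "2013-08-24T12:00:00Z"
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

# Days from when each pull request was opened to when it was closed
waits = [
    (parse(pr["closed_at"]) - parse(pr["created_at"])).days
    for pr in resp.json()
    if pr.get("closed_at")
]

if waits:
    print(f"Pull requests sampled: {len(waits)}")
    print(f"Average days from submission to resolution: {sum(waits) / len(waits):.1f}")
```

Even something this crude, published where contributors can see it, would tell a new patch author roughly what to expect.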

Okay, I’m late for an appointment, but really wanted to share that link and write something about it.

NB: Apologies if you've already seen this. I accidentally published this as a page, not a post, on August 24th, so it escaped most people's view.

Lessons from Michigan's "Innovation Fund" for Government Software

So it was with great interest that I read a news article out of Michigan that a reader emailed me several weeks ago. Turns out the state recently approved a $2.5 million innovation fund that will be disbursed in $100,000 to $300,000 chunks to fund about 10 projects. As Government Technology reports:

The $2.5 million innovation fund was approved by the state Legislature in Michigan’s 2012 budget. The fund was made formal this week in a directive from Gov. Rick Snyder. The fund will be overseen by a five-person board that includes Michigan Department of Technology, Management and Budget (DTMB) Director John Nixon and state CIO David Behen.

There are lessons in this for other governments thinking about how to spur greater innovation in government while also reducing the cost of software.

First up: the idea of an innovation fund – particularly one that is designed to support software that works for multiple governments – is a laudable one. As I’ve written before, many governments overpay for software. I shudder to think of how many towns and counties in Michigan alone are paying to have the exact same software developed for them independently. Rather than writing the same piece of software over and over again for each town, getting a single version that is usable by 80% (or heck, even just 25%) of cities and counties would be a big win. We have to find a way to get governments innovating faster, and getting them back in the driver’s seat on the software they need (as opposed to adapting stuff made for private companies) would be a fantastic start.

Going from this vision – of getting something that works in multiple cities – to reality is not easy. Read the Executive Directive more closely. What's particularly interesting (from my reading) is the flexibility of the program:

In addition to the Innovation Fund and Investment Board, the plan may include a full range of public, private, and non-profit collaborative innovation strategies, including resource sharing…

There is good news and bad news here.

The bad news is that all this money could end up as loans to mom-and-pop software shops that serve a single city or jurisdiction, because their products were never designed from the beginning to be usable across multiple jurisdictions. In other words, the innovation fund could go to fund a bunch of vendors who already exist and who, at best, do okay or, at worst, do mediocre work and, in either case, will never be disruptive and blow up the marketplace with something that is both radically helpful and radically low cost.

What makes me particularly nervous about the directive is that there is no reference to an open source license. If a government is going to directly fund the development of software, I think it should be open source; otherwise, taxpayers are acting as venture capitalists to develop software that they are also going to pay licenses to use. In other words, they're absorbing the risk of a VC in order to have the limited rights of being a client; that doesn't seem right. An open source requirement would be the surest way to ensure an ROI on the program's money. It assures that Michigan governments that want access to what gets developed can use it at the lowest possible cost. (To be clear, I've no problem with private vendors – I am one – but their software can be closed because they are, or should be, absorbing the risk of developing it themselves. If the government is giving out grants to develop software for government use, the resulting software should be licensed open.)

Which brings us to the good. My interest in the line of the executive directive cited above was piqued by the reference to public and non-profit “collaborative innovation strategies.” I read that and I immediately think of one of my favourite organizations: Kuali.

Many readers have heard me talk about Kuali, an organization in which a group of universities collectively set the specs for a piece of software they all need and then share in the costs of developing it. I’m a big believer that this model could work for local and even state level governments. This is particularly true for the enterprise management software packages (like financial management), for which cities usually buy over-engineered, feature rich bloatware from organizations like SAP. The savings in all this could be significant, particularly for the middle-sized cities for whom this type of software is overkill.

My real hope is that this is the goal of this fund – to help provide some seed capital to start 10 Kuali-like projects. Indeed, I have no idea if the governor and his CIO's staff have heard of or talked to the Kuali team before signing this directive, but if they haven't, they should now. (Note: It's only a 5-hour drive from the capital, Lansing, Michigan, to the home of Kuali in Bloomington, Indiana.)

So, if you are a state, provincial or national government and you are thinking about replicating Michigan’s directive – what should you do? Here’s my advice:

  • Require that all the code created by any projects you fund be open source. This doesn't mean just anyone controls the specs – those can still reside in the hands of a small group of players – but it does mean that a variety of companies can get involved in implementation so that there is still competition and innovation. This was the genius of Kuali – in the space of a few months, 10 different companies emerged that serviced Kuali software. In other words, the universities created an entire industry niche that served them and their specific needs exclusively. Genius.
  • Only fund projects that have at least 3 jurisdictions signed up. Very few enterprise open source projects start off with a single entity. Normally they are spec’ed out with several players involved. This is because if just one player is driving the development, they will rationally always choose to take shortcuts that will work for them, but cut down on the likelihood the software will work for others. If, from the beginning, you have to balance lots of different needs, you end up architecting your solution to be flexible enough to work in a diverse range of environments. You need that if your software is going to work for several different governments.
  • Don’t provide the funds, provide matching funds. One way to ensure governments have skin in the game and will actually help develop software is to make them help pay for the development. If a city or government agency is devoting $100,000 towards helping develop a software solution, you’d better believe they are going to try to make it work. If the State of Michigan is paying for everything, maybe they’ll contribute and be helpful, or maybe they’ll sit back and see what happens. Ensure they do the former and not the latter – make sure the other parties have skin in the game.
  • Don’t just provide funds for development – provide funds to set up the organization that will coordinate the various participating governments and companies, set out the specs, and project manage the development. Again, to understand what that is like – just fork Kuali’s governance and institutional structure.
  • Ignore government agencies or jurisdictions that believe they are a special unique flower. One of the geniuses of Kuali is that they abstracted the process/workflow layer. That way universities could quickly and easily customize the software so that it worked for how their university does its thing. This was possible not because the universities recognized they were each a unique and special flower but because they recognized that for many areas (like library or financial management) their needs are virtually identical. Find partners that look for similarities, not those who are busy trying to argue they are different.

There is of course more, but I’ll stop there. I’m excited for Michigan. This innovation fund has real promise. I just hope that it gets used to be disruptive, and not to simply fund a few slow and steady (and stodgy) software incumbents that aren’t going to shake up the market and help change the way we do government procurement. We don’t need to spend $2.5 million to get software that is marginally better (or not even). Governments already spend billions every year for that. If we are going to spend a few million to innovate, let’s do it to be truly disruptive.

Adapting KUALI financials for cities: Marin County is looking for Partners

Readers of my blog will be familiar with Kuali – the coalition of universities that co-create a suite of software core to their operations – as I've blogged about it several times and argued that it is a powerful model for local governments interested in rethinking how they procure (or really, co-create) their software.

For some time now I’ve heard rumors that some local governments have been playing with Kuali’s software to see if they can adapt it to work for their needs. Yesterday, David Hill of Marin County posted the comment below to a blog post I’d written about Kuali in which he openly states that he is looking for other municipalities to partner with as they try to fork Kuali financials and adapt it to local government.

<dhill@marincounty.org> (unregistered) wrote:

I completely agree.  It is a radical change for government in at least four ways:

1)  Government developers (are there any?) have little experience with open source
2)  CIOs have no inherent motivation to leave the commercial market model
3)  Governments have little experience in sharing
4)  CIOs are losing their staff due to budget cuts, and have no excess resources to take on a project that appears risky

But, let’s not waste a crisis.  Now is the best time to get KUALI financials certified for government finance and accounting and into production.

Please contact me if you are  planning to upgrade or replace your financial system and would like to look at KFS.
Randy Ozden,  VivanTech CEO is a great commercial partner
David Hill,
CIO
County of Marin

David's offer is an exciting opportunity and I definitely encourage any municipal and county government officials interested in finding a cheap alternative to their financial management software to reach out to David Hill and at least explore this option (or if you know any local government officials, please forward this to them). I would love nothing more than to see some Kuali-style projects start to emerge at the local level.

Calling all Mozilla Contributors Past & Present

As some friends know, I've been working with Mozilla, helping them design an engagement audit – something to enable them to assess how effective they are at engaging and empowering the community. This work has a number of aspects, much of which builds on ideas I've blogged about here and spoken about in the last year or so (most recently at DjangoCon and the Drupal Pacific Northwest Summit).

The hardest thing of course, is getting feedback from volunteer contributors themselves. This group of talented people are dispersed and, unsurprisingly, busy. But they also have the best data about their experience and so capturing it, sharing it, and using it to provide recommendations to help Mozilla is essential.

In pursuit of that goal I've worked with a number of staff at Mozilla, and sought the advice of survey expert Peter Loewen, to create a Mozilla Volunteer Contributor Survey.

So…! If you are a Mozilla contributor, or have been in the past, we would be deeply indebted to you if you took the time to fill this out. We are trying to push the survey link into the various networks where we think contributors will see it, but anything you can do to let fellow Mozillians know about the survey would be great.

Really, really can’t thank anyone who takes this survey enough.

The Science of Community Management: DjangoCon Keynote

At OSCON this year, Jono Bacon argued that we are entering an era of renaissance in open source community management – that we no longer just have to share stories, but that repeatable, scientific approaches are increasingly available to us. In short, the art of community management is shifting to a science.

With an enormous debt to Jono, I contend we are already there. Indeed, the tools to enable a science of community management have existed for at least 5 years. All that is needed is an effort to implement them.

A few weeks ago the organizers of DjangoCon were kind enough to invite me to give the keynote at their conference in Portland and I made these ideas the centerpiece of my talk.

Embedded below is the result: a talk that starts slowly but grows in passion and engagement as it progresses. I really want to thank the audience for the excellent Q&A and for engaging with me and the ideas as much as they did. As someone from outside their community, I'm grateful.

My hope in the next few weeks is to write this talk up in a series of blog posts or something more significant, and, hopefully, to redo this video in slideshare (although I’m going to have to get my hands on the audio of this). I’ll also be giving a version of this talk at the Drupal Pacific Northwest Summit in a few weeks. Feedback, as always, is not only welcome, but gratefully received. None of this happens in a vacuum, it is always your insights that help me get better, smarter and more on target.

Big thanks to Diederik van Liere and Lauren Bacon for inspiration and help, as well as Mike Beltzner, Daniel Einspanjer, David Ascher and Dan Mosedale (among many others) at Mozilla, who've been supportive and a big help.

In the meantime, I hope this is enjoyable, challenging and spurs good thoughts.

Open Source Data Journalism – Happening now at Buzz Data

(there is a section on this topic focused on governments below)

A hint of how social data could change journalism

Anyone who's heard me speak in the last 6 months knows I'm excited about BuzzData. This week, while still in limited access beta, the site is showing hints of its potential – and it still has only a few hundred users.

First, what is BuzzData? It's a website that allows data to be easily uploaded and shared among any number of users. (For hackers – it's essentially GitHub for data, but more social.) It makes it easy for people to copy data sets, tinker with them, share the results back with the original master, and mash them up with other data sets, all while engaging with those who care about that data set.

So, what happened? Why is any of this interesting? And what does it have to do with journalism?

Exactly a month ago Svetlana Kovalyova of Reuters had her article – Food prices to remain high, UN warns – re-published in the Globe and Mail.  The piece essentially outlined that food commodities were getting cheaper because of local conditions in a number of regions.

Someone at the Globe and Mail decided to go a step further and upload the data – the annual food price indices from 1990 to present – onto the BuzzData site, presumably so they could play around with it. This is nothing complicated; it's a pretty basic chart. Nonetheless, a dozen or so users started "following" the dataset and, about 11 days ago, one of them, David Joerg, asked:

The article focused on short-term price movements, but what really blew me away is: 1) how the price of all these agricultural commodities has doubled since 2003 and 2) how sugar has more than TRIPLED since 2003. I have to ask, can anyone explain WHY these prices have gone up so much faster than other prices? Is it all about the price of oil?

He then did a simple visualization of the data.

[Chart: FoodPrices]

In response, someone from the Globe and Mail named Mason answered:

Hi David… did you create your viz based on the data I posted? I can’t answer your question but clearly your visualization brought it to the forefront. Thanks!

But of course, in a process that mirrors what often happens in the open source community, another "follower" of the data showed up and refined the work of the original commentator. In this case, one Alexander Smith noted:

I added some oil price data to this visualization. As you can see the lines for everything except sugar seem to move more or less with the oil. It would be interesting to do a little regression on this and see how close the actual correlation is.

The first thing to note is that Smith has added data, "mashing in" the price of oil per barrel. So now the data set has been made richer. In addition, his graph is quite nice, as it makes the correlation more visible than the graph by Joerg, which only referenced the Oil Price Index. It also becomes apparent, looking at this chart, how much of an outlier sugar really is.

[Chart: oilandfood]

Perhaps some regression is required, but Smith's graph is pretty compelling. What's more interesting is that the price of oil is not mentioned once in the article as a driver of food commodity prices. So maybe it's not relevant. But maybe it deserves more investigation – and a significantly better piece, one that would provide better information to the public, could be written in the future. In either case, this discussion, conducted by non-experts simply looking at the data, helped surface some interesting leads.
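For the curious, here is a minimal sketch of the regression Smith suggests, assuming the merged data has been exported as a CSV; the filename and column names ("food_price_index", "oil_price_usd") are placeholders, not BuzzData's actual export format:

```python
# Quick check of the food-price/oil-price relationship Smith eyeballed.
# "food_and_oil_prices.csv" and its column names are placeholders.
import numpy as np
import pandas as pd

df = pd.read_csv("food_and_oil_prices.csv")

# Pearson correlation between the two series
r = df["food_price_index"].corr(df["oil_price_usd"])
print(f"Correlation between food index and oil price: {r:.2f}")

# Simple least-squares fit: food index as a linear function of oil price
slope, intercept = np.polyfit(df["oil_price_usd"], df["food_price_index"], 1)
print(f"food_index ~= {slope:.2f} * oil_price + {intercept:.2f}")
```

The point is less the specific numbers than that any follower of the dataset could run this in a few minutes and post the result back into the discussion.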

And therein lies the power of social data.

With only a handful of users, a deeper, better analysis of the story has taken place. Why? Because people are able to access the data and look at it directly. If you're a follower of Julian Assange of WikiLeaks, you might call this scientific journalism. Maybe it is, maybe it isn't, but it certainly is a much more transparent way of doing analysis and a potential audience builder – imagine if 100s or 1000s of readers were engaged in the data underlying a story. What would that do to the story? What would that do to journalism? With BuzzData it also becomes less difficult to imagine data journalists who spend a significant amount of their time on BuzzData, working with a community of engaged pro-ams trying to find hidden meaning in the data they amass.

Obviously, this back and forth isn’t game changing. No smoking gun has been found. But I think it hints at a larger potential, one that it would be very interesting to see unlocked.

More than Journalism – I’m looking at you government

Of course, it isn't just media companies that should be paying attention. For years I've argued that governments – and especially politicians – interested in open data have an unhealthy appetite for applications. They like the idea of sexy apps on smart phones enabling citizens to do cool things. To be clear, I think apps are cool too. I hope that in cities and jurisdictions with open data we see more of them.

But open data isn’t just about apps. It’s about the analysis.

Imagine a city's budget up on BuzzData. Imagine the flow rates of the water or sewage system. Or the inventory of trees. Think of how a community of interested and engaged "followers" could supplement that data, analyze it, visualize it. Maybe they would be able to explain it to others better, find savings or potential problems, or develop new forms of risk assessment.

It would certainly make for an interesting discussion. If 100 or even just 5 new analyses were to emerge, maybe none of them would be helpful, or would provide any insights. But I have my doubts. I suspect it would enrich the public debate.

It could be that the analysis would become as sexy as the apps. And that’s an outcome that would warm this policy wonk’s soul.

Lessons for Open Source Communities: Making Bug Tracking More Efficient

This post is a discussion about making bug tracking in Bugzilla for the Mozilla project more efficient. However, I believe it is applicable to any open source project or even companies or governments running service desks (think 311).

Almost exactly a year ago I wrote a blog post titled Some thoughts on improving Bugzilla, in which I made several suggestions for improving the workflow in Bugzilla. Happily, a number of those ideas have been implemented.

One, however, remains outstanding and, I believe, creates an unnecessary amount of triage work as well as a terrible experience for end users. My understanding is that while the bug could not be resolved last year for a few reasons, there is growing interest (exemplified originally in the comment field of my original post) in tackling it once again. This is my attempt at a rallying cry to get that process moving.

For those who are already keen on this idea and don't want to read anything more below: this refers to bug 444302.

The Challenge: Dealing with Support Requests that Arrive in Bugzilla

I first had this idea last summer while talking to the triage team at the Mozilla Summit. These are the guys who look at the firehose of bugs being submitted to Mozilla every day. They have a finite amount of time, so anything we can do to automate their work is going to help them, and the project, out significantly.

Presently, I’m told that Mozilla gets a huge number of bugs submitted that are not actually bugs, but support issues. This creates several challenges.

First, it means that support related issues, as opposed to real problems with the software, are clogging up the bug tracking system. This increases the amount of noise in the system – making it harder for everyone to find the information they need.

Second, it means the triage team has to spend time filtering bugs that are actually support issues. Not a good use of their time.

Third, it means that users who have real support issues, but submit them accidentally through Bugzilla, get a terrible experience.

This last one is a real problem. If you are a user, feeling frustrated (and possibly not behaving as your usual rational self – we've all been there) because your software is not working the way you expect, and you then submit what a triage person considers a support issue (Resolved-Invalid), you get an email that looks like this:


If I'm already cheesed that my software isn't doing what I want, getting an email that says "Invalid" and "Verified" is really going to cheese me off. That, of course, presumes I even know what this email means. More likely, I'll be thinking that some ancient machine in the bowels of Mozilla, using software created in the late 1990s, received my plea and has, in its 640K confusion, spammed me. (I mean, look at it… from a user's perspective!)

The Proposal: Re-Automating the Process for a better result

Step 1: My sense is that this issue – especially problem #3 – could be resolved by simply creating a new resolution field. I’ve opted to call it “Support” but am happy to name it something else.

This feels like a simple fix and it would quickly move a lot of bugs that are cluttering up bugzilla… out.

Step 2: Query the text of bugs marked "Support" against Mozilla's support database (SUMO). Then insert the results in an email that goes back to the user. I'm imagining something that might look like this:

[Mockup: SUMO-transfer-v2 email]

Such an email has several advantages:

First, if these are users who’ve submitted inappropriate bugs and who really need support, giving them a bugzilla email isn’t going to help them, they aren’t even going to know how to read it.

Second, there is an opportunity to explain to them where they should go for help – I haven’t done that explicitly enough in this email – but you get the idea.

Third, because we've done a query of the Mozilla support database (SUMO), we are able to include some support articles that might resolve their issue.

Fourth, if this really is a bug from a more sophisticated user, we give them a hyperlink back to bugzilla so they can make a note or comment.

What I like about this is it is customized engagement at a low cost. More importantly, it helps unclutter things while also making us more responsive and creating a better experience for users.
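To make Step 2 concrete, here is a rough sketch of how the pieces might fit together. It assumes Bugzilla's REST API for fetching a bug's summary; the SUMO lookup is stubbed out as a placeholder (I don't know exactly what search endpoint it exposes), and the article it returns is invented purely for illustration:

```python
# Rough sketch of Step 2: take a bug resolved as "Support", pull its summary
# from Bugzilla, look up related help articles, and draft the friendlier email.
# The SUMO search is a stub and the article it returns is invented.
import requests

BUGZILLA_REST = "https://bugzilla.mozilla.org/rest/bug/{}"

def bug_summary(bug_id):
    # Bugzilla's REST API returns JSON of the form {"bugs": [{...}]}
    data = requests.get(BUGZILLA_REST.format(bug_id)).json()
    return data["bugs"][0]["summary"]

def search_sumo(query):
    # Placeholder: swap in whatever search endpoint SUMO actually exposes.
    # Should return a list of (title, url) pairs for matching articles.
    return [("Example support article", "https://support.mozilla.org/")]

def draft_email(bug_id, reporter):
    summary = bug_summary(bug_id)
    articles = search_sumo(summary)
    lines = [
        f"Hi {reporter},",
        "",
        "Thanks for your report. It looks more like a support question than a",
        "bug, so we've moved it over to our support community. These articles",
        "might help in the meantime:",
        "",
    ]
    lines += [f"  * {title}: {url}" for title, url in articles]
    lines += [
        "",
        "Still think it's a bug? You can comment here:",
        f"https://bugzilla.mozilla.org/show_bug.cgi?id={bug_id}",
    ]
    return "\n".join(lines)

print(draft_email(444302, "a frustrated user"))
```

The exact wording and plumbing matter less than the shape of the flow: a human-readable explanation, a pointer to the right support channel, and a path back into Bugzilla for the sophisticated users.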

Next Steps:

It's my understanding that this is all pretty doable. After last year's post there were several helpful comments, including this one from Bugzilla expert Gervase Markham:

The best way to implement this would be a field on SUMO where you paste a bug number, and it reaches out, downloads the Bugzilla information using the Bugzilla API, and creates a new SUMO entry using it. It then goes back and uses the API to automatically resolve the Bugzilla bug – either as SUPPORT, if we have that new resolution, or INVALID, or MOVED (which is a resolution Bugzilla has had in the past for bugs moved elsewhere), or something else.

The SUMO end could then send them a custom email, and it could include hyperlinks to appropriate articles if the SUMO engine thought there were any.

And Tyler Downer noted in this comment that there may be a dependency bug (#577561) that would also need resolving:

Gerv, I love you point 3. Exactly what I had in mind, have SUMO pull the relevant data from the bug report (we just need BMO to autodetect firefox version numbers, bug 577561 ;) and then it should have most of the required data. That would save the user so much time and remove a major time barrier. They think “I just filed a bug, now they want me to start a forum thread?” If it does it automatically, the user would be so much better served.

So, if there is interest in doing this, let me know. I'm happy to support any discussion, whether it takes place in the comment stream of the bug, in the comments below, or somewhere else that might be helpful (maybe I should dial in on this call?). Regardless, this feels like a quick win, one that would better serve Mozilla users, teach them (over time) to go to the right place for support, and improve the Bugzilla workflow. It might be worth implementing even for a bit, and we can assess any positive or negative feedback after 6 months.

Let me know how I can help.

Additional Resources

Bug 444302: Provide a means to migrate support issues that are misfiled as bugs over to the support.mozilla.com forums.

My previous post: Some thoughts on improving Bugzilla. The comments are worth checking out

Mozilla’s Bugzilla Wiki Page