Tag Archives: open source

MuniForge: Creating municipalities that work like the web

Last month I published the following article in the Municipal Information Systems Association's journal, Municipal Interface. The article was behind a paywall, so now that the month has gone by I'm posting it here. Basically, it makes the case that if governments applied open source licenses to the software they developed (or paid to develop), they could save hundreds of millions, or more likely billions, of dollars a year. I've since received a couple of emails from municipal IT professionals from across the country.

MuniForge: Creating Municipalities that Work like the Web

Introduction

This past May the City of Vancouver passed what is now referred to as "Open 3". This motion states that the City will use open standards for managing its information, treat open source and proprietary software equally during the procurement cycle, and apply open source licenses to software the city creates.

While a great deal of media attention has focused on the citizen engagement potential of open data, the implications of the second half of the motion – that relating to open source software – have gone relatively unnoticed. This is all the more surprising since last year the Mayor of Toronto also promised his city would apply an open source license to software it creates. This means that two of Canada's largest municipalities are set to apply open source licenses to software they create in house. Consequently, the source code and the software itself will be available for free under a license that permits users to use, change, improve and redistribute it in modified or unmodified forms.

If capitalized upon, these announcements could herald a revolution in how cities procure and develop software. Rather than thousands of small municipalities collectively spending billions of dollars to each reinvent the wheel, the open sourcing of municipal software could weave Canada's municipal IT departments into one giant network in which expertise and specialized talents drive up quality and security to the benefit of all, while simultaneously collapsing the costs of development and support. Most interestingly, while this shift would benefit larger cities, its impact could be most dramatic and positive among the country's smaller cities (those with populations under 200K). What is needed to make it happen is a central platform where the source code and documentation for software that cities wish to share can be uploaded and collaborated on. In short, Canada needs a SourceForge, or better, a GitHub for municipal software.

The cost

For the last two hundred years one feature has dominated the landscape for the majority of municipalities in Canada: isolation. In a country as vast and sparsely populated as ours, villages, towns and cities have often found themselves alone. For citizens the railway, the telegraph, then the highway and the telecommunications system eroded that isolation, but in the operations of cities it remains a dominant feature. Most Canadian municipalities are highly effective, but ultimately self-contained, islands. Municipal IT departments are no different. One municipality's IT department rarely talks to another's, particularly if they are not neighbours.

The result is that in many cities across Canada, IT solutions are developed in one of two ways.

The first is the procurement model. When the product is off the shelf, or easily customized, deployment can occur quickly; this, however, is rarely the case. More often, large software vendors and expensive consulting firms are needed to deploy such solutions, frequently putting them beyond the means of many smaller cities. Moreover, from an economic development perspective, the dollars spent on these deployments often flow out of the community to companies and consultants based elsewhere. On the flip side, local, smaller firms, if they exist at all, tend to be untested and frequently lack the expertise and competition necessary to provide a reliable and affordable product. Finally, regardless of the firm's size, most solutions are proprietary and so lock a city into the solution in perpetuity. This not only holds the city hostage to the supplier, it eliminates future competition and, worse, should the provider go out of business, it saddles the city with an unsupported system that will be painful and expensive to migrate away from.

The second option is to develop in-house. For smaller cities with limited IT departments this option can be challenging, but it is often still cheaper than hiring an external vendor. Here the challenge is that any solution is limited by the skills and talents of the city's IT staff. A small city, even with a gifted IT staff of 2-5 people, will be hard pressed to build and roll out all the IT infrastructure city staff and citizens need. Moreover, keeping pace with security concerns, new technologies and new services poses additional challenges.

In both cases the IT services a city can develop and support for staff and citizens are limited by either the skills and capacity of its team or the size of its procurement budget. In short, the collective purchasing power, development capacity and technical expertise of Canada's municipal IT departments are lost because we remain isolated from one another. With each city IT department acting like an island, the result is enormous constraint and waste. Software is frequently recreated hundreds of times over as each small city creates its own service or purchases its own license.

The opportunity

It need not be this way. Rather than a patchwork of isolated islands, Canada’s municipal IT departments could be a vast interconnected network.

If even two small communities in Canada applied an open source license to software they were producing, allowed anyone to download it and documented it well, the cost savings would be significant. Rather than two entities creating what is functionally the same piece of software, the cost would be shared. Once available, other cities could download the software and write patches allowing it to integrate with their own hardware and software infrastructure. These patches would also be open source, making it easier for still more cities to use the software. The more cities participate in identifying bugs, supplying patches and writing documentation, the lower the costs for everyone become. This is how Linus Torvalds started a community whose operating system – Linux – became world class. It is the same process by which Apache came to dominate web servers, and it is the same approach used by Mozilla to create Firefox, a web browser whose market share now rivals that of Internet Explorer. The opportunity to save municipalities millions, if not billions, in software licensing and development costs every year is real and tangible.

What would such a network look like and how hard would it be to create? I suspect that two pieces would need to be in place to begin growing a nascent network.

First, and foremost, there needs to be a handful of small projects. Often the most successful open source projects are those that start collaboratively. This way the processes and culture are, from the get-go, geared towards collaboration and sharing. This is also why smaller cities are the perfect place to start collaborating on open source projects. The world's large cities are happy to explore new models, but they are too rich, too big and too invested in their current systems to drive change. The big cities can afford Accenture. Small cities are not only more nimble, they have the most to gain. By working together and using open source they can provide a level of service comparable to that of the big cities at a fraction of the cost. An even simpler first step would be to ensure that when contractors sign on to create new software for a city, they agree that the final product will be available under an open source license.

Second, MISA, or another body, should create a SourceForge clone for hosting open sourced municipal software projects. SourceForge is an American open source software development website which provides services that help people build and share software with coders around the world. It presently hosts more than 230,000 software projects and has over 2 million registered users. SourceForge operates as a sort of marketplace for software initiatives, a place where one can locate software one is interested in and then both download it and/or become part of a community to improve it.

A SourceForge clone – say, MuniForge – would be a repository for software that municipalities across the country could download and use for free. It would also be the platform upon which collaboration around developing, patching and documenting would take place. MuniForge could also offer tips, tools and learning materials for those new to the open source space on how to effectively lead, participate and work within an open source community. That said, if MISA wanted to keep costs even lower, it wouldn't even need to create a SourceForge clone; it could simply use the actual SourceForge website and lobby the company to create a new "municipal" category.

And herein lies the second great opportunity of such a platform: it could completely restructure the government software business in Canada. At the moment Canadian municipalities must choose between competing proprietary systems that lock them into a specific vendor. Worse still, they must pay for both the software development and ongoing support. A MuniForge would allow for a new type of vendor modeled after Red Hat – the company that offers support to users who adopt its version of the free, open source Linux operating system. While vendors couldn't sell software found on MuniForge, they could offer support for it. Cities would then have the benefit of outsourced support without having to pay for the development of a custom, proprietary software system. Moreover, if they are not happy with their support they can always bring it in house, or ask a competing company to provide it. Since the software is open source, nothing prevents several companies from supporting the same piece of software – enhancing service, increasing competition and driving down prices.

There is another, final, global benefit to this approach to software development. Over time, a MuniForge could come to host all of the software necessary to run a modern-day municipality. This has dramatic implications for cities in the developing world. Thanks to rapid urbanization, many of today's towns and villages in Asia and Africa will be tomorrow's cities and megacities. With only a fraction of the resources, these cities will need to offer the services that are today commonplace in Canada. With MuniForge they could potentially download all the infrastructure they need for free – enabling precious resources to go towards other critical pieces of infrastructure such as sewers and drinking water. Moreover, a MuniForge would encourage small local IT support organizations to develop in those cities, providing jobs and fostering IT innovation where it is needed most. Better still, over time, patches and solutions would flow the other way, as more and more cities help improve the code base of projects found on MuniForge.

Conclusion

The internet has demonstrated that new, low-cost models of software development exist. Open source software development has shown how loosely connected networks of coders and users from across a country, or even around the world, can create world class software that rivals and even outperforms software created by the largest proprietary developers. This is the logic of the web: participation, better software and lower costs.

The question cities across Canada need to ask themselves is: do we want to remain isolated islands, or do we want to work like the web, collaborating to offer better services, more quickly and at a lower cost? If even some cities choose the latter, an infrastructure to enable collaboration can be put in place at virtually no cost, while the potential benefits and the opportunity to restructure the government software industry would be significant. Island or network – which do we want to be?

Making Open Source Communities (and Open Cities) More Efficient

My friend Diederik and I are starting to work more closely with some open source projects on how to help "open" communities (be they software projects or cities) become more efficient.

One of the claims of open source is that many eyes make all bugs shallow. However, this claim only holds if there is a mechanism for registering and tackling the bugs. If a thousand people point out a problem, one may find oneself overwhelmed with reports – some of which may be critical, some duplicates, and some not problems at all but mistakes, misunderstandings or feature requests. Indeed, in recent conversations with open source community leaders, one of the biggest challenges and time sinks in a project is sorting through bugs and identifying those that are both legitimate and "new." Cities, particularly those with 311 systems that act much like the bug tracking software in open source projects, have a similar challenge. They essentially have to ensure that each new complaint is both legitimate and genuinely "new," not a duplicate (e.g. are there two potholes at Broadway and 8th, or have two people called in to complain about the same pothole?).
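The pothole example can be sketched in code. Below is a minimal, hypothetical version of the de-duplication check a 311 system (or bug tracker) needs: treat a new report as a likely duplicate when an open report of the same category sits within a small radius. The record fields, function names and 25-metre threshold are all my own assumptions, not any real 311 system's API.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation of distance -- plenty accurate at city scale.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6371000  # Earth's radius in metres

def likely_duplicate(new_report, open_reports, radius_m=25):
    """Return open reports of the same category within radius_m metres
    of the new report -- candidates for 'same pothole, second caller'."""
    return [
        r for r in open_reports
        if r["category"] == new_report["category"]
        and distance_m(new_report["lat"], new_report["lon"],
                       r["lat"], r["lon"]) <= radius_m
    ]

# Two callers, one pothole: the second call lands ~13 m from the first report.
open_reports = [{"id": 1, "category": "pothole", "lat": 49.2634, "lon": -123.1450}]
new = {"category": "pothole", "lat": 49.2635, "lon": -123.1451}
print(likely_duplicate(new, open_reports))  # matches report id 1
```

A real system would also need human review of the candidates, but even this crude geographic screen would cut the volume a triage clerk has to look at.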

The other month Diederik published the graph below, which used bug submission data for Mozilla Firefox, tracked in Bugzilla, to demonstrate how, over time, bug submitters on average become more efficient (blue line). However, what is interesting is that despite the improved average quality, the variability in the efficacy of individual bug submitters remained high (red line). The graph makes it appear as though the variability increases as submitters become more experienced, but this is not the case. Towards the left there were simply many more bug submitters, and they averaged each other out, creating the illusion of less variability. As you move to the right, the number of bug submitters with these levels of experience is quite small – sometimes only 1-2 per data point – so the variability simply becomes more apparent.
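The smoothing effect on the left of the graph is just small-sample statistics: the standard error of a group's mean shrinks with the square root of the number of submitters in it. A toy illustration (the scores and group sizes are made up; only the relative sample sizes matter):

```python
import statistics

def std_error(per_submitter_scores):
    """Standard error of the group mean: stdev / sqrt(n).
    A point backed by many submitters looks smooth; one backed
    by a single submitter gives no stability estimate at all."""
    n = len(per_submitter_scores)
    if n < 2:
        return float("inf")
    return statistics.stdev(per_submitter_scores) / (n ** 0.5)

# A large, low-experience cohort vs. a tiny, high-experience one:
# the tiny cohort's mean swings far more from data point to data point.
novices  = [0.4, 0.6, 0.5, 0.7, 0.3, 0.5, 0.6, 0.4, 0.5, 0.7] * 10  # 100 submitters
veterans = [0.4, 0.8]                                               # 2 submitters

print(std_error(novices) < std_error(veterans))  # True
```

This is why the red line looks calmer on the left: the crowd averages itself out, not because novices are more consistent.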

Consequently, the group encircled by the purple oval is very experienced and yet continues to submit bugs the community ultimately chooses to either ignore or deem not worth fixing. Sorting through, testing and evaluating these bugs sucks up precious time and resources.

We are presently looking at more data to assess whether we can come up with a profile of what makes a bug submitter fall into this group (as opposed to being "average" or exceedingly effective). If one could screen for such bug submitters, a community might be able to better educate them and/or provide more effective tools, and thus improve their performance. In more radical cases – if the net cost of their participation was too great – one could even screen them out of the bug submission process. If one could improve the performance of this purple oval group by even 25%, there would be a significant improvement in the average (blue line). We look forward to talking and sharing more about this in the near future.

As a secondary point, I feel it is important to note that we are still in the early days of the open source development model. My sense is there are still improvements – largely through more effective community management – that can yield dramatic (as opposed to incremental) boosts in productivity for open source projects. This separates them again from proprietary models which, as far as I can tell, can at the moment at best hope for incremental improvements in productivity. Thus, for those evaluating the costs of open versus closed processes, it is worth considering that the two approaches may be (and, in my estimation, are) evolving at very different rates.

(If someone from a city government is reading this and you have data regarding 311 reports, we would be interested in analyzing it to see if similar results bear out – plus it may enable us to help you manage your call volume more effectively.)

Searching The Vancouver Public Library Catalog using Amazon

A few months ago I posted about a number of civic applications I'd love to see. These are computer, iPhone or BlackBerry applications, or websites, that leverage data and information shared by the government and would help make life in Vancouver a little nicer.

Recently I was interviewed on CBC's Spark about some of these ideas that have come to fruition thanks to the hard work and civic-mindedness of some local hackers. Mostly I talked about VanTrash (which sends emails or tweets to remind people of their upcoming garbage day), but during the interview I also mentioned that Steve Tannock created a script that allows you to search the Vancouver Public Library (VPL) catalog from the Amazon website.

First off – why would you want to use Amazon to search the VPL? Two reasons. First, it is WAY easier to find books on the Amazon site than on the library site, so you can leverage Amazon's search engine to find books (or book recommendations) at the VPL. Second, it's a great way to keep the book budget in check!

To use the Amazon website to search the VPL catalog you need to follow these instructions:

1. You need to be using the Firefox web browser. You can download and install it for free here. It’s my favourite browser and if you use it, I’m sure it will become yours too.

2. You will need to install the Greasemonkey add-on for Firefox. This is really easy to do as well! After you've installed Firefox, simply go here and click on install.

3. Finally, you need to download the VPL-Amazon search script from Steve Tannock’s blog here.

4. While you are at Steve’s blog, write something nice – maybe a thank you note!

5. Go to the Amazon website and search for a book. Under the book title will be a small piece of text letting you know if the VPL has the book in its catalog! (See example picture below) Update: I’m hearing from some users that the script works on the Amazon.ca site but not the Amazon.com site.

I hope this is helpful! And happy searching.

Also, for those who are more technically inclined feel free to improve on the script – fix any bugs (I’m not sure there are any) or make it better!

Amazon shot

Spark Interview on VanTrash – The Open Source Garbage Reminder Service

A couple of weeks ago I was interviewed by the CBC’s Nora Young for her show Spark:  a weekly audio blog of smart and unexpected trendwatching about the way technology affects our lives and world.

The interview (which was fun!) dives a little deeper into some of the cool things citizens – working to make their own lives better – can make happen (and how they can improve their community) when governments make their data freely available. The interview focuses mostly on VanTrash, the free garbage reminder service created by Luke Closs and Kevin Jones based on a blog post I wrote. It's been getting a lot of positive feedback and is helping make the lives of Vancouverites just a little less hectic.

You can read more about the episode here and listen to it on CBC radio at 1:05 local time in most parts of Canada and 4:05 on the west coast.

You can download a podcast of the Spark episode here or listen to it on the web here.

If you live in Vancouver, check out VanTrash.ca and sign up! (Or sign your parents or neighbour up!) Never forget to take the garbage out again. It works a whole lot better than the approach my friend's mom uses:

Van trash reminder

Emergent Systems in Government: Let's put the horse before the cart

Yesterday Paul McDowall, Knowledge Management Advisor at the Canada School of Public Service and chairperson of the Interdepartmental Knowledge Management Forum, wrote the following comment in response to a blog post from several months ago entitled "How GCPEDIA will save the public service."

I've posted his comment below – feel free to read it, or skip it and go straight to my analysis. In summary, what makes McDowall's comment interesting isn't just the argument (or its reactionary nature) but the underlying perspective and assumptions that drive it. It serves as a wonderful example of the tension between the traditional hierarchical nature of the public service and some evolving emergent models that challenge this approach.

So first, McDowall:

Will GCPEDIA save the public service, or capture all the tacit knowledge that will walk out the door? No, of course not! To suggest otherwise is, frankly, naive hyperbole.

As great and as promising as GCPEDIA and other Web 2.0 tools are, tools will never save the public service. People are the public service and only people have the capacity to save the public service, and it will take a whole lot more to improve the weak areas of the public service than a tool. Things like leadership play a pretty important role in organizational effectiveness. There are many good Organizational Excellence models (I have researched this area) and they all include people and leadership as two elements, but funny enough, tools aren’t included. Why? Because it is not so much a tool issue as it is a craftsman issue.

With respect to your comment about tacit knowledge and social capital (not the same things by the way), I think it may be helpful to brush up on what tacit knowledge is, and what Knowledge Management is.

It is unquestionably true that the public service continues to face a potential impact from demographic changes that are both extremely significant and yet unquantified. It is also unquestionably true that most public service organizations haven't truly understood or addressed these potential impacts, to say nothing of the potential of improving their effectiveness right NOW from better Knowledge Management (productivity, innovation, etc).

These issues need to be addressed by public service leaders in an intelligent and thoughtful manner. Tools can and certainly should help but only when wielded by craftsmen and women. For too long vendors have made grandiose and unrealizable promises about their ‘solutions’. I thought we had learned our lessons from all that experience.
Let’s not get the cart before the horse, shall we?

Paul McDowall
Knowledge Management Advisor and chairperson of the Interdepartmental Knowledge Management Forum

McDowall's main concern appears to be that GCPEDIA doesn't have a clear purpose and, more importantly, doesn't serve a specific leadership objective. (If you are wondering how I gleaned that from the above – well, I cheated; I called McDowall to ask him more about his comment, since the nature of his concern wasn't clear to me.) For those used to an era where IT projects were planned from the beginning, everything was figured out in advance, and the needs of the leadership were the paramount priority, GCPEDIA would be disconcerting. Indeed, the very idea of unleashing people willy-nilly on a system would be anathema. In short, when McDowall says don't put the cart before the horse, what he's saying is, "you've rolled out a tool, and you don't even know what you are going to use it for!"

This would appear to be a rational concern. Except many of the rules that underlie this type of thinking are disappearing. Indeed, had this type of thinking been adhered to, the web would not have developed.

First, the economics have changed. There was a time when IT projects necessarily cost tens of millions of dollars. But GCPEDIA was built on a (free) open source platform using a handful of internal FTEs (making McDowall's comments about vendors even more confusing). Indeed, GCPEDIA has cost the public service virtually nothing to create. One invests in planning so as to avoid expensive or ineffective deployments. But if the costs of deployment are virtually zero and failure really isn't that traumatic, then… why waste time and years planning? Release, test, and adapt (or kill the project).

Second, as projects like this become cheap to deploy, another important shift takes place. Users – not their bosses or a distant IT overlord – decide a) whether they want to participate and b) what is useful, which they then co-develop. This has powerful implications. It means that you had better serve a real (not perceived or mandated) need, and that, if successful, you'd better be prepared to evolve quickly. This, interestingly, is how that useful little tool called the World Wide Web evolved. Read the original proposal to create the World Wide Web. The IT departments of the world didn't all collectively and suddenly decide that people should be made to use the web. No! It grew organically, responding to demand. Moreover, there is very little in that proposal that describes how we use the web today; users of the web (us!) have helped it evolve so that it serves us more effectively.

This is probably the biggest disconnect between McDowall and myself. He believes GCPEDIA is problematic (or at least won't do the things I think it will do) because it doesn't serve the leadership. I think it will work because it does something much better: it serves actual users – public servants (and thus, contrary to his argument, is very much about people). This includes, critically, capturing tacit knowledge and converting it into formal – HTML-encoded – knowledge that helps build social capital (I do, actually, know the difference between the two).

Indeed, the last thing we need is a more leadership-oriented public service; what we need is an employee-centric public service – one that enables those who are actually doing the work to communicate, collaborate and work more effectively. In this regard, I think GCPEDIA is demonstrating that it is effective (although these are still very early days), with rapid growth, 8,000+ users and 200 more signing up every week (all with virtually no promotional budget). Clearly some public servants are finding it to be at worst interesting, and at best deeply enabling.

How bloggers can keep the internet healthy

I'm continuously trying to brainstorm ways that Mozilla can find the next million Mozillians, and to figure out activities they can do. I think I've stumbled onto a new one, but I would need some help to make it happen.

As some of you may know, as part of Mozilla Service Week, Mozillians around the world donated their time and performed numerous Internet Health Checks. The goal is to help people migrate off of Internet Explorer 6 (which is vulnerable to attacks and therefore makes the web less safe).

In my case, I'd already helped move pretty much everyone I know who uses IE6 to something newer and better. What I need is a way to help prod people I don't know.

What I thought might be interesting is if someone could build a Blogger and WordPress plugin. This plugin would ascertain what browser a visitor to your blog is using and, if they are using IE6, the blog widget would let the reader know that the author of the blog strongly encourages them to upgrade to IE8, Firefox, Safari or really anything newer and safer. With this (hopefully relatively simple) plugin, Mozilla could engage the blogging community, enable people to advocate for a safer internet and, most importantly, encourage still more IE6 users to move to something newer and safer.
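At its core the widget is just a User-Agent check. Here is a minimal sketch of that logic (a real plugin would run it client-side in JavaScript or in the blog platform's template; the function name and the Opera caveat are my own): IE6 identifies itself with the token "MSIE 6" in its User-Agent string, though some other browsers historically spoofed that token, so a careful check excludes them.

```python
import re

# IE6 announces itself as "MSIE 6.x" in the User-Agent header.
# (Older versions of Opera spoofed this token, so exclude UAs mentioning Opera.)
IE6_RE = re.compile(r"MSIE 6\.\d")

def needs_upgrade_banner(user_agent):
    """Return True when the visitor appears to be running IE6,
    i.e. when the blog widget should show the 'please upgrade' note."""
    return bool(IE6_RE.search(user_agent)) and "Opera" not in user_agent

ie6 = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
ff = "Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/2009 Firefox/3.5"
print(needs_upgrade_banner(ie6), needs_upgrade_banner(ff))  # True False
```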

Yes, it isn't the be-all and end-all, but it's another small idea that allows another group of people to contribute in an easy, but tangible and important, way. That said, if this widget can't be created, or if there is something easier or better that can be done, please send me an email or comment below!

More ways to make open data sexy: 5 Municipal Apps I'd love to see (what are yours?)

One of the big goals of the open data project is to get many citizens interested in the different ways the data can be used. Many citizens lack the skills to code an application, and creating a website is intimidating, but they may have ideas that could improve the city or be useful to their fellow citizens.

In the hopes of spurring more interest in the open data – and getting those not traditionally involved, well… involved – I've created an "Ideas for the Taking" page on the Vancouver Open Data wiki. I've seeded the page with some of the ideas I promised I would share at the Open Data Hackathon last week. Some use open data, others don't. Mostly, however, I hope they spur others to think about what is possible and what interests them. (P.S. If you are a reader and the wiki is too confusing, just email me your idea and I'll add it to the wiki with (or without, if you prefer) your name attached.)

So here are some ideas I’ve brainstormed:

1. Stolen Bike Tracker

Vancouver's cycling community is huge; sadly, however, the city is plagued by a serious problem: stolen bicycles. There is no cure for this problem, but I think a well crafted app could help minimize the nuisance. I can imagine an app or website in which you take a photo of your bike and upload it along with some identifying information (like the serial number). The picture stays hidden; however, if your bike tragically gets stolen, you load up the app and press the "my bike was stolen" button. This marks the physical place where your bike was stolen, activates your bike photo and flags the bike as stolen. Now cyclists, bike shop owners and the police can check bikes to see if they are stolen before buying them (or return them to their owners if they are recovered). In addition, a street map of bike theft would be created. This could be particularly valuable since I suspect a great deal of bike theft goes unreported. Finally, for those worried about privacy, I could imagine the app using a Craigslist-style contact system that would preserve the anonymity of the original owner.

2. A Downtown East Side Landlord wiki

There are a few data sets that might allow someone to create a geo-wiki of the DTES. I think it would be interesting to have a wiki that – on a building-by-building level – outlines who owns each residential building, what they charge in rent, a list of the room amenities and comments about the property's management. It might also be interesting to enable photos to be posted so people can show the living conditions. Such a wiki might give the public (and prospective renters) a window into the deplorable conditions and poor practices of the worst offenders. It might also help city staff deploy resources for investigating code violations and other questionable practices.

3. Everyblock+

Obviously, I think an EveryBlock app for Vancouver would be great. The one new layer I'd love to see added is a charity button. With this button you would see which charities are operating on the block or in the area where you are standing. Harder to imagine realizing, but cooler still, would be a button that would then allow you to donate to that charity.

4. Burrard Bridge Trial Website

While not located on the Open Data Portal, the city has been releasing weekly data sets on vehicle, pedestrian and cycle trips across the Burrard Bridge on the Burrard Trial blog. The data is not particularly well organized (you'd have to scrape it, and it's only granular to the 24-hour time block – so no hour-by-hour data sets), but it is a start. It would be fascinating to have a site that does a deeper analysis of the data and maybe presents it in a more interesting format. Maybe a discussion of carbon emissions reduced… still more interesting would be an analysis of bicycle accidents now versus before the trial (data that is, sadly, not readily available).

5. City Services vs. Land Value Mashup

It would be interesting to see what impact city services have on land values. I'm not sure if land value data is available (anyone know?), but mashing it up against the locations of parks, community centres, schools, firehalls and other city amenities would be interesting. While potentially interesting to prospective homeowners (maybe a real estate agency should develop – or pay to develop – this app), I think it might also be of interest to the electorate and politicians.

One last one: A Library-Amazon Greasemonkey script

A Library-Amazon Greasemonkey search script allows a user to see whether a book being displayed on the Amazon.ca website is available at the Vancouver Public Library. This has two benefits. First, it is WAY easier to find books on the Amazon site than on the library site, so you can leverage Amazon's search engine to find books (or book recommendations) at the VPL. Second, it's a great way to keep the book budget in check!

The Vancouver Public Library has said that it will share access to its database to allow such an app to work (I can get the contact info for the right person if someone nudges me). Better still, the necessary Greasemonkey script is already available – scripts exist for Palo Alto, Seattle and Ottawa – so it would be great if someone tweaked one of them to work with the VPL.
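The core of such a script is small. A rough sketch of the approach – note that the VPL search URL below is a placeholder I made up, since the real catalogue endpoint would come from the library's contact:

```javascript
// Sketch of a Greasemonkey-style user script for an Amazon.ca → VPL lookup.
// The VPL search URL below is a placeholder, not the library's real endpoint.

// Amazon book URLs embed the item ID (for books, usually the ISBN-10)
// after "/dp/" or "/gp/product/".
function extractIsbn(url) {
  const match = url.match(/\/(?:dp|gp\/product)\/(\d{9}[\dX])/);
  return match ? match[1] : null;
}

// Build a hypothetical catalogue search URL from the ISBN.
function vplSearchUrl(isbn) {
  return "https://vpl.example.org/search?isbn=" + encodeURIComponent(isbn);
}

// In the actual user script, this would run on page load and inject a
// "Check the VPL" link next to the buy box, e.g.:
//   const isbn = extractIsbn(window.location.href);
//   if (isbn) { /* createElement("a"), set href to vplSearchUrl(isbn) */ }
```

This is essentially what the Palo Alto, Seattle and Ottawa scripts already do, which is why adapting one should be a quick job.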

Of course, I’m hoping that others are already hatching plans about how they’d like to use the city’s data to create something they feel passionate about. And remember, if there is an app you’d like to create but the data set isn’t available – take the Open Data survey to let your voice be heard! If any of these ideas interest you, go for it. If I can help in any way, let me know, I’m keen to contribute.

Many eyes make all bugs shallow – especially when the eyes get smarter


My friend Diederik has been writing a number of cool posts over at his blog Network-labs.org. For example, he has an awesome Jetpack plugin that predicts the likelihood a bug will get fixed, and a plugin that allows users to rate how badly a bug is being flamed (to spot counterproductive behaviours).

But he recently published a very cool post that uses data from Mozilla bug submissions over the past decade to demonstrate that bug submitters become more proficient over time, though there are outliers who are a “drag” on the system. More importantly, I believe his data can be interpreted another way: with some minor investment (particularly some simpler vetting screens prior to reaching Bugzilla), bug submitters could learn faster.

For example, a landing screen that asks you if you’ve ever submitted a bug before might take newbies to a different page where the bugzilla process is explained in greater detail, the fact that this is not a support site is outlined, and some models of good “submissions” are shared (along with some words of encouragement). By segmenting newbies we might ease the work burden on those who have to vet the bugs.

I think this also has implications for cities and 311 systems (systems that essentially allow citizens to report problems – or bugs – with city infrastructure or services). There will inevitably be people who become heavy users of 311 – the goal is not to get frustrated with them, but to move them up a learning curve of submitting helpful and useful information as quickly as possible. Ignoring them will not work. Strategies for engagement, including understanding what is motivating them and why they continue to use the system in an ineffective way, could help save the city resources and help foster a larger army of citizens who use their spare social capital to make the city a better and more resilient space.

SXSWi Panel: Fostering Collaborative Open Source Communities

Yesterday I saw an academic journal article that reminded me how an individual's behaviour can negatively impact a group's productivity. In “Overlooked but not untouched: How incivility reduces onlookers' performance on routine and creative tasks,” published in Organizational Behavior and Human Decision Processes (109: 29-44), Amir Erez describes how even just witnessing rudeness diminished creativity, increased one's own negative behaviour, and damaged productivity and short-term memory.

This is a perfect example of why I believe we need open source communities to foster collaborative cultures that nudge people to engage in positive and constructive ways.

In pursuit of talking about this more, I’ve put together a presentation proposal for SXSWi in which I’d like to build on my FSOSS presentation (which has logged over 15000 views since going up on SlideShare.net two years ago) on how the skills and tools from the field of negotiation and collaboration can help improve community management and productivity in open source communities. If this sounds at all interesting to you, I’m hoping that you’ll consider going to the SXSWi Panel Picker website and voting for this panel.

Since FSOSS 2008 I’ve done more research and work in the area and so have more examples to share from the open source space. In addition, I’ve been working with Diederik Van Liere at the University of Toronto’s business school to get data on how behaviour impacts an open source community’s effectiveness.

Title:

Fostering Collaborative Open Source Communities

Description:

Community management is a core competency of open source. So what skills, tools and culture can facilitate and enable collaboration? Drawing from negotiation theory David shares what open source project participants can do to foster sustainable and effective collaborative communities where conflict is productive and not soul-sucking or time consuming.

Questions Answered:

  1. What skills does an open source project leader need?
  2. How do I minimize destructive conversations?
  3. How can I encourage participation in my open source project?
  4. How do I enable members of my open source community to work together better?
  5. What is negotiation theory?
  6. Someone is being a troll in my discussion group. What do I do?
  7. How can I attract more users to my open source project?
  8. How can I make my open source project contributors more effective?
  9. I don’t like arguing with people, what should I do?
  10. I think I may be abrasive, what should I do?

Category:

Community / Online Community, Open Source, Self-Help /Self-Improvement, User Experience

How to Engage Citizens on a Municipal Website…

Sometimes it’s nice to be small. The City of Nanaimo has been pushing the envelope on open data and open government for a number of years now.

Recently, I was directed to their new Council Agendas and Minutes webpage. I recommend you check it out.

Here’s why.

At first blush the site seems normal. There is the standard video of the council meeting (cue cheesy local cable access public service announcement), but the meeting minutes underneath are actually broken down to the second, and by clicking on an item you can jump straight to that moment in the meeting.

As anyone who’s ever attended a City Council meeting (or the legislature, or parliament) knows, the 80/20 rule is basically always in effect: about 80% of the time the proceedings are dead boring, and about 20% (often much less) of the time they are exciting or, more importantly, pertinent to you. One challenge with getting citizens engaged at the local level is that they often encounter a signal-to-noise problem: the “noise” (issues a given citizen doesn’t care about) drowns out the “signal” (the relatively fewer issues they do care about).

The City of Nanaimo’s website helps address this problem. It enables citizens to find what matters to them without having to watch or scroll through a long and dry council meeting. Better still, they are given a number of options by which to share that relevant moment with friends, neighbours, allies or colleagues via Twitter, Facebook, Delicious or any number of other social media tools.
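The mechanics behind this need not be exotic. Here is a minimal sketch of how timestamped minutes could drive a video player and generate shareable links – the data attributes and player wiring are my own guesses, not Nanaimo's actual implementation:

```javascript
// Illustrative sketch of timestamp-linked council minutes; the markup
// and player details are assumptions, not the City of Nanaimo's code.

// Convert an "hh:mm:ss" offset from the minutes into seconds.
function toSeconds(stamp) {
  const [h, m, s] = stamp.split(":").map(Number);
  return h * 3600 + m * 60 + s;
}

// A shareable deep link is just the meeting page URL plus the offset.
function shareLink(pageUrl, stamp) {
  return pageUrl + "#t=" + toSeconds(stamp);
}

// In the browser, clicking a minute item would seek an HTML5 video:
//   document.querySelectorAll(".minute-item").forEach((item) => {
//     item.addEventListener("click", () => {
//       video.currentTime = toSeconds(item.dataset.time);
//       video.play();
//     });
//   });
```

Nothing here requires specialized broadcast hardware – just a timestamp recorded against each agenda item, which is likely why a two-person team could build it so cheaply.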

One might be wondering: can my city afford such a whiz-bang setup?

Excellent question.

Nanaimo’s modest size (it has 78,692 citizens) suggests a modest IT budget. So I asked Chris McLuckie, a City of Nanaimo public servant who worked on the project. He informed me that the system was built in-house by him and another city staff member; it uses off-the-shelf hardware and software, cost under $2,000, and took two weeks to code up.

Two weeks?

No million dollar contract? No 8 month timeline? No expensive new software?

No. Instead, if you’re smart, you might find a couple of local creative citizen-hackers to put something together in no time at all.

What’s more, because Chris and the City of Nanaimo want to help more cities learn how to think like the web, I bet that if the IT director of any city (or legislative body) asked nicely, they would just give them the code.

So how open is your city? And if it isn’t, does it have $2,000 lying around to change that?