Category Archives: open source

Open Source Strategy: OpenMRS case study

Last week I had the pleasure of being invited to Indianapolis to give a talk at the Regenstrief Institute – an informatics and healthcare research organization – which also happens to host the founders of OpenMRS.

For those not familiar with OpenMRS (which I assume to be most of you), it is an open-source, enterprise electronic medical record system platform specifically designed for those actively building and managing health systems in the developing world. It’s a project I’m endlessly excited about, not only because of its potential to improve healthcare in developing and emerging markets, but also because of its longer-term potential to serve as a disruptive innovator in developed markets.

Having spent a few days at Regenstrief hanging out with the OpenMRS team, here are some takeaways regarding where they are, where – in my mind – they should consider heading, and some of the key areas they could focus on to get there.

Current State: Exiting Stealth Mode

Paul Biondich and Andrew Arenson pointed me to this article about Community Source vs. Open Source, which has an interesting passage on open source projects that operate in “stealth mode”:

It is possible to find models similar to community source within the open source environment. For example, some projects opt to launch in a so called ‘stealth mode’, that is, they operate as a truly open source development from inception, but restrict marketing information about the project during the early stages. This has the effect of permitting access to anyone who cares enough to discover the project, whilst at the same time allowing the initiating members to maintain a focus on early strategic objectives, rather than community development.

OpenMRS has grown in leaps and bounds and – I would argue – has managed to stay in stealth mode (even with the help of their friends at Google Summer of Code). But for OpenMRS to succeed it must exit stealth mode (a transition that has already been steadily gathering steam). By being more public it can attract more resources, access a broader development community and roll out more implementations for patients around the world. But to succeed I suspect that a few things need to be in place.

Core Opportunities/Challenges:

1. Develop OpenMRS as a platform to push people towards cooperating (as opposed to requiring collaboration) whenever possible.

One of the smartest things Firefox did was create add-ons. The innovation of add-ons delivered two benefits. First, it allowed those working on the trunk of the Firefox code to continue their work without being distracted by as many feature requests from developers who had an idea they wanted to try out. Second, it increased the number of people who could take an interest in Firefox, since now you could code up your own add-on cooperatively but independently of the rest of the project.

With OpenMRS my sense is that the entire UI is a platform that others should be able to develop on or build add-ons for. Indeed, the longer-term business model that makes significant sense to me is to treat OpenMRS like WordPress or Drupal. The underlying code is managed by a core open source community, but installation, customization, skinning, widgets, etc. are done by a mix of actors, from amateurs, to independent hackers and freelancers, to larger design/dev organizations. The local business opportunities to support OpenMRS and, in short, create an IT industry in the developing world, are enormous.
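To make the add-on idea concrete, here is a minimal sketch of what a registry-style plug-in seam can look like. This is purely illustrative Python; OpenMRS is a Java system, and none of these names (REGISTRY, register, VitalsWidget) are real OpenMRS APIs. The point is only that the trunk exposes one stable hook and never needs to know what the community builds on top of it.

```python
# Hypothetical add-on seam, for illustration only (not an OpenMRS API).
REGISTRY = {}

def register(addon_cls):
    """Decorator the platform exposes: trunk code discovers add-ons here,
    so UI experiments never require touching core code."""
    REGISTRY[addon_cls.name] = addon_cls()
    return addon_cls

@register
class VitalsWidget:
    """An add-on a third party might ship independently of the trunk."""
    name = "vitals-widget"

    def render(self, record):
        return f"BP: {record.get('bp', 'n/a')}"

# The core's only job: iterate over whatever the community has installed.
for addon in REGISTRY.values():
    print(addon.render({"bp": "120/80"}))
```

The design choice that matters is the direction of the dependency: add-ons depend on the platform’s hook, never the reverse.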

2. Structural Change(s)

One consequence of treating OpenMRS as a platform is that the project needs to be very clear about what is “core” versus what is platform. My sense is that members of the Mozilla team do not spend a lot of time hacking on add-ons (unless an add-on has proven so instrumental that it is brought into the trunk). Looking at WordPress, the standard install theme is about as basic as one could expect. It would seem no one at WordPress is wasting cycles developing nice themes to roll out with the software. There is a separate (thriving) community that can do that.

As a result, my sense is that OpenMRS should ensure that its front-end developers slowly begin to operate as a separate entity. One reason for this is that if they are too close to the trunk developers they may inadvertently prevent the emergence of a community that specializes in installing and customizing OpenMRS. More importantly, those working on the platform and those working on the trunk may have different interests, so allowing that tension to emerge, and learning how to manage it in the open, will be healthy for the long-term viability of the project as more and more people do front-end work and share their concerns with trunk developers.

3. Stay Flexible by Engaging in Community Management/Engagement

One of the challenges that quickly emerges when one turns a software product into an innovation platform is that the interests of those working on the product and those developing on the platform can quickly diverge. One often hears rumblings from the Drupal community about how Drupal core developers often appear more interested in writing interesting/beautiful code than in making Drupal easier to use for businesses (core vs. platform!). Likewise, Firefox and Thunderbird hear similar rumblings from add-on developers who worry about how new platforms (Jetpack) might make old platforms (add-ons) obsolete. In a sense, people who build on platforms are inherently conservative. Change, even (or sometimes especially!) change that lowers barriers to entry, means more work for them. They have to ensure that whatever they’ve built on top of the platform doesn’t break when the platform changes. Conversely, trunk developers can become enamored with change for change’s sake – including features that offer marginal benefits but that disrupt huge ecosystems.

In my mind, managing these types of tensions is essential for an open source project – particularly one involving medical records. Trunk developers will need A-level facilitation and engagement skills: they must be capable of listening to platform developers and others, not be reactive or defensive, understand interests, and work hard to mediate disputes – even disputes they are involved in. These interpersonal skills will be the grease that ensures the OpenMRS machine can keep innovating while understanding and serving the developer community building on top of it. The OpenMRS leadership will also have to take a strong lead in this area – setting expectations around how, and how quickly, OpenMRS will evolve so that the developer ecosystem can plan accordingly. Clear expectations will do wonders for reducing tensions between disparate stakeholders.

4. Set the Culture in Place Now

Given that OpenMRS is still emerging from stealth mode, now is the time to imprint the culture with the DNA it will need to survive the coming growth. A clear social contract for participation, a code of community conduct and a clearer mission statement that can be referenced during decisions will all be essential. I’m of course also interested in the tools we can roll out to help manage the community. Porting over to Trac the next generation of Diederik’s bug-fix predictor, along with his flame monitor, are ways to give the community the influence to check poor behaviour and nudge people towards making better choices in resolving disputes.

5. Create and Share Data to Foster Markets

Finally, I think there is an enormous opportunity for an IT industry – primarily located in the developing world – to emerge and support OpenMRS. My sense is that OpenMRS should do everything it can to encourage and foster such an industry.

Some ideas for doing this have been inspired by my work around open data. I think it is critical that OpenMRS start asking implementations to ping them once complete – and again whenever an upgrade is complete. This type of market data – anonymized – could help demonstrate the demand for services that already exists, as well as its rate of growth. Developers in underserved countries might realize there are market niches to be filled. In addition, I suspect that all of the OpenMRS implementations that have been completed without our knowledge represent a huge wealth of information. These are people who managed to install OpenMRS with no support and possibly on the cheap. Understanding how they did it and who was involved could yield important best practices as well as introduce us to prospective community members with a “can do” spirit and serious skills. I won’t dive into too much detail here, but needless to say, I think anonymized but aggregated data provided for free by OpenMRS could spur further innovation and market growth.
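As a sketch of how lightweight the “ping” could be, consider something like the following. The endpoint URL and payload fields are entirely hypothetical (OpenMRS has no such registry that I am aware of); the key property is that only coarse, anonymized deployment facts are sent, never patient data.

```python
import json
import urllib.request

def ping_registry(event, version, country):
    """Report an anonymized install/upgrade event to a central registry."""
    payload = json.dumps({
        "event": event,        # "install" or "upgrade"
        "version": version,    # e.g. "1.6"
        "country": country,    # coarse location only; never patient data
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://stats.example.org/openmrs/ping",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire and forget

# Called once at the end of an install or upgrade script:
ping_registry("install", "1.6", "KE")
```

Aggregated server-side, even a payload this small would be enough to chart installs over time and by region.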

Postscript

I’m sure there is lots to debate in the above text – I may have made some faulty assumptions along the way – so this should not be read as final or authoritative, but rather as a contribution to an ongoing discussion at OpenMRS. Mostly I’m excited about where things are and where they are going, and about the tremendous potential of OpenMRS.

Open Government – New Book from O'Reilly Media

I’m very excited to share that I have a chapter in the new O’Reilly Media book Open Government (US Link & CDN Link). I’ve been told that the book has just come back from the printers and can now be ordered.

Also exciting is that a sample of the book that includes the first 8 chapters can be downloaded as a PDF for free.

The book includes several authors I’m excited to be in the company of, including Tim O’Reilly, Carl Malamud, Ellen Miller, Micah Sifry, Archon Fung and David Weil. My chapter – number 12 – is titled “After the Collapse,” a reference to the Coasean collapse Shirky talks about in Here Comes Everybody. It explores what is beginning to happen (and what is to come) to government and civil services when transaction and coordination costs for doing work drop dramatically. I’ve packed a lot into it, so it is pretty rich with my thinking, and I’m pleased with the result.

If you care about the future of government as well as the radical and amazing possibilities being opened up by new technologies, processes and thinking, then I hope you’ll pick up a copy. I’m not getting paid for it; instead, a majority of the royalties go to the non-profit Global Integrity.

Also, the O’Reilly people are trying to work out a discount for government employees. We would all like the ideas and thinking in this book to travel far and wide around the globe.

Finally, I’d like to give a big thank you to the editors Laurel Ruma and Daniel Lathrop, along with Sarah Schacht of Knowledge as Power, who made it possible for me to contribute.

More Open Data Apps hit Vancouver

Since the launch of Vancouver’s open data portal, a lot of the talk has focused on independent programmers or small groups of them hacking together free applications for citizens to use. Obviously I’ve talked a lot about (and have been involved in) VanTrash, and I have been a big fan of the Amazon.ca/Vancouver Public Library Greasemonkey script created by Steve Tannock.

But independent hackers aren’t the only ones who’ve been interested. Shortly after the launch of the city’s Open Data Portal, Microsoft launched an Open Data App Competition for developers at the Microsoft Canadian Development Centre just outside Vancouver in Richmond, British Columbia. On Wednesday I had the pleasure of being invited to the complex to eat free pizza and, better still, serve as a guest judge during the final presentations.

So here are 5 more applications that have been developed using the city’s open data. (Some are still being tweaked and refined, but the goal is to have them looking shiny and ready by the Olympics.)

Gold

MoBuddy by Thomas Wei: Possibly the most ambitious of the projects, MoBuddy enables you to connect with friends and visitors during the Olympics to plan and share experiences through mobile social networking, including Facebook.

Silver

Vancouver Parking by Igor Babichev: Probably the most immediately useful app for Vancouverites, Vancouver Parking helps you plan your trip by using your computer in advance to find parking spots and identify time restrictions, parking duration and costs… It even knows which spots won’t be available during the Olympics. After the Olympics are over, it will be interesting to see if other hackers want to help advance this app. I think a mobile or text-message-enabled version might be interesting.

Bronze (tie)

Free Finders by Avi Brenner: Another app that could be quite useful to Vancouver residents and visiting tourists, Free Finders uses your Facebook connection to find free events and services across the city. Lots of potential here for some local newspapers to pick up this app and run with it.

eVanTivitY by Johannes Stockmann: A great play on creativity and Vancouver, eVanTivitY enables you to find City and social events and add in user-defined data feeds. Once the Olympics are over I’ve got some serious ideas about how this app could help Vancouver’s Arts & Cultural sector.

Honourable Mention

MapWay by Xinyang Qiu: Offers a way to find City of Vancouver facilities and Olympic events in Bing Maps as well as create a series of customized maps that combine city data with your own.

More interestingly, in addition to being available to use, each of these applications can be downloaded, hacked on, remixed and tinkered with under an open source license (the GPL, I believe) once the Olympics are over. The source code will be available on Microsoft’s CodePlex.

In short, it is great to see a large company like Microsoft take an active interest in Vancouver’s open data and try to find ways to give back to the community – particularly using an open source license. I’d also like to give a shout out to Mark Gayler (especially) as well as Dennis Pilarinos and Barbara Berg for making the competition possible and, of course, to all the coders at the Development Centre who volunteered their time and energy to create these apps. These are interesting times for a company like Microsoft, so I’d also like to give a shout out to David Crow, who’s been working hard to get important people inside the organization comfortable with the idea of open source and open to experimenting with it.

Vancouver Open Data Version 2: New Apps to create

Wow, wow, wow.

The City of Vancouver has just launched version 2 of its open data portal. A number of new data sets have been added to the site, which is very exciting. Better still, previously released data sets are now available in new formats.

Given that at 5pm tomorrow (Tuesday, Jan 26th) there will be the third Open Data Hackathon at the city archives, to which anyone is invited, I thought I’d share the 5 new open data apps I’d love to see:

1. Home Buyers App

So at some point some smart real estate agent is going to figure out that there is a WEALTH of relevant information for home buyers in the open data catalogue. Perhaps someone might create this iPhone app and charge for it, perhaps a real estate group will pay for its creation (I know some coders who would be willing – drop me an email).

Imagine an iPhone app you use when shopping around for homes. Since the app knows where you are, it can use open data to tell you: property assessment, the distance to the nearest park (and nearest park with an off-leash area), nearest school, school zone (elementary, plus secondary immersion and regular), distance to the local community centre, neighbourhood name, nearest bus/subway stops and routes, closest libraries and nearest firehall, among a host of other data. Having that type of information at your fingertips could be invaluable!
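For a sense of how simple the core lookup is once the data is in hand, here is a sketch of a nearest-amenity query. It assumes the relevant catalogue layers have been flattened into (name, lat, lon) tuples, which is my simplification, not the portal’s actual format.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def nearest(here, places):
    """Closest (name, distance_km) to the phone's location."""
    best = min(places, key=lambda p: haversine_km(here[0], here[1], p[1], p[2]))
    return best[0], round(haversine_km(here[0], here[1], best[1], best[2]), 2)

# Toy park layer; a real app would load every layer listed above.
parks = [("Stanley Park", 49.3017, -123.1417),
         ("Queen Elizabeth Park", 49.2418, -123.1126)]
print(nearest((49.2827, -123.1207), parks))  # phone is downtown
```

Run the same query against each layer (schools, libraries, firehalls) and you have the whole app, minus the map.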

2. My Commute App

One of the sexiest and most interesting data sets released in version 2 is a GeoRSS feed of upcoming road closures (which you can also click through and see as a map!). It would be great if a commuter could outline their normal drive or select their bus route, and any time the feed posts roadwork that will occur on that route, the user receives an email informing them of the fact. That would let you plan an alternative route, or at least know that you’re going to have to leave a little early.
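Here is a sketch of the matching step, assuming the closures feed carries standard georss:point elements (“lat lon” pairs); the feed URL below is a placeholder, not the portal’s real address, and a production version would email via smtplib rather than print.

```python
import math
import urllib.request
import xml.etree.ElementTree as ET

GEORSS_POINT = "{http://www.georss.org/georss}point"
FEED = "https://example.org/vancouver/road-closures.rss"  # placeholder URL

def km_between(a, b):
    """Equirectangular approximation: plenty accurate at city scale."""
    kx = 111.32 * math.cos(math.radians((a[0] + b[0]) / 2))
    return math.hypot((a[0] - b[0]) * 110.57, (a[1] - b[1]) * kx)

def closure_points(feed_url):
    """Yield (lat, lon) for every georss:point entry in the feed."""
    doc = ET.fromstring(urllib.request.urlopen(feed_url).read())
    for pt in doc.iter(GEORSS_POINT):
        lat, lon = map(float, pt.text.split())
        yield (lat, lon)

# A commute stored as sampled points along the route.
my_route = [(49.2627, -123.1139), (49.2660, -123.1000)]

for closure in closure_points(FEED):
    if any(km_between(closure, stop) < 0.25 for stop in my_route):
        print("Roadwork near your route:", closure)  # real app: send email
```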

3. Development Feedback App

There is always so much construction going on in Vancouver that it is often hard to know what is going to happen next. The city, to its credit, requires developers to post a giant white board outlining the proposed development. Well, now a data feed of planned developments is available on the data portal (it can also already be viewed in map form)! Imagine an iPhone app which shows you the nearest development applications (with details!) and heritage buildings so you can begin to understand how the neighbourhood is going to change. Then imagine a form you can fill in – right then(!) – that emails your concerns or support for that development to a councillor or relevant planning official…

For a city like Vancouver that obsesses about architecture and its neighborhoods, this feels like a winner.

4. MyPark App

We Vancouverites are an outdoorsy bunch. Why not an app that consolidates information about the city’s parks into one place? You could have park locations, a nearest-park locator, a nearest-dog-park locator, the Parks Board’s most recent announcements and an events RSS feed. I’m hoping that in the near future the Parks Board will release soccer/ultimate frisbee field condition updates in a machine-readable format.

5. VanTrash 2.0?

Interestingly, apartment recycling schedule zones were also released in the new version of the site. It might be interesting to see if we can incorporate them into the already successful VanTrash and so expand the user base.

I’m also thinking there could be some cool things one could do with graffiti information (maybe around reporting? a 311 tie-in?) and street lights (a safest-walking-route-home app?).

So there is a start. If you are interested in these – or have your own ideas for how the data could be used – let me know. Better yet, consider coming down to the City Archives tomorrow evening for the third open data hackathon. I’ll be there; it would be great to chat.

The Internet as Surveillance Tool

There is a deliciously ironic, pathetically sad and deeply frightening story coming out of France this week.

On January 1st, France’s new (and controversial) law – Haute Autorité pour la Diffusion des Œuvres et la Protection des Droits sur Internet, otherwise known by its abbreviation, Hadopi – came into effect. The law makes it illegal to download copyright-protected works and uses a “three-strikes” system of enforcement. The first two times an individual illegally downloads copyrighted content (knowingly or unknowingly), they receive a warning. Upon the third infraction the entire household has its internet access permanently cut off and is added to a blacklist. To restore internet access, the household’s computers must be outfitted with special monitoring software which tracks everything the computer does and every website it visits.

Over at FontFeed, Yves Peters chronicles how the French agency designated to enforce the legislation, also named Hadopi, illegally used a copyrighted font, without the permission of its owner, in its logo design. Worse, once caught, the organization tried to cover up this fact by lying to the public. I can imagine that fonts and internet law are probably not your thing, but the story really is worth reading (and is beautifully told).

But as sad, funny and ironic as the story is, it is deeply scary. Hadopi, which is intended to prevent the illegal downloading of copyrighted materials, couldn’t even launch without (innocently or not) breaking the law. They, however, are above the law. There will be no repercussions for the organization and no threat that its internet access will be cut off.

The story for French internet users will, however, be quite different. Over the next few months I wouldn’t be surprised if tens or even hundreds of thousands of French citizens (or their children, or someone else in their home) inadvertently download copyrighted material illegally and, in order to continue to have access to the internet, are forced to let the French government monitor everything they do on their computers. In short, Hadopi will functionally become a system of mass surveillance – a tool that enables the French government to monitor the online activities of more and more of its citizens. Indeed, it is conceivable that after a few years a significant number, possibly even a majority, of French computers could be monitored. Forget Google. In France, the government is the Big Brother you need to worry about.

Internet users in other countries should also be concerned. “Three-strikes” provisions like those adopted by France have allegedly been discussed during the negotiations of ACTA, an international anti-counterfeiting treaty being secretly negotiated between a number of developed countries.

Suddenly copyright becomes a vehicle to justify the government’s right to know everything you do online. To ensure that none of your online activities violate copyright, all of them will need to be monitored. France, and possibly your country soon too, will thus transform the internet, the greatest single vehicle for free thought and expression, into a giant wiretap.

(Oh, and just in case you thought the French already didn’t understand the internet, it gets worse. Read this story from the Economist. How one country can be so backward is hard to imagine.)

Eaves.ca Blogging Moment #1 (2009 Edition): Open Data Comes to Vancouver

Back in 2007 I published a list of top ten blogging moments – times I felt blogging resulted in something fun or interesting. I got numerous notes from friends who found it fun to read (though some were not fans) so I’m giving it another go. Even without these moments it has been rewarding, but it is nice to reflect on them to understand why spending so many hours, often late at night, trying to post 4 times a week can give you something back that no paycheck can offer. Moreover, this is a chance to celebrate some good fortune and link to people who’ve made this project a little more fun. So here we go…

Eaves.ca Blogging Moment #1 (2009 Edition): Open Data Comes to Vancouver

On May 14th I blogged about the tabling of Vancouver’s open data motion at city council. After thousands of tweets, dozens of international online articles and blog posts, some national press and eventually some local press, the City of Vancouver passed the motion.

This was a significant moment for me and for people like Tim Wilson, Andrea Reimer and several people in the Mayor’s Office who worked hard to craft the motion and make it a reality. The first motion of its type in Canada, I believe it helped put open data on the agenda in policy circles across the country. Still more importantly, the work of the city is providing advocates with models – around legal issues, licensing and community engagement – that will allow them to move up the learning curve faster.

All this is also a result of the amazing work by city staff on this project. The fact that the city followed up and launched an open data portal less than 3 months later – becoming the first major city in Canada to do so – speaks volumes. (Props also to smaller cities like Kamloops and Nanaimo that were already sharing data.)

Today, several cities are contemplating creating similar portals and passing similar motions (I spoke at the launch of Toronto’s open portal; Ottawa, Calgary and Edmonton are in various stages of exploring the possibility of doing something; and over the border the City of Seattle invited me to present on the subject to their city councillors). We are still in the early days, but I have hopes that this initiative can help drive a new era of government transparency and citizen engagement.

Eaves.ca Blogging Moment #2 (2009 Edition): The Three Laws of Open Data go Global

Back in 2007 I published a list of top ten blogging moments – times I felt blogging resulted in something fun or interesting. I got numerous notes from friends who found it fun to read (though some were not fans) so I’m giving it another go. Even without these moments it has been rewarding, but it is nice to reflect on them to understand why spending so many hours, often late at night, trying to post 4 times a week can give you something back that no paycheck can offer. Moreover, this is a chance to celebrate some good fortune and link to people who’ve made this project a little more fun. So here we go…

Eaves.ca Blogging Moment #2 (2009 Edition): The Three Laws of Open Data go Global

In preparation for a panel presentation to parliamentarians hosted by the Office of the Information Commissioner, I wrote this piece titled “The Three Laws of Open Data.” The piece gets a lot of web traffic and interest.

Better still, the previously mentioned Australian Government 2.0 Taskforce includes the three laws in their final report.

Also nice: Tim O’Reilly – tech guru, publisher and open government champion – mentions it during his GTEC keynote in Ottawa.

Best yet, after putting out the request on Twitter, several volunteers from around the world translate the 3 laws into seven languages! (German, Russian, Japanese, Chinese, Dutch, Spanish and Catalan)

Hurray again for the internet!

BC Government’s blog on renewing the Water Act

On Friday the Government of British Columbia announced that it was beginning the process of renewing the province’s Water Act. This is, in and of itself, important and good.

More interesting, however, is that the government has chosen to launch a blog to discuss ideas and prospective changes, and generally engage the public on water issues.

It is, of course, early days. I’m not one to jump up and proclaim instant success, nor to pick apart the effort and find its faults after a single post. What I will say is that this type of experimentation in public engagement and policy development is long overdue. It is exciting to see a major government in Canada tentatively begin to explore how online technology and social media might enhance policy development as (hopefully) more than just a communication exercise. Even if it does not radically alter the process – or even if it does not go well – at least this government is experimenting and beginning to learn what will work and what won’t. I hope and suspect other jurisdictions will be watching closely.

If you are such a government type and are wondering what it is about the site that gives me hope… let me briefly list three things:

1. Site design: Unlike most government websites, which OVERWHELM you with information, menus and links, this one is (relatively) simple.
2. Social media: A sidebar with recent comments! A tag cloud! An RSS feed! Things that most blogs and websites have had for years and yet… seem to elude government websites.
3. An effective platform (bonus points for being open source): This may be the first time I’ve seen an official government website in Canada use WordPress (which, by the by, is free to download). When running a blog, WordPress is certainly my choice (quite literally) and has been a godsend. The choice of WordPress also explains a lot of why point #2 is possible.

So… promising start. Now, what would I like to see happen around the government’s blog?

Well, if you want to engage the public, why not give them the data that you are using internally? It would be great to get recent and historic flow-rate data from major rivers in BC. And what about water consumption rates by industry/sector, but also perhaps by region, by city and, dare we ask… by neighbourhood? It would also be interesting to share the assumptions about future growth so that professors, think tanks and those who care deeply about water issues could challenge and test them. Of course, the government could share all this data on its upcoming Apps For Climate Change data portal (more on that soon). If we were really lucky, some web superstar like this guy would create some cool visualization to help the public understand what is happening to water around the province and what the future holds.

In short, having a blog is a fantastic start, but let’s use it to share information so that citizens can do their own analysis, using their own assumptions, with the same data sets the government is using. That would certainly elevate the quality of the discussion on the site.

All in all, the potential for a site like this is significant. I hope the water geeks show up in force and are able to engage in a helpful manner.

MuniForge: Creating municipalities that work like the web

Last month I published the following article in the Municipal Information Systems Association’s journal, Municipal Interface. The article was behind a paywall, so now that the month has gone by I’m throwing it up here. Basically, it makes the case that if governments applied open source licenses to the software they developed (or paid to develop), they could save hundreds of millions, or more likely billions, of dollars a year. I got a couple of emails from municipal IT professionals from across the country.

MuniForge: Creating Municipalities that Work like the Web

Introduction

This past May the City of Vancouver passed what is now referred to as “Open 3”. This motion states that the City will use open standards for managing its information, treat open source and proprietary software equally during the procurement cycle, and apply open source licenses to software the city creates.

While a great deal of media attention has focused on the citizen engagement potential of open data, the implications of the second half of the motion – the part relating to open source software – have gone relatively unnoticed. This is all the more surprising since last year the Mayor of Toronto also promised his city would apply an open source license to software it creates. This means that two of Canada’s largest municipalities are set to apply open source licenses to software they create in house. Consequently, the source code and the software itself will be available for free, under a license that permits users to use, change, improve and redistribute it in modified or unmodified forms.

If capitalized upon, these announcements could herald a revolution in how cities currently procure and develop software. Rather than having thousands of small municipalities collectively spending billions of dollars to each recreate the same wheel, the open sourcing of municipal software could weave Canada’s municipal IT departments into one giant network in which expertise and specialized talents drive up quality and security to the benefit of all, while simultaneously collapsing the costs of development and support. Most interestingly, while this shift will benefit larger cities, its impact could be most dramatic and positive among the country’s smaller cities (those with populations under 200K). What is needed to make it happen is a central platform where the source code and documentation for software that cities wish to share can be uploaded and collaborated on. In short, Canada needs a SourceForge, or better, a GitHub for municipal software.

The cost

For the last two hundred years one feature has dominated the landscape for the majority of municipalities in Canada: isolation. In a country as vast and sparsely populated as ours, villages, towns and cities have often found themselves alone. For citizens, the railway, the telegraph, then the highway and telecommunications system eroded that isolation, but if we look at the operations of cities this isolation remains a dominant feature. Most Canadian municipalities are highly effective, but ultimately self-contained, islands. Municipal IT departments are no different. One municipality’s IT department rarely talks to another’s, particularly if they are not neighbours.

The result is that in many cities across Canada IT solutions are frequently developed in one of two ways.

The first is the procurement model. Thankfully, when the product is off the shelf, or easily customized, deployment can occur quickly; this, however, is rarely the case. More often, large software firms and expensive consultants are needed to deploy such solutions, frequently leaving them beyond the means of many smaller cities. Moreover, from an economic development perspective, the dollars spent on these deployments often flow out of the community to companies and consultants based elsewhere. On the flip side, local, smaller firms, if they exist at all, tend to be untested and frequently lack the expertise and competition necessary to provide a reliable and affordable product. Finally, regardless of the firm’s size, most solutions are proprietary and so lock a city into the solution in perpetuity. This not only holds the city hostage to the supplier, it eliminates future competition and, worse, should the provider go out of business, it saddles the city with an unsupported system that will be painful and expensive to upgrade out of.

The second option is to develop in-house. For smaller cities with limited IT departments this option can be challenging, but it is often still cheaper than hiring an external vendor. Here the challenge is that any solution is limited by the skills and talents of the city’s IT staff. A small city, even with a gifted IT staff of 2-5 people, will be challenged to effectively build and roll out all the IT infrastructure city staff and citizens need. Moreover, keeping pace with security concerns, new technologies and new services poses additional challenges.

In both cases, the IT services a city can develop and support for staff and citizens are limited by either the skills and capacity of its team or the size of its procurement budget. In short, the collective purchasing power, development capacity and technical expertise of Canada’s municipal IT departments are lost because we remain isolated from one another. With each city IT department acting like an island, the result is enormous constraint and waste. Software is frequently recreated hundreds of times over as each small city creates its own service or purchases its own license.

The opportunity

It need not be this way. Rather than a patchwork of isolated islands, Canada’s municipal IT departments could be a vast interconnected network.

If even two small communities in Canada applied an open source license to software they were producing, allowed anyone to download it, and documented it well, the cost savings would be significant. Rather than having two entities create what is functionally the same piece of software, the cost would be shared. Once available, other cities could download the software and write patches that would allow it to integrate with their own hardware/software infrastructure. These patches would also be open source, making it easier for still more cities to use the software. The more cities participate in identifying bugs, supplying patches and writing documentation, the lower the costs to everyone become. This is how Linus Torvalds started a community whose operating system – Linux – would become world class. It is the same process by which Apache came to dominate web servers, and it is the same approach used by Mozilla to create Firefox, a web browser whose market share now rivals that of Internet Explorer. The opportunity to save municipalities millions, if not billions, in software licensing and/or development costs every year is real and tangible.

What would such a network look like and how hard would it be to create? I suspect two pieces would need to be in place to begin growing a nascent network.

First, and foremost, there needs to be a handful of small projects. Often the most successful open source projects are those that start collaboratively. This way the processes and culture are, from the get-go, geared towards collaboration and sharing. This is also why smaller cities are the perfect place to start collaborating on open source projects. The world’s large cities are happy to explore new models, but they are too rich, too big and too invested in their current systems to drive change. The big cities can afford Accenture. Small cities are not only more nimble, they have the most to gain. By working together and using open source they can provide a level of service comparable to that of the big cities, at a fraction of the cost. An even simpler first step would be to ensure that when contractors sign on to create new software for a city, they agree that the final product will be available under an open source license.

Second, MISA, or another body, should create a SourceForge clone for hosting open source municipal software projects. SourceForge is an American open source software development website which provides services that help people build and share software with coders around the world. It presently hosts more than 230,000 software projects and has over 2 million registered users. SourceForge operates as a sort of marketplace for software initiatives, a place where one can locate software one is interested in and then both download it and/or become part of a community to improve it.

A SourceForge clone – say, MuniForge – would be a repository for software that municipalities across the country could download and use for free. It would also be the platform upon which collaboration around developing, patching and documenting would take place. MuniForge could also offer tips, tools and learning materials for those new to the open source space on how to effectively lead, participate and work within an open source community. This said, if MISA wanted to keep costs even lower, it wouldn’t even need to create a SourceForge clone; it could simply use the actual SourceForge website and lobby the company to create a new “municipal” category.

And herein lies the second great opportunity of such a platform: it could completely restructure the government software business in Canada. At the moment Canadian municipalities must choose between competing proprietary systems that lock them into a specific vendor. Worse still, they must pay for both the software development and ongoing support. A MuniForge would allow for a new type of vendor modeled after Red Hat – the company that offers support to users who adopt its version of the free, open source Linux operating system. While vendors couldn’t sell software found on MuniForge, they could offer support for it. Cities would now have the benefit of outsourcing support without having to pay for the development of a custom, proprietary software system. Moreover, if they are not happy with their support they can always bring it in house, or even ask a competing company to provide support. Since the software is open source, nothing prevents several companies from supporting the same piece of software – enhancing service, increasing competition and driving down prices.

There is another, final, global benefit to this approach to software development. Over time, a MuniForge could begin to host all of the software necessary to run a modern-day municipality. This has dramatic implications for cities in the developing world. Today, thanks to rapid urbanization, many towns and villages in Asia and Africa will be tomorrow’s cities and megacities. With only a fraction of the resources, these cities will need to be able to offer the services that are today commonplace in Canada. With MuniForge they could potentially download all the infrastructure they need for free – enabling precious resources to go towards other critical pieces of infrastructure such as sewers and drinking water. Moreover, a MuniForge would encourage small local IT support organizations to develop in those cities, providing jobs and fostering IT innovation where they are needed most. Better still, over time, patches and solutions would flow the other way, as more and more cities helped improve the code base of projects found on MuniForge.

Conclusion

The internet has demonstrated that new, low-cost models of software development exist. Open source software development has shown how loosely connected networks of coders/users from across a country, or even around the world, can create world-class software that rivals and even outperforms software created by the largest proprietary developers. This is the logic of the web – participation, better software, lower-cost development.

The question cities across Canada need to ask themselves is: do we want to remain isolated islands, or do we want to work like the web, collaborating to offer better services, more quickly and at a lower cost? If even only some cities choose the latter, an infrastructure to enable collaboration can be put in place at virtually no cost, while the potential benefits and the opportunity to restructure the government software industry would be significant. Island or network – which do we want to be?

Making Open Source Communities (and Open Cities) More Efficient

My friend Diederik and I are starting to work more closely with some open source projects on how to help “open” communities (be they software projects or cities) become more efficient.

One of the claims of open source is that many eyes make all bugs shallow. However, this claim only holds if there is a mechanism for registering and tackling the bugs. If a thousand people point out a problem, one may find that one is overwhelmed with problems – some of which may be critical, some of which are duplicates and some of which are not problems at all, but mistakes, misunderstandings or feature requests. Indeed, in recent conversations with open source community leaders, one of the biggest challenges and time sinks in a project is sorting through bugs and identifying those that are both legitimate and “new.” Cities, particularly those with 311 systems that act like the “bug tracking” software of open source projects, have a similar challenge. They essentially have to ensure that each new complaint is both legitimate and genuinely “new” and not a duplicate (e.g., are there two potholes at Broadway and 8th, or have two people called in to complain about the same pothole?).
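Purely to illustrate that triage problem, here is a toy de-duplication pass over 311-style reports. The schema (type, lat, lon, caller) is mine, not any real 311 system’s: complaints of the same type that fall into the same roughly 10-metre grid cell are treated as one underlying issue with multiple callers.

```python
def cell(lat, lon, precision=4):
    """Round coordinates to a ~10 m grid so nearby reports collide."""
    return (round(lat, precision), round(lon, precision))

def triage(reports):
    """Group raw complaints into distinct issues with their callers."""
    issues = {}
    for r in reports:
        key = (r["type"], cell(r["lat"], r["lon"]))
        issues.setdefault(key, []).append(r["caller"])
    return issues

reports = [
    {"type": "pothole", "lat": 49.26341, "lon": -123.11402, "caller": "A"},
    {"type": "pothole", "lat": 49.26343, "lon": -123.11401, "caller": "B"},  # same hole
    {"type": "pothole", "lat": 49.26500, "lon": -123.10000, "caller": "C"},  # a second hole
]
for (kind, loc), callers in triage(reports).items():
    print(kind, loc, f"{len(callers)} report(s)")
```

A real system would need fuzzier matching than a fixed grid (two reports can straddle a cell boundary), but even this crude pass shows how duplicate detection collapses call volume into a shorter work list.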

The other month Diederik published the graph below, which used bug submission data for Mozilla Firefox tracked in Bugzilla to demonstrate that, over time, bug submitters on average do become more efficient (blue line). However, what is interesting is that despite the improved average quality, the variability in the efficacy of individual bug submitters remained high (red line). The graph makes it appear as though the variability increases as submitters become more experienced, but this is not the case: towards the left there were simply many more bug submitters, and they averaged each other out, creating the illusion of less variability. As you move to the right, the number of bug submitters with these levels of experience is quite small, sometimes only 1-2 per data point, so the variability simply becomes more apparent.
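For the curious, the computation behind such a chart can be reconstructed roughly as follows. This is a sketch under stated assumptions, not Diederik’s actual code: I assume a flat export of Bugzilla reports with submitter, date and accepted columns (the names are mine).

```python
import pandas as pd

# Hypothetical flat export of bug reports: one row per report, with
# "submitter", "date" and a boolean "accepted" (bug deemed real and new).
bugs = pd.read_csv("bugzilla_reports.csv", parse_dates=["date"])

# Experience = how many bugs the submitter had already filed at that point.
bugs = bugs.sort_values("date")
bugs["experience"] = bugs.groupby("submitter").cumcount()

# Cross-submitter average acceptance rate at each experience level
# (the blue line) and its spread (the red line).
summary = bugs.groupby("experience")["accepted"].agg(
    mean="mean", std="std", n="count"
)

# n collapses to a handful of submitters at high experience levels,
# which is exactly why the right-hand side of the chart looks noisy.
print(summary.tail())
```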

Consequently, the group encircled by the purple oval is very experienced and yet continues to submit bugs the community ultimately chooses either to ignore or to deem not worth fixing. Sorting through, testing and evaluating these bugs sucks up precious time and resources.

We are presently looking at more data to assess whether we can come up with a profile of what makes a bug submitter fall into this group (as opposed to being “average” or exceedingly effective). If one could screen for such bug submitters, then a community might be able to better educate them and/or provide more effective tools and thus improve their performance. In more radical cases – if the net cost of their participation was too great – one could even screen them out of the bug submission process. If one could improve the performance of this purple-oval group by even 25%, there would be a significant improvement in the average (blue line). We look forward to talking and sharing more about this in the near future.

As a secondary point, I feel it is important to note that we are still in the early days of the open source development model. My sense is there are still improvements – largely through more effective community management – that can yield dramatic (as opposed to incremental) boosts in productivity for open source projects. This separates them again from proprietary models, which – as far as I can tell – can at the moment at best hope for incremental improvements in productivity. Thus, for those evaluating the costs of open versus closed processes, it might be worth considering that the two approaches may be (and, in my estimation, are) evolving at very different rates.

(If someone from a city government is reading this and you have data regarding 311 reports, we would be interested in analyzing your data to see if similar results bear out – plus it may enable us to help you manage your call volume more effectively.)