Category Archives: open source

Covid-19: Lessons from and for Government Digital Service Groups

This article was written by David Eaves, lecturer at the Harvard Kennedy School, Tom Loosemore, Partner at Public Digital, with Tommaso Cariati and Blanka Soulava, students at the Harvard Kennedy School. It first appeared in Apolitical.

Government digital services have proven critical to the pandemic response. As a result, the operational pace of launching new services has been intense: standing up new services to provide emergency funds, helping people stay healthy by making critical information accessible online, reducing the strain on hospitals with web- and mobile-based self-assessment tools, and laying the groundwork for a gradual reopening of society by developing contact tracing solutions.

To share best practices and discuss emerging challenges, Public Digital and the Harvard Kennedy School co-hosted a gathering of over 60 individuals from digital teams in over 20 national and regional governments. On the agenda were two mini-case studies: an example of collaboration and code-sharing across Canadian governments, and the privacy and interoperability challenges involved in launching contact tracing apps.

We were cautious about convening teams involved in critical work, as we’re aware of how much depends on them. However, given the positive feedback and valuable lessons from this session, we plan additional meetings in the coming weeks. These will lead up to the annual Harvard / Public Digital virtual convening of global digital services taking place this June. (If you are part of a national or regional government digital service team and are interested in learning more, please contact us here.)

Case 1: sharing code for rapid response

In early March, as the Covid-19 crisis gained steam across Canada, the province of Alberta’s non-emergency healthcare information service became overwhelmed with phone calls.

Some calls came from individuals with Covid-19-like symptoms or people who had been exposed to someone who had tested positive. But many calls also came from individuals who wanted to know whether it was prudent to go outside, who were anxious, or who had symptoms unrelated to Covid-19 but were unsure which symptoms to look out for. As the call center became overwhelmed, its ability to help those most at risk was impeded.

Enter the Innovation and Digital Solutions team from Alberta Health Services, led by Kass Rafih and Ammneh Azeim. In two days, they interviewed medical professionals, built a prototype self-assessment tool and conducted user testing. On the third day, exhausted but cautiously confident, they launched the province of Alberta’s Covid-19 Self-Assessment tool. With a ministerial announcement and a lucky dose of Twitter virality, they had 300,000 hits in the first 24 hours, rising to more than 3 million today. This is in a province with a total population of 4.3 million residents.

But the transformative story begins five days later, when the Ontario Digital Service called and asked if the team from Alberta would share their code. In a move filled with Canadian reasonableness, Alberta was happy to oblige and uploaded their code to GitHub.

Armed with Alberta’s code, the Ontario team also moved quickly, launching a localised version of the self-assessment tool on Ontario.ca in three days. Anticipating high demand, a few days later they stood up a new domain — Covid-19.ontario.ca — and migrated the tool there. The site has since evolved into a comprehensive information source for citizens, hosting advice on social distancing and easy-to-understand explanations of how the virus works.

The evolution of the Ontario Covid-19 portal information page, revised for ease of understanding and use

The Ontario team, led in part by Spencer Daniels, quickly iterated on the site, leveraging usage data and user feedback to almost entirely rewrite the government’s Covid-19 advice in simpler, more accessible language. This helped reduce unwarranted calls to the province’s help lines.

Our feeling is that governments should share code more often, and this case is a wonderful example of the benefits it can create. We’ve mostly focused on how code sharing allowed Ontario to move more quickly. But posting the code publicly also resulted in helpful feedback from the developer community and wider adoption. Several large private sector organisations have repurposed the code to create similar applications for their employees, and numerous governments on our call expressed interest in localising it for their jurisdictions. Sharing can radically increase the impact of a public good.

The key lessons from this case:

  • Sharing code lets good practices and tools be adopted more widely — in days, not weeks
  • Leveraging existing code allows a government team to focus on user experience, deployment and scaling
  • The crisis is a good opportunity to overcome policy inertia around sharing or adopting open source solutions

Both digital services still have their code on GitHub (Ontario’s can be found here and Alberta’s here).

This amazing outcome is also a result of the usual recommendations for digital services, which both Alberta and Ontario executed so well: user-centered design, agile working and thinking, cross-functional teams, security and privacy by design, and simple language.

Case 2: Contact tracing and data interoperability

Many countries hit hard by the coronavirus are arriving at the end of the beginning.

The original surge of patients is beginning to wane. And then begins a complicated next phase. A growing number of politicians will be turning to digital teams (or vendors) hoping that contact tracing apps will help re-open societies sooner. Government digital teams need to understand the key issues to ensure these apps are deployed in ways that are effective, or to push back against decision makers if these apps will compromise citizens’ trust and safety.

To explore the challenges contact tracing apps might create, the team from Safe Paths, an open source, privacy by design contact tracing app built by an MIT led team of epidemiologists, engineers, data scientists and researchers, shared some early lessons. On our call, the Safe Paths team outlined two core thoughts behind their work on the app: privacy and interoperability between applications.

The first challenge is the issue of data interoperability. For large countries like the United States, or regions like Europe where borders are porous, contact tracing will be difficult if data cannot be scaled or made interoperable. Presently, many governments are exploring developing their own contact tracing apps. If each has a unique approach to collecting and structuring data it will be difficult to do contact tracing effectively, particularly as societies re-open.

Apple and Google’s recent announcement of a common Bluetooth standard to enable interoperability may give governments a false sense of security that this issue will resolve itself. This is not the case. While helpful, the standard will not solve the problem of data portability, which would let a user choose to share their data with multiple organisations. Governments will need to come together and use their collective weight to drive vendors and their internal development teams towards a smaller set of standards, quickly.

The second issue is privacy. Poor choices around privacy and data ownership — enabled by the crisis of a pandemic — will have unintended consequences both in the short and long term. In the short term, if the most vulnerable users, such as migrants, do not trust a contact app they will not use it or worse, attempt to fool it, degrading data collection and undermining the health goals. Over the long term, decisions made today could normalise privacy standards that run counter to the values and norms of free liberal societies, undermining freedoms and the public’s long term trust in government. This is already of growing concern to civil liberties groups.

One way Safe Paths has tried to address the privacy issue is by storing users’ data on their devices and giving the user control over how and when data is shared in a de-identified manner. There are significant design and policy challenges with contact tracing apps. This discussion is hardly exhaustive, but these conversations need to start now, as decisions about how to implement these tools are already being made.

Finally, the Safe Paths team noted that governments have a responsibility to ensure access to contact tracing infrastructure. For example, the team struck agreements in a partner Caribbean country to zero-rate the app — that is, to make the mobile data needed to download and run it free of charge — to minimise any potential cost to users. Without such agreements, some of the most vulnerable won’t have access to these tools.

Conclusions and takeaways

This virtual conversation was the first in a series that will be held between now and the annual June Harvard / Public Digital convening of global digital services. We’ll be hosting more in the coming weeks and months.

Takeaways:

  • The importance of collaboration and sharing code within and between countries. This was exemplified by code sharing between the Canadian provinces and by the hope that this can become an international effort.
  • The importance of maintaining a user-centered focus despite the time pressure and a fast-changing environment that requires quick implementation and iteration. Another resource here is California’s recently published crisis digital standard.
  • Privacy and security must be central to solutions that help countries deal with Covid-19. The technology exists to make private and secure self-assessment forms and contact tracing apps. The challenge is setting those standards early and driving global adoption of them.
  • Interoperability of contact tracing solutions will be pivotal to tackling a pandemic that doesn’t respect borders, cultures, or nationalities. As the Safe Paths team highlighted, this is a global standard-setting challenge.

Harvard and Public Digital are planning to host another event in this series on the digital response to Covid-19; sign up here if you’d like to participate in future gatherings! — David Eaves and Tom Loosemore with Tommaso Cariati and Blanka Soulava

This piece was originally published on Apolitical.

Government Procurement Failure: BC Ministry of Education Case Study

Apologies for the lack of posts. I’ve been in business mode – both helping a number of organizations I’m proud of and working on my own business.

For those interested in a frightening tale of inept procurement, poor judgement and downright dirty tactics when it comes to software procurement and government, there is a wonderfully sad and disturbing case study emerging in British Columbia that shows the lengths a government is willing to go to shut out open source alternatives and ensure that large, expensive suppliers win the day.

The story revolves around a pickle that the province of British Columbia found itself in after a previous procurement disaster. The province had bought a student record management system – software that records elementary and secondary students’ grades and other records. Sadly, the system never worked well. For example, student records generally all get entered at the end of the term, so any system must be prepared to manage significant episodic spikes in usage. The original British Columbia Electronic Student Information System (BCeSIS) was not up to the task and frequently crashed and/or locked out teachers.

To make matters worse, after spending $86M over 6 years it was ultimately determined that BCeSIS was unrecoverably flawed and, as the vendor was ending support, a new system needed to be created.

Interestingly, one of the Province’s school districts – the District of Saanich – decided it would self-fund an open source project to create an alternative to BCeSIS. Called OpenStudent, the system would have an open source license, would be created by locally paid open source developers, could be implemented in a decentralized way while still meeting the requirements of the province and… would cost a fraction of what large government vendors proposed. The Times Colonist has a simple article that covers the launch of OpenStudent here.

Rather than engage Saanich, the province decided to take another swing at hiring a multinational to run an IT mega-project. An RFP was issued to which only companies with $100M in sales could apply. Fujitsu was awarded a 12-year contract with costs of up to $9.4M a year.

And here are the kickers:

So, in other words, the province sprang some surprise requirements on the District of Saanich that forced it to kill an open source solution that could have saved taxpayers millions and employed British Columbians, all while exempting a multinational from meeting the same requirements. It would appear that the province was essentially engaged in a strategy to kill OpenStudent, likely because any success it enjoyed would have created an ongoing PR challenge for the province and threatened its ongoing contract with Fujitsu.

While I don’t believe that any BC government official personally profited from this outcome, it is hard – very hard indeed – not to feel like the procurement system is deeply suspect or, at worst, corrupted. I have no idea if it is possible, but I do hope that these documents can serve as the basis for legal action by the District of Saanich against the Province of British Columbia to recapture some of their lost expenses. The province has clearly used its purchasing power to alter the marketplace and destroy competitors; whether this is in violation of a law, I don’t know. I do know, however, that it is in violation of good governance, effective procurement and general ethics. As a result, all BC taxpayers have suffered.

Addendum: It has been suggested to me that one reason the BC government may be so keen to support Fujitsu and destroy competing suppliers is that it needs to generate a certain amount of business for the company in order for it to maintain headcount in the province. Had OpenStudent proved viable and cheaper (it was estimated to cost $7-10 per student versus $20 for Fujitsu’s service), Fujitsu might have threatened to scale back operations, which might have hurt service levels for other contracts. It is unclear to me whether this is true. To be clear, I don’t hold Fujitsu responsible for anything here – they are just a company trying to sell their product and offer the best service they can. The disaster described above has nothing to do with them (they may or may not offer amazing products, I don’t know); rather, it has everything to do with the province using its power to eliminate competition and choice.

Mozillians: Announcing Community Metrics DashboardCon – January 21, 2014

Please read background below for more info. Here’s the skinny.

What

A one-day mini-conference for Mozillians about community metrics and dashboards, held in San Francisco on January 21st and 22nd, 2014 (remote participation possible; originally planned for Vancouver on January 14th).

Update: Apologies for the change of date and location, this event has sparked a lot of interest and so we had to change it so we could manage the number of people.

Why?

It turns out that in the past 2-3 years a number of people across Mozilla have been tinkering with dashboards and metrics in order to assess community contributions, effectiveness, bottlenecks, performance, etc. For some people this is their job (looking at you, Mike Hoye); for others it is something they arrived at by necessity (looking at you, SUMO group); and for others it was just a fun hobby or experiment.

Certainly I (and I believe co-collaborators Liz Henry and Mike Hoye) think metrics in general and dashboards in particular can be powerful tools, not just to understand what is going on in the Mozilla community, but as a way to empower contributors and reduce the friction of participating at Mozilla.

And yet as a community of practice, I’m not sure those interested in converting community metrics into some form of measurable output have ever gathered together. We’ve not exchanged best practices, aligned around a common nomenclature or discussed the impact these dashboards could have on the community, management and other aspects of Mozilla.

Such an exercise, we think, could be productive.

Who

Who should come? Great question. Pretty much anyone who is playing around with metrics around community, participation, or something parallel at Mozilla. If you are interested in participating, please sign up here.

Who is behind this? I’ve outlined more in the background below, but this event is being hosted by myself, Mike Hoye (engineering community manager) and Liz Henry (bugmaster).

Goal

As you’ve probably gathered the goals are to:

  • Get a better understanding of what community metrics and dashboards exist across Mozilla
  • Learn about how such dashboards and metrics are being used to engage, manage or organize communities and/or influence operations
  • Exchange best practices around both the development and the use/application of dashboards and metrics
  • Stretch goal – begin to define some common definitions for metrics that exist across Mozilla to enable portability of metrics across dashboards.

Hope this sounds compelling. Please feel free to email or ping me if you have questions.

—–

Background

I know that my co-collaborators – Mike Hoye and Liz Henry – have their own reasons for ending up here. I, as many readers know, am deeply interested in understanding how open source communities can combine data and analytics with negotiation and management theory to better serve their members. This was the focus of my keynote at OSCON in 2012 (posted below).

For several years I tried, with minimal success, to create some dashboards that might provide an overview of the community’s health as well as diagnose problems that were harming growth. Despite my own limited success, it has been fascinating to see how more and more individuals across Mozilla – some developers, some managers, others just curious observers – have been scraping data they control or can access to create dashboards to better understand what is going on in their part of the community. The fact is, there are probably at least 15 different people running community-oriented dashboards across Mozilla – and almost none of us are talking to one another about it.

At the Mozilla Summit in Toronto, after speaking with Mike Hoye (engineering community manager) and Liz Henry (bugmaster), I proposed that we do a low-key mini-conference to bring together the various Mozilla stakeholders in this space. Each of us would love to know what others at Mozilla are doing with dashboards and to understand how they are being used. We figured that if we wanted to learn from others who were creating and using dashboards and community metrics data, they probably do too. So here we are!

In addition to Mozillians, I’d also love to invite an old colleague, Diederik van Liere, who looks at community metrics for the Wikimedia foundation, as his insights might also be valuable to us.

http://www.youtube.com/watch?v=TvteDoRSRr8

New Zealand: The World’s Lab for Progressive Tech Legislation?

Cross posted with TechPresident.

One of the nice advantages of having a large world with lots of diverse states is the range of experiments it offers us. Countries (or regions within them) can try out ideas, and if they work, others can copy them!

For example, in the world of drug policy, Portugal effectively decriminalized virtually all drugs. The result has been dramatic, and much of it positive. The changes include a 17% decline in HIV diagnoses amongst drug users and a decline in drug use among adolescents (13-15 yrs). For those interested, you can read more about this in a fantastic report by the Cato Institute written by Glenn Greenwald back in 2009, before he started exposing the unconstitutional and dangerous activities of the NSA. Now, some 15 years later, there have been increasing demands to decriminalize and even legalize drugs, especially in Latin America. But even the United States is changing, with both the states of Washington and Colorado opting to legalize marijuana. The lessons of Portugal have helped make the case, not by penetrating the public’s imagination per se, but by showing policy elites that decriminalization not only works but saves lives and saves money. Little Portugal may one day be remembered for changing the world.

I wonder if we might see a similar paper written about New Zealand ten years from now about technology policy. It may be that a number of Kiwis will counter the arguments in this post by exposing all the reasons why I’m wrong (which I’d welcome!) but at a glance, New Zealand would probably be the place I’d send a public servant or politician wanting to know more about how to do technology policy right.

So why is that?

First, for those who missed it, this summer New Zealand banned software patents. This is a stunning and entirely sensible accomplishment. Software patents, and the legal morass and drag on innovation they create, are an enormous problem. The idea that Amazon can patent “1-click” (e.g. the idea that you pre-store someone’s credit card information so they can buy an item with a single click) is, well, a joke. This is a grand innovation that should be protected for years?

And yet, I can’t think of a single other OECD member country that is likely to pass similar legislation. This means that it will be up to New Zealand to show that the software world will survive just fine without patents and the economy will not suddenly explode into flames. I also struggle to think of an OECD country where one of the most significant industry groups – the Institute of IT Professionals – would not only support such a measure but help push its passage:

The nearly unanimous passage of the Bill was also greeted by Institute of IT Professionals (IITP) chief executive Paul Matthews, who congratulated [Commerce Minister] Foss for listening to the IT industry and ensuring that software patents were excluded.

Did I mention that the bill passed almost unanimously?

Second, New Zealanders are further up the learning curve regarding the dangerous willingness of their government – and foreign governments – to illegally surveil them online.

The arrest of Kim Dotcom over MegaUpload has sparked investigations into how closely the country’s police and intelligence services follow the law. (For an excellent timeline of the Kim Dotcom saga, check out this link.) This is because Kim Dotcom was illegally spied on by New Zealand’s intelligence services and police force, at the behest of the United States, which is now seeking to extradite him. The arrest and subsequent fallout have piqued public interest and led to investigations including the Kitteridge report (PDF), which revealed that “as many as 88 individuals have been unlawfully spied on” by the country’s Government Communications Security Bureau.

I suspect the Snowden documents and subsequent furor surprised New Zealanders less than they did many of their counterparts in other countries, since they were less a bombshell than another data point on a trend line.

I don’t want to overplay the impact of the Kim Dotcom scandal. It has not, as far as I can tell, led to a complete overhaul of the rules that govern intelligence gathering and online security. That said, I suspect it has created a political climate that may be more (healthily) distrustful of government intelligence services and the intelligence services of the United States. As a result, it is likely that politicians have been more sensitive to this matter for a year or two longer than elsewhere, and that public servants are more accustomed to assessing policies through the lens of their impact on the rights and privacy of citizens than in many other countries.

Finally (and this is somewhat related to the first point), New Zealand has, from what I can tell, a remarkably strong open source community. I’m not sure why this is the case, but I suspect that people like Nat Torkington – an open source and open data advocate in New Zealand – and others like him play a role in it. More interestingly, this community has had influence across the political spectrum. The centre-left Labour Party deserves much of the credit for the patent reform, while the centre-right New Zealand National Party has embraced open data. The country was among the first to embrace open source as a viable option when procuring software, and in 2003 the government developed an official open source policy to help clear the path for greater use of open source software. This contrasts sharply with my experience in Canada where, as late as 2008, open source was still seen by many government officials as a dangerous (some might say cancerous?) option that needed to be banned and/or killed.

All this is to say that both outside government (e.g. in civil society and the private sector) and within it there is greater expertise in thinking about open source solutions, and so an ability to ask different questions about intellectual property and definitions of the public good. While I recognize that this exists in many countries now, it has existed longer in New Zealand than in most, which suggests that it enjoys greater acceptance in senior ranks and there is greater experience in thinking about and engaging these perspectives.

I share all this for two reasons:

First, I would keep my eye on New Zealand. This is clearly a place where something is happening in a way that may not be possible in other OECD countries. The small size of its economy (and so its relative lack of importance to the major proprietary software vendors), combined with sufficient policy agreement among both the public and elites, enables the country to overcome internal and external lobbying and pressure that would likely sink similar initiatives elsewhere. And while New Zealand’s influence may be limited, don’t underestimate the power of example. Portugal also has limited influence, but its example has helped show the world that the US-led narrative on the “war on drugs” can be countered. In many ways this is often how it has to happen. Innovation, particularly in policy, often comes from the margins.

Second, if a policy maker, public servant or politician comes to me and asks me who to talk to around digital policy, I increasingly find myself looking at New Zealand as the place that is the most compelling. I have similar advice for PhD students. Indeed, if what I’m arguing is true, we need research to describe, better than I have, the conditions that lead to this outcome as well as the impact these policies are having on the economy, government and society. Sadly, I have no names to give to those I suggest this idea to, but I figure they’ll find someone in the government to talk to, since, as a bonus to all this, I’ve always found New Zealanders to be exceedingly friendly.

So keep an eye on New Zealand, it could be the place where some of the most progressive technology policies first get experimented with. It would be a shame if no one noticed.

(Again, if some New Zealanders want to tell me I’m wrong, please do. Obviously, you know your country better than I do.)

Announcing the 311 Data Challenge, soon to be launched on Kaggle

The Kaggle – SeeClickFix – Eaves.ca 311 Data Challenge. Coming Soon.

I’m pleased to share that, in conjunction with SeeClickFix and Kaggle, I’ll be sponsoring a predictive data competition using 311 data from four different cities. My hope is that – if we can demonstrate that there are predictive and socially valuable insights to be gained from this data – we might be able to persuade cities to work together to share data insights, helping everyone become more efficient, address social inequities and explore other city problems 311 data might enable us to tackle.

Here’s the backstory and some details in anticipation of the formal launch:

The Story

Several months back Anthony Goldbloom, the founder and CEO of Kaggle – a predictive data competition firm – approached me asking if I could think of something interesting that could be done in the municipal space around open data. Anthony generously offered to waive all of Kaggle’s normal fees if I could come up with a compelling contest.

After playing around with some ideas, I reached out to Ben Berkowitz, co-founder of SeeClickFix (one of the world’s largest implementers of the Open311 standard), and asked him if we could persuade some of the cities they work with to share their data for a competition.

Thanks to the hard work of Will Cukierski at Kaggle as well as the team at SeeClickFix we were ultimately able to generate a consistent data set with 300,000 lines of data involving 311 issues spanning 4 cities across the United States.

In addition, while we hoped many of those who might choose to participate in a municipal open data challenge would do so out of curiosity or a desire to better understand how cities work, SeeClickFix and I agreed to collectively put up $5,000 in prize money to help raise awareness about the competition and hopefully stoke some media (as well as broader participant) interest.

The Goal

The goal of the competition will be to predict the number of votes, comments and views an issue is likely to generate. To be clear, this is not a prediction that is going to radically alter how cities work, but it could be genuinely useful to communications departments, helping them predict problems that are particularly thorny or worth proactively communicating to residents about. In addition – and this remains unclear – my own hope is that it could help us understand discrepancies in how different socio-economic or other groups use online 311 and so enable city officials to more effectively respond to complaints from marginalized communities.
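To make the task concrete, here is a minimal, illustrative baseline in Python. It is a sketch only: the file name, the column names (summary, description, source, city, num_votes, num_comments, num_views) and the error metric are my own placeholders, not the competition’s actual specification, which will live on the Kaggle page.

```python
# A hypothetical baseline for predicting 311 engagement. File and column
# names are placeholders; check the real competition data on Kaggle.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("311_issues.csv")  # hypothetical export of the 4-city dataset
df["text"] = df["summary"].fillna("") + " " + df["description"].fillna("")

# Simple features: the issue's text plus where it came from.
features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=20000, ngram_range=(1, 2)), "text"),
    ("cats", OneHotEncoder(handle_unknown="ignore"), ["source", "city"]),
])

X = df[["text", "source", "city"]]
y = df[["num_votes", "num_comments", "num_views"]]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

for target in y.columns:
    model = Pipeline([("features", features), ("reg", Ridge(alpha=1.0))])
    # Engagement counts are heavily skewed, so fit on log(1 + y).
    model.fit(X_train, np.log1p(y_train[target]))
    pred = np.expm1(model.predict(X_test)).clip(min=0)
    error = np.sqrt(np.mean((np.log1p(pred) - np.log1p(y_test[target])) ** 2))
    print(f"{target}: log-scale RMSE = {error:.3f}")
```

Even a crude baseline like this would show whether the attention an issue attracts is at all predictable from its text and source; the interesting question is how much better participants can do.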

In addition there will be a smaller competition around visualizing the data.

The Bigger Goal

There is, however, for me, a potentially bigger goal. To date, as far as I know, predictive algorithms of 311 data have only ever been attempted within a city, not across cities. At a minimum it has not been attempted in a way in which the results are public and become a public asset.

So while the specific problem this contest addresses is relatively humble, I see it as creating a larger opportunity for academics, researchers, data scientists, and curious participants to figure out whether we can develop predictive algorithms that work for multiple cities. Because if we can, then these algorithms could become a shared common asset. Each algorithm would become a tool not just for one housing non-profit or city program, but for all sufficiently similar non-profits or city programs. This could be exceptionally promising – as well as potentially reveal new behavioral or incentive risks that would need to be thought about.

Of course, discovering that every city is unique and that work is not easily transferable, or that predictive models cluster by city size, or by weather, or by some other variable is also valuable, as this would help us understand what types of investments can be made in civic analytics and what the limits of a potential commons might be.
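One way to probe that transferability question, sketched below using the same hypothetical columns as above, is a leave-one-city-out evaluation: train on three cities, score on the held-out fourth, and compare the result with the within-city baseline. The city column is dropped as a feature here, since the held-out city is never seen during training.

```python
# Leave-one-city-out check: does a model trained on three cities transfer
# to the fourth? `df` is a pandas DataFrame with the hypothetical columns
# used above; `build_model()` returns a fresh, unfitted model or pipeline.
import numpy as np

def leave_one_city_out(df, build_model, target="num_votes"):
    scores = {}
    for city in df["city"].unique():
        train = df[df["city"] != city]
        held_out = df[df["city"] == city]
        model = build_model()
        model.fit(train[["text", "source"]], np.log1p(train[target]))
        pred = np.expm1(model.predict(held_out[["text", "source"]])).clip(min=0)
        scores[city] = np.sqrt(
            np.mean((np.log1p(pred) - np.log1p(held_out[target])) ** 2)
        )
    # Held-out scores close to the within-city baseline suggest a shared,
    # transferable asset; large gaps point to the limits of the commons.
    return scores
```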

So be sure to keep an eye on the Kaggle page (I’ll link to it) as this contest will be launching soon.

Beyond Property Rights: Thinking About Moral Definitions of Openness

“The more you move to the right the more radical you are. Because everywhere on the left you actually have to educate people about the law, which is currently unfair to the user, before you even introduce them to the alternatives. You aren’t even challenging the injustice in the law! On the right you are operating at a level that is liberated from identity and accountability. You are hacking identity.” – Sunil Abraham

I have a new piece up on TechPresident titled: Beyond Property Rights: Thinking About Moral Definitions of Openness.

This piece, as well as the really fun map I recreated, is based on a conversation with Sunil Abraham (@sunil_abraham), the Executive Director of the Centre for Internet and Society in Bangalore.

If you find this map interesting… check the piece out here.

map of open

 

CivicOpen: New Name, Old Idea

The other day Zac Townsend published a piece, “Introducing the idea of an open-source suite for municipal governments,” laying out the case for why cities should collaboratively create open source software that can be shared among them.

I think it is a great idea, and I’m thrilled to hear that more people are excited about exploring this model. I also think any such discussion would be helped by some broader context. More importantly, any series of posts on this subject that fails to look at previous efforts is, well, doomed to repeat the failures of the past.

Context

I wrote several blog posts making the case for it in 2009. (Rather than CivicOpen, I called it Muniforge.) These posts, I’m told, helped influence the creation of CivicCommons, a failed effort to do something similar (and on whose advisory board I sat).

Back when I published my posts I thought I was introducing the idea of shared software development in the civic space. It was only shortly after that I learned of Kuali – a similar effort occurring in the university sector – and was so enamoured with it that I wrote a piece about how we should clone its governance structure and create a similar organization for the civic space (something that the Kuali leadership told me they would be happy to facilitate). I also tried to expose anyone I could to Kuali. I had a Kuali representative speak at the first Code for America Summit and have talked about it from time to time while helping teach the Code for America Fellows at the beginning of each CfA year. I also introduced them to anyone I would meet in municipal IT and helped spur conference calls between them and people in San Francisco, Boston and other cities so they could understand the model better.

But even in 2009 I was not introducing the idea of shared municipal open source projects. While discovering that Kuali existed was great (and perhaps I can be forgiven for my oversight, as they are in the educational and not municipal space) I completely failed to locate the Association des développeurs et utilisateurs de logiciels libres pour les administrations et les collectivités territoriales (ADULLACT) a French project created seven(!) years prior to my piece that does exactly what I described in my Muniforge posts and what Zac describes in his post. (The English translation of ADULLACT’s name is the Association of Developers and Users of Free Software for Governments and Local Authorities; there is no English Wikipedia page that I could see, but a good French version is here.) I know little about why ADULLACT has survived, but their continued existence suggests that they are doing something right – ideas and processes that should inform any post about what a good structure for CivicOpen or another initiative might want to draw upon.

Of course, in addition to these, there are several other groups that have tried – some with little, others with greater, success – to talk about or create open source projects that span cities. Indeed, last year, Berkeley’s Smart Cities program proclaimed that the Open Source city had arrived. In that piece Berkeley references OpenPlans, which has for years tried to do something similar (and was indeed one of the instigators behind CivicCommons – a failed effort to get cities to share code). Here you can read Philip Ashlock, who was then at OpenPlans, talk about the desirability of cities creating and sharing open source code. In addition to CivicCommons, there is CityMart, a private sector actor that seeks to connect cities with solutions, including those that are open source; in essence it could be a catalyst for building municipal open source communities, but, as far as I can tell, it isn’t. (Update) There was also the US Federal Government’s Open Code Initiative, which now seems defunct, and Mark Headd of Code for America tells me to Google “Government Open Code Collaborative”, though I can find no information on it.

To understand why some models have failed and why some have succeeded, Andrew Oram’s article in the Journal of Information Technology and Politics is another good starting point. There was also some research into these ideas shared at the Major Cities of Europe IT Users Group back in 2009 by James Fogarty and Willoughby, which can be read here; it includes several mini-case studies from several cities and a crude, but good, cost-benefit analysis.

And there are other efforts that are more random, like The Intelligent/Smart Cities Open Source Community, which is for “anyone interested on intelligent / smart cities development and looks for applications and solutions which have been successfully implemented in other cities, mainly open source applications.”

Don’t be ahistorical

I share all this for several reasons.

  1. I want Zac to succeed in figuring out a model that works.
  2. I’d love to help.
  3. To note that a lot of thought has gone into this idea already. I myself thought I was breaking ground when I wrote my Muniforge piece back in 2009. I was not. There were a ton of lessons I could have incorporated into that piece that I did not, and previous successes and failures I could have linked to, but didn’t (at least until discovering Kuali).

I get nervous when I see posts – like that on the Ash Centre’s blog – that don’t cite previous efforts and that feel, to put it bluntly, ahistorical. I think laying out a model is a great idea. But we have a lot of data and stories about what works and doesn’t work. To not draw on these examples (or even mention or link to them) seems to be a recipe for repeating the mistakes of the past. There are reasons why CivicCommons failed, and why Kuali and ADULLACT have succeeded. I’ve interviewed a number of people at the former (and sadly no one at the latter) and this feels like table stakes before venturing down this path. It also feels like a good way of modelling what you eventually want a municipal/civic open source community to do – build and learn from the social code as well as the business and organizational models of those that have failed and succeeded before you. That is the core of what the best and most successful open source (and, frankly, many successful non-open source) projects do.

What I’ve learned is that the biggest challenges are not technical, but cultural and institutional. Many cities have policies, explicit or implicit, that prevent them from using open source software, to say nothing of co-creating open source software. Indeed, after helping draft the Open Motion adopted by the City of Vancouver, I helped the city revise its procurement policies to address these obstacles. Drawing on the example mentioned in Zac’s post, you will struggle to find many small and medium-sized municipalities that use Linux, or even that let employees install Firefox on their computers. Worse, many municipal IT staff have been trained to believe that open source is unstable, unreliable and involves questionable people. It is a slow process to reverse these opinions.

Another challenge that needs to be addressed is that many city IT departments have been hollowed out and don’t have the capacity to write much code. For many cities, IT is often more about operations and selecting whom to procure from, not writing software. So a CivicOpen/Muniforge/CivicCommons/ADULLACT approach will represent a departure into an arena where many cities have little capacity and experience. Many will be reluctant to build this capacity.

There are many more concerns of course and, despite them, I continue to think the idea is worth pursuing.

I also fear this post will be read as a critique. That is not my intent. Zac is an ally and I want to help. Above all, I share the above because the good news is this isn’t an introduction. There is a rich history of ideas and efforts to learn from and build upon. We don’t need to do this on our own or invent anew. There is a community of us who are thinking about these things and have lessons to share, so let’s share and make this work!

A note to the Ash Centre

As an aside, I’d have loved to link to this at the bottom of Zac’s post on the Harvard Kennedy School Ash Centre website, but the webmaster has opted not to allow comments. Even more odd is that the site does not show any dates on its posts. Again, at the risk of sounding critical when I just want to be constructive, I believe it is important for anyone, but for academic institutions especially, to have dates listed on articles so that we can better understand the timing and context in which they were written. In addition (and I understand this sentiment may not be shared), a centre focused on Democratic Governance and Innovation should allow for some form of feedback or interaction, at least some way people can respond to and/or build on the ideas they publish.

Making Bug Fixing more Efficient (and pleasant) – This Made Me Smile

The other week I was invited down to the Bay Area Drupal Camp (#BadCamp) to give a talk on community management to a side meeting of the 100 or so core Drupal developers.

I gave an hour-long version of my OSCON keynote on the Science of Community Management and had a great time engaging what was clearly a room of smart, caring people who want to do good things, ship great code, and work well with one another. As part of my talk I ran them through some basic negotiation skills – particularly around separating positions (a demand) from interests (the reasons/concerns that created that demand). Positions are challenging to work with as they tend to lock people into what they are asking for, which makes outcomes either binary or fosters compromises that may make little sense, whereas interests (which you get at by being curious and asking lots of whys) can create the conditions for creative, value-generating outcomes that also strengthen the relationship.

Obviously, understanding the difference is key, but so is acting on it, e.g. asking questions at critical moments to try to open up the dialogue and uncover interests.

Seems like someone was listening during the workshop, since I was just sent this link to a conversation about a tricky Drupal bug (screenshot below).

[Screenshot: Drupal bug-fixing conversation]

I love the questions. This is exactly the type of skill and community norm I think we need to build into more of our bug-tracking environments/communities, which can sometimes be pretty hostile and aggressive, something that I think turns off many potentially good contributors.

Is the Internet bringing us together or is it tearing us apart?

The other day the Vancouver Sun – via Simon Fraser University’s Public Square program – asked me to pen a piece answering the questions: Is the Internet bringing us together or is it tearing us apart?

Yesterday, they published the piece.

My short answer?

Trying to unravel whether the Internet is bringing us together or tearing us apart is impossible. It does both. What really matters is how we build generative communities, online and off.

My main point?

That community organizing is both growing and democratizing. On MeetUp alone there are 423 coming events in Vancouver. That’s 423 emergent community leaders, all learning how to mobilize people, whether it is for a party, to teach them how to knit, grow a business or learn how to speak Spanish.

This is pretty exciting.

A secondary point?

Is that it is not all good news. There are lots of communities, online and off, that are not generative. So if we are creating more communities, many of them will also be those we don’t agree with, and that are even destructive.

Check it

It is always exciting to me what you can squeeze into 500 words. Yesterday, the Sun published the piece here; if you’re interested, please do consider checking it out.

Community Managers: Expectations, Experience and Culture Matter

Here’s an awesome link to drive home my point from my OSCON keynote on Community Management, particularly the part where I spoke about the importance of managing wait times – the period between when a volunteer/contributor takes an action and when they get feedback on that action.

In my talk I referenced code review wait times. For non-developers: in open source projects, a volunteer (contributor) will often write a patch, which must then be reviewed by someone who oversees the project before it gets incorporated into the software’s code base. This is akin to a quality assurance process – say, if you are baking brownies for the church charity event, the organizer probably wants to see the brownies first, just to make sure they aren’t a disaster. The period between when you write the patch (or make the brownies) and when the project manager reviews it and says it is ok/not ok – that’s the wait time.

The thing is, if you never tell people how long they are going to have to wait, expect them to get unhappy. More importantly, if, while they’re waiting, other contributors come and make negative comments about their contributions, don’t be surprised if they get even more unhappy and become less and less inclined to submit patches (or brownies, or whatever makes your community go round).

In other words, your code base may be important, but expectations, experience and culture matter, probably more. I don’t think anyone believes Drupal is the best CMS ever invented, but its community has pretty good expectations, a great experience and a fantastic culture, so I suspect it kicks the ass of many “technically” better CMSs run by less well managed communities.

Because hey, if I’ve come to expect that I have to wait an infinite or undetermined amount of time, if the experience I have interacting with others sucks, and if the culture of the community I’m trying to volunteer with is not positive… guess what: I’m probably going to stop contributing.

This is not rocket science.

And you can see evidence of people who experience this frustration in places around the net. Edd Dumbill sent me this link via hacker news of a frustrated contributor tired of enduring crappy expectations, experience and culture.

Heres what happens to pull requests in my experience:

  • you first find something that needs fixing
  • you write a test to reproduce the problem
  • you pass the test
  • you push the code to github and wait
  • then you keep waiting
  • then you wait a lot longer (it’s been months now)
  • then some ivory tower asshole (not part of the core team) sitting in a basement finds a reason to comment in a negative way.
  • you respond to the comment
  • more people jump on the negative train and burry your honestly helpful idea in sad faces and unrelated negativity
  • the pull dies because you just don’t give a fuck any more

If this is what your volunteer community – be it software driven, or for poverty, or a religious org, or whatever – is like, you will bleed volunteers.

This is why I keep saying things like code review dashboards matter. I bet if this user could at least see what the average wait time is for code review he’d have been much, much happier. Even if that wait time were a month… at least he’d have known what to expect. Of course improving the experience and community culture are harder problems to solve… but they clearly would have helped as well.

Most open source projects have the data to set up such a dashboard; it is just a question of whether we will.
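To illustrate how little is needed, here is roughly what computing the average wait for a first review could look like against GitHub’s REST API. The repository name is a placeholder, and authentication, pagination and rate-limit handling are omitted; a real dashboard would need all three.

```python
# Rough sketch: average wait between opening a pull request and its first
# review, using GitHub's REST API. Repo is a placeholder; auth, pagination
# and rate limiting are omitted for brevity.
from datetime import datetime
import requests

REPO = "example-org/example-project"  # hypothetical repository
API = f"https://api.github.com/repos/{REPO}"

def ts(value):
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")

pulls = requests.get(f"{API}/pulls", params={"state": "closed", "per_page": 50}).json()

waits_in_days = []
for pr in pulls:
    reviews = requests.get(f"{API}/pulls/{pr['number']}/reviews").json()
    submitted = [ts(r["submitted_at"]) for r in reviews if r.get("submitted_at")]
    if submitted:
        wait = min(submitted) - ts(pr["created_at"])
        waits_in_days.append(wait.total_seconds() / 86400)

if waits_in_days:
    print(f"PRs with at least one review: {len(waits_in_days)}")
    print(f"Average wait for first review: {sum(waits_in_days) / len(waits_in_days):.1f} days")
    print(f"Longest wait: {max(waits_in_days):.1f} days")
```

Publish numbers like these somewhere visible and a contributor at least knows whether a month of silence is normal for the project or a sign their patch has fallen through the cracks.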

Okay, I’m late for an appointment, but really wanted to share that link and write something about it.

NB: Apologies if you’ve already seen this. I accidentally published this as a page, not a post, on August 24th, so it escaped most people’s view.