Tag Archives: open source

Mozillians: Announcing Community Metrics DashboardCon – January 21, 2014

Please read background below for more info. Here’s the skinny.

What

A one-day mini-conference for Mozillians about community metrics and dashboards, held in San Francisco on January 21st and 22nd, 2014 (originally slated for Vancouver on January 14th; remote participation possible).

Update: Apologies for the change of date and location. This event has sparked a lot of interest, so we had to move it in order to manage the number of people.

Why?

It turns out that in the past 2-3 years a number of people across Mozilla have been tinkering with dashboards and metrics in order to assess community contributions, effectiveness, bottlenecks, performance, etc… For some people this is their job (looking at you, Mike Hoye); for others it is something they arrived at by necessity (looking at you, SUMO group); and for others it is just a fun hobby or experiment.

Certainly I (and I believe my co-collaborators Liz Henry and Mike Hoye) think metrics in general and dashboards in particular can be powerful tools, not just to understand what is going on in the Mozilla community, but as a way to empower contributors and reduce the friction of participating at Mozilla.

And yet as a community of practice, I’m not sure those interested in converting community metrics into some form of measurable output have ever gathered together. We’ve not exchanged best practices, aligned around a common nomenclature or discussed the impact these dashboards could have on the community, management and other aspects of Mozilla.

Such an exercise, we think, could be productive.

Who

Who should come? Great question. Pretty much anyone who is playing around with metrics around community, participation, or something parallel at Mozilla. If you are interested in participating, please sign up here.

Who is behind this? I’ve outlined more in the background below, but this event is being hosted by myself, Mike Hoye (engineering community manager) and Liz Henry (bugmaster).

Goal

As you’ve probably gathered, the goals are to:

  • Get a better understanding of what community metrics and dashboards exist across Mozilla
  • Learn about how such dashboards and metrics are being used to engage, manage or organize communities and/or influence operations
  • Exchange best practices around both the development and the use/application of dashboards and metrics
  • Stretch goal – begin to define some common definitions for metrics that exist across Mozilla, to enable portability of metrics across dashboards.

Hope this sounds compelling. Please feel free to email or ping me if you have questions.

—–

Background

I know that my co-collaborators – Mike Hoye and Liz Henry – have their own reasons for ending up here. I, as many readers know, am deeply interested in understanding how open source communities can combine data and analytics with negotiation and management theory to better serve their members. This was the focus of my keynote at OSCON in 2012 (posted below).

For several years I tried, with minimal success, to create some dashboards that might provide an overview of the community’s health as well as diagnose problems that were harming growth. Despite my own limited success, it has been fascinating to see how more and more individuals across Mozilla – some developers, some managers, others just curious observers – have been scraping data they control or can access to create dashboards to better understand what is going on in their part of the community. The fact is, there are probably at least 15 different people running community-oriented dashboards across Mozilla – and almost none of us are talking to one another about it.
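To give a sense of how lightweight these experiments often are, here’s a toy sketch of my own (not any of the dashboards mentioned above) of the kind of raw signal they usually start from – unique committers per month, pulled straight out of a git repository:

```python
# A toy community-metrics signal: unique committers per month in a git repo.
# Assumes git is installed and repo_path points at a checkout.
import subprocess
from collections import defaultdict

def committers_by_month(repo_path="."):
    log = subprocess.run(
        ["git", "-C", repo_path, "log",
         "--pretty=format:%ae|%ad", "--date=format:%Y-%m"],
        capture_output=True, text=True, check=True,
    ).stdout
    months = defaultdict(set)
    for line in log.splitlines():
        email, month = line.rsplit("|", 1)
        months[month].add(email)
    return {month: len(emails) for month, emails in sorted(months.items())}

if __name__ == "__main__":
    for month, count in committers_by_month().items():
        print(month, count)
```

Fifteen people building fifteen variations on that theme, without talking to each other, is exactly the duplication this event is meant to address.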

At the Mozilla Summit in Toronto, after speaking with Mike Hoye (engineering community manager) and Liz Henry (bugmaster), I proposed that we do a low-key mini-conference to bring together the various Mozilla stakeholders in this space. Each of us would love to know what others at Mozilla are doing with dashboards and to understand how they are being used. We figured if we wanted to learn from others who were creating and using dashboards and community metrics data – they probably do too. So here we are!

In addition to Mozillians, I’d also love to invite an old colleague, Diederik van Liere, who looks at community metrics for the Wikimedia Foundation, as his insights might also be valuable to us.

http://www.youtube.com/watch?v=TvteDoRSRr8

Mission Driven Orgs: Don’t Alienate Alumni, Leverage Them (I’m looking at you, Mozilla)

While written for Mozilla, this piece really applies to any mission-driven organization. In addition, if you are media, please don’t claim this is written by Mozilla. I’m a contributor, and Mozilla is at its best when it encourages debate and discussion. This post says nothing about official Mozilla policy, and I’m sure there are Mozillians who will agree and disagree with me.

The Opportunity

Mozilla is an amazing organization. With a smaller staff, and aided by a community of supporters, it not only competes with the Goliaths of Silicon Valley but uses its leverage whenever possible to fight for users’ rights. This makes it simultaneously a world leading engineering firm and, for most who work there, a mission driven organization.

That was on full display this weekend at the Mozilla Summit, taking place concurrently in Brussels, Toronto and Santa Clara. Sadly, so was something else. A number of former Mozillians, many of whom have been critical to the organization and community, were not participating. They either weren’t invited, or did not feel welcome. At times, it’s not hard to see why:

[Image: “You chose Facebook”]

Again this is not an official Mozilla response. And that is part of the problem. There has never been much of an official or coordinated approach to dealing with former staff and community members. And it is a terrible, terrible lost opportunity – one that hinders Mozilla from advancing its mission in multiple ways.

The main reason is this: the values we Mozillians care about may be codified in the Mozilla Manifesto, but they don’t reside there. Nor do they reside in a browser, or even in an organization. They reside in us. Mozilla is about creating power by fostering a community of people who believe in and advocate for an open web.

Critically, the more of us there are, the stronger we are. The more likely we will influence others. The more likely we will achieve our mission.

And power is precisely what many of our alumni have in spades. Given Mozilla’s success, its brand, and its global presence, Mozilla’s contributors (both staff and volunteers) are sought-after – from startups to the most influential companies on the web. This means there are Mozillians influencing decisions – often at the most senior levels – at companies that Mozilla wants to influence. Even if these Mozillians only injected 5% of what Mozilla stands for into their day-to-day lives, the web would still be a better place.

So it begs the question: what should Mozilla’s alumni strategy be? Presently, from what I have seen, Mozilla has no such strategy. Often, by accident or neglect, alumni are left feeling guilty about their choice. We let them – and sometimes prompt them to – cut their connections not just with Mozilla but (more importantly) with the personal connection they felt to the mission. This at a moment when they could be some of the most important contributors to our mission. To say nothing of continuing to contribute their expertise in coding, marketing or any number of other skills they may have.

As a community, we need to accept that as amazing as Mozilla (or any non-profit) is, most people will not spend their entire career there nor volunteer forever. Projects end. Challenges get old. New opportunities present themselves. And yes, people burn out on the mission – which doesn’t mean they no longer believe in it – they are just burned out. So let’s not alienate these people, let’s support them. They could be one of our most important advantages. (I mean, even McKinsey keeps an alumni group, and that is just so they can sell to them… we can offer so much more meaning than that. And they can offer us so much more than that.)

How I would do it

At this point, I think it is too late to start a group and hope people will come. I could be wrong, but I suspect many feel – to varying degrees – alienated. We (Mozilla) will probably have to do more than just reach out a hand.

I would find three of the most respected, most senior Mozillians who have moved on and I’d reach out privately and personally. I’d invite them to lunch individually. And I’d apologize for not staying more connected with them. Maybe it is their fault, maybe it is ours. I don’t care. It’s in our interests to fix this, so let’s look inside ourselves and apologize for our contribution as a way to start down the path.

I’d then ask them if they would be willing to help oversee an alumni group, and if they would reach out to their networks and, with us, bring these Mozillians back into the fold.

There is ample opportunity for such a group. They could be hosted once a year and be shown what Mozilla is up to and what it means for the companies they work for. They could open doors to C-suite offices. They could mentor emerging leaders in our community and they could ask for our advice as they build new products that will impact how people use the web. In short, they could be contributors.

Let’s get smart about cultivating our allies – even those embedded in organizations we don’t completely agree with. Let’s start thinking about how we tap into, and help keep alive, the values that made them Mozillians in the first place, and find ways to help them be effective in promoting those values.

OSCON Community Management Keynote Video, Slides and some Bonus Material

I want to thank everyone who came to my session and who sent me wonderful feedback on both the keynote and the session. I was thrilled to see ZDNet wrote a piece about the keynote, and to have practitioners, such as Sonya Barry, the Community Manager for Java, write things like this about the longer session:

Wednesday at OSCON we kicked off the morning with the opening plenaries. David Eaves’ talk inspired me to attend his longer session later in the day – Open Source 2.0 – The Science of Community Management. It was packed – in fact the most crowded session I’ve ever seen here. People sharing chairs, sitting on every available spot on the floor, leaning up against the back wall and the doors. Tori did a great writeup of the session, so I won’t rehash, but if you haven’t, you should read it – What does this have to do with the Java Community? Everything. Java’s strength is the community just as much as the technology, and individual project communities are so important to making a project successful and robust.

That post pretty much made my day. It’s why we come to OSCON, to hopefully pass on something helpful, so this conference really felt meaningful to me.

So, to be helpful I wanted to lay out a bunch of the content for those who were and were not there in a single place, plus a fun photo of my little guy – Alec – hanging out at #OSCON.

A YouTube video of the keynote is now up – and I’ve posted my slides here.

In addition, I did an interview in the O’Reilly booth; if it goes up on YouTube, I’ll post it.

There is no video of my longer session, formally titled Open Source 2.0 – The Science of Community Management but informally titled Three Myths of Open Source Communities. However, Jeff Longland helpfully took these notes, and I’ll try to rewrite them as a series of blog posts in the near future.

Finally, I earlier linked to some blog posts I’ve written on open source communities and on open source community management, as these are a deeper dive into some of the ideas I shared.

Some other notes about OSCON…

If you didn’t catch Robert “r0ml” Lefkowitz’s talk – How The App Store Killed Free Software, And Why We’re OK With That – do try to see if an audio copy can be tracked down. Contrary to some predictions it was neither trolling nor link bait, but a very thoughtful talk that I did not entirely agree with, and that has left me with many, many things to think about (the sign of a great talk).

Jono Bacon, Brian Fitzpatrick and Ben Collins-Sussman are all mensches of the finest type – I’m grateful for their engagement and support, given I’m late arriving at a party they all started. While you are at it, consider buying Brian and Ben’s new book – Team Geek: A Software Developer’s Guide to Working Well with Others.

Also, if you haven’t watched Tim O’Reilly’s opening keynote, The Clothesline Paradox and the Sharing Economy, take a look. My favourite part is him discussing how we break down the energy sector and claim “solar” only provides us with a tiny fraction of our energy mix (around the 9-minute mark). Of course, pretty much all energy is solar, from the stuff we count (oil, hydroelectric, etc… – it’s all made possible by solar) to the stuff we don’t count, like growing our food. Loved that.

Oh, and this Ignite talk on Cryptic Crosswords by Dan Bentley, from OSCON last year, remains one of my favourites. I didn’t get to catch his talk this year on why the metric system sucks – but am looking forward to seeing it once it is up on YouTube.

Finally, because I’m a sucker dad, here’s an early attempt to teach my 7-month-old chess while hitting the OSCON booth hall. As his tweet says, “Today I may be a mere pawn, but tomorrow I will be the grandmaster.”

[Photo: Alec at the chess board]

Weaving Foreign Ministries into the Digital Era: Three ideas

Last week I was in Ottawa giving a talk at the Department of Foreign Affairs about how technology, new media and open innovation will impact the department’s work internally, across Ottawa and around the world.

While there is lots to share, here are three ideas I’ve been stewing on:

Keep more citizens safe when abroad – better danger zone notification

Some people believe that open data isn’t relevant to departments like Foreign Affairs or the State Department. Nothing could be further from the truth.

One challenge the department has is getting Canadians to register with it when they visit or live in a country its travel reports label as problematic for travel (sample here). As you might suspect, few Canadians register with the embassy, as they are likely not aware of the program, or they travel a lot and simply don’t get around to it.

There are other ways of tackling this problem that might yield broader participation.

Why not turn the Travel Report system into open data with an API? I’d tackle this by approaching a company like TripIt. Every time I book an airplane ticket or a hotel, I simply forward TripIt the reservation, which they scan and turn into events that then automatically appear in my calendar. Since they scan my travel plans, they also know which country, city and hotel I’m staying in… they also know where I live and could easily ask me for my citizenship. Working with companies like TripIt (or Travelocity, Expedia, etc…), DFAIT could co-design an API into the department’s travel report data that would be useful to them. Specifically, I could imagine that if TripIt could query all my trips against those reports, then any time they noticed I was traveling somewhere the Foreign Ministry has labelled “exercise a high degree of caution” or worse, TripIt could ask me if I’d be willing to let them forward my itinerary to the department. That way I could register my travel automatically, making the service more convenient for me and getting the department more of the information it believes to be critical.
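To make the idea concrete, here’s a rough sketch of the kind of check a TripIt-style service might run against such an API. To be clear: the endpoint, JSON fields and advisory level names below are all invented for illustration – no such API exists (yet):

```python
# Hypothetical sketch only: the URL shape, JSON fields and advisory level
# names are invented - no such DFAIT API exists (yet).
import json
from urllib.request import urlopen

FLAGGED = {"exercise-high-degree-of-caution", "avoid-all-travel"}  # invented names

def advisory_level(country_code):
    # Invented URL shape, e.g. https://travel.gc.ca/api/advisories/KE
    with urlopen(f"https://travel.gc.ca/api/advisories/{country_code}") as resp:
        return json.load(resp)["level"]

def maybe_offer_registration(itinerary):
    """For each stop in a TripIt-style itinerary, check the advisory level
    and offer to forward the itinerary to the embassy registration service."""
    for stop in itinerary:
        if advisory_level(stop["country"]) in FLAGGED:
            print(f"{stop['country']} is flagged - register this trip with DFAIT? [y/n]")
```

The point is how little glue is needed once the travel reports are machine-readable: the travel service already has the itinerary; the department only has to publish the advisory levels.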

Of course, it might be wise to work with the State Department so that their travel advisories used a similarly structured API (since I assume TripIt will be more interested in the larger US market than the Canadian one). But facilitating that conversation would be nothing but wins for the department.

More bang for buck in election monitoring

One question that arose during my talk came from an official interested in election monitoring. In my mind, one thing the department should be considering is a fund to help local democracy groups spin up installations of Ushahidi in countries with fragile democracies that are gearing up for elections. For those unfamiliar with Ushahidi, it is a platform developed after the disputed 2007 presidential election in Kenya that plotted eyewitness reports of violence, sent in by email and text message, on a Google map.

Today it is used to track a number of issues – but problems with elections remain one of its core purposes. The department should think about grants that would help spin up a Ushahidi install to enable citizens of the country to register concerns and allegations around fraud, violence, intimidation, etc… It could then verify and inspect issues flagged by the country’s citizens. This would allow the department to deploy its resources more effectively and ensure that its work was speaking to concerns raised by citizens.

A Developer version of DART?

One of the most popular programs the Canadian government has around international issues is the Disaster Assistance Response Team (DART). In particular, Canadians have often been big fans of DART’s work purifying water after the Boxing Day tsunami in Asia, as well as its work in Haiti. Maybe the department could have a digital DART team: a group of developers who, in an emergency, could help spin up Ushahidi, FixMyStreet or OpenMRS installations to provide some quick but critical shared infrastructure for Canadians, other countries’ response teams and non-profits. During periods of non-crisis the team could work on these projects or support groups like CrisisCommons or OpenStreetMap, helping contribute to open source projects that can be instrumental in a humanitarian crisis.


Calling all Mozilla Contributors Past & Present

As some friends know, I’ve been working with Mozilla, helping them design an engagement audit – something to enable them to assess how effective they are at engaging and empowering the community. This work has a number of aspects, much of which builds on ideas I’ve blogged about here and spoken about in the last year or so (most recently at DjangoCon and the Drupal Pacific Northwest Summit).

The hardest thing, of course, is getting feedback from volunteer contributors themselves. This group of talented people is dispersed and, unsurprisingly, busy. But they also have the best data about their experience, so capturing it, sharing it, and using it to provide recommendations to help Mozilla is essential.

In pursuit of that goal I’ve worked with a number of staff at Mozilla, and sought the advice of survey expert Peter Loewen, to create a Mozilla Volunteer Contributor Survey.

So…! If you are a Mozilla contributor, or have been in the past, we would be deeply indebted to you if you took the time to fill this out. We are trying to push the survey link into the various networks where we think contributors will see it, but anything you can do to let a fellow Mozillian know about the survey would be great.

Really, really can’t thank anyone who takes this survey enough.

Smarter Ways to Have School Boards Update Parents

Earlier this month the Vancouver School Board (VSB) released an iPhone app that – helpfully – will use push notifications to inform parents about school holidays, parent interviews, and scheduling disruptions such as snow days. The app is okay, if a little clunky to use, and a lot of the data – such as professional days – while helpful in an app, would be even more helpful as an iCal feed parents could subscribe to in their calendars.
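To show how low the technical bar for that is, here’s a minimal sketch of a feed generator – standard-library Python only, with made-up dates and an invented domain. A board would simply publish the output at a stable URL for parents to subscribe to:

```python
# A minimal sketch of the iCal feed idea - stdlib only, sample dates invented.
from datetime import date

EVENTS = [  # (date, summary) - hypothetical sample data
    (date(2012, 2, 24), "Professional Day - no classes"),
    (date(2012, 3, 12), "Spring Break begins"),
]

def build_ics(events):
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//VSB//School Events//EN"]
    for i, (day, summary) in enumerate(events):
        lines += [
            "BEGIN:VEVENT",
            f"UID:event-{i}@vsb.example",          # invented domain
            f"DTSTART;VALUE=DATE:{day:%Y%m%d}",    # all-day event
            f"SUMMARY:{summary}",
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)  # iCal requires CRLF line endings

print(build_ics(EVENTS))
```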

That said, the VSB deserves credit for having the vision to develop an app. Promisingly, the VSB app team hopes to add new features, such as letting parents know about after-school activities like concerts, plays and sporting events.

This is a great innovation and without a doubt, other school boards will want apps of their own. The problem is, this is very likely to lead to an enormous amount of waste and duplication. The last thing citizens want is for every school board to be spending $15-50K developing iPhone apps.

Which leads to a broader opportunity for the Minister of Education.

Were I the Education Minister, I’d have my technology team recreate the specs of the VSB app and put out an RFP for it – but under an open source license and using PhoneGap, so it would work on both iPhone and Android. In addition, I’d ensure it could offer reminders – like we do at recollect.net – so that people could get email or text messages without a smartphone at all.

I would then propose the ministry cover 60% of the development and yearly upkeep costs. The other 40% would be covered by the school boards interested in joining the project. Thus, assuming the app had a development cost of $40K and a yearly upkeep of $5K, if only one school board signed up it would have to pay $16K for the app (a pretty good deal) and $2K a year in upkeep. But if 5 school districts signed up, each would only pay $3.2K in development costs and $400 a year in upkeep. Better still, the more that sign up, the cheaper it gets for each of them. I’d also propose a governance model in which those who contribute money to development would have the right to elect a sub-group to oversee the feature roadmap.
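The sharing rule is simple enough to sketch out, using the numbers above:

```python
# The cost-sharing math from the paragraph above: the ministry covers 60%,
# and participating boards split the remaining 40% evenly.
def per_board_cost(n_boards, dev_cost=40_000, upkeep=5_000, ministry_share=0.60):
    boards_share = 1 - ministry_share
    return dev_cost * boards_share / n_boards, upkeep * boards_share / n_boards

for n in (1, 5, 10):
    dev, up = per_board_cost(n)
    print(f"{n:>2} board(s): ${dev:,.0f} development + ${up:,.0f}/year upkeep each")
# 1 board:  $16,000 development + $2,000/year upkeep
# 5 boards: $3,200 development + $400/year upkeep
```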

Since the code would be open source, other provinces, school districts and private schools could also use the app (although not participate in the development roadmap), and any improvements they made to the code base would be shared back, to the benefit of BC school districts.

Of course, by signing up to the app project, school boards would be committing to ensure their schools shared up-to-date notifications about the relevant information – probably a best practice they should be following anyways. This process change is where the real work lies. However, a simple webform (included in the price) would cover much of the technical side of that problem. Better still, the Ministry of Education could offer its infrastructure for hosting and managing any data the school boards wish to collect and share, further reducing costs and, equally important, ensuring the data was standardized across the participating school boards.

So why should the Ministry of Education care?

First, creating new ways to update parents about important events – like when report cards are issued, so that parents know to ask for them – helps improve education outcomes. That should probably be reason enough, but there are other reasons as well.

Second, it would allow the ministry, and the school boards, to collect some new data: professional day dates, the average number of snow days, the frequency of emergency disruptions, the number of parents in a district interested in these types of notifications. Over time, this data could reveal important information about educational outcomes.

But the real benefit would be in both cost savings and in enabling less well-resourced school districts to benefit from the technological innovation wealthier districts will likely pursue if left to their own devices. Given there are 59 English school districts in BC, if even half of them spent $30K developing their own iPhone apps, almost $1M would collectively be spent on software development. By spending $24K, the ministry would ensure that this $1M instead gets spent on teachers, resources and schools. Equally important, less tech-savvy or well-equipped school districts would be able to participate and benefit.

Of course, if the Vancouver school district were smart, it would open source its app, approach the Ministry of Education and offer it as the basis of such a venture. Doing that wouldn’t just put it at the head of the class; it’d be helping everyone get smarter, faster.

Shared IT Services across the Canadian Government – three opportunities

Earlier this week the Canadian Federal Government announced it will be creating Shared Services Canada, which will absorb the resources and functions associated with the delivery of email, data centres and network services from 44 departments.

These types of shared services projects are always fraught with danger. While they are sometimes successful, they are often disasters: highly disruptive, with little to show in results (and they eventually get unwound). However, I suspect there is a significant amount of savings to be had, and I remain optimistic. With luck, the analogy here is the work outgoing US CIO Vivek Kundra accomplished as he sought to close down and consolidate 800 data centres across the US, which is yielding some serious savings.

So here’s what I’m hoping Shared Services Canada will mean:

1) A bigger opportunity for Open Source

What I’m still more hopeful about – although not overly optimistic – is the role open source could play in the solutions Shared Services Canada will implement. Over on the Drupal site, one contributor claims government officials have been told to hold off buying web content management systems as the government prepares to buy a single solution for use across all departments.

If the government is serious about lowering its costs, it absolutely must rethink its procurement models so that open source solutions can at least be made a viable option. If not, this whole exercise may save the government money, but only of the “we moved from five expensive solutions to one expensive solution” variety.

On the upside, some of that work has clearly taken place. Already there are several federal government websites running on Drupal, such as this Ministry of Public Works website and the NRCan and DND intranets. Moreover, there are real efforts in the open source community to accommodate government. In the United States, OpenPublic has fostered a version of Drupal designed for government needs.

Open source solutions have the added bonus of allowing you the option of using more local talent, which, if stimulus is part of the goal, would be wise. Also, any open source solutions fostered by the federal government could be picked up by the provinces, creating further savings to tax payers. As a bonus, you can also fire incompetent implementors, something that needs to happen a little more often in government IT.

2) More accountability

Ministers Ambrose and Clement are laser-focused on finding savings – pretty much every ministry needs to find 5 or 10% in savings across the board. I also know both speak passionately about managing taxpayers’ dollars: “Canadians work hard for their money and expect our Government to manage taxpayers dollars responsibly. Shared Services Canada will have a mandate to streamline IT, save money, and end waste and duplication.”

Great. I agree. So one of Shared Services Canada’s first acts should be to follow in the footsteps of another Vivek Kundra initiative and recreate his incredibly successful IT Dashboard. Indeed, it was by using the dashboard that Kundra was able to “cut the time in half to deliver meaningful [IT system] functionality and critical services, and reduced total budgeted [Federal government IT] costs by over $3 billion.” Now that’s some serious savings. It’s a great example of how transparency can drive effective organizational change.

And here’s the kicker: the White House open sourced the IT Dashboard (the code can be downloaded here). So while it will require some work to adapt, the software is there and a lot of the heavy lifting has been done. Again, if we are serious about this, the path forward is straightforward.

3) More open data

Speaking of transparency… one place shared services could really come in handy is in creating data warehouses for hosting critical government datasets (ideally in the cloud). I suspect there are a number of important datasets used by public servants across ministries, so getting them on a robust, accessible platform would make a lot of sense. This, of course, would also be an ideal opportunity to engage in a massive open data project: it might be easier to create policy for making the data managed by Shared Services Canada “open.” Indeed, this blog post covers some of the reasons why now is the time to think about that issue.

So congratulations on the big move, everyone, and I hope these suggestions are helpful. Certainly we’ll be watching with interest – we can’t have a 21st-century government unless we have 21st-century infrastructure, and you’re now the group responsible for it.

International Open Data Hackathon – IRC Channel and project ideas

Okay, I’m going to be blogging a lot more about the International Open Data Hackathon over the next few days. Last count had us at 63 cities in 25 countries across 5 continents.

So first and foremost, here are three thoughts/ideas/actions I’m taking right now:

1. Communicating via IRC

First, for those who have been wondering… yes, there will be an IRC channel on Dec 4th (it’s live as of now), and I will try to be on it most of the day.

irc.oftc.net #odhd

This could be a great place for people with ideas or open source projects to share them with others, or for cities that would like to present some of the work they’ve done on the day to find an audience. If, by chance, work on a specific project becomes quite intense on the IRC channel, it may be polite for those working on it to start a project-specific channel, but we’ll cross that bridge on the day.

Two additional thoughts:

2. Sharing ideas

Second, some interesting project brainstorms have been cropping up on the wiki. Others have been blogging about them – like, say, these ideas from Karen Fung in Vancouver.

Some advice for people who have ideas (which is great):

a) describe who the user(s) would be, what the application will do, why someone would use it, and what value they would derive from it.

b) even if you aren’t a coder (like me), lay out what data sets the application or project will need to draw upon

c) use PowerPoint or Keynote to create a visual of what you think the end product should look like!

d) keep it simple. Simple things get done and can always get more complicated. Complicated things don’t get done (and no matter how simple you think it is… it’s probably more complicated than you think).

These were the basic principles I adhered to when laying out the ideas behind what eventually became Vantrash and Emitter.ca.

Look at the original post where I described what I thought a garbage reminder service could look like. Look how closely the draft visual resembles what became the final product… it was way easier for Kevin and Luke (whom I’d never met at the time) to model Vantrash after an image than just a description.

[Image: original garbage app mockup]

[Image: Vantrash screenshot]

3. Some possible projects to localize:

A number of projects have been put forward as initiatives that could be localized. I wanted to highlight a few here:

a) WhereDoesMyMoneyGo?

People could create new instances of the site for a number of different countries. If you are interested, please either ping wdmmg-discuss or wdmmg (at) okfn.org.

Things non-developers could do:

  1. locate the relevant spending data on their government’s websites
  2. write up materials explaining the different budget areas
  3. help with designing the localized site.

b) OpenParliament.ca
If you live in a country with a parliamentary system (or not, and you just want to adapt it) here is a great project to localize. The code’s at github.com/rhymeswithcycle.

Things non-developers can do:

  1. locate all the contact information, Twitter handles, websites, etc… of all the elected members
  2. help with design and testing

c) How’d They Vote
This is just a wonderful example of a site that creates more data that others can use. The APIs coming out of this site save others a ton of work and essentially “create” open data…

d) Eatsure
This app tracks health inspection data on restaurants collected by local health authorities. Very handy. I would love to see someone create a widget or API that companies like Yelp could use to insert this data into restaurant reviews – that would be a truly powerful use of open data (a rough sketch of such a lookup follows below).

The code is here:  https://github.com/rtraction/Eat-Sure
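To illustrate the widget idea, here’s a rough sketch of the lookup a Yelp-style integration might do. The endpoint and field names are invented for illustration – this is not Eat Sure’s actual API:

```python
# Hypothetical inspection-lookup widget; endpoint and fields are invented
# for illustration, not Eat Sure's actual API.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def latest_inspection(restaurant, city):
    query = urlencode({"name": restaurant, "city": city})
    with urlopen(f"https://eatsure.example/api/inspections?{query}") as resp:
        inspections = json.load(resp)
    if not inspections:
        return None
    latest = max(inspections, key=lambda r: r["date"])  # ISO dates sort correctly
    return f"{latest['date']}: {latest['result']}"      # e.g. "2010-11-02: Pass"
```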
Do you have a project you’d like to share with other hackers on Open Data Day? Let me know! I know this list is pretty North America-centric, so I’d love to share some ideas from elsewhere.

Launching datadotgc.ca 2.0 – bigger, better and in the clouds

Back in April of this year we launched datadotgc.ca – an unofficial open data portal for federal government data.

At a time when only a handful of cities had open data portals and the words “open data” were not even being talked about in Ottawa, we saw the site as a way to change the conversation and demonstrate the opportunity in front of us. Our goals were to:

  • Be an innovative platform that demonstrates how government should share data.
  • Create an incentive for government to share more data by showing ministers, public servants and the public which ministries are sharing data, and which are not.
  • Provide a useful service to citizens interested in open data by bringing all the government data together in one place, making it easier to find.

In every way we have achieved these goals. Today the conversation about open data in Ottawa is very different. I’ve demoed datadotgc.ca to the CIOs of the federal government’s ministries and numerous other stakeholders, and an increasing number of people understand that, in many important ways, the policy infrastructure for doing open data already exists, since datadotgc.ca shows the government is already doing open data. More importantly, a growing number of people recognize it is the right thing to do.

Today, I’m pleased to share that thanks to our friends at Microsoft & Raised Eyebrow Web Studio and some key volunteers, we are taking our project to the next level and launching Datadotgc.ca 2.0.

So what is new?

In short, rather than just pointing to the 300 or so data sets that exist on federal government websites, members may now upload datasets to datadotgc.ca, where we can both host them and offer custom APIs. This is made possible because we have integrated Microsoft’s Azure-based Open Government Data Initiative into the website.

So what does this mean? It means people can add government data sets, or even mash up government data sets with their own data, to create interesting visualizations, apps or websites. Already some of our core users have started to experiment with this feature. London, Ontario’s transit data can be found on datadotgc.ca, making it easier to build mobile apps, and a group of us have taken Environment Canada’s facility pollution data, uploaded it, and are using the API to create an interesting app we’ll be launching shortly.
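As a sketch of what building on such an API might look like – note that the dataset path, query string and field names here are invented, so check the datadotgc.ca documentation for the real endpoints:

```python
# Sketch of pulling a hosted dataset through a datadotgc.ca-style API.
# The URL path, query parameter and field names below are invented.
import json
from urllib.request import urlopen

URL = "https://datadotgc.ca/api/facility-pollution?format=json"  # invented path

def top_emitters(n=10):
    with urlopen(URL) as resp:
        facilities = json.load(resp)
    return sorted(facilities, key=lambda f: f["emissions_tonnes"], reverse=True)[:n]

for facility in top_emitters():
    print(facility["facility_name"], facility["emissions_tonnes"])
```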

So we are excited. We still have work to do around documentation and tracking down some more federal data sets we know are out there, but we’ve gone live because nothing helps us develop like having users and people telling us what is, and isn’t, working.

But more importantly, we want to go live to show Canadians, and our governments, what is possible. Again, our goal remains the same – to push the government’s thinking about what is possible around open data by modeling what should be done. I believe we’ve already shifted the conversation – with luck, datadotgc.ca v2 will help shift it further and faster.

Finally, I can never thank our partners and volunteers enough for helping make this happen.

Rethinking Wikipedia contributions rates

About a year ago news stories began to surface that Wikipedia was losing more contributors than it was gaining. These stories were based on the research of Felipe Ortega, who had downloaded and analyzed the data of millions of contributors.

This is a question of importance to all of us. Crowdsourcing has been a powerful and disruptive force, socially and economically, in the short history of the web. Organizations like Wikipedia and Mozilla (at the large end of the scale), and millions of much smaller examples, have destroyed old business models, spawned new industries and redefined ideas about how we can work together. Understanding how these communities grow and evolve is of paramount importance.

In response to Ortega’s research, the Wikimedia Foundation posted a response on its blog that challenged the methodology and offered some clarity:

First, it’s important to note that Dr. Ortega’s study of editing patterns defines as an editor anyone who has made a single edit, however experimental. This results in a total count of three million editors across all languages.  In our own analytics, we choose to define editors as people who have made at least 5 edits. By our narrower definition, just under a million people can be counted as editors across all languages combined.  Both numbers include both active and inactive editors.  It’s not yet clear how the patterns observed in Dr. Ortega’s analysis could change if focused only on editors who have moved past initial experimentation.
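For the data-minded, the difference between the two definitions is just a threshold over per-editor edit counts – a sketch, assuming a flat list of editor IDs with one entry per edit:

```python
# The two "editor" definitions, side by side: Ortega counts anyone with
# >= 1 edit; Wikimedia counts only those with >= 5 edits.
from collections import Counter

def editor_counts(edits):
    """`edits` is an iterable of editor IDs, one entry per edit made."""
    per_editor = Counter(edits)
    any_edit = len(per_editor)                                  # Ortega's definition
    five_plus = sum(1 for c in per_editor.values() if c >= 5)   # Wikimedia's definition
    return any_edit, five_plus
```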

Wikimedia’s point is actually quite fair. But the specifics are less interesting than the overall trend described by the Wikimedia Foundation. It’s worth noting that no open source or peer production project can grow infinitely. There is (a) a finite number of people in the world and (b) a finite amount of work that any system can absorb. At some point participation must stabilize. I’ve tried to illustrate this trend in the graphic below.

[Chart: idealized open source participation lifecycle – rapid growth, then decline, stabilizing above a “maintenance threshold” line]

As luck would have it, my friend Diederik Van Liere was recently hired by the Wikimedia Foundation to help them get a better understanding of editor patterns on Wikipedia – how many editors are joining and leaving the community at any given moment, and over time.

I’ve been thinking about Diederik’s research, and three things come to mind when I look at the above chart:

1. The question isn’t how do you ensure continued growth, nor is it always how do you stop decline. It’s about ensuring the continuity of the project.

Rapid growth should probably be expected of an early-stage open source or peer production project that has LOTS of buzz around it (as Wikipedia had back in 2005). There’s lots of work to be done (so many articles HAVEN’T been written).

Decline may also be reasonable after the initial burst. I suspect many open source projects lose developers after the product moves out of beta. Indeed, some research Diederik and I have done on the Firefox community suggests this is the case.

Consequently, it might be worth inverting his research question. In addition to figuring out participation rates, figure out the minimum critical mass of contributors needed to sustain the project. For example, how many editors does Wikipedia need, at a minimum, to (a) prevent vandals from destroying the current article inventory and, at a maximum, to (b) sustain an article update and growth rate that keeps up with the current traffic (which notably continues to grow significantly)? The purpose of Wikipedia is not to have many or few editors; it is to maintain the world’s most comprehensive and accurate encyclopedia.

I’ve represented this minimum critical mass in the graphic above with a “maintenance threshold” line. Figuring out the metric for that feels more important than participation rates on their own, as such a metric could form the basis for a dashboard that would tell you a lot about the health of the project.

2. There might be an interesting equation describing participation rates

Another thing that struck me was that each open source project may have a participation quotient: a number that describes the amount of participation required to sustain a given unit of work in the project. For example, in Wikipedia, it may be that every new page that is added needs 0.000001 new editors in order to be sustained. If page growth exceeds editor growth (or the community shrinks), at a certain point the project’s size outstrips the capacity of the community to sustain it. I can think of a few variables that might help ascertain this quotient – and I accept it wouldn’t be a fixed number. Change the technologies or rules around participation and you might increase the effectiveness of a given participant (lowering the quotient), or you might make it harder to sustain work (raising the quotient). Indeed, the trend of a participation quotient would itself be interesting to monitor… projects will have to keep finding innovative ways to hold it constant even as the project’s article archive or code base gets more complex.
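Put crudely into code, the bookkeeping might look like this – a sketch of the idea only, with the quotient value invented:

```python
# Sketch of the "participation quotient" idea: q editors are needed per
# article to keep the project above its maintenance threshold.
# The value of q here is invented; the point is the shape of the math.
def maintenance_threshold(articles, q):
    """Minimum active editors needed to sustain `articles` pages."""
    return articles * q

def is_above_threshold(active_editors, articles, q=0.000001):
    return active_editors >= maintenance_threshold(articles, q)

# Tracking how q itself moves over time (editors actually required per
# article) would be the interesting dashboard metric.
```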

3. Finding a test case – study a wiki or open source project in the decline phase

One thing about open source projects is that they rarely die. Indeed, there are lots of open source projects out there that are walking zombies: a small, dedicated community struggles to keep intact and functioning a code base that is much too large for it to manage. My sense is that peer production/open source projects can collapse (would MySpace count as an example?), but they rarely collapse and die.

Diederik suggested that maybe one should study a wiki or open source project that has died. The fact that they rarely do is actually a good thing from a research perspective, as it means the infrastructure (and thus the data about the history of participation) is often still intact – ready to be downloaded and analyzed. By finding such a community we might be able to (a) ascertain what the project’s “maintenance threshold” was at its peak, (b) see how its “participation quotient” evolved (or didn’t evolve) over time and, most importantly, (c) see if there are subtle clues or actions that could serve as predictors of decline or collapse. Obviously, in some cases these might be exogenous forces (e.g. new technologies or processes made the project obsolete), but these could probably be controlled for.

Anyways, hopefully there is lots here for metric geeks and community managers to chew on. These are only some preliminary thoughts so I hope to flesh them out some more with friends.