Category Archives: open source

Free Software and Open-Source Symposium

Friends! I want to make sure everybody and anybody who might be interested knows about the upcoming 6th annual Free Software and Open-Source Symposium in Toronto, this October 25-26th.

What is Open-Source? There is a good definition here.

Non-techies should not be shy. I, for example, will be talking about Community Management as the core competency of open-source projects (and I’m very non-techie; I couldn’t code if my life, quite literally, depended on it). While open source is usually talked about in reference to software, the conference organizers are interested in open systems more generally, and how they can be applied in various fields. I’m interested in open-source public policy (which, if they’ll have me back, I’d like to talk about next year…) and others are interested in its application to theater, meeting design, etc…

For more information I would suggest the blog of David Humphrey, one of the event’s coordinators, where one can read about cool insider info (e.g. prizes) and juicy gossip (e.g. the public, but just, shaming of me for being delinquent in submitting my talk summary).

You can also check out the conference’s webpage, where you can find the agenda, a place to register and other info.

The Free Software and Open Source Symposium
October 25-26th, 2007 – 9:00 a.m. to 5:00 p.m.
Seneca@York Campus, Toronto

The Symposium is a two-day event aimed at bringing together educators, developers and other interested parties to discuss common free software and open source issues, learn new technologies and to promote the use of free and open source software. At Seneca College, we think free and open source software are real alternatives.

more on segmenting open source communities

I wanted to follow up on yesterday’s post about the topology and segmentation of open source communities with one or two additional comments.

My friend Rahul R. reminded me that one critical reason we segment is to allocate time and resources more effectively. In a bell-shaped vision of the open source community (to get this you really have to read yesterday’s post) it would make sense to allocate the bulk of one’s time to the centre (or average) user. But in a power law distribution, with a massive majority of community participants poorly networked to the project, a community faces an important dilemma: consolidate and focus on current community members, or invest in enabling a group so massive it may seem impossible to have an impact.

But as I reflect on it, this segmentation may create a false choice.

Concentrating on the more connected and active members may not be beneficial. A community needs to cultivate a pipeline of future users. Focusing on current community leaders at the expense of future community leaders will damage the project’s long-term viability. More importantly, as discussed yesterday, consolidating and insulating this group may actually create barriers to entry for new community members by saturating the current key members’ relationship capacity.

The reverse, however (concentrating on a mass of passive users and trying to transform them into more active community members), is a daunting task, especially when you are considering a user base of 20-30 million, or even just a beta tester community of 100,000 people. While I think there are a number of exciting things that one can and should do to tackle this segment, it can, and does, feel overwhelming. How can you have an impact?

The key may be to leverage the super users (or super-nodes – those who are more likely to be connected to people throughout the community) to create a culture that is more inclusive and participatory. Over the long term, successful open source communities will be those capable of not only drawing in new members, but networking them with key operators, decision makers and influencers so that new branches of the community are seeded.

I suspect this does not necessarily occur on its own. It requires an explicit strategy, supported by training, all of which must be aligned with the community’s values. This will be especially true as newer entrants will have more diverse backgrounds (and sets of goals and values) than the original community members. Possibly the most effective way to achieve this is to instill in the community’s super-nodes a degree of openness to diversity, and a capacity for relationship cultivation and management, so as to create an open culture that functions well even with a diverse community.

So let’s segment the community, but let’s also use that segmentation to build skills, awareness, etc… in each segment that allow it to contribute to a strategy that transcends each individual segment. For an open source community, I would suggest that, at a minimum, this means offering some training around relationship management, dispute resolution, facilitation and mediation to its super-nodes, i.e. the people most directly shaping the community’s culture.

Open Source Communities – Mapping and Segmentation

I’ve just finished “Linked” by Albert-Laszlo Barabasi (review to come shortly), and the number of applications of his thesis is startling.

A New Map for Open Source Communities

The first that jumps to mind is how nicely the book’s main point provides a map that explains both the growth and structure of open source communities. Most people likely assume that networks (such as an open source community) are randomly organized, with lots of people in the network connected to lots of other people. Most likely, these connections would be haphazard, randomly created (perhaps when two people meet or work together) and fairly evenly distributed.

[Image from Linked: a bell curve distribution of links]

If an open-source community were organized randomly and experienced random growth, participants would join the community and, over time, connect with others and generate new relationships. Some participants would opt to create more relationships (perhaps because they volunteer more time), others would create fewer (perhaps because they volunteer less time and/or stick to working with the same group of people). Over time, the law of averages should balance out active and non-active users.

New entrants would be less active in part because they possess fewer relationships in the community (let’s say 1 or 2 connections). However, these new entrants would eventually become more active as they made new relationships and became more connected. As a result they would join the large pool of average community members who possess an average number of connections (say, 10 other people) and who might be relatively active. Finally, at the other extreme we would find veterans and/or super-active members: a small band of relatively well-connected members who know a great many people (say 60 or even 80).

Map out the above described community and you get a bell curve (taken from the book Linked). A few users (nodes) with weak links and a few better connected than the average. The bulk of the community lies in the middle with most people possessing more or less the same number of links and contributing more or less the same amount as everyone else. Makes sense, right?

Or maybe not. People involved in open-source communities will probably remark that their community’s participation levels do not look like this. This, according to Barabasi, should not surprise us. Many networks aren’t structured this way. The rules that govern the growth and structure of many networks (rules that create what Barabasi terms “scale-free networks”) create something that looks, and acts, very differently.

In the above graph we can talk about the average user (or node) with confidence. And this makes sense… most of us assume that there is such a thing as an average user (in the case of open-source movements, it’s probably a “he,” with a comp-sci background, and an avid Simpsons fan). But in reality, most networks don’t have an average node (or user). Instead they are shaped by what is called a “power law distribution.” This means that there is no “average” peak, but a gradually diminishing curve in which many, many, many small nodes coexist with a few extremely large nodes.

[Image from Linked: a power law distribution of links]

In an open source community this would mean that there are a few (indeed very few, in relation to the community’s size) power users and a large number of less active or more passive users.
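The contrast between the two growth rules can be made concrete with a toy simulation. This is purely my own illustrative sketch (none of it comes from Barabasi or any real community data), assuming a 5,000-member community and three new relationships per newcomer:

```python
import random

random.seed(42)

def random_community(n, links=3):
    """Relationships form between members chosen uniformly at random:
    the 'bell curve' picture, where everyone ends up roughly average."""
    degrees = [0] * n
    for _ in range(n * links):
        a, b = random.randrange(n), random.randrange(n)
        degrees[a] += 1
        degrees[b] += 1
    return degrees

def scale_free_community(n, links=3):
    """Each newcomer prefers already-well-connected members
    (preferential attachment), which produces a few huge hubs."""
    degrees = [1, 1]   # a seed pair of founders
    stubs = [0, 1]     # each member appears once per relationship held
    for new in range(2, n):
        degrees.append(0)
        chosen = [random.choice(stubs) for _ in range(links)]  # hubs picked more often
        for partner in chosen:
            degrees[new] += 1
            degrees[partner] += 1
        stubs.extend(chosen)
        stubs.extend([new] * links)
    return degrees

rand = random_community(5000)
pref = scale_free_community(5000)

# In the random community the busiest member is barely above the mean;
# in the scale-free one a handful of hubs tower over everyone else.
print("random community, busiest member:", max(rand))
print("scale-free community, busiest member:", max(pref))
```

Running a sketch like this, the randomly wired community’s best-connected member holds only a few more links than average, while the preferential-attachment community reliably produces hubs with hundreds of connections, which is exactly the shape of the curve above.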

Applying this description to the Firefox community, we should find the bulk of users at the extreme left: people who, for example, are like me. They use Firefox and have maybe even registered a bug or two on Firefox’s Bugzilla webpage. I don’t know many people in the community and I’m not all that active. To my right are more active members, people who probably do more (maybe beta test or even code) and who are better connected in the community. At the very extreme end are the super-users (or super-nodes). These are people who contribute daily or are like Mike Shaver (bio, blog) and Mike Beltzner (bio, blog): paid employees of the Mozilla Corporation with deep connections into the community.

Indeed, Beltzner’s presentation on the Firefox community (blog post here, presentation here and relevant slides posted below) lists a hierarchy of participation levels that appears to mirror a power law distribution.

[Slide from Beltzner’s presentation: Firefox community participation levels]

I think we can presume that those at the beginning of the slide set (e.g. Beltzner, the 40-member Mozilla Dev Team and the 100 Daily Contributors) are significantly more active and connected within the community than the Nightly Testers, Beta Testers and Daily Users. So the Firefox community (or network) may be more accurately described by a power law distribution.

Implications for Community Management

So what does this mean for open source communities? If Barabasi’s theory of networks can be applied to open source communities, there are at least three issues/ideas worth noting:

1. Scaling could be a problem

If open source communities do indeed look like “scale-free networks” then it may be harder than previously assumed to cultivate (and capitalize on) a large community. Denser “nodes” (e.g. highly networked and engaged participants) may not emerge. Indeed, the existence of a few “hyper-nodes” (super-users) may actually prevent new super-users (i.e. new leaders, heavy participants) from arising, since new relationships will tend to gravitate towards existing hubs.

Paradoxically, the problem may be made worse by the fact that most humans can only maintain a limited number of relationships at any given time. According to Barabasi, new users (or nodes) entering the community (or network) will generally attempt to forge relationships with hub-like individuals (this is, of course, where the information and decision-making resides). However, if these hubs are already saturated with relationships, then these new users will have a hard time forging the critical relationships that would solidify their connection with the community.

Indeed, I’ve heard of this problem manifesting itself in open source communities. Those central to the project (the hyper-nodes) constantly rely on the same trusted people over and over again. As a result the relationships between these individuals get denser while the opportunities for forging new relationships (by proving yourself capable at a task) with critical hubs diminish.
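The saturation effect can be sketched by extending the same kind of toy simulation with a relationship cap. Everything here is my own hypothetical illustration: the capacity of 30 relationships per person is an arbitrary assumption, not a number from Barabasi or any real project.

```python
import random

random.seed(7)

CAPACITY = 30  # assumed ceiling on relationships one person can maintain

def join_community(n_members, links_each=3, capacity=CAPACITY):
    """Newcomers try to attach to well-connected members first
    (preferential attachment), but members at capacity turn them away."""
    degrees = [1, 1]   # two founders
    turned_away = 0
    for _ in range(n_members - 2):
        degrees.append(0)
        new = len(degrees) - 1
        for _ in range(links_each):
            # preferential choice: weight partners by current connectedness
            partner = random.choices(range(new), weights=degrees[:new])[0]
            if degrees[partner] >= capacity:
                turned_away += 1
                # fall back to anyone with spare capacity, if one exists
                spare = [m for m in range(new) if degrees[m] < capacity]
                if not spare:
                    continue  # nobody can take on the relationship
                partner = random.choice(spare)
            degrees[new] += 1
            degrees[partner] += 1
    return degrees, turned_away

degrees, rebuffed = join_community(2000)
print("relationship attempts that hit a saturated hub:", rebuffed)
print("busiest member holds", max(degrees), "relationships")
```

In runs of this sketch, a large share of newcomers’ first-choice connections bounce off saturated hubs and get redirected to less central members, which is the dynamic described above: the people a newcomer most wants (and needs) to reach are precisely the ones with no relationship capacity left.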

2. Segmentation model

Under a bell curve model of networks it made little sense to devote resources and energy to supporting and helping those who participate least, because they made up a small proportion of the community. Time and energy would be devoted to enabling the average participant, since average participants represented the bulk of the community.

A power law distribution radically alters the makeup of the community. Relatively speaking, there is an incredibly vast number of users/participants who are only passively and/or loosely connected to the community, compared to the tiny cohort of active members. Indeed, as Beltzner’s slides point out: 100,000 Beta Testers and 20-30M users vs. 100 Daily Contributors and 1,000 Regular Contributors.
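Plugging Beltzner’s rough numbers into a couple of lines makes the funnel explicit. The head-counts are the ones cited from his slides; the 25M midpoint for the quoted 20-30M user range and the arithmetic are mine:

```python
# Participation tiers and head-counts cited from Beltzner's slides;
# 25M is my midpoint for the quoted 20-30M daily-user range.
tiers = [
    ("daily users",          25_000_000),
    ("beta testers",            100_000),
    ("regular contributors",      1_000),
    ("daily contributors",          100),
]

# Conversion ratio between each tier and the next one up the food chain
for (outer_name, outer), (inner_name, inner) in zip(tiers, tiers[1:]):
    print(f"{outer_name} -> {inner_name}: roughly 1 in {outer // inner:,}")
```

Roughly 1 user in 250 becomes a beta tester, 1 beta tester in 100 becomes a regular contributor, and 1 regular contributor in 10 becomes a daily contributor: three brutal drop-offs, which is exactly what a power law predicts.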

The million-dollar question is: how do we move people up the food chain? How do we convert users into beta testers, beta testers into contributors, and contributors into daily contributors? Or, as Barabasi might put it: how do we increase node density generally and the number of super-nodes specifically? Obviously Mozilla and others already do this, but segmenting the community (perhaps into the groups laid out by Beltzner) and providing each segment with tools that enable it to not only perform well at that level, but to migrate up the network hierarchy, is essential. One way to accomplish this would be to have more people contributing to a given task; another possibility (one I argue for in an earlier blog post) is to simply open source more aspects of the project, including items such as marketing, strategy, etc…

3. Grease the network’s nodes

Finally, another way to overcome the potential scaling problem of open source is to improve the capacity of hubs to handle relationships, thereby enabling them to a) handle more and/or b) foster new relationships more effectively. This is part of what I was highlighting in my post about relationship management as the core competency of open source projects.

Conclusion

This post attempts to provide a more nuanced topology of open source communities by describing them as scale-free networks. The goal is not to assert that there is some limit to the potential of open source communities, but instead to flag and describe possible structural limitations so as to begin a discussion on how they can be addressed and overcome. My hope is that others will find this post interesting and use its premise to brainstorm ideas for how we can improve these incredible communities.

As a final note, given the late hour, I’m confident there may be a typo or two in the text, possibly even a flawed argument. Please don’t hesitate to point either out; I’d be deeply appreciative. If this was an interesting read you may find (in addition to the aforementioned post on community management) this post on collaboration vs. cooperation in open source communities to be interesting.

Open Source Chamber of Commerce

One of my favourite sessions from last week’s Open Cities unconference was a session Mark Surman proposed around what an Open Source Chamber of Commerce might look like.

So what is an Open Source Chamber of Commerce? Good question. Mark’s initial thinking was…

…to focus and build buzz around the significant volume of open source activity that is quietly (and disconnectedly) happening in Toronto. The number of companies, projects and research labs focused on open source is growing in this city, yet they are spread out across a thousand nooks and crannies. There is no sense of community, no sense of anything bigger. Of course, that’s totally okay on one level. No need to invent community, especially when most people are tapped in globally. However, there is another level where staying disconnected locally represents a missed opportunity to make Toronto a better place to work on open source. (Read Mark’s full post here.)

The session sparked a good debate about what such a Chamber might look like – or even what membership would entail.

The possibility that most excited me was how such a Chamber could serve as a home and a talking shop for corporations or organizations that agree to “donate” a specific number or percentage of their workforces’ hours towards open source projects. Many organizations (indeed most) use open source products, and some of them allow their IT employees to contribute to them (for which there is a good business case). The Chamber could connect CIOs and other representatives from these firms and organizations with one another, as well as with key figures within open-source projects. Up-and-coming open-source communities could pitch their software and community. Members could exchange best practices on how best to contribute to OS projects and on how their organizations can most effectively leverage OS software.

In addition, the Chamber could serve as an interest group: an advocate for infrastructure and policies that would make Toronto a more attractive location for open-source projects and contributors specifically, and IT workers generally. According to the municipal government, Toronto already has the third-largest cluster of Information and Communication Technology in North America (around 90,000 ICT facilities with 100 employees or more), so there is a rich pool to draw from. On top of that, as Mark also notes, there is an interesting group of people affiliated with various open-source projects in Toronto. Why not figure out what Toronto is doing right and amplify it?

(As a brief aside, check out the Seneca FSOSS conference website if you haven’t yet – here’s a group that’s been doing a lot of heavy lifting on this front already.)

Mark and I are simply batting around the idea and would love feedback (positive and critical).

Open Cities – A Success…

Finally beginning to relax after a hectic week of speeches, work and helping out with the Open Cities unconference.

Open Cities was dynamite. It attracted an interesting cross-section of people from the arts, publishing, IT, non-profit and policy sectors (to name a few). This was my first unconference, so the most interesting takeaway was seeing how an openly conducted (un)conference, one with virtually no agenda or predetermined speakers, can work so well. Indeed, it worked better than most conferences I’ve been to. (Of course, it helps when it is being expertly facilitated by someone like Misha G.)

Here’s a picture chart of the agenda coming together mid-morning (thank you to enigmatix1 for the photos)

There was no shortage of panels convened by the participants. I know Mark K. is working on getting details from each of them up on the Open Cities wiki as quickly as possible. Hopefully these can be organized more succinctly in the near future (did I just volunteer myself?).

There were several conversations I enjoyed (I hope to share more on them over the coming days) but I wanted to start with the idea of helping grow the Torontopedia. The conversation was prompted by several people asking why Toronto does not have its own wiki (it does). Fortunately, Himy S., who is creating the aforementioned Torontopedia, was on hand to share in the conversation.

A Toronto wiki, particularly one that leverages Google Maps’ functionality, could provide an endless array of interesting content. Indeed, the conversation about what information could be on such a wiki forked many times over. Two ideas seemed particularly interesting:

The first idea revolved around getting the city’s history up on a wiki. This seemed like an interesting starting point. Such information, geographically plotted using Google Maps, would be a treasure trove for tourists, students and interested citizens. More importantly, there is a huge base of public domain content, hidden away in the city’s archives, that could kick-start such a wiki. The ramp-up costs could be kept remarkably low: the software is open source and the servers would not be that expensive. I’m sure an army of volunteer citizens would emerge to help transfer the images, stories and other media online. Indeed, I’d wager a $100,000 grant from the Trillium Foundation, in connection with the City Archives, Historica and/or the Dominion Institute, as well as some local historical societies, could bring the necessary pieces together. What a small price to pay to give citizens unrestricted access to, and the opportunity to add to, the stories and history of their city.

The interesting part about such a wiki is that it wouldn’t have to be limited to historical data. Using tags, any information about the city could be submitted. As a result, the second idea for the wiki was to get development applications and proposals online so citizens can learn about how or if their neighborhoods will be changing and how they have evolved.

Over the course of this discussion I was stunned to learn that a great deal of this information is kept hidden by what (in comparison to Vancouver at least) is a shockingly secretive City Hall. In Vancouver, development applications are searchable online, printed out on giant billboards (see photo) and posted on the relevant buildings. According to one participant, Toronto has no such requirements! To learn anything about a development proposal you must first hear about it (unclear how this happens) and then go down to City Hall to look at a physical copy of the proposal (it isn’t online?). Oh, and you are forbidden to photocopy or photograph any documents. Heaven forbid people learn about how their neighbourhood might change…

Clearly a wiki won’t solve this problem in its entirety as long as Toronto City Hall refuses to open up access to its development applications. However, collecting the combined knowledge of citizens on a given development will help get more people informed and hopefully enable citizens to better participate in decisions about how their neighbourhood will evolve. It may also create pressure on Toronto City Hall to start sharing this information more freely.

To see more photos go to flickr and search the tags for “open cities.”

Open Cities Unconference tomorrow

Great news! Open Cities has maxed out capacity (at 90 people). Big thank you to The Centre for Social Innovation who’ve been kind enough to host us…

There has been some good media coverage for our humble event… BlogTO talks about it here and shares Will Pate’s and my thoughts, and Boing Boing gave us a shout-out.

For those unable to participate (because you’re busy during the day or are on the waiting list) come join us afterwards for a little BBQ at Fort York in Toronto. The BBQ will likely get going around 5 p.m.

Looking forward to sharing more about the event after the weekend…

Crisis Management? Try Open Source Public Service

Does anyone still believe that government services can’t be designed to rely on volunteers? Apparently so. We continue to build whole systems so that we don’t have to rely on people (take the bus system, for example: it doesn’t rely on constant customer input; indeed, I think it actively discourages it).

So I was struck the other day when I stumbled into an unfortunate situation that reminded me of how much one of our most critical support systems relies on ordinary citizens volunteering their time and resources to provide essential information.

Last Sunday night, through the rearview mirror, I witnessed a terrible car accident.

A block behind me, two cars collided at a 90-degree angle, with one car flipping end over end and landing on its roof in the middle of the intersection.

Although it was late in the evening there were at least 20-30 people on the surrounding streets… and within 5 seconds of the crash I saw the soft glow of over 15 cellphone LCD screens light up the night. Within 60 seconds, I could hear the ambulance sirens.

It was a terrible situation, but also an excellent example of how governments already rely on open systems, even to deliver essential, life-saving services. 911 services rely on unpaid, volunteer citizens to take the time and expend the (relatively low) resources to precisely guide emergency resources. It is an interesting counterpoint to government officials who design systems that pointedly avoid citizen feedback. More importantly, if we trust volunteers to provide information to improve an essential service, why don’t we trust them to provide a constant stream of feedback on other government services?

Open Media

Those interested in how ‘open source’ systems can drive down the costs of establishing a media presence should take a look at The Article 13 Initiative.

By leveraging open-source technologies and providing training, The Article 13 Initiative reduces the barriers to entry into the journalism market and reduces the costs of technology for established players (Article 13 is currently working closely with Rafigui, a French-language journal focused on the youth market). With less money being spent on software, more money can be devoted to other priorities, like reporters and/or other staff.

If you are interested in open source, be it in the arts, policy, software, media, etc…, consider signing up for the Open Cities unconference taking place in Toronto on June 23rd. No one has updated me on how many slots are left, so apologies if it is already full…

OpenCities and Seneca College

As many of you know, I’m deeply interested in open-source systems, and so I was super thrilled when David Humphrey invited me over to Seneca College for a reception at the Centre for Development of Open Technology (CDOT). Who knew such a place existed? And in Toronto no less! There is something in the air around Toronto and open-source systems… why is that?

This is exactly one of the questions those of us planning OpenCities are hoping it will answer… (as our more formal blurb hints at)

What is OpenCities Toronto 2007? Our goal is to gather 80 cool people to ask how do we collaboratively add more open to the urban landscape we share? What happens when people working on open source, public space, open content, mash up art, and open business work together? How do we make Toronto a magnet for people playing with the open meme?

Registration for OpenCities starts today. If you have any questions please feel free to ask in the comment box below or drop me an email. I’m doubly pumped since the whole event will be taking place at the Centre for Social Innovation; I can’t imagine a better space. (If you’re wondering whether I live in Toronto or Vancouver, I don’t blame you; I sometimes wonder myself.)

Don't Ban Facebook – Op-ed in today's G&M

You can download the op-ed here.

The Globe and Mail published an op-ed I wrote today on why the government shouldn’t ban Facebook, but hire it.

The point is that Web 2.0 technologies, properly used, can improve communication and coordination across large organizations and communities. If the government must ban Facebook then it should also hire it to provide a similar service across its various ministries. If not, it risks sending a strong message that it wants its employees to stay in their little boxes.

One thing I didn’t get into in the op-ed is the message this action sends to prospective (younger) employees. Such a ban is a great example of how the government sees its role as manager. Essentially, the public service is telling its employees: “We don’t trust that you will do your job and will waste your (and our) time doing (what we think are) frivolous things.” Who wants to work in an environment where their own boss doesn’t trust them? Does that sound like a learning environment? Does it sound like a fun environment?

Probably not.

—–

Facebook Revisited

DAVID EAVES
SPECIAL TO GLOBE AND MAIL
MAY 17, 2007 AT 12:38 AM EDT

Today’s federal and provincial governments talk a good game about public-service renewal, reducing hierarchy, and improving inter-ministry co-operation. But actions speak louder than words, and our bureaucracies’ instincts for secrecy and control still dominate their culture and frame their understanding of technology.

Last week, these instincts revealed themselves again when several public-service bureaucracies — including Parliament Hill and the Ontario Public Service — banned access to Facebook.

To public-service executives, Facebook may appear to be little more than a silly distraction. But it needn’t be. Indeed, it could be the very opposite. These technology platforms increasingly serve as a common space, even a community, a place where public servants could connect, exchange ideas and update one another on their work. Currently, the public service has a different way of achieving those goals: It’s called meetings, or worse, e-mail. Sadly, as anyone who works in a large organization knows, those two activities can quickly consume a day, pulling one away from actual work. Facebook may “waste time” but it pales in comparison to the time spent in redundant meetings and answering a never-ending stream of e-mails.

An inspired public service shouldn’t ban Facebook, it should hire it.

A government-run Facebook, one that allowed public servants to list their interests, current area of work, past experiences, contact information and current status, would be indispensable. It would allow public servants across ministries to search out and engage counterparts with specialized knowledge, relevant interests or similar responsibilities. Moreover, it would allow public servants to set up networks, where people from different departments, but working on a similar issue, could keep one another abreast of their work.

In contrast, today’s public servants often find themselves unaware of, and unable to connect with, colleagues in other ministries or other levels of government who work on similar issues. This is not because their masters don’t want them to connect (although this is sometimes the case) but because they lack the technology to identify one another. As a result, public servants drafting policy on interconnected issues — such as the Environment Canada employee working on riverbed erosion and the Fisheries and Oceans employee working on spawning salmon — may not even know the other exists.

One goal of public-sector renewal is to enable better co-operation. Ian Green, the Public Policy Forum chair of Public Service Governance, noted in an on-line Globe and Mail commentary (Ensuring Our Public Service Is A Force For Good In The Lives Of Canadians — May 8) that governments face “increasingly complex and cross-cutting issues … such as environmental and health policy.” If improving co-ordination and the flow of information within and across government ministries is a central challenge, then Facebook isn’t a distraction, it’s an opportunity.

Better still, implementing such a project would be cheap and simple. After all, the computer code that runs Facebook has already been written. More importantly, it works, and, as the government is all too aware, government employees like using it. Why not ask Facebook to create a government version? No expensive scaling or customization would be required. More importantly, by government-IT standards, it would be inexpensive.

It would certainly be an improvement over current government online directories. Anyone familiar with the federal government’s Government Electronic Directory Services (GEDS) knows it cannot conduct searches based on interests, knowledge or experience. Indeed, searches are only permissible by name, title, telephone and department. Ironically, if you knew any of that information, you probably wouldn’t need the search engine to begin with.

Retired public servants still talk of a time when ministries were smaller, located within walking distance of one another, and where everyone knew everyone else. In their day — 60 years ago — inter-ministerial problems were solved over lunch and coffee in a shared cafeteria or local restaurant. Properly embraced, technologies like Facebook offer an opportunity to recapture the strengths of this era.

By facilitating communication, collaboration and a sense of community, the public services of Canada may discover what their employees already know: Tools like Facebook are the new cafeterias, where challenges are resolved, colleagues are kept up to date, and inter-ministerial co-operation takes place. Sure, ban Facebook if you must. But also hire it. The job of the public services will be easier and Canadians’ interests will be more effectively served.

David Eaves is a frequent speaker and consultant on public policy and negotiation. He recently spoke at the Association of Professional Executives conference on Public Service Renewal.