Tag Archives: open source

How to make $240M of Microsoft equity disappear

Last week a few press articles described how Google apparently lost to Microsoft in a bidding war to invest in Facebook. (MS won – investing $240M in Facebook)

Did Google lose? I’m not so sure… by “losing” it may have just pulled off one of the savviest negotiations I’ve ever seen. Google may never have been interested in Facebook, only in pumping up its value to ensure Microsoft overpaid.

Why?

Because Google is planning to destroy Facebook’s value.

Facebook – like all social network sites – is a walled garden. It’s like a cellphone company that only allows its users to call people on the same network – for example, if you were a Rogers cellphone user, you wouldn’t be allowed to call your friend who is a Bell cellphone user. In Facebook’s case you can only send notes, play games (like my favourite, scrabblelicious) and share info with other people on Facebook. Want to join a group on Friendster? Too bad.

Social networking sites do this for two reasons. First, if a number of your friends are on Facebook, you’ll also be inclined to join. Once a critical mass of people join, network effects kick in, and pretty soon everybody wants to join.

This is important for reason number two. The more people who join and spend time on their site, the more money they make on advertising and the higher the fees they can charge developers for accessing their user base. But this also means Facebook has to keep its users captive. If Facebook users could join groups on any social networking site, they might start spending more time on other sites – meaning less revenue for Facebook. Facebook’s capacity to generate revenue, and thus its value, therefore depends in large part on two variables: a) the size of its user base; and b) its capacity to keep users captive within its walled garden.

This is why Google’s negotiation strategy was potentially devastating.

Microsoft just paid $240M for a 1.6% stake in Facebook. The valuation was likely based, in part, on the size of Facebook’s user base and the assumption that these users could be kept within the site’s walled garden.

Let’s go back to our cell phone example for a moment. Imagine if a bunch of cellphone companies suddenly decided to let their users call one another. People would quickly start gravitating to those cellphone companies because they could call more of their friends – regardless of which network they were on.

This is precisely the idea behind Google’s major announcement earlier this week. Google launched OpenSocial – a set of common APIs that let developers create applications that work on any social network that chooses to participate. In short, participating social networks will be able to let their users share information with each other and join each other’s groups. More interesting still, MySpace has just announced it will participate in the scheme.

This is a lose-lose story for Facebook. If other social networking sites allow their users to connect with one another, then Facebook’s users will probably drift over to one of these competitors – eroding Facebook’s value. If Facebook decides to jump on the bandwagon and also use the OpenSocial APIs, then its user base will no longer be as captive – also eroding its value.

Either way Google has just thrown a wrench into Facebook’s business model, a week after Microsoft paid top dollar for it.

As such, this could be a strategically brilliant move. In short, Google:

  • Saves the $240M – $1B it might otherwise have invested in Facebook
  • Creates a platform that, by eroding Facebook’s business model, makes Microsoft’s investment much riskier
  • Limits its exposure to an anti-trust case by not dominating yet another online service
  • Creates an open standard in the social network space, making it easier for Google to create its own social networking site later, once a clear, successful business model emerges

Nice move.

Open source fun, Open source problems…

I had a thoroughly enjoyable time at the Free Software and Open Source Symposium (FSOSS) at Seneca College, where I gave my talk on community management as the core competency of open source communities. The audience was really engaged and asked great questions – I just wish we’d had more time.

The talk was actually filmed and can be downloaded, but it is only available as an OGG file, which is large (416MB). Rumour has it the videos may get converted into a smaller, more streamable format in the future. Once the video is available I’ll also post the slides.

Also, I want to thank Coop and Shane for blogging the positive feedback. I’m looking forward to building on and refining the ideas…

One of the key ideas I’m interested in pushing is how “open” open source communities are – and how they can make themselves easier to join. I actually had an interesting experience while at FSOSS that highlighted how subtle this challenge can be.

During one of the lunch breaks Mark Surman and I ran a Birds of a Feather session on Community Management as the Core Competency of Open Source Communities. In the lead up to the session, a leader of a prominent open source community (I knew this because it said so on his name tag) walked up to me and asked:

“Are you running this BoF?” (Birds of a Feather)

Not being hip to the lingo, I replied, “What’s a BoF? I’m not super techie so I don’t know all the terms.”

To which he replied, “Evidently,” and walked away.

And thus ended my first contact with this particular open source community. With its titular leader, no less. Needless to say, it didn’t leave a positive impression.

I’ll admit this is an anecdotal piece of data. But it affirms my thinking that while open source communities may be open, whom they are open to may not be as broad a cross section of the population as we are led to believe (e.g. you’d better already know the lingo and cultural norms of the community).

There is another important lesson here – one that directly impacts the scalability of open source communities. At some point everyone has to have a first contact with a community, and that first impression may be a strong determinant of where they volunteer their time and contribute their free labour. Any good open-source community will probably want to get it right.

The Dunbar number in open source

Those interested in open-source systems (everything from public policy to software) should listen to Christopher Allen’s talk (his blog here) on the Dunbar Number.

Dunbar’s number, which is 150, represents a theoretical maximum size of a group whose members can all maintain a social relationship with one another – the kind of relationship that goes with knowing who each person is and how each person relates socially to every other person.

Malcolm Gladwell brought the Dunbar number into popular discourse when he referenced it in his book The Tipping Point.

However, Allen’s talk tries to nuance the debate. Specifically, he wishes that those who reference the Dunbar number would be more aware that in the research literature, the mean group size of 150 only applies to groups with high incentives to stay together. As examples he cites nomadic tribes, armies, terrorist organizations, mafias, etc… in short, groups in which mutual trust and strong relationships are essential for survival. This is in part because there is a cost members must pay to maintain groups of this size: one must spend 40% of one’s time engaged in “social grooming.” This means sitting around listening to one another, talking, being engaged, etc… Without this social grooming it is difficult to develop and maintain the unstructured trust that holds the group together.

More interestingly, Allen’s research suggests that in modern groups there is a correlation between group satisfaction and group size. Things work well with 3-12 people and again with 25-80, but in between there is a hole. Groups in this “chasm” are too big to use many of the tools (like meetings) that small groups can use, but too small to successfully rely on the tools (such as hierarchies and reporting mechanisms) that allow larger groups to function.

Open source projects (and really any new project) should find this interesting: there is a group-size chasm that must, at some point, be crossed. When I’m less tired I will try to wander over to SourceForge and see if I can plot the size of the projects there, to see whether they scale up nicely against Allen’s graph.
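For anyone who wants to try this before I get around to it, here is a rough sketch in Python of the kind of plot I have in mind. The CSV file and its column name are hypothetical placeholders – you would need to scrape or export per-project contributor counts from SourceForge yourself – but a histogram like this should make it easy to eyeball whether projects bunch up on either side of Allen’s 12-25 person chasm.

    # Rough sketch: plot a histogram of project team sizes and highlight
    # Allen's "chasm". The data file and column name below are made up --
    # you would need to collect the contributor counts yourself.
    import csv
    import matplotlib.pyplot as plt

    sizes = []
    with open("sourceforge_projects.csv") as f:            # hypothetical export
        for row in csv.DictReader(f):
            sizes.append(int(row["num_contributors"]))     # hypothetical column

    plt.hist(sizes, bins=range(0, 160, 5))
    plt.axvspan(12, 25, alpha=0.3, label="Allen's 'chasm' (~12-25 people)")
    plt.xlabel("Active contributors per project")
    plt.ylabel("Number of projects")
    plt.legend()
    plt.show()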

In addition, I’m curious as to whether some softer skills around facilitation would allow groups to function more effectively, even within this “chasm.”

Where are the progressives on Net Neutrality?

I’m excited to see that the Green Party has included a section on Net Neutrality in its platform.

4. Supporting the free flow of information

The Internet has become an essential tool in knowledge storage and the free flow of information between citizens. It is playing a critical role in democratizing communications and society as a whole. There are corporations that want to control the content of information on the internet and alter the free flow of information by giving preferential treatment to those who pay extra for faster service.

Our Vision

The Green Party of Canada is committed to the original design principle of the internet – network neutrality: the idea that a maximally useful public information network treats all content, sites, and platforms equally, thus allowing the network to carry every form of information and support every kind of application.

Green Solutions

Green Party MPs will:

  • Pass legislation granting the Internet in Canada the status of Common Carrier – prohibiting Internet Service Providers from discriminating due to content while freeing them from liability for content transmitted through their systems.

Liberals, NDP… we are waiting…

Free Software and Open-Source Symposium

Friends! I want to make sure everybody and anybody who might be interested knows about the upcoming 6th annual Free Software and Open-Source Symposium in Toronto, this October 25-26th.

What is Open-Source? There is a good definition here.

Non-techies should not be shy… I, for example, will be talking about Community Management as the core competency of Open Source projects (and I’m very non-techie – I couldn’t code if my life, quite literally, depended on it). While open source is usually talked about in reference to software, the conference organizers are interested in open systems more generally, and how they can be applied in various fields. I’m interested in open-source public policy (which, if they’ll have me back, I’d like to talk about next year…) and others are interested in its application to theatre, meeting design, etc…

For more information I would suggest the blog of David Humphrey, one of the event’s coordinators, where one can read about cool insider info (e.g. prizes) and juicy gossip (e.g. the public, but just, shaming of me for being delinquent in submitting my talk summary).

You can also check out the conference’s webpage, where you can find the agenda, a place to register and other info.

The Free Software and Open Source Symposium
October 25-26th, 2007 – 9:00 a.m. to 5:00 p.m.
Seneca@York Campus, Toronto

The Symposium is a two-day event aimed at bringing together educators, developers and other interested parties to discuss common free software and open source issues, learn new technologies and to promote the use of free and open source software. At Seneca College, we think free and open source software are real alternatives.

more on segmenting open source communities

I wanted to follow up on yesterday’s post about the topology and segmentation of open source communities with one or two additional comments.

My friend Rahul R. reminded me that one critical reason we segment is to allocate time and resources more effectively. In a bell-shaped vision of the open source community (to get this you really have to read yesterday’s post) it would make sense to allocate the bulk of time to the centre (or average) user. But in a power law distribution, with a massive majority of community participants poorly networked to the project, a community may face an important dilemma: consolidate and focus on current community members, or invest in enabling a group so massive it may seem impossible to have an impact.

But as I reflect on it, this segmentation may create a false choice.

Concentrating only on the more connected and active members may not be beneficial. A community needs to cultivate a pipeline of future users, and focusing on current community leaders at the expense of future community leaders will damage the project’s long-term viability. More importantly, as discussed yesterday, consolidating and insulating this group may actually create barriers to entry for new community members by saturating the current key members’ relationship capacity.

The reverse, however – concentrating on the mass of passive users and trying to transform them into more active community members – is a daunting task (especially when you are considering a user base of 20-30 million, or even just a beta tester community of 100,000 people). While I think there are a number of exciting things that one can and should do to tackle this segment, it can, and does, feel overwhelming. How can you have an impact?

The key may be to leverage the super users (or super-nodes – those who are more likely to be connected to people throughout the community) to create a culture that is more inclusive and participatory. Over the long term, successful open source communities will be those capable of not only drawing in new members, but networking them with key operators, decision makers and influencers so that new branches of the community are seeded.

I suspect this does not necessarily occur on its own. It requires an explicit strategy, supported by training, all of which must be aligned with the community’s values. This will be especially true as newer entrants will have more diverse backgrounds (and sets of goals and values) than the original community members. Possibly the most effective way to achieve this is to equip the super-nodes within the community with a degree of openness to diversity, and a capacity for relationship cultivation and management, so as to create an open culture that functions well even with a diverse community.

So let’s segment the community – but let’s also use that segmentation to build skills, awareness, etc… in each segment, allowing it to contribute to a strategy that transcends each individual segment. For an open source community, I would suggest that, at a minimum, this means offering some training around relationship management, dispute resolution, facilitation and mediation to its super-nodes – i.e. the people most directly shaping the community’s culture.
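To make that last point a little more concrete, here is a minimal sketch (in Python, using the networkx library) of how a community might identify its super-nodes in the first place – say, from a who-replies-to-whom graph pulled out of the project’s mailing list or code-review history. The edge list below is purely illustrative; in practice you would build it from real interaction data.

    # Minimal sketch: rank community members by degree centrality to get a
    # crude first cut at who the "super-nodes" are -- i.e. the people worth
    # offering facilitation and relationship-management training to.
    # The interaction list is illustrative, not real data.
    import networkx as nx

    interactions = [
        ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
        ("alice", "erin"), ("bob", "carol"), ("frank", "alice"),
    ]

    g = nx.Graph(interactions)
    centrality = nx.degree_centrality(g)

    # Print members from most to least connected.
    for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(f"{person}: {score:.2f}")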

Open Source Communities – Mapping and Segmentation

I’ve just finished “Linked” by Albert-Laszlo Barabasi (review to come shortly) and the number of applications of his thesis is startling.

A New Map for Open Source Communities

The first that jumps to mind is how nicely the book’s main point provides a map that explains both the growth and structure of open source communities. Most people likely assume that networks (such as an open source community) are randomly organized – with lots of people in the network connected to lots of other people. Most likely, these connections would be haphazard, randomly created (perhaps when two people meet or work together) and fairly evenly distributed.

[Image: linked1 (figure from Linked)]

If an open-source community were organized randomly and experienced random growth, participants would join the community and, over time, connect with others and generate new relationships. Some participants would opt to create more relationships (perhaps because they volunteer more time), others would create fewer (perhaps because they volunteer less time and/or stick to working with the same group of people). Over time, the law of averages should balance out active and non-active users.

New entrants would be less active in part because they possess fewer relationships in the community (let’s say 1 or 2 connections). However, these new entrants would eventually become more active as they made new relationships and became more connected. As a result they would join the large pool of average community members who possess an average number of connections (say 10 other people) and who might be relatively active. Finally, at the other extreme we would find veterans and/or super-active members: a small band of relatively well-connected members who know a great many people (say 60 or even 80).

Map out the community described above and you get a bell curve (taken from the book Linked): a few users (nodes) with weak links, a few better connected than the average, and the bulk of the community in the middle, with most people possessing more or less the same number of links and contributing more or less the same amount as everyone else. Makes sense, right?

Or maybe not. People involved in open-source communities will probably remark that participation levels in their community do not look like this. This, according to Barabasi, should not surprise us. Many networks aren’t structured this way. The rules that govern the growth and structure of many networks – rules that create what Barabasi terms “scale-free networks” – create something that looks, and acts, very differently.

In the above graph we can talk about the average user (or node) with confidence. And this makes sense… most of us assume that there is such a thing as an average user (in the case of open-source movements, it’s probably a “he,” with a comp-sci background, and an avid Simpsons fan). But in reality, most networks don’t have an average node (or user). Instead they are shaped by what is called a “power law distribution.” This means that there is no “average” peak, but a gradually diminishing curve, with many, many, many small nodes coexisting with a few extremely large nodes.

[Image: linked2 (figure from Linked)]

In an open source community this would mean that there are a few (indeed very few, in relation to the community’s size) power users and a large number of less active or more passive users.
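If you want to see the difference for yourself rather than take Barabasi’s word for it, the toy simulation below (Python, using the networkx library) compares the degree distribution of a random network with that of a network grown by preferential attachment – the mechanism Barabasi credits for scale-free structure. The sizes and parameters are arbitrary; the point is simply that the second network ends up with a handful of enormous hubs and a long tail of barely connected nodes, while the first clusters around its average.

    # Toy comparison: a random (Erdos-Renyi) network vs. a preferential-
    # attachment (Barabasi-Albert) network. Parameters are arbitrary.
    import collections
    import networkx as nx

    n = 10_000
    random_net = nx.gnp_random_graph(n, p=10 / n)        # ~10 links per node on average
    scale_free_net = nx.barabasi_albert_graph(n, m=5)    # newcomers attach to existing hubs

    def degree_counts(g):
        """Return (degree, number of nodes with that degree) pairs, sorted by degree."""
        return sorted(collections.Counter(d for _, d in g.degree()).items())

    print("random network:", degree_counts(random_net)[:10])
    print("scale-free network:", degree_counts(scale_free_net)[:10])
    print("largest hub, random:", max(d for _, d in random_net.degree()))
    print("largest hub, scale-free:", max(d for _, d in scale_free_net.degree()))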

Applying this description to the Firefox community, we should find the bulk of users at the extreme left – people who, for example, are like me. They use Firefox and have maybe even registered a bug or two on Firefox’s Bugzilla webpage. I don’t know many people in the community and I’m not all that active. To my right are more active members, people who probably do more – maybe beta test or even code – and who are better connected in the community. At the very extreme are the super-users (or super-nodes). These are people who contribute daily or are like Mike Shaver (bio, blog) and Mike Beltzner (bio, blog): paid employees of the Mozilla Corporation with deep connections into the community.

Indeed, Beltzner’s presentation on the Firefox community (blog post here, presentation here and relevant slides posted below) lists a hierarchy of participation levels that appears to mirror a power law distribution.

[Image: mbslide3 (slide from Beltzner’s presentation)]

I think we can presume that those at the beginning of the slide set (e.g. Beltzner, the 40-member Mozilla Dev Team and the 100 Daily Contributors) are significantly more active and connected within the community than the Nightly Testers, Beta Testers and Daily Users. So the Firefox community (or network) may be more accurately described by a power law distribution.

Implications for Community Management

So what does this mean for open source communities? If Barabasi’s theory of networks can be applied to open source communities – there are at least 3 issues/ideas worth noting:

1. Scaling could be a problem

If open source communities do indeed look like “scale-free networks” then it may be harder than previously assumed to cultivate (and capitalize on) a large community. Denser “nodes” (e.g. highly networked and engaged participants) may not emerge. Indeed, the existence of a few “hyper-nodes” (super-users) may actually prevent new super-users (i.e. new leaders, heavy participants) from arising, since new relationships will tend to gravitate towards existing hubs.

Paradoxically, the problem may be made worse by the fact that most humans can only maintain a limited number of relationships at any given time. According to Barabasi, new users (or nodes) entering the community (or network) will generally attempt to forge relationships with hub-like individuals (this is, of course, where the information and decision-making resides). However, if these hubs are already saturated with relationships, then these new users will have a hard time forging the critical relationships that would solidify their connection with the community.

Indeed, I’ve heard of this problem manifesting itself in open source communities. Those central to the project (the hyper-nodes) rely on the same trusted people over and over again. As a result, the relationships between these individuals get denser, while the opportunities for newcomers to forge relationships with critical hubs (by proving themselves capable at a task) diminish.

2. Segmentation model

Under a bell-shaped-curve model of networks it made little sense to devote resources and energy to supporting and helping those who participate least, because they made up a small proportion of the community. Time and energy would be devoted to enabling the average participant, since they represented the bulk of the community’s participants.

A power law distribution radically alters the makeup of the community. Relatively speaking, there is an incredibly vast number of users/participants who are only passively and/or loosely connected to the community compared to the tiny cohort of active members. Indeed, as Beltzner’s slides point out: 100,000 Beta Testers and 20-30M users vs. 100 Daily Contributors and 1,000 Regular Contributors.

The million dollar question is how do we move people up the food chain? How do we convert users into beta testers, and beta testers into contributors and daily contributors? Or, as Barabasi might put it: how do we increase node density generally and the number of super-nodes specifically? Obviously Mozilla and others already do this, but segmenting the community – perhaps into the groups laid out by Beltzner – and providing each segment with tools not only to perform well at that level, but to migrate up the network hierarchy, is essential. One way to accomplish this would be to have more people contributing to a given task; another possibility (one I argue for in an earlier blog post) is to simply open source more aspects of the project, including items such as marketing, strategy, etc…
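Just to give a sense of how steep each step of that food chain is, here is some back-of-the-envelope arithmetic on the tier numbers from Beltzner’s slides cited above. Note the 25M figure is my own assumed midpoint of the 20-30M user range, chosen simply to have one number to divide by.

    # Back-of-the-envelope conversion rates between the participation tiers
    # cited above. The 25M figure is an assumed midpoint of the 20-30M range.
    tiers = [
        ("users", 25_000_000),
        ("beta testers", 100_000),
        ("regular contributors", 1_000),
        ("daily contributors", 100),
    ]

    # Compare each tier with the one above it in the pyramid.
    for (name_a, count_a), (name_b, count_b) in zip(tiers, tiers[1:]):
        rate = count_b / count_a
        print(f"{name_a} -> {name_b}: roughly 1 in {round(1 / rate):,} ({rate:.2%})")

Seen this way, even a tiny improvement in the conversion rate at the bottom of the pyramid would dwarf the existing contributor base – which is why each segment needs tools suited to its level.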

3. Grease the network’s nodes

Finally, another way to overcome the potential scaling problem of open source is to improve the capacity of hubs to handle relationships, thereby enabling them to a) handle more and/or b) foster new relationships more effectively. This is part of what I was highlighting in my post about relationship management as the core competency of open source projects.

Conclusion

This post attempts to provide a more nuanced topology of open source communities by describing them as scale-free networks. The goal is not to assert that there is some limit to the potential of open source communities, but instead to flag and describe possible structural limitations so as to begin a discussion on how they can be addressed and overcome. My hope is that others will find this post interesting and use its premise to brainstorm ideas for how we can improve these incredible communities.

As a final note, given the late hour, I’m confident there may be a typo or two in the text, possibly even a flawed argument. Please don’t hesitate to point either out – I’d be deeply appreciative. If this was an interesting read you may find – in addition to the aforementioned post on community management – this post on collaboration vs cooperation in open source communities interesting as well.

Open Source Chamber of Commerce

One of my favourite sessions from last week’s Open Cities unconference was a session Mark Surman proposed around what an Open Source Chamber of Commerce might look like.

So what is an Open Source Chamber of Commerce? Good question. Mark’s initial thinking was…

…to focus and build buzz around the significant volume of open source activity that is quietly (and disconnectedly) happening in Toronto. The number of companies, projects and research labs focused on open source is growing in this city, yet they are spread out a thousand nooks and crannies. There is no sense of community, no sense of anything bigger. Of course, that’s totally okay on one level. No need to invent community, especially when most people are tapped in globally. However, there is another level where staying disconnected locally represents a missed opportunity to make Toronto a better place to work on open source. (read Mark’s full post here.)

The session sparked a good debate about what such a Chamber might look like – or even what membership would entail.

The possibility that most excited me was how such a Chamber could serve as a home and talking shop for corporations or organizations that agree to “donate” a specific number or percentage of their workforce’s hours towards open source projects. Many organizations (indeed most) use open source products, and some of them allow their IT employees to contribute to them (for which there is a good business case). The Chamber could serve to connect CIOs and other representatives from these firms and organizations with one another, as well as with key figures within open-source projects. Up-and-coming open-source communities could pitch their software and community. Members could exchange best practices on how best to contribute to OS projects and on how their organizations can most effectively leverage OS software.

In addition, the Chamber could serve as an interest group – an advocate for infrastructure and policies that would make Toronto a more attractive location for open-source projects and contributors specifically, and IT workers generally. According to the municipal government, Toronto already has the third largest cluster of Information and Communication Technology in North America (around 90,000 ICT facilities with 100 employees or greater), so there is a rich pool to draw from. On top of that – as Mark also notes – there is an interesting group of people affiliated with various open-source projects in Toronto. Why not figure out what Toronto is doing right and amplify it?

(As a brief aside, check out the Seneca FSOSS conference website if you haven’t yet – here’s a group that’s been doing a lot of heavy lifting on this front already.)

Mark and I are simply batting around the idea and would love feedback (positive and critical).

Open Cities – A Success…

Finally beginning to relax after a hectic week of speeches, work and helping out with the Open Cities unconference.

Open Cities was dynamite – it attracted an interesting cross section of people from the arts, publishing, IT, non-profit and policy sectors (to name a few). This was my first unconference, and so the most interesting takeaway was seeing how an openly conducted (un)conference – one with virtually no agenda or predetermined speakers – can work so well. Indeed, it worked better than most conferences I’ve been to. (Of course, it helps when it is being expertly facilitated by someone like Misha G.)

Here’s a picture of the agenda chart coming together mid-morning (thank you to enigmatix1 for the photos).

There was no shortage of panels convened by the participants. I know Mark K. is working on getting details from each of them up on the Open Cities wiki as quickly as possible. Hopefully these can be organized more succinctly in the near future (did I just volunteer myself?).

There were several conversations I enjoyed – I hope to share more on them over the coming days – but I wanted to start with the idea of helping grow the Torontopedia. The conversation was prompted by several people asking why Toronto does not have its own wiki (it does). Fortunately, Himy S. – who is creating the aforementioned Torontopedia – was on hand to share in the conversation.

A Toronto wiki – particularly one that leverages Google Maps’ functionality – could provide an endless array of interesting content. Indeed, the conversation about what information could be on such a wiki forked many times over. Two ideas seemed particularly interesting:

The first idea revolved around getting the city’s history up on a wiki. This seemed like an interesting starting point. Such information, geographically plotted using Google Maps, would be a treasure trove for tourists, students and interested citizens. More importantly, there is a huge base of public domain content, hidden away in the city’s archives, that could kick-start such a wiki. The ramp-up costs could be kept remarkably low: the software is open source and the servers would not be that expensive. I’m sure an army of volunteer citizens would emerge to help transfer the images, stories and other media online. Indeed, I’d wager a $100,000 grant from the Trillium Foundation, in connection with the City Archives, Historica and/or the Dominion Institute, as well as some local historical societies, could bring the necessary pieces together. What a small price to pay to give citizens unrestricted access to, and the opportunity to add to, the stories and history of their city.

The interesting part about such a wiki is that it wouldn’t have to be limited to historical data. Using tags, any information about the city could be submitted. As a result, the second idea for the wiki was to get development applications and proposals online so citizens can learn about how or if their neighborhoods will be changing and how they have evolved.

Over the course of this discussion I was stunned to learn that a great deal of this information is kept hidden by what – in comparison to Vancouver at least – is a shockingly secretive City Hall. In Vancouver, development applications are searchable online, printed out on giant billboards (see photo) and posted on the relevant buildings. According to one participant, Toronto has no such requirements! To learn anything about a development proposal you must first hear that it exists (unclear how this happens) and then go down to City Hall to look at a physical copy of the proposal (it isn’t online?). Oh, and you are forbidden to photocopy or photograph any documents. Heaven forbid people learn about how their neighbourhood might change…

Clearly a wiki won’t solve this problem in its entirety – not as long as Toronto City Hall refuses to open up access to its development applications. However, collecting the combined knowledge of citizens on a given development will help keep people more informed and hopefully enable citizens to better participate in decisions about how their neighbourhood will evolve. It may also create pressure on Toronto City Hall to start sharing this information more freely.

To see more photos, go to Flickr and search the tags for “open cities.”

Open Cities Unconference tomorrow

Great news! Open Cities has maxed out capacity (at 90 people). Big thank you to The Centre for Social Innovation who’ve been kind enough to host us…

There has been some good media coverage for our humble event… BlogTO talks about it here and shares Will Pate’s and my thoughts, and Boing Boing gave us a shout-out.

For those unable to participate (because you’re busy during the day or are on the waiting list), come join us afterwards for a little BBQ at Fort York in Toronto. The BBQ will likely get going around 5pm.

Looking forward to sharing more about the event after the weekend…