Category Archives: technology

The Fascinating Phenomenon that is Andrew Keen

So Andrew Keen oozed his way on to the Colbert Report the other night and I just caught the video off the website.

For those unfamiliar with Keen, he is the author of “The Cult of the Amateur: How Today’s Internet Is Killing Our Culture,” a book in which he describes how the internet is making arts and media unprofitable. As innumerable blogs, and most notably David Weinberger, have documented, there are so many holes in Keen’s thesis it is hard to know where to start.

On the Colbert Report Keen cycled through some of his regulars. He correctly pointed out that certain types of media and arts are unprofitable because of the internet… and ignored how the internet has simultaneously given rise to a multitude of new ways for artists to earn a living. New business models are emerging. On the hilarious side, Keen briefly tried to argue that it was blogs and the internet – not America’s old media institutions such as CNN and FOX – that allowed the American public to be hoodwinked into believing there were weapons of mass destruction in Iraq. My god, blogs were the only place in the US where an honest discussion about Iraqi WMDs took place! I could go on… but why repeat what so many others have said.

What is interesting to me is that a phenomenon like Keen even exists. Why is it that we repeatedly listen to people who take a simple idea and overextend it in illogical ways? Does anyone else remember the fear-mongering 1994 book – The End of Work – about how “worldwide unemployment will increase as new computer-based and communications technologies eliminate tens of millions of jobs in the manufacturing, agricultural and service sectors”? Keen’s book is similar in that it nicely captures our society’s paranoias and fears about change (this time wrought by “web 2.0”).

Interestingly, I think Keen is a necessary and positive phenomenon. Indeed, Keen exists, is championed and raised up on a pedestal not because he is right, but because he is so glamorously wrong. Societies need Keen so that his arguments can be publicly destroyed in a manner that satisfies even the strongest doubters. In this case, Keen’s book helps advance the larger narrative of how centralized authorities that offer the illusion of certainty are collapsing in the face of probabilistic networks that offer a reality of uncertainty. This narrative is hardly new – books that preceded “The Cult of the Amateur,” like “The Long Tail” and “Free Culture,” more than aptly dealt with Keen’s arguments; they just didn’t do it publicly enough. The public’s appetite to understand and hear this debate is only now peaking and has not yet been sated.

The San Francisco Chronicle is right to say “every good movement needs a contrarian. Web 2.0 has Andrew Keen.” Sometimes being contrarian is about offering a valuable insight and keeping the debate honest. And then, sometimes it’s about encapsulating the worst fears, insecurities, and power dynamics of a dying era so that a society can exorcise them. I guess it’s a role someone has to play, and Keen will be well rewarded for it… as his book is reviewed and sold, online.

(NB: If you are looking for a good book on the internet, skip The Cult of the Amateur and consider David Weinberger’s Small Pieces Loosely Joined or Everything Is Miscellaneous: The Power of the New Digital Disorder; both are excellent.)

Old Media – was the golden era ever that golden?

“There is a country in the world where only 15% of the population has completed high school and just 5% have university degrees. Television sets are something of a rarity, cable is nonexistent; programs are available for only a limited number of hours a day – in black-and-white. The total circulation of weekly newspapers comes in at about 20% of the population. There is only one national magazine. No one has access to the Internet. No one owns a cell phone. The best bets for information seem to be radio, libraries, and access to a few knowledgeable people.

The country? Canada. The year? 1960.”

The Boomer Factor by Reginald Bibby

Friends and proponents of “old media” keep referring to the “good old days” when people read allegedly high quality newspapers. More importantly they lament the decline of the number of people who read newspapers and who are news literate.

At the root of this fear is an assumption that in an earlier era we had a better informed, more active and more engaged citizenry. As a result, our democracy, social cohesion and rates of social engagement were stronger. What I love about the above statistics is how vividly they show that this idealized view of the past is a complete myth. Even at the height of this era, the 1960s, newspaper subscription rates were at a mere 20% of the population.

It is worth noting that today 81% of households and 67.8% of Canadians have high-speed access to the Internet. While not all of them are reading the New York Times or the Globe and Mail, I am willing to bet a good number of them are consuming written, online media of some form. All this raises the question: was the golden age of old media really all that golden?

Social networking vs. Government Silos

As some of you may remember, back in May I published an op-ed on Facebook and government bureaucracy in the Globe and Mail. The response to the article has been significant, including emails from public servants across the country and several speaking engagements. As a result, I’m in the process of turning the op-ed into a full-blown policy article. With luck, somewhere like Policy Options will be interested in it.

So… if anyone has any stories – personal or from the media – they think might be relevant, please do send them along. For example, Debbie C. recently sent me this story about the establishment of A-Space, a social networking site for US intelligence analysts, which is proving to be a very interesting case study.

Small piece from small pieces

After being sidetracked by the final Harry Potter book (which was excellent), weddings (congrats to Irfhan and Gen) and work (Chicago is as good a place as any to find oneself)… I’m finally back to reading David Weinberger’s Small Pieces Loosely Joined and it is fantastic.

Favourite line so far:

It is no accident that the web is distracting. It is the Web’s hyperlinked nature to pull our attention here and there. But it is not at all clear that a new distractedness represents a weakening of our culture’s intellectual powers, a lack of focus, a diversion from the important work that needs to be done, a disruption of our very important schedule. Distraction may instead represent our interests finally finding the type of time that suits it best. Maybe when set free in a field of abundance, our hunger moves us from three meals a day to day-long grazing. Our experience of time on the Web, its ungluing and re-gluing of threads, may be less an artifact of the Web than the Web’s enabling our interest to find its own rhythm. Perhaps the Web isn’t shortening our attention span. Perhaps the world is just getting more interesting.

Amen.

Check out Weinberger’s blogs: Everything is Miscellaneous and Joho the Blog

Speaking of hyperlinked… Harley Y., a frequent reader and a fellow open-source aficionado, noticed in Monday’s post that I was in Chicago. Being in town himself by chance, he dropped me an email and we met up for dinner. Good times and conversation ensued… welcome to the world of the web, it’s not just online anymore.

The iPhone review redux

So I’m down in Chicago for the week (at The Drake!) for work and my colleague has an iPhone. Some of you may remember my negative predictions when the iPhone was first announced.

[Image: iPhone]

I have to admit that the iPhone is one sexy beast. The screen is stunning and many of the features – such as surfing the web and looking at photos – are amazingly clear and fluid. Possibly the coolest feature is how, as you rotate the phone, the image/screen rotates with you so it remains upright. That’s some clever work with the accelerometer…

However, one of the main concerns I flagged back in January was not assuaged: the keyboard, because it is simply part of the screen, is not easy to use. Simply put, your thumb often presses the wrong key, making one feel like the fat Homer Simpson in the episode where he needs the special “fat phone” because his fingers are too big to use a normal push-button phone. Typing out email on the iPhone will likely be too cumbersome and frustrating a process for the business user to do regularly. More importantly, it pales in comparison to the BlackBerry keyboard.

But my criticisms pale in comparison to this incredibly thoughtful critique delivered by Peter S. Magnusson on Yahoo! (and sent to me by Rikia S.). Sadly, Magnusson’s comments are no longer available on Yahoo!, so I’ve reposted them below. I wish I’d been half this clever:

I don’t think the iPhone fundamentally innovates over and above the existing offerings, in the manner that the iPod, the Macintosh, and the Apple II all did in their day. To the contrary, I find that the iPhone reveals that Mr. Jobs, and thus Apple, does not (yet) understand a paradigm of 21st-century computer usage.

At its heart, the iPhone is a projection of the original vision of bringing clunky desktop applications such as e-mail, contact databases, to-do lists, telephones, note taking, and Web browsing to the palm of your hand. Because that is essentially Jobs’s generation – transitioning from the mainframe office environment to the PC-based office – he can’t quite get rid of the notion that a mobile device is nothing but a really small personal computer.

Here’s my theory: Apple can only create really interesting products if Jobs understands the end-user. And Jobs does not understand the 21st-century user. In this century, people don’t send memos to each other.

Today, people chat; they blog; they share multimedia such as pictures, video, and audio; they debate (“flame”) each other on forums; they link with each other in intricate webs; they switch effortlessly between different electronic personae and avatars; they listen to Internet radio; they battle over reputation; they podcast; they do mash-ups; they vote on this, that, and the other; they argue on wiki discussion groups.

With the exception of a minimalist widget for text messaging, the iPhone does not have direct support for any of that. No support for sharing photos, no recording of podcasts, no text communities, no location awareness.

Without going through a computer with a cable, the iPhone doesn’t really communicate very much with anything.

In fact, when you want to communicate with somebody, the method (application) comes before the person. You first have to choose how to communicate (SMS, phone call, e-mail, Web service). Only then can you choose whom you want to talk to. That is a classical “code-centric” view of the world. Apple completely misses the opportunity to present text messaging, visual voice mail, and multimedia e-mails in a coherent view.

This is not a simple lack of features. This is not a “one-dot-oh” effect inherent in a brand-new product category. This is a fundamental lack of understanding of social networking.

What made the iPod a breakthrough product was that Jobs really knows music. He’s an artsy guy. He’s even known to have a really good musical ear. That’s why the iPod was awesome.

Social networking and Web 2.0 are apparently another matter. It’s a generational thing, I guess. Jobs is even older than I am, and I’m having a really hard time keeping up with the times. Plus he’s busier than I am.

What the iPhone should have done was put the social network front and center. It would happily invite the “play” aspect of modern computing, which is increasingly interacting with “work” – personal blogs morph to full-time jobs; YouTube postings lead to advertising agency job offers; entrepreneurial musings lead to investor contacts; and so forth. Chatting and sharing media should have direct support.

But Apple has a unique asset that may yet save the day: the sheer moral support it can draw from the tech community. This past weekend, for example, an entire impromptu developer conference was assembled with the sole purpose of “making the Web a better place for [the iPhone].” So, ironically, social networking technologists are busy arranging themselves such that Apple will, yes, recognize their significance and treat them as first-class citizens. It’s not too late.

I hope Apple listens.

Op-Ed in Yesterday's Toronto Star

Taylor Owen and I published this piece in the Toronto Star on the 10th anniversary of blogging and its impact on news media. (PDF version here)

Blogosphere at age 10 is improving journalism
Jul 30, 2007 04:30 AM
David Eaves & Taylor Owen

Although hard to believe, this month marks the 10th anniversary of blogging, a method for regularly publishing content online.

And what a milestone it is. A recent census of “the blogosphere” counted more than 70 million blogs covering an unimaginable array of topics.

Moreover, every day an astounding 120,000 new blogs are created and 1.5 million new posts are published (about 17 posts per second). Never before have so many contributed so much to our media landscape.

Despite this exponential growth, blogging continues to be misunderstood by both technophiles and technophobes. For the past decade the former have maintained that blogs will replace traditional journalism, ushering in an era of citizen-run media. Conversely, the latter have argued that a wave of amateurs threatens the quality and integrity of journalism – and possibly even democracy.

Both are wrong.

Blogging is not a substitute for journalism. If anything, this past decade shows that blogging and journalism are symbiotic – to the benefit of everyone.

To its many ardent advocates, blogging is displacing traditional journalism. But journalism – unlike blogging – is a practice with a particular set of norms and structures that guide the creation of content. Blogging, despite its unique properties (virtually anyone can reach a potentially enormous audience at little cost), has few, if any, norms.

Consider another, more established medium. Books enable various practices, such as fiction, poetry, science and sometimes journalism, to be disseminated. Do books pose a threat to journalism? Of course not. They do the opposite. Journalistic books, like blogs, increase interest in the subjects they tackle and so promote further media consumption.

The same market forces that apply to books and newspapers apply to blogs.

Readers will judge and elect to read based on the same standard: Does it inform, is it well researched and does it add value?

Because blogs are cheaper to maintain, they will always be numerous, but this makes them neither unique nor more likely to be read regularly.

Ultimately blogs, like books, don’t replace journalism; they simply provide another medium for its dissemination and consumption.

If technophiles mistakenly claim that blogging competes with – and will ultimately replace – traditional journalism, then technophobes’ fear of being swept away by a tsunami of irrelevant and amateurish blogs is equally misplaced.

Traditionalists’ concern with blogging is rooted in the fact that the average blog is of questionable quality. Ask anyone who has looked, and cringed, at a friend’s blog.

But this conclusion is based on a flawed understanding of how people use the Internet. The Internet’s most powerful property is its capacity to connect users quickly to exactly what they are looking for, including high-quality writing on any subject.

This accounts for the tremendous amount of traffic high-quality blogs receive and explains why these bloggers are print journalists’ true competition. As technology expert Paul Graham argues: “Those in the print media who dismiss online writing because of its low average quality miss the point. No one reads the average blog.”

Once this capability of the Internet is taken into account, the significance of blogging shifts. Imagine that only 5 per cent – or 75,000 – of daily posts are journalistic in content, and that only 1 per cent of these are of high quality. That still leaves 750 high-quality posts published every day.

Even by this conservative assessment, the blogosphere still yields a quantity of content that can challenge the world’s best newspapers.

In addition, as a wider range of writers and citizens try blogging, the diversity and quantity of high-quality blogs will continue to increase. Currently, the number of blogs doubles every 300 days. Consequently, the situation is going to get much worse or, depending on your perspective, much better.

As bloggers continue to gain tangible influence in public debates, our understanding of this phenomenon will mature.

And this past decade should serve as a good guide. Contrary to the predictions of both champions and skeptics, blogging has neither displaced nor debased the practice of journalism. If anything, it has made journalism more accurate, democratic and widely read.

Let’s hope blogging’s next decade will be as positive and transformative as the first.

The Trust Economy (or, on why Gen Yers don't trust anyone, except Jon Stewart)

I was listening to Dr. Moira Gunn’s podcast interview of Andrew Keen – author of “The Cult of the Amateur: How Today’s Internet Is Killing Our Culture” – and was struck not only by how Keen’s arguments ate themselves, but also by how he failed to grasp the internet’s emerging trust economy.

Keen is the new internet contrarian. He argues that the anonymous nature of the internet makes it impossible to trust what anyone says. For example: How do you know this blog really is written by David Eaves? And who is David Eaves? Is he even real? And why should you trust him?

According to Keen, the internet’s “cult of anonymity” creates a low-trust environment rife with lies and spin. But the real problem is how this erosion of trust is spilling over and negatively impacting the credibility of “old media” institutions such as newspapers, news television, movie studios, record labels, and publishing houses. With fewer people trusting – and thus consuming – their products, the traditional “trustworthy” institutions are going out of business and leaving the public with fewer reliable news sources.

Let’s put aside the fact that the decline of deference to authority set in long before the rise of the internet and tackle Keen’s argument head on. Is there a decline of trust?

I’d argue the opposite is true. The more anonymous the internet becomes, and the more it becomes filled with lies and spin, the more its users seek to develop ways to assess credibility and honesty. While there may be lots of people saying lots of silly things anonymously, the truth is, not a lot of people are paying attention, and when they do, they aren’t ascribing it very much value. If anything, the internet is spawning a new “trust economy,” one whose currency takes time to cultivate, spreads slowly, is deeply personal, and is easily lost. And who has this discerning taste for media? Generation Y (and X), possibly the most media-literate generation(s) to date.

The simple fact is: Gen Yers don’t trust anyone, be it bloggers, newscasters, reporters, movie stars, etc… This is why “The Daily Show with Jon Stewart” is so popular. Contrary to popular opinion, The Daily Show doesn’t target politics or politicians – they’re simply caught in the crossfire – the real target of Stewart et al. is the media. Stewart (and his legions of Gen Y fans) loves highlighting how the media – especially Keen’s venerable sources of trustworthy news – lie, spin, cheat and err all the time (and fail to report on the lying, spinning, cheating and errors of those they cover). In short, The Daily Show is about media literacy, and that’s why Gen Yers eat it up.

In contrast, what is being lost is the “blind trust” of a previous era. What Keen laments isn’t a decline in trust, but the loss of a time when people outsourced trust to an established elite who filtered the news, assessed what was important, and decided what was true. And contrary to Keen’s assertions, those who struggle with this shift are not young people. Rather, it is the generation that grew up without the internet and lacks this media literacy for whom old media’s fallibility is being made transparent – sometimes for the first time. I recently encountered an excellent example of this while speaking to a baby boomer (a well-educated PhD to boot) who was persuaded Conrad Black was innocent because his news source for the trial was Mark Steyn (someone, almost literally, on Black’s payroll). He blindly trusted the Maclean’s brand to deliver him informed and balanced news coverage, a trust that a simple Wikipedia search might have revealed as misplaced.

Is there a decline in trust? Perhaps of a type. But it is “blind trust” that is in decline. A new generation of media literates is emerging who, as Dr. Gunn termed it, “know that it’s Julia Roberts’s face, and someone else’s body, on the Pretty Woman posters.” And this skepticism is leading them on their own quest for trust mechanisms. Ironically, it is this very fact that makes Keen’s concerns about old media unfounded. This search for trust may kill off some established but untrustworthy “old media” players, but it will richly reward established brands that figure out how to create a more personal relationship with their readers.

The end of TV and the end of CanCon?

A few weeks ago I blogged about how the arrival of Joost could eventually require the rethinking of Canadian content rules (CanCon).

For those unfamiliar with CanCon, it is a policy, managed (I believe) by Heritage Canada and enforced by Canada’s broadcasting regulator, the Canadian Radio-television and Telecommunications Commission (CRTC), that establishes a system of quotas to ensure a certain amount of Canadian programming (e.g. music, TV) is broadcast within Canada.

In layman’s terms: CanCon ensures that Canadian radio and TV stations broadcast at least some Canadian content. This can be good – making stars out of artists that might not otherwise have received airplay – think Barenaked Ladies. And it can be bad, making (usually temporary) stars out of artists that should never have received airplay – think Snow.

Well, I’ve been allowed to serve as a Joost beta tester. After getting my email invitation last week, I downloaded a copy.

In essence Joost is like YouTube, but bigger, faster, and sleeker. It’s as though Apple’s design team revamped YouTube from the ground up and, while they were at it, grabbed themselves some partners to provide more professional content.

But what makes Joost so interesting is how it’s organized. Joost feels like on-demand TV, with content divided into “categories” – such as “documentary films” – and subdivided into “channels” – such as the “Indieflix channel” and the “Witness channel.” There is already a fair amount of content available, including a number of hour-long (or longer) documentaries that are worth watching. (I can’t WAIT until Frontline has a channel up and running. I’d love to be able to watch any Frontline episode, anywhere, anytime, on a full screen.)

So what happens to Canadian content rules when anyone, anywhere can create and distribute content directly to my computer, and eventually, my TV? At this point, the only options left appear to be a) give up, or b) regulate content on the internet. Problematically, regulating internet content and access may be both impossible (even China struggles with this policy objective) and unpopular (I hope you’re as deeply uncomfortable as I am with the government regulating internet content).

The internet has (so far) enabled users to vastly expand the number of media sources available to them, and even create their own media. This has been a nightmare for “traditional media” such as newspapers and television stations, whose younger market demographic has significantly eroded. As a result, these same forces are eroding the government’s capacity to control what Canadians watch.

Which brings us back to option (a). At worst, CanCon is going the way of the Dodo – it will be too difficult to implement and maintain. Indeed, a crisis in cultural policy may be looming. On the bright side, however, the internet enables ordinary Canadians to create their own media (blogs, podcasts and now even videos) and distribute it over the internet, across the country and around the world. This is a better outcome than CanCon – which essentially supports large, established media conglomerates who do Canadian content out of necessity, not passion – could ever have hoped for. Ordinary Canadians may now be in the driver’s seat in creating content. That is a good outcome. Let’s hope any policy that replaces CanCon bears this in mind.

Open Source Communities – Mapping and Segmentation

I’ve just finished “Linked” by Albert-Laszlo Barabasi (review to come shortly) and the number of applications of his thesis is startling.

A New Map for Open Source Communities

The first that jumps to mind is how nicely the book’s main point provides a map that explains both the growth and structure of open-source communities. Most people likely assume that networks (such as an open-source community) are randomly organized – with lots of people in the network connected to lots of other people. Most likely, these connections would be haphazard, randomly created (perhaps when two people meet or work together) and fairly evenly distributed.

[Image: bell curve of node links, from Linked]

If an open-source community were organized randomly and experienced random growth, participants would join the community and, over time, connect with others and generate new relationships. Some participants would opt to create more relationships (perhaps because they volunteer more time), others would create fewer (perhaps because they volunteer less time and/or stick to working with the same group of people). Over time, the law of averages should balance out active and non-active users.

New entrants would be less active, in part because they possess fewer relationships in the community (let’s say 1 or 2 connections). However, these new entrants would eventually become more active as they made new relationships and became more connected. As a result they would join the large pool of average community members who possess an average number of connections (say 10 other people) and who might be relatively active. Finally, at the other extreme we would find veterans and/or super-active members: a small band of relatively well-connected members who know a great many people (say 60 or even 80).

Map out the above-described community and you get a bell curve (taken from the book Linked): a few users (nodes) with weak links and a few better connected than the average. The bulk of the community lies in the middle, with most people possessing more or less the same number of links and contributing more or less the same amount as everyone else. Makes sense, right?

Or maybe not. People involved in open-source communities will probably remark that their community’s participation levels do not look like this. This, according to Barabasi, should not surprise us. Many networks aren’t structured this way. The rules that govern the growth and structure of many networks – rules that create what Barabasi terms “scale-free networks” – create something that looks, and acts, very differently.

In the above graph we can talk about the average user (or node) with confidence. And this makes sense… most of us assume that there is such a thing as an average user (in the case of open-source movements, it’s probably a “he,” with a comp-sci background, and an avid Simpsons fan). But in reality, most networks don’t have an average node (or user). Instead they are shaped by what is called a “power law distribution.” This means that there is no “average” peak, but a gradually diminishing curve with many, many, many small nodes coexisting with a few extremely large nodes.

[Image: power law distribution of node links, from Linked]

In an open source community this would mean that there are a few (indeed very few, in relation to the community’s size) power users and a large number of less active or more passive users.
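To make the two shapes concrete, here is a minimal Python sketch – my own illustration, not code from Linked, with all names and parameters invented for the example – that grows a randomly wired network and a preferential-attachment network of the same size and compares their largest hubs.

```python
import random
from collections import Counter

def random_network(n, m):
    """Wire m edges uniformly at random among n nodes (the bell-curve case)."""
    degree = Counter()
    for _ in range(m):
        a, b = random.sample(range(n), 2)
        degree[a] += 1
        degree[b] += 1
    return degree

def preferential_network(n, edges_per_node=2):
    """Grow a network one node at a time, attaching each newcomer to
    existing nodes with probability proportional to their degree
    (Barabasi's "scale-free" case)."""
    targets = [0, 1]                 # endpoint pool; each node appears once per edge it holds
    degree = Counter({0: 1, 1: 1})
    for new in range(2, n):
        for _ in range(edges_per_node):
            old = random.choice(targets)   # degree-weighted pick
            degree[new] += 1
            degree[old] += 1
            targets.extend([new, old])
    return degree

random.seed(42)
n = 2000
rand_deg = random_network(n, m=2 * n)
pref_deg = preferential_network(n)

# The random network's busiest node stays within a few multiples of the
# average; the preferential network produces a handful of far larger hubs.
print(max(rand_deg.values()), max(pref_deg.values()))
```

Run it with a few different seeds: the random network’s maximum degree barely moves, while the preferential network reliably produces hubs an order of magnitude above the average – the “no average node” shape described above.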

Applying this description to the Firefox community, we should find the bulk of users at the extreme left: people who – for example – are like me. They use Firefox and have maybe even registered a bug or two on Firefox’s Bugzilla webpage. I don’t know many people in the community and I’m not all that active. To my right are more active members, people who probably do more – maybe beta test or even code – and who are better connected in the community. At the very extreme end are the super-users (or super-nodes). These are people who contribute daily or are like Mike Shaver (bio, blog) and Mike Beltzner (bio, blog): paid employees of the Mozilla Corporation with deep connections into the community.

Indeed, Beltzner’s presentation on the Firefox community (blog post here, presentation here and relevant slides posted below) lists a hierarchy of participation levels that appears to mirror a power law distribution.

[Slide: Firefox community participation levels, from Beltzner’s presentation]

I think we can presume that those at the beginning of the slide set (e.g. Beltzner, the 40-member Mozilla Dev Team and the 100 Daily Contributors) are significantly more active and connected within the community than the Nightly Testers, Beta Testers and Daily Users. So the Firefox community (or network) may be more accurately described by a power law distribution.

Implications for Community Management

So what does this mean for open source communities? If Barabasi’s theory of networks can be applied to open source communities, there are at least three issues/ideas worth noting:

1. Scaling could be a problem

If open source communities do indeed look like “scale-free networks” then it may be harder than previously assumed to cultivate (and capitalize on) a large community. Denser “nodes” (e.g. highly networked and engaged participants) may not emerge. Indeed, the existence of a few “hyper-nodes” (super-users) may actually prevent new super-users (i.e. new leaders, heavy participants) from arising, since new relationships will tend to gravitate towards existing hubs.

Paradoxically, the problem may be made worse by the fact that most humans can only maintain a limited number of relationships at any given time. According to Barabasi, new users (or nodes) entering the community (or network) will generally attempt to forge relationships with hub-like individuals (this is, of course, where the information and decision-making resides). However, if these hubs are already saturated with relationships, then these new users will have a hard time forging the critical relationships that would solidify their connection with the community.

Indeed, I’ve heard of this problem manifesting itself in open source communities. Those central to the project (the hyper-nodes) constantly rely on the same trusted people over and over again. As a result the relationships between these individuals get denser while the opportunities for forging new relationships (by proving yourself capable at a task) with critical hubs diminish.
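This saturation effect can be sketched with a small extension of the same kind of model – again my own hypothetical simplification, not something from Barabasi: give each member a maximum number of relationships (`cap`), and count how many newcomers’ connection attempts land on already-full hubs and fail.

```python
import random
from collections import Counter

def grow_with_capacity(n, cap, edges_per_node=2, seed=1):
    """Preferential attachment, but a member can hold at most `cap`
    relationships. A newcomer who picks a saturated hub loses that
    chance to connect (a deliberate simplification)."""
    random.seed(seed)
    targets = [0, 1]
    degree = Counter({0: 1, 1: 1})
    failed = 0                          # connection attempts lost to saturation
    for new in range(2, n):
        for _ in range(edges_per_node):
            old = random.choice(targets)    # degree-weighted pick
            if degree[old] >= cap:          # hub already saturated
                failed += 1
                continue
            degree[new] += 1
            degree[old] += 1
            targets.extend([new, old])
    return degree, failed

degree, failed = grow_with_capacity(n=2000, cap=30)
print(failed, "connection attempts lost to saturated hubs")
```

Because attachment is degree-weighted, the saturated hubs keep getting picked anyway, so a substantial share of newcomers’ attempted relationships simply evaporate – the same dynamic as the trusted inner circle being relied on over and over while new contributors struggle to break in.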

2. Segmentation model

Under a bell-curve model of networks, it made little sense to devote resources and energy to supporting and helping those who participate least, because they made up a small proportion of the community. Time and energy would be devoted to enabling the average participant, since they represented the bulk of the community’s participants.

A power law distribution radically alters the makeup of the community. Relatively speaking, there is an incredibly vast number of users/participants who are only passively and/or loosely connected to the community, compared to the tiny cohort of active members. Indeed, as Beltzner’s slides point out, there are 100,000 beta testers and 20-30M users vs. 100 daily contributors and 1,000 regular contributors.

The million dollar question is how do we move people up the food chain? How do we convert users into beta testers, beta testers into contributors, and contributors into daily contributors? Or, as Barabasi might put it: how do we increase node density generally and the number of super-nodes specifically? Obviously Mozilla and others already do this, but segmenting the community – perhaps into the groups laid out by Beltzner – and providing each segment with tools that not only help them perform well at that level, but that enable them to migrate up the network hierarchy, is essential. One way to accomplish this would be to have more people contributing to a given task; another possibility (one I argue for in an earlier blog post) is to simply open source more aspects of the project, including items such as marketing, strategy, etc…

3. Grease the network’s nodes

Finally, another way to overcome the potential scaling problem of open source is to improve the capacity of hubs to handle relationships, thereby enabling them to a) handle more and/or b) foster new relationships more effectively. This is part of what I was highlighting in my post about relationship management as the core competency of open source projects.

Conclusion

This post attempts to provide a more nuanced topology of open source communities by describing them as scale-free networks. The goal is not to assert that there is some limit to the potential of open source communities, but instead to flag and describe possible structural limitations so as to begin a discussion on how they can be addressed and overcome. My hope is that others will find this post interesting and use its premise to brainstorm ideas for how we can improve these incredible communities.

As a final note, given the late hour, I’m confident there may be a typo or two in the text, possibly even a flawed argument. Please don’t hesitate to point either out. I’d be deeply appreciative. If this was an interesting read you may find – in addition to the aforementioned post on community management – this post on collaboration vs. cooperation in open source communities to be interesting.

Keeping the internet free

For those worried (or not yet worried, but who should be) about maintaining the internet as an open platform upon which anyone can participate and attract an audience, please let me point you to David Weinberger’s most recent ramblings on the subject.

He makes a strong case for why companies that provide us with internet access may have to be regulated.

I recently discovered Weinberger while listening to an interview about his newest book “Everything Is Miscellaneous.” Great stuff. The Vancouver Public Library has been kind enough to hook me up with his other books as well… (these libraries, they are amazing! Did you know you can read a book without buying it? Crazy.)

He also maintains a blog, for those who are curious.