Category Archives: technology

IPAC Conference

Today I’m doing a panel on Networks and Networking in the Public Service at “Beyond Bureaucracy” a conference hosted by the Toronto Regional branch of IPAC.

As the description states “Informal channels of communication are vital networks that allow people to socialize and collaborate and, arguably, work more efficiently. Technology can make these networks indispensable, as shown by user-driven wikis and social networking sites like Facebook. ”

True and true. And here’s the kicker: these networks exist whether organizations sanction them or not. Although not perfect, social networking software at least brings old hidden networks out into the open and, at best, helps subject them to other societal norms (think gender parity and racial diversity). Telling employees they can’t use Facebook doesn’t destroy the network. It just forces it somewhere else – somewhere where you have even less visibility into how it manifests itself, who it benefits and how it grows.

In essence, you strengthen old hidden networks – that thing we used to call the old boys’ club.

How to make $240M of Microsoft equity disappear

Last week a few press articles described how Google apparently lost to Microsoft in a bidding war to invest in Facebook. (MS won – investing $240M in Facebook)

Did Google lose? I’m not so sure… by “losing” it may have just pulled off one of the savviest negotiations I’ve ever seen. Google may never have been interested in Facebook, only in pumping up its value to ensure Microsoft overpaid.

Why?

Because Google is planning to destroy Facebook’s value.

Facebook – like all social network sites – is a walled garden. It’s like a cellphone company that only allows its users to call people on the same network – if you were a Rogers cellphone user, you wouldn’t be allowed to call your friend who is a Bell cellphone user. In Facebook’s case you can only send notes, play games (like my favourite, scrabblelicious) and share info with other people on Facebook. Want to join a group on Friendster? Too bad.

Social networking sites do this for two reasons. First, if a number of your friends are on Facebook, you’ll also be inclined to join. Once a critical mass of people join, network effects kick in, and pretty soon everybody wants to join.

This is important for reason number two. The more people who join and spend time on their site, the more money they make on advertising and the higher the fees they can charge developers for accessing their user base. But this also means Facebook has to keep its users captive. If Facebook users could join groups on any social networking site, they might start spending more time on other sites – meaning less revenue for Facebook. Facebook’s capacity to generate revenue, and thus its value, therefore depends in large part on two variables: a) the size of its user base; and b) its capacity to keep users captive within its walled garden.

This is why Google’s negotiation strategy was potentially devastating.

Microsoft just paid $240M for a 1.6% stake in Facebook. The valuation was likely based, in part, on the size of Facebook’s user base and the assumption that these users could be kept within the site’s walled garden.

Let’s go back to our cell phone example for a moment. Imagine if a bunch of cellphone companies suddenly decided to let their users call one another. People would quickly start gravitating to those cellphone companies because they could call more of their friends – regardless of which network they were on.

This is precisely the idea behind Google’s major announcement earlier this week. Google launched OpenSocial – a set of common APIs that let developers create applications that work on any social network that chooses to participate. In short, social networks that participate will be able to let their users share information with each other and join each other’s groups. Still more interesting, MySpace has just announced it will participate in the scheme.
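The mechanics of a common API can be illustrated with a toy sketch. OpenSocial’s real interfaces are JavaScript and REST APIs, so everything below (the class names, the `friends_of` method, the sample data) is invented for illustration; the point is only that an application written once against a shared interface works on any participating network:

```python
from abc import ABC, abstractmethod


class SocialNetwork(ABC):
    """A shared interface, analogous in spirit to a common social API."""

    @abstractmethod
    def friends_of(self, user: str) -> list[str]:
        ...


class NetworkA(SocialNetwork):
    """One participating network with its own (made-up) social graph."""

    def __init__(self):
        self._graph = {"alice": ["bob", "carol"]}

    def friends_of(self, user):
        return self._graph.get(user, [])


class NetworkB(SocialNetwork):
    """A second participating network with a different graph."""

    def __init__(self):
        self._graph = {"alice": ["dave"]}

    def friends_of(self, user):
        return self._graph.get(user, [])


def mutual_reach(user: str, networks: list[SocialNetwork]) -> set[str]:
    """An 'application' written once against the shared interface:
    it aggregates a user's friends across every participating network,
    regardless of which network each friend lives on."""
    reach = set()
    for net in networks:
        reach.update(net.friends_of(user))
    return reach
```

Written this way, adding a new network to the ecosystem requires no change to the application at all, which is exactly what makes the walled garden hard to maintain.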

This is a lose-lose story for Facebook. If other social networking sites allow their users to connect with one another, then Facebook’s users will probably drift over to one of these competitors – eroding Facebook’s value. If Facebook decides to jump on the bandwagon and also use the OpenSocial APIs, then its user base will no longer be as captive – also eroding its value.

Either way Google has just thrown a wrench into Facebook’s business model, a week after Microsoft paid top dollar for it.

As such, this could be a strategically brilliant move. In short, Google:

  • Saves spending $240M – $1B investing in Facebook
  • Creates a platform that, by eroding Facebook’s business model, makes Microsoft’s investment much riskier
  • Limits its exposure to an anti-trust case by not dominating yet another online service
  • Creates an open standard in the social network space, making it easier for Google to create its own social networking site later, once a clear successful business model emerges

Nice move.

Open source fun, Open source problems…

I had a thoroughly enjoyable time at the Free-Software and Open Source Symposium (FSOSS) at Seneca college. I had a great time giving my talk on community management as the core competency of open source communities. The audience was really engaged and asked great questions – I just wish we’d had more time.

The talk was actually filmed and can be downloaded, but it is only available as an OGG file, which is large (416MB). Rumor has it the videos may get converted into a smaller, more streamable format in the future. Once the video is available I’ll also post the slides.

Also, I want to thank Coop and Shane for blogging the positive feedback. I’m looking forward to building on and refining the ideas…

One of the key ideas I’m interested in pushing is how “open” open source communities are – and how they can make themselves easier to join. I actually had an interesting experience while at FSOSS that highlighted how subtle this challenge can be.

During one of the lunch breaks Mark Surman and I ran a Birds of a Feather session on Community Management as the Core Competency of Open Source Communities. In the lead up to the session, a leader of a prominent open source community (I knew this because it said so on his name tag) walked up to me and asked:

“Are you running this BoF?” (Birds of a Feather)

Not being hip to the lingo I replied… “What’s a BoF? I’m not super techie so I don’t know all the terms.”

To which he replied “Evidently.” and walked away.

And thus ended my first contact with this particular open source community. With its titular leader nonetheless. Needless to say, it didn’t leave a positive impression.

I’ll admit this is an anecdotal piece of data. But it affirms my thinking that while open source communities may be open, to whom they are open may not be as broad a cross-section of the population as we are led to believe (e.g. you’d better already know the lingo and cultural norms of the community).

There is another important lesson here, one that directly impacts the scalability of open source communities. At some point everyone has to have a first contact with a community – and that first impression may be a strong determinant of where they volunteer their time and contribute their free labour. Any good open-source community will want to get it right.

The Dunbar number in open source

Those interested in open-source systems (everything from public policy to software) should listen to Christopher Allen’s talk (his blog here) on the Dunbar Number.

Dunbar’s number, which is 150, represents a theoretical maximum number of individuals with whom a person can maintain stable social relationships – the kind of relationship that involves knowing who each person is and how each person relates socially to every other person.

Malcolm Gladwell brought the Dunbar number into popular discourse when he referenced it in his book The Tipping Point.

However, Allen’s talk tries to nuance the debate. Specifically, he wishes that those who reference the Dunbar number would be more aware that, in the research literature, the mean group size of 150 only applies to groups with high incentives to stay together. As examples he cites nomadic tribes, armies, terrorist organizations, mafias, etc… in short, groups in which mutual trust and strong relationships are essential for survival. This is in part because there is a cost group members must pay to maintain groups of this size: one must spend 40% of one’s time engaged in “social grooming.” This means sitting around listening to one another, talking, being engaged, etc… Without this social grooming it is difficult to develop and maintain the unstructured trust that holds the group together.

More interestingly, Allen’s research suggests that in modern groups there is a correlation between group satisfaction and the size of the group. Things work well between 3-12 people and from 25-80. But in between there is a hole. Groups in this “chasm” are too big to use many of the tools (like meetings) that small groups can use, but too small to successfully rely on the tools (such as hierarchies and reporting mechanisms) that allow larger groups to function.

Open source projects (and really any new project) should find this interesting. There is a group-size chasm that must, at some point, be crossed. When I’m less tired I will try to wander over to SourceForge and see if I can plot the sizes of the projects there to see if they scale up nicely against Allen’s graph.
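If I ever get around to that SourceForge exercise, the tallying step might look something like this rough sketch. The team sizes are made up, and the 13-24 “chasm” band is just my reading of Allen’s 3-12 and 25-80 ranges, not a figure from his research:

```python
# Hypothetical project team sizes; real data would have to be
# gathered from SourceForge's project pages.
team_sizes = [2, 3, 5, 7, 12, 15, 18, 22, 30, 45, 60, 80, 110]


def in_chasm(size: int, low: int = 13, high: int = 24) -> bool:
    """Allen's 'chasm': too big for small-group tools like meetings,
    too small to rely on hierarchies and reporting mechanisms.
    The 13-24 band is an assumption read off his 3-12 / 25-80 ranges."""
    return low <= size <= high


# Projects stuck in the awkward middle band.
chasm_projects = [s for s in team_sizes if in_chasm(s)]
```

If Allen is right, a histogram of real project sizes should show a visible dip in that middle band, with projects clustering on either side of it.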

In addition, I’m curious as to whether some softer skills around facilitation would allow groups to function more effectively, even within this “chasm.”

Where are the progressives on Net Neutrality?

I’m excited to see that the Green Party has included a section on Net Neutrality in its platform.

4. Supporting the free flow of information

The Internet has become an essential tool in knowledge storage and the free flow of information between citizens. It is playing a critical role in democratizing communications and society as a whole. There are corporations that want to control the content of information on the internet and alter the free flow of information by giving preferential treatment to those who pay extra for faster service.

Our Vision

The Green Party of Canada is committed to the original design principle of the internet – network neutrality: the idea that a maximally useful public information network treats all content, sites, and platforms equally, thus allowing the network to carry every form of information and support every kind of application.

Green Solutions

Green Party MPs will:

  • Pass legislation granting the Internet in Canada the status of Common Carrier – prohibiting Internet Service Providers from discriminating on the basis of content while freeing them from liability for content transmitted through their systems.

Liberals, NDP… we are waiting…

How we humbled the NYT

Taylor and I published this piece in the Tyee yesterday. In short, the newspapers are dying, and they have completely failed to understand the internet. Most importantly, they think they are above the rest of the online community… and so long as they act that way, they won’t be above the community, they’ll be outside it.

It’s derived from a larger, magazine-style piece on old and new media for which we are looking for a home. If anyone has suggestions of a possible home I would be grateful for your ideas.

How We Educated the New York Times

A zillion clicks taught newspapers they aren’t in control.
Published: October 10, 2007

The New York Times made waves in the media world recently by dismantling its subscription paywall. As a result, anyone with a computer and an Internet connection can now read the entire paper online for free.

The failed paywall experiment of the New York Times is emblematic of the newspaper industry’s two-decade-old struggle to survive online. So long as the Internet is perceived as nothing more than a new tool for distributing the news to a passive audience, readers, citizens and the community more generally will continue to tune out. For newspapers to survive, a more nuanced understanding of the online world is needed.

The key is grasping that the relationship between communities and their news has fundamentally changed.

You and I are in charge now

Prior to the Internet, people determined what was important by reading what newspaper editors thought was important. Today, people have a host of ways to determine what is important and to connect quickly with stories on those issues. Newspapers can shift their content, and advertising, online, but as long as they believe they are the arbiters of a community’s agenda, they will continue to struggle.

Online, people engage with news in two new ways, both of which deviate significantly from the traditional newspaper model.

First, algorithm-based aggregators, such as Google News and Del.icio.us, and human-run websites, such as National Newswatch and the Huffington Post, provide powerful alternatives to the traditional newspaper editor.

Aggregators, both human and algorithm-based, don’t care about content’s origins, only its relevance to readers. They ferret out the best content from across the web and deposit it on your computer screen. This begs the question: if you could read the best articles drawn from a pool of 100 authors (the approximate number of journalists at a daily newspaper) vs. a pool of 1.5 million posts (the amount of new content created online each day), which would you choose?

But it is the second reason that should most concern newspapers. Younger readers don’t just use aggregators. They increasingly read articles found through links from blogs. Rather than roaming within a newspaper’s walled gardens, younger readers build their own media communities where a trusted network of bloggers guides them to interesting content. Online, bloggers are the new editors.

Take, for example, the relationship many Canadians have with the prominent blogger Andrew Potter. While most people have never met him in person, his readers know his perspectives and biases, and this personal connection creates a loyal following.

Conversely, people are also drawn back because they are interested in the places Potter links to, virtually all of which direct readers away from the site he blogs for, Macleans.ca.

Share the good stuff

To most newspapers, the idea of directing traffic away from their news site remains anathema. Newspaper websites contain virtually no external links. Ironically, this follows the design parameters of a Las Vegas casino – the goal is to get you in and not let you leave. Does anyone really believe that all the news and perspectives relevant and important to a community can reside on a single website?

In this manner, newspapers are fighting the very thing that makes the Internet community compelling: its interconnectedness. Like Potter’s blog, the Internet’s best sites are attractive, not simply because their content is good, but rather because they link to content around the web. And if that content is compelling, readers keep coming back for more guidance.

People enjoy a sense of community, and democracy is strengthened when citizens are informed. The problem is, the New York Times, and virtually every traditional newspaper, fails to understand that a model has emerged that is far better at both delivering information and fostering community than the traditional news industries.

Bad neighbourhood?

Traditional media supporters will assert that these online communities are fragmented, in disagreement, full of scallywags, immature ranters, educated snobs and partisan hacks. And they’d be right. It’s messy and it’s imperfect. But then, so is the democratic community in which we live. The difference is, in an online community, everyone is telling us and directing us to issues and news items they believe are important.

The New York Times learned this lesson the hard way. After spending two years trying to wall its exclusive content off from the web, it discovered that rather than becoming more exclusive, it was becoming less relevant. Unable to link to its content, aggregators, bloggers and the online community more generally simply stopped talking about it. Newspapers should heed this lesson. If newspapers want to transition into the online age, they’ll have to join this community rather than seek to control it.

Free Software and Open-Source Symposium

Friends! I want to make sure everybody and anybody who might be interested knows about the upcoming 6th annual Free Software and Open-Source Symposium in Toronto, this October 25-26th.

What is Open-Source? There is a good definition here.

Non-techies should not be shy… I, for example (and I’m very non-techie; I couldn’t code if my life, quite literally, depended on it), will be talking about Community Management as the core competency of Open Source projects. While open-source is usually talked about in reference to software, the conference organizers are interested in open systems more generally, and how they can be applied in various fields. I’m interested in open-source public policy (which, if they’ll have me back, I’d like to talk about next year…) and others are interested in its application to theater, meeting design, etc…

For more information I would suggest the blog of David Humphrey, one of the event’s coordinators, where one can read about cool insider info (e.g. prizes) and juicy gossip (e.g. the public, but just, shaming of me for being delinquent in submitting my talk summary).

You can also check out the conference’s webpage, where you can find the agenda, a place to register and other info.

The Free Software and Open Source Symposium
October 25-26th, 2007 – 9:00 a.m. to 5:00 p.m.
Seneca@York Campus, Toronto

The Symposium is a two-day event aimed at bringing together educators, developers and other interested parties to discuss common free software and open source issues, learn new technologies and to promote the use of free and open source software. At Seneca College, we think free and open source software are real alternatives.

Making the shuffle better

My geek squad (or is it nerd herd?) suggestion.

I have an iPod shuffle which, BTW, I love. And, as many of you know, I’ve committed myself to walking at least one direction to any meeting I have in Vancouver, no matter how far. As a result, I end up on some long walks, which I use as an opportunity to listen to audiobooks and podcasts.

The problem is that some of the books, and even some podcasts, come as a single large file. If, while listening, you accidentally push the forward button, you lose your place and have to spend the next 5 minutes fast-forwarding through the mp3 to find it.

I know, I know, I know… I could “lock” the buttons by pressing down the play/pause button for 3 seconds, but then I can’t adjust the volume – something that is essential when walking in the city and shifting from busy main streets to pleasant, quiet side streets.

All this goes to say that it would be nice if the shuffle let you lock all the buttons except the volume buttons. Then you could increase and decrease the volume without fear of losing your place.

But then, I thought of something cooler. What if Apple let you reprogram their shuffle buttons however you saw fit? Say, for example, you only want your shuffle to skip to the next song if you click the fast forward button twice in quick succession… no problem, you just program it that way. Now that would be cool.

My assumption is that this type of reprogramming would not be that hard. Apple already allows you to limit the maximum volume of your shuffle. How hard can it be to hand over control of the other keys?
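To make the double-click idea concrete, here’s a rough sketch of the sort of logic such remappable firmware might use. The 0.4-second window and the function name are my own invented details, not anything Apple actually exposes:

```python
# Hypothetical threshold: two forward-button presses closer together
# than this count as a deliberate double-click.
DOUBLE_CLICK_WINDOW = 0.4  # seconds


def filter_skips(press_times: list[float]) -> list[float]:
    """Return the timestamps at which a track skip actually fires:
    only when two forward-button presses arrive within
    DOUBLE_CLICK_WINDOW of each other. A lone accidental press
    does nothing, so you never lose your place in an audiobook."""
    skips = []
    last = None
    for t in press_times:
        if last is not None and t - last <= DOUBLE_CLICK_WINDOW:
            skips.append(t)
            last = None  # consume the pair
        else:
            last = t  # first press of a potential pair
    return skips
```

With presses at 1.0s, 1.2s, 5.0s, 9.0s and 9.3s, only the two pairs fire skips; the stray press at 5.0s is ignored, which is exactly the behaviour I want while walking.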

Anyone know anyone at Apple I could pitch the idea to?

Canadian Technophobia: Privacy Commissioner vs. Google

How is it that, as individuals, Canadians are such avid internet users, but our institutions, governments and companies are somewhere between technophobic and luddite?

Take, for example, the recent story Alison L. sent me from Stephen Taylor‘s blog, in which he comments on this CBC news story. The story? That Canada’s Privacy Commissioner has written Google about her concern that Google Maps’ Street View functionality may violate Canada’s federal privacy legislation if it is implemented here.

For the uninitiated: Google (and/or a partner firm) creates this street map feature by literally driving a car along a street with a camera on its roof, taking a photo about every 5 seconds. This allows the user to “see” what the street looks like at various 5-metre increments. The commissioner’s concern is:

“Our Office considers images of individuals that are sufficiently clear to allow an individual to be identified to be personal information within the meaning of PIPEDA [the privacy act]”

One wonders where the Privacy Commissioner has been for the last 5, 10 or even 25 years (ok, ok, I concede that the privacy laws are relatively new… but still!). As Stephen points out – why hasn’t the Privacy Commissioner shut down Flickr? Indeed, virtually all Web 2.0 content could be suspect. It might be safer to shut down whole swaths of the web.

What’s interesting to me is that it is a website that has prompted this discussion. When this problem existed in traditional forms of media – ones presumably the commissioner is more comfortable with – it didn’t bother her.

City TV and Muchmusic are famous for doing interviews while showing live streetscapes in the background. Given the bar the commissioner has set, isn’t this footage illegal? And if we really want to take it to an extreme… what about the street-level cameras on apartment buildings that enable people to see who is ringing their doorbell? Many of these cameras are always on and can be watched from tenants’ TVs… if the Privacy Commissioner’s statement above is the standard we are to use, isn’t this a violation of privacy as well? Shouldn’t all these cameras be unplugged?

The above example highlights the prevailing attitude many organizations in Canada have towards the internet: move slowly, move cautiously, and, if possible, don’t move at all. Don’t believe me? Or perhaps we can hope the problem is limited to government? Well… Katie M. recently sent me this survey of Canada and the internet. According to it, Canada is on par with, and even ahead of, the United States when it comes to internet – and in particular broadband – access and usage. Even our blogosphere is strong. And yet, despite all this, e-commerce in Canada lags far behind the US. Name a single Canadian retailer with a strong online presence. Many Canadian stores don’t even allow people to shop online.

Why is this? Who knows. Could it be a weak tech sector in Canada? A business culture that is shockingly conservative? A brain drain of tech savvy people to San Francisco, Boston and other technology centres? A lack of venture capital? I don’t know.

What I do know is that this should concern Canadians. Individually, we are leaving our government and large corporations in the dust. At some point our capacity to innovate, to seek social change, to capitalize on economic opportunities will be limited by their narrow vision and understanding of the internet phenomenon.

review of small pieces loosely joined

I’m not sure where to begin with Small Pieces Loosely Joined.

Maybe with my regrets. My biggest regret is that it took me so long to pick it up and read it. And I had no excuse: Beltzner had been trying to get me to read it for months. I now understand why.

Small Pieces Loosely Joined

Lawrence Lessig’s Free Culture took me into new territory by introducing me to the dangers and important issues confronting our emerging online world. In contrast, Small Pieces Loosely Joined did the opposite: it was a homecoming, a book that explained to me things I intuitively knew or felt, but in a manner that expanded my understanding and appreciation. It’s as though the author, David Weinberger, took me on a tour of my own home, a place I knew intimately, and explained to me its history, the reason and method of its construction, its impact on my life and its significance to my community. Suddenly, the meaning of a thing I use and live in every day was expanded in ways that were consistent with what I already knew, but went beyond it. Weinberger accomplishes all this while talking about the internet.

Weinberger achieves this by outlining how our sense of time, space, knowledge and matter is shaped by the online experience. Initially, the book could be mistaken as a more sophisticated Wikinomics, but as each concept builds on the other, the book becomes an increasingly philosophical and thoughtful treatise. Indeed, unlike Wikinomics, which anyone can scream through like a normal business book, Small Pieces took longer to read than anticipated because I wanted (and needed) to slow down and play with its ideas.

Indeed, you can see how so many ideas connect with this book. From The Naked Corporation (Weinberger discusses how our desire for authenticity drives form on the internet), to The Wisdom of Crowds to The Long Tail, this book is essential reading for those interested in understanding our emerging new world, one overlaid with an internet. Even I was caught in the vortex. For example, I recently wrote a post on the emerging trust economy (all while pitching in my two cents on Keen). I knew the ideas weren’t completely novel, but there was Weinberger, filling in the holes of my thoughts, outlining why we keep going back to the internet even though it is filled with so much disinformation (unlike FOX, CNN, CBS or any corporate brochures that preceded the internet). Weinberger recognizes that:

…we don’t process information the way philosophers or computer programmers expect us to. We don’t use a systematic set of steps for evaluating what should be believed. Instead, we do on the Web what we do in the real world: we listen to the context, allow ourselves to be guided by details that we think embody the whole, and decide how much of what this person says we’re going to believe.

It’s not perfect. But then, neither are we.

But even without all that perfection, we still managed to create this amazing thing called the internet. This is a singularly significant accomplishment and one Weinberger believes we must celebrate. And he’s right. At almost no time in history have we built something that is, and can become still more, broad and representative. And it is important that we remember the values that made it possible: a culture of freedom.

…consider how we would’ve gone about building the Web had we deliberately set out to do so. Generating the billions of pages on the web, all interlinked, would have required a mobilization on the order of world war. Because complexity requires management, we would have planned it, budgeted it, managed it,… and we would have failed miserably… We’d have editors poring over those pages, authenticating them, vetting them for scandalous and pornographic material, classifying them, and obtaining signoff and permissions to avoid the inevitable lawsuits. Yet we — all of us — have built the global web without a single person with a business card that says “manager, WWW.”

Our biggest joint undertaking as a species is working out splendidly, but only because we forgot to apply the theory that has guided us ever since the pyramids were built. Whether we’ve thought about it explicitly or not, we all tacitly recognize — it’s part of the Web’s common sense — that what’s on the Web was put there without permission. We know that we can go where we want on the web without permission. The sense of freedom on the web is palpable. The web is profoundly permission-free and management-free, and we all know it.

More recently, Weinberger has emerged as a champion of the internet, probably most famously for taking on Andrew Keen in a now famous debate whose transcript can be read on the WSJ. His book explains the knowledge and understanding that allows Weinberger to be optimistic in the face of people like Keen. Indeed this book serves as a map to what has become Weinberger’s larger thesis – that the internet is not just a human project, but a humanizing project.

The Web is a social place. It is built page by page by people, alone and in groups, so that other people can read those pages. It is an expression of points of view as diverse as human beings. In almost every case, what’s written is either explicitly or implicitly a view of how the world looks; the Web is a multimillion-part refraction of the world. Most of all, at the center of the Web is human passion. We build each page because we care about something, whether we are telling other shoppers that a Maytag wasn’t as reliable as the ads promised, giving tips on how to build a faster racer for a soap box derby, arguing that the 1969 moon landing was a hoax, or even ripping off strangers.

What we see when we look into the internet is ourselves.

Increasingly, understanding humanity will require understanding the internet, and Weinberger’s book is a good departure point for that education.