Tag Archives: technology

Firefox 3 Beta and other cool gadgets

If you aren’t technically inclined, but are interested in impressing your co-workers, consider downloading the recently released beta version of Firefox 3.

This is your chance to look cooler than everybody else in your cubicle farm… pimping out your computer with the latest in open-source coolware.

And since we are speaking of gadgets… Gayle D. recently gave me this very cool pedometer. As some of you know, I try to walk at least one direction to all my meetings. This little device isn’t radically changing my life… but it is keeping me aware of my decision to walk everywhere. More importantly, it’s enabled me both to set a target of 10,000 steps and to measure my progress. It is definitely pushing me to make better, healthier decisions.

I’d heard a while back that Ontario Health Promotion Minister Jim Watson pitched to Research in Motion the idea that Blackberry devices should have an integrated pedometer.

I thought it was a fantastic idea. Obviously it hasn’t gone anywhere – and to be fair, these advanced pedometers would add to the size of any Blackberry device… but I hope RIM hasn’t dropped the idea altogether.

Government Networks – Easy or Hard?

At the IPAC conference last week I did a panel on creating government networks. Prior to my contribution fellow panelist Dana Richardson, an ADM with the Government of Ontario, presented on her experience with creating inter-government networks. Her examples were interesting and insightful. More interesting still was her conclusion: creating networks is difficult.

Networked Snail - a metaphor for government

What makes this answer interesting is not that it is correct (I’m not sure it is) but that it is a window into the problematic manner in which governments engage in network-based activities.

While I have not studied Richardson’s examples, I nonetheless have a hypothesis: these networks were difficult to create because they were between institutions. Consequently, those being networked together weren’t being connected because they saw value in the network, but because someone in their organization (likely their boss) felt it was important for them to be connected. In short, network participation was contrived and mandated.

This runs counter to what usually makes for effective networks. Facebook, MySpace, the internet, fax machines, etc… these networks became successful not because someone ordered people to participate in them, but because individuals saw value in joining them and gauged their level of participation and activity accordingly. As more people joined, more people found there was someone within the network with whom they wanted to connect – so they joined too.

This is because, often, a critical ingredient of successful networks is freedom of association. Motivated individuals are the best judges of what is interesting, useful and important to them. Consequently, when freedom of association exists, people will gravitate towards, and even form, epistemic communities with others who share, or can give them, the knowledge and experience they value.

I concede that you could be ordered to join a network, discover its utility, and then use it ever more. But in this day and age, when creating networks is getting easier and easier, people who want to self-organize increasingly can and do. This means the obvious networks are already emerging or have already emerged. This brings us back to the problem. The reason mandated networks don’t work is that their participants either don’t know how to work together or don’t see the value in doing so. For governments (and really, any large organization), I suspect both are at play. Indeed, there is probably a significant gap between the number of people who are genuinely interested in their field of work (and so join and participate in communities related to it), and the number of people on the organization’s payroll working in that field.

This isn’t to say mandated networks can’t be created or aren’t important. However, described this way, Richardson’s statement becomes correct: they are hard to create. Consequently, you’d better be sure the network is important enough to justify creating it.

More interestingly, however, you might find that you can essentially create these networks without mandating them… just give your people the tools to find each other rather than forcing them together. You won’t get anywhere close to 100% participation, but those who see value in talking and working together will connect.

And if nobody does… maybe it is because they don’t see the value in it. If that is the case – all the networking in the world isn’t going to help. In all likelihood, you are probably asking the wrong question. Instead of: “how do we create a network for these people” try asking “why don’t they see the value in networking with one another.” Answer that, and I suspect you’ll change the equation.

IPAC Conference

Today I’m doing a panel on Networks and Networking in the Public Service at “Beyond Bureaucracy” a conference hosted by the Toronto Regional branch of IPAC.

As the description states “Informal channels of communication are vital networks that allow people to socialize and collaborate and, arguably, work more efficiently. Technology can make these networks indispensable, as shown by user-driven wikis and social networking sites like Facebook. ”

True and true. And here’s the kicker: these networks exist whether organizations sanction them or not. Although not perfect, social networking software at least brings old hidden networks out into the open and, at best, helps subject them to other societal norms (think gender parity and racial diversity). Telling employees they can’t use Facebook doesn’t destroy the network. It just forces it somewhere else – somewhere you have even less visibility into how it manifests itself, who it benefits and how it grows.

In essence, you strengthen the old hidden networks. That thing we used to call the old boys’ club.

How to make $240M of Microsoft equity disappear

Last week a few press articles described how Google apparently lost to Microsoft in a bidding war to invest in Facebook. (MS won – investing $240M in Facebook)

Did Google lose? I’m not so sure… by “losing” it may have just pulled off one of the savviest negotiations I’ve ever seen. Google may never have been interested in Facebook, only in pumping up its value to ensure Microsoft overpaid.

Why?

Because Google is planning to destroy Facebook’s value.

Facebook – like all social networking sites – is a walled garden. It’s like a cellphone company that only allows its users to call people on the same network – for example, if you were a Rogers cellphone user, you wouldn’t be allowed to call your friend who is a Bell cellphone user. In Facebook’s case you can only send notes, play games (like my favourite, scrabblelicious) and share info with other people on Facebook. Want to join a group on Friendster? Too bad.

Social networking sites do this for two reasons. First, if a number of your friends are on Facebook, you’ll also be inclined to join. Once a critical mass of people join, network effects kick in, and pretty soon everybody wants to join.

This is important for reason number two. The more people who join and spend time on the site, the more money Facebook makes on advertising and the higher the fees it can charge developers for access to its user base. But this also means Facebook has to keep its users captive. If Facebook users could join groups on any social networking site, they might start spending more time on other sites – meaning less revenue for Facebook. Facebook’s capacity to generate revenue, and thus its value, therefore depends in large part on two variables: a) the size of its user base; and b) its capacity to keep those users captive within its walled garden.
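The network effect described above is often illustrated with Metcalfe’s law – the heuristic that a network’s value grows roughly with the square of its user count. The post doesn’t invoke Metcalfe’s law by name; this is just one common way to sketch why user-base size dominates a social network’s valuation:

```python
# Illustrative sketch only: Metcalfe's law (value ~ n^2) is a common
# heuristic, not a claim from the original post.
def metcalfe_value(users: int) -> int:
    """Number of possible pairwise connections among `users` people."""
    return users * (users - 1) // 2

# Doubling the user base roughly quadruples the potential connections,
# which is why every new member makes the network more attractive
# to the next prospective member.
small = metcalfe_value(1_000_000)
large = metcalfe_value(2_000_000)
print(large / small)  # just over 4
```

Under this heuristic, walling users in protects the `n` that the whole valuation rests on.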

This is why Google’s negotiation strategy was potentially devastating.

Microsoft just paid $240M for a 1.6% stake in Facebook. The valuation was likely based, in part, on the size of Facebook’s user base and the assumption that these users could be kept within the site’s walled garden.

Let’s go back to our cell phone example for a moment. Imagine if a bunch of cellphone companies suddenly decided to let their users call one another. People would quickly start gravitating to those cellphone companies because they could call more of their friends – regardless of which network they were on.

This is precisely the idea behind Google’s major announcement earlier this week. Google launched OpenSocial – a set of common APIs that lets developers create applications that work on any social network that chooses to participate. In short, participating social networks will be able to let their users share information with each other and join each other’s groups. Still more interesting, MySpace has just announced it will participate in the scheme.
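The "common API" idea can be sketched in a few lines. To be clear, the real OpenSocial spec was a JavaScript/REST API; every name below is invented purely to illustrate the write-once, run-on-any-network principle:

```python
# Hypothetical sketch of a "common API" across social networks.
# None of these class or method names come from the actual OpenSocial spec.
from abc import ABC, abstractmethod


class SocialNetwork(ABC):
    """Any network implementing this interface can host the same app."""

    @abstractmethod
    def friends_of(self, user: str) -> list[str]: ...


class MySpaceLike(SocialNetwork):
    """One participating network, backed here by an in-memory graph."""

    def __init__(self, graph: dict[str, list[str]]):
        self.graph = graph

    def friends_of(self, user: str) -> list[str]:
        return self.graph.get(user, [])


def mutual_friends(net: SocialNetwork, a: str, b: str) -> set[str]:
    # The application is written once, against the shared interface,
    # so it runs unchanged on any network that implements it.
    return set(net.friends_of(a)) & set(net.friends_of(b))


net = MySpaceLike({"alice": ["bob", "carol"], "dave": ["carol", "erin"]})
print(mutual_friends(net, "alice", "dave"))  # {'carol'}
```

The point is the interface, not the toy data: once apps target the shared layer instead of any one site, the walled garden stops being a moat.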

This is a lose-lose story for Facebook. If other social networking sites allow their users to connect with one another, then Facebook’s users will probably drift over to one of these competitors – eroding Facebook’s value. If Facebook decides to jump on the bandwagon and also use the OpenSocial APIs, then its user base will no longer be as captive – also eroding its value.

Either way Google has just thrown a wrench into Facebook’s business model, a week after Microsoft paid top dollar for it.

As such, this could be a strategically brilliant move. In short, Google:

  • Saves the $240M – $1B it would have spent investing in Facebook
  • Creates a platform that, by eroding Facebook’s business model, makes Microsoft’s investment much riskier
  • Limits its exposure to an anti-trust case by not dominating yet another online service
  • Creates an open standard in the social network space, making it easier for Google to create its own social networking site later, once a clearly successful business model emerges

Nice move.

Open source fun, Open source problems…

I had a thoroughly enjoyable time at the Free-Software and Open Source Symposium (FSOSS) at Seneca college. I had a great time giving my talk on community management as the core competency of open source communities. The audience was really engaged and asked great questions – I just wish we’d had more time.

The talk was actually filmed and can be downloaded, but it is only available as an OGG file, which is large (416 MB). Rumor has it it may be converted into a smaller, more streamable format in the future. Once that version is available I’ll also post the slides.

Also, I want to thank Coop and Shane for blogging the positive feedback. I’m looking forward to building on and refining the ideas…

One of the key ideas I’m interested in pushing is how “open” open source communities are – and how they can make themselves easier to join. I actually had an interesting experience while at FSOSS that highlighted how subtle this challenge can be.

During one of the lunch breaks Mark Surman and I ran a Birds of a Feather session on Community Management as the Core Competency of Open Source Communities. In the lead up to the session, a leader of a prominent open source community (I knew this because it said so on his name tag) walked up to me and asked:

“Are you running this BoF?” (Birds of a Feather)

Not being hip to the lingo I replied… “What’s a BoF? I’m not super techie so I don’t know all the terms.”

To which he replied “Evidently.” and walked away.

And thus ended my first contact with this particular open source community – with its titular leader, no less. Needless to say, it didn’t leave a positive impression.

I’ll admit this is an anecdotal piece of data. But it affirms my thinking that while open source communities may be open, those to whom they are open may not be as broad a cross-section of the population as we are led to believe (e.g. you’d better already know the lingo and cultural norms of the community).

There is another important lesson here, one that directly impacts the scalability of open source communities. At some point everyone has to have a first contact with a community – and that first impression may be a strong determinant of where they volunteer their time and contribute their free labour. Any good open-source community will probably want to get it right.

The Dunbar number in open source

Anyone interested in open-source systems (everything from public policy to software) should listen to Christopher Allen’s talk (his blog here) on the Dunbar Number.

Dunbar’s number, which is 150, represents a theoretical maximum number of individuals with whom a person can maintain stable social relationships – the kind of relationship that goes with knowing who each person is and how each person relates socially to every other person.

Malcolm Gladwell brought the Dunbar number into popular discourse when he referenced it in his book The Tipping Point.

However, Allen’s talk tries to nuance the debate. Specifically, he wishes that those who reference the Dunbar number would be more aware that, in the research literature, the mean group size of 150 only applies to groups with high incentives to stay together. As examples he cites nomadic tribes, armies, terrorist organizations, mafias, etc… in short, groups in which mutual trust and strong relationships are essential for survival. This is in part because there is a cost members must pay to maintain a group of this size: one must spend 40% of one’s time engaged in “social grooming.” This means sitting around listening to one another, talking, being engaged, etc… Without this social grooming it is difficult to develop and maintain the unstructured trust that holds the group together.

More interestingly, Allen’s research suggests that in modern groups there is a correlation between group satisfaction and group size. Things work well with between 3 and 12 people, and again from 25 to 80. But in between there is a hole. Groups in this “chasm” are too big to use many of the tools (like meetings) that small groups can use, but too small to successfully rely on the tools (such as hierarchies and reporting mechanisms) that allow larger groups to function.

Open source projects (and really any new project) should find this interesting. There is a group size chasm that must, at some point, be crossed. When I’m less tired I will try to wander over to sourceforge and see if I can plot the size of the projects there to see if they scale up nicely against Allen’s graph.
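A first pass at the sourceforge experiment sketched above could be as simple as bucketing project team sizes against the bands Allen describes. The band boundaries below are the 3–12 and 25–80 ranges as summarized in this post; the sample sizes are made up:

```python
# Rough sketch: classify group sizes against Allen's satisfaction bands
# (3-12 and 25-80 work well; the gap between them is the "chasm").
from collections import Counter


def group_band(size: int) -> str:
    if 3 <= size <= 12:
        return "small (works well)"
    if 25 <= size <= 80:
        return "large (works well)"
    if 12 < size < 25:
        return "chasm"
    return "outside studied range"


# Made-up sample data standing in for real sourceforge team sizes.
team_sizes = [4, 9, 15, 18, 30, 60, 2, 150]
print(Counter(group_band(s) for s in team_sizes))
```

Running this over real project data would show whether projects cluster in the two comfortable bands or pile up in the chasm.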

In addition, I’m curious as to whether some softer skills around facilitation would allow groups to function more effectively, even within this “chasm.”

Where are the progressives on Net Neutrality?

I’m excited to see that the Green Party has included a section on Net Neutrality in its platform.

4. Supporting the free flow of information

The Internet has become an essential tool in knowledge storage and the free flow of information between citizens. It is playing a critical role in democratizing communications and society as a whole. There are corporations that want to control the content of information on the internet and alter the free flow of information by giving preferential treatment to those who pay extra for faster service.

Our Vision

The Green Party of Canada is committed to the original design principle of the internet – network neutrality: the idea that a maximally useful public information network treats all content, sites, and platforms equally, thus allowing the network to carry every form of information and support every kind of application.

Green Solutions

Green Party MPs will:

  • Pass legislation granting the Internet in Canada the status of Common Carrier – prohibiting Internet Service Providers from discriminating due to content while freeing them from liability for content transmitted through their systems.

Liberals, NDP… we are waiting…

How we humbled the NYT

Taylor and I published this piece in the Tyee yesterday. In short, newspapers are dying, and they have completely failed to understand the internet. Most importantly, they think they are above the rest of the online community… and so long as they act that way they won’t be above the community, they’ll be outside it.

It’s derived from a larger magazine-style piece on old and new media for which we are seeking a publisher. If anyone has any suggestions of a possible home, I would be grateful for your ideas.

How We Educated the New York Times

A zillion clicks taught newspapers they aren’t in control.
Published: October 10, 2007

The New York Times made waves in the media world recently by dismantling its subscription paywall. As a result, anyone with a computer and an Internet connection can now read the entire paper online for free.

The New York Times’ failed paywall experiment is emblematic of the newspaper industry’s two-decade-old struggle to survive online. So long as the Internet is perceived as nothing more than a new tool for distributing news to a passive audience, readers, citizens and the community more generally will continue to tune out. For newspapers to survive, a more nuanced understanding of the online world is needed.

The key is grasping that the relationship between communities and their news has fundamentally changed.

You and I are in charge now

Prior to the Internet, people determined what was important by reading what newspaper editors thought was important. Today, people have a host of ways to determine what is important and to connect quickly with stories on those issues. Newspapers can shift their content, and advertising, online, but as long as they believe they are the arbiters of a community’s agenda, they will continue to struggle.

Online, people engage with news in two new ways, both of which deviate significantly from the traditional newspaper model.

First, algorithm-based aggregators, such as Google News and Del.icio.us, and human-run websites, such as National Newswatch and the Huffington Post, provide powerful alternatives to the traditional newspaper editor.

 

Aggregators, both human and algorithm-based, don’t care about content’s origins, only its relevance to readers. They ferret out the best content from across the web and deposit it on your computer screen. This raises the question: if you could read the best articles drawn from a pool of 100 authors (the approximate number of journalists at a daily newspaper) or from a pool of 1.5 million posts (the amount of new content created online each day), which would you choose?

But it is the second reason that should most concern newspapers. Younger readers don’t just use aggregators. They increasingly read articles found through links from blogs. Rather than roaming within a newspaper’s walled garden, younger readers build their own media communities where a trusted network of bloggers guides them to interesting content. Online, bloggers are the new editors.

Take, for example, the relationship many Canadians have with the prominent blogger Andrew Potter. While most people have never met him in person, his readers know his perspectives and biases, and this personal connection creates a loyal following.

Paradoxically, people are also drawn back because they are interested in the places Potter links to, virtually all of which direct readers away from the site he blogs for, Macleans.ca.

Share the good stuff

To most newspapers, the idea of directing traffic away from their news site remains anathema. Newspaper websites contain virtually no external links. Ironically, this follows the design parameters of a Las Vegas casino – the goal is to get you in, and not let you leave. Does anyone really believe that all the news and perspectives relevant and important to a community can reside on a single website?

In this manner, newspapers are fighting the very thing that makes the Internet community compelling: its interconnectedness. Like Potter’s blog, the Internet’s best sites are attractive, not simply because their content is good, but rather because they link to content around the web. And if that content is compelling, readers keep coming back for more guidance.

People enjoy a sense of community, and democracy is strengthened when citizens are informed. The problem is, the New York Times, and virtually every traditional newspaper, fails to understand that a model has emerged that is far better at both delivering information and fostering community than the traditional news industry.

Bad neighbourhood?

Traditional media supporters will assert that these online communities are fragmented, in disagreement, full of scallywags, immature ranters, educated snobs and partisan hacks. And they’d be right. It’s messy and it’s imperfect. But then, so is the democratic community in which we live. The difference is, in an online community, everyone is telling us and directing us to issues and news items they believe are important.

The New York Times learned this lesson the hard way. After spending two years trying to wall its exclusive content off from the web, it discovered that rather than becoming more exclusive, it was becoming less relevant. Unable to link to its content, aggregators, bloggers and the online community more generally simply stopped talking about it. Newspapers should heed this lesson. If newspapers want to transition into the online age, they’ll have to join this community rather than seek to control it.

Free Software and Open-Source Symposium

Friends! I want to make sure everybody and anybody who might be interested knows about the upcoming 6th annual Free Software and Open-Source Symposium in Toronto, this October 25-26th.

What is Open-Source? There is a good definition here.

Non-techies should not be shy… I, for example (and I’m very non-techie – I couldn’t code if my life, quite literally, depended on it), will be talking about Community Management as the core competency of Open Source projects. While open-source is usually talked about in reference to software, the conference organizers are interested in open systems more generally, and how they can be applied in various fields. I’m interested in open-source public policy (which, if they’ll have me back, I’d like to talk about next year…) and others are interested in its application to theatre, meeting design, etc…

For more information I would suggest the blog of David Humphrey, one of the event’s coordinators, where one can read about cool insider info (e.g. prizes) and juicy gossip (e.g. the public, but just, shaming of me for being delinquent in submitting my talk summary).

You can also check out the conference’s webpage, where you can find the agenda, a place to register and other info.

The Free Software and Open Source Symposium
October 25-26th, 2007 – 9:00 a.m. to 5:00 p.m.
Seneca@York Campus, Toronto

The Symposium is a two-day event aimed at bringing together educators, developers and other interested parties to discuss common free software and open source issues, learn new technologies and to promote the use of free and open source software. At Seneca College, we think free and open source software are real alternatives.

Making the shuffle better

My geek squad (or is it nerd herd?) suggestion.

I have an iPod shuffle which, BTW, I love. And, as many of you know, I’ve committed myself to walking at least one direction to any meeting I have in Vancouver, no matter how far. As a result, I end up on some long walks, which I use as an opportunity to listen to audiobooks and podcasts.

The problem is that some of the books, and even some podcasts, come as a single large file. If, while listening, you accidentally push the forward button, you lose your place and have to spend the next 5 minutes fast-forwarding through the mp3 to find it.

I know, I know, I know… I could “lock” the buttons by pressing down the play/pause button for 3 seconds, but then I can’t adjust the volume – something that is essential when walking in the city and shifting from busy main streets to pleasant, quiet side streets.

All this goes to say that it would be nice if the shuffle let you lock all the buttons except the volume buttons. Then you could increase and decrease the volume without fear of losing your place.

But then, I thought of something cooler. What if Apple let you reprogram their shuffle buttons however you saw fit? Say, for example, you only want your shuffle to skip to the next song if you click the fast forward button twice in quick succession… no problem, you just program it that way. Now that would be cool.
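The "skip only on a double-press" idea is, at bottom, a small piece of event-handling logic. Here is a minimal sketch of it; the 0.4-second window and all names are invented for illustration, not anything from Apple’s firmware:

```python
# Sketch of "double-press to skip": a single forward press is ignored;
# two presses within a short window trigger the skip. All values and
# names here are hypothetical.
DOUBLE_PRESS_WINDOW = 0.4  # seconds; an assumed threshold


class ForwardButton:
    def __init__(self):
        self.last_press = None  # timestamp of the previous unpaired press

    def press(self, now: float) -> str:
        """Return the action for a button press arriving at time `now`."""
        if self.last_press is not None and now - self.last_press <= DOUBLE_PRESS_WINDOW:
            self.last_press = None  # the pair is consumed by the skip
            return "skip_track"
        self.last_press = now       # remember this press; act only on a second one
        return "ignore"


btn = ForwardButton()
print(btn.press(10.0))  # ignore (first press, waiting for a pair)
print(btn.press(10.2))  # skip_track (second press within the window)
print(btn.press(11.0))  # ignore (window expired, starts a new pair)
```

The trade-off is a slight delay before a deliberate skip registers, in exchange for accidental single presses costing you nothing.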

My assumption is that this type of reprogramming would not be that hard. Apple already allows you to limit the maximum volume of your shuffle. How hard can it be to hand over control of the other keys?

Anyone know anyone at Apple I could pitch the idea to?