Tag Archives: google

More on Google Transit and how it is Reshaping a Public Service

Some of you know I’ve written a fair bit on Google Transit and how it is reshaping public transit – this blog post in particular comes to mind. For more reading I encourage you to check out the Xconomy article Google Transit: How (and Why) the Search Giant is Remapping Public Transportation, as it provides a lot of good detail on what is going on in this space.

Two things about this article:

First, it really is a story about how the secret sauce for success is combining open data with a common standard across jurisdictions. The fact that the General Transit Feed Specification (a structured way of sharing transit schedules) is used by over 400 transit authorities around the world has helped spur a ton of other innovations.
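For anyone who hasn’t peeked inside one, a GTFS feed is remarkably simple: a zip file of plain-text CSV tables (stops.txt, routes.txt, trips.txt, stop_times.txt and so on). As a rough illustration of how low the barrier to building on this data is, here is a minimal Python sketch that reads a feed – the file name is hypothetical, but the layout is the standard one:

```python
import csv
import io
import zipfile

# A GTFS feed is just a zip of plain CSV files (stops.txt, routes.txt,
# trips.txt, stop_times.txt, ...). The file name below is a placeholder.
FEED_PATH = "vancouver_gtfs.zip"

with zipfile.ZipFile(FEED_PATH) as feed:
    with feed.open("stops.txt") as f:
        stops = list(csv.DictReader(io.TextIOWrapper(f, encoding="utf-8-sig")))
    with feed.open("routes.txt") as f:
        routes = list(csv.DictReader(io.TextIOWrapper(f, encoding="utf-8-sig")))

print(f"{len(stops)} stops and {len(routes)} routes in this feed")

# Every stop carries an id, a human-readable name and a lat/lon pair -
# the raw material a trip-planning app needs to match queries to the network.
for stop in stops[:5]:
    print(stop["stop_id"], stop["stop_name"], stop["stop_lat"], stop["stop_lon"])
```

That simplicity – any agency can export it, any developer can parse it – is a big part of why the standard has spread so quickly.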

A couple of money quotes stand out. This one about the initial reluctance of some authorities to share their data for free (I’m looking at you, Translink board):

“I have watched transit agencies try to monetize schedules for years and nobody has been successful,” he says. “Markets like the MTA and the D.C. Metro fought sharing this data for a very long time, and it seems to me that there was a lot of fallout from that with their riders. This is not our data to hoard—that’s my bottom line.”

and this one about iBART, an app that uses GTFS data to power transit trip planning:

in its home city, San Francisco, the startup’s app continues to win more users: about 3 percent of all trips taken on BART begin with a query on iBART

3%? That is amazing. Last year my home town of Vancouver’s transit authority, Translink, recorded 211.3 million trips. If the iBART app were ported here and enjoyed similar success, that would mean roughly 6.3 million trips planned on iBART (or iTranslink?). That’s a lot of trips made easier to plan.

The second thing I encourage you to think about…

Where else could this model be recreated? What’s the data set, where is the demand from the public, and what company or organization could fulfill the role of Google and give it scale? I’d love to hear your thoughts.

Research Request – Transit Study

After writing yesterday’s post on the economics of open data and transit I’ve really been reflecting on a research question that emerged in the piece: Does having transit data embedded in Google Maps increase ridership?

My hypothesis is that it would… but I did some googling on the topic and couldn’t find anything written on the subject, let alone something that had been rigorously researched and would stand up to peer review. This leads me to believe it could be a great research project. I’m willing to bet that some transit authorities, and Google, would be enormously interested in the results.

Obviously there are a number of variables that might impact public transit ridership: budgets, fleet size growth or cutbacks, the economy, population growth, etc… That said, I’m sure there is someone out there who could think of a methodology that would account for these factors and still allow us to tell if becoming available in Google Maps impacts a city’s ridership levels.

The helpful thing is that there are lots of data points to play with. A brief scan of the public transit feed lists suggests that there are roughly 150 cities that provide Google with GTFS data for their transit schedules. That’s a lot of cities to play with and would allow a study to offset regional variations. I’m also confident that each of the transit authorities on the list publicly publishes its ridership levels (or the numbers could be FOIAed/ATIPed).
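For what it’s worth, here is a rough sketch of how such a study might be set up: a two-way fixed-effects (difference-in-differences style) regression on a city-by-year panel. The data file and column names are hypothetical – the point is just that the confounders above can be handled with fairly standard tools:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per city per year, with annual ridership,
# a dummy for whether the city's GTFS feed was live in Google Maps that
# year, and a few of the confounders mentioned above.
df = pd.read_csv("transit_panel.csv")
# expected columns: city, year, ridership, on_google_maps, population, service_hours

# City fixed effects absorb stable differences between cities, year fixed
# effects absorb economy-wide shocks, and the remaining controls pick up
# budget and service-level changes.
model = smf.ols(
    "ridership ~ on_google_maps + population + service_hours + C(city) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["city"]})

print(model.params["on_google_maps"])   # estimated ridership effect
print(model.pvalues["on_google_maps"])  # and how confident we can be in it
```

The coefficient on on_google_maps is the number of interest: the change in ridership associated with a city’s schedule going live in Google Maps, after netting out city-specific and year-specific effects.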

If anyone has done this study, please let me know – I’d love to know more. If not, and someone is interested in doing it, please go for it! I’m definitely happy to offer whatever support I can.

Why I’m Struggling with Google+

So it’s been a couple of weeks since Google+ launched and I’ll be honest, I’m really struggling with the service. I wanted to give it a few weeks before writing anything, which has been helpful in letting my thinking mature.

First, before my Google friends get upset, I want to acknowledge that the reason I’m struggling has more to do with me than with Google+. My sense is that Google+ is designed to manage personal networks. In terms of social networking, the priority, like at Facebook, is on a soft version of the word “social” – e.g. making the experience friendly and social, not necessarily efficient.

And I’m less interested in the personal experience than in the learning/professional/exchanging experience. Mark Jones, the global communities editor for Reuters, completely nailed what drives my social networking experience in a recent Economist special on the News Industry: “The audience isn’t on Twitter, but the news is on Twitter.” Exactly! That’s why I’m on Twitter. Cause that’s where the news is. It is where the thought leaders are interacting and engaging one another, which is a very different activity than socializing. And I want to be part of all that: getting intellectually stimulated and engaged – and maybe even, occasionally, shaping ideas.

And that’s what threw me initially about Google+. Because of where I’m coming from, I (like many people) initially focused on sharing updates, which invited comparing Google+ to Twitter, not Facebook. That was a mistake.

But if Google+ is about being social above all else, it is going to be more like Facebook than Twitter. And therein lies the problem. As a directory, I love Facebook. It is great for finding people, checking up on their profiles and seeing what they are up to. For some people it is good for socializing. But as a medium for sharing information… I hate Facebook. I so rarely use it, it’s hard to remember the last time I checked my stream intentionally.

So I’m willing to accept that part of the problem is me. But I’m sure I’m not alone, so if you are like me, let me try to further break down why I (and maybe you too) am struggling.

Too much of the wrong information, too little of the right information.

The first problem with Google+ and Facebook is that they have both too much of the wrong information, and too little of the right information.

What do I mean by too much of the wrong? What I love about Twitter is its 140 character limit. Indeed, I’m terrified to read over at Mathew Ingram’s blog that some people are questioning this limit. I agree with Mathew: changing Twitter’s 140 character limit is a dumb idea. Why? For the same reason I thought it made sense back in March of 2009, before Google+ was even a thought:

What I love about Twitter is that it forces writers to be concise. Really concise. This in turn maximizes efficiency for readers. What is it Mark Twain said? “I didn’t have time to write a short letter, so I wrote a long one instead.” Rather than having one, or even thousands, of readers read something that is excessively long, the lone drafter must take the time and energy to make it short. This saves lots of people time and energy. By saying what you’ve got to say in 140 characters, you may work more, but everybody saves.

On the other hand, while I want a constraint over how much information each person can transmit, I want to be able to view my groups (or circles) of people as I please.

Consider the screen shot of TweetDeck below. Look how much information is being displayed in a coherent manner (of my choosing). It takes me maybe, maybe 30-60 seconds to scan all this. In one swoop I see what friends are up to, some of my favourite thought leaders, some columnists I respect… it is super fast and efficient. Even on my phone, switching between these columns is a breeze.

[Screenshot: TweetDeck displaying several Twitter columns side by side]

But now look at Google+. There are comments under each item… that I’m not sure I really care to see. Rather than the efficient stream of content I want, I essentially have a stream of content I didn’t ask for. Worse, I can see, what, maybe 2-5 items per screen, and of course what I do see mixes multiple circles together on a single screen.

[Screenshot: the Google+ stream]

Obviously, some of this is because Google+ doesn’t have any applications to display it in alternative forms. I find the Twitter homepage equally hard to use. So some of this could be fixed if (and hopefully when) Google makes public their Google+ API.

But it can’t solve some underlying problems. Because an item can be almost as long as the author wants, and there can be comments, Google+ doesn’t benefit from Twitter’s 140 character limit. As one friend put it, rather than looking at a stream of content, I’m looking at a blog in which everybody I know is a writer submitting content and in which an indefinite number of comments may appear. I’ll be honest: that’s not really a blog I’m interested in reading. Not because I don’t like the individual authors, but because it’s simply too much information, shared inefficiently.

Management Costs are too high

And herein lies the second problem. The management costs of Google+ are too high.

I get why “circles” can help solve some of the problems outlined above. But, as others have written, it creates a set of management costs that I really can’t be bothered with. Indeed this is the same reason Facebook is essentially broken for me.

One of the great things about Twitter is that it’s simple to manage: follow or don’t follow. I love that I don’t need people’s permission to follow them. At the same time, I understand that this isn’t ideal for managing divergent social groups. A lot of people live lives much more private than mine, or want to be able to share just among distinct, small groups of friends. When I want to do this, I go to email… that’s because the groups in my life are always shifting and it’s simple to just pick the email addresses. Managing circles and keeping track of them feels challenging for personal use. So Google+ ends up taking too much time to manage, which is, of course, also true of Facebook…

Using circles to manage for professional reasons makes way more sense. That is essentially what I’ve got with Twitter lists. The downside here is that re-creating these lists is a huge pain.

And now one unfair reason with some insight attached

Okay, so going to the Google+ website is a pain, and I’m sure it will be fixed. But presently my main Google account is centered around my eaves.ca address and Google+ won’t work with Google Apps accounts so I have to keep flipping to a gmail account I loathe using. That’s annoying but not a deal breaker. The bigger problem is my Google+ social network is now attached to an email account I don’t use. Worse, it isn’t clear I’ll ever be able to migrate it over.

My Google experience is Balkanizing and it doesn’t feel good.

Indeed, this hits on a larger theme: Early on, I often felt that one of the promises of Google was that it was going to give me more opportunities to tinker (like what Microsoft often offers in its products), but at the same time offer a seamless integrated operating environment (like what Apple, despite or because of their control freak evilness, does so well). But increasingly, I feel the things I use in Google are fractured and disconnected. It’s not the end of the world, but it feels less than what I was hoping for, or what the Google brand promise suggested. But then, this is what everybody says Larry Page is trying to fix.

And finally a bonus fair reason that’s got me ticked

Now I also have a reason for actively disliking Google+.

After scanning my address book and social network, Google+ asked me if I wanted to add Tim O’Reilly to a circle. I follow Tim as a thought leader on Twitter, so naturally I thought – let’s get his thoughts via Google+ as well. It turns out, however, that Tim does not have a Google+ account. Later, when I decided to post something, a default setting I failed to notice sent emails to everyone in my circles without a Google+ account. So now I’m inadvertently spamming Tim O’Reilly, who frankly doesn’t need to get crap spam emails from me or anyone. I feel bad for him cause I suspect I’m not the only one doing it. He’s got 1.5 million followers on Twitter. That could be a lot of spam.

My fault? Definitely in part. But I think there’s a chunk of blame that can be heaped onto a crappy UI that seemed to want that outcome. In short: uncool, and not really aligned with the Google brand promise.

In the end…

I remember initially, I didn’t get Twitter; after first trying it briefly I gave up for a few months. It was only after the second round that it grabbed me and I found the value. Today I’m struggling with Google+, but maybe in a few months, it will all crystallize for me.

What I get is that it is an improvement on Facebook, which seems to be becoming the new AOL – a sort of gardened-off internet that is still connected but doesn’t really want you off in the wilds having fun. Does Google+ risk doing the same to Google? I don’t know. But at least circles are clearly a much better organizing system than anything Facebook has on offer (which I’ve really failed to get into). They’re far more flexible and easier to set up. But these features, and their benefits, are still not sufficient to overcome the cost of setting it up and maintaining it…

Ultimately, if everybody moves, I’ll adapt, but I way prefer the simplicity of Twitter. If I had my druthers, I’d just post everything to Twitter and have it auto-post over to Google+ and/or Facebook as well.

But I don’t think that will happen. My guess is that for socially driven users (i.e. the majority of people) the network effects will probably keep them at Facebook. And does Google+ have enough features to pull the more alpha-type user away? I’m not sure. I’m not seeing it yet.

But I hope they try, as a little more competition in the social networking space might be good for everyone, especially when it comes to privacy and crazy end-user agreements.

The False choice: Bilingualism vs. Open Government (and accountability)

Last week a disturbing headline crossed my computer screen:

B.C. RCMP zaps old news releases from its website

2,500 releases deleted because they weren’t translated into French

1) The worst of all possible outcomes

This is a terrible outcome for accountability and open government. When we erase history we diminish accountability and erode our capacity to learn. As of today, Canadians have a poorer sense of what the RCMP has stood for, what it has claimed and what it has tried to do in British Columbia.

Consider this. The Vancouver Airport is a bilingual-designated detachment. As of today, all press releases that were not translated were pulled down. This means that any press release related to the national scandal that erupted after Robert Dziekański – the Polish immigrant who was tasered five times by the RCMP – died is no longer online. Given the shockingly poor performance the RCMP had in managing (and telling the truth about) this issue, this concerns me.

Indeed, I can’t imagine anyone thinks this is a good idea.

The BC RCMP does not appear to think it is a good idea. Consider their press officer’s line: “We didn’t have a choice, we weren’t compliant.”

I don’t think there are any BC residents who believe they are better served by this policy.

Nor do I think my fellow francophone citizens believe they are better served by this decision. Now no one – francophone or anglophone – can find these press releases online. (More on this below.)

I would be appalled if a similar outcome occurred in Quebec or a francophone community in Manitoba. If the RCMP pulled down all French press releases because they didn’t happen to have English translations, I’d be outraged – even if I didn’t speak French.

That’s because the one thing worse than not having the document in both official languages is not having access to the document at all. (And having it hidden in some binder in a barracks that I have to call or visit doesn’t even hint at being accessible in the 21st century.)

Indeed, I’m willing to bet almost anything that Graham Fraser, the Official Languages Commissioner – who is himself a former journalist – would be deeply troubled by this decision.

2) Guided by Yesterday, Not Preparing for Tomorrow

Of course, what should really anger the Official Languages Commissioner is an attempt to pit open and accountable government against bilingualism. This is a false choice.

I suspect that the current narrative in government is that translating these documents is too expensive. If one relies on government translators, this is probably true. The point is, we no longer have to.

My friend and colleague Luke C. pinged me after I tweeted this story saying “I’d help them automate translating those news releases into french using myGengo. Would be easy.”

Yes, myGengo would make it cheap at 5 cents a word (or 15 if you really want to overkill it). But even smarter would be to approach Google. Google Translate – especially between French and English – has become shockingly good. Perfect… no. Of course, this is what the smart and practical people on the ground at the RCMP were doing until the higher-ups got scared by a French CBC story that was critical of the practice. A practice that was ended even though it did not violate any policies.

The problem is there isn’t going to be more money to do translation – not in a world of multi-billion-dollar deficits and in a province that boasts 63,000 French speakers. But Google Translate? It is going to keep getting better and better. Indeed, the more it translates, the better it gets. If the RCMP (or the Canadian government) started putting more documents through Google Translate and correcting them, it would become still more accurate. The best part is… it’s free. I’m willing to bet that if you ran all 2,500 of the press releases through Google Translate right now, 99% of them would come out legible and of a standard good enough to share (again, not perfect, but serviceable). Perhaps the CBC won’t be perfectly happy. But I’m not sure the current outcome makes them happy either. And at least we’ll be building a future in which they will be happy tomorrow.
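To give a sense of how little machinery this would take, here is a sketch of batch-translating the releases with Google’s Cloud Translation API (the v2 REST endpoint). The folder layout and API key are assumptions, and – as argued above – the output is a draft for a human reviewer to correct, not a finished translation:

```python
import os
import pathlib
import requests

# Sketch only: assumes the English press releases sit in a local folder as
# .txt files and that a Google Cloud Translation API key is available.
API_KEY = os.environ["GOOGLE_TRANSLATE_API_KEY"]
ENDPOINT = "https://translation.googleapis.com/language/translate/v2"

out_dir = pathlib.Path("press_releases_fr_draft")
out_dir.mkdir(exist_ok=True)

for release in pathlib.Path("press_releases_en").glob("*.txt"):
    text = release.read_text(encoding="utf-8")
    resp = requests.post(
        ENDPOINT,
        params={"key": API_KEY},
        data={"q": text, "source": "en", "target": "fr", "format": "text"},
    )
    resp.raise_for_status()
    translated = resp.json()["data"]["translations"][0]["translatedText"]

    # Save a draft French version for a human to correct before posting.
    (out_dir / release.name).write_text(translated, encoding="utf-8")
```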

The point here is that this decision reaffirms a false binary: one based on 20th-century assumptions, where translations were expensive and laborious. It holds us back and makes our government less effective and more expensive. But worse, it ignores an option that embraces a world of possibilities – the reality of tomorrow. By continuing to automatically translate these documents today we’d continue to learn how to use and integrate this technology now, and push it to get better, faster. Such a choice would serve the interests of both open and accountable government as well as bilingualism.

Sadly, no one at the head office of the RCMP – or in the federal government – appears to have that vision. So today we are a little more language, information and government poor.

Three asides:

1) I find it fascinating that the media can get mailed a press release that isn’t translated but the public is not allowed to access it on a website until it is – this is a really interesting form of discrimination, one that supports a specific business model and has zero grounding in the law, and indeed may even be illegal given that the media has no special status in Canadian law.

2) Still more fascinating is how the RCMP appears to be completely unresponsive to news stories about inappropriate behavior in its ranks – like, say, the illegal funding of false research to defend the war on drugs – but one story about language politics causes the organization to change practices that aren’t even in violation of its policies. It is sad to see still more evidence that the RCMP is one of the most broken agencies in the federal government.

3) Thank you to Vancouver Sun Reporter Chad Skelton for updating me on the Google aspect of this story.

Wikileaks, free speech and traditional media

I find it fascinating how the US government has chosen to try to dismantle the support network that makes Wikileaks possible – pressuring PayPal, Amazon and numerous others into refusing to enable Wikileaks to work.

They have pressured pretty much every stakeholder with one exception. The traditional media.

Why does the US government rail against Wikileaks and pressure PayPal, yet stay silent about the New York Times’ involvement (or the Guardian’s, or the other media partners’)? The NYT had advance access to the materials and helped publicize them, and the Guardian has been helping users get access to the Wikileaks documents when the Wikileaks website went down.

This fact, above all else, demonstrates the weakness of the government’s legal case. They aren’t going after those who have a clear mission and the (legal) capacity to protect themselves. They are trying to go after those who can be pressured. This is not a sign of confidence. This is a shakedown. More importantly, it is a sign of weakness.

The fact that organizations like Amazon and PayPal have caved so quickly should also be a red flag for anyone who cares about free speech. Essentially, these companies have conceded that – regardless of whether you have broken the law – if the government tells them not to serve you, they will kick you off their platforms so you cannot operate on the net. As one great tweet put it: “If Amazon is uncomfortable with free speech they should get out of the book business.”

I see three outcomes from all this.

Winner: Traditional media. They establish one area where they have a competitive advantage: the capacity to marshal legal forces to not only protect their free speech rights, but to pre-emptively prevent the government from even contemplating attacking them. That’s powerful stuff, especially in a world where governments appear happy not to attack those they disagree with directly, but instead simply attempt to shut down the infrastructure that enables them.

Loser: PayPal, Amazon and the others who caved. Maybe the long-term effect of this will be negligible, but it is also possible that a number of people who are choosing their cloud computing provider right now will be looking at Google, which (eventually) stood up to China, and at Amazon, which caved like a house of cards at the mere breath of dissatisfaction from the US government. Do you really want a company that susceptible to outside pressure running a core component of your business?

Biggest Loser: The US government. The worst part of the US government’s strategy of shutting down Wikileaks is that it has made the story (and the organization) more popular and better known. But more importantly, it is counterproductive. Watching the US government deal with Wikileaks is like watching the record labels try to fight Napster in the late 1990s. Even if you win the battle, you will lose the war. Even if Wikileaks gets shut down, 10 more lookalikes will pop up in its place, some of which will be more mainstream (and so harder to discredit) and others of which will be more radical (and so more damaging). So all the US government has managed to do is make itself look like China when it comes to the rule of law, the governance of the internet, and the issue of censorship. You don’t have to be a rocket scientist to see the hypocrisy of the US government encouraging Twitter not to do maintenance during the Green Revolution in Iran so that people could communicate, while busily trying to shut down Wikileaks when the internet and networked communication don’t serve its own interests. The US has damaged its brand and credibility with little to show for it.

In the end, the system will react (it already has), and this will prompt new infrastructure on the net that better protects freedom of speech and places the capacity to control content even further beyond the reach of governments. There are downsides to all this, including the havoc that organizations like Wikileaks can wreak on businesses and governments, but from a free speech perspective, it will be a good thing.

When Measuring the Digital Economy, Measure the (Creative) Destruction Too

Yesterday I had a great lunch with Justin Kozuch of the Pixels to Product research study which aims “to create a classification system for Canada’s digital media industry and shed light on the industry’s size and scope.”

I think the idea of measuring the size and scope of Canada’s digital media industry is a fantastic idea. Plenty of people – including many governments – are probably very curious about this.

But one thought I had was: if we really want to impress on governments the importance of the digital economy, don’t measure its size. Measure its creative-destructive/disruptive power.

In short, measure the amount of the “normal” economy it has destroyed.

Think of every newspaper subscription canceled, every print shop closed, every board game not played, every ad not filmed, whatever… but think of all the money saved by businesses and consumers because digital alternatives made their options dramatically cheaper.

I’m not sure what the methodology for such a measurement would look like, or even if it is possible. But it would be helpful.

I suspect the new digital businesses that replace them are smaller and more efficient. Indeed, they often have to be dramatically so to justify the switching cost. This is part of what makes them disruptive. Take, for example, Google. Did you know it only has 20,000 employees? I always find that an incredible figure. These 20,000 people are creating systems that are wiping out (and creating) whole industries.

I say all this because often the digital replacement of the economy won’t (initially) be as big as what it replaced – that’s the whole point. The risk is governments and economic planning groups will look at the current size of the digital economy and be… unimpressed. Measuring destruction might be one way to change the nature of the conversation, to show them how big this part of the economy really is and why they need to give it serious consideration.

The Future of Media in Canada – Thoughts for the Canadian Parliamentary Committee

Yesterday, Google presented to the House of Commons Heritage Committee, which has launched a study of “new media.” Already, some disturbing squawks have been heard from some of the MPs. For those who believe in an open internet, and in an individual’s right to choose, there is no need to be alarmed just yet, but this is definitely worth keeping an eye on. It is, however, a good thing that the parliamentary committee is (finally) looking at this, since the landscape has radically changed and the Canadian government needs to adjust.

In his SXSWi talk, Clay Shirky talked about how abundance changes things. Once an item ceases to be scarce – when it is freely available – the dynamics of what we do with it and how we use it radically change.

It is something governments have a hard time wrestling with. One basic assumption that often (but hardly always) underlies public policy is that one is dealing with how to manage scarce resources, like natural resources. But what happens when something that was previously scarce suddenly becomes abundant? The system breaks. This is the central challenge the Heritage Committee MPs need to wrap their heads around.

Why?

Because this is precisely what is happening with the broadcast industry generally and Canadian content rules specifically. And it explains why Canadian content rules are so deeply, deeply broken.

In the old era, the Government’s policy on Canadian content rested on two pillars:

First, the CRTC was able to create scarcity. It controlled the spectrum and could regulate the number of channels. This meant that broadcasters had to do what it said if they wanted to maintain the right to broadcast. This allowed the CRTC to mandate that a certain percentage of content be Canadian (CanCon).

The second pillar was funding. The Government could fund projects that would foster Canadian content. Hence the CBC, the National Film Board of Canada and various other granting bodies.

The problem is, in the digital era, creating scarcity gets a lot more complicated. There are no channels to regulate on the internet. There is just the abundant infinity of internet content. Moreover, you can’t force websites to produce or create Canadian content, nor can you force Canadians to go to websites that do (at least one hopes to God that isn’t a crazy idea the committee gets into its head). The scarcity is gone. The Government can no longer compel Canadians to watch Canadian content.

So what does that mean? There are three implications in my mind.

First. Stop telling Canadians what culture is. The most offensive quote from yesterday’s Globe article came from Bloc Québécois MP Carole Lavallée:

Bloc Québécois MP Carole Lavallée highlighted the often low-brow, low-budget fare on YouTube. She accused Google of confusing leisure with culture.

“Leisure is people who play Star Wars in their basement and film one another and put that on YouTube,” she said. “But culture is something else.”

Effectively, she is telling me – the blog and new media writer – and the hundreds of thousands, if not millions, of other Canadians who have created something that we do not create Canadian culture. Really? I thought the whole point of the Heritage Ministry, and tools like the CBC, was to give voice to Canadians. The internet, and tools like YouTube, have done more on that front than any government program of the last five decades. Lavallée may not like what she sees, but today more Canadian content is created and watched around the world than ever before.

Second. Be prepared to phase out the CRTC. The CRTC’s regulatory capacity depends on being able to create scarcity. If there is no more scarcity, then it ceases to have a lever. Yes, the TV industry is still with us. But for how long? Canadians, like people everywhere, want to watch what they want, when they want. Thanks to the internet, increasingly they can. The CRTC no longer serves the interests of Canadians; it serves to perpetuate both the broadcast industry and the cable industry (yes, even when they fight) by creating a legal scaffolding that props up their business models. Michael Geist understands this – the committee should definitely be talking to him as well.

Third, if the first pillar is dead, the second pillar is going to have to take on a heavier load, and in new and creative ways. The recent National Film Board iPhone app is a fantastic example of how new media can be used to promote Canadian content. If the Commons committee is really worried about YouTube, why not have Heritage Canada create a “Canadian channel” on YouTube where it can post the best videos by Canadians and about Canada? Maybe it can even offer grants to the video creators who get the most views on the channel – clearly they’ve demonstrated an ability to attract an audience. Thinking about more micro-grants that would allow communities to create their own content is another possibility. Ultimately, the Government can’t shape demand, or control the vehicle by which supply is delivered. But it can help encourage more supply – or, better still, reward Canadians who do well online and enable them to create more ambitious content.

The world of new media is significantly democratizing who can create content and what people can watch. Whatever the heritage committee does I hope they don’t try to put the cork back on that bottle. It will, in effect, be muzzling all the new emerging Canadian voices.

Update: Just saw that Sara Bannerman has a very good post about how Canadian content could be regulated online. I like much of what is in her post, but I don’t think “regulation” is the right word. Indeed, most of what she asks for makes business sense – people will likely want Canadian filters for searching (be it for books, content, etc.) as long as those filters are optional.

On Google Maps, In Seattle, Talking up Health Canada

Exciting and fun news today… the Google Maps blog announced that Vancouver has become the first Canadian city to add local experts to Favorite Places on Google Maps. The site shares some of the favourite places of:

Bif Naked (map) – rock singer-songwriter, breast-cancer survivor
Gordon Campbell (map) – Premier of British Columbia
Kit Pearson (map) – children’s book writer, Governor General’s Award winner
Monte Clark (map) – owner of Monte Clark Gallery
Rebecca Bollwitt (map) – Vancouver’s Best Blogger & Top Twitter User for Miss604.com
Rob Feenie (map) – Food Concept Architect for Cactus Restaurants, Iron Chef champion
Ross Rebagliati (map) – Olympic Gold Medallist, snowboarding
Simon Whitfield (map) – Olympic Gold & Silver Medallist, triathlon

and… me! (My map can be found here).

The people at Google asked me for 10 locations and to have a mix of places I like to go as well as places that relate to my work and advocacy around public policy, technology and open government. Very excited to be included and want to thank the people at Google for thinking of including me.

Speaking of public policy and open government, yesterday I drove down to Seattle for a couple of hours to present to their City Council on open data and open government. You can see the presentation here, on the City of Seattle website. Seattle is definitely beginning to look more and more seriously at this issue, especially with the arrival of the new mayor and the leadership of some strong city council members. If you’re in Seattle and feeling passionate about this issue, try linking up with or following Jon Stahl (his blog) and Brett Horvath on Twitter.

Finally, had a great time delivering my talk on The Future of the Public Service to a meeting of Middle Managers of Health Canada’s BC region. Lots of great feedback and conversations after the talk and in the hallways. Many of the ideas shared in this talk are also due to be published in a chapter in O’Reilly Media’s upcoming book on Open Government – very excited about this and will share more about it soon!

Anyway – this is all to say, sorry, no hard core policy or political blog post today…

Google Walk

No, it is not the swagger of a recently bought-out start-up founder; it is the very cool new feature Google Maps just threw in.

Normally, when you get directions on Google Maps, it assumes you are in a car, so it shows you the fastest route as if you were driving. This means that it takes detours around one-way roads and the like.

Now there is a “walking” function, so Google Maps computes the fastest route as though you are on foot. Very cool. Now, what would be really nice is if it “balanced” distance with vertical height so you could pick the flattest walking route. I tend to gravitate to railway tracks. The cool thing about railways is that they can never exceed a 3.5-degree grade (or so I read somewhere once), so I always like walking tracks cause it means I know I’ll never hit too steep a hill.
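(For the curious: the same walking routing is exposed programmatically through the Google Maps Directions web service. Below is a sketch – the origin, destination and API key are just placeholders – which also shows why my “flattest route” wish isn’t answered yet: the response carries distance and time, but no elevation.)

```python
import os
import requests

# Sketch: ask the Google Maps Directions web service for a walking route.
# The origin/destination strings are examples; the API key is assumed.
API_KEY = os.environ["GOOGLE_MAPS_API_KEY"]

resp = requests.get(
    "https://maps.googleapis.com/maps/api/directions/json",
    params={
        "origin": "Commercial-Broadway Station, Vancouver, BC",
        "destination": "Vancouver City Hall, Vancouver, BC",
        "mode": "walking",  # the "walking" function described above
        "key": API_KEY,
    },
)
resp.raise_for_status()
leg = resp.json()["routes"][0]["legs"][0]

print(leg["distance"]["text"], leg["duration"]["text"])
# Note: the response includes no elevation data, so picking the flattest
# route would still require a separate elevation lookup for each step.
```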

Very excited to try this feature out. For an avid walker like me having this feature in my blackberry is key. Very psyched.

H/T to Jeremy V for emailing me the link.

How to make $240M of Microsoft equity disappear

Last week a few press articles described how Google apparently lost to Microsoft in a bidding war to invest in Facebook. (MS won – investing $240M in Facebook)

Did Google lose? I’m not so sure… by “losing” it may have just pulled off one of the savviest negotiations I’ve ever seen. Google may never have been interested in Facebook, only in pumping up its value to ensure Microsoft overpaid.

Why?

Because Google is planning to destroy Facebook’s value.

Facebook – like all social network sites – is a walled garden. It’s like a cellphone company that only allows its users to call people on the same network – for example, if you were a Rogers cellphone user, you wouldn’t be allowed to call your friend who is a Bell cellphone user. In Facebook’s case you can only send notes, play games (like my favourite, scrabblelicious) and share info with other people on Facebook. Want to join a group on Friendster? Too bad.

Social networking sites do this for two reasons. First, if a number of your friends are on Facebook, you’ll also be inclined to join. Once a critical mass of people join, network effects kick in, and pretty soon everybody wants to join.

This is important for reason number two. The more people who join and spend time on their site, the more money they make on advertising and the higher the fees they can charge developers for accessing their user base. But this also means Facebook has to keep its users captive. If Facebook users could join groups on any social networking site, they might start spending more time on other sites – meaning less revenue for Facebook. Facebook’s capacity to generate revenue, and thus its value, therefore depends in large part on two variables: a) the size of its user base; and b) its capacity to keep users captive within its walled garden.

This is why Google’s negotiation strategy was potentially devastating.

Microsoft just paid $240M for a 1.6% stake in Facebook. The valuation was likely based, in part, on the size of Facebook’s user base and the assumption that these users could be kept within the site’s walled garden.

Let’s go back to our cell phone example for a moment. Imagine if a bunch of cellphone companies suddenly decided to let their users call one another. People would quickly start gravitating to those cellphone companies because they could call more of their friends – regardless of which network they were on.

This is precisely the idea behind Google’s major announcement earlier this week. Google launched OpenSocial – a set of common APIs that lets developers create applications that work on any social network that chooses to participate. In short, participating social networks will be able to let their users share information with each other and join each other’s groups. Still more interesting, MySpace has just announced it will participate in the scheme.

This is a lose-lose story for Facebook. If other social networking sites allow their users to connect with one another, then Facebook’s users will probably drift over to one of these competitors – eroding Facebook’s value. If Facebook decides to jump on the bandwagon and also use the OpenSocial APIs, then its user base will no longer be as captive – also eroding its value.

Either way Google has just thrown a wrench into Facebook’s business model, a week after Microsoft paid top dollar for it.

As such, this could be a strategically brilliant move. In short, Google:

  • Saves spending $240M – $1B investing in Facebook
  • Creates a platform that, by eroding Facebook’s business model, makes Microsoft’s investment much riskier
  • Limits its exposure to an anti-trust case by not dominating yet another online service
  • Creates an open standard in the social network space, making it easier for Google to create its own social networking site later, once a clear successful business model emerges

Nice move.