Tag Archives: open source

Bureaucracies and New Media: How the Air Force deals with blogs

A friend forwarded me this interesting diagram, allegedly used by the United States Air Force public affairs agency to assess whether and how to respond to external blogs and the comments that appear on them.

Airforce Blog Reaction

It’s a fascinating document on many levels – mostly I find it interesting to watch how a command-and-control-driven bureaucracy deals with a networked environment like the blogosphere.

In the good old days you could funnel all your communications through the public affairs department – mostly because there were so few channels to manage (TV, radio and print media) and really not that many relevant actors in each one. The challenge with new media is that so many new channels are emerging (YouTube, Twitter, blogs, etc.) that public affairs departments can’t keep up. More importantly, they can’t react in a timely fashion because they often don’t have the relevant knowledge or expertise.

Increasingly, everyone in your organization is going to have to be a public affairs person. Close your organization off from the world and you risk becoming irrelevant. Perhaps not a huge problem for the Air Force, but a giant problem for other government ministries (not to mention companies, or the news media – notice how journalists rarely respond to comments on their articles…?).

This effort by a bureaucracy to develop a methodology for responding to this new and diverse media environment is an interesting starting point. The effort to separate out legitimate complaints from trolls is probably wise – especially given the sensitive nature of many discussions the Air Force could get drawn into. Of course, it also insulates them from people who are voicing legitimate concerns but who will simply be labeled as trolls.

Ultimately however, no amount of methodology is going to save an organization from its own people if the underlying values of the organization are problematic. Does your organization encourage people to treat one another with respect, does it empower its employees, does it value and even encourage the raising of differing perspectives, is it at all introspective? Social media is going to expose organizations’ underlying values to the public – the good, the bad and the ugly. In many instances the picture will not be pretty. Indeed, social media is exposing all of us – as individuals – and revealing just how tolerant and engaging each of us really is. With TV a good methodology could cover that up; with social media, it is less clear that it can. This is one reason why I believe the soft skills of mediation, negotiation and conflict management are so important, and why I feel so lucky to be in that field. Its relevance and importance are only just ascending.

Methodologies like the one shown above represent interesting first steps. I encourage governments to take a look at it because it is at least saying: pay attention to this stuff, it matters! But figuring out how to engage with the world, and with people, is going to take more than just a decision tree. We are all about to see one another for what we really are – a little introspection, and a values check, might be in order…

Blogging: Dealing with difficult comments

Embedded below is an abridged version (10 minutes!) of my 2009 Northern Voice presentation on managing and engaging the community that develops around one’s blog. Specifically, one goal of this presentation was to pull in some of the thinking from the negotiation and conflict management space and see how it might apply to dealing with people who comment on your blog. Hopefully, people will find it interesting.

Finally, a key lesson that came to me while developing the presentation is that most blogs, social media projects, and online projects in general really need a social contract – or, as Shirky describes it, a bargain – that the organizer and the community agree to. Often such contracts (or bargains) are strongly implied, but I believe it is occasionally helpful to make them explicit – particularly on blogs or projects that deal with contentious (politics) or complicated (many open source projects) issues.

At 8:43 in the presentation I talk about what I believe is the implicit bargain on my site. I’ve thought about codifying it, especially as I get more and more commenters. That said, the community that has developed around this blog – mostly of people I’ve never met – is fantastic, so there hasn’t been an overwhelming need.

A big thank you to Bruce Sharpe for posting a video of the presentation.

So, I hope this brief presentation is helpful to some of you.

(Notice how many people are coughing! You can tell it was winter time!)

How GCPEDIA will save the public service

GCPEDIA (also check out this link) is one of the most exciting projects going on in the public service. If you don’t know what GCPEDIA is – check out the links. It is a massive wiki where public servants can share knowledge, publish their current work, or collaborate on projects. I think it is one of two revolutionary changes going on that will transform how the public service works (more on this another time).

I know some supporters out there fear that GCPEDIA – if it becomes too successful – will be shut down by senior executives. These supporters fear the idea of public servants sharing information with one another will simply prove to be too threatening to some entrenched interests. I recognize the concern, but I think it is ultimately flawed for two reasons.

The less important reason is that a growing number of senior public servants appear to “get it.” They understand that this technology – and, more importantly, the social changes in how people work and organize themselves that come with it – is here to stay. Moreover, killing this type of project would simply send the worst possible message about public service renewal – it would be an admission that any real effort at modernizing the public service is nothing more than window dressing. Good news for GCPEDIA supporters – but also not really the key determinant.

The second, and pivotal reason, is that GCPEDIA is going to save the public service.

I’m not joking.

Experts and observers of the public service have been talking for the last decade about the demographic tsunami that is going to hit it. The tsunami has to do with age: in short, a lot of people are going to retire. In 2006, 52% of public servants were aged 44-65; in 1981 it was 38%, and in 1991 it was 32%. Among executives the average ages are higher still: EX-1s (the most junior executive level) have an average age of 50, EX-2s 51.9, EX-3s 52.7 and EX-4s 54.1 (numbers from David Zussman – the link is a PowerPoint deck).

Remember these are average ages.

In short, a lot of people are going to leave the public service at some point in the next 10 years. Indeed, in the nightmare scenario they all leave within a short period of time – say 1-2 years – and suddenly an enormous amount of knowledge and institutional memory walks out the door with them. Such a loss would have staggering implications. Some will be good – new ways of thinking may become easier. But most will be negative: the amount of work and knowledge that will have to be redone to regain the lost institutional memory cannot be overstated.

GCPEDIA is the public service’s best, and perhaps only, effective way to capture the social capital of an entire generation in an indexed and searchable database that future generations can leverage and add to. Tens of millions of man-hours, and possibly far more, are at stake.

This is why GCPEDIA will survive. We can’t afford for it not to.

As an aside, this has one dramatic implication: people are already leaving, so we need to populate GCPEDIA faster. Indeed, if I were a Deputy Minister I would immediately create a 5-person communications team whose purpose was twofold. First, to spread the word about the existence of GCPEDIA and to help and encourage people to contribute to it. Second, to interview key boomers who may not be comfortable with the technology and transcribe their work for them onto the wiki. Every department has a legend who is an ES-6 and who will retire an ES-6, but who everybody knows knows everything about everything that ever happened, why it happened and why it matters. It’s the person everybody wants to consult with in the cafeteria. Get that person, and find a way to get their knowledge into the wiki, before their pension vests.

Wedding Open Source to Government Service Delivery

One of the challenges I’m most interested in is how we can wed “open” systems to government hierarchies. In a lecture series I’ve created for Health Canada I’ve developed a way of explaining how we already do this with our 911 service.

To begin, I like using 911 as an example because people are familiar and comfortable with it. More importantly, virtually everyone agrees that it is not only an essential piece of modern government service but also among the most effective.

What is interesting is that 911, unlike many government programs, relies on constant citizen input. It is a system that has been architected to be participatory. Indeed, it only works because it is participatory – without citizen input the system falls apart. Specifically, it very effectively aggregates the long tail of knowledge within a community to deliver an essential service, with pinpoint accuracy, to the location where it is needed at the time it is needed.

I’ve visualized this in the slide below (explanation below the fold).

long tail public policy

Imagine the white curve represents all of the police, fire and ambulance interventions in a city. Many of the most critical interventions are ones the police force and ambulance service determine themselves (shaded blue). For example, the police are involved in an investigation that results in a big arrest, or the ambulance parks outside an Eagles reunion concert knowing that some of the boomers in attendance will be “over-served” and will suffer a heart attack.

However, while investigations and predictable events may account for some police/fire/ambulance actions (and possibly those that receive the most press attention), the vast majority of arrests, fires fought and medical interventions result from plain old 911 calls made by ordinary citizens (shaded red). True, many of these are false alarms, or are resolved with minimal effort (a fire extinguisher deals with the problem, or a minor amount of drugs is confiscated but no arrests are made). But the sheer quantity of these calls means that while the average quality may be low, they still account for the bulk of successful (however defined) interventions. Viewed in this light, 911 is a knowledge aggregator, collecting knowledge from citizens to determine where police cars, fire trucks and ambulances need to go.

Thus, to find a system that leverages citizens’ knowledge and is architected for participation we don’t need to invent something new – there are existing systems, like 911, that we can learn from.

With this in mind, two important lessons about 911 leap out at me:

1) It is a self-interested system: While many 911 callers are concerned citizens calling about someone else, I suspect the majority of calls – and the most accurate calls – are initiated by those directly or immediately affected by a situation. People who have been robbed, are suffering a heart attack, or have a fire in their kitchen are highly motivated to call 911. Consequently, the system leverages our self-interest, while still allowing good Samaritans to contribute as well.

2) It is narrowly focused in its construct: 911 doesn’t ask callers or permit callers to talk about the nature of justice, the history of fire, or the research evidence supporting a given medical condition. It seeks a very narrow set of data points: the nature of the problem and its location. This is helpful to both emergency response officials and citizens. It limits the quantity of data for the former and helps minimize the demands on the latter.

These, I believe, are the secret ingredients of citizen engagement in the future: a passive type of engagement that seeks specific, painless information/preferences/knowledge from citizens to augment or redistribute services more effectively.
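To make that narrow construct concrete, here is a minimal sketch in Python of what such a deliberately narrow citizen report might look like as a data structure. The field names and categories are hypothetical, not drawn from any real dispatch system; the point is what gets left out.

```python
from dataclasses import dataclass
from enum import Enum


class IncidentType(Enum):
    """The handful of categories a dispatcher needs in order to route a call."""
    FIRE = "fire"
    MEDICAL = "medical"
    CRIME = "crime"


@dataclass
class CitizenReport:
    """A deliberately narrow report: what is happening, and where.

    Hypothetical schema. It captures only the data points the responding
    service needs, nothing about the history of fire or the nature of justice.
    """
    incident_type: IncidentType
    location: str       # street address or intersection
    details: str = ""   # optional one-line description


def dispatch(report: CitizenReport) -> str:
    """Route a report to the appropriate service based solely on its type."""
    routing = {
        IncidentType.FIRE: "fire department",
        IncidentType.MEDICAL: "ambulance service",
        IncidentType.CRIME: "police",
    }
    return f"Send {routing[report.incident_type]} to {report.location}"


if __name__ == "__main__":
    print(dispatch(CitizenReport(IncidentType.FIRE, "101 Main St", "kitchen fire")))
```

The value of the sketch is the omission: by asking for so little, the system keeps the cost of contributing near zero for the citizen and the cost of processing low for the responder.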

It isn’t sexy, but it works. Indeed we have 20 years of evidence showing us how well it works with regards to one of our most important services.

The internet is messy, fun and imperfect, just like us

Last October 23rd David Weinberger gave the 2008 Bertha Bassam Lecture at the University of Toronto. I happened to be in Toronto but only found out about the lecture on the 24th. Fortunately Taylor pointed out that the lecture is online.

I’ve never met David Weinberger (his blog is here) but I hope to one day. I maintain that his book Small Pieces Loosely Joined remains one of the best books, if not the best book, written about the internet and society. Everything is Miscellaneous is a fantastic read as well.

The Bertha Bassam lecture is classic Weinberger: smart, accessible, argumentative and fun. But what I love most about Weinberger is how he constantly reminds us that the internet is us…  and that, as a result, it is profoundly human: messy, fun, argumentative, and above all imperfect. Indeed, the point is so beautifully made in this lecture I felt a little emotional listening to it.

Contrast that to the experience of listening to someone like Andrew Keen, a Weinberger critic whom this lecture again throws into stark relief. After reflecting on Weinberger, I think Keen dislikes the internet and web 2.0 mostly because he dislikes people. It may sound harsh, but if you ever hear him speak – or even read his writing – it is smart, argumentative and interesting, but it oozes an anger and condescension that is contemptuous and sometimes even borders on hatred. If the debate is reduced to whether we should, however imperfectly, try to connect to and learn from one another or whether we should just hold others in contempt, I think Weinberger is going to win every time. At least, I know where I stand.

Indeed, this blog is a triumph of Weinberger’s internet humanism. It is a small effort to write, to share, and to celebrate the complexity and opportunity of the world with those I know and those I don’t, but who share a similar sense of possibility. Will millions read this blog? No. But I enjoy the connections, old and new, I make with the much more modest number of people who do.

I hope you’ll watch this lecture or, if you haven’t the time, download the audio to your iPod and listen to it during your commute home. (Not having the slides won’t have a big impact.)

From here to open – How the City of Toronto began opening up

Toronto the open

For me, the biggest buzz at ChangeCamp Toronto was that the city showed up with lots of IT staff (many of them quite senior) who were trying to better understand how they could enable others to use their data and help citizens identify and solve problems. In fact, the City of Toronto ran what I believe will later be seen as the most enduring session, in which it asked what data it should start making available immediately (as APIs).

For those not in the know, think of an API as a plug that, rather than delivering electricity, delivers access to a database.

The exciting outcome is that web designers, coders and companies can then use this data to better deliver services, coordinate activities in neighborhoods, make government more transparent, or analyze problems. For example, imagine if all the information regarding restaurants’ health violations were not hidden deep within a government website (in a PDF format that is not easily searchable by Google) but were available on every restaurant review website. Or if road closures were available in a data stream, so a Google Maps application could show which roads were closed on any given day – and email you if they were in your neighborhood.
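To make the road closure example concrete, here is a minimal sketch in Python of what consuming such a data stream could look like, assuming a city published its closures as JSON at some endpoint. The URL and the field names (street, neighbourhood, reopens) are entirely hypothetical; a real city API would publish its own schema.

```python
import json
import urllib.request

# Hypothetical endpoint; a real city would publish its own URL and schema.
CLOSURES_URL = "https://example.gov/api/road-closures.json"


def closures_in_neighbourhood(neighbourhood: str) -> list:
    """Fetch the current road closures and keep only those in one neighbourhood."""
    with urllib.request.urlopen(CLOSURES_URL) as response:
        closures = json.load(response)
    return [c for c in closures if c.get("neighbourhood") == neighbourhood]


if __name__ == "__main__":
    for closure in closures_in_neighbourhood("Riverdale"):
        print(closure["street"], "closed until", closure["reopens"])
```

From a list like that it is a short step to plotting the closures on a map or emailing subscribers in the affected neighbourhood – precisely the kind of service a third party could build once the data is exposed.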

This is the future that cities like Toronto are moving towards. But why Toronto? How did it arrive at this place? How is it that the City of Toronto sent staff to ChangeCamp Toronto?

The emergence of open in Toronto

I’ve tried to map this evolution. I may have missed steps and encourage people to email me or post comments if I have.

evolution of open data TO

The first step was taken when people like David Crow created a forum – Barcamp – around which some of Toronto’s vibrant tech and social tech community began to organize itself. This not only brought the community together but it also enabled unconferences to gain traction as a fun and effective approach to addressing an issue.

Then, in late 2006 the Toronto Transit Commission (TTC) issued a Request for Proposals (RFP) for a redesign of its website. Many in the tech community – who had no interest in doing the redesign – were horrified by the RFP. It was obvious that, given the specifications, the new website would not achieve its potential. A community self-organized around redesigning the RFP. Others took note and, because they cared about the TTC, wanted to also talk about simple non-website changes the TTC could make to improve services. TransitCamp was thus born and – with enormous trepidation – some TTC officials showed up (all of whom should be loudly applauded). The result? The tech and social tech community in Toronto was engaged in civic matters and their activities were beginning to make it onto the city government’s radar.

Other Camps carried on through 2007 and 2008 (think OpenCities), building momentum in the city. Then, in November of 2008 – a breakthrough. The City of Toronto hosted an internal Web 2.0 conference and invited Mark Surman – executive director of the Mozilla Foundation and long-time participant in the Toronto social tech space – to deliver a keynote entitled “A City that Thinks like the Web”. After the talk, the Mayor of Toronto stood up and said:

“… I’ve been emailing people about your challenges. Open data for Google Transit is coming by next June, and I don’t see why we shouldn’t open source the software Toronto creates.” He also said “I promise the City will listen” if Torontonians set up a site like FixMyStreet.com.

You can hear Mayor Miller’s full response here:

In short, the Mayor promised to begin talking about opening up (and open sourcing) the city – freeing up Ryan Merkley and the City of Toronto IT team to attend ChangeCamp.

Lessons for ChangeCamp Vancouver

It remains unclear to me whether ChangeCamp is the right venue for tackling this opportunity in Vancouver.

We in Vancouver are not as far along the arc as Toronto is. We do, however, have some advantages: the map is more obvious to us, and some of us have good relationships with key staff at the city. But this process takes time. To replicate the success in Toronto, governments here on the west coast need not only to be at ChangeCamp, they need to be running sessions and deeply engaged. For this to occur, cultures need to shift, new ideas need to percolate within government institutions and agencies, and relationships need to be built. All this will take time.

Creating a City of Vancouver that thinks like the web

Last November my friend Mark Surman – Executive Director of the Mozilla Foundation – gave this wonderful speech entitled “A City that Thinks Like the Web” as a lunchtime keynote for 300 councillors, tech staff and agency heads at the City of Toronto’s internal Web 2.0 Summit.

During the talk the Mayor of Toronto took notes, BlackBerried his staff to find out what had been done and what was still possible, and committed the City of Toronto to follow Mark’s call to:

  1. Open our data. transit. library catalogs. community centre schedules. maps. 311. expose it all so the people of Toronto can use it to make a better city. do it now.
  2. Crowdsource info gathering that helps the city.  somebody would have FixMyStreet.to up and running in a week if the Mayor promised to listen. encourage it.
  3. Ask for help creating a city that thinks like the web. copy Washington, DC’s contest strategy. launch it at BarCamp.

The fact is every major city can and should think like the web. The first step is to get local governments to share (our) data. We, collectively as a community, own this data and could do amazing things with it, if we were allowed. Think of how Google Maps is now able to use Translink data to show us where bus stops are, what buses stop there and when the next two are coming!

Google Map Transit YVR

Imagine if anyone could create such a map, mashing up a myriad of data sets from local governments, provincial ministries and StatsCan. Imagine the services that could be created, the efficiencies gained, the research that would be possible. The long tail of public policy analysis could flourish, with citizen coders, bloggers, non-profits and companies creating ideas, services, and solutions the government has neither the means nor the time to address.
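As a sketch of what “anyone could create such a map” means in practice, here is a short Python example that reads the stops.txt file from a GTFS feed (the format transit agencies publish for Google Transit) and finds the stop nearest a given point. It assumes the agency’s feed is openly downloadable, which is exactly the open data question at issue; the column names are the standard GTFS ones.

```python
import csv
import math


def nearest_stop(stops_txt_path: str, lat: float, lon: float) -> dict:
    """Find the stop closest to (lat, lon) in a GTFS stops.txt file.

    stop_id, stop_name, stop_lat and stop_lon are standard GTFS columns.
    Distance is a simple planar approximation, good enough at city scale.
    """
    best, best_dist = None, float("inf")
    with open(stops_txt_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            d = math.hypot(float(row["stop_lat"]) - lat, float(row["stop_lon"]) - lon)
            if d < best_dist:
                best, best_dist = row, d
    return best


if __name__ == "__main__":
    stop = nearest_stop("stops.txt", 49.2827, -123.1207)  # downtown Vancouver
    print(stop["stop_id"], stop["stop_name"])
```

Everything beyond that – showing upcoming departures, or mashing the stops up with other local data sets – is the same pattern applied to more files.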

If data is the basic food source of such an online ecosystem, then having it categorized, structured and known is essential. The second step is making it available as APIs. Interestingly, the City of Vancouver appears to have taken that first step. VanMaps is a fascinating project undertaken by the City of Vancouver and I encourage people to check it out. It is VERY exciting that the city has done this work and, more importantly, made it visible to the public. This is forward-thinking stuff. The upside is that, in order to create VanMaps, all the data has been organized. The downside is that – as far as I can tell – the public is restricted to looking at, but not accessing, the data. That means integrating these data sets with Google Maps, or mashing them up with other data sets, is not possible (please correct me if I’ve got it wrong).

Indeed, VanMaps’ Terms of Use suggest that even if the data were accessible, you wouldn’t be allowed to use it.

VanMaps EULA

Item 4 is worth noting: VanMap may only be used for internal business or personal purposes. My interpretation is that any mashup using VanMap data is verboten.

But let’s not focus on that for the moment. The key point is that creating a Vancouver that thinks like the web is possible. Above all, it increasingly looks like the IT infrastructure to make it happen may already be in place.

Lessons from the Globe and Mail's Policy Wiki

I’ve been observing the Globe Policy Wiki with enormous interest. I’m broadly supportive of all of Mathew Ingram’s experiments and efforts to modernize the Globe. That said, my sense is that this project faces a number of significant challenges – some from the technology, others around how it is managed. Understanding and cataloging the ups and downs of such an effort is essential. At some point (I suspect in the not too distant future) wikis will make their way into the government’s policy development process – the more we understand the conditions under which they flourish, the more likely such experiments will be undertaken successfully.

Here are some lessons I’ve taken away:

1. The problem of purpose: accuracy vs. effectiveness

Wikis are clearly effective at spreading concrete, specific knowledge. Software manuals, best-practice lists and Wikipedia work because – more often than not – they seek to share a concrete, objective truth. Indeed, “the goal” of Wikipedia is to strive for verifiable accuracy. Consequently, a Wikipedia article on Mohandas Karamchand Gandhi can identify that he was born on October 2nd, 1869. We can argue whether or not this is true, but he was born on a specific day, and people will eventually align around the most accurate answer. Same with a software wiki – a software bug is caused by a specific, verifiable set of circumstances. Indeed, because the article is an assemblage of facts, its contributors have an easier time pruning or adding to it.

Indeed, it is where there is debate and subjective interpretation that things become more complicated in Wikipedia. Did George Bush authorize torture? I’ll bet Wikipedia has hosted a heady debate on the subject that, as yet, remains unresolved.

A policy wiki, however, lives in this complicated space. This is because the goal of a policy wiki is not accuracy. Policies are not an assemblage of facts whose accuracy can be debated. A policy is a proposal – an argument – informed by a combination of assumptions, evidence and values. How does one edit an argument? If we hold different values, what do I edit? If I have contradictory evidence, what do I change? Can or should one edit a proposal one simply doesn’t agree with? In Wikipedia or in an online software manual the response would be: “Does it make the piece more accurate?” If the answer is yes, then you should.

But what is the parallel guiding criterion for a policy wiki? “Does it make the policy more effective?” Such a question is significantly more open to interpretation. Who defines effective?

It may be that a policy wiki will only work within communities that share a common goal, or that at least have a common metric for assessing effectiveness. More likely, wikis in areas such as public policy may require an owner who ultimately acts as an arbiter, deciding which edits stand and which get deleted.

2. Combining voting with editing is problematic.

The goal of having people edit and improve a policy proposal runs counter to that of having them vote on it. A wiki is, by definition, dynamic. Voting – or any preference system – implies that what is being voted on is static and unchanging; a final product that different people can assess. How can users vote in favour of something if, the next day, it can be changed into something they disagree with? By allowing simultaneously for voting and editing I suspect the wiki discourages both: voters are unsure whether what they are voting for will stay the same, and editors are likely wary of changing anything too radically because the voting option suggests proposals shouldn’t change too much – undermining the benefits of the wiki platform.

3. While problematic for editing, the Policy Wiki could be a great way to catalog briefs

One thing that is interesting about the wiki is that anyone can post their ideas. If the primary purpose were to create a catalogue of ideas, the policy wiki could be a great success. Indeed, given that people are discouraged from radically altering policy notes, this is effectively what the Policy Wiki is (is it still a wiki?). Presently the main obstacle to this success is the layout. The policy briefs currently appear in a linear order based on when they were submitted, which means a reader must scroll through them one by one. There is no categorization or filtering mechanism to help readers find policies they specifically care about. A search feature would enable readers to find briefs with key words. Also, enabling users to “tag” briefs would allow readers to filter the briefs in more useful ways: one could, for example, ask to see briefs tagged “environment” or “defense” and get to the content one cares about faster.
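As an illustration of how little machinery such tagging and filtering would require, here is a minimal sketch in Python. The briefs and their tags are invented for the example.

```python
# Invented sample data: each brief carries a title and a set of tags.
briefs = [
    {"title": "A Flat Tax for Canada", "tags": {"economy", "taxation"}},
    {"title": "Carbon Pricing Options", "tags": {"environment", "economy"}},
    {"title": "Arctic Sovereignty", "tags": {"defense", "foreign policy"}},
]


def briefs_tagged(tag: str) -> list:
    """Return every brief carrying the given tag."""
    return [b for b in briefs if tag in b["tags"]]


if __name__ == "__main__":
    for brief in briefs_tagged("environment"):
        print(brief["title"])
```

A handful of lines like these, with an interface on top, is all that separates a chronological list from a browsable catalogue.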

Such filtering approaches might distribute readers more accurately based on their interests. In a recent blog post Ingram notes that the Flat Tax briefing note received the most page views. But this should hardly come as a surprise (and probably should not be interpreted as latent interest in a flat tax). The flat tax brief was the first brief on the list. Consequently, casual observers showing up on the site to see what it was all about were probably just clicking on the first brief to get a taste.

ChangeCamp: Pulling people and creativity out of the public policy long tail

ChangeCamp is a free participatory web-enabled face-to-face event that brings together citizens, technologists, designers, academics, policy wonks, political players, change-makers and government employees to answer one question: How do we re-imagine government and governance in the age of participation?

What is ChangeCamp? It is the application of “the long tail” to public policy.

It is a long-held – and misleading – assumption that ordinary citizens don’t care about public policy. The statement isn’t, in and of itself, false. Many, many, many people truly don’t care that much. They want to live their lives focusing on other things – pursuing other hobbies or interests – but there are many of us who do care. Public policy geeks, fans, followers, advocates, etc… we are everywhere; we’ve just been hidden in a long tail that saw the marketplace and capacity for developing and delivering public policy restricted to a few large institutions. The single most important lesson I learnt from my time with Canada25 is that it doesn’t have to be that way.

Did Canada25 get a new generation of Canadians, aged 20-35 engaged in public policy? I don’t know.

What I do know is that, at the very minimum, we harnessed an enormous, dispersed desire of many Canadians to participate in, and help shape, the public policy debates affecting the country. Most importantly, we did this by doing three things:

  1. we aggregated the people who cared about public policy and gave them peers, friends and a sense of community.
  2. we provided a vehicle through which to channel their energy.
  3. by combining 1 and 2, and by using simple technology and a low-cost approach, we dramatically lowered the barriers (and costs) to entry for credibly participating in these national debates.

Today, the technology to enable and aggregate people and their ideas, to connect them with peers and to create community, is more powerful still. Our capacity to challenge, push, help, cooperate with, leverage and compete with the large institutional public policy actors has never been greater. This, for me, is the goal of ChangeCamp: what concrete tools can we build, what information can we demand be opened up, what new relationships can we forge to re-imagine how we – the citizens who care – participate in the creation of public policy and the effective delivery of public services? Not to compete with or replace the traditional institutional actors, but to ensure more and better ideas are heard and increasingly effective and efficient services are created.

Long tail of public policy

Individually, none of us may have the collective power of a government ministry or even the resources of most think tanks. But collectively, linked together by technology and powered by our energy and spare capital, the long tail of policy geeks and ordinary citizens is bigger, nimbler, more creative and faster than anything else. Do I know that the long tail of policy can be set free? No. But ChangeCamp seems like a fun place to start experimenting, brainstorming and sharing ways we can make this country better.

Microsoft: A case study in mismanaging a business ecosystem

A lot of fuss has been made about Microsoft’s inability to compete in the online space and the web specifically. Indeed, it is widely acknowledged that Microsoft was slow to understand the web’s implications and adjust its product lines accordingly. How did the largest, most successful software company in the world fail to predict or even, once the future became clear, effectively adapt to the rise of the internet? More importantly, why hasn’t it been able to acquire its way out of trouble?

Numerous articles have been written on this, many focusing on Microsoft’s strategy and the fact that it likely faced a disruptive innovation problem. I’d like to supplement that analysis by focusing on the predatory way Microsoft managed and engaged its business ecosystem in the 1990s. I’ve not seen this analysis before, so I thought I would throw it out there.

The 1990s were a good time for Microsoft. It experienced tremendous growth and its operating system was by far the dominant choice in the marketplace. It had tremendous leverage over everyone in its business ecosystem, including its competitors, customers and complementors. While this was seen as a source of strength (and profit), it also laid the foundation for many of its problems. The story of Microsoft’s competitors in its traditional marketplace – especially those that have adopted an open source model such as Linux, Mozilla and Apache – is well documented and forms the core of the traditional disruptive innovation thesis. But I think Microsoft’s inability to counter these threats, as well as its inability to compete in new spaces – such as against Yahoo! or Google – isn’t just a result of the fact that it crushed its traditional competitors; it is also due to the mismanagement of its relationships with its complementors and partners. More importantly, the disruptive innovation thesis fails, on its own, to explain why Microsoft hasn’t been able to acquire itself out of its problems.

I’ve been told that one of Microsoft’s great strengths is that it has fantastic tools for developers (I’m not a coder so I can’t comment myself). However, in the 1990s and early 2000s, Microsoft lacked a sophisticated or long-term strategy for engaging the software products and companies those developers created. Given that Microsoft was sitting atop the computer software ecosystem, the company had one goal – staying there. This led it to view anyone as a potential competitor – or if not a competitor, then at least someone eating into profits it could otherwise capture. Rather than balancing the growth of the value network with trying to capture its fair share, Microsoft prioritized the latter over the former. Consequently, many companies that produced products within the Microsoft ecosystem – particularly for Windows – were often seen not as complementors but as rivals. Microsoft was aggressive in dealing with them – it was gracious in that it would usually offer to buy them out, on its terms – but always looming in the background was the threat that if you didn’t sell, it would copy what you did. Consequently, many little companies that designed applications that enhanced Windows were forced to sell – or were put out of business after Microsoft copied their products and integrated them into the operating system.

A business ecosystem is like a natural one. It doesn’t matter how nutrient-rich the environment (say, one with excellent development tools): if emerging lifeforms are consistently snuffed out, pretty soon they will elect to grow and evolve elsewhere – even in places where the nutrients are weaker. This is precisely what I suspect started to happen. Likely, fewer and fewer developers would touch the Microsoft ecosystem with a 10-foot pole, because they would either be bought out on unfavorable terms at an early stage (before they were too valuable) or, worse, Microsoft would simply crush them by using its enormous resources to replicate their products and eat into their business.

The repercussion of this is that Microsoft saw fewer and fewer new and innovative products being created for its platforms. Programmers and developers shifted to other platforms, or created whole new platforms, where they would be free to grow ideas. This, I believe, prevented Microsoft from understanding how the web would change its business. Not only did its current profits create a disincentive to altering its business strategy, but it had snuffed out one of the few groups of people that could warn it, educate it and challenge it about the impending changes – its complementors and partners. Equally important, it diminished the pool of potential acquisition targets whose culture, technology and processes might have helped Microsoft adapt. There were simply not that many mid-sized mammals in the ecosystem: Microsoft had prevented them from evolving.

Today – based on conversations I’ve had with some people at Microsoft – I get the sense that they are trying to become a better partner (or, at least, that they are aware of the problem). Perhaps Microsoft will succeed in becoming a better partner. It won’t, however, be easy. Changes to how one treats complementors and partners often require rethinking the very culture of an organization. This is never an easy or quick process. In addition, it takes time to rebuild trust and attract new blood into the ecosystem… and any misstep will count dearly against you.

There are also almost certainly some interesting lessons in this for other dominant players – such as Google. Will Google behave differently? I don’t know. In many regards Microsoft’s behaviour was rational: it was seeking to preserve its position and maximize its share of the pie. This was made all the tougher because its market was evolving and the future was unclear. No one knew which pieces of the value network would be critical (and therefore most profitable), and so Microsoft was simply trying to stake out as many of them as possible. It is easy to imagine Google behaving in a similar manner. But I suspect that if it does, it may also find it hard to escape Microsoft’s fate.

Big thank you to David H. for pointing out some typos and errors.