Author Archives: David Eaves


About David Eaves

Connecting: public service renewal, public policy, open-source, open data & government, negotiation, collaboration, life, fun and rockband drumming http://www.eaves.ca

Mozillians: Announcing Community Metrics DashboardCon – January 21, 2014

Please read background below for more info. Here’s the skinny.

What

A one-day mini-conference for Mozillians about community metrics and dashboards, originally planned (tentatively) for Vancouver on January 14th and now being held in San Francisco on January 21st and 22nd, 2014 (remote participation possible).

Update: Apologies for the change of date and location; this event has sparked a lot of interest, so we had to move it somewhere we could manage the number of people.

Why?

It turns out that in the past 2-3 years a number of people across Mozilla have been tinkering with dashboards and metrics in order to assess community contributions, effectiveness, bottlenecks, performance, etc… For some people this is their job (looking at you, Mike Hoye), for others it is something they arrived at by necessity (looking at you, SUMO group), and for others it was just a fun hobby or experiment.

Certainly I (and I believe co-collaborators Liz Henry and Mike Hoye) think metrics in general and dashboards in particular can be powerful tools, not just to understand what is going on in the Mozilla community, but as a way to empower contributors and reduce the friction to participating at Mozilla.

And yet as a community of practice, I’m not sure those interested in converting community metrics into some form of measurable output have ever gathered together. We’ve not exchanged best practices, aligned around a common nomenclature or discussed the impact these dashboards could have on the community, management and other aspects of Mozilla.

Such an exercise, we think, could be productive.

Who

Who should come? Great question. Pretty much anyone who is playing around with metrics around community, participation, or something parallel at Mozilla. If you are interested in participating, please sign up here.

Who is behind this? I’ve outlined more in the background below, but this event is being hosted by myself, Mike Hoye (engineering community manager) and Liz Henry (bugmaster).

Goal

As you’ve probably gathered, the goals are to:

  • Get a better understanding of what community metrics and dashboards exist across Mozilla
  • Learn about how such dashboards and metrics are being used to engage, manage or organize communities and/or influence operations
  • Exchange best practices around both the development and the use/application of dashboards and metrics
  • Stretch goal – begin to define some common definitions for metrics that exist across Mozilla, to enable portability of metrics across dashboards (see the sketch just after this list for what a shared definition might look like).
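
To make the stretch goal concrete, here is a minimal sketch of what a shared, dashboard-agnostic metric definition might look like. Everything in it – the field names, the example metric and the data sources – is my own hypothetical illustration, not an existing Mozilla schema.

```python
# A minimal sketch of a portable community-metric definition. All names here
# (fields, the example metric, the data sources) are hypothetical, intended only
# to show the kind of shared vocabulary different dashboards could agree on.
from dataclasses import dataclass, field

@dataclass
class MetricDefinition:
    name: str          # canonical identifier shared across dashboards
    description: str   # what the metric means, in plain language
    unit: str          # e.g. "contributors", "patches", "days"
    period: str        # aggregation window: "weekly", "monthly", ...
    sources: list = field(default_factory=list)  # systems the data is drawn from

# Example: a definition any dashboard could implement the same way.
active_contributors = MetricDefinition(
    name="active_contributors",
    description="Distinct people with at least one recorded contribution in the period",
    unit="contributors",
    period="monthly",
    sources=["bugzilla", "hg", "sumo"],
)
print(active_contributors)
```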

Hope this sounds compelling. Please feel free to email or ping me if you have questions.

—–

Background

I know that my co-collaborators – Mike Hoye and Liz Henry – have their own reasons for ending up here. I, as many readers know, am deeply interested in understanding how open source communities can combine data and analytics with negotiation and management theory to better serve their members. This was the focus of my keynote at OSCON in 2012 (posted below).

For several years I tried with minimal success to create some dashboards that might provide an overview of the community’s health as well as diagnose problems that were harming growth. Despite my own limited success, it has been fascinating to see how more and more individuals across Mozilla – some developers, some managers, others just curious observers – have been scraping data they control or can access to create dashboards to better understand what is going on in their part of the community. The fact is, there are probably at least 15 different people running community-oriented dashboards across Mozilla – and almost none of us are talking to one another about it.

At the Mozilla Summit in Toronto, after speaking with Mike Hoye (engineering community manager) and Liz Henry (bugmaster), I proposed that we do a low-key mini conference to bring together the various Mozilla stakeholders in this space. Each of us would love to know what others at Mozilla are doing with dashboards and to understand how they are being used. We figured if we wanted to learn from others who were creating and using dashboards and community metrics data – they probably do too. So here we are!

In addition to Mozillians, I’d also love to invite an old colleague, Diederik van Liere, who looks at community metrics for the Wikimedia Foundation, as his insights might also be valuable to us.

http://www.youtube.com/watch?v=TvteDoRSRr8

The OGP, Civil Society and Power: Why #CSOday missed its mark

Yesterday in the University of London student union building, civil society organizations (CSOs) from around the world that are participating in the Open Government Partnership (OGP) gathered to prepare for today and tomorrow’s OGP summit.

There was much that was good about Civil Society Day (#CSOday). Old acquaintances reconnected and some new connections were forged. There were many useful exchanges of best practices and shared challenges, and even some fun moments – such as singing led by transparency activists from the sub-continent who regularly put their lives on the line.

However, with an evening’s reflection, I increasingly feel that the day represented a missed opportunity.

Not discussed – at least in the sessions I attended – was the more basic question: Is this working for us? And if not, what should we do about it? Perhaps still more important would have been using the time to ask: How can the civil society participants use one another and the OGP to build power to advance their goals?

The session that – in retrospect – was most likely to trigger this conversation, “What can civil society do to push ambition on Open Government?”, did spark a brief discussion about if and how civil society organizations might exit the OGP if the process is not serving their needs. It also generated a brief acknowledgement that the OGP’s processes could be revisited. But ultimately the conversation felt unambitious – something that, as an audience member, was as much my fault as anyone’s.

Indeed, throughout the entire day the sessions felt like mere prologues to, or duplications of, the sessions occurring during the OGP summit itself. Coalitions were not formed. Misunderstandings were not broken down. Progress was made, but it was at best iterative, not transformative.

Again, the CSOs, in my mind, need to start thinking about how the OGP can help them build power. I think, until now, we’ve believed that the secretariat and the processes would do that for us. They do – but likely not enough to generate the type of action many are looking for. Worse, the OGP is unlikely to have a single failure moment – rather, the CSOs might slowly and quietly start drifting away if they feel it does not serve them. This makes figuring out how the OGP can serve CSOs – particularly more local ones – all the more important.

I am, perhaps, alone in thinking this. But if not, I offer one proposal for how we could build power.

A Brief Diagnosis

A core part of the problem is that while heads of state can regularly generate media attention by simply meeting within the context of the OGP, it is much harder for civil society. I – and some I talk to – feel this void should be filled by the steering committee – and particularly its CSO members. However, they appear constrained in what they can say and do. This manifests itself in three ways:

  • First, it appears the steering committee is unable to speak out against – and attract attention to – countries that are clearly moving backwards on their commitments.
  • Second, there appears to be limited capacity to challenge new entrants who cause many CSOs to feel uncomfortable. This includes Russia (which ultimately opted not to join) and Argentina, which many Latin American CSOs feel has been particularly egregious in systemically limiting freedom of expression. Membership has privileges: it endows countries with some social license and affects the OGP brand in other countries – barriers to entry matter.
  • Third, the steering committee seems to have done little to attract international and/or national attention to Independent Reporting Mechanism (IRM) reports – third-party reports that assess governments’ progress against their goals. Fears that the IRM reports would be watered down seem to have been misplaced. According to many, they are fair, balanced and in many cases quite critical. This is fantastic. The fear now is that poor IRM reports are creating neither attention nor pressure for change.

It may not be the role of the steering committee to draw attention to these issues. I feel it is. Either way, it needs to be someone’s role. I want to be clear: I don’t believe the CSO steering committee members have been negligent – I know they are diligent and effective CSO partners. Rather, I believe there are norms, and even hard structural barriers, that prevent them from speaking out or from pushing the steering committee as a whole to speak out on these issues.

Thus I suggest that the CSOs do the following.

A Suggestion

First – create a committee of highly respected CSO members that most members believe can, in specific circumstances, speak on behalf of the global CSO community. Normally I’d advocate that the members of each regional committee caucus until they decide who that person should be. In the interim, however, perhaps we should just pick some people who appear to be widely respected. I’ve not consulted with any of these people – so mentioning them is just as likely to embarrass them – but I might nominate: Alison Tilley (South Africa), John Wonderlich (United States), Emmanuel C. Lallana (Philippines), Felipe Heusser (Chile), Helen Darbishire (Europe). This is an imperfect list, limited to people I’ve met and heard others speak about in positive terms. The key thing is not to get bogged down – at this time – with the selection process.

Second – create a common mailing list where, if at any point a national group of CSOs feels its country is backsliding on its commitments or failing to live up to the OGP in a significant way, it could raise its concern with this committee.

Third – if, after some deliberation both within the committee and across the CSO community in general, it was felt that there was a serious problem, this committee could issue statements on behalf of the CSO community. I could be wrong, but it would be nice to think that a collective outcry from the world’s leading CSOs in transparency, governance and government reform might focus some (hopefully embarrassing) international media attention on the situation and put the issue on the agenda in various diplomatic circles. This committee might also bang the drum more aggressively in the international media about poor IRM reports.

I’ll be absolutely transparent about the goals here. Directly, the idea is to make the OGP process empower more CSOs – hopefully local ones in particular. Indirectly, however, the underlying hope is to put pressure on the OGP’s governance and culture to remove any barriers that currently prevent CSO steering committee members from speaking out as a group about various issues. If we succeeded in that, we could abandon this idea and concentrate on new ways to create power. And if it did not come to pass, we could formalize the committee and make it more permanent.

I don’t claim this model is perfect, and would invite feedback and/or suggestions for alternatives. But I would love for the CSOs to start thinking about how they can leverage the community the OGP has created to build the power to challenge governments more effectively.

Moreover, I think many governments would like it. Indeed, after floating this idea past one government official, they commented: “We would like the CSOs to push us more. We want to do more and need to have a political environment in which that pressure exists. It helps us.” Perhaps not true of every government – but we have allies.

The promise and challenges of open government – Toronto Star OpEd

As some readers may know, it was recently announced that I’ve been asked by Ontario Premier Kathleen Wynne and Government Services Minister John Milloy to be part of the Government of Ontario’s task force on Open Government.

The task force will look at best practices around the world, engage a range of stakeholders and conduct a series of public consultations across Ontario in order to make recommendations around opening up the Ontario government.

I have an opinion piece in the Toronto Star today titled The Promise and Challenges of Open Government where I try (in a few words) to outline some of the challenges the task force faces as well as some of the opportunities I hope it can capitalize on.

The promise and challenges of open government

Last week, Premier Kathleen Wynne announced the launch of Ontario’s Open Government initiative, including an engagement task force (upon which I sit).

The premier’s announcement comes on the heels of a number of “open government” initiatives launched in recent years. President Barack Obama’s first act in 2009 was to sign the Memorandum on Transparency and Open Government. Since then, numerous city, state and provincial governments across North America have been finding new ways to share information. Internationally, 60 countries belong to the Open Government Partnership, a coalition of states and non-profits that seeks to improve accountability, transparency, technology and innovation, and citizen participation.

Some of this is, to be blunt, mere fad. But there is a real sense among many politicians and the public that governments need to find new ways to be more responsive to a growing and more diverse set of citizen needs, while improving accountability.

Technology has certainly been – in part – a driver, if only because it shifts expectations. Today a Google search takes about 30 milliseconds, with many users searching for mere minutes before locating what they are looking for. In contrast, access to information requests can take weeks or months to complete. In an age of computers, government processes often seem more informed by the photocopier – clinging to complex systems for sorting, copying and sharing information – than by computer systems that make it easy to share information by design.

There is also growing recognition that government data and information can empower people both inside and outside government. In British Columbia, the province’s open data portal is widely used by students – many of whom previously used U.S. data as it was the only free source. Now the province benefits from an emerging workforce that uses local data while studying everything from the environment to demography to education. Meanwhile, the largest users of B.C.’s open data portal are public servants, who are able to research and create policy while drawing on better information, all without endless meetings to ask for permission to use other departments’ data. The savings from fewer meetings alone are likely significant.

The benefits of better leveraging government data can affect us all. Take the relatively mundane but important issue of transit. Every day hundreds of thousands of Ontarians check Google Maps or locally developed applications for transit information. The accumulated minutes not spent waiting for transit have likely saved citizens millions of hours. Few probably realize, however, that it is because local governments “opened” transit data that it has become so accessible on our computers and phones.

Finally, there are a number of new ways to think about how to “talk” to Ontarians. It is possible that traditional public consultations could be improved. But there is also an opportunity to think more broadly about how the government interacts with citizens. Projects like Wikipedia demonstrate how many small contributions can create powerful resources and public assets. Could such a model apply to government?

All of these opportunities are exciting – and the province is right to explore them. But important policy questions remain. For example: how do we safeguard the data government collects to minimize political interference? The country lost a critical resource when the federal government destroyed the reliability of the long form census by making it voluntary. If crowdsourcing and other new forms of public engagement can be adopted for government, how do we manage privacy concerns and preserve equality of opportunity? And how will such changes affect public representation? Canada’s political system has been marked by increasing centralization of power over the past several decades – will new technologies and approaches further this trend? Or could they be shaped to arrest it? These are not simple questions.

It is also easy to dismiss these efforts. This will neither be the first nor the last time people talk about open government. Indeed, there is a wonderfully cynical episode of Yes, Minister from 1980 titled “Open Government.” More recently, various revelations about surveillance and national governments’ desire to snoop on our every email and phone call reveal much about what is both opaque and to be feared about our governments. Such cynicism is both healthy and necessary. It is also a reason why we should demand more.

Open government is not something we will ever fully achieve. But I do hope that it can serve as an objective and a constantly critical lens for thinking about what we should demand. I can’t speak for the other panelists of the task force, but that will be how I approach my work.

David Eaves is a public policy entrepreneur, open government activist and negotiation expert. He is a member of the Ontario government’s new Engagement Task Force.

Government Procurement Reform – It matters

Earlier this week I posted a slidecast on my talk to Canada’s Access to Information Commissioners about how, as they do their work, they need to look deeper into the government “stack.”

My core argument was that decisions about what information gets made accessible are no longer best managed at the end of a policy development or program delivery process, but rather should be embedded in it. This means monkeying around in the tools (e.g. software) government uses every day and ensuring there is the capacity to export government information and data from them. Logically, this also means monkeying around in procurement policy (see slide below), since that is where the specs for the tools public servants use get set. Trying to bake “access” into processes after the software has been chosen is, well, often an expensive nightmare.

Gov stack

Privately, one participant from a police force came up to me afterward and said that I was simply guiding people to another problem – procurement. He is right. I am. Almost everyone I talk to in government feels like procurement is broken. I’ve said as much myself in the past. Clay Johnson is someone who has thought about this more than most; here he is below at the Code for America Summit with a great slide (and talk) about how the current government procurement regime rewards all the wrong behaviours and, often, all the wrong players.

Clay Risk profile

So yes, I’m pushing the RTI and open data community to think about procurement on purpose. Procurement is borked. Badly. Not just from a wasting-tax-dollars perspective, or even just from a service delivery perspective, but also because it doesn’t serve the goals of transparency well. Quite the opposite. More importantly, it isn’t going to get fixed until more people start pointing out that it is broken and start contributing to solving this major bottleneck of a problem.

I highly, highly recommend reading Clay Johnson’s and Harper Reed’s opinion piece in today’s New York Times about procurement titled Why the Government Never Gets Tech Right.

All of this becomes more important if the White House (and other governments at all levels) have any hope of executing on their digital strategies (image below). There is going to be a giant effort to digitize much of what governments do, and a huge number of opportunities for finding efficiencies and improving services will come from it. However, if all of this depends on multi-million (or worse, 10- or 100-million) dollar systems and websites, we are, to put it frankly, screwed. The future of government shouldn’t be (continue to be?) one taken over by some massive SAP implementation so rigid and controlled that it gives governments almost no opportunity to innovate. And yet this is the future our procurement policies steer us toward: a future with only a tiny handful of possible vendors, a high risk of project failure, and highly rigid and frail systems that are expensive to adapt.

Worse, there is no easy path here. I don’t see anyone doing procurement right. So we are going to have to dive into a thorny, tough problem. However, the more governments that try to tackle it in radical ways, the faster we can learn some new and interesting lessons.

Open Data WH

Access to Information, Technology and Open Data – Keynote for the Commissioners

On October 11th I was invited by Elizabeth Denham, the Information and Privacy Commissioner for British Columbia, to give a keynote at the Privacy and Access 20/20 Conference in Vancouver to an audience that included the various provincial and federal Information Commissioners.

Below is my keynote; I’ve tried to sync the slides up as well as possible. For those who want to skip to the juicier parts:

  • 7:08 – thoughts about the technology dependence of RTI legislation
  • 12:16 –  the problematic approach to RTI implementation that results from these unsaid assumptions
  • 28:25 – the need and opportunity to bring open data and RTI advocates together

Some acronyms used: RTI – Right to Information (essentially the same thing as Access to Information).

StreetMix for testing bike lanes – Burrard St. Bridge Example

I’m MCing the Code for America Summit at the moment, so short on time to write a post, but I’m just LOVING StreetMix so much I had to give it a shout out. If you are a councillor, urban planner or community activist, StreetMix is a site you HAVE to check out.

What does it do? It basically allows you to create or edit any street you want. It is so simple to use that it takes about 1 minute to master. At that point, you can build, copy and redesign any street in the world.

Here, for example, I’ve recreated the Burrard St. Bridge in Vancouver as it exists today, with bike lanes, and below, as it existed before the addition of the bike lane.

Burrard Bridge new

Burrard Bridge old

Mission Driven Orgs: Don’t Alienate Alumni, Leverage Them (I’m looking at you, Mozilla)

While written for Mozilla, this piece really applies to any mission-driven organization. In addition, if you are media, please don’t claim this is written by Mozilla. I’m a contributor, and Mozilla is at its best when it encourages debate and discussion. This post says nothing about official Mozilla policy, and I’m sure there are Mozillians who will agree and disagree with me.

The Opportunity

Mozilla is an amazing organization. With a comparatively small staff, and aided by a community of supporters, it not only competes with the Goliaths of Silicon Valley but uses its leverage whenever possible to fight for users’ rights. This makes it simultaneously a world-leading engineering firm and, for most who work there, a mission-driven organization.

That was on full display this weekend at the Mozilla Summit, taking place concurrently in Brussels, Toronto and Santa Clara. Sadly, so was something else. A number of former Mozillians, many of whom have been critical to the organization and community, were not participating. They either weren’t invited or did not feel welcome. At times, it’s not hard to see why:

You_chose_Facebook

Again this is not an official Mozilla response. And that is part of the problem. There has never been much of an official or coordinated approach to dealing with former staff and community members. And it is a terrible, terrible lost opportunity – one that hinders Mozilla from advancing its mission in multiple ways.

The main reason is this: The values we Mozillians care about may be codified in the Mozilla Manifesto, but they don’t reside there. Nor do they reside in a browser, or even in an organization. They reside in us. Mozilla is about creating power by fostering a community of people who believe in and advocate for an open web.

Critically, the more of us there are, the stronger we are. The more likely we will influence others. The more likely we will achieve our mission.

And power is precisely what many of our alumni have in spades. Given Mozilla’s success, its brand, and its global presence, Mozilla’s contributors (both staff and volunteers) are sought-after – from startups to the most influential companies on the web. This means there are Mozillians influencing decisions – often at the most senior levels – at companies that Mozilla wants to influence. Even if these Mozillians only injected 5% of what Mozilla stands for into their day-to-day lives, the web would still be a better place.

So this raises the question: What should Mozilla’s alumni strategy be? Presently, from what I have seen, Mozilla has no such strategy. Often, by accident or neglect, alumni are left feeling guilty about their choice. We let them – and sometimes prompt them to – cut their connections not just with Mozilla but (more importantly) with the personal connection they felt to the mission. This at a moment when they could be some of the most important contributors to that mission. To say nothing of their continuing to contribute their expertise in coding, marketing or any number of other skills they may have.

As a community, we need to accept that as amazing as Mozilla (or any non-profit) is, most people will not spend their entire career there nor volunteer forever. Projects end. Challenges get old. New opportunities present themselves. And yes, people burn out on mission – which doesn’t mean they no longer believe in it; they are just burned out. So let’s not alienate these people; let’s support them. They could be one of our most important advantages. (I mean, even McKinsey keeps an alumni group, and that is just so it can sell to them… we can offer so much more meaning than that. And they can offer us so much more than that.)

How I would do it

At this point, I think it is too late to start a group and hope people will come. I could be wrong, but I suspect many feel – to varying degrees – alienated. We (Mozilla) will probably have to do more than just reach out a hand.

I would find three of the most respected, most senior Mozillians who have moved on and I’d reach out privately and personally. I’d invite them to lunch individually. And I’d apologize for not staying more connected with them. Maybe it is their fault, maybe it is ours. I don’t care. It’s in our interests to fix this, so let’s look inside ourselves and apologize for our contribution as a way to start down the path.

I’d then ask them if they would be willing to help oversee an alumni group – if they would reach out to their networks and, with us, bring these Mozillians back into the fold.

There is ample opportunity for such a group. Its members could be hosted once a year and shown what Mozilla is up to and what it means for the companies they work for. They could open doors to C-suite offices. They could mentor emerging leaders in our community, and they could ask for our advice as they build new products that will impact how people use the web. In short, they could be contributors.

Let’s get smart about cultivating our allies – even those embedded in organizations we don’t completely agree with. Let’s start thinking about how we tap into and keep alive the values that made them Mozillians in the first place, and find ways to help them be effective in promoting those values.

The 311 Open Data Competition is now Live on Kaggle

As I shared the other week, I’ve been working on a data competition with Kaggle and SeeClickFix involving 311 data from four cities: Chicago, New Haven, Oakland and Richmond.

So first things first – the competition is now live. Indeed, 19 teams have already made 56 submissions. Fortunately, time is on your side: there are 56 days to go.

As I mentioned in my previous post on the subject, I have real hopes that this competition can help test a hypothesis I have about the possibility of an algorithmic open commons:

There is, however, for me, a potentially bigger goal. To date, as far as I know, predictive algorithms of 311 data have only ever been attempted within a city, not across cities. At a minimum it has not been attempted in a way in which the results are public and become a public asset.

So while the specific problem this contest addresses is relatively humble, I see it as creating a larger opportunity for academics, researchers, data scientists, and curious participants to figure out whether we can develop predictive algorithms that work for multiple cities. Because if we can, then these algorithms could be a shared common asset. Each algorithm would become a tool not just for one housing non-profit or city program, but for all sufficiently similar non-profits or city programs.
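
To make the idea of a “portable” algorithm concrete, here is a minimal sketch – not the competition’s actual schema or a real entry – of how one might test whether a single model generalizes across cities. The file name and column names (summary, num_votes, city) are assumptions for illustration only.

```python
# A minimal sketch of testing cross-city "portability" of a 311 model.
# The file and column names below are hypothetical, not the competition's schema.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GroupKFold, cross_val_score

issues = pd.read_csv("311_issues.csv")   # hypothetical pooled export for all four cities

X = issues["summary"].fillna("")         # free-text issue summaries
y = issues["num_votes"]                  # target: how much attention an issue attracts
groups = issues["city"]                  # Chicago, New Haven, Oakland, Richmond

model = make_pipeline(TfidfVectorizer(min_df=5), Ridge(alpha=1.0))

# Hold out one whole city per fold: the model only counts as "portable"
# if it predicts reasonably well for a city it never saw during training.
cv = GroupKFold(n_splits=4)
scores = cross_val_score(model, X, y, groups=groups, cv=cv, scoring="r2")
print(scores)
```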

Of course I’m also discovering there are other benefits that arise out of these competitions.

This last weekend there was a mini sub-competition/hackathon involving a subset of the data. It was amazing to watch from afar. First, I was floored by how much cooperation there was, even between competitors and especially after the competition closed. Take a look at the forums: they probably make one of the more compelling cases that open data can help inspire more people to learn how to manipulate and engage with data. Here are contestants sharing their approaches and ideas with one another – just like you’d want them to. I’d known that Kaggle had an interesting community and that learning played an important role in it, but “riding along” in a mini competition has caused me to look again at the competitions through a purely educational lens. It is amazing how much people both wanted to learn and share.

As in the current competition, the team at the hackathon also ran a contest around visualizing the data. Some great visualizations came out of it, as well as more examples of people trying to learn and share. Here are two of my favourites:

map2

I love this visualization by Christoph Molnar because it reveals the difference in request locations in each city. In some they are really dense, whereas in others they are much more evenly distributed. Super interesting to me.

Most pressing issues in each city

I also love the simplicity of this image created by miswift. There are things I might have done differently, like colour coding similar problems to make them easier to compare across cities. But I still love it.

Congratulations to all the winners from this weekend’s event, and I hope readers will consider participating in the current competition.

On Vaccines, Incentives, Open Data and Public Policy

I know. Some mom coming out in favor of vaccines shouldn’t be breaking news. There’s nothing edgy about siding with most parents, nearly all the world’s governments and the vast majority of medical researchers and practitioners. But more of us need to do it.

And so begins the piece by JJ Keith – renowned blogger of all things parenting (I’m told) – titled I’m Coming Out… as Pro-Vaccine in the Huffington Post. Two days after being published, the piece has received a staggering 35,000+ Facebook “likes” (a number, I suspect, that dwarfs the total likes of every “formal” public policy piece written in the last 48 hours combined – and of course hers IS a piece about public policy). And it is a wonderfully written piece – filled with the dark sense of humour and concern that the topic deserves.

Keith’s main point is that more parents need to come out in favour of vaccines – that the central problem with the anti-vaccine movements is that they are loud and the rest of us… well… are not. And I agree. It would be great if more of us stood up and talked about the importance of vaccines and engaged in this so-called debate. Indeed, it may be the best of a number of not-so-great public policy options.

I’d love to think that there are simple policy measures that could fix what is, quite frankly, a matter of life and death. Requiring parents to produce proof of vaccination when entering their kids in public school feels like an obvious option. Greater transparency into vaccination rates in schools could also potentially spur parents to select schools that are “safer” from a health perspective. But in either case, the incentives don’t always push in the right direction.

So first, why does this matter?

When parents fail to vaccinate their children, they don’t just put their own kids at risk of contracting measles, polio and other terrible diseases. Sadly, they put at risk newborns (who cannot be vaccinated) and – more critically – a chunk of the population who legitimately cannot be vaccinated or, interestingly, who do get vaccinated but for whom the vaccination does not work.

This is why epidemiologists refer to “herd immunity” (it’s always nice when discussing public policy to refer to humanity as a “herd”). Since vaccines don’t work on everyone, enough people need to be vaccinated to reliably prevent the disease from spreading. The percentage required is usually north of 80% or 90%, although it varies based on the communicability of the disease – a rough sketch of the standard approximation is below.
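
For those curious where those percentages come from, here is a back-of-the-envelope sketch of the standard textbook approximation: the herd immunity threshold is roughly 1 − 1/R0, where R0 is the basic reproduction number (how many people one infected person infects in a fully susceptible population). The R0 values used below are commonly cited ballpark figures, not precise data.

```python
# Back-of-the-envelope herd immunity thresholds: threshold = 1 - 1/R0.
# The R0 values are commonly cited ballpark figures, not precise estimates.
for disease, r0 in [("measles", 15), ("pertussis", 12), ("polio", 6)]:
    threshold = 1 - 1 / r0
    print(f"{disease}: roughly {threshold:.0%} of the population needs immunity")
```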

Thus, what we actually have here is a free rider problem. If everyone vaccinates, then a few people opting out are probably safe, since “the herd” remains sufficiently immune to the diseases. But drop below roughly 80% and suddenly a tipping point is reached and things can get scary. Very scary. Frighteningly, there are whole (small) school districts in California that fall below 50% immunized. And there are normal-sized school districts that sit in the 60% and 70% range.

Indeed, thanks to the California Department of Public Health (CDPH), I was able to download the vaccination rates for every preschool in the state and poke through the raw assessment data myself (sidenote: dear CDPH administrators, please format your Excel spreadsheets without merging title columns that make it impossible for users to sort the data – super frustrating). Shockingly, two preschools, Kolbe Academy in Napa and the Waldorf School of Santa Barbara, had zero kids – that is, not one child – with all the required vaccines. Moreover, at Kolbe every single kid had a personal (not medical) exemption, while at the Waldorf school all but two had personal exemptions (meaning their parents didn’t want their kids to have vaccines) and the other two had simply not gotten around to it. Indeed, at least 230 preschools in the state had kids with “personal exemptions” from vaccines that caused total immunization rates to fall below 80%.
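
For anyone who wants to poke through the data the same way, here is a minimal sketch of how it might be done once the CDPH spreadsheet is downloaded. The filename, the number of title rows to skip and the column names are assumptions for illustration – the real file’s layout will differ.

```python
# A minimal sketch, not the actual analysis: flag preschools reporting fewer than
# 80% of kids up to date on vaccinations. Filename, skiprows and column names
# ("SCHOOL_NAME", "ENROLLMENT", "UP_TO_DATE") are hypothetical.
import pandas as pd

df = pd.read_excel(
    "cdph_preschool_assessment.xlsx",  # hypothetical filename
    skiprows=3,                        # skip the merged title rows that break sorting
)

df["pct_up_to_date"] = df["UP_TO_DATE"] / df["ENROLLMENT"]
below_threshold = df[df["pct_up_to_date"] < 0.80]

print(
    below_threshold[["SCHOOL_NAME", "pct_up_to_date"]]
    .sort_values("pct_up_to_date")
    .head(20)
)
```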

These preschools are, in essence, time bombs.

And some have already gone off! Just Googling “Whooping Cough Waldorf” (Waldorf being a chain of schools that seems to dominate my poor-immunization list) revealed a story about the East Bay Waldorf School – where only 44% of students are immunized – which had to be shut down by the local health authority because of an outbreak of whooping cough, a disease that is… easily prevented by a vaccine.

So a simple solution would be to require all kids who go to public schools to be vaccinated. This, however, is already the rule in places like Santa Clara, and while it is in part effective, it also drives all the non-immunized kids into the same private schools, further diminishing the herd immunity effect and creating an ideal breeding ground for a dangerous disease.

Disturbingly, this is only desirable insofar as a few outbreaks in schools like these cause parents to rethink their approach. But that is a terrible, terrible price to pay. It recently happened in Texas, where a church that preached a deep skepticism of medicine and vaccines suffered a measles outbreak. Luckily no one died, and the church very quickly set up vaccination stations on its site. It is amazing how quickly one returns to the fold of science and medicine once kids start contracting fatal diseases.

It also wouldn’t surprise me if someone, using the data from the California Dept of Public Health website, created a web application that did a risk assessment of every school based on a number of factors, of which vaccination rates would loom large. On the one hand, it might cause parents to demand that other kids who attend the school also be vaccinated. On the flip side, it might just accelerate the divisions between the anti-immunization camp and everyone else, creating even denser clusters of schools where none of the kids are immunized.

In the end, short of a series of minor epidemics where enough kids die to scare non-conforming parents into getting their kids immunized, I fear that JJ Keith is doing the best possible thing: trying to build a coalition that will confront those spreading misinformation about vaccinations. Indeed, maybe a good first step would be organizing a boycott of Jenny McCarthy – a celebrity and “The View” co-hostess who has been one of the worst offenders in spreading misinformation about vaccines. Maybe we could call schools where more than 20% of the kids are not immunized because of their parents’ personal preference “McCarthy Schools.” And every time there is a measles outbreak we could label it a “McCarthy Outbreak” or, worse, a “McCarthy death.” It’s strong language, but society may need to get tough to rebuild the social pressure needed to stomp out this problem… the other tools at our disposal are not, I fear, that strong.

Why Journalists Should Support Putting Access to Information Requests Online Immediately

Here’s a headline you don’t often expect to see: “Open-Government Laws Fuel Hedge-Fund Profits.”

It’s a fascinating article that opens with a story about SAC Capital Advisors LP – a hedge fund. Last December SAC Capital used the US Freedom of Information Act (FOIA) to request preliminary results on a Vertex Pharmaceuticals drug being tested by the US Food and Drug Administration. The request revealed there were no “adverse event reports,” increasing the odds the drug might be approved. SAC Capital used this information – according to the Wall Street Journal – to snatch up 15,000 shares and 25,000 options of Vertex. In December – when the request was made – the stock traded around $40. Eight months later it peaked at $89 and still trades today at around $75. Thus, clever use of an access to information request potentially netted the company a cool ROI of 100% in 9 months and a profit of roughly 1.2 million dollars (assuming they sold at around $80).

This is an interesting story. And I fear it says a lot about the future of access to information laws.

This is because it contrasts sharply with the vision of access to information the media likes to portray: namely, that access requests are a tool used mainly by hardened journalists trying to uncover dirt about a government. This is absolutely the case… and an important use case. But it is not the only use of access laws. Nor was it the only intended use. Indeed, it is not even the main use of the law.

In my work on open data I frequently get pulled into conversations about access to information laws and their future. I find these conversations are aggressively dominated by media representatives (e.g. reporters) who dislike alternative views. Indeed, the one-sided nature of the conversation – with some journalists simply assuming they are the main and privileged interpreters of the public interest around access laws – is deeply unhealthy. Access to information laws are an important piece of legislation. Improving and sustaining them requires a coalition of actors (particularly including citizens), not just journalists. Telling others that their interests are secondary is not a great way to build an effective coalition. Worse, I fear the dominance of a single group means the conversation is often shaped by a narrow view of the legislation and with a specific set of (media company) interests in mind.

For example, many governments – including government agencies in my own province of British Columbia – have posted responses to many access to information requests publicly. This enrages (and I use that word specifically) many journalists who see it as a threat. How can they get a scoop if anyone can see government responses to their requests at the same time? This has led journalists to demand – sometimes successfully – that the requestor have exclusive access to government responses for a period of time. Oy vey. This is dangerous.

For certain types of stories I can see how complete transparency of request responses could destroy a scoop. But most stories – particularly investigative stories – require sources and context and understanding. Such advantages, I suspect, are hard to replicate and are the real source of competitive advantage (and if they aren’t… shouldn’t they be?).

It also assumes that a savvy public – and the media community – won’t be able to figure out who consistently seems to be making the right requests and reward them accordingly. But let’s put issues of a reputation economy and the complexity of reporting on a story aside.

First, it is worth noting that it is actually in the public interest to have more reporters cover a story and share a piece of news – especially about the government. Second, access to information laws were not created to give specific journalists scoops – they were designed to maximize the public’s capacity to access government information. Protecting a media company’s business model is not the role of access laws. It isn’t even in the spirit of the law.

Third, and worst, this entire debate fails to discuss the risks of such an approach. Which brings me back to the Wall Street Journal article.

I have, for years, warned that if the public publication of access to information request results is delayed so that one party (say, a journalist) has exclusive access for a period of time, then the system will also be used by others in pursuit of interests that might not be in the public good. Specifically, it creates a strong incentive for companies and investors to start mining government to get “exclusive” rights to government information they can put to use in advancing their agenda – making money.

As the SAC Capital case outlined above underscores, information is power. And if you have exclusive access to that information, you have an advantage over others. That advantage may be a scoop on a government spending scandal, but it can also be a stock tip about a company whose drug is going to clear a regulatory hurdle, or an indication that a juicy government contract is about to be signed, or that a weapons technology is likely to be shelved by the defence department. In other words – and this is what I have pointed out to my journalist friends – exclusivity in access to information risks transforming the whole system into a giant insider information generation machine. Great for journalists? Maybe. (I have my doubts – see above.) But great for companies? The Wall Street Journal article shows us it already is. Exclusivity would make it worse.

Indeed, in the United States, the private sector is already an enormous generator of access requests. In fact, one company, which serves as a clearing house for requests, accounts for 10% of requests to the FDA on its own:

The precise number of requests from investors is impossible to tally because many come from third-party organizations that send requests on behalf of undisclosed clients—a thriving industry unto itself. One of them, FOI Services Inc., accounted for about 10% of the 50,000 information requests sent to the FDA during the period examined by the Journal. Marlene Bobka, a senior vice president at Washington-based FOI Services, says a “huge, huge reason people use our firm is to blind their requests.”

Imagine what would happen if those making requests had formal exclusive rights. The secondary market in government information could become huge – and again, not in a way that advances the public interest.

In fact, given the above-quoted paragraph, I’m puzzled by the fact that journalists don’t demand that every access to information request be made public immediately. All told, the resources of the private sector (to say nothing of the tens of thousands of requests made by citizens or NGOs) dwarf those of media companies. Private companies may start making (or may already make) significantly more requests than journalists ever could. Free-riding on their work could probably be a full-time job and a successful career for at least a dozen data journalists. In addition, not duplicating this work frees up media companies’ capacity to focus on the most important problems that serve the public good.

All of this is to say… I fear for a world where many of the journalists I know – by demanding changes that are in their narrow self-interest – could help create a system that, as far as I can tell, could be deeply adverse to the public interest.

I’m sure I’m about to get yelled at (again). But when it comes to access to information requests, we are probably going to be better off in a world where they are truly digitized. That means requests can be made online (something that is slowly arriving in Canada) and – equally importantly – results are also published online for all to see. At the very minimum, it is a conversation that is worth having.