Category Archives: open data

Open Data Day 2014 is Coming Feb 22 – Time to Join the Fun!

So, with much help from various community members (who reminded me that we need to get this rolling – looking at you, Heather Leson), I am pleased to say we are starting to gear up for Open Data Day 2014 on February 22nd.

From its humble beginnings as a conversation between a few friends who were interested in promoting and playing with open data, Open Data Day grew last year to include locally organized events in over 100 cities around the world. Check out this video of Open Data Day in Kathmandu last year.

What makes Open Data Day work? Mostly you. It is a global excuse for people in communities like yours to come together and organize an event that meets their needs. Whether that is a hackathon, a showcase and fair, lectures, workshops for local NGOs and businesses, training on data, or meetings with local politicians – people are free to organize around whatever they think their community needs. You can read more about how Open Data Day works on our website.

Want to join in on the fun? I thought you’d never ask. Listed below are some different ways you can help make Open Data Day 2014 a success in your community!

A) How can I let EVERYONE know about Open Data Day?

I love the enthusiasm. Here’s a tweet you can send:

#OpenData Day is community powered in a timezone near you.  http://opendataday.org/ #ODD2014

Yes, our hashtag is #ODD2014. Cause we are odd. And cause we love open data.

B) I’d like to participate!

Great! If you are interested in participating, check out the Open Data Day wiki. We’ve just unlocked the pages so cities haven’t been added yet, but feel free to add your city to the list and put down your name as interested in participating. You can even check to see who organized the event last year to see if they are interested in doing it again.

C) Forget about participating, I want to coordinate an Open Data Day event in my city.

Whoa! Very exciting! Here’s a short checklist of what to do:

  • If you didn’t organize one last year, check to see if anyone in your city did. It would be good to connect with them first.
  • Read the Open Data Day website. Basically, pick up on our vibe: we want Open Data Day to work for everyone, from novices who know little about data to experts like Kaggle participants and uber geeks like Bruce Schneier. These events have always been welcoming and encouraging – it is part of the design challenge.
  • Okay, now add your city to the list, let people know where it will be taking place (or that you are working on securing space), let them know a rough agenda, what to expect, and how they can contribute.
  • Add yourself to the 2014 Open Data Day map. (Hint: Wikipedia lists Lat/Long in the information sidebar for each city’s wiki page: “Coordinates: 43°42′N 79°24′W”)
  • Join the Open Data Day mailing list. Organizers tend to share best practices and tips here. It’s not serious, really just a help and support group.
  • Check out resources like this and this about how to organize a successful event.
  • Start spreading the news!
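For the map step above, coordinates in Wikipedia’s degrees-and-minutes format usually need converting to the decimal degrees most mapping tools expect. Here’s a rough sketch in Python – the function is my own illustration, not part of any Open Data Day tooling:

```python
import re

def dms_to_decimal(coord):
    """Convert a Wikipedia-style pair such as "43°42′N 79°24′W"
    into (latitude, longitude) in decimal degrees."""
    parts = re.findall(r"(\d+)°(?:(\d+)′)?(?:(\d+)″)?\s*([NSEW])", coord)
    result = []
    for deg, minutes, seconds, hemi in parts:
        value = int(deg) + int(minutes or 0) / 60 + int(seconds or 0) / 3600
        if hemi in "SW":  # south and west hemispheres are negative
            value = -value
        result.append(round(value, 4))
    return tuple(result)
```

So “Coordinates: 43°42′N 79°24′W” becomes roughly (43.7, -79.4), which you can paste straight into the map.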

D) I want to help more! How can Open Data Day work more smoothly everywhere?

Okay, this one is for the truly hardcore. You’re right, we need help. Open Data Day has grown. This means we’ve outgrown a whole bunch of our infrastructure… like our webpage! Everyone involved in this is a volunteer so… we have some extra heavy lifting we need help with. This includes:

a. Website template update: The current Open Data Day template was generously donated by Mark Dunkley (thank you!!!). We’d love to have it scale a little better and refresh the content. You can see the code on github here. Email me if you are interested. Skills required: CSS, design

b. Translation: Can you help translate the ODD site into your language? You can submit the requests on github or send a document to heather.leson at okfn dot org with the content. She’ll do the github stuff if that’s beyond you.

c. Map: Leaflet and layers helpers wanted! We’d like a map geek to help correct geolocation and keep the 2014 map fresh with accurate geo for all the locations. Github repo is here and the event list is here.

What’s next?

I’m really looking forward to this year… I’ve lots more thoughts I’ll be sharing shortly.

Plus, I can’t wait to hear from you!

Mozillians: Announcing Community Metrics DashboardCon – January 21, 2014

Please read background below for more info. Here’s the skinny.

What

A mini-conference for Mozillians about community metrics and dashboards, held in San Francisco on January 21st and 22nd, 2014 (remote participation possible). It was originally slated, tentatively, for Vancouver on January 14th.

Update: Apologies for the change of date and location, this event has sparked a lot of interest and so we had to change it so we could manage the number of people.

Why?

It turns out that in the past 2-3 years a number of people across Mozilla have been tinkering with dashboards and metrics in order to assess community contributions, effectiveness, bottlenecks, performance, etc. For some people this is their job (looking at you, Mike Hoye); for others it is something they arrived at by necessity (looking at you, SUMO group); and for others it was just a fun hobby or experiment.

Certainly I (and I believe co-collaborators Liz Henry and Mike Hoye) think metrics in general and dashboards in particular can be powerful tools, not just to understand what is going on in the Mozilla Community, but as a way to empower contributors and reduce the friction of participating at Mozilla.

And yet as a community of practice, I’m not sure those interested in converting community metrics into some form of measurable output have ever gathered together. We’ve not exchanged best practices, aligned around a common nomenclature or discussed the impact these dashboards could have on the community, management and other aspects of Mozilla.

Such an exercise, we think, could be productive.

Who

Who should come? Great question. Pretty much anyone who is playing around with metrics around community, participation, or something parallel at Mozilla. If you are interested in participating, please sign up here.

Who is behind this? I’ve outlined more in the background below, but this event is being hosted by myself, Mike Hoye (engineering community manager) and Liz Henry (bugmaster).

Goal

As you’ve probably gathered the goals are to:

  • Get a better understanding of what community metrics and dashboards exist across Mozilla
  • Learn about how such dashboards and metrics are being used to engage, manage or organize communities and/or influence operations
  • Exchange best practices around both the development and the use/application of dashboards and metrics
  • Stretch goal – begin to define some common definitions for metrics that exist across Mozilla to enable portability of metrics across dashboards.

Hope this sounds compelling. Please feel free to email or ping me if you have questions.

—–

Background

I know that my co-collaborators – Mike Hoye and Liz Henry – have their own reasons for ending up here. I, as many readers know, am deeply interested in understanding how open source communities can combine data and analytics with negotiation and management theory to better serve their members. This was the focus of my keynote at OSCON in 2012 (posted below).

For several years I tried with minimal success to create some dashboards that might provide an overview of the community’s health as well as diagnose problems that were harming growth. Despite my own limited success, it has been fascinating to see how more and more individuals across Mozilla – some developers, some managers, others just curious observers – have been scraping data they control or can access to create dashboards to better understand what is going on in their part of the community. The fact is, there are probably at least 15 different people running community-oriented dashboards across Mozilla – and almost none of us are talking to one another about it.

At the Mozilla Summit in Toronto, after speaking with Mike Hoye (engineering community manager) and Liz Henry (bugmaster), I proposed that we do a low-key mini-conference to bring together the various Mozilla stakeholders in this space. Each of us would love to know what others at Mozilla are doing with dashboards and to understand how they are being used. We figured that if we wanted to learn from others who were creating and using dashboards and community metrics data – they probably do too. So here we are!

In addition to Mozillians, I’d also love to invite an old colleague, Diederik van Liere, who looks at community metrics for the Wikimedia foundation, as his insights might also be valuable to us.

http://www.youtube.com/watch?v=TvteDoRSRr8

The OGP, Civil Society and Power: Why #CSOday missed its mark

Yesterday in the University of London student union building, civil society organizations (CSOs) from around the world that are participating in the Open Government Partnership (OGP) gathered to prepare for today and tomorrow’s OGP summit.

There was much that was good about Civil Society Day (#CSOday). Old acquaintances reconnected and some new connections were forged. There were many useful exchanges of best practices and shared challenges, and even some fun moments – such as singing led by transparency activists from the subcontinent who regularly put their lives on the line.

However, with an evening’s reflection, I increasingly feel that the day represents a missed opportunity.

Not discussed – at least in the sessions I attended – was the more basic question: Is this working for us? And if not, what should we do about it? Perhaps still more important would have been using the time to ask: How can the civil society participants use one another and the OGP to build power to advance their goals?

The session that – in retrospect – might have been most likely to trigger this conversation, “What can civil society do to push ambition on Open Government?”, did spark a brief discussion about if and how civil society organizations might exit the OGP if the process is not serving their needs. It also generated a brief acknowledgement that the OGP processes could be revisited. But ultimately the conversation felt unambitious. Something that, as an audience member, was as much my fault as anyone’s.

Indeed, throughout the entire day the sessions felt like mere prologues to, or duplications of, the sessions occurring during the OGP. Coalitions were not formed. Misunderstandings were not broken down. Progress was made, but it was at best iterative, not transformative.

Again, the CSOs, in my mind, need to start thinking about how the OGP can help them build power. I think, until now, we’ve believed that the secretariat and the processes would do that for us. They do – but likely not enough to generate the type of action many are looking for. Worse, the OGP is probably unlikely to have a single failure moment – rather, the CSOs might slowly start drifting away quietly if they feel it does not serve them. This makes figuring out more about how the OGP can serve CSOs – particularly more local ones – all the more important.

I am, perhaps, alone in thinking this. But if not, I offer one proposal for how we could build power.

A Brief Diagnosis

A core part of the problem is that while the heads of states can regularly generate media by simply meeting within the context of the OGP, it is much harder for civil society. I – and some I talk to – feel like this void should be filled by the steering committee – and particularly its CSO members. However, they appear constrained in what they can say and do. This manifests itself in three ways:

  • First, it appears the steering committee is unable to speak out against – and attract attention to – countries that are clearly moving backwards on their commitments.
  • Second, there appears to be limited capacity to challenge new entrants who cause many CSOs to feel uncomfortable. This includes Russia (who ultimately opted not to join) and Argentina, which many Latin American CSOs feel has been particularly egregious in systemically limiting freedom of expression. Membership has privileges, it endows on countries some social license and impacts the OGP brand in other countries – barriers to entry matter.
  • Third, the steering committee seems to have done little to attract international and/or national attention to Independent Reporting Mechanism (IRM) reports – third-party reports that assess governments’ progress against their goals. Fears that the IRM reports would be watered down seem to have been misplaced. According to many, the reports are fair, balanced and in many cases quite critical. This is fantastic. The fear now is that poor IRM reports are creating neither attention nor pressure for change.

It may not be the role of the steering committee to draw attention to these issues. I feel it is. Either way, it needs to be someone’s role. I want to be clear: I don’t believe the CSO steering committee members have been negligent – I know they are diligent and effective CSO partners. Rather, I believe there are some norms, and even hard structural barriers, that prevent them from speaking out or pushing the steering committee as a whole to speak out on these issues.

Thus I suggest that the CSOs do the following.

A Suggestion

First – create a committee of highly respected CSO members that most members believe can, in specific circumstances, speak on behalf of the global CSO community. Normally I’d advocate that the members of each regional committee caucus until they decide who that person should be. However, perhaps in the interim, we should just pick some who appear to be widely respected. I’ve not consulted with any of these people – so mentioning them is just as likely to embarrass them – but I might nominate: Alison Tilley (South Africa), John Wonderlich (United States), Emmanuel C. Lallana (Philippines), Felipe Heusser (Chile), Helen Darbishire (Europe). This is an imperfect list, limited to people I’ve met and heard others speak about in positive terms. The key thing is to not get bogged down – at this time – with the selection process.

Second – create a common mailing list where, if at any point a national group of CSOs feels that their country is backsliding on its commitments or failing to live up to the OGP in a significant way, they can raise their concern with this committee.

Third – if, after some deliberation both within the committee and across the CSO community in general, it was felt that there was a serious problem, this committee could issue statements on behalf of the CSO community. I could be wrong, but it would be nice to think that a collective outcry from the world’s leading CSOs in transparency, governance and government reform might focus some (hopefully embarrassing) international media on the situation and put the issue on the agenda in various diplomatic circles. This committee might also bang the drum more aggressively in the international media about poor IRM reports.

I’ll be absolutely transparent about the goals here. Directly, the idea is to make the OGP process empower more CSOs – hopefully local ones in particular. Indirectly, however, the underlying hope is to put pressure on the OGP’s governance and culture to remove any barriers that currently prevent CSO steering committee members from speaking out as a group about various issues. If we succeeded in that, we could abandon this idea and concentrate on new ways to create power. And if that had not come to pass, we could then formalize the committee and make it more permanent.

I don’t claim this model is perfect, and I would invite feedback and/or suggestions for alternatives. But I would love for the CSOs to start thinking about how they can leverage the community the OGP has created to foster power that enables them to challenge governments more effectively.

Moreover, I think many governments would like it. Indeed, after floating this idea past one government official, they commented: “We would like the CSOs to push us more. We want to do more and need to have a political environment in which that pressure exists. It helps us.” Perhaps not true of every government – but we have allies.

The promise and challenges of open government – Toronto Star OpEd

As some readers may know, it was recently announced that I’ve been asked by Ontario Premier Wynne and Government Services Minister John Milloy to be part of the Government of Ontario’s task force on Open Government.

The task force will look at best practices around the world, engage a number of stakeholders and conduct a series of public consultations across Ontario in order to make recommendations about opening up the Ontario government.

I have an opinion piece in the Toronto Star today titled The Promise and Challenges of Open Government where I try (in a few words) to outline some of the challenges the task force faces as well as some of the opportunities I hope it can capitalize on.

The promise and challenges of open government

Last week, Premier Kathleen Wynne announced the launch of Ontario’s Open Government initiative, including an engagement task force (upon which I sit).

The premier’s announcement comes on the heels of a number of “open government” initiatives launched in recent years. President Barack Obama’s first act in 2009 was to sign the Memorandum on Transparency and Open Government. Since then numerous city, state and provincial governments across North America are finding new ways to share information. Internationally, 60 countries belong to the Open Government Partnership, a coalition of states and non-profits that seeks to improve accountability, transparency, technology and innovation and citizen participation.

Some of this is, to be blunt, mere fad. But there is a real sense among many politicians and the public that governments need to find new ways to be more responsive to a growing and more diverse set of citizen needs, while improving accountability.

Technology has certainly been – in part – a driver, if only because it shifts expectations. Today a Google search takes about 30 milliseconds, with many users searching for mere minutes before locating what they are looking for. In contrast, access to information requests can take weeks, or months, to complete. In an age of computers, government processes often seem more informed by the photocopier – clinging to complex systems for sorting, copying and sharing information – than by computer systems that make it easy to share information by design.

There is also growing recognition that government data and information can empower people both inside and outside government. In British Columbia, the province’s open data portal is widely used by students – many of whom previously used U.S. data as it was the only free source. Now the province benefits from an emerging workforce that uses local data while studying everything from the environment to demography to education. Meanwhile the largest users of B.C.’s open data portal are public servants, who are able to research and create policy while drawing on better information, all without endless meetings to ask for permission to use other departments’ data. The savings from fewer meetings alone are likely significant.

The benefits of better leveraging government data can affect us all. Take the relatively mundane but important issue of transit. Every day hundreds of thousands of Ontarians check Google Maps or locally developed applications for transit information. The accumulated minutes not spent waiting for transit has likely saved citizens millions of hours. Few probably realize however that it is because local governments “opened” transit data that it has become so accessible on our computers and phones.

Finally, there are a number of new ways to think about how to “talk” to Ontarians. It is possible that traditional public consultations could be improved. But there is also an opportunity to think more broadly about how the government interacts with citizens. Projects like Wikipedia demonstrate how many small contributions can create powerful resources and public assets. Could such a model apply to government?

All of these opportunities are exciting – and the province is right to explore them. But important policy questions remain. For example: how do we safeguard the data government collects to minimize political interference? The country lost a critical resource when the federal government destroyed the reliability of the long form census by making it voluntary. If crowdsourcing and other new forms of public engagement can be adopted for government, how do we manage privacy concerns and preserve equality of opportunity? And how will such changes affect public representation? Canada’s political system has been marked by increasing centralization of power over the past several decades – will new technologies and approaches further this trend? Or could they be shaped to arrest it? These are not simple questions.

It is also easy to dismiss these efforts. This will be neither the first nor the last time people talk about open government. Indeed, there is a wonderfully cynical episode of Yes, Minister from 1980 titled “Open Government.” More recently, various revelations about surveillance and national governments’ desire to snoop on our every email and phone call reveal much about what is both opaque and to be feared about our governments. Such cynicism is both healthy and necessary. It is also a reason why we should demand more.

Open government is not something we will ever fully achieve. But I do hope that it can serve as an objective and a constantly critical lens for thinking about what we should demand. I can’t speak for the other panelists of the task force, but that will be how I approach my work.

David Eaves is a public policy entrepreneur, open government activist and negotiation expert. He is a member of the Ontario government’s new Engagement Task Force.

Government Procurement Reform – It matters

Earlier this week I posted a slidecast on my talk to Canada’s Access to Information Commissioners about how, as they do their work, they need to look deeper into the government “stack.”

My core argument was that decisions about what information gets made accessible are no longer best managed at the end of a policy development or program delivery process, but rather should be embedded in it. This means monkeying around with, and ensuring there is capacity to export, government information and data from the tools (e.g. software) government uses every day. Logically, this means monkeying around in procurement policy (see slide below), since that is where the specs for the tools public servants use get set. Trying to bake “access” into processes after the software has been chosen is, well, often an expensive nightmare.

Gov stack

Privately, one participant from a police force came up to me afterward and said that I was simply guiding people to another problem – procurement. He is right. I am. Almost everyone I talk to in government feels that procurement is broken. I’ve said as much myself in the past. Clay Johnson is someone who has thought about this more than most; here he is below at the Code for America Summit with a great slide (and talk) about how the current government procurement regime rewards all the wrong behaviours and, often, all the wrong players.

Clay Risk profile

So yes, I’m pushing the RTI and open data community to think about procurement on purpose. Procurement is borked. Badly. Not just from a wasted-tax-dollars perspective, or even just from a service delivery perspective, but also because it doesn’t serve the goals of transparency well. Quite the opposite. More importantly, it isn’t going to get fixed until more people start pointing out that it is broken and start contributing to solving this major bottleneck of a problem.

I highly, highly recommend reading Clay Johnson’s and Harper Reed’s opinion piece in today’s New York Times about procurement titled Why the Government Never Gets Tech Right.

All of this becomes more important if the White House (and other governments at all levels) has any hope of executing on its digital strategies (image below). There is going to be a giant effort to digitize much of what governments do, and a huge number of opportunities for finding efficiencies and improving services will come from it. However, if all of this depends on multi-million (or worse, 10- or 100-million) dollar systems and websites, we are, to put it frankly, screwed. The future of government shouldn’t be (continue to be?) taken over by some massive SAP implementation that is so rigid and controlled it gives governments almost no opportunity to innovate. And yet this is the future our procurement policies steer us toward. A future with only a tiny handful of possible vendors, a high risk of project failure, and highly rigid and frail systems that are expensive to adapt.

Worse, there is no easy path here. I don’t see anyone doing procurement right. So we are going to have to dive into a thorny, tough problem. However, the more governments try to tackle it in radical ways, the faster we can learn some new and interesting lessons.

Open Data WH

Access to Information, Technology and Open Data – Keynote for the Commissioners

On October 11th I was invited by Elizabeth Denham, the Access to Information and Privacy Commissioner for British Columbia to give a keynote at the Privacy and Access 20/20 Conference in Vancouver to an audience that included the various provincial and federal Information Commissioners.

Below is my keynote, I’ve tried to sync the slides up as well as possible. For those who want to skip to juicier parts:

  • 7:08 – thoughts about the technology dependence of RTI legislation
  • 12:16 –  the problematic approach to RTI implementation that results from these unsaid assumptions
  • 28:25 – the need and opportunity to bring open data and RTI advocates together

Some acronyms used:

  • RTI – Right to Information

The 311 Open Data Competition is now Live on Kaggle

As I shared the other week, I’ve been working on a data competition with Kaggle and SeeClickFix involving 311 data from four cities: Chicago, New Haven, Oakland and Richmond.

So first things first – the competition is now live. Indeed, 19 teams have already made 56 submissions. Fortunately, time is on your side: there are 56 days to go.

As I mentioned in my previous post on the subject, I have real hopes that this competition can help test a hypothesis I have about the possibility of an algorithmic open commons:

There is, however, for me, a potentially bigger goal. To date, as far as I know, predictive algorithms of 311 data have only ever been attempted within a city, not across cities. At a minimum it has not been attempted in a way in which the results are public and become a public asset.

So while the specific problem this contest addresses is relatively humble, I see it as creating a larger opportunity for academics, researchers, data scientists, and curious participants to figure out whether we can develop predictive algorithms that work for multiple cities. Because if we can, then these algorithms could be a shared common asset. Each algorithm would become a tool not just for one housing non-profit or city program, but for all sufficiently similar non-profits or city programs.
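To make the idea of a portable, cross-city algorithm concrete, here is a deliberately trivial sketch in Python: a baseline “model” trained on records pooled from several cities, which any one city could then reuse. The data, field names and approach are entirely my own invention for illustration – they are not the competition’s actual data or task:

```python
from collections import defaultdict

def train_pooled_baseline(records):
    """Learn the average number of votes per issue category,
    pooling 311 records from several cities into a single model."""
    totals = defaultdict(lambda: [0, 0])  # category -> [vote sum, count]
    for rec in records:
        stats = totals[rec["category"]]
        stats[0] += rec["votes"]
        stats[1] += 1
    return {cat: s[0] / s[1] for cat, s in totals.items()}

def predict(model, category, default=0.0):
    """Predict votes for a new issue; fall back for unseen categories."""
    return model.get(category, default)

# Toy records from two hypothetical cities
records = [
    {"city": "chicago", "category": "pothole", "votes": 4},
    {"city": "oakland", "category": "pothole", "votes": 2},
    {"city": "oakland", "category": "graffiti", "votes": 1},
]
model = train_pooled_baseline(records)
```

The point is simply that because the model is keyed on shared features rather than on any one city, a third city with the same categories could apply it immediately – that reuse is what would make such algorithms a common asset.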

Of course I’m also discovering there are other benefits that arise out of these competitions.

This last weekend there was a mini sub-competition/hackathon involving a subset of the data. It was amazing to watch from afar. First, I was floored by how much cooperation there was, even between competitors and especially after the competition closed. Take a look at the forums; they probably make one of the more compelling cases that open data can help foster more people wanting to learn how to manipulate and engage with data. Here are contestants sharing their approaches and ideas with one another – just like you’d want them to. I’d known that Kaggle had an interesting community and that learning played an important role in it, but “riding along” in a mini competition has caused me to look again at the competitions through a purely educational lens. It is amazing how much people wanted both to learn and to share.

As in the current competition, the team at the hackathon also ran a competition around visualizing the data. There were some great visualizations of the data that came out of it, as well as more examples of people trying to learn and share. Here are two of my favourites:

map2

I love this visualization by Christoph Molnar because it reveals the difference in request locations in each city. In some they are really dense, whereas in others they are much more evenly distributed. Super interesting to me.

Most pressing issues in each city

I also love the simplicity of this image created by miswift. There are other things I might have done, like colour-coding similar problems to make them easier to compare across cities. But I still love it.

Congratulations to all the winners from this weekend’s event, and I hope readers will consider participating in the current competition.

Why Journalists Should Support Putting Access to Information Requests Online Immediately

Here’s a headline you don’t often expect to see: “Open-Government Laws Fuel Hedge-Fund Profits.”

It’s a fascinating article that opens with a story about SAC Capital Advisors LP – a hedge fund. Last December, SAC Capital used freedom of information laws (FOIA) to request preliminary results on a Vertex Pharmaceuticals drug being tested by the US Food and Drug Administration. The request revealed there were no “adverse event reports,” increasing the odds the drug might be approved. SAC Capital used this information – according to the Wall Street Journal – to snatch up 15,000 shares and 25,000 options of Vertex. In December – when the request was made – the stock traded around $40. Eight months later it peaked at $89, and it still trades today at around $75. Thus, clever usage of a government access to information request potentially netted the company a cool ROI of 100% in nine months and a profit of roughly 1.2 million dollars (assuming they sold around $80).

This is an interesting story. And I fear it says a lot about the future of access to information laws.

This is because it contrasts sharply with the vision of access to information the media likes to portray: Namely, that access requests are a tool used mainly by hardened journalists trying to uncover dirt about a government. This is absolutely the case… and an important use case. But it is not the only usage of access laws. Nor was it the only intended use of the law. Indeed, it is not even the main usage of the law.

In my work on open data I frequently get pulled into conversations about access to information laws and their future. I find these conversations are aggressively dominated by media representatives (e.g. reporters) who dislike alternative views. Indeed, the one-sided nature of the conversation – with some journalists simply assuming they are the main and privileged interpreters of the public interest around access laws – is deeply unhealthy. Access to information laws are an important piece of legislation. Improving and sustaining them requires a coalition of actors (particularly including citizens), not just journalists. Telling others that their interests are secondary is not a great way to build an effective coalition. Worse, I fear the dominance of a single group means the conversation is often shaped by a narrow view of the legislation and with a specific set of (media company) interests in mind.

For example, many governments – including government agencies in my own province of British Columbia – have posted responses to many access to information requests publicly. This enrages (and I use that word specifically) many journalists who see it as a threat. How can they get a scoop if anyone can see government responses to their requests at the same time? This has led journalists to demand – sometimes successfully – that the requestor have exclusive access to government responses for a period of time. Oy vey. This is dangerous.

For certain types of stories I can see how complete transparency of request responses could destroy a scoop. But most stories – particularly investigative stories – require sources and context and understanding. Such advantages, I suspect, are hard to replicate and are the real source of competitive advantage (and if they aren’t… shouldn’t they be?).

The demand also assumes that a savvy public – and the media community – won’t be able to figure out who consistently seems to be making the right requests and reward them accordingly. But let’s put issues of a reputation economy and the complexity of reporting on a story aside.

First, it is worth noting that it is actually in the public interest to have more reporters cover a story and share a piece of news – especially about the government. Second, access to information laws were not created to give specific journalists scoops – they were designed to maximize the public’s capacity to access government information. Protecting a media company’s business model is not the role of access laws. It isn’t even in the spirit of the law.

Third, and worst, this entire debate fails to discuss the risks of such an approach. Which brings me back to the Wall Street Journal article.

I have, for years, warned that if the public publication of access to information request results is delayed so that one party (say, a journalist) has exclusive access for a period of time, then the system will also be used by others in pursuit of interests that might not be in the public good. Specifically, it creates a strong incentive for companies and investors to start mining government to get “exclusive” rights to government information they can put to use in advancing their agenda – making money.

As the SAC Capital case outlined above underscores, information is power. And if you have exclusive access to that information, you have an advantage over others. That advantage may be a scoop on a government spending scandal, but it can also be a stock tip about a company whose drug is going to clear a regulatory hurdle, or an indication that a juicy government contract is about to be signed, or that a weapons technology is likely to be shelved by the defence department. In other words – and what I have pointed out to my journalist friends – exclusivity in access to information risks transforming the whole system into a giant insider information generation machine. Great for journalists? Maybe. (I’ve my doubts – see above.) But great for companies? The Wall Street Journal article shows us it already is. Exclusivity would make it worse.

Indeed, in the United States, the private sector is already an enormous generator of access requests. One company, which serves as a clearing house for requests, accounts for 10% of requests to the FDA on its own:

The precise number of requests from investors is impossible to tally because many come from third-party organizations that send requests on behalf of undisclosed clients—a thriving industry unto itself. One of them, FOI Services Inc., accounted for about 10% of the 50,000 information requests sent to the FDA during the period examined by the Journal. Marlene Bobka, a senior vice president at Washington-based FOI Services, says a “huge, huge reason people use our firm is to blind their requests.”

Imagine what would happen if those making requests had formal exclusive rights? The secondary market in government information could become huge. And again, not in a way that advances the public interest.

In fact, given the above-quoted paragraph, I’m puzzled that journalists don’t demand that every access to information request be made public immediately. All told, the resources of the private sector (to say nothing of the tens of thousands of requests made by citizens or NGOs) dwarf those of media companies. Private companies may already be making – or may soon make – significantly more requests than journalists ever could. Free-riding on their work could probably be a full-time job and a successful career for at least a dozen data journalists. Better still, by not duplicating this work, media companies would free up capacity to focus on the most important problems that are in the public good.

All of this is to say… I fear for a world where many of the journalists I know – by demanding changes that are in their narrow self-interest – could help create a system that, as far as I can tell, could be deeply adverse to the public interest.

I’m sure I’m about to get yelled at (again). But when it comes to access to information requests, we are probably going to be better off in a world where they are truly digitized. That means requests can be made online (something that is somewhat arriving in Canada) and – equally importantly – where results are also published online for all to see. At the very minimum, it is a conversation that is worth having.

New Zealand: The World’s Lab for Progressive Tech Legislation?

Cross posted with TechPresident.

One of the nice advantages of having a large world with lots of diverse states is the range of experiments it offers us. Countries (or regions within them) can try out ideas, and if they work, others can copy them!

For example, in the world of drug policy, Portugal effectively decriminalized virtually all drugs. The result has been dramatic. And much of it positive. The changes include a 17% decline in HIV diagnoses amongst drug users and a drop in drug use among adolescents (13-15 yrs). For those interested, you can read more about this in a fantastic report by the Cato Institute written by Glenn Greenwald back in 2009, before he started exposing the unconstitutional and dangerous activities of the NSA. Now, some 15 years later, there have been increasing demands to decriminalize and even legalize drugs, especially in Latin America. But even the United States is changing, with both the states of Washington and Colorado opting to legalize marijuana. The lessons of Portugal have helped make the case, not by penetrating the public’s imagination per se, but by showing policy elites that decriminalization not only works but saves lives and money. Little Portugal may one day be remembered for changing the world.

I wonder if we might see a similar paper written ten years from now about New Zealand and technology policy. It may be that a number of Kiwis will counter the arguments in this post by exposing all the reasons why I’m wrong (which I’d welcome!), but at a glance, New Zealand would probably be the place I’d send a public servant or politician wanting to know more about how to do technology policy right.

So why is that?

First, for those who missed it, this summer New Zealand banned software patents. This is a stunning and entirely sensible accomplishment. Software patents, and the legal morass and drag on innovation they create, are an enormous problem. The idea that Amazon can patent “1-click” (e.g. the idea that you pre-store someone’s credit card information so they can buy an item with a single click) is, well, a joke. This is a grand innovation that should be protected for years?

And yet, I can’t think of a single other OECD member country that is likely to pass similar legislation. This means that it will be up to New Zealand to show that the software world will survive just fine without patents and the economy will not suddenly explode into flames. I also struggle to think of an OECD country where one of the most significant industry groups – the Institute of IT Professionals – would not only support such a measure but help push for its passage:

The nearly unanimous passage of the Bill was also greeted by Institute of IT Professionals (IITP) chief executive Paul Matthews, who congratulated [Commerce Minister] Foss for listening to the IT industry and ensuring that software patents were excluded.

Did I mention that the bill passed almost unanimously?

Second, New Zealanders are further up the learning curve regarding the dangerous willingness of their government – and foreign governments – to illegally surveil them online.

The arrest of Kim Dotcom over MegaUpload has sparked some investigations into how closely the country’s police and intelligence services follow the law. (For an excellent timeline of the Kim Dotcom saga, check out this link.) This is because Kim Dotcom was illegally spied on by New Zealand’s intelligence services and police force, at the behest of the United States, which is now seeking to extradite him. The arrest and subsequent fallout has piqued public interest and led to investigations, including the Kitteridge report (PDF), which revealed that “as many as 88 individuals have been unlawfully spied on” by the country’s Government Communications Security Bureau.

I suspect the Snowden documents and subsequent furor surprised New Zealanders less than many of their counterparts in other countries, since they were less a bombshell than another data point on a trend line.

I don’t want to overplay the impact of the Kim Dotcom scandal. It has not, as far as I can tell, led to a complete overhaul of the rules that govern intelligence gathering and online security. That said, I suspect it has created a political climate that may be more (healthily) distrustful of government intelligence services and the intelligence services of the United States. As a result, it is likely that politicians have been more sensitive to this matter for a year or two longer than elsewhere, and that public servants are more accustomed to assessing policies through the lens of their impact on the rights and privacy of citizens than in many other countries.

Finally, (and this is somewhat related to the first point) New Zealand has, from what I can tell, a remarkably strong open source community. I’m not sure why this is the case, but suspect that people like Nat Torkington – an open source and open data advocate in New Zealand – and others like him play a role in it. More interestingly, this community has had influence across the political spectrum. The centre-left Labour party deserves much of the credit for the patent reform, while the centre-right New Zealand National Party has embraced open data. The country was among the first to embrace open source as a viable option when procuring software, and in 2003 the government developed an official open source policy to help clear the path for greater use of open source software. This contrasts sharply with my experience in Canada where, as late as 2008, open source was still seen by many government officials as a dangerous (some might say cancerous?) option that needed to be banned and/or killed.

All this is to say that in both the public (e.g. civil society and the private sector) and within government there is greater expertise around thinking about open source solutions and so an ability to ask different questions about intellectual property and definitions of the public good. While I recognize that this exists in many countries now, it has existed longer in New Zealand than in most, which suggests that it enjoys greater acceptance in senior ranks and there is greater experience in thinking about and engaging these perspectives.

I share all this for two reasons:

First, I would keep my eye on New Zealand. This is clearly a place where something is happening in a way that may not be possible in other OECD countries. The small size of its economy (and so relative lack of importance to the major proprietary software vendors), combined with sufficient policy agreement both among the public and elites, enables the country to overcome both internal and external lobbying and pressure that would likely sink similar initiatives elsewhere. And while New Zealand’s influence may be limited, don’t underestimate the power of example. Portugal also has limited influence, but its example has helped show the world that the US-led narrative on the “war on drugs” can be countered. In many ways this is often how it has to happen. Innovation, particularly in policy, often comes from the margins.

Second, if a policy maker, public servant or politician comes to me and asks me who to talk to around digital policy, I increasingly find myself looking at New Zealand as the place that is the most compelling. I have similar advice for PhD students. Indeed, if what I’m arguing is true, we need research to describe, better than I have, the conditions that lead to this outcome as well as the impact these policies are having on the economy, government and society. Sadly, I have no names to give to those I suggest this idea to, but I figure they’ll find someone in the government to talk to, since, as a bonus to all this, I’ve always found New Zealanders to be exceedingly friendly.

So keep an eye on New Zealand, it could be the place where some of the most progressive technology policies first get experimented with. It would be a shame if no one noticed.

(Again, if some New Zealanders want to tell me I’m wrong, please do. Obviously, you know your country better than I do.)

Thesis Question Idea: Probing Power & Promotions in the Public Service

Here’s an idea for a PhD candidate out there with some interest in government or HR and some quant skills.

Imagine you could access a sensible slice of the HR history of a 300,000+ person organization, so you could see when people were promoted and where they moved within the organization.

I’m not sure if it would work, but the Government Electronic Directory Service (GEDS), essentially a “white pages” of Canada’s national government, could prove to be such a dataset. The service is actually designed to let people find one another within government. However, this also means it could potentially allow someone to track the progress of public servants’ careers, since you can see the different titles an employee holds each time they change jobs (and thus get a new title and phone number in GEDS). While not a perfect match, job titles generally map to pay scales and promotions, making them an imperfect but still useful proxy for career trajectory.
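To make the mechanism concrete, here is a minimal sketch (in Python, with invented names and titles) of how two directory snapshots could be diffed to surface title changes. A real study would need a stable employee identifier rather than names, which are ambiguous:

```python
# Sketch: given two hypothetical snapshots of a directory like GEDS
# (employee -> job title), list the employees whose title changed between
# them. All names and titles below are invented for illustration.

def title_changes(old_snapshot, new_snapshot):
    """Return {employee: (old_title, new_title)} for employees present in
    both snapshots whose title differs."""
    return {
        emp: (old_snapshot[emp], new_title)
        for emp, new_title in new_snapshot.items()
        if emp in old_snapshot and old_snapshot[emp] != new_title
    }

jan = {"A. Tremblay": "Policy Analyst", "B. Singh": "Director, Finance"}
jun = {"A. Tremblay": "Senior Policy Analyst", "B. Singh": "Director, Finance"}

print(title_changes(jan, jun))
# {'A. Tremblay': ('Policy Analyst', 'Senior Policy Analyst')}
```

Run over years of monthly snapshots, title changes like this one would become the raw events from which career trajectories are reconstructed.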

The screen shot below is for a random name I tried. I’ve attempted to preserve the privacy of the employee, which, in truth, isn’t really necessary, since anyone can access GEDS and so the data isn’t actually private to begin with.

GEDS 2

There are a number of interesting questions I could imagine an engaged researcher asking with such data. For example, where are the glass ceilings: are there particular senior roles that seem harder for women to get promoted into? Who are the super mentors: is there a manager whose former charges always seem to go on to lofty careers? Are there power cliques: are there super public servants around whom others cluster and whose promotions or career moves are linked? Are there career paths that are more optimal, or suboptimal? Or worse, is one’s path predetermined early on by where and in what role one enters the public service? And (frighteningly) could you create a predictive algorithm that accurately forecasts who might be promoted?
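That last, “frightening” question can be illustrated with a deliberately naive sketch: estimate a promotion rate per job title from (entirely invented) historical records, then use that rate as a forecast. A real study would use richer features and a proper classifier; everything here is hypothetical:

```python
# Sketch: the simplest possible "promotion predictor" - a per-title base
# rate estimated from invented historical records. Illustrative only.
from collections import defaultdict

history = [  # (title, was_promoted_within_two_years) - invented data
    ("Policy Analyst", True), ("Policy Analyst", True),
    ("Policy Analyst", False), ("Clerk", False),
    ("Clerk", False), ("Clerk", True),
]

counts = defaultdict(lambda: [0, 0])  # title -> [promoted, total]
for title, promoted in history:
    counts[title][0] += int(promoted)
    counts[title][1] += 1

def promotion_rate(title):
    """Naive forecast: the historical promotion rate for this title."""
    promoted, total = counts[title]
    return promoted / total

print(promotion_rate("Policy Analyst"))  # about 0.67
print(promotion_rate("Clerk"))           # about 0.33
```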

These types of questions could be enormously illuminating and shed an important light on how the public service works. Indeed, this data set would not only be important to issues of equity and fairness within the public service, but also around training and education. In many ways, I wish the public service itself would look at this data to learn about itself.

Of course, given that there is not effectively a pan-government HR group (that I’m aware of) it is unlikely that anyone is thinking about the GEDS data in a pan-government and longitudinal way (more likely there are HR groups organized by ministry that just focus on their ministry’s employees). All this, in my mind, would make this research in an academic institution all the more important.

I’m sure there are fears that would drive opposition to this. Privacy is an obvious one (this is why I’m saying an academic, or the government itself, should do this). Another might be lawsuits. Suppose such a study did discover institutional sexism? Or that some other group of people were disproportionately passed over for roles in a way that suggested unfair treatment? If this hypothetical study were able to quantify this discrimination in a new way, could it then be used to support lawsuits? I’ve no idea. Nor do I think I care. I’d rather have a government that was leveraging its valuable talent in the most equitable and effective way than one that stayed blind to understanding itself in order to avoid a possible lawsuit.

The big if, of course, is whether snapshots of the GEDS database have been saved over the years, either on purpose or inadvertently (backups?). It is also possible that some geek somewhere has been scraping GEDS on a nightly, weekly or monthly basis. The second big if is: would anyone be willing to hand the data over? I’d like to think that the answer would be yes, particularly for an academic whose proposal had been successfully vetted by an Institutional Review Board.

If anyone ever decides to pursue this, I’d be happy to talk to you more about ideas I have. Also, I suspect other levels of government may have similar directory services. Maybe this would work more easily on a smaller scale.