Category Archives: free culture

The Future of Academic Research

Yesterday, Nature – one of the world’s premier scientific journals – recognized University of British Columbia scientist Rosie Redfield as one of the top 10 science newsmakers of 2011.

The reason?

After posting a scathing attack on her blog about a paper that appeared in the journal Science, Redfield decided to attempt to recreate the experiment and has been blogging about her effort over the past year. As Nature describes it:

…that month, Redfield took matters into her own hands: she began attempting to replicate the work in her lab at the University of British Columbia in Vancouver, and documenting her progress on her blog (http://rrresearch.fieldofscience.com).

The result has been a fascinating story of open science unfolding over the year. Redfield’s blog has become a virtual lab meeting, in which scientists from around the world help to troubleshoot her attempts to grow and study the GFAJ-1 bacteria — the strain isolated by Felisa Wolfe-Simon, lead author of the Science paper and a microbiologist who worked in the lab of Ronald Oremland at the US Geological Survey in Menlo Park, California.

While I’m excited about Redfield’s blog (more on that below), we should pause and note that the above paragraph is a very, very sad reminder of the state of affairs in science. I find the term “open science” to be an oxymoron. The scientific process only works when it is, by definition, open. There is, quite arguably, no such thing as “closed science.” And yet it is a reflection of how 18th-century the entire apparatus of science remains that Redfield’s awesome experiment is just that – an experiment. We should celebrate her work, and ask ourselves: why is this not the norm?

So first, to celebrate her work… when I look at Redfield’s blog, I see exactly what I hope the future of scientific – and indeed all academic – research will look like. Here is someone who is constantly updating her results and sharing what she is doing with her peers, as well as getting input and feedback from colleagues and others around the world. Moreover, she plays to the medium’s strengths. While rigorous, she remains inviting and, from my reading, creates a more honest and human view into the world of science. I suspect that this might be much more attractive (and inspiring) to potential scientists. Consider these two lines from one of her recent posts:

So I’m pretty sure I screwed something up. But what? I used the same DNA stock tube I’ve used many times before, and I definitely remember putting 3 µl of DNA into each assay tube. I made fresh sBHI + novobiocin plates using pre-made BHI agar, and I definitely remember adding the hemin (4 ml), NAD (80 µl) and novobiocin (40 µl) to the melted agar before I poured the plates.

and

UPDATE:  My novobiocin plates had no NovR colonies because I had forgotten to add the required hemin supplement to the agar!  How embarrassing – I haven’t made that mistake in years.

and then this blog post title:

Some control results! (Don’t get excited, it’s just a control…)

Here is someone literally walking through their thought processes in a thorough, readable way. Can you imagine anything more helpful for a student or young scientist? And the posts! Wonderfully detailed walk-throughs of what has been tried, progress made and setbacks uncovered. And what about the candour! The admission of error and the attempts to figure out what went wrong. It’s the type of thinking I see from great hackers as well. It’s also the type of dialogue and discussion you won’t see in a formal academic paper, but it is exactly what I believe every field (from science, to non-profit, to business) needs more of.

Reading it all, I’m once again left wondering: why is this the experiment? Why isn’t this the norm? Particularly at publicly funded universities?

Of course, the answer lies in another question, one I first ran into over a year ago reading this great blog post by Michael Clarke on Why Hasn’t Scientific Publishing Been Disrupted Already? As he so rightly points out:

When Tim Berners-Lee created the Web in 1991, it was with the aim of better facilitating scientific communication and the dissemination of scientific research. Put another way, the Web was designed to disrupt scientific publishing. It was not designed to disrupt bookstores, telecommunications, matchmaking services, newspapers, pornography, stock trading, music distribution, or a great many other industries…

…The one thing that one could have reasonably predicted in 1991, however, was that scientific communication—and the publishing industry that supports the dissemination of scientific research—would radically change over the next couple decades.

And yet it has not.

(Go read the whole article – it is great.) Mathew Ingram also has a great piece on this, published half a year later, called So when does academic publishing get disrupted?

Clarke has a great breakdown of all of this, but my own opinion is that scientific journals survive not because they are an efficient means of transmitting knowledge (they are not – Redfield’s blog shows there are much, much faster ways to spread knowledge). Rather, journals survive in their current form because they are the only rating system scientists and (more importantly) universities have to assess effectiveness, and thus who should get hired, fired, promoted and, most importantly, funded. Indeed, I suspect journals actually impede (and definitely slow) scientific progress. In order to get published, scientists regularly hold back from sharing and disclosing discoveries and, more often still, data, until they can shape it in such a way that a leading journal will accept it. Indeed, try to get any scientist to publish their data in machine-readable formats – even after they have published with it – it’s almost impossible (notice there are no data catalogs on any major scientific journal’s website). The dirty secret is that this is because they don’t want others using it in case it contains some juicy insight they have so far missed.

Don’t believe me? Just consider this New York Times article on the breakthroughs in Alzheimer’s research. The whole article is about a big breakthrough in the scientific research process. What was it? That the scientists agreed they would share their data:

The key to the Alzheimer’s project was an agreement as ambitious as its goal: not just to raise money, not just to do research on a vast scale, but also to share all the data, making every single finding public immediately, available to anyone with a computer anywhere in the world.

This is unprecedented? This is the state of science today? In an era where we could share everything, we opt to share as little as possible. This is the destructive side of a scientific publishing process that is linked to performance evaluation.

It is also the sad reason why it is a veteran, established researcher closer to the end of her career who is blogging this way, and not a young, up-and-coming researcher trying to establish herself and get tenure. This type of blog is too risky to one’s career. Today, “open” science is not a path forward. It actually hurts you in a system that prefers more inefficient methods of spreading insights, research and data, but that is good at creating readily understood rankings.

I’m thrilled that Rosie Redfield has been recognized by Nature (which clearly enjoys the swipe at Science – its competitor). I’m just sad that today’s culture of science and universities means there aren’t more like her.

 

Bonus material: If you want to read an opposite view, here is a seriously self-interested defence of the scientific publishing industry that was totally stunning to read. It’s fascinating that this man and Michael Clarke share the same server. If you look in the comments of that post, there is a link to this excellent post by a researcher at a university in Cardiff that I think is a great counterpoint.

 

Open Data Day 2011 – Recaps from Around the World

This last Saturday was International Open Data Day with hackathons taking place in cities around the world.

How many, you ask? We can’t know for certain, but organizers posted events to the wiki in over 50 cities around the world. Given the number of tweets with the #odhd hashtag, and the locations they were coming from, I don’t think we were far off that mark. If you assume 20 people at each event (some had many more – for instance, there were over 100 in Ottawa, close to 50 in Vancouver, and 120+ in New York) it’s safe to say more than 1000 people were hacking on open data projects around the world.

It’s critical to understand that Open Data Day is a highly decentralized event. All the work that makes it a success (and I think it was a big success) is in the hands of local organizers who find space, rally participants, push them to create stuff and, of course, try to make the day as fun as possible. Beyond their hard work and dedication there isn’t much, if any, organization. No boss. No central authority. No patron or sponsor to say thank you. So if you know any of the fine people who attended, or even more importantly, helped organize an event, please shake their hand or shoot them a thank you. I know I’m intensely grateful to see there are so many others out there who care about this issue, who want to connect, learn, meet new people, have fun and, of course, make something interesting. Given the humble beginnings of this event, we’ve had two very successful years.

So what about the day? What was accomplished? What Happened?

Government Motivator

I think one of the biggest accomplishments of Open Data Day has been how it has become a motivator – a sort of deadline – for governments keen to share more open data. Think about this. A group of volunteers around the world is moving governments to share more data – to make public assets more open to reuse. For example, in Ireland Fingal County Council released data around trees, parking, playing pitches & mobile libraries for the day. In Ontario, Canada the staff for the Region of Waterloo worked extra hard to get their open data portal up in time for the event. And it wasn’t just local governments. The Government of BC launched new high value data sets in anticipation of the event and the Federal Government of Canada launched 4000 new data sets with International Open Data Day in mind. Meanwhile, the open data evangelist of Data.gov was prepared to open up data sets for anyone who had a specific request.

While governments should always be working to make more data available, I think we can all appreciate the benefits of having a deadline, and Open Data Day has become just that for more and more governments.

In other places, Open Data Day turns into a place where governments can converse with developers and citizens about why open data matters, and do research into what data the public is interested in. This is exactly what happened in Enschede in the Netherlands, where local city staff worked with participants on prioritizing data sets to open.

Local Events & Cool Hacks

A lot of people have been blogging about, or sharing videos of, Open Data Day events around the world. I’ve seen blog posts and news articles on events in places such as Madrid, Victoria BC, Oakland, Mexico City, Vancouver, and New York City. If there are more, please email them to me or post them on the wiki.

I haven’t been able to keep track of all the projects that got worked on, but here is a sampling of some that I’ve seen via Twitter, the wiki and other forums:

Hongbo: The Emergency Location Locator

In Cotonou, Benin, Open Data Day participants developed a web application called Hongbo – the Goun word for “Gate.” Hongbo enables users to locate the nearest hospital, drugstore and police station. As they noted on the Open Data Day wiki, the data sets for this application were public but not easily accessible. They hope Benin citizens can use it to quickly identify who to call or where to go in emergencies.

Tweet My Council

In Sydney, Australia, participants created Tweetmycouncil. A fantastically simple application that lets a user know which council’s jurisdiction they are standing in: simply send a tweet with the hashtag #tmyc and the app will work out where you are, determine which council’s jurisdiction you are in, and send you a tweet with the response.
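I don’t know how Tweetmycouncil is actually built, but at its core this is a classic point-in-polygon lookup. Here is a minimal sketch in Python using the shapely library, assuming a GeoJSON file of council boundaries (the file name and the “name” property are my own placeholders):

```python
import json
from shapely.geometry import Point, shape

def find_council(lon, lat, boundaries_path="council_boundaries.geojson"):
    """Return the name of the council whose boundary contains the point.

    Assumes a GeoJSON file with one polygon feature per council.
    """
    with open(boundaries_path) as f:
        boundaries = json.load(f)
    point = Point(lon, lat)  # GeoJSON coordinate order is (longitude, latitude)
    for feature in boundaries["features"]:
        if shape(feature["geometry"]).contains(point):
            return feature["properties"]["name"]
    return None

# e.g. the coordinates attached to a geotagged #tmyc tweet (Sydney CBD)
print(find_council(151.2093, -33.8688))
```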

Mexican Access to Information Tracker

In Mexico City one team created an application to compare Freedom of Information requests across different government departments. This could be a powerful tool for citizens and journalists. (Github repo)

Making it Easier for the Next Guy

In another project out of Mexico City, a team from Oaxaca created an API that generates a JSON file for any public data set. It would be great for this team to connect with Max Ogden and talk about Gut.
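I haven’t seen the Oaxaca team’s code, but the core idea – wrapping any tabular public data set in a JSON endpoint – can be sketched in a few lines of Python with Flask (the datasets directory and route are placeholders of my own):

```python
import csv
import os
from flask import Flask, abort, jsonify

app = Flask(__name__)
DATA_DIR = "datasets"  # placeholder: a directory of CSVs, one per public data set

@app.route("/dataset/<name>")
def dataset_as_json(name):
    """Serve datasets/<name>.csv as a JSON array of row objects."""
    path = os.path.join(DATA_DIR, name + ".csv")
    if not os.path.isfile(path):
        abort(404)
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))  # the header row supplies the JSON keys
    return jsonify(rows)

if __name__ == "__main__":
    app.run()
```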

Making it Even Easier for the Next Guy

Speaking of which, Max Ogden in Oakland shared more on Gut, which is less a classic app than a process that enables users to convert data between different formats. It excited a number of people, including open data developers at other locations such as Luke Closs and Mike West.

Mapping Census Data in Barcelona

A team of hackers in Barcelona mapped census tracts so they could be visualized, showing things like, say, the number of parks per census tract. You can find the data sets they used in Google Fusion Tables here.

Foreign Aid Visualizations

In London, UK and in Seattle (and possibly other places) developers were also very keen on the growing amount of aid data being made available in a common structure thanks to IATI. In Seattle developers created this very cool visualization of US aid over the last 50 years. I know the London team has visualizations of their own they’d like to share shortly.

Food Hacking!

One interesting thing about Open Data Day is how it bridges some very different communities. One of the most active groups is the food hackers, who came out in force in both New York and Vancouver.

In New York a whole series of food-related tools, apps and visualizations were developed, most of which are described here and here. The sheer quantity of participants (120+) and projects developed is astounding, but also fantastic is how inclusive their event was, with lots of people not just working on apps, but analyzing data and creating visualizations to help others understand an issue they share a common passion for: the Food Bill. Please do click on those links to see some of the fun visuals created.

The Ultimate Food API

In Vancouver, the team at FoodTree – who hosted the hackathon there – focused on shipping an API for developers interested in large food datasets. You can find their preliminary API and datasets on GitHub. You can also track the work they’ve done on their Open Food Wiki.

Homelessness

In Victoria, BC a team created a map of local walk-in community services that you can check out at http://ourservices.ca/.

BC Emergency Tweeting System

Another team in Victoria, BC focused on creating Twitter hashtags for each official place in the province, in the hopes that the province’s Provincial Emergency Program will adopt them.

Mapping Shell’s Oil Spills in Nigeria

The good people at the Open Knowledge Foundation worked on getting a ton more data into the Datahub, but they also had people learning how to visualize data, one of whom created this visualization of oil spills in Nigeria. Always great to see people experimenting and learning!

Mapping Vancouver’s Most Dangerous Intersections for Bikes

Open data hacking and bike accident data have a long history together, and for this hackathon I uploaded to Buzzdata five years’ worth of bike accident data I managed to get from ICBC. As a result – even though I couldn’t be present in Vancouver – two different developers took it and mapped it. You can see @ngriffiths’ map here, and @ericp’s will be up soon. It was interesting to learn that Broadway and Cambie is the most dangerous intersection in the city for cyclists.

Looking Forward

Last year Open Data Day attracted individual citizens: those with a passion for an issue (like food) or who want to make their government more effective or citizens’ lives a little easier. This year, however, we already started to see the community grow – the team at Socrata hosted a hackathon at their offices in Seattle. Buzzdata had people online trying to help people share their data. In addition to these private companies, some of the more established non-profits were out in force. The Open Knowledge Foundation had a team working on making openspending.org more accessible, while MySociety helped a team in Canada set up a local version of MapIt.

For those who think that open data can change the world, or even build medium-sized economic ecosystems, overnight: we need to reset those expectations. But it is growing. No longer are participants just citizens and hacktivists – there are real organizations and companies participating. Few, but they are there. My hope is that this trend will continue. That Open Data Day will continue to have meaning for individuals and hackers, but will also be something that larger, more established organizations, non-profits and companies will use as a rallying point as well. Something to shoot for next year.

Feedback

As I mentioned at the beginning, Open Data Day is a very decentralized event. We are, of course, not wedded to that approach and I’d love to hear feedback from people, good or bad, about what worked or didn’t work. Please do feel free to email me, post it to the mailing list or simply comment below.

 

 

Postscript

Finally, some of you may have noticed I was conspicuously absent on the day. I want to apologize to everyone. My partner went into labour on Friday night, and so by early Saturday morning it was obvious that my Open Data Day was going to be spent with her. Our baby was 11 days overdue, so we really thought that we’d be in the clear by Dec 3rd… but our baby had other plans. The good news is that, despite 35 hours of labour, mother and baby are doing well!

Statistics Canada Data to become OpenData – Background, Winners and Next Steps

As some of you learned last night, Embassy Magazine broke the story that all of Statistics Canada’s online data will not only be made free, but also released under the Government of Canada’s Open Data License Agreement (updated and reviewed earlier this week), which allows for commercial re-use.

This decision has been in the works for months, and while it does not appear to have been formally announced, Embassy Magazine managed to get a Statistics Canada spokesperson to confirm it is true. I have a few thoughts about this story: some background, who wins from this decision and, most importantly, some hope for what it will, and won’t, lead to next.

Background

In the Embassy article, the spokesperson claimed this decision had been in the works for years, something that is probably technically true. Such a decision – or something akin to it – has likely been contemplated a number of times. And there have been a number of trials and projects that have allowed some data to be made accessible, albeit under fairly restrictive licenses.

But it is less clear that the culture of open data has arrived at StatsCan, and less clear to me that this decision was internally driven. I’ve met many a StatsCan employee who encountered enormous resistance while advocating for open data. I remember pressing the issue during a talk at one of the department’s middle managers’ conferences in November of 2008 and seeing half the room nod vigorously in agreement, while the other half crossed their arms in strong disapproval.

Consequently, with the federal government increasingly interested in open data, coupled with a desire for a good news story coming out of StatsCan after last summer’s census debacle, and with many decisions in Ottawa happening centrally, I suspect this decision occurred outside the department. This does not diminish its positive impact, but it does mean that a number of the next steps, many of which will require StatsCan to adapt its role, may not happen as quickly as some will hope, as the organization may take some time to come to terms with the new reality and the culture shift it will entail.

This may be compounded by the fact that there may be tougher news on the horizon for StatsCan. With every department required to submit proposals to cut its budget by either 5% or 10%, and with StatsCan having already seen a number of its programs cut, there may be fewer resources in the organization to take advantage of the opportunity that making its data open creates, or even just to adjust to what has happened.

Winners (briefly)

The winners from this decision are, of course, consumers of StatsCan’s data. Indirectly, this includes all of us, since provincial and local governments are big consumers of StatsCan data and so now – assuming it is structured in an accessible manner – they will have easier (and cheaper) access to it. This is also true of large companies and non-profits, which have used StatsCan data to locate stores, target services and generally allocate resources more efficiently. The opportunity now opens for smaller players to also benefit.

Indeed, this is the real hope: that a whole new category of winners emerges. That the barrier to use for software developers, entrepreneurs, students, academics, smaller companies and non-profits will be lowered in a manner that enables a larger community to make use of the data and therefore create economic or social goods.

Such a community, however, will take time to evolve, and will benefit from support.

And finally, I think StatsCan is a winner. This decision brings it more profoundly into the digital age. It opens up new possibilities and, frankly, pushes a culture change that I believe is long overdue. I suspect times are tough at StatsCan – although not as a result of this decision – but this decision creates room to rethink how the department works and thinks.

Next Steps

The first thing everybody will be waiting to see is exactly what data gets shared, in what structure and at what level of detail. Indeed this question arose a number of times on Twitter, with people posting tweets such as “Cool. This is all sorts of awesome. Are geo boundary files included too, like Census Tracts and postcodes?” We shall see. My hope is yes, and I think the odds are good. But I could be wrong, at which point all this could turn into the most over-hyped data story of the year. (Which actually matters now that data analyst is one of the fastest growing job categories in North America.)

Second, open data creates an opportunity for a new and more relevant role for StatsCan with a broader set of Canadians. Someone from StatsCan should talk to the data group at the World Bank about their transformation after they launched their open data portal (I’d be happy to make the introduction). That data portal now accounts for a significant portion of all the Bank’s web traffic, and the group is going through a dramatic transformation, realizing they are no longer curators of data for bank staff and a small elite group of clients, but curators of economic data for the world. I’m told that, while the change has not been easy, a broader set of users has brought a new sense of purpose and identity. The same could be true of StatsCan. Rather than just an organization that serves the government of Canada and a select group of clients, StatsCan could become the curator of data for all Canadians. This is a much more ambitious, but I’d argue more democratic and important, goal.

And it is here that I hope other next steps will unfold. In the United States, (which has had free census data for as long as anyone I talked to can remember) whenever new data is released the census bureau runs workshops around the country, educating people on how to use and work with its data. StatsCan and a number of other partners already do some of this, but my hope is that there will be much, much more of it. We need a society that is significantly more data literate, and StatsCan along with the universities, colleges and schools could have a powerful role in cultivating this. Tracey Lauriault over at the DataLibre blog has been a fantastic advocate of such an approach.

I also hope that StatsCan will take its role as data curator for the country very seriously and think of new ways that its products can foster economic and social development. Offering APIs into its data sets would be a logical next step, something that would allow developers to embed census data right into their applications and ensure the data was always up to date. No one is expecting this to happen right away, but it was another question that arose on Twitter after the story broke, so one can see that new types of users will be interested in new, and more efficient, ways of accessing the data.
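To be clear, no such StatsCan API exists today, so the following is purely illustrative, but it gives a sense of the developer experience an API would enable – the endpoint, parameters and field names below are all invented for the example:

```python
import requests

# Hypothetical endpoint and field names - StatsCan offers no such API today
BASE_URL = "https://api.statcan.example.ca/census"

def tract_population(tract_id):
    """Fetch the current population figure for a census tract."""
    resp = requests.get(f"{BASE_URL}/tracts/{tract_id}",
                        params={"fields": "population"})
    resp.raise_for_status()
    return resp.json()["population"]

# An application could embed this call and always display up-to-date figures
print(tract_population("9330069.01"))
```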

But I think most importantly, the next step will need to come from us citizens. This announcement marks a major change in how StatsCan works. We need to be supportive, particularly at a time of budget cuts. While we are grateful for open data, it would be a shame if the institution that makes it all possible were reduced to a shell of its former self. Good quality data – and analysis to inform public policy – is essential to a modern economy, society and government. Now that we will have free access to what our tax dollars have already paid for, let’s make sure that it stays that way, by ensuring both that the data continues to be available and that there continues to be a quality institution capable of collecting and analyzing it.

(sorry for typos – it’s 4am, will revise in the morning)

As Canada Searches for its Open Government Partnership Commitments: A Proposal

Just before its launch in New York on September 20th, the Canadian Government agreed to be a signatory of the Open Government Partnership (OGP). Composed of over 40 countries, the OGP requires signatories to create a list of commitments they promise to implement. Because Canada signed on just before the deadline it has not – to date – submitted its commitments. As a result, there is a fantastic window for the government to do something interesting with this opportunity.

So what should we do? Here are the top 10 suggestions I propose for Canada’s OGP commitments:

Brief Background on Criteria:

Before diving in, it is worth letting readers know that there are some criteria for making commitments. Specifically, any commitment must tackle at least one of the five “core” challenges: improve public services, increase public integrity, more effectively manage public resources, create safer communities, and increase corporate accountability.

In addition, each recommendation should reflect at least one of the core OGP principles, which are: transparency, citizen participation, accountability, and technology and innovation.

The Top Ten

Having reviewed several other countries’ commitments, and being familiar with both what Canada has already done and what it could do, here are 10 commitments I would like to see our government make to the OGP.

1. Be open about developing the commitments

Obviously there are a number of commitments the government is going to make simply because they are actions or programs it was going to launch anyway. In addition, there will be some new ideas that public servants or politicians have been looking for an opportunity to champion and now have an excuse to. This is all fine, and part of the traditional way government works.

But wouldn’t it be nice if – as part of the open government partnership – we asked citizens what they thought the commitments should be? That would make the process nicely consistent with the principles and goals of the OGP.

Thus the government should launch a two-week crowdsourced idea generator, much like it did during the Digital Economy consultations. This is not to suggest that the ideas submitted must become part of the commitments, but they should inform the choices. This would be a wonderful opportunity to hear what Canadians have to say. In addition, the government could add some of its own proposals into the mix and see what type of response they get from Canadians.

2. Redefine Public as Digital: Pass an Online Information Act

At this year’s Open Government Data Camp in Warsaw, the always excellent Tom Steinberg noted that creating a transparent government and putting in place the information foundations of a digital economy will be impossible as long as access to government data is a gift from government (that can be taken away) rather than a right every citizen has. At the same time, Andrew Rasiej of Tech President advocated that we must redefine public as digital. A paper printout in a small office in the middle of nowhere does not make for “public disclosure” in the 21st century. It’s bad for democracy, it’s bad for transparency, and it is grossly inefficient for government.

Thus, the government should agree to pass an Online Information Act, perhaps modeled on the one proposed in the US Senate, stipulating that:

a) Any document the government produces should be available digitally, in a machine-readable format. The sham whereby the government can produce 3000-10,000 printed pages about Afghan detainees or the F-35 and claim it is publicly disclosing information must end.

b) Any data collected for legislative reasons must be made available – in machine-readable formats – via a government open data portal.

c) Any information that is ATIPable must be made available in a digital format, with any excess costs of generating that information borne by the requester up until a certain date (say 2015), at which point the excess costs will be borne by the ministry responsible. There is no reason why, in a digital world, there should be any cost to extracting information – indeed, I fear a world where the government can’t cheaply locate and copy its own information for an ATIP request, as it would suggest it can’t get at that information for its own operations.

3. Sign the Extractive Industries Transparency Initiative

As a leader in the field of resource extraction, it is critical that Canada push for the highest standards in a sector that all too often sees money that should be destined for the public good diverted into the hands of a few well-connected individuals. Canada’s reputation internationally has suffered as our extractive resource sector is seen as engaging in a number of problematic practices, such as bribing public officials – this runs counter to the Prime Minister’s efforts to promote democracy.

As a result, Canada should sign, without delay, the Extractive Industries Transparency Initiative, much as the United States did in September. This would help signal our desire for a transparent extractive industry, one in which we play a significant role.

4. Sign on to the International Aid Transparency Initiative

Canada has already taken significant steps toward publishing its aid data online, in machine-readable formats. This should be applauded. The next step is to do so in a way that conforms with international standards, so that this data can be assessed against the work of other donors.

The International Aid Transparency Initiative (IATI) offers an opportunity to increase transparency in foreign aid, better enable the public to understand its aid budget, compare the country’s effectiveness against others and identify duplication (and thus poorly used resources) among donors. Canada should agree to implement IATI immediately. In addition, it should request that the organizations it funds also disclose their work in ways that are compliant with IATI.

5. Use Open Data to drive efficiency in Government Services: Require the provinces to share health data – particularly hospital performance – as part of the next funding agreement under the Canada Health Act.

Comparing hospitals to one another is always a difficult task, and open data is not a panacea. However, more data about hospitals is rarely harmful, and there are a number of issues on which it would be downright beneficial. The most obvious of these is deaths caused by infection. The number of deaths that occur due to infections in Canadian hospitals is a growing problem (sigh – if only open data could help ban the antibacterial wipes that are helping propagate them). Having open data that allows for league tables showing the scope and location of the problem would likely cause many hospitals to rethink processes and, I suspect, save lives.

Open data can supply some of the competitive pressure that is often lacking in a public healthcare system. It could also better educate Canadians about their options within that system, as well as make them more aware of its benefits.

6. Reduce Fraud: Create a Death List

In an era where online identity is a problem, it is surprising to me that I’m unable to locate a database of expired social insurance numbers. Being able to query a list of social insurance numbers that belong to dead people would be a simple way to prevent fraud. Interestingly, the United States has just such a list available for free online. (Side fact: known as the Social Security Death Index, this database is also beloved by genealogists, who use it to trace ancestry.)
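Consuming such a list would be trivial for anyone processing applications. A minimal sketch, assuming the list were published as one number per line (the file name is a placeholder):

```python
def load_death_list(path="expired_sins.txt"):
    """Load a published list of expired SINs (one per line) into a set."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

EXPIRED = load_death_list()

def flag_possible_fraud(sin):
    """True if the SIN on an application belongs to a deceased person."""
    return sin in EXPIRED
```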

7. Save lives by publishing an API of recall data

Too often the public finds out about a product recall only after someone has died. This is a terribly tragic, not to mention grossly inefficient, outcome. Indeed, the current approach is a classic example of using 21st century technology to deliver a service in a 19th century manner. If the government is interested in using the OGP to improve government services, it should stop merely issuing recall press releases and also create an open data feed of recalled products. I expand on this idea here.

If the government were doubly smart, it would work with major retailers – particularly in the food industry – to ensure that they regularly tap into this data. In an ideal world, any time Save-on-Foods, Walmart, Safeway, or any other retailer scans a product in its inventory, the product would immediately be checked against the recall database, allowing bad food to be pulled before it hits the shelves. In addition, customers who use loyalty cards could be called or emailed to be informed that they had bought a product that has since been recalled. This would likely be much more effective than hoping the media picks the story up.
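To show how little the retailer’s side would involve, here is a minimal sketch assuming a recall feed that returned JSON listing the barcodes of recalled products – the URL and field names are invented for the example:

```python
import requests

RECALL_FEED = "https://recalls.example.gc.ca/recalls.json"  # hypothetical feed URL

def recalled_upcs():
    """Fetch the recall feed and return the set of recalled product barcodes."""
    resp = requests.get(RECALL_FEED)
    resp.raise_for_status()
    return {item["upc"] for item in resp.json()["recalls"]}

def check_scan(upc, recalled):
    """Run on every inventory scan; flag the product if it has been recalled."""
    if upc in recalled:
        print(f"RECALLED: pull product {upc} from the shelf")

recalled = recalled_upcs()  # a retailer would refresh this regularly, e.g. hourly
check_scan("0123456789012", recalled)
```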

8. Open Budget and Actual Spending Data

For almost a year the UK government has published all spending data, month by month, for each government ministry (down to £500 in some, £25,000 in others). Moreover, as an increasing number of local governments are required to share their spending data, it has led to savings, as governments begin to learn what other ministries and governments are paying for similar services.

Another bonus is that it becomes possible to talk about the budget in new and interesting ways. This BEAUTIFUL graphic was published in the Guardian; while still complicated, it is much easier to understand than any government document about the budget I have ever seen.


9. Allow Government Scientists to speak directly to the media about their research.

It has become a recurring embarrassment: scientists who work for Canada publish an internationally recognized, ground-breaking paper that provides some insight into the environment or geography of Canada, and journalists must talk to government scientists from other countries in order to get the details. Why? Because the Canadian government blocks access. Canadians have a right to hear the perspectives of the scientists their tax dollars pay for – and to enjoy the opportunity to become as well informed as the government on these issues.

Thus, lift the ban that blocks government scientists from speaking with the media.

10. Create a steering group of leading Provincial and Municipal CIOs to create common schemas for core data about the country.

While open data is good, open data organized the same way across different departments and provinces is even better. When data is organized the same way, it becomes easier for citizens to compare one jurisdiction against another, and for software solutions and online services to emerge that use that data to enhance the lives of Canadians. The Federal Government should use its convening authority to bring together some of the country’s leading government CIOs to establish common data schemas for things like crime, healthcare, procurement, and budget data. The list of what could be worked on is virtually endless, but those four areas all represent data sets that are frequently requested, so they might make for a good starting point.

The State of Open Data 2011

What is the state of the open data movement? Yesterday, during my opening keynote at the Open Government Data Camp (held this year in Warsaw, Poland), I sought to follow up on my talk from last year’s conference. Here’s my take on where we are today (I’ll post/link to a video of the talk as soon as the Open Knowledge Foundation makes it available).

Successes of the Past Year: Crossing the Chasm

1. More Open Data Portals

One of the things that has been amazing to witness in 2011 is the veritable explosion of open data portals around the world. Today there are well over 50 government data catalogs, with more and more being added. The most notable of these was probably the Kenyan open data catalog, which shows how far, and how wide, the open data movement has grown.

2. Better Understanding and More Demand

The thing about all these portals is that they are the result of a larger shift: more and more government officials are curious about what open data is. This is not to say that understanding has radically shifted, but many people in government (and in politics) now know the term, believe there is something interesting going on in this space, and want to learn more. Consequently, in a growing number of places there is less and less headwind against us. Rather than screaming from the rooftops, we are increasingly being invited in the front door.

3. More Experimentation

Finally, what’s also exciting is the increased experimentation in the open data space. The number of companies and organizations trying to engage open data users is growing. ScraperWiki, the DataHub, BuzzData, Socrata and Visual.ly are some of the products and resources that have emerged out of the open data space. And the types of research and projects that are emerging – the tracking of the Icelandic volcano eruptions, the emergence of Hacks/Hackers, micro-projects (like my own Recollect.net) and the research showing that open data could be generating savings of £8.5 million a year for governments in the Greater Manchester area – are deeply encouraging.

The Current State: An Inflection Point

The exciting thing about open data is that increasingly we are helping people – public servants, politicians, business owners and citizens – imagine a different future, one that is more open, efficient and engaging. Our impact is still limited, and the journey is still in its early days. More importantly, thanks to success number 2 above, our role is changing. So what does this mean for the movement right now?

Externally to the movement, the work we are doing is only getting more relevant. We are in an era of institutional failure. From the Tea Party to Occupy Wall St., there is a recognition that our institutions no longer sufficiently serve us. Open data can’t solve this problem, but it is part of the solution. The problem with the old order and the institutions it fostered is that their organizing principle is built around the management (control) of processes; it’s been about the application of the industrial production model to government services. This means it can only move so fast, and, because of its strong control orientation, can only allow for so much creativity (and adaptation). Open data is about putting the free flow of information at the heart of government – both internally and externally – with the goal of increasing government’s metabolism and decentralizing society’s capacity to respond to problems. Our role is not obvious to the people in those movements, and we should make it clearer.

Internally to the movement, we have another big challenge. We are at a critical inflection point. For years we have been on the outside, yelling that open data matters. But now we are being invited inside. Some of us want to rush in, keen to make advances; others want to hold back, worried about being co-opted. To succeed, it is essential that we become more skilled at walking this difficult line: engaging with governments and helping them make the right decisions, while not being co-opted or sacrificing our principles. Choosing not to engage would, in my opinion, be to abdicate our responsibility as citizens and open data activists. This is a difficult transition, but it will be made easier if we at least acknowledge it, and support one another through it.

Our Core Challenges: What’s next

Looking across the open data space, my own feeling is that there are three core challenges facing the open data movement that threaten to undermine the successes we’ve enjoyed so far.

1. The Compliance Trap

One key risk for open data is that all our work ends up being framed as a transparency initiative, and thus making data available is reduced to a compliance issue for government departments. If this is how our universe is framed, I suspect that in 5-10 years governments, eager to save money and cut some services, will choose to cut open data portals as a cost-saving initiative.

Our goal is not to become a compliance issue. Our goal is to make governments understand that they are data management organizations and that they need to manage their data assets with the same rigour with which they manage physical assets like roads and bridges. We are as much about data governance as we are about open data. This means we need to have a vision for government, one where data becomes a layer of the government architecture. Our goal is to make data a platform – one that not only citizens outside of government can build on, but one on top of which government rebuilds its policy apparatus as well as its IT systems. Achieving this will ensure that open data gets hardwired right into government and so cannot be easily shut down.

2. Data Schemas

This year, in the lead up to the Open Data Camp, the Open Knowledge Foundation created a map of open data portals from around the world. This was fun to look at, and I think should be the last time we do it.

We are getting to a point where the number of data portals is becoming less and less relevant. Getting more portals isn’t going to help open data scale. What is going to allow us to scale is establishing common schemas for data sets that enable them to work across jurisdictions. The single most widely used open government data set is transit data, which, because it has been standardized by the GTFS (General Transit Feed Specification), is available across hundreds of jurisdictions. This standardization has not only put the data into Google Maps (generating millions of uses every day) but has also led to an explosion of transit apps around the world. Common standards will let us scale. We cannot forget this.
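To make the point concrete: because every GTFS feed ships the same stops.txt file with the same columns, the code to read it is identical everywhere. A minimal sketch in Python:

```python
import csv

def load_stops(path="stops.txt"):
    """Read the stops file found in every GTFS feed; the columns never change."""
    with open(path, newline="") as f:
        return [
            {
                "id": row["stop_id"],
                "name": row["stop_name"],
                "lat": float(row["stop_lat"]),
                "lon": float(row["stop_lon"]),
            }
            for row in csv.DictReader(f)
        ]

# The same code works whether stops.txt came from Vancouver, Portland or Warsaw
for stop in load_stops()[:5]:
    print(stop["name"], stop["lat"], stop["lon"])
```

Write that once and it runs against hundreds of agencies’ feeds – that is exactly what a common schema buys the movement.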

So let’s stop mapping open data portals, and start mapping datasets that adhere to common schemas. Given that open data is increasingly looked upon favourably by governments, creating these schemas is, I believe, now the central challenge to the open data movement.

3. Broadening the Movement

I’m impressed by the hundreds and hundreds of people here at the Open Data Camp in Warsaw. It is fun to be able to recognize so many of the faces here; the problem is that I can recognize too many of them. We need to grow this movement. There is a risk that we will become complacent, that we’ll enjoy the movement we’ve created and, more importantly, our roles within it. If that happens we are in trouble. Despite our successes, we are far from reaching critical mass.

The simple question I have for us is: where are the United Way, Google, Microsoft, the Salvation Army, Oxfam, and Greenpeace? We’ll know we are making progress when companies – large and small – as well as non-profits start understanding how open government data can change their world for the better, and so want to help us advance the cause.

Each of us needs to go out and start engaging these types of organizations and helping them see this new world and the potential it creates for them to make money or advance their own issues. The more we can embed ourselves into others’ networks, the more allies we will recruit and the stronger we will be.

 

International Open Data Hackathon 2011: Better Tools, More Data, Bigger Fun

Last year, with only a month of notice, a small group of passionate people announced we’d like to do an international open data hackathon and invited the world to participate.

We were thinking small but fun. Maybe 5 or 6 cities.

We got it wrong.

In the end, people from over 75 cities around the world offered to host an event. Better still, we heard definitively from people in over 40. It was an exciting day.

Last week, after locating a few of the city organizers’ email addresses, I asked them if we should do it again. Every one of them came back and said: yes.

So it is official. This time we have 2 months’ notice. December 3rd will be Open Data Day.

I want to be clear: our goal isn’t to be bigger this year. That might be nice if it happens. But maybe we’ll only have 6-7 cities. I don’t know. What I do want is for people to have fun, to learn, and to engage those who are still wrestling with the opportunities around open data. There is a world of possibilities out there. Can we seize on some of them?

Why.

Great question.

First off: we’ve got more data. Thanks to more and more enlightened governments in more and more places, there’s a greater amount of data to play with. Whether it is Switzerland, Kenya, or Chicago, there’s never been more data available to use.

Second, we’ve got better tools. With a number of governments using Socrata, there are more APIs out there for us to leverage. ScraperWiki has gotten better, and new tools like BuzzData, the DataHub and Google’s Fusion Tables are emerging every day.

And finally, there is growing interest in making “openness” a core part of how we measure governments. Open data has a role to play in driving this debate. Done right, we could make the first Saturday in December “Open Data Day”: a chance to explain, demo, and invite to play the policy makers, citizens, businesses and non-profits who don’t yet understand the potential. Let’s raise the world’s data literacy and have some fun. I can’t think of a better way than with another global open data hackathon – a maker-faire-like opportunity for people to celebrate open data by creating visualizations, writing up analyses, building apps or doing whatever they want with data.

Of course, like last time, hopefully we can make the world a little better as well. (more on that coming soon)

How.

The premise for the event is simple, relying on 5 basic principles.

1. Together. It can be as big or as small, as long or as short, as you’d like it, but we’ll be doing it together on Saturday, December 3rd, 2011.

2. It should be open. Around the world I’ve seen hackathons filled with different types of people, exchanging ideas, trying out new technologies and starting new projects. Let’s be open to new ideas and new people. Chris Thorpe in the UK has done amazing work getting a young and diverse group hacking. I love Nat Torkington’s words on the subject. Our movement is stronger when it is broader.

3. Anyone can organize a local event. If you are keen to help organize one in your city and/or just participate, add your name to the relevant city on this wiki page. Wherever possible, try to keep it to one event per city – let’s build some community and get new people together. Which city or cities you share with is up to you, as is how you do it. But let’s share.

4. You can work on anything that involves open data. That could be a local or global app, a visualization, a proposed standard for common data sets, or scraping data from a government website to make it available to others on BuzzData.

It would be great to have a few projects people can work on around the world – building stuff that is core infrastructure for future projects. That’s why I’m hoping someone in each country will create a local version of MySociety’s MapIt web service for their country. It would give us one common project, and raise the profile of a great organization and a great project.
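For a sense of what MapIt offers: its public UK instance exposes a simple point lookup over HTTP that returns every administrative area covering a coordinate. A quick sketch using Python’s requests library (a local instance would swap in its own host):

```python
import requests

def areas_covering(lon, lat, host="https://mapit.mysociety.org"):
    """Ask a MapIt instance which administrative areas cover a point.

    MapIt's point lookup takes WGS84 coordinates as longitude,latitude.
    """
    resp = requests.get(f"{host}/point/4326/{lon},{lat}")
    resp.raise_for_status()
    return resp.json()  # a dict mapping area id -> area details

# Westminster, London: prints the names of the covering council, constituency, etc.
for area in areas_covering(-0.1276, 51.5034).values():
    print(area["name"])
```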

We also hope to be working with Random Hacks of Kindness, who’ve always been so supportive – ideally supplying data that they will need to run their applications.

5. Let’s share ideas across cities on the day. Each city’s hackathon should do at least one demo, brainstorm or proposal that it shares in an interactive way with members of a hackathon in at least one other city. This could be via video stream, Skype, or chat… anything – but let’s get to know one another and share the cool projects or ideas we are hacking on. There are some significant challenges to making this work: timezones, languages, culture, technology… but who cares, we are problem solvers; let’s figure out a way to make it work.

Like last year, let’s not try to boil the ocean. Let’s have a bunch of events, where people care enough to organize them, and try to link them together with a simple, short connection/presentation. Above all, let’s raise some awareness, build something and have some fun.

What next?

1. If you are interested, sign up on the wiki. We’ll move to something more substantive once we have the numbers.

2. Reach out and connect with others in your city on the wiki. Start thinking about the logistics. And be inclusive. Someone new shows up, let them help too.

3. Share your thoughts with me. What’s got you excited about it? If you love this idea, let me know, and blog/tweet/status update about it. Conversely, tell me what’s wrong with any or all of the above. What’s got you worried? I want to feel positive about this, but I also want to know how we can make it better.

4. Localization. If there is bandwidth locally, I’d love for people to translate this blog post and repost it locally. (Let me know, as I’ll try cross-posting it here, or at least linking to it.) It is important that this not be an English-language-only event.

5. If people want a place to chat with others about this, feel free to post comments below. Also, the Open Knowledge Foundation’s Open Data Day mailing list will be the place where people can share news and help one another out.

Once again, I hope this will sound like fun to a few committed people. Let me know what you think.

The Science of Community Management: DjangoCon Keynote

At OSCON this year, Jono Bacon argued that we are entering an era of renaissance in open source community management – that we no longer have to just share stories, but that repeatable, scientific approaches are increasingly available to us. In short, the art of community management is shifting to a science.

With an enormous debt to Jono, I contend we are already there. Indeed, the tools to enable a science of community management have existed for at least 5 years. All that is needed is an effort to implement them.

A few weeks ago the organizers of DjangoCon were kind enough to invite me to give the keynote at their conference in Portland and I made these ideas the centerpiece of my talk.

Embedded below is the result: a talk that starts slowly, but that grew with passion and engagement as it progressed. I really want to thank the audience for the excellent Q&A and for engaging with me and the ideas as much as they did. As someone from outside their community, I’m grateful.

My hope in the next few weeks is to write this talk up in a series of blog posts or something more significant and, hopefully, to redo this video on SlideShare (although I’m going to have to get my hands on the audio for this). I’ll also be giving a version of this talk at the Drupal Pacific Northwest Summit in a few weeks. Feedback, as always, is not only welcome but gratefully received. None of this happens in a vacuum; it is always your insights that help me get better, smarter and more on target.

Big thanks to Diederik van Liere and Lauren Bacon for inspiration and help, as well as Mike Beltzner, Daniel Einspanjer, David Ascher and Dan Mosedale (among many others) at Mozilla, who’ve been supportive and a big assistance.

In the meantime, I hope this is enjoyable, challenging and spurs good thoughts.

The Geopolitics of the Open Government Partnership: the beginning of Open vs. Closed

Aside from one or two notable exceptions, there hasn’t been a ton of press about the Open Government Partnership (OGP). This is hardly surprising. The press likes to talk about corruption and bad government; people getting together to actually address these things is far less sexy.

But even where good coverage exists, analysts and journalists are, I think, misunderstanding the nature of the partnership and its broader implications should it take hold. Presently it is generally seen as a do-good project, one that will help fight corruption and hopefully lead to some better governance (both of which I hope will be true). However, the Open Government Partnership isn’t just about doing good; it has real strategic and geopolitical purposes.

In fact, the OGP is, in part, about a 21st century containment strategy.

For those unfamiliar with 20th century containment, a brief refresher. Containment refers to a strategy outlined by a US diplomat – George Kennan – who, while posted in Moscow, wrote the famous Long Telegram, in which he outlined the need for a more aggressive policy to deal with an expansionist post-WWII Soviet Union. He argued that such a policy would need to isolate the USSR politically and strategically, in part by positioning the United States as an example in the world that other countries would want to work with. While discussions of “containment” often focus on its military aspects and the eventual arms race, it was equally influential in prompting the ideological battle between the USA and USSR as they sought to demonstrate whose “system” was superior.

So I repeat: the OGP is part of a 21st century containment policy. And I’d go further – it is an effort to forge a new axis around which America specifically, and a broader democratic camp more generally, may seek to organize allies and rally its camp. It abandons the now outdated free-market/democratic vs. state-controlled/communist axis in favour of a more subtle, but more appropriate, one: open vs. closed.

The former axis makes little sense in a world where authoritarian governments often embrace (quasi) free markets to stay in power, and even have some of the basic trappings of a democracy. The Open Government Partnership is part of an effort to redefine and shift the goal posts around what makes for a free-market democracy. Elections and a marketplace clearly no longer suffice; the OGP essentially sets a new bar at which a state must (in theory) allow itself to be transparent enough to provide its citizens with information (and thus power). In short: a state can’t simply have some of the trappings of a democracy; it must be democratic and open.

But that also leaves the larger question: who is being contained? To find the answer, take a look at the list of OGP participants. And then consider who isn’t, and likely never could be, invited to the party.

OGP members: Albania, Azerbaijan, Brazil, Bulgaria, Canada, Chile, Colombia, Croatia, Czech Republic, Dominican Republic, El Salvador, Estonia, Georgia, Ghana, Guatemala, Honduras, Indonesia, Israel, Italy, Jordan, Kenya, Korea, Latvia, Liberia, Lithuania, Macedonia, Malta, Mexico, Moldova, Mongolia, Montenegro, Netherlands, Norway, Peru, Philippines, Romania, Slovak Republic, South Africa, Spain, Sweden, Tanzania, Turkey, Ukraine, United Kingdom, United States, Uruguay

Notably absent: China, Iran, Russia, Saudi Arabia (indeed, much of the Middle East), Pakistan

*India is not part of the OGP but was involved in much of the initial work; while it has withdrawn (for domestic political reasons) I suspect it will stay involved tangentially.

So first, what you have here is a group of countries that are broadly democratic. Indeed, if you were going to have a democratic caucus in the United Nations, it might look something like this (there are some players on that list that are struggling, but for them the OGP is another opportunity to consolidate and reinforce the gains they’ve made, as well as push for new ones).

In this regard, the OGP should be seen as an effort by the United States and some allies to find common ground, as well as a philosophical touch point that not only separates them from rivals but makes their camp more attractive to deal with. It’s no coincidence that on the day of the OGP launch the President announced that the United States’ first fulfilled commitment would be its decision to join the Extractive Industries Transparency Initiative (EITI). The EITI commits American oil, gas and mining companies to disclose payments made to foreign governments, which would make corruption much more difficult.

This is America essentially signalling to African people and their leaders: do business with us, and we will help prevent corruption in your country. We will let you know if officials get paid off by our corporations. The obvious counterpoint to this is… the Chinese won’t.

It’s also why Brazil is a co-chair, and why the idea was prompted during a meeting with India. This is an effort to bring the most important BRIC countries into the fold.

But even outside the BRICs, the second thing you’ll notice about the list is the number of Latin American and, in particular, African countries included. Between the OGP, the UK making government transparency a criterion for its foreign aid, and the World Bank increasingly moving in the same direction, the forces for “open” are laying out one path for development and aid in Africa – one that rewards good governance and, ideally, creates opportunities for African citizens. Again, the obvious counterpoint is… the Chinese won’t.

It may sound hard to believe, but the OGP is much more than a simple pact designed to make heads of state look good. I believe it has real geopolitical aims and may be the first overt, ideological salvo in what I suspect will become the geopolitical axis of Open versus Closed. This is about finding ways to compete for the hearts and minds of the world in a way that China, Russia, Iran and others simply cannot. And while I agree we can debate the “openness” of the various signing countries, I like the idea of a world in which states compete to be more open. We could do worse.

The End of the World and Journalism in the Era of Open

For those not in the United Kingdom: a massive scandal has erupted around allegations that one of the country’s tabloids – the News of the World (a subsidiary of Rupert Murdoch’s News Corporation) – was illegally hacking into and listening to the voicemails of not only members of the royal family and celebrities, but also murder victims and the family members of soldiers killed in Afghanistan.

The fallout from the scandal has, among other things, caused the 168-year-old newspaper to be unceremoniously closed, prompted an enormous investigation into the actions of editors and executives at the newspaper, forced the resignation (and arrest) of Andy Coulson – former News of the World editor and director of communications for the Prime Minister – and thrown into doubt Rupert Murdoch’s bid to gain complete control over the British satellite television network BSkyB.

For those wanting to know more, I encourage you to head over to the Guardian, which broke the story and has done some of the best reporting on it. Also, possibly the best piece of analysis I’ve read on the whole sordid affair is this post from Reuters, which essentially points out that by shutting down the News of the World, News Corp may shrewdly ensure that all incriminating documents can (legally) be destroyed. Evil genius stuff.

But why bring this all up here at eaves.ca?

Because I think this is an example of a trend in media that I’ve been arguing has been going on for some time.

Contrary to what news people would have you believe, my sense is that most people don’t trust newspapers – no more than they trust governments. Since 1983, Ipsos MORI and the British Medical Association have asked UK citizens whom they trust. The results for politicians are grim. The interesting thing is, they are no better for journalists (although TV news anchors do okay). Don’t believe me? Take a look at the data tables from Ipsos MORI. Or look at the chart Benne Dezzle over at Viceland created out of the data.

There is no doubt people value the products of governments and the media – but this data suggests they don’t trust the people creating them, which I really think is a roundabout way of saying: they don’t trust the system that creates the news.

I spend a lot of my time arguing that governments need to be more transparent, and that this (contrary to what many public servants feel) will make them more, not less, effective. Back in 2009, in reaction to the concern that the print media was dying, I wrote a blog post arguing the same was true for journalism. Thanks, in part, to Jay Rosen listing it as part of his flying seminar on the future of news, it became widely read and ended up being reprinted, along with Missing the Link (an article Taylor Owen and I wrote), in the journalism textbook The New Journalist. Part of what I think is going on in the UK is a manifestation of that blog post, so if you haven’t read it, now is as good a time as any.

The fact is, newsrooms are frequently as opaque (both in process and, sometimes, in motivation) as governments. People may be willing to rely on them, and they’ll use them if their outputs are good, but they’ll turn on them, and quickly, if they come to understand that the process stinks. This is true of any organization, and the news media doesn’t get a special pass because of the role it plays – indeed, the opposite may be true. More profoundly, I find it interesting that what many people consider two of the key pillars of western democracy are staffed by people who are among the least trusted in our society. Maybe that’s okay. But maybe it’s not. If we think we need better forms of government – which many people seem to feel we do – it may also be that we need better ways of generating, managing and engaging in the accountability of that government.

Of course, I don’t want to overplay the situation here. The News of the World doomed itself because it broke the law. More importantly, it did so in a truly offensive way: hacking into the cell phone of a murder victim who was an everyday person. Admittedly, when the victims were celebrities, royals and politicians, it percolated as a relatively contained scandal. But if we believe that transparency is the sunlight that causes governments to be less corrupt – or at least forces politicians to recognize their decisions will be more scrutinized – maybe a little transparency might have caused the executives and editors at News Corp to behave a little better as well. I’m not sure what a more open media organization would look like – although Wikipedia does an interesting job – but from both a brand-protection and a values-based decision-making perspective, a little transparency could be the right incentive to ensure that the journalists, editors and executives in a news system few of us seem to trust behave a little better. And that might cause them to earn more of the trust I think many deserve.


Access to Information is Fatally Broken… You Just Don’t Know it Yet

I’ve been doing a lot of thinking about access to information, and am working on a longer analysis, but in the short term I wanted to share two graphs – graphs that outline why Access to Information (Freedom of Information in the United States) is unsustainable and will, eventually, need to be radically rethought.

First, this analysis is made possible by the enormous generosity of the Canadian Federal Information Commissioner’s Office, which several weeks ago sent me a tremendous amount of useful data regarding access to information requests over the past 15 years at the Treasury Board Secretariat (TBS).

The first figure I created shows both the absolute number of Access to Information (ATIP) requests since 1996 and the running year-on-year percentage increase. The dotted line represents the average percentage increase over this period. As you can see, the number of ATIP requests has almost tripled in this time. This is very significant growth – the kind you’d want to see in a well-run company. Alas, for those processing ATIP requests, I suspect it represents a significant headache.
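
(For the curious, here is roughly how the figure’s two series are computed. This is a minimal sketch in Python with invented request counts – the real TBS numbers aren’t reproduced here – but the method is the one the chart uses.)

```python
# A minimal sketch of how the figure's two series can be derived from
# annual request counts. The counts below are invented placeholders
# (the real TBS series isn't reproduced here); only the method matters.

counts = [10_000, 10_800, 11_900, 12_700, 13_900]  # hypothetical, one per year

# the running year-on-year percentage increase
yoy = [cur / prev - 1 for prev, cur in zip(counts, counts[1:])]

# the dotted line: the average of those year-on-year increases
average = sum(yoy) / len(yoy)

for rate in yoy:
    print(f"{rate:+.1%}")
print(f"average: {average:+.1%}")
```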

That’s because, of course, such growth is likely unmanageable. It might be manageable if, say, the cost of handling each request were dropping rapidly. If such efficiencies were being wrung out of the system of routing and sorting requests, then we could simply ignore the chart above. Sadly, as the next chart I created demonstrates, this is not the case.

[Figure: ATIP costs at the Treasury Board Secretariat]

In fact, the cost of managing these transactions has not tripled – it has more than quadrupled. This means that not only is the number of transactions increasing at about 8% a year, the cost of fulfilling each of those transactions is itself rising at a rate above inflation.
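
A quick way to see why this matters is to convert those multipliers into compound annual rates. A back-of-envelope sketch, assuming only the figures above – requests roughly tripling and costs roughly quadrupling over the 15-year window:

```python
YEARS = 15
REQUEST_GROWTH = 3.0  # requests roughly tripled over the period
COST_GROWTH = 4.0     # total costs more than quadrupled

def annualized(multiplier: float, years: int = YEARS) -> float:
    """Convert a total growth multiplier into a compound annual rate."""
    return multiplier ** (1 / years) - 1

print(f"Requests:         {annualized(REQUEST_GROWTH):.1%} per year")                # ~7.6%
print(f"Total cost:       {annualized(COST_GROWTH):.1%} per year")                   # ~9.7%
print(f"Cost per request: {annualized(COST_GROWTH / REQUEST_GROWTH):.1%} per year")  # ~1.9%
```

In other words, even after stripping out the growth in volume, the cost per request compounds at roughly 2% a year – costs aren’t rising just because there are more requests; each request is getting more expensive to fulfill.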

Now remember, I’m not even talking about the effectiveness of ATIP. I’m not talking about how quickly requests are turned around (as the Information Commissioner has discussed, this is broadly getting worse), nor am I discussing whether less information is being restricted (it isn’t; here too things are getting worse). These are important – and difficult to assess – metrics.

I am, instead, merely looking at the economics of ATIP – and the situation looks grim. Basically, two interrelated problems threaten the current system.

1) As the number of ATIP requests increases, the manpower required to answer them also appears to be increasing. At some point, the hours required to fulfill all requests sent to a ministry will equal the total hours of manpower at that ministry’s disposal. Yes, that day may be far off, but the day when it hits some meaningful percentage – say 1%, 3% or 5% of total hours worked at Treasury Board – may not be (see the sketch after point 2 below). That’s a significant drag on efficiency. I recall talking to a foreign service officer who mentioned that during the Afghan prisoner scandal an entire department of foreign service officers – some 60 people in all – was working full time assessing access to information requests. That’s an enormous amount of time, energy and money.

2) Even more problematic than the number of work hours is the cost. According to the data I received, Access to Information requests cost the Treasury Board $47,196,030 last year. Yes, that’s 47 with “million” behind it. And remember, this is just one ministry. Multiply that by 25 (let’s pretend that’s the number of ministries; there are actually many more, but I’m trying to be really conservative with my assumptions) and it means last year the government may have spent over $1.175 billion fulfilling ATIP requests. That is a staggering number. And it’s growing.
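
Both points lend themselves to a quick sanity check. The sketch below reproduces the 25-ministry extrapolation (my own deliberately conservative assumption, as noted above), and adds the back-of-envelope projection promised in point 1 – the 0.5% starting share of work hours is a purely invented figure, included only to show how quickly an ~8% compounding rate catches up:

```python
import math

# Point 2: the government-wide extrapolation, using the post's own
# deliberately conservative 25-ministry multiplier.
tbs_cost = 47_196_030
print(f"Estimated government-wide cost: ${tbs_cost * 25:,}")  # $1,179,900,750

# Point 1: a purely hypothetical projection. Assume request-handling
# consumes 0.5% of a ministry's work hours today (an invented starting
# share) and that this share compounds at ~8% a year.
start_share, growth = 0.005, 1.08
for threshold in (0.01, 0.03, 0.05):
    years = math.log(threshold / start_share) / math.log(growth)
    print(f"Reaches {threshold:.0%} of total hours in ~{years:.0f} years")
```

Under those (invented) assumptions, the 1% threshold arrives in under a decade. The point is simply that compounding growth makes “far off” closer than it feels.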

Transparency, apparently, is very, very expensive. At some point, it risks becoming too expensive.

Indeed, ATIP reminds me of healthcare. It’s completely unsustainable, and absolutely necessary.

To be clear, I’m not saying we should get rid of ATIP. That, I believe, would be folly. It is and remains a powerful tool for holding government accountable. Nor do I believe that requesters should pay for ATIP requests as a way to offset costs (as BC Ferries does) – this creates a barrier that punishes the most marginalized and threatened, while enabling only the wealthy or well-financed to hold government accountable.

I do think it suggests that governments need to radically rethink how they manage ATIP. More importantly, I think it suggests that government needs to rethink how it manages information. Open data and digital documents are all part of a strategy that, I hope, can lighten the load. I’ve also long felt that if/as governments move their work onto online platforms like GCPEDIA, we should simply make non-classified pages open to the public on something like a five-year timeline. This too could help reduce requests.
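
To make that last idea concrete, here is a toy sketch of the release rule. To be clear: the five-year embargo and the page fields are my assumptions for illustration, not an existing GCPEDIA feature.

```python
from datetime import datetime, timedelta, timezone

# A toy "open after five years" rule for a GCPEDIA-style wiki: any
# non-classified page whose last edit is more than five years old
# becomes publicly visible. The page fields are invented for this sketch.
EMBARGO = timedelta(days=5 * 365)

def publicly_visible(page: dict, now: datetime) -> bool:
    return (not page["classified"]) and (now - page["last_edited"] > EMBARGO)

page = {"title": "Interdepartmental style guide",
        "classified": False,
        "last_edited": datetime(2004, 6, 1, tzinfo=timezone.utc)}
print(publicly_visible(page, datetime.now(timezone.utc)))  # True
```

The point isn’t the code – it’s that a default-open timer makes disclosure a property of the system, rather than a transaction that must be requested and fulfilled one file at a time.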

I have more ideas, but at its core we need a system rethink. ATIP is broken. You may not know it yet, but it is. The question is: what are we going to do before it goes over the cliff? Can we invent something new and better in time?