The Next International Open Data Hack Day – initial thoughts

Yesterday I got to meet up with Edward Ocampo-Gooding and Mary Beth Baker in Ottawa and we started talking about the next international open data hackathon: when might be a good time to do it, what it might look like, etc…

One idea is to set a theme that might help inspire people and serve as something to weave the events together in a stronger way. Edward proposed the theme of moms and, since Mother’s Day falls in May in many, many, many countries, it seemed like a nice suggestion.

It also has three nice benefits:

  • it gets us away from an exclusive focus on government and might get people in the headspace of creating applications with tangible uses – something almost everyone can relate to
  • many people have moms! so getting into the shoes of a mom and imagining what might be interesting, engaging and/or helpful shouldn’t be impossible
  • it might engage new people in the open data movement and in the local events

In addition, another suggestion that was raised is the idea of focusing on a few projects that have already been specced out in advance – much like Random Hacks of Kindness does with its hackathons. I think this could be fruitful to explore.

Finally, regarding timelines, I’m thinking May. It works thematically (if that theme gets used). More importantly, however, it’s far enough out to plan, near enough to be tangible, and sets a nice pace of two global hackathons a year – which feels sufficiently ambitious for a group of volunteers, doesn’t crowd out or compete with other hackathons or local events, and seems like a good check-in timeline for volunteer-driven projects. It might also give people a chance to use ScraperWiki in the interim to get data together for projects they want to work on.

Thoughts on all this? Please blog, post a comment below, or if you are feeling shy, drop me a note (david at eaves.ca or @daeaves with hashtag #odhd on twitter). I’ve also created a page on the Open Data Day wiki to discuss this if people are more comfortable with that.

What I’m doing at Code for America

For the last two weeks – and for much of January – I’ve been in San Francisco helping out with Code for America. What’s Code for America? Think Teach for America, but rather than deploying people into classrooms to help provide positive experiences for students and teachers while attempting to shift the culture of school districts, Code for America has fellows work with cities to help develop reusable code to save cities money, make local government as accessible as your favorite website, and help shift the government’s culture around technology.

The whole affair is powered by a group of 20 amazing fellows and an equally awesome staff that has been working for months to make it all come together. My role – in comparison – is relatively minor: I head up the Code for America Institute, a month-long educational program the fellows go through when they first arrive. I wanted to write about what I’ve been trying to do both because of the openness ideals of Code for America and to share any lessons for others who might attempt a similar effort.

First, to understand what I’m doing, you have to understand the goal. On the surface, to an outsider, the Code for America change process might look something like this:

  1. Get together some crazy talented computer programmers (hackers, if you want to make the government folks nervous)
  2. Unleash them on a partner city with a specific need
  3. Take resulting output and share across cities

Which, of course, would mistakenly frame the problem as technical. However, Code for America is not about technology. It’s about culture change. The goal is to rethink and reimagine government as better, faster, cheaper and more adaptive. It’s about helping government think of ways its culture can embrace being a platform: open and highly responsive.

I’m helping (I think) because I’ve enjoyed some success in getting governments to think differently. I’m not a computer developer and, at their core, these successes were never technology problems. The challenge is understanding how the system works, identifying the leverage points for making change, developing partners and collaborating to engage those leverage points, and doing whatever it takes to ensure it all comes together.

So this is the message and the concept the speakers are trying to impart to the fellows. Or, in other words, my job is to help unleash the already vibrant change agents within the 20 awesome fellows and make them effective in the government context.

So what have we done so far?

We’ve focused on three areas:

1) Understand Government: Some of the fellows are new to government, so we’ve had presentations from local government experts like Jay Nath, Ed Reiskin and Peter Koht, as well as the Mayor of Tucson’s chief of staff (to give a political perspective). And of course, Tim O’Reilly has spoken about how he thinks government must evolve in the 21st century. The goal: understand the system, and understand and respect the actors within that system.

2) Initiate & Influence: Whether it is launching your own business (Eric Ries on startups), starting a project (Luke Closs on Vantrash), understanding what happens when two cultures come together (Caterina Fake on Yahoo buying Flickr), or negotiating, influence and collaboration (myself), our main challenges will not be technical; they will be systems-based and social. If we are to build projects and systems that are successful and sustainable, we need to ask the right questions and engage with these systems respectfully as we try to shift them.

3) Plan & Focus: Finally, we’ve had experts in planning and organizing. People like Allen Gunn (Gunner) and the folks from Cooper Design, who’ve helped the fellows think about what they want, where they are going, and what they want to achieve. Know thyself, be prepared, have a plan.

The last two weeks will continue to pick up these themes but also give the fellows more time to (a) prepare for the work they will be doing with their partner cities; and (b) learn from one another. We’re halfway through the institute at this point and I’m hoping the experience has been a rich – if sometimes overwhelming – one. Hopefully I’ll have an update again at the end of the month.

Honourable Mention! The Mozilla Visualization Challenge Update

Really pleased to share that Diederik and I earned an honourable mention for our submission to the Mozilla Open Data Competition.

For those who missed it – and who find open data, open source and visualization interesting – you can read a description of, and see images from, our submission to the competition in this blog post I wrote a month ago.

A Response to an Ottawa Gov 2.0 Skeptic

So, many, many months ago, a commenter, Peter R., posted this comment (segments copied below) under a post I’d written titled: Prediction, The Digital Economy Strategy Will Fail if it Isn’t Drafted Collaboratively on GCPEDIA. At first blush Peter’s response felt aggressive. I flipped him an email to say hi and he responded in a very friendly manner. I’ve been meaning to respond for months but life’s been busy. However, over the break (and my quest to hit inbox zero) I finally carved out some time. My fear is that this late response will sound like a counterattack – it isn’t intended as such, but rather as an effort to respond to a genuine question. I thought it would be valuable to post since many of the points may resonate with supporters and detractors of Gov 2.0 alike. Here are my responses to the various charges:

The momentum, the energy and the excitement behind collaborative/networked/web 2.0/etc is only matched by the momentum, the energy and the excitement that was behind making money off of leveraged debt instruments in the US.

Agreed, there is a great deal of energy and excitement behind collaborative networks, although I don’t think it is sparked, as is insinuated, by something analogous to bogus debt instruments. People are excited because of the tangible results created by sharing and/or co-production networks: Wikipedia, Mozilla, Flickr, Google search and Google Translate (your results improve based on users’ data) and Ushahidi inspire people because of the tremendous results they are able to achieve with a smaller footprint of resources. The question of what these types of networks and this type of collaboration mean for government is an important one – and as people experiment there will be failures – but to equate the entire concept of Gov 2.0 and the above cited organizations, tools and websites with financial instruments that repackaged subprime mortgages is, in my mind, fairly problematic.

David, the onus lies squarely with you to prove that policymakers across government are incapable of finding good policy solutions WITHOUT letting everyone and his twitting brother chime in their two cents.

Actually, the onus doesn’t lie squarely with me. This is a silly statement. In fact, for those of us who believe in collaborative technologies such as GCPEDIA or Yammer, this sentence is the single most revealing point in Peter’s entire comment. I invite everyone and anyone to add to my rebuttal, or to Peter’s argument. Even those who argue against me would be proving my point – tools like blogs and GCPEDIA allow ideas and issues to be debated with a greater number of people and a wider set of perspectives. The whole notion that any thought or solution lies solely with one person is the type of thinking that leads to bad government (and pretty much bad anything). I personally believe that the best ideas emerge when they are debated and contested – honed by having flaws exposed and repaired. Moreover, this has never been more important than today, when more and more issues cross ministerial divides. Indeed, the very fact that we are having this discussion on my blog, and that Peter deemed it worthy of comment, is a powerful counterpoint to this statement.

Equally important, I never said policymakers across government are incapable of finding good policy solutions. This is a serious misinterpretation of what I said. I did say that the Digital Economy Strategy would fail (and I’ll happily revise, and soften that to say, will likely not be meaningful) unless written on GCPEDIA. I still believe this. I don’t believe you can have people writing policy about how to manage an economy who are outside of, and/or don’t understand, the tools of that economy. I actually think our public servants can find great policy solutions – if we let them. In fact, most public servants I know spend most of their time trying to track down public servants in other ministries or groups to consult them about the policy they are drafting. In short, they spend all their time trying to network, but using tools of the late 20th century (like email), mid 20th century (the telephone), or mid 3rd century BC (the meeting) to do it. I just want to give them more efficient tools – digital tools, like those we use in a digital economy – so they can do what they are already doing.

For the last 8 years I’ve worked in government, I can tell you with absolute certainty (and credibility!) that good policy emerges from sound research and strategic instrument choice. Often (select) public consultations are required, but sometimes none at all. Take 3 simple and highly successful policy applications: seat belts laws, carbon tax, banking regulation. Small groups of policymakers have developed policies (or laws, regs, etc) to brilliant effect….sans web 2.0. So why do we need gcpedia now?

Because the world expects you to do more, faster and with less. I find this logic deeply concerning coming from a public servant. No doubt government developed policies to brilliant effect before the wide adoption of the computer, or even the telephone. So should we get rid of those too? An increasing number of the world’s major corporations have set up, or are setting up, an internal wiki/collaboration platform, a social networking site, or even a microblogging service like Yammer to foster internal collaboration. These things help us do research and develop ideas faster and, I think, better. The question isn’t why we need GCPEDIA now. The question is why we aren’t investing to make GCPEDIA a better platform. The rest of the world is.

I’ll put this another way: tons of excellent policy solutions are waiting in the shared drives of bureaucrats across all governments.

I agree. Let’s at least put them on a wiki where more people can read them, leverage them and, hopefully, implement them. Sitting on a great idea that three other people in the entire public service have read isn’t a recipe for getting it adopted. Nor is it a good use of Canadian tax dollars. Socializing it is. Hence, social media.

Politics — being what it is — doesn’t generate progressive out solutions for various ideological reasons (applies equally to ndp, lib, con). First, tell us what a “failed” digitial economy strategy (DES) looks like. Second, tell us what components need to be included in the DES for it be successful. Third, show us why gcpedia/wikis offer the only viable means to accumulate the necessary policy ingredients.

For the last part – see my earlier post and above. As for what a failed digital economy strategy looks like – it will be one that is irrelevant, one that goes ignored by the majority of people who actually work in the digital economy. Of course, an irrelevant policy will be better than a truly bad one, which, I suspect, is also a real risk based on the proceedings of Canada 3.0 (that conference seemed to be about how to save analog businesses that are about to be destroyed by the digital economy). And of course I have other metrics that matter to me. That all said, after meeting the public servant in charge of the process at Canada 3.0, I was really, really encouraged – she is very smart and gets it.

She also found the idea of writing the policy on GCPEDIA intriguing. I have my doubts that that is how things are proceeding, but it gives me hope.

Canada's Secret Open Data Strategy?

Be prepared for the most boring sentence to an intriguing blog post.

The other night I was, as one is wont to do, reading through a random Organisation for Economic Co-operation and Development (OECD) report entitled Towards Recovery and Partnership with Citizens: The Call for Innovative and Open Government. The report is, in fact, a summary of the recent Ministerial Meeting of the OECD’s Public Governance Committee.

Naturally, I flipped to the section authored by Canada and, imagine the interest with which I read the following:

The Government of Canada currently makes a significant amount of open data available through various departmental websites. Fall 2010 will see the launch of a new portal to provide one-stop access to federal data sets by providing a “single-window” to government data. In addition to providing a common “front door” to government data, a searchable catalogue of available data, and one-touch data downloading, it will also encourage users to develop applications that re-use and combine government data to make it useful in new and unanticipated ways, creating new value for Canadians. Canada is also exploring the development of open data policies to regularise the publication of open data across government. The Government of Canada is also working on a strategy, with engagement and input from across the public service, developing short and longer-term strategies to fully incorporate Web 2.0 across the government.

In addition, Canada’s proactive disclosure initiatives represent an ongoing contribution to open and transparent government. These initiatives include the posting of travel and hospitality expenses, government contracts, and grants and contribution funding exceeding pre-set thresholds. Subsequent phases will involve the alignment of proactive disclosure activities with those of the Access to Information Act, which gives citizens the right to access information in federal government records.

Lots of interesting things are packed into these two paragraphs – something I’m sure readers concerned with open data, open government and proactive disclosure would agree with. So let’s look at the good, the bad and the ugly of all of this, in that order.

The Good

Naturally, the first sentence is debatable. I don’t think Canada makes a significant amount of its data available at all. Indeed, across every government website there are probably no more than 400 data sets available in machine-readable format. That’s less than the city of Washington, DC alone. It’s about (less than) 1% of what Britain or the United States disclose. But okay, let’s put that unfortunate fact aside.

The good and really interesting thing here is that the government stated it was going to launch an open data portal. This means the government is thinking seriously about open data. It means – in all likelihood – policies are being written, people are being consulted (internally), and processes are being thought through. This is good news.

It is equally good news that the government is developing a strategy for deploying Web 2.0 technologies across the government. I hope this will happen quickly, as I’m hearing that in many departments this is still not embraced and, quite often, is banned outright. Of course, using social media tools to talk with the public is actually the wrong focus (since the communications groups will own it all and likely not get it right for quite a while); the real hope is being allowed to use the tools internally.

The Bad

On the open data front, the bad is that the portal has not launched. We are now definitely past the fall of 2010 and, for whatever reason, there is no Canadian federal open data portal. This may mean that the policy (despite being announced publicly in the above document) is in peril, or simply delayed. Innumerable things can delay a project like this (especially on the open data front). Hopefully, whatever the problem is, it can be overcome. More importantly, let us hope the government does something sensible around licensing and uses the PDDL and not some other license.

The Ugly

Possibly the heart-stopping moment in this brief comes in the last paragraph, where the government talks about posting travel and hospitality expenses. While these are often posted (such as here), they are almost never published in machine-readable format and so have to be scraped in order to be organized, mashed up or compared to other departments. Worse still, these files are scattered across literally hundreds of government websites and so are virtually impossible to track down. This guy has done just that, but of course now that he has the data, it is more easily navigable but no more open than before. In addition, it takes him weeks (if not months) to do it – something the government could fix rather simply.
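To give a sense of the drudgery involved, here is a minimal sketch (standard library only) of the kind of scraping this forces on people: turning one department’s HTML expense table into machine-readable CSV. The HTML snippet and column names are hypothetical – the point is that this step has to be repeated, page by page, across hundreds of sites when the government could simply publish the CSV itself.

```python
# A minimal sketch of scraping an HTML expense-disclosure table into CSV.
# The sample HTML and its column names are hypothetical.
import csv
import io
from html.parser import HTMLParser

class ExpenseTableParser(HTMLParser):
    """Collects the text of each <td>/<th> cell, row by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._row.append("")

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._in_cell = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data.strip()

def table_to_csv(html: str) -> str:
    """Extract the first table's cells and serialize them as CSV."""
    parser = ExpenseTableParser()
    parser.feed(html)
    buf = io.StringIO()
    csv.writer(buf).writerows(parser.rows)
    return buf.getvalue()

sample = """<table>
<tr><th>Name</th><th>Purpose</th><th>Total</th></tr>
<tr><td>J. Smith</td><td>Conference travel</td><td>$1,234.56</td></tr>
</table>"""
print(table_to_csv(sample))
```

And that is the easy case: a clean table on one page. Multiply by hundreds of sites, inconsistent layouts and the occasional PDF, and “weeks if not months” starts to sound about right.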

The government should be lauded for trying to make this information public. But if this is their notion of proactive disclosure and open data, then we are in for a bumpy, ugly ride.

Canada ranks last in freedom of information

For those who missed it over the weekend, it turns out Canada ranks last in a freedom of information study that looked at the world’s western parliamentary democracies. What makes it all the more astounding is that a decade ago Canada was considered a leader.

Consider two quotes from Information Commissioner Suzanne Legault pulled from the piece:

Only about 16 per cent of the 35,000 requests filed last year resulted in the full disclosure of information, compared with 40 per cent a decade ago, she noted.

And delays in the release of records continue to grow, with just 56 per cent of requests completed in the legislated 30-day period last year, compared with almost 70 per cent at the start of the decade.

These are appalling numbers.

The sad thing is… don’t expect things to get better. Why?

Firstly, the current government seems completely uninterested in access to information, transparency and proactive disclosure, despite these being core planks of its election platform and core values of the reform movement that re-launched Canadian conservatism. Indeed, reforming and improving access to information is the only unfulfilled original campaign promise of the Conservatives – and there appears to be no interest in touching it. Quite the opposite: that political staff now intervene to block and restrict Access to Information requests – contravening the legislation and policy – is a well known and documented fact.

Second, this issue is of secondary importance to the public. While everyone will say they care about access to information and open government, the number of people who truly do (while growing) still remains small. This isn’t to say these types of reports and issues don’t matter. They do – but generally after something bigger and nastier has come to light and the public begins to smell rot. Then studies like this become the type of thing that hurts a government – they give legitimacy and language to a sentiment people widely feel.

Third, the public seems confused about whom they distrust more. The fact is, however bad the current government is on this issue, the Liberal brand is still badly tarnished on the issue of transparent government due to the scandals from almost a decade ago. Sadly, this means there will be less pressure on this government to act since – every time the issue of transparency and open government arises – rather than act, government leaders simply point out the other parties’ failings.

So the world moves on while Canada remains stuck, its government becoming more opaque, distant and less accountable to the people that elect it.

Interestingly, this also has a real cost to Canada’s influence in the world. It means something when the world turns to you as an expert – as we once were on access to information: ministers are consulted by other world leaders, your public servants are given access to information loops they might otherwise not know about, and there is a general respect, a soft power, that comes from being an acknowledged leader. Today, this is gone.

Indeed, it is worth noting that of the countries surveyed in the above mentioned study, only Canada and Ireland do not have open data portals which allow for proactive disclosure.

It’s a sign of the times.

One Simple Step to Figure out if your Government "gets" Information Technology

Chris Moore has a good post up on his blog at the moment that asks “Will Canadian Cities ever be Strategic?” In it (and it is very much worth reading) he hits on a theme I’ve focused on in many of my talks to government, but one that I think is also relevant to citizens who care about how technologically sophisticated their government is (which is a metric of how relevant you think your government is going to be in a few years).

If you want to know how serious your government or ministry is about technology there is a first simple step you can take: look at the org chart.

Locate the Chief Information Officer (CIO) or Chief Technology Officer (CTO). Simple question: Is he or she part of the senior executive team?

If yes – there is hope that your government (or the ministry you are looking at) may have some strategic vision for IT.

If no – and this would put you in the bucket with about 80% of local governments, and provincial/federal ministries – then your government almost certainly does not have a strong vision, and it isn’t going to be getting one any time soon.

As Chris also notes, and as I’ve been banging away at (such as during my keynote to MISA Ontario), in most governments in Canada the CIO/CTO does not sit at the executive table. At the federal level they are frequently Directors General (or lower), not Assistant Deputy Minister level roles. At the local level, they often report to someone at the C-level.

This is insanity.

I’m trying to think of a Fortune 500 company – particularly one which operates in the knowledge economy – with this type of reporting structure, and I can’t. The business of government is about managing information… to better regulate, manage resources and/or deliver services. You can’t be talking about how to do that without the CIO/CTO being part of that conversation.

But that’s what happens almost every single day in many government orgs.

Sadly, it gets worse.

In most organizations the CIO/CTO reports to the Chief Financial Officer (CFO). This really tells you what the organization thinks of technology: it is a cost centre that needs to be contained.

We aren’t going to reinvent government when the person in charge of the infrastructure – upon which most of the work is done, the services are delivered, and pretty much everything else the org does depends – isn’t even part of the management team or part of the organization’s strategic conversations.

It’s a sad state of affairs and indicative of why our governments are so slow in engaging with new technology.

Most Popular Eaves.ca Posts of 2010

Some people have asked me: what were the 10 most viewed posts from last year? Well, here are the posts written last year, in order of popularity (excluding static pages and the homepage):

  1. Case Study: How Open data saved Canada $3.2 Billion
  2. Learning from Libraries: The Literacy Challenge of Open Data
  3. Why Old Media and Social Media Don’t Get Along
  4. Let’s do an International Open Data Hackathon
  5. When Police Lie
  6. UK Adopts Open Government License for everything: Why it’s good and what it means
  7. Visualizing Firefox Plugins Memory Consumption
  8. Wikileaks and the coming conflict between closed and open
  9. What Munir’s Resignation means to Public Servants
  10. Minister Moore and the Myth of Market Forces

Here are the 20 most viewed posts in 2010, including posts written in previous years (and including static pages):

  1. My Homepage
  2. Fatness Index – Canada vs. United States
  3. About David
  4. Case Study: How Open data saved Canada $3.2 Billion
  5. Learning from Libraries: The Literacy Challenge of Open Data
  6. Why Old Media and Social Media Don’t Get Along
  7. The other reason young people don’t vote – or why I didn’t vote yesterday
  8. WestJet vs. Air Canada
  9. R2D2 – in the right place at the right time for a reason.
  10. Concerns from Beyond the West: The dangers one-member, one-vote
  11. Banned Blogs
  12. Let’s do an International Open Data Hackathon
  13. When Police Lie
  14. Save the Census Coalition
  15. UK Adopts Open Government License for everything: Why it’s good and what it means
  16. How GCPEDIA will save the public service
  17. Firefox 3 pledge map vs. the Pentagon’s new map
  18. Speeches
  19. The Supreme Court of Canada: There are no journalists, only citizens
  20. The Three Laws of Open Government Data

At some point I’ll write a piece about my favourite posts from 2010…

What I love about the latter list is how many old posts are in there. Some just keep logging hits year after year. The Fatness Index has links to it all over the internet, so there is never a day when it doesn’t get at least a few hits. I also think it is hilarious that the R2D2 post is always so strong since it really just links to a great post on another blog – Google just really likes putting it up high on those searches.

What I really like though is the mix: there are pieces on GCPEDIA, open data, the media, Firefox and open source, and politics. A great mix.

Making StatsCan Data Free: Assessing the Cost

Regular readers of my blog will know that I’ve advocated that StatsCan’s data – and particularly its census data – should be made open (e.g. free, unlicensed, and downloadable in multiple formats). Presently, despite the fact that Canadian tax dollars pay to collect the data (a sadly diminishing amount, and quality, of it), it is not open.

The main defense I hear for why StatsCan’s data should not be free is that the department depends on the revenue the data generates.

So exactly how much revenue are we talking about? Thanks to the help of some former public servants, I’ve been able to go over the publicly available numbers. The basic assessment – which I encourage people to verify and challenge – turns out not to be a huge number.

The most interesting figure in StatsCan’s finances is the revenue it generates from its online database (e.g. data downloaded from its website). So how much revenue is it? Well in 2007/2008, it was $559,000.

That’s it. For $559,000 in lost government revenue Canadians could potentially have unlimited access to the Statscan census database their tax dollars paid to collect and organize. I suspect this is a tiny fraction of the value (and tax revenue) that might be generated by economic activity if this data were free.

Worse, the $559,000 is not profit. From what I can tell it is only revenue. Consequently, it doesn’t factor in the collection costs StatsCan has to absorb to run and maintain a checkout system on its website, collect credit card info, bill people, etc… I’m willing to bet almost anything that the cost of these functions either exceeds $559,000 a year, or comes pretty close. So the net cost of making the data free could end up being even less.

StatsCan makes another $763,000 selling Statistics Canada publications (these are 243 data releases of the 29 major economic indicators StatsCan measures and the 5 census releases it does annually – in short, non-customized reports). So for $1,322,000 Canadians could get access to both the online data StatsCan has and the reports the organization generates. This is such a laughably (or depressingly) small number it raises the question: why are we debating this? (Again, this is revenue, not profit, so the cost could be much lower.)
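The tally here is simple enough that anyone can check it – both figures come straight from the 2007/08 numbers cited above:

```python
# Sanity check of the 2007/08 StatsCan figures cited above.
online_database_revenue = 559_000  # data downloaded from the website
publications_revenue = 763_000     # non-customized reports and releases
total = online_database_revenue + publications_revenue
print(f"${total:,}")  # → $1,322,000
```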

Of course, the figure you’ll often hear cited is $100M in revenue. So what accounts for the roughly 100x difference between the above numbers and the alleged revenue? Well, in 2007/08 StatsCan did make $103,155,000, but this was from value-added (e.g. customized) reports. This is a very, very different product than the basic data that is available on its website. My sources tell me this is not related to downloaded data.

I think we should concede that if StatsCan’s entire database were made open and free, it would impact some of this revenue. But this would also be a good thing. Why? Let’s break it down:

  1. Increase Capacity and Data Literacy: By making a great deal of data open and free, StatsCan would make it easier for competitors to enter the marketplace. More companies and individuals could analyze the country’s census and other data, and more “ordinary” Canadians than ever would be able to access the database (again, one that their tax dollars paid to create). This might include groups like senior high school and university students, non-profits and everyday citizens who want to know more about their country. So yes, StatsCan would have more competitors, but the country might also benefit from having a more data-literate population (and thus potential consumers).
  2. Increase Accessibility of Canadian Data to Marginalized Groups: An increase in the country’s analysis capacity would drop the price of such work. This would make it cheaper and easier for marginalized groups to benefit from this data – charities, religious groups, NGOs, community organizations, individuals, etc…
  3. Improve Competitiveness: It would also be good for Canadian competitiveness: companies would have to spend less to understand and sell into the Canadian market. This would lower the cost of doing business in Canada – helpful to consumers and the Canadian economy.
  4. StatsCan would not lose all or even most of its business: Those at StatsCan who fear the organization would be overwhelmed by a more open world should remember that not all the data can be shared. Some data – particularly economic data gathered from companies – is sensitive and confidential. As a result there will be some data that StatsCan retains exclusive access to, and thus a monopoly over its analysis. More importantly, I suspect that were StatsCan data made open, the demand for data analysis would grow, so arguably new capacity might end up being devoted to new demand, not existing demand.
  5. It will Reduce the Cost of Government: Finally, the crazy thing about StatsCan is that it sells its data and services to other ministries and layers of government. This means that governments are paying people to move taxpayer money between government ministries and jurisdictions. This is a needless administrative cost that drives up everybody’s taxes and poorly allocates scarce government resources (especially at the local level). Assuming every town and city in Canada pays $50–$1,000 to access StatsCan data may not seem like much, but in reality we are paying that, plus their and StatsCan’s staff time to manage all these transactions, enforce compliance, etc… – all of which probably amounts to far, far more.
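To make the back-of-envelope math in point 5 concrete, here is a minimal sketch. Every figure in it – the number of municipalities, the average licence fee, the per-transaction staff overhead – is an illustrative assumption, not a StatsCan number:

```python
# Rough illustration of the hidden cost of governments selling data to
# each other. ALL figures below are hypothetical assumptions.

num_municipalities = 3700       # approx. number of Canadian municipalities (assumption)
avg_license_fee = 500           # midpoint of the $50-$1,000 range mentioned above
admin_overhead_per_deal = 300   # staff time on both sides to invoice, pay,
                                # and enforce compliance (assumption)

# Money simply shuffled from one public purse to another
direct_fees = num_municipalities * avg_license_fee

# Dead-weight administrative cost that nobody gets back
overhead = num_municipalities * admin_overhead_per_deal

print(f"Direct fees moved between governments: ${direct_fees:,}")
print(f"Administrative overhead on top:        ${overhead:,}")
print(f"Total annual cost of charging ourselves: ${direct_fees + overhead:,}")
```

Even with these modest guesses, the overhead of administering the transactions is of the same order as the fees themselves – which is the point: the fees are a transfer, but the overhead is pure loss.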

So in summary, the cost to Canada of releasing this data will likely be pretty marginal, while the benefits could be enormous.

At best, it costs half a million dollars in forgone revenue. Given the improved access and enormous benefits, this is a pittance to pay.

At worst, StatsCan would lose maybe $20–30 million – this is a real nightmare scenario that assumes much greater competition in the marketplace (again, a lot of assumptions in this scenario). Of course the improved access to data would lead to economic benefits that would far, far surpass this lost revenue, so the net benefit for the country would be big, but the cost to StatsCan would be real. Obviously, it would be nice if this decline in revenue were offset by improved funding for StatsCan (something a government that was genuinely concerned about Canadian economic competitiveness would jump at doing). However, given the current struggles StatsCan faces on the revenue front (cuts across the board) I can see how a worst-case scenario would be nerve-wracking to the department’s senior public servants, who are also still reeling from the Long Form Census debacle.

Ultimately, however, I think the worst-case scenario is unlikely. Moreover, in either scenario the benefits are significant.

Bonus Material:

Possibly the most disconcerting part of the financial reports on StatsCan on Treasury Board’s website was the stakeholder consultation associated with access to StatsCan’s database. It claimed that:

Usability and client satisfaction survey were conducted with a sample of clients in early 2005. Declared level of satisfaction with service was very high.

This is stunning. I’ve never talked to anyone who has had a satisfactory experience on StatsCan’s website (in contrast to their phone support – which everyone loves). I refer to the StatsCan site as the place where what you want is always just one more click away.

I’m willing to bet a great deal that the consultations were with existing long-term customers – the type of people who have experience using the website. My suspicion is that if a broader consultation were conducted with potential users (university students, community groups, people like you and me, etc…) the numbers would tank. I dare you to try to use their website. It is virtually unnavigable.

Indeed, had StatsCan made its website and data more accessible, I suspect the department would have engaged more Canadians and had more stakeholders. This would have been the single most powerful thing it could have done to protect itself from cuts and decisions like the Long Form fiasco.

I know this post may anger a number of people at StatsCan. I’m genuinely sorry. I know the staff work hard, are dedicated and are exceedingly skilled and professional. This type of feedback is never flattering – particularly in public. It is because you are so important to the unity, economy and quality of life of our country that it is imperative we hold you to the highest possible bar – not just in the quality of the data you collect (there you already excel) but in the way you serve and engage Canadians. In this, I hope that you get the support you need and deserve.

Hello 2011! Coding for America, speaking, open data and licenses

Eaves.ca readers – happy new year! Here’s a bit of an overview of how we are kicking off 2011.

Thank you

First, if you are a regular reader…. thank you. Eaves.ca has just entered its 4th year and it keeps getting more and more rewarding. I write to organize my thoughts and refine my writing skills, but it is always most rewarding to see others benefit from, or find value in, these posts.

Code for America

I’ve just landed in San Francisco (literally! I’m sitting on the floor of the airport, catching the free SFO wifi) where I’ll be spending virtually all of January volunteering to help launch Code for America. Why? Because I think the organization matters, its projects are important and the people are great. I’ve always enjoyed hanging out with technologists with a positive agenda for change (think Mozilla & OpenMRS) and Code for America takes all that fun and combines it with government – one of my other great passions. I hope to update you on the progress the organization makes and what will be happening over the coming weeks. And yes, I am thinking about how Code for Canada might fit into all this.

Gov 2.0

I’ll also be in Ottawa twice in January. My first trip out is to present a paper about how to write collaboratively using a wiki. With a little bit of work, I’ll be repositioning this paper to make it about how to draft public policy on a wiki within government (think GCPEDIA). With luck I’ll publish this in something like Policy Options or something similar (maybe US-based?). I think this has the potential of being one of my most important pieces of the year and, needless to say, I’m excited and will be grateful for feedback, both positive and negative.

Open Data and Government

On my second trip to Ottawa I’ll be presenting on Open Data and Open Government to the Standing Committee on Access to Information, Privacy and Ethics. Obviously, I’m honored and thrilled they’ve asked me to come and talk and look forward to helping parliamentarians understand why this issue is so important and how they could make serious progress in short order if they put their minds to the task.

Licenses

So for the last two years we’ve been working hard to get cities to do open data, with significant success. This year may be the year that the Feds (more on that in a later post) and some provinces get in on the game, as well as a larger group of cities. The goal for them will be to build on, and take to the next level, the successes of first movers like Vancouver, Edmonton, Toronto and Ottawa. This will mean one thing: doing the licensing better. The Vancouver license, which has been widely copied, was a good starting point for governments venturing into unknown territory (e.g. getting their toes wet). But the license has several problems – and there are several significantly better choices out there (I’m looking over at you, PDDL – which I notice the City of Surrey has adopted; nice work). So I think one big goal for 2011 will be to get governments to begin shifting to (ideally) the PDDL or (if necessary) something equivalent. On this front I’m feeling optimistic as well and will blog on this in the near future.

Lots of other exciting things going on as well – I look forward to sharing them here in the blog soon.

All in all, it’s hard not to be excited about 2011, and I hope you are too. Thank you so much for being part of all of this.