Category Archives: cool links

Canada's Digital Economy Strategy: Two quick actions you can take

For those interested – or better still, up till now uninterested – in Canada’s digital economy strategy, I wanted to write a quick post about some things you can do to help ensure the country moves in the right direction.

First, there are a few proposals on the digital economy strategy consultation website that could do with your vote. If you have time I encourage you to go and read them and, if swayed, to vote for them. They include:

  • Open Access to Canada’s Public Sector Information and Data – Essentially calling for open data at the federal level
  • Government Use and Participation in Open Source – A call for government to save taxpayers money by engaging with and leveraging the opportunity of open source software
  • Improved access to publicly-funded data – I’m actually on the fence on this one. I agree that data from publicly funded research should be made available; however, this is not open government data and I fear that the government will adopt this recommendation and then claim that it does “open data” as the UK and the US have. Such a claim would, in fact, fall far, far short of the mark. Indeed, the first option above is broader and encompasses this recommendation.

Second, go read Michael Geist’s piece Opening Up Canada’s Digital Economy Strategy. It is bang on and I hope to write something shortly that builds upon it.

Finally, and this is on a completely different tack, but if you are up for “clicking your mouse for change,” please also consider joining the Facebook group I recently created that encourages people to opt out of receiving the yellow pages. It gives instructions on what to do and, the more people who join, the bigger the message it sends to Yellow Pages – and the people who advertise in them – that this wasteful medium is no longer of interest to consumers (and never gets used anyways).

ChangeCamp Vancouver, GovCamp Toronto & Open Data Hackathon

For those in Vancouver, ChangeCamp will be taking place Saturday at the W2 Storyeum on 151 W Cordova. You can register here and propose sessions in advance here. I know I’ll be there and I am looking forward to hearing about interesting local projects and trying to find ways to contribute to them.

I’ll probably submit a brainstorming session on datadotgc.ca – there are some exciting developments in the work around the website I’d like to test out on an audience. I’d also be interested in a session that asks people about apps they’d like to create using open data. It will be interesting to get a better sense of additional data sets people would like to request from the city.

Out in Toronto, I’ll be speaking at GovCamp Toronto on June 17th at the Toronto Reference Library. I’m not sure how registration is going to work but I would keep an eye on this page if you are interested.

Finally, on the 17th, SAP will be hosting an open data hackathon for developers in Vancouver. A great opportunity to come out and work on projects related to Apps 4 Climate Action or open data projects using City of Vancouver data (or both!). I was really impressed to hear that in Ottawa 130 people came to the first open data hackathon – I would love to help foster a community like that here in Vancouver. You can RSVP for this event here.

Hope to see you at these events!

Apps for Climate Action Update – Lessons and some new sexy data

Okay, so I’ll be the first to say that the Apps4Climate Action data catalog has not always been the easiest to navigate and some of the data sets have not been machine readable, or even data at all.

That however, is starting to change.

Indeed, the good news is three fold.

First, the data catalog has been tweaked: it now has better search and an improved capacity to sort out non-machine-readable data sets. A great example of a government starting to think like the web, iterating and learning as the program progresses.

Second, and more importantly, new and better sets are starting to be added to the catalog. Most recently the Community Energy and Emissions Inventories were released in an excel format. This data shows carbon emissions for all sorts of activities and infrastructure at a very granular level. Want to compare the GHG emissions of a duplex in Vancouver versus a duplex in Prince George? Now you can.

Moreover, this is the first time any government has released this type of data at all, not to mention making it machine readable. So not only have the app possibilities (how green is your neighborhood, rate my city, calculate my GHG emissions) all become much more realizable, but any app using this data will be among the first in the world.
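The kind of comparison described above can be sketched in a few lines of Python. To be clear, the column names and figures below are invented stand-ins for illustration, not the actual CEEI schema – anyone working with the real spreadsheet would adapt this to its headers:

```python
# Hypothetical sketch: comparing building emissions across communities.
# The column names and numbers are invented; the real CEEI release is an
# Excel file whose schema will differ.
import csv
import io

# Stand-in for rows exported from the released inventory
ceei_csv = """community,building_type,tonnes_co2e
Vancouver,duplex,4.2
Prince George,duplex,6.8
Vancouver,single_detached,5.9
"""

rows = list(csv.DictReader(io.StringIO(ceei_csv)))

def emissions(community, building_type):
    """Total tonnes of CO2e for one building type in one community."""
    return sum(
        float(r["tonnes_co2e"])
        for r in rows
        if r["community"] == community and r["building_type"] == building_type
    )

# A duplex in Vancouver versus a duplex in Prince George
print(emissions("Vancouver", "duplex"))      # 4.2
print(emissions("Prince George", "duplex"))  # 6.8
```

An app like “how green is your neighborhood” is essentially this lookup wrapped in a map interface.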

Finally, probably one of the most positive outcomes of the app competition to date is largely hidden from the public. The fact that members of the public have been asking for better data or even for data sets at all(!) has made a number of public servants realize the value of making this information public.

Prior to the competition, making data public was a compliance problem: something you did while figuring no one would ever look at or read it. Now, for a growing number of public servants, it is an innovation opportunity. Someone may take what the government produces and do something interesting with it. Even if they don’t, someone is nonetheless taking an interest in your work – something that has rewards in and of itself. This, of course, doesn’t mean that things will improve overnight, but it does help advance the goal of getting government to share more machine-readable data.

Better still, the government is reaching out to stakeholders in the development community and soliciting advice on how to improve the site and the program, all in a cost-effective manner.

So even within the Apps4Climate Action project we see some of the changes the promise of Government 2.0 holds for us:

  • Feedback from community participants driving the project to adapt
  • Iterations of development conducted “on the fly” during a project or program
  • Successes and failures resulting in quick improvements (release of more data, a better website)
  • Shifting culture around disclosure and cross sector innovation
  • All on a timeline that can be measured in weeks

Once this project is over I’ll write more on it, but wanted to update people, especially given some of the new data sets that have become available.

And if you are a developer or someone who would like to do a cool visualization with the data, check out the Apps4Climate Action website or drop me an email, happy to talk you through your idea.

Saving Millions: Why Cities should Fork the Kuali Foundation

For those interested in my writing on open source, municipal issues and technology, I want to be blunt: I consider this to be one of the most important posts I’ll write this year.

A few months ago I wrote an article and blog post about “Muniforge,” an idea based on a speech I’d given at a conference in 2009 in which I advocated that cities with common needs should band together and co-develop software to reduce procurement costs and better meet requirements. I continued to believe in the idea, but recognized that cultural barriers would likely make it difficult to realize.

Last month that all changed. While at Northern Voice I ended up talking to Jens Haeusser, an IT strategist at the University of British Columbia, and confirmed something I’d long suspected: that some people much smarter than me had already had the same idea and had made it a reality… not among cities but among academic institutions.

The result? The Kuali foundation. “…A growing community of universities, colleges, businesses, and other organizations that have partnered to build and sustain open-source administrative software for higher education, by higher education.”

In other words for the past 5 years over 35 universities in the United States, Canada, Australia and South Africa have been successfully co-developing software.

For cities everywhere interested in controlling spending or reducing costs, this should be an earth shattering revelation – a wake up call – for several reasons:

  • First, a viable working model for muniforge has existed for 5 years and has been a demonstrable success, both in creating high quality software and in saving the participating institutions significant money. Devising a methodology to calculate how much a city could save by co-developing software with an open source license is probably very, very easy.
  • Second, what is also great about universities is that they suffer from many of the challenges of cities. Both have conservative bureaucracies, limited budgets, and significant legacy systems. In addition, neither has IT as a core competency, and both are frequently concerned with licenses, liability and the “owning” of intellectual property.
  • Third, and this leads to possibly the best part: the Kuali Foundation has already addressed all the critical obstacles to such an endeavour, developing the licensing agreements, policies, decision-making structures, and workflow processes necessary for success. Moreover, all of this legal, policy and work infrastructure is itself available to be copied. For free. Right now.
  • Fourth, the Kuali foundation is not a bunch of free-software hippies that depend on the kindness of strangers to patch their software (a stereotype that really must end). Quite the opposite. The Kuali foundation has helped spawn 10 different companies that specialize in implementing and supporting (through SLAs) the software the foundation develops. In other words, the universities have created a group of competing firms dedicated to serving their niche market. Think about that. Rather than deal with vendors who specialize in serving large multinationals and who’ve tweaked their software to (somewhat) work for cities, the foundation has fostered competing service providers (to say it again) within the higher education niche.

As a result, I believe a group of forward-thinking cities – perhaps starting with those in North America – should fork the Kuali Foundation. That is, they should copy Kuali’s bylaws, its structure, its licenses and pretty much everything else – possibly even the source code for some of its projects – and create a Kuali for cities. Call it Muniforge, or Communiforge or CivicHub or whatever… but create it.

We can radically reduce the cost of software to cities, improve support by creating the right market incentives to foster companies whose interests are directly aligned with cities, and create better software that meets cities’ unique needs. The question is… will we? All that is required is for CIOs to begin networking and for a few to discover some common needs. One idea I have immediately: the City of Nanaimo could apply the Kuali modified Apache license to the council monitoring software package it developed in-house and upload it to GitHub. That would be a great start – one that could collectively save cities millions.

If you are a city CIO/CTO/Technology Director and are interested in this idea, please check out these links:

The Kuali Foundation homepage

Open Source Collaboration in Higher Education: Guidelines and Report of the Licensing and Policy Framework Summit for Software Sharing in Higher Education by Brad Wheeler and Daniel Greenstein (key architects behind Kuali)

Open Source 2010: Reflections on 2007 by Brad Wheeler (a must read, lots of great tips in here)

Heck, I suggest looking at all of Brad Wheeler’s articles and presentations.

Another overview article on Kuali by University Business

Phillip Ashlock of Open Plans has an overview article of where some cities are heading re open source.

And again, my original article on Muniforge.

If you aren’t already, consider reading the OpenSF blog – these guys are leaders and one way or another will be part of the mix.

Also, if you’re on twitter, consider following Jay Nath and Philip Ashlock.

Gov 2.0 Expo Ignite talk on Open Data as an old idea

First, sorry for the scant blogging this week. Six weeks into a 10-week travel marathon that sees me crossing the continent 9 times, the long days have finally caught up with me. Also, I’ve been at the O’Reilly Media Gov 2.0 Expo in Washington DC, which has been fantastic. Really, a sense of coming home. It has also been great to meet so many people I’ve corresponded with, admired and/or whose work I’ve simply followed closely. A great moment was spending an hour with Tim Berners-Lee in the speakers lounge talking about open data, and meeting up with the Sunlight Foundation team at their offices in DC. Real inspirations, all of them.

Tomorrow I’m giving a talk at the Federation of Canadian Municipalities on open data and the opportunity of open source software, so I’ve been busy there too, getting my thoughts together. I consider this one of the most important audiences I’ll speak to this year: this is a group that could transform how cities work in Canada – so I’m looking forward to it.

In the meantime, for those who were not at the Gov 2.0 Expo, I’ve pasted a clip of my talk below. It’s an ignite talk, which means it lasts 5 minutes; I had to give them 20 slides in advance and the slides move forward every 15 seconds (I’m not controlling them!).

But then… forget my talk! There were a bunch of other (more) fantastic talks that can be found on the Gov 2.0 YouTube page here. I strongly encourage you to check them out!

Articles I'm digesting – 25/5/2010

Been a while since I’ve done one of these. A couple of good ones ranging over the last few months. Big thank-yous to those who sent me these pieces. Always enjoy.

The Meaning of Open by Jonathan Rosenberg, Senior Vice President, Product Management

Went back and re-read this. Every company makes mistakes and Google is no exception (privacy settings on Buzz being everyone’s favourite), but this statement, along with Google’s DataLiberation.org (which, unlike Facebook, is designed to ensure you can extract your information from Google’s services), shows why Google enjoys greater confidence than Facebook, Apple or any number of its competitors. If you’re in government, the private sector or the non-profit sector, read this post. This is how successful 21st-century organizations think.

Local Governments Offer Data to Software Tinkerers by Claire Cain Miller (via David Nauman & Andrew Medd)

Another oldie (December 2009 is old?) but a goodie. Describes a little bit of the emerging eco-system for open local government data along with some of the tensions it is creating. Best head in the sand line:

Paul J. Browne, a deputy commissioner of the New York City Police Department, said it releases information about individual accidents to journalists and others who request it, but would not provide software developers with a regularly updated feed. “We provide public information, not data flow for entrepreneurs,” he said.

So… if I understand correctly, the NYPD will only give data to people who ask, and prefers to tie up valuable resources filling individual requests rather than just provide a constant feed that anyone can use. Got it. Uh, and just for the record, those “entrepreneurs” are the next generation of journalists and the people who will make the public information useful. The NYPD’s “public information” is effectively useless, much like what my home town police department offers. Does anyone actually look at PDFs and pictures of crimes? That you can only get on a weekly basis? Really? In an era of spreadsheets and Google Maps… no.

Didacticism in Game Design by Clint Hocking (via Lauren Bacon)

eaves.ca readers, meet Clint Hocking. My main sadness in introducing you is that you’ll discover how a truly fantastic, smart blog reads. The only good news for me is that you are hopefully more interested in public policy, open source and the things I dwell on than video games, so Clint won’t steal you all away. Just many of you.

A dash of a long post that is worth reading:

As McLuhan says: the medium is the message. When canned, discrete moral choices are rendered in games with such simplicity and lack of humanity, the message we are sending is not the message specific to the content in question (the message in the canned content might be quite beautiful – but it’s not a ludic message) – it is the message inherent in the form in which we’ve presented it: it effectively says that ‘being moral is easy and only takes a moment out of each hour’. To me, this is almost the opposite of the deeper appreciation of humanity we might aim to engender in our audience.

Clint takes video games seriously. And so should you.

The Analytic Mode by David Brooks (via David Brock)

These four lines alone make this piece worth reading. Great lessons for students of policy and politics:

  • The first fiction was that government is a contest between truth and error. In reality, government is usually a contest between competing, unequal truths.
  • The second fiction was that to support a policy is to make it happen. In fact, in government power is exercised through other people. It is only by coaxing, prodding and compromise that presidents [or anyone!] actually get anything done.
  • The third fiction was that we can begin the world anew. In fact, all problems and policies have already been worked by a thousand hands and the clay is mostly dry. Presidents are compelled to work with the material they have before them.
  • The fourth fiction was that leaders know the path ahead. In fact, they have general goals, but the way ahead is pathless and everything is shrouded by uncertainty.

The case against non-profit news sites by Bill Wyman (via Andrew Potter)

Yes, much better that news organizations be beholden to a rich elite than to paying readers… Finally someone takes on the idea that a bunch of enlightened rich people or, better, rich corporate donors are going to save “the news.” Sometimes it feels like media organizations are willing to do anything they can to avoid actually having to deal with paying customers. Be it relying on advertisers or on rich people to subsidize them, anything appears to be better than actually fighting for customers.

That’s what I love about Demand Media. Some people decry them as creating tons of cheap content, but at least they looked at the market place and said: This is a business model that will work. Moreover, they are responding to a real customer demand – searches in google.

Wyman’s piece also serves as a good counterpoint to the recent Walrus advertising campaign, which essentially boiled down to: Canada needs the Walrus and so you should support it. The danger here is that people at the Walrus believe this line: that they are of value and essential to Canada even if no one (or very few people) bought or read them. I think people should buy The Walrus not because it would be good for the country but because it is engaging, informative and interesting to Canadians (or citizens of any country). I think the Walrus can have great stories (Gary Stephen Ross’s piece A Tale of Two Cities is a case in point), but if you have a 1-year lead time for an article, that’s going to be hard to pull off in the internet era, foundation or no foundation. I hope the Walrus stays with us, but Wyman’s article serves up some arguments worth contemplating.

Open Government interview and panel on TVO's The Agenda with Steve Paikin

My interview on TVO’s The Agenda with Steve Paikin has been uploaded to YouTube (BTW, it is fantastic that The Agenda has a YouTube channel where it posts all its interviews. Kudos!). If you live outside Ontario, or were wrapped up in the Senators-Pens playoff game that was on at the same time (which obviously we destroyed in the ratings), I thought I’d throw it up here as a post in case it is of interest. The first clip is a one-on-one interview between myself and Paikin. The second clip is the discussion panel that occurred afterward with myself, senior editor of Reason magazine Katherine Mangu-Ward, American Prospect executive editor Mark Schmitt and the Sunlight Foundation’s Policy Director John Wonderlich.

Hope you enjoy!

One on one interview with Paikin:

Panel Discussion:

Why Old Media and Social Media Don't Get Along

Earlier today I did a brief drop-in phone interview on CPAC’s Goldhawk Live. The topic was “Have social media and technology changed the way Canadians get news?” and Christopher Waddell, Director of Carleton University’s School of Journalism, and Chris Dornan, Director of Carleton University’s Arthur Kroeger School of Public Affairs, were Goldhawk’s panel of experts.

Watching the program prior to being brought in, I couldn’t help but feel I live on a different planet from many who talk about the media. Ultimately, the debate was characterized by a reactive, negative view on the part of the mainstream media supporters. To them, threats are everywhere. The future is bleak, and everything, especially democratic institutions and civilization itself, teeters on the edge. Meanwhile, social media advocates such as myself are characterized as delusional techno-utopians. Nothing, of course, could be further from the truth. Indeed, both sides share a lot in common. What distinguishes us, though, is that while traditionalists are doom and gloom, we are almost defined by a sense of the possible. New things, new ideas, new approaches are becoming available every day. Yes, there will be new problems, but there will also be new possibilities and, at least, we can invent and innovate.

I’m just soooooo tired of the doom and gloom. It really makes one want to give up on the main stream media (like many, many, many people under 30 have). But, we can’t. We’ve got to save these guys from themselves – the institutions and the brands matter (I think). So, in that pursuit, let’s tackle the beast head on, again.

Last night, the worst offender was Goldhawk, who tapped into every myth that surrounds this debate. Let’s review them one by one.

Myth 1: The average blog is not very good – so how can we rely on blogs for media?

For this myth, I’m going to first pull a little from Missing the Link, now about to be published as a chapter in a journalism textbook called “The New Journalist”:

The qualitative error made by print journalists is to assume that they are competing against the average quality of online content. There may be 1.5 million posts a day, but as anyone who’s read a friend’s blog knows, even the average quality of this content is poor. But this has lulled the industry into a false sense of confidence. As Paul Graham describes: “In the old world of ‘channels’ (e.g. newspapers) it meant something to talk about average quality, because that’s what everyone was getting whether they liked it or not. But now you can read any writer you want. Consequently, print media isn’t competing against the average quality of online writing, they’re competing against the best writing online…Those in the print media who dismiss online writing because of its low average quality are missing an important point. No one reads the average blog.”

You know what though, I’m going to build on that. Goldhawk keeps talking about the average blog or average twitterer (which of course, no one follows; we all follow big names, like Clay Shirky and Tim O’Reilly). But you know what? They keep comparing the average blog to the best newspapers. The fact is, even the average newspaper sucks. The Globe represents the apex of the newspaper industry in Canada, not the average, so stop using it as an example. To get the average, go into any mid-sized town and grab a newspaper. It won’t be interesting. Especially to you – an outsider. It will have stories that appeal to a narrow audience, and even then, many of these will not be particularly well written. More importantly still, there will be little, and likely no, investigative journalism – the thing that allegedly separates blogs from newspapers. Indeed, even here in Vancouver, a large city, it is frightening how many times press releases get marginally touched up and then released as “a story.” This is the system we are afraid of losing?

Myth 2: How will people sort good from low quality news?

I always love this myth. In short, it presumes that the one thing the internet has been fantastic at developing – filters – simply won’t evolve in the one part of the media ecosystem (news) where people desperately want them. At best, this is naive. At worst, it is insulting. Filters will develop. They already have. Twitter is my favourite news filter – I probably get more news via it than any other source. Google is another. Nothing gets you to a post or article about a subject you are interested in like a good (old-fashioned?) Google search. And yes, there is also going to be a market for branded content – people will look for that as a shortcut for figuring out what to read. But please – people are smarter than you think at finding news sources.

Myth 3: People lack media savvy to know good from low quality news.

I love the elitist contempt the media industry sometimes has towards its readers. But, okay, let’s say this is true. Then the newspapers and mainstream media have only themselves to blame. If people don’t know what good news is, it is because they’ve never seen it (and by and large, they haven’t). The most devastating critique of this myth is actually delivered by one of my favourite newspaper men: Kenneth Whyte, in his must-listen-to Dalton Camp Lecture on journalism. In it, Whyte talks about how, in the late 19th and early 20th century, NYC had dozens and dozens of newspapers that fought for readership, and people were media savvy, shifting from paper to paper depending on quality and perspective. That all changed with consolidation and a shift from paying for content to advertising-supported content. Advertisers want staid, plain, boring newspapers with big audiences. This means newspapers play to the lowest common denominator and are market-oriented to be boring. It also leaves them beholden to corporate interests (when was the last time the Vancouver Sun really did a critical analysis of the housing industry – its biggest advertising source?). If people are not media savvy it is, in part, because the media ecosystem demands so little of them. I suspect that social media can and will change this. Big newspapers may be what we know, but they may not be good for citizenship or democracy.

Myth 4: There will be no good (and certainly no investigative) journalism without mainstream media.

Possible. I think the investigative journalism concern is legitimate. That said, I’m also not convinced there is a ton of investigative journalism going on. There may also be more going on in the blogs than we know. It could be that these stories a) don’t get prominence and b) even when they do, newspapers often don’t cite blogs, so a story first broken by a blog may not be attributed. But investigative journalism comes in different shapes and sizes. As I wrote in one of my most-viewed posts, The Death of Journalism:

I suspect the ideal of good journalism will shift from being what Gladwell calls puzzle solving to mystery solving. In the former you must find a critical piece of the puzzle – one that is hidden to you – in order to explain an event. This is the Woodward and Bernstein model of journalism – the current ideal. But in a transparent landscape where huge amounts of information about most organizations is being generated and shared, the critical role of the journalist will be that of mystery solving – figuring out how to analyze, synthesize and discover the mystery within the vast quantity of information. As Gladwell recounts, this was ironically the very type of journalism that brought down Enron (an organization that was open, albeit deeply flawed). All of the pieces that led to the story that “exposed” Enron were freely, voluntarily and happily given to reporters by Enron. It’s just a pity it didn’t happen much, much sooner.

I for one would celebrate the rise of this mystery-focused style of “journalism.” It has been sorely needed over the past few years. Indeed, the housing crisis that led to the current financial crisis is a perfect example of a case where we needed mystery-solving, not puzzle-solving, journalism. The fact that sub-prime mortgages were being sold and re-packaged was not a secret; what was lacking was enough people willing to analyze and write about this complex mystery and its dangerous implications.

And finally, Myth 5: People only read stories that confirm their biases.

Rather than Goldhawk, it was Christopher Waddell who kept bringing this point up. This problem, sometimes referred to as the “echo chamber” effect, is often cited as a reason why online media is “bad.” I’d love to know Waddell’s sources (I’m confident he has some – he is very sharp); I’ve just not seen any myself. Indeed, Andrew Potter recently sent me a link to “Ideological Segregation Online and Offline.” What is it? A peer-reviewed study that found no evidence the Internet is becoming more ideologically segregated. And the comparison is itself deeply flawed. How many conservatives read the Globe? How many liberals read the National Post? I love the idea that somehow mainstream media doesn’t ideologically segregate an audience. Hasn’t anyone looked at Fox or MSNBC recently?

Ultimately, it is hard to watch (or participate in) these shows without attributing all sorts of motivations to those involved. I keep feeling that people are defending the status quo and trying to justify their role in the news ecosystem. To be fair, it is a frightening time to be in media.

When someone demands to know how we are going to replace newspapers, they are really demanding to be told that we are not living through a revolution. They are demanding to be told that old systems won’t break before new systems are in place. They are demanding to be told that ancient social bargains aren’t in peril, that core institutions will be spared, that new methods of spreading information will improve previous practice rather than upending it. They are demanding to be lied to.

And I refuse to lie. It sucks to be a newscaster or a journalist or a columnist, especially if you are older. Forget about the institutions (they’ve already been changing): the culture of news media, which many employed in the field cling to strongly, is evolving and changing. That is a painful process, especially for those who have dedicated their lives to it. But that old world was far from perfect. Yes, the new world will have problems, but they will be new problems, and there may yet be solutions to them. What I do know is that there aren’t solutions to the old problems in the old system, and frankly, I’m tired of those old problems. So let’s get on with it. Be critical, but please, stop spreading the myths and the fear mongering.

Datadotgc.ca Launched – the opportunity and challenge

Today I’m really pleased to announce that we’ve launched datadotgc.ca, a volunteer-driven site I’m collaboratively creating with a small group of friends and, I hope, a growing community that, if you are interested, may include you.

As many of you already know, I, and many other people, want our governments to open up and share their data in useful, structured formats that people can actually use and analyze. Unlike our American and British peers, the Canadian federal (and provincial…) government(s) currently have no official, coordinated effort to release government data.

I think that should change.

So rather than merely complain that we don’t have a data.gov or data.gov.uk in Canada, we decided to create one ourselves. We can model what we want our governments to do and even create limited versions of the service ourselves. So that is what we are doing with this site. A stab at showing our government, and Canada, what a federal open data portal could and should look like – one that I’m hoping people will want to help make a success.

Two things to share.

First, what’s our goal for the site?

  • Be an innovative platform that demonstrates how government should share data.
  • Create an incentive for government to share more data by showing ministers, public servants and the public which ministries are sharing data, and which are not.
  • Provide a useful service to citizens interested in open data by bringing all the government data together in one place, making it easier to find.

Second, our big challenge.

As Luke C, a datadotgc.ca community member, said to me – getting the site up is the easy part. The real challenge is building a community of people who will care for it and help make it a living, growing and evolving success. Here there is lots of work still to be done. But if you feel passionate about open government and are interested in joining our community, we’d love to have you. At the moment, especially as we are still getting infrastructure in place to support the community, we are convening at a google group here.

So what are some of the things I think are priorities in the short term?

  • Adding or bulk-scraping in more data sets so the site more accurately displays what is available
  • Locating data sets that are open and ready to be “liberated”
  • Documenting how to add or scrape in a data set so people can help more easily
  • Implementing a more formal bug and feature tracker
  • Plus lots of other functionality I’d like to add (and I’m sure there are lots more ideas out there), like “request a closed data set”
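To make the scraping item above concrete, here is a minimal sketch of what “scraping in” a data set might involve: pulling rows out of an HTML table on a ministry page and emitting CSV that a portal like datadotgc.ca could ingest. This is purely illustrative – the table markup, column names and `table_to_csv` helper are all invented for the example, not part of any real government page or the actual datadotgc.ca codebase.

```python
import csv
import io
from html.parser import HTMLParser


class TableScraper(HTMLParser):
    """Collects the text of every <td>/<th> cell, grouped by <tr> row."""

    def __init__(self):
        super().__init__()
        self.rows = []        # completed rows
        self._row = []        # cells of the row being parsed
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and data.strip():
            self._row.append(data.strip())


def table_to_csv(html):
    """Convert the first-found HTML table rows into a CSV string."""
    scraper = TableScraper()
    scraper.feed(html)
    out = io.StringIO()
    csv.writer(out).writerows(scraper.rows)
    return out.getvalue()


# Invented sample page fragment, standing in for a scraped ministry page.
sample = (
    "<table>"
    "<tr><th>Ministry</th><th>Datasets</th></tr>"
    "<tr><td>Environment</td><td>12</td></tr>"
    "</table>"
)
print(table_to_csv(sample))
```

In practice you would fetch the page over HTTP and feed the response body in, but the row-collection logic would look much the same.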

As Clay Shirky once noted about any open source project, datadotgc.ca is powered by love. If people love the site and love what it is trying to accomplish, then we will have a community interested in helping make it a success. I know I love datadotgc.ca – and so my goal is to help you love it too, and to do everything I can to make it as easy as possible for you to make whatever contribution you’d like to make. Creating a great community is the hardest but best part of any project. We are off to a great start, and I hope to maybe see you on the google group.

Finally, I just want to thank everyone who has helped so far, including the fine people at Raised Eyebrow Web Studio, Luke Closs, and a number of fantastic coders from the Open Knowledge Foundation. There are also some great people over at the Datadotgc.ca Google Group who have helped scrape data, tested for bugs and been supportive and helpful in so many ways.

Opening Parliament and other big announcements

This is going to be an exciting week for online activists seeking to make government more open and engaged.

First off, openparliament.ca launched yesterday. This is a fantastic site with a lot going for it – go check it out (after reading my other updates!). And huge kudos to its creator Michael Mulley. It is just another great example of how our democratic institutions can be hacked to better serve our needs – to make them more open, accessible and engaging. There is a ton of stuff that could be built on top of Michael’s site and others, like Howdtheyvote. I’ve written more about this in a piece on the Globe’s website titled If You Won’t Tell Us About Our MPs, We’ll Do It For You.

Second, as a follow-on to the launch of openparliament.ca, I’ve been meaning to share for some time that I’ve been having conversations with the House of Commons IT staff over the past couple of months. About a month ago parliament IT staff agreed to start sharing the Hansard, MPs’ bios, committee calendars and a range of other information via XML (sorry for not sharing this sooner, things have been a little crazy). They informed me that they would start doing this before the year is over – so I suspect it won’t happen in the next couple of months, but it will happen at some point in the next six months. This is a huge step forward for the House, and hopefully not the last (there is no movement on the Senate as of yet). There are still a ton more ways that information about the proceedings of Canada’s democracy could be made more easily available, but we have some important momentum, with great sites like those listed above and internal recognition of the need to share more data. I’ll be having further conversations with some of the staff over the coming months, so I’ll try to update people on progress as I find out more.
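Why does an XML feed matter so much? Because once the Hansard and MP bios are machine-readable, sites like openparliament.ca can consume them directly instead of screen-scraping. As a sketch, here is how a few lines of Python could turn such a feed into usable data. Note that no schema has actually been published yet – the `<mps>` structure, tag names and the two MPs below are entirely invented for illustration.

```python
import xml.etree.ElementTree as ET

# Invented sample of what an MP-bio feed *might* look like once
# parliament starts publishing XML; the real schema does not exist yet.
feed = """<mps>
  <mp><name>Jane Doe</name><riding>Exampleville Centre</riding><party>Example Party</party></mp>
  <mp><name>John Smith</name><riding>Sampletown North</riding><party>Sample Party</party></mp>
</mps>"""


def parse_mps(xml_text):
    """Turn the hypothetical feed into a list of dicts, one per MP."""
    root = ET.fromstring(xml_text)
    return [{child.tag: child.text for child in mp} for mp in root.findall("mp")]


for mp in parse_mps(feed):
    print(mp["name"], "-", mp["riding"])
```

Contrast this with scraping HTML: when the data arrives as structured XML, a site builder spends their time on features instead of on parsing fragile markup.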

Finally, I am gearing up to launch datadotgc.ca. This is a project I’ve been working on for quite some time with a number of old and new allies. Sadly, the Canadian government does not have an open data policy and there is no political effort to create a data.gc.ca like that created by the Obama administration (http://www.data.gov/) or by the British government (http://data.gov.uk/). So I, along with a few friends, have decided to create one for them. I’ll have an official post on this tomorrow. Needless to say, I’m excited. We are still looking for people to help us populate the site with open government data sets – and we have even located some that we need help scraping – so if you are interested in contributing feel free to join the datadotgc.ca google group and we can get you password access to the site.