Tag Archives: open government

Right to Know Week – going on Right Now

So, for those not in the know (…groan) this week is Right to Know Week.

Right to Know (RTK) Week is an internationally designated week with events taking place around the world. It is designed to improve people's awareness of their rights to access government information and the role such access plays in democracy and good governance. Here in Canada there is an entire week's worth of events planned, and it is easy to find out what's happening near you.

Last year, during RTK Week, I was invited to speak in Ottawa on a panel for parliamentarians. My talk, called Government Transparency in a Digital Age (blog post about it & slideshare link), seemed to go well, and the Information Commissioner soon after started quoting some of my ideas and writings in her speeches and testimony/reports to Parliament. Unsurprisingly, she has become a fantastic ally and champion in the cause for open data. Indeed, most recently, the Federal Information Commissioner, along with all of her provincial counterparts, released a joint statement calling on their respective governments to proactively disclose information "in open, accessible and reusable formats."

What is interesting about all this is that over the course of the last year the RTK community – as witnessed by the Information Commissioner's transformation – has begun to understand why "the digital" is radically transforming what access means and how it can work. There is an opportunity to significantly enlarge the number and type of allies in the cause of "open government." But for this transformation to take place, the traditional players will need to continue to rethink and revise both their roles and their relationships with these new players. This is something I hope to pick up on in my talk.

So yes… this year, I’ll be back in Ottawa again.

I'll once again be part of the Conference for Parliamentarians panel, Balancing Openness and the Public Interest in Protecting Information, which I'll be doing with:

  • David Ferriero, Archivist of the United States
  • Vanessa Brinkmann, Counsel, Initial Request Staff, Office of Information Policy, U.S. Department of Justice; and
  • James Travers of the Toronto Star

Perhaps even more exciting than the panel I'm on, though, is the panel that shows how quickly both this week and the Information Commissioners are trying to transform. Consider that, this year, RTK Week will include a panel on open data titled "Push or Pull: Liberating Government Information." It will be chaired by Microsoft's John Weigelt and have on it:

  • Nathalie Des Rosiers, General Counsel, Canadian Civil Liberties Association
  • Toby Mendel, Executive Director of the Centre for Law and Democracy
  • Kady O’Malley, Parliamentary blogger for CBC.ca’s Inside Politics blog
  • Jeff Sallot, Carleton University journalism instructor and former Globe and Mail journalist

Sadly, I have a prior commitment back in Vancouver so won't be there in person, but I hope to check it out online, and I hope you will too.

Welcome to Right to Know Week. Hope you’ll join in the fray.

Links from Gov2.0 Summit talk and bonus material

My 5-minute, lightning-fast, jam-packed talk (do I do other formats? answer… yes) from yesterday's Gov 2.0 Summit has just been posted to YouTube. I love that this year the videos have the slides integrated into them.

For those who were, and were not, there yesterday, I wanted to share links to all the great sites and organizations I cited during my talk. I also wanted to share one or two quick stories I didn't have time to dive into:

VanTrash and 311:

As one of the more mature apps in Vancouver using open data, VanTrash keeps showing us how these types of innovations give back in new and interesting ways.

In addition to being used by over 3,000 households (despite never being advertised – this is all word of mouth), it turns out that city staff are also finding a use for VanTrash.

I was recently told that 311 call staff use VanTrash to help troubleshoot incoming calls from residents who are having problems with garbage collection. The first thing one needs to do in such a situation is identify which collection zone the caller lives in – it turns out VanTrash is the fastest and most effective way to accomplish this. Simply input the caller's address into the top right-hand field and presto – you know their zone and schedule. Much better than trying to find their address on a physical map that you may or may not have near your station.
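For the curious, the core of an address-to-zone lookup like the one described above is just a point-in-polygon test against the city's collection-zone boundaries. Here is a minimal sketch – not VanTrash's actual code, and the zone names and coordinates below are invented for illustration (a real service would also geocode the address first):

```python
# Hypothetical sketch of a zone lookup: given a caller's (already
# geocoded) location, find which collection-zone polygon contains it.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending right from the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Toy zone boundaries as (longitude, latitude) pairs -- illustrative only.
ZONES = {
    "north-red": [(-123.20, 49.28), (-123.10, 49.28),
                  (-123.10, 49.32), (-123.20, 49.32)],
    "south-blue": [(-123.20, 49.20), (-123.10, 49.20),
                   (-123.10, 49.28), (-123.20, 49.28)],
}

def find_zone(lon, lat):
    """Return the name of the zone containing the point, or None."""
    for name, boundary in ZONES.items():
        if point_in_polygon(lon, lat, boundary):
            return name
    return None

print(find_zone(-123.15, 49.30))  # a point inside the northern toy zone
```

The point of the sketch is that once the zone boundaries are published as open data, this lookup is trivial to build – which is presumably why a third-party app could end up faster than the city's own paper maps.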

TaxiCity, Open Data and Game Development

Another interesting spin-off of open data. The TaxiCity development team, which recreated downtown Vancouver in 2-D using data from the open data catalog, noted that creating virtual cities in games could be a lot easier with open data. You could simply randomize the heights of buildings and presto – an instant virtual city would be ready. While the buildings would still need to be skinned, one could quickly recreate cities people know, or create fake cities that feel realistic because they'd be based on real plans. More importantly, this process could help reduce the time and resources needed to create virtual cities in games – an innovation that may be of interest to those in the video game industry. Of course, given that Vancouver is a hub for video game development, it is exactly these types of innovations the city wishes to foster, and they will help sustain Vancouver's competitive advantage.
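The idea above can be sketched in a few lines. This is not the TaxiCity code – the footprint records and field names below are invented – but it shows how real building footprints from an open data catalog, plus randomized heights, give you an instant plausible skyline:

```python
import random

# Invented stand-ins for building-footprint records pulled from an
# open data catalog; a real catalog would supply polygons, addresses, etc.
footprints = [
    {"id": 1, "lot_area": 420.0},
    {"id": 2, "lot_area": 980.5},
    {"id": 3, "lot_area": 150.0},
]

def generate_city(footprints, min_floors=2, max_floors=40, seed=None):
    """Assign a random height to each real footprint to build a virtual city."""
    rng = random.Random(seed)  # seedable so a generated city is reproducible
    buildings = []
    for fp in footprints:
        floors = rng.randint(min_floors, max_floors)
        buildings.append({
            "id": fp["id"],
            "lot_area": fp["lot_area"],
            "floors": floors,
            "height_m": floors * 3.5,  # rough floor-to-floor height in metres
        })
    return buildings

city = generate_city(footprints, seed=42)
for b in city:
    print(b["id"], b["floors"], b["height_m"])
```

Because the street grid and lot shapes come from real plans, the randomized result still "feels" like a city – the expensive part a game studio is spared is digitizing the layout itself.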

Links (in order of appearance in my talk)

Code For America shirt designs can be seen in all their glory here and can be ordered here. As a fun aside, I literally took that shirt off Tim O'Reilly's back! I saw it the day before and said I'd wear it on stage. Tim overheard me and said he'd give me his if I was serious…

Vancouver’s Open Motion (or Open3, as it is internally referred to by staff) can be read in the city’s PDF version or an HTML version from my blog.

Vancouver's Open Data Portal is here. Keep an eye on this page as new data sets and features are added. You can get RSS feed or email updates on the page, as well as see its update history.

Vantrash the garbage reminder service’s website is here. There’s a distinct mobile interface if you are using your phone to browse.

ParkingMobility, an app that crowdsources the location of disabled parking spaces and enables users to take pictures of cars illegally parked in disabled spots to assist in enforcement.

TaxiCity, the Centre for Digital Media project sponsored by Bing and Microsoft, has its project page here. Links to the source code, documentation, and a ton of other content are also available. Really proud of these guys.

Microsoft's internal Vancouver Open Data Challenge fostered a number of apps. Most have been open-sourced, so you can get access to the code as well. The apps include:

The Graffiti Analysis app, written by University of British Columbia undergraduate students, can be downloaded from the blog post I wrote about their project.

BTA Works – the research arm of Bing Thom Architects has a great website here. You can’t download their report about the future of Vancouver yet (it is still being peer-reviewed) but you can read about it in this local newspaper article.

Long Tail of Public Policy – I talk about this idea in some detail in my chapter on O’Reilly Media’s Open Government. There is also a brief blog post and slide from my blog here.

Vancouver’s Open Data License – is here. Edmonton, Ottawa and Toronto use essentially the exact same thing. Lots that could be done on this front still mind you… Indeed, getting all these cities on a single standard license should be a priority.

Vancouver Data Discussion Group is here. You need to sign in to join but it is open to anyone.

Okay, hope those are interesting and helpful.

Are you a Public Servant? What are your Open Data Challenges?

A number of governments have begun to initiate open data and open government strategies. With more governments moving in this direction a growing number of public servants are beginning to understand the issues, obstacles, challenges and opportunities surrounding open data and open government.

Indeed, these challenges are why many of these public servants frequent this blog.

This is precisely why I'm excited to share that, along with the Sunlight Foundation, the Personal Democracy Forum, Code for America, and GovLoop, I am helping Socrata with a recently launched survey aimed at government employees at the national, regional and local levels in the US and abroad about the progress of open data initiatives within their organizations.

If you are a government employee, please consider taking time to help us understand the state of open data in government. The survey is comprehensive, but given how quickly this field, and the policy questions that come with it, is expanding, I think the collective result of our work could be useful. So, with all that said, I know you're busy, but I hope you'll consider taking 10 minutes to fill out the survey. You can find it at: http://www.socrata.com/benchmark-study.

Creating effective open government portals

In the past few years a number of governments have launched open data portals. These sites, like www.data.gov or data.vancouver.ca, share data that government agencies collect in machine-readable formats (i.e., formats you can actually work with on your computer).

Increasingly, people approach me and ask: what makes for a good open data portal? Great question. And now that we have a number of sites out there, we are starting to learn what makes a site more or less effective. A good starting point for any of this is the 8 Open Government Data principles, and for those newer to this discussion, there are the 3 laws of open data (also available in German, Japanese, Chinese, Spanish, Dutch and Russian).

But beyond that, I think there are some pretty tactical things data portal owners should be thinking about. So here are some issues I've noticed and thought might be helpful.

1. It’s all about automating the back end

Probably the single greatest mistake I've seen governments make is that, in the rush to get some PR or meet an artificial deadline, they create a data portal in which the data must be updated manually. This means that a public servant must run around copying the data out of one system, converting it (and possibly scrubbing it of personal and security-sensitive information) and then posting it to the data portal.

There are a few interrelated problems with this approach. Yes, it allows you to get a site up quickly, but it isn't sustainable. Most government IT departments don't have a spare body who can do this work part-time, even less so if the data site were to grow to include hundreds or thousands of data sets.

Consequently, this approach is likely to generate ill-will towards the government, especially from the very community of people who could and should be your largest supporters: local tech advocates and developers.

Consider New York: here is a site where – from what I can tell – the data is not regularly updated, and grumblings are getting louder. I've heard similar grumblings from developers and citizens in Canadian cities where open data portals get trumpeted despite infrequent updates and having few data sets available.

If you are going to launch an open data portal, make sure you’ve figured out how to automate the data updates first. It is harder to do, but essential. In the early days open data sites often live and die based on the engagement of a relatively small community or early adopters – the people who will initially make the data come alive and build broader awareness. Frustrate the community and the initiative will have a harder time gaining traction.
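To make "automate the back end" concrete, the goal is that publishing is a scheduled job straight out of the operational system, with no human copying data between systems. Here is a minimal sketch under assumed names – the `permits` table and its columns are hypothetical, and an in-memory SQLite database stands in for whatever the city actually runs:

```python
import csv
import io
import sqlite3

def export_table_to_csv(conn, table, columns):
    """Dump selected columns of a table to CSV text, ready to publish.

    In a real deployment this would run on a schedule (cron, etc.) and
    push the result to the portal, so the published data never goes stale.
    """
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(columns)  # header row
    query = "SELECT {} FROM {}".format(", ".join(columns), table)
    for row in conn.execute(query):
        writer.writerow(row)
    return out.getvalue()

# Demo with an in-memory database standing in for the real system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE permits (id INTEGER, street TEXT, issued TEXT)")
conn.executemany(
    "INSERT INTO permits VALUES (?, ?, ?)",
    [(1, "Main St", "2010-08-01"), (2, "Oak St", "2010-08-15")],
)

csv_text = export_table_to_csv(conn, "permits", ["id", "street", "issued"])
print(csv_text)
```

The scrubbing step mentioned above (stripping personal or security-sensitive fields) fits naturally here too: you simply never include those columns in the export, rather than relying on someone to remember to delete them by hand.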

2. Keep the barriers low

Both the 8 principles and 3 laws talk a lot about licensing. Obviously there are those who would like the licenses on many existing portals to be more open, but in most cases the licenses are pretty good.

What you shouldn't do is require users to register. If the data is open, you don't care who is using it and indeed, as a government, you don't want the hassle of tracking them. Also, don't call your data open if users must belong to an educational institution or a non-profit. That is, by definition, not data that is open (I'm looking at you, StatsCan; it's not liberated data if only a handful of people can look at it – sadly, you're not the only site to do this). Worst of all is one website where, in order to access the online catalogue, you have to fax in a form outlining who you are.

This is the antithesis of how an open data portal should work.

3. Think like (or get help from) good librarians and designers

The real problem is when sites demand too much of users to even gain access to the data. Readers of this blog know my feelings about Statistics Canada's website: the data always seems to be one more click away. And that's if you can even locate the data set you are interested in, which usually seems impossible.

And yes, I know that Statistics Canada’s phone operators are very helpful and can help you locate datasets quickly – but I submit to you that this is a symptom of a problem. If every time I went to Amazon.com I had to call a help desk to find the book I was interested in I don’t think we’d be talking about how great Amazon’s help desk was. We’d be talking about how crappy their website is.

The point here is that an open data site is likely to grow. Indeed, looking at data.gov and data.gov.uk, these sites now have thousands of data sets on them. In order to be navigable they need excellent design. More importantly, you need a new breed of librarian – one capable of thinking in the online space – to help create a system where data sets can be easily and quickly located.

This is rarely a problem early on (Vancouver has 140 data sets up, Washington DC around 250; these can still be trawled through without a sophisticated system). But you may want to sit down with a designer and a librarian during these early stages to think about how the site might evolve so that you don't create problems in the future.

4. Feedback

Finally, I think good open data portals want, and even encourage, feedback. I like that data.vancouver.ca has a survey on the site which asks people what data sets they would be interested in seeing made open.

But more importantly, this is an area where governments can benefit. No data set is perfect. Most have a typo here or there. Once people start using your data they are going to find mistakes.

The best approach is not to pretend like the information is perfect (it isn’t, and the public will have less confidence in you if you pretend this is true). Instead, ask to be notified about errors. Remember, you are using this data internally, so any errors are negatively impacting your own planning and analysis. By harnessing the eyes of the public you will be able to identify and fix problems more quickly.

And, while I'm sure we all agree this is probably not the case, maybe the fact that the data is public will create a small added incentive to fix it quickly. Maybe.

Interview on Open Source, Open Gov & Open Data with CSEDEV

The other week – in the midst of boarding a plane(!) – I did an interview with CSEDEV on some thoughts around open data, open government and open source.

The kind people at CSEDEV have written up the interview in a paraphrased form and published it as three short blog posts: part 1 here, part 2 here and part 3 here.

Part of what makes this interesting to me is how a broader set of people are becoming interested in open government. Take CSEDEV, for example. Here is an Ottawa-based software firm focused on enterprise solutions. It's part of an increasing number of software companies and IT consulting firms that are taking note of the open government and open data meme. Indeed, another concrete example of this is that Lagan, a large supplier of 311 systems, announced the other week that they would support the Open311 standard. This dramatically alters the benefits of a 311 system and its capacity to serve as a platform and innovation driver for a city.

But, even more exciting, the meme is starting to spread beyond IT and software. I was recently asked to write an article on what open data and open government mean for business more generally, here in BC. (Will link to it when published.)

These moments represent an important shift in the open data and open government debate. With vendors and consultants taking notice, governments can more easily push for, and expect, off-the-shelf solutions that support open government initiatives. Not only could this reduce costs for government and improve access for public servants and citizens, it could also be a huge boost for open standards, which could prove transformative to the management of information in the public sector.

Exciting times. Watch the open government space – now that it’s linked to IT, it’s beginning to gain speed.

How Science Is Rediscovering "Open" And What It Means For Government

Pretty much everybody in government should read the fantastic New York Times article Sharing of Data Leads to Progress on Alzheimer's. On one hand, the article is a window into what has gone wrong with science – how all too frequently a process that used to be competitive but open and problem-focused has become competitive but closed and intellectual-property driven (one need only look at scientific journals to see how slow and challenging the process has become).

But strip away the talk about the challenges and opportunities for science. At its core, this is an article about something more basic and universal. This is an article about open data.

Viewed through this lens it is a powerful case study for all of us. It is a story of how one scientific community's (re)discovery of open principles can yield powerful lessons and analogies for the private sector and, more importantly, the public sector.

Consider first, the similarities in problems. From the article:

Dr. Potter had recently left the National Institutes of Health and he had been thinking about how to speed the glacial progress of Alzheimer’s drug research.

“We wanted to get out of what I called 19th-century drug development — give a drug and hope it does something,” Dr. Potter recalled in an interview on Thursday. “What was needed was to find some way of seeing what was happening in the brain as Alzheimer’s progressed and asking if experimental drugs could alter that progression.”

Our governments are struggling too. They are caught with 20th-century organizational, decision-making and accountability structures. More to the point, they move at a glacial pace. On the one hand we should be worried about a government that moves too quickly, but a government that is too slow to respond to crises or to address structural problems is one that will lose the confidence of the public. Moreover, as in healthcare, many of the simpler problems have been addressed; citizens are looking for solutions to more complex problems. As with the scientists and Alzheimer's, we may need new models to speed up the process of understanding and testing solutions for these issues.

To overcome this 19th-century approach – and achieve the success they currently enjoy – the scientists decided to do something radical.

The key to the Alzheimer’s project was an agreement as ambitious as its goal: not just to raise money, not just to do research on a vast scale, but also to share all the data, making every single finding public immediately, available to anyone with a computer anywhere in the world.

No one would own the data. No one could submit patent applications, though private companies would ultimately profit from any drugs or imaging tests developed as a result of the effort.

Consider this. Here a group of private sector companies recognized that intellectual property slows down innovation. The solution: dilute the intellectual property, focus on sharing data and knowledge, and understand that those who contribute most will be best positioned to capitalize on the gains at the end.

Sadly, this is the same problem faced within governments. Sometimes it has to do with actual intellectual property (something I've recently argued our governments should abandon). However, the real challenge isn't about formal rules; it is more subtle. In complex, siloed organizations where knowledge is power, the incentive is to not share knowledge and data. Better to use the information you have strategically, in a limited fashion, to maximize influence. The result: data is kept as a scarce but strategic asset. This is a theme I tackled both in my chapter in Open Government and in blog posts like this one.

In short, the real challenge is structural and cultural. Scientists had previously existed in a system where reputation (and career advancement) was built by hoarding data and publishing papers. While the individual incentives were okay, collectively this behavior was a disaster. The problem was not getting solved.

Today, it would appear that publishing is still important, but there are reputational effects from being the person or group to share data. Open data is itself a currency. This is hardly surprising. If you are sharing data it means you are doing lots of work, which means you are likely knowledgeable. As a result, those with a great deal of experience are respected, but there remains the opportunity for those with radical ideas and new perspectives to test hypotheses and gain credibility by using the open data.

Unsurprisingly, this shift wasn’t easy:

At first, the collaboration struck many scientists as worrisome — they would be giving up ownership of data, and anyone could use it, publish papers, maybe even misinterpret it and publish information that was wrong.

Wow, does that sound familiar. This is invariably the first question government officials ask when you begin talking about open data. The answer, both in the scientific community and for government, is that you either believe in the peer-review process and public debate, or you don’t. Yes, people might misrepresent the data, or publish something that is wrong, but the bigger and more vibrant the community, the more likely people will find and point out the errors quickly. This is what innovation looks like… people try out ideas, sometimes they are right, sometimes they are wrong. But the more data you make available to people the more ideas can be tested and so the faster the cycle of innovation can proceed.

Whether it is behind the firewall or open to the public, open data is core to accelerating the spread of ideas and the speed of innovation. These scientists are rediscovering that fact, as are some governments. We've much to learn and do, but the case is becoming stronger and stronger that this is the right thing to do.

Canada's emerging opendata mashups (plus some ideas)

Over at IT World Canada, Jennifer Kavur has put together a list of 25 sites and apps for Open Government. What’s fantastic about this list is it demonstrates to government officials and politicians that there is a desire, here in Canada, to take government data and do interesting things with it.

Whether driven by developers like Michael Mulley or Morgan Peers who just want to improve democracy and have fun, or whether it is by those like Jeff Aramini who want to start a business and make money, the appetite to do something is real, and it is growing. Indeed, the number of apps and sites is far greater than 25 including simple mashups like CSEDEV’s environment Canada pollution data display or the 17 apps recently created as part of the Apps For Climate Action competition.

What is all the more remarkable is that this growth is happening even though there is little government data available. Yes, a number of cities have made data available, but provincially, and especially federally, there is almost no concerted effort to make data easy to use. Indeed, many of the sites cited by Kavur have to "scrape" the data off government websites, a laborious process that can easily break if the government website changes structure. It raises the question: what would happen if the data were accessible?
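For readers who haven't scraped before, here is a minimal sketch of what that laborious process looks like, using only Python's standard library. The page markup below is invented; the fragility the paragraph above describes is visible in the code itself – if the site stopped using plain `<td>` cells, this parser would silently break:

```python
from html.parser import HTMLParser

class TableScraper(HTMLParser):
    """Pull rows of table cells out of a web page's HTML.

    This is what scraping means in practice: inferring structure from
    markup that was designed for display, not for reuse.
    """

    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.row = []
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
        elif tag == "tr" and self.row:
            self.rows.append(self.row)  # a completed row of cells
            self.row = []

    def handle_data(self, data):
        if self.in_cell:
            self.row.append(data.strip())

# Invented stand-in for a government web page; a real scraper would
# fetch the live page with urllib or similar.
sample_page = """
<table>
  <tr><td>Station A</td><td>42</td></tr>
  <tr><td>Station B</td><td>17</td></tr>
</table>
"""

scraper = TableScraper()
scraper.feed(sample_page)
print(scraper.rows)  # [['Station A', '42'], ['Station B', '17']]
```

Compare this to a portal publishing the same data as CSV or an API: the scraper above encodes assumptions about someone else's page layout, while a machine-readable feed removes the guesswork entirely.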

As an aside, two data sets I’m surprised no one has done much with are both located on the Toronto website: Road Restriction data and DineSafe data. Given how poor the city’s beta road restriction website is and the generally high interest in traffic news, I’d have thought that one of the local papers or media companies would have paid someone to develop an iPhone app or a widget for their website using this data. It is one thing commuters and consumers want to know more about.

As for DineSafe, I'm also surprised that no one in Toronto has approached the eatsure developers and asked them if they can port the site to Toronto. I'm still more surprised that no local restaurant review website has developed a widget that shows the DineSafe rating of a restaurant on its review page. Or that a company like urbanspoon or yelp hasn't hired an iPhone app developer to integrate this data into their app…

Good times for Open Data in Canada. But if the feds and provinces were on board it could be much, much better…

Your Government Just Got Dumber: how it happened and why it matters to you

This piece was published in the Globe and Mail today, so it's always nice when you read it there and let them know it matters to you.

Last week the Conservative Government decided that it would kill the mandatory long census form it normally sends out to thousands of Canadians every five years. On the surface such a move may seem unimportant and, to many, uninteresting, but it has significant implications for every Canadian and every small community in Canada.

Here are 3 reasons why this matters to you:

1. The Death of Smart Government

Want to know who the biggest user of census data is? The government. Understanding what services are needed, where problems or opportunities may arise, or how a region is changing depends on having accurate data. The federal government, but also the provincial and, most importantly, local governments use Statistics Canada's data every day to find ways to save taxpayers money, improve services and make plans. Now, at the very moment when – thanks to computers – governments are finding new ways to use this information more effectively than ever before, it is to be cut off.

To be clear, this is a direct attack on the ability of government to make smart decisions. In fact, it is an attack on evidence-based public policy. Moreover, it was a political decision – it came from the Minister's office and does not appear to reflect what Statistics Canada wants or recommends. Of course, some governments prefer not to have information; all that data and evidence gets in the way of legislation and policies that are ineffective, costly and that reward vested interests (I'm looking at you, Crime Bill).

2. The Economy is Less Competitive

But it isn't just government that will suffer. In a 21st-century economy, data and information are at the heart of economic activity; they are what drive innovation, efficiency and productivity. Starve our governments, NGOs, businesses and citizens of data and you limit the wealth a 21st-century economy will generate.

Like roads to the 20th-century economy, data is the core infrastructure of a 21st-century economy. While it may seem just a boring public asset, it can nonetheless foster big companies, jobs and efficiencies. Roads spawned GM. Today, people often fail to recognize that the largest company already created by the new economy – Google – is a data company. Google is effective and profitable not because it sells ads, but because it generates and leverages petabytes of data every day from billions of search queries. This allows it to provide all sorts of useful services, such as pointing us, with uncanny accuracy, to merchandise and services we want or, better yet, spam we'd like to avoid. It can even predict when communities will experience flu epidemics four months in advance.

And yet, it is astounding that the Minister in charge of Canada’s digital economy, the minister who should understand the role of information in a 21st century economy, is the minister who authorized killing the creation of this data. In doing so he will deprive Canadians and their businesses of information that would make them, and thus our economy, more efficient, productive and profitable. Of course, the big international companies will probably be able to find the money to do their own augmented census, so those that will really suffer will be small and medium size Canadian businesses.

3. Democracy Just got Weaker

Of course, the most important people who could use the data created by the census aren't governments or businesses. It is ordinary Canadians. In theory, the census creates a level playing field in public policy debates. Were Statistics Canada's website usable and its data accessible (data, may I remind you, we've already paid for), then citizens could use this information to fight ineffective legislation, unjust policies, or wasteful practices. In a world where this information doesn't exist, those who are able to pay for the creation of this information – read: large companies – will have an advantage not only over citizens, but over our governments (which, of course, won't have this data anymore either). Today, the ability of ordinary citizens to defend themselves against government and businesses just got weaker.

So who's to blame? Tony Clement, the Minister of Industry Canada who oversees Statistics Canada, is to blame. His office authorized this decision. But Statistics Canada also shares in the blame. In an era where the internet has flattened the cost of distributing information, Statistics Canada: continues to charge citizens for data their tax dollars already paid for; has an unnavigable website where it is impossible to find anything; and often distributes data in formats that are hard to use. In short, for years the department has made its data inaccessible to ordinary Canadians. As a result it isn't hard to see why most Canadians don't know about or understand this issue. Sadly, once they do wake up to the cost of this terrible decision, I fear it will be too late.

Open Government interview and panel on TVO's The Agenda with Steve Paikin

My interview on TVO's The Agenda with Steve Paikin has been uploaded to YouTube (BTW, it is fantastic that The Agenda has a YouTube channel where it posts all its interviews. Kudos!). If you live outside Ontario, or were wrapped up in the Senators-Pens playoff game that was on at the same time (which obviously we destroyed in the ratings), I thought I'd throw it up here as a post in case it is of interest. The first clip is a one-on-one interview between myself and Paikin. The second clip is the discussion panel that occurred afterward with myself, senior editor of Reason magazine Katherine Mangu-Ward, American Prospect executive editor Mark Schmitt and the Sunlight Foundation's Policy Director John Wonderlich.

Hope you enjoy!

One on one interview with Paikin:

Panel Discussion:

Datadotgc.ca Launched – the opportunity and challenge

Today I’m really pleased to announce that we’ve launched datadotgc.ca, a volunteer driven site I’m collaboratively creating with a small group of friends and, I hope, a growing community that, if you are interested, may include you.

As many of you already know, I, and many other people, want our governments to open up and share their data in useful, structured formats that people can actually use or analyze. Unlike our American and British peers, the Canadian federal (and provincial…) government(s) currently have no official, coordinated effort to release government data.

I think that should change.

So rather than merely complain that we don’t have a data.gov or data.gov.uk in Canada, we decided to create one ourselves. We can model what we want our governments to do and even create limited versions of the service ourselves. So that is what we are doing with this site. A stab at showing our government, and Canada, what a federal open data portal could and should look like – one that I’m hoping people will want to help make a success.

Two things to share.

First, what’s our goal for the site?

  • Be an innovative platform that demonstrates how government should share data.
  • Create an incentive for government to share more data by showing ministers, public servants and the public which ministries are sharing data, and which are not.
  • Provide a useful service to citizens interested in open data by bringing all the government data together into one place, making it easier to find.

Second, our big challenge.

As Luke C, one datadotgc.ca community member, said to me: getting the site up is the easier part. The real challenge is building a community of people who will care for it and help make it a living, growing and evolving success. Here there is lots of work still to be done. But if you feel passionate about open government and are interested in joining our community, we'd love to have you. At the moment, especially as we are still getting infrastructure in place to support the community, we are convening at a Google Group here.

So what are some of the things I think are a priority in the short term?

  • Adding or bulk scraping in more data sets so the site more accurately displays what is available
  • Locating data sets that are open and ready to be “liberated”
  • Documenting how to add or scrape in a data set to allow people to help more easily
  • Implementing a more formal bug and feature tracker
  • Adding lots of other functionality that I, at least, would like to see (and I'm sure there are lots more ideas out there), like "request a closed data set"

As Clay Shirky once noted about any open source project, datadotgc.ca is powered by love. If people love the site and love what it is trying to accomplish, then we will have a community interested in helping make it a success. I know I love datadotgc.ca – and so my goal is to help you love it too, and to do everything I can to make it as easy as possible for you to make whatever contribution you’d like to make. Creating a great community is the hardest but best part of any project. We are off to a great start, and I hope to maybe see you on the google group.

Finally, I just want to thank everyone who has helped so far, including the fine people at Raised Eyebrow Web Studio, Luke Closs, and a number of fantastic coders from the Open Knowledge Foundation. There are also some great people over at the Datadotgc.ca Google Group who have helped scrape data, tested for bugs and been supportive and helpful in so many ways.