Tag Archives: apps

How Car2Go ruins Car2Go

So let me start by saying, in theory, I LOVE Car2Go. The service has helped prevent me from buying a car and has been indispensable in opening up more of Vancouver to me.

For those not familiar with Car2Go, it is a car-sharing service where the cars can be parked virtually anywhere in the city. When you need one, you just use a special card and PIN to access it, drive it to where you want to go, and then log out of the car, leaving it for the next person. All this at the affordable rate of 38 cents a minute. It's genius.

So what’s the problem?

Well, in practice, my experience with Car2Go is getting steadily worse, particularly when I'm most in need of the service. What's worse, the reasons are entirely within Car2Go's control – specifically, how it designed its app, its workflow and its security. My hope is there are lessons here for designers and anyone who is thinking about online services, particularly in the mobile space.

Let me explain.

First, understand that Car2Go's brand is built around convenience. Remember, the use case is that, at almost any time, you can find a car near you, access it, and get to where you want to go. Car2Go is not for people planning to use a car hours ahead (you don't really want to be paying 38 cents a minute to "hold" a car for 3 hours until you need it – that would cost you over $68!). Indeed, the price point is designed to discourage long-term use and encourage short, convenient trips. As a result, ease of access is central to the service and the brand promise.

In theory, here is what the process should look like.

  1. Fire up the Car2Go app on your smartphone and geolocate yourself
  2. Locate the nearest car
  3. Reserve it (this locks the car down for 15 minutes)
  4. Walk to your car and access it using your Car2Go card and PIN
  5. Drive off!

Here is the problem. The process now regularly breaks down for me at step 3. At first blush, this may not seem like a big deal… I mean, if the car is only a few blocks away, why not just walk over and grab it?

Alas, I do. But often, when you really want a car, someone else does too! This is even more the case when, say, it's raining, or it's the end of the business day. Indeed, many of the times when you would really like that car are times when someone else might also really want it. So being able to lock it down is important. Because if you can't…? Well, the other week I walked 12 blocks in the rain trying to get to 4 different Car2Go cars that I could see in the app but couldn't reserve. Why four? Because by the time I got to each of them, they were gone, scooped by another user. After 30 minutes of walking around and getting wet, I gave up, abandoned my appointment (very suboptimal) and went home. And this is not the first time this has happened.

The impact is that Car2Go is increasingly not a service I see myself relying on. Yes, I keep using it, but I no longer think of it as a service I can count on, even if I cushion in a little extra time. It's just… kind of reliable, because the split between really frustrating outcomes and total delight is starting to be 40/60, and that's not good.

But here is the killer part. Car2Go could fix this problem in a day. Tops.

The reason I can't reserve a car is that the Car2Go app forces you to log back in every once in a while. Why? I don't know. Even if someone stole my phone and used it to reserve a car, it would be useless to them. Let's say they managed to also steal my wallet and so had my Car2Go card. Even then it doesn't help them, since without my PIN they couldn't turn the car on. So having some rogue person with access to a user's account isn't exactly putting Car2Go in any danger.

So maybe you're thinking… well, just remember your password, David! So here's the big admission.

I WISH I COULD.

But Car2Go has these insanely stupid, deeply unsafe password rules that require you to have at least one number, one letter and a capitalized letter (or a special character – god knows if I remember their rules) in your password. Since the multitude of default passwords I use don't conform to their rules, I can never remember what my password is, leaving me locked out of my Car2Go app. And trust me, when you are late for a meeting, it's raining and you're getting soaked, the last thing you want to be doing is going through a password-reset process on webpages built for desktop browsers that takes 10 to 15 minutes to navigate and complete. Many a curse word has been directed at Car2Go in such moments.

What's worse, there is evidence that not only do these password rules create super crappy user experiences like the one I described above, they also make user accounts less secure. Indeed, check out this Wired article on passwords and the tension between convenience and effectiveness:

Security specialists – and many websites – prompt us to use a combination of letters, numbers, and characters when selecting passwords. This results in suggestions to use passwords like “Pn3L!x8@H”, to cite a recent Wired article. But sorry, guys, you’re wrong: Unless that kind of password has some profound meaning for a user (and then he or she may need other help than password help), then guess what? We. Will. Forget. It.

It gets worse. Because you will forget it, you'll do something both logical and stupid. YOU'LL WRITE IT DOWN. Probably somewhere that will be easy to access. LIKE IN YOUR PHONE'S ADDRESS BOOK.

Stupid password rules don't make users create smarter passwords. They make users do dumb things that often make their accounts less secure.
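
To make this concrete, here's a back-of-the-envelope entropy comparison. This is just a sketch: the alphabet size and word-list size are my assumptions, and the math assumes secrets are chosen uniformly at random (which composition rules don't actually guarantee).

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Bits of entropy for a secret of `length` symbols drawn
    uniformly at random from a pool of `pool_size` symbols."""
    return length * math.log2(pool_size)

# A composition-rule password like "Pn3L!x8@H": 9 characters from
# roughly 80 printable symbols. Strong on paper, hopeless to memorize.
print(f"{entropy_bits(80, 9):.1f} bits")    # ~56.9 bits

# Four random words from a 7,776-word Diceware-style list:
# memorable, and nearly as strong.
print(f"{entropy_bits(7776, 4):.1f} bits")  # ~51.7 bits

# Add a fifth word and the memorable passphrase wins outright.
print(f"{entropy_bits(7776, 5):.1f} bits")  # ~64.6 bits
```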

The result? Car2Go's design and workflow create a process that suboptimizes the user experience, all in an effort (I'm guessing) to foster security – but that, in reality, likely causes a number of Car2Go users to make terrible decisions and leave their accounts more vulnerable.

So if you are creating an online service, I hope this cautionary tale about design, workflow and password authentication rules is helpful. Get them wrong and you can really screw up your product.

So please, don’t do to your service what Car2Go has done to theirs. As a potential user of your product, that would make me sad.

International Open Data Hackathon 2011: Better Tools, More Data, Bigger Fun

Last year, with only a month of notice, a small group of passionate people announced we'd like to do an international open data hackathon and invited the world to participate.

We were thinking small but fun. Maybe 5 or 6 cities.

We got it wrong.

In the end, people from over 75 cities around the world offered to host an event. Better still, we heard definitively from people in over 40. It was an exciting day.

Last week, after tracking down a few of the city organizers' email addresses, I asked them if we should do it again. Every one of them came back and said: yes.

So it is official. This time we have 2 months' notice. December 3rd will be Open Data Day.

I want to be clear, our goal isn’t to be bigger this year. That might be nice if it happens. But maybe we’ll only have 6-7 cities. I don’t know. What I do want is for people to have fun, to learn, and to engage those who are still wrestling with the opportunities around open data. There is a world of possibilities out there. Can we seize on some of them?

Why.

Great question.

First off, we've got more data. Thanks to more and more enlightened governments in more and more places, there's a greater amount of data to play with. Whether it is Switzerland, Kenya, or Chicago, there's never been more data available to use.

Second, we've got better tools. With a number of governments using Socrata, there are more APIs out there for us to leverage. ScraperWiki has gotten better, and new tools like BuzzData, TheDataHub and Google's Fusion Tables are emerging every day.

And finally, there is growing interest in making "openness" a core part of how we measure governments. Open data has a role to play in driving this debate. Done right, we could make the first Saturday in December "Open Data Day": a chance to explain, demo, and invite to play the policy makers, citizens, businesses and non-profits who don't yet understand the potential. Let's raise the world's data literacy and have some fun. I can't think of a better way than with another global open data hackathon – a maker's-fair-like opportunity for people to celebrate open data by creating visualizations, writing up analyses, building apps or doing whatever they want with data.

Of course, like last time, hopefully we can make the world a little better as well. (more on that coming soon)

How.

The premise for the event is simple, relying on 5 basic principles.

1. Together. It can be as big or as small, as long or as short, as you’d like it, but we’ll be doing it together on Saturday, December 3rd, 2011.

2. It should be open. Around the world I've seen hackathons filled with different types of people, exchanging ideas, trying out new technologies and starting new projects. Let's be open to new ideas and new people. Chris Thorpe in the UK has done amazing work getting a young and diverse group hacking. I love Nat Torkington's words on the subject. Our movement is stronger when it is broader.

3. Anyone can organize a local event. If you are keen to help organize one in your city, or just to participate, add your name to the relevant city on this wiki page. Wherever possible, try to keep it to one event per city – let's build some community and get new people together. Which city or cities you share with is up to you, as is how you do it. But let's share.

4. You can work on anything that involves open data. That could be a local or global app, a visualization, proposing a standard for common data sets, or scraping data from a government website to make it available to others on BuzzData.

It would be great to have a few projects people can work on around the world – building stuff that is core infrastructure for future projects. That's why I'm hoping someone in each country will create a local version of mySociety's MapIt web service for their country. It would give us one common project, and raise the profile of a great organization and a great project.
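
For those who haven't seen it, MapIt answers the question "which administrative areas contain this point?" Here's a minimal sketch of querying the UK instance; the endpoint format and field names reflect mySociety's public documentation as I understand it, so treat the details as illustrative rather than gospel.

```python
import json
import urllib.request

# MapIt point lookup: /point/<SRID>/<lon>,<lat>. SRID 4326 is plain
# WGS84 longitude/latitude. This point is roughly Trafalgar Square.
url = "http://mapit.mysociety.org/point/4326/-0.1281,51.5080"

with urllib.request.urlopen(url) as response:
    areas = json.load(response)

# The response maps area IDs to area details: the parliamentary
# constituency, council, ward, etc. that contain the point.
for area in areas.values():
    print(f"{area['name']} ({area['type_name']})")
```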

We also hope to be working with Random Hacks of Kindness, who’ve always been so supportive, ideally supplying data that they will need to run their applications.

5. Let's share ideas across cities on the day. Each city's hackathon should do at least one demo, brainstorm or proposal that it shares in an interactive way with members of a hackathon in at least one other city. This could be via video stream, Skype, chat… anything – but let's get to know one another and share the cool projects and ideas we are hacking on. There are some significant challenges to making this work – timezones, languages, culture, technology – but who cares, we are problem solvers; let's figure out a way to make it work.

Like last year, let's not try to boil the ocean. Let's have a bunch of events, where people care enough to organize them, and try to link them together with a simple short connection/presentation. Above all, let's raise some awareness, build something and have some fun.

What next?

1. If you are interested, sign up on the wiki. We’ll move to something more substantive once we have the numbers.

2. Reach out and connect with others in your city on the wiki. Start thinking about the logistics. And be inclusive. If someone new shows up, let them help too.

3. Share with me your thoughts. What’s got you excited about it? If you love this idea, let me know, and blog/tweet/status update about it. Conversely, tell me what’s wrong with any or all of the above. What’s got you worried? I want to feel positive about this, but I also want to know how we can make it better.

4. Localization. If there is bandwidth locally, I'd love for people to translate this blog post and repost it locally. (Let me know, as I'll try cross-posting it here, or at least linking to it.) It is important that this not be an English-language-only event.

5. If people want a place to chat with others about this, feel free to post comments below. Also, the Open Knowledge Foundation's Open Data Day mailing list will be the place where people can share news and help one another out.

Once again, I hope this will sound like fun to a few committed people. Let me know what you think.

The Economics of Open Data – Mini-Case, Transit Data & TransLink

TransLink, the company that runs public transit in the region where I live (Vancouver/Lower Mainland), has launched a real-time bus-tracking app that uses GPS data to figure out how far away the next bus you are waiting for really is. This is great news for everyone.

Of course for those interested in government innovation and public policy it also leads to another question. Will this GPS data be open data?

Presently, TransLink does make its transit schedule "open" under a non-commercial license (you can download it here). I can imagine a number of senior TransLink officials (and the board) scratching their heads asking: "Why, when we are short of money, would we make our data freely available?"

The answer is that TransLink should make its current data, as well as its upcoming GPS data, open and available under a license that allows for both non-commercial and commercial re-use, not just because it is the right thing to do, but because the economics of it make WAY MORE SENSE FOR TRANSLINK.

Let me explain.

First, there are not a lot of obvious ways TransLink could generate wealth directly from its data. But let's take two possible opportunities: the first involves selling a transit app to the public (or advertising in such an app); the second involves selling a "next bus" service to companies (say, coffee shops or other organizations) that believe showing this information might be a convenience to their employees or customers.

TransLink has already abandoned doing paid apps – instead it maintains a mobile website at m.translink.ca – but even if it created an app and charged $1 per download, the revenue would be pitiful. Assuming a very generous customer base of 100,000 users, TransLink would generate maybe $85,000 (once Apple takes its cut from the iPhone downloads, assuming zero cut for Androids). But remember, this is not a yearly revenue stream; it is one time. Maybe 10-20,000 people upgrade their phone, or arrive in Vancouver, and decide to download every year. So your year-on-year revenue is maybe $15K? So over a 5-year period, TransLink ends up with an extra, say, $145,000. Nothing to sneeze at, but not notable.

In contrast, a free application encourages use. So there is also a cost to not giving it away. It could be that having transit data more readily available causes some people to choose transit over, say, walking, taking a taxi or driving. Last year TransLink handled 211.3 million trips. Let's assume that wider access to the data meant a 0.1% increase in the number of trips. A tiny increase – but it means 211,300 more trips. Assuming each rider pays a one-zone $2.50 fare, that would still translate into additional revenue of $528,250. Over the same five-year period cited above… that's revenue of $2.641M, much better than $145,000. And this is just calculating money – to say nothing of less congested roads, less smog and a lower carbon footprint for the region…
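
Since the argument is really just arithmetic, here is the whole comparison in a few lines – a sketch that simply encodes the assumptions stated above (these are my estimates, not TransLink figures):

```python
# Scenario 1: sell the app. ~$85K up front (100,000 downloads at $1,
# less Apple's cut), then ~$15K/year from upgrades and newcomers.
paid_app_5yr = 85_000 + 4 * 15_000            # $145,000

# Scenario 2: open the data. 211.3M annual trips, an assumed 0.1%
# ridership lift, $2.50 per one-zone fare.
extra_trips_per_year = 211_300_000 * 0.001    # 211,300 trips
open_data_5yr = extra_trips_per_year * 2.50 * 5

print(f"Paid app, 5 years:  ${paid_app_5yr:>12,.0f}")
print(f"Open data, 5 years: ${open_data_5yr:>12,.0f}")  # $2,641,250
```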

When this analysis is applied to licensing data it produces the same result. Will UBC pay to have TransLink's real-time data on terminals in the Student Union building? I doubt it. Would some strategically placed coffee shops…? Possibly. Obviously organizations would have to pay for the signs, but adding an annual "data license fee" to a display's cost would cause some to opt out. And once you take into account managing the signs, legal fees, dealing with the contract and going through the sales process, it is almost inconceivable that TransLink would make more money from these agreements than it would from simply having more signs everywhere, created by other people, that generated more customers for its actual core business: moving people from A to B for a fee. Just to show you the numbers: if shops that weren't willing to pay for the data put up "next bus" screens that generated a mere 1,000 new regular bus users who did only 40 one-way trips a year (or 40,000 new trips), this would equal revenue of $100,000 every year at no cost to TransLink. Someone else would install and maintain the signs; no contracts or licenses would need to be managed.

From a cost recovery perspective it is almost impossible to imagine a scenario where TransLink is better off not allowing commercial re-use of its data.

My point is that TransLink should not be focused on making a few bucks from licensing its data (which it doesn't do right now anyway). It should be focused on shifting the competitive value in the marketplace from access to accessibility.

Being the monopoly holder of transit data does not benefit TransLink. All it means is that fewer people see and engage with its data. When it makes the data open and available, "access" is no longer the defining advantage. When anybody (e.g. TransLink, Google, independent developers) can access the data, the marketplace shifts from competing on access to competing on accessibility. Consumers don't turn to whoever has the data; they turn to whoever makes the data easiest to use.

For example, TransLink has noted that in 2011 it will have a record number of trips. Part of me wonders to what degree the increase in trips over the past few years is a result of making transit data accessible in Google Maps. (Has anyone done a study on this in any jurisdiction?) The simple fact is that Google Maps is radically easier to use for planning transit journeys than TransLink's own website AND THAT IS A GOOD THING FOR TRANSLINK. Now imagine if lots of organizations were sharing TransLink's data: the local Starbucks and Blenz Coffee, colleges and universities, busy buildings downtown. Indeed, the real crime right now is that TransLink has handed Google a de facto monopoly: Google is allowed to use the data for commercial re-use. Local tax-paying developers…? Not so, according to the license they have to click through.

TransLink, you want a world where everyone is competing (including against you) on accessibility. In the end… you win, with greater use and revenue.

But let me go further. There are other benefits to having TransLink share its data for commercial re-use.

Procurement

Some riders will note that there are already bus stops in Vancouver which display "next bus" data (e.g. how many minutes away the next bus is). If TransLink made its next-bus data freely available via an API, it could conceivably alter the procurement process for buying and maintaining these signs. Any vendor could see how the data is structured and so take over the management of the signs, and/or experiment with more innovative or cheaper ways of manufacturing them.

The same is true of creating the RFP for TransLink's website. With the data publicly available, TransLink could simply ask developers to mock up what they think is the most effective way of displaying the data. More development houses might be enticed to respond to the RFP, increasing the likelihood of innovation and putting downward pressure on fees.

Analysis

Of course, making GPS data free could have an additional benefit. Local news companies might be able to use the buses' GPS data to calculate traffic flow rates and so predict traffic jams. Might they be willing to pay TransLink for the data? Maybe, but again, probably not enough to justify the legal and sales overhead. Moreover, TransLink would benefit from this analysis, as it could use the reports to adjust its schedule and notify its drivers of problems beforehand. And everyone else would benefit too, as better-informed commuters might change their behaviour (including taking transit!), reducing congestion, smog, carbon footprint, etc.

Indeed, the analysis opportunities using GPS data are potentially endless – and much of the work might be done by bloggers and university students. One could imagine correlating actual bus/subway times with any number of other data sets (crime, commute times, weather) to yield insights that could help TransLink with its planning. There is no world where TransLink has the resources to do all this analysis itself, so enabling others to do it can only benefit the organization.

Conclusion

So if you are at TransLink/Coast Mountain Bus Company (or any transit authority in the world), this post is for you. Here’s what I suggest as next steps:

1) Add a GPS bus-tracking API to your open data portal (a sketch of what consuming such a feed might look like follows this list).

2) Change your license. Drop the non-commercial clause. It hurts your business more than you realize and is anti-competitive (why can Google use the data for a commercial application while residents of the Lower Mainland cannot?). My suggestion: adopt the BC Government Open Government License or the PDDL.

3) Add an RSS feed to your GTFS data. Like Google, we'd all like to know when you update your data. Given we live here and are users, it would be nice to extend the same service to us as you do to them.

4) Maybe hold a Transit Data Camp where you could invite local developers and entrepreneurs to meet your staff and encourage people to find ways to get transit data into the hands of more Lower Mainlanders and drive up ridership!
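
To make suggestion 1 concrete, here's roughly what consuming such a feed could look like for a developer. To be clear, everything in this sketch is hypothetical: the domain, path and response shape are invented for illustration, since no such public TransLink API exists as I write this.

```python
import json
import urllib.request

# Hypothetical real-time feed -- the endpoint and payload shape are
# invented for illustration only.
stop_id = "50123"
url = f"https://api.translink.example/v1/stops/{stop_id}/next-buses"

with urllib.request.urlopen(url) as response:
    arrivals = json.load(response)

# Imagined payload: [{"route": "99", "minutes_away": 4}, ...]
for bus in arrivals:
    print(f"Route {bus['route']}: {bus['minutes_away']} min away")
```

With something like this public, every coffee-shop screen, campus display and third-party app becomes a free sales channel for the core business.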

Smarter Ways to Have School Boards Update Parents

Earlier this month the Vancouver School Board (VSB) released an iPhone app that – helpfully – will use push notifications to inform parents about school holidays, parent interviews, and scheduling disruptions such as snow days. The app is okay, if a little clunky to use, and a lot of the data – such as professional days – while helpful in an app, would be even more helpful as an iCal feed parents could subscribe to in their calendars.
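
For what it's worth, an iCal feed is just a text file served at a stable URL that calendar apps poll for updates. Here's a minimal sketch of what one professional-day entry might look like; the date, UID and filename are invented for illustration, and the format is standard iCalendar (RFC 5545).

```python
# A minimal iCalendar feed a school board could publish; parents
# subscribe once and their calendar apps keep themselves current.
ics_feed = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//VSB//School Calendar//EN",
    "BEGIN:VEVENT",
    "UID:pro-d-2011-11-25@vsb.example",   # invented identifier
    "DTSTAMP:20111101T000000Z",
    "DTSTART;VALUE=DATE:20111125",        # all-day event
    "SUMMARY:Professional Day - school closed",
    "END:VEVENT",
    "END:VCALENDAR",
])

with open("vsb.ics", "w", newline="") as f:
    f.write(ics_feed)
```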

That said, the VSB deserves credit for having the vision to develop an app. On the plus side, the VSB app team hopes to add new features, such as letting parents know about after-school activities like concerts, plays and sporting events.

This is a great innovation and, without a doubt, other school boards will want apps of their own. The problem is, this is very likely to lead to an enormous amount of waste and duplication. The last thing citizens want is for every school board to spend $15-50K developing iPhone apps.

Which leads to a broader opportunity for the Minister of Education.

Were I the Education Minister, I'd have my technology team recreate the specs of the VSB app and issue an RFP for it, but under an open-source license and using PhoneGap so it would work on both iPhone and Android. In addition, I'd ensure it could offer reminders – like we do at recollect.net – so that people could get email or text messages without a smartphone at all.

I would then propose the ministry cover 60 percent of the development and yearly upkeep costs. The other 40% would be covered by the school boards interested in joining the project. Thus, assuming the app had a development cost of $40K and a yearly upkeep of $5K, if only one school board signed up it would have to pay $16K for the app (a pretty good deal) and $2K a year in upkeep. But if 5 school districts signed up, each would only pay $3.2K in development costs and $400 a year in upkeep. Better still, the more that sign up, the cheaper it gets for each of them. I'd also propose a governance model in which those who contribute money for development would have the right to elect a sub-group to oversee the feature roadmap.
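
The cost-sharing arithmetic is simple enough to sketch, using the same assumed figures ($40K development, $5K yearly upkeep, a 60/40 ministry/boards split):

```python
def per_board_cost(n_boards, dev_cost=40_000, upkeep=5_000,
                   ministry_share=0.60):
    """Split the boards' share of costs evenly among participants."""
    boards_share = 1.0 - ministry_share
    return (dev_cost * boards_share / n_boards,
            upkeep * boards_share / n_boards)

for n in (1, 5, 20):
    dev, up = per_board_cost(n)
    print(f"{n:2d} board(s): ${dev:>8,.0f} development, "
          f"${up:>6,.0f}/year upkeep")

#  1 board(s): $ 16,000 development, $ 2,000/year upkeep
#  5 board(s): $  3,200 development, $   400/year upkeep
# 20 board(s): $    800 development, $   100/year upkeep
```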

Since the code would be open source, other provinces, school districts and private schools could also use the app (though not participate in the development roadmap), and any improvements they made to the code base would be shared back, to the benefit of BC school districts.

Of course, by signing up to the app project, school boards would be committing to ensure their schools shared up-to-date notifications about the relevant information – probably a best practice they should be following anyways. This process change is where the real work lies. However, a simple webform (included in the price) would cover much of the technical side of that problem. Better still, the Ministry of Education could offer its infrastructure for hosting and managing any data the school boards wish to collect and share, further reducing costs and, equally important, ensuring the data was standardized across the participating school boards.

So why should the Ministry of Education care?

First, creating new ways to update parents about important events – like when report cards are issued, so that parents know to ask for them – helps improve education outcomes. That should probably be reason enough, but there are other reasons as well.

Second, it would allow the ministry, and the school boards, to collect some new data: professional day dates, average number of snow days, frequency of emergency disruptions, number of parents in a district interested in these types of notifications. Over time, this data could reveal important information about educational outcomes.

But the real benefit would be in both cost savings and in enabling less well-resourced school districts to benefit from the technological innovation wealthier districts will likely pursue if left to their own devices. Given there are 59 English school districts in BC, if even half of them spent $30K developing their own iPhone apps, almost $1M would collectively be spent on software development. By spending $24K, the ministry ensures that this $1M instead gets spent on teachers, resources and schools. Equally important, less tech-savvy or well-equipped school districts would be able to participate and benefit.

Of course, if the Vancouver school district was smart, it would open source its app, approach the Ministry of Education and offer it as the basis of such a venture. Doing that wouldn't just make them head of the class – it'd be helping everyone get smarter, faster.

Opendataday & the International Hackathon: What happened. What happens next.

I’m floored.

As many of you know, 5 weeks ago I had a conversation with a group of open data geeks (like me, likely like you) in Ottawa and Sao Paulo, and we agreed to see if we could prompt an international open data hackathon. At the time we thought there would be our three cities and maybe three or four more. At no point did we think that there would be 1000s of people in over 73 cities on 5 continents who would dedicate the time and energy to helping foster both a local and international community of open data hackers, advocates and citizens. Nor did we know that wonderful people like those at Random Hacks of Kindness would embrace us and help make this event such a success.

All of this, of course, was the result of 100s of people in communities all over the world, working on their own, hustling to set things up and to get people engaged. If you participated – as an organizer, a hacker, a gardener of the wiki, or as someone who just wanted to help – congratulations. We are amazed. We hope you are amazed.

If you are out there, I've a few thoughts on what we'd like to do right away:

  1. Congratulate yourself.
  2. People have only just begun to share the cool work they started. I’m hoping that more of you will share it so that everyone can be inspired by your work. I’m also hoping that these projects will continue to evolve.
  3. Let us know who you are (if you are comfortable with that). A number of you have told me you'd like to do this again. Part of what made Saturday amazing was how much happened without any of us having to connect directly. That is the power of the internet. And keeping these events simple and loosely joined will always be a goal for us, but I know I'd like to thank more of you personally and be able to connect more, so as to make communicating easier.
  4. Finally, we are thinking that another event would be fun to do in something like 6 months. But in the meantime we hope that you, like us, will try to keep the flame burning in your city by hosting the occasional local event. I know I will be endeavoring to do so in Vancouver.

Longer term:

5. I hope we can develop tools and resources to enable participants to engage with politicians and public servants on the importance of open data. The projects we hack on are powerful examples of what can be, but we also need to become more effective at explaining why open data matters in a language everyone understands. I’m hoping we’ll have resources to help us with this important task.

In the meantime, we’ll be figuring out what to do next. We’d love your help, to hear your thoughts and frustrations and your ideas. Please reach out.

Dave, Edward, Mary Beth, Daniel, Daniela and Pedro

Open Data planning session at BarCamp Vancouver

With the International Open Data Hackathon a little more than 2 weeks away, a lot has happened.

On the organizing wiki, people in over 50 cities in 21 countries on 4 continents have offered to organize local events. Open data sets that people can use have been posted to a specially created page, a few nascent app ideas have been shared, as has advice on how to run a hackathon. (On Twitter, the event hashtag is #odhd.)

In Vancouver, the local BarCamp will be taking place this weekend. I'm not in town; however, Aaron Gladders, a local hacker with a ton of experience working with and opening up data sets, contacted me to let me know he'd like to do a planning session for the hackathon at BarCamp. If you're in Vancouver, I hope you can attend.

Why? Because this is a great opportunity. And it has lessons for the hackathons around the world.

I love it because it means people can share ideas and projects they would like to hack on, recruit others, hear feedback about challenges, obstacles and alternative approaches, and think about all of this for two weeks before the hackathon. A planning session also has an even bigger benefit: it means more people are likely to arrive on the day with something specific ready to work on. I want the hackathons to be social. But they can't be exclusively so. It is important that we actually try to create some real products that are useful to us and/or our fellow citizens.

For those elsewhere in the world who are also thinking about December 4th, I hope that some of us will start reaching out to one another and thinking about how we will spend the day. A few thoughts on this:

1. Take a look at the data sets that are out there before Dec 4th. People have been putting together a pretty good list here.

2. Localization. I think some of the best wins will be around localizing successful apps from other places. For example, I’ve been encouraging the team in Bangalore to consider localizing Michael Mulley’s OpenParliament.ca application (the source code for which is here). If you have an application you think others might want to localize, add it to the application page on the wiki. If there is an app out there you’d like to localize, write its author/developer team. Ask them if they might be willing to share the code.

3. Get together with 2-3 friends and come up with a plan. What do you want to accomplish on the 4th?

4. If you are looking for a project, let people know on the wiki – leave a Twitter handle or some other way for people with ideas to contact you before the 4th.

Okay, that's it for now. I'm really excited about how much progress we've made in a few short weeks. Ideally, at the end of the 4th I'd love for some cities to be able to showcase to the world some apps they've created. We have an opportunity to show the media, politicians, public servants, our fellow citizens, but most importantly, each other, just what is possible with open data.

Launching Emitter.ca: Open Data, Pollution and Your Community

This week, I’m pleased to announce the beta launch of Emitter.ca – a website for locating, exploring and assessing pollution in your community.

Why Emitter?

A few weeks ago, Nik Garkusha, Microsoft's Open Source Strategy Lead and an open data advocate, asked me: "are there any cool apps you could imagine developing using Canadian federal government open data?"

Having looked over the slim pickings of open federal data sets – most of which I saw while linking to them from datadotgc.ca – I remembered one that had real potential: Environment Canada's National Pollutant Release Inventory (NPRI).

With NPRI, I felt we could build something quite powerful: an application that allowed people to more clearly see who is polluting in their communities, and how much. A list of the 220 chemicals that NPRI tracks isn't, on its own, helpful or useful to most Canadians.

We agreed to do something and set for ourselves three goals:

  1. Create a powerful demonstration of how Canadian Federal open data can be used
  2. Develop an application that makes data accessible and engaging to everyday Canadians and provides communities with a tool to better understand their immediate region or city
  3. Be open

With the help of a crew of volunteers we knew and who joined us along the way – Matthew Dance (Edmonton), Aaron McGowan (London, ON), Barranger Ridler (Toronto) and Mark Arteaga (Oakville) – Emitter began to come together.

Why a Beta?

For a few reasons.

  1. There are still bugs – we'd love to hear about them. Let us know.
  2. We'd like to refine our methodology. It would be great to have a methodology that was more sensitive to chemical types, combinations and other factors… Indeed, I know Matt would love to work with ENGOs or academics who might be able to help us provide better scorecards that help Canadians understand what the pollution near them means.
  3. More features – I'd love to be able to include more datasets… like data on tumour rates, asthma rates or even employment rates.
  4. I'd LOVE to do mobile – to be able to show pollution data in a mobile app, and even using augmented reality.
  5. Trends… once we get 2009 and/or earlier data, we could begin to show trends in pollution rates by facility
  6. plus much, much more…

Build on our work

Finally, we have made everything we've done open: our methodology is transparent, and anyone can access the data we used through an API that we share. You can also learn more about Emitter and how it came to be by reading blog posts by the various developers involved.

Thank yous

Obviously the amazing group of people who made Emitter possible deserve an enormous thank you. I'd also like to thank the Open Lab at Microsoft Canada for contributing the resources that made this possible. We should also thank those who allowed us to build on their work, including Cory Horner's Howdtheyvote.ca API for electoral district boundaries (why Elections Canada doesn't offer this is beyond me and, frankly, is an embarrassment). Finally, it is important to acknowledge and thank the good people at Environment Canada who not only collected this data, but had the foresight and wisdom to make it open. I hope we'll see more of this.

In Sum

Will Emitter change the world? It's hard to imagine. But hopefully it is a powerful example of what can happen when governments make their data open: people will take that data and make it accessible in new and engaging ways.

I hope you’ll give it a spin and I look forward to sharing new features as they come out.

Update!

Since yesterday, Emitter.ca has picked up some media coverage. Here are some of the links so far…

Hanneke Brooymans of the Edmonton Journal wrote this piece, which was in turn picked up by the Ottawa Citizen, Calgary Herald, Canada.com, Leader-Post, The Province, Times Colonist and Windsor Star.

Nestor Arellano of ITBusiness.ca wrote this piece.

Burke Campbell, a freelance writer, wrote this piece on his site.

Kate Dubinski of the London Free Press wrote a piece titled "It's Easy to Dig up Dirt Online" about Emitter.ca.

Canada's emerging opendata mashups (plus some ideas)

Over at IT World Canada, Jennifer Kavur has put together a list of 25 sites and apps for Open Government. What’s fantastic about this list is it demonstrates to government officials and politicians that there is a desire, here in Canada, to take government data and do interesting things with it.

Whether driven by developers like Michael Mulley or Morgan Peers, who just want to improve democracy and have fun, or by those like Jeff Aramini, who want to start a business and make money, the appetite to do something is real, and it is growing. Indeed, the number of apps and sites is far greater than 25, including simple mashups like CSEDEV's Environment Canada pollution data display or the 17 apps recently created as part of the Apps For Climate Action competition.

What is all the more remarkable is that this growth is happening even though there is little government data available. Yes, a number of cities have made data available, but provincially, and especially federally, there is almost no concerted effort to make data easy to use. Indeed, many of the sites cited by Kavur have to "scrape" the data off government websites – a laborious process that can easily break if the government website changes structure. It raises the question: what would happen if the data were accessible?
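
For readers who haven't done it: "scraping" means parsing a human-facing web page because no machine-readable feed exists. A minimal sketch, with a hypothetical URL and page structure, shows why it's so brittle – the code depends entirely on the page's markup, so even a cosmetic redesign breaks it:

```python
import urllib.request
from html.parser import HTMLParser

# Hypothetical government page -- URL and markup invented for
# illustration. The scraper assumes the data lives in <td> cells;
# if the site switches to <div>s or reorders columns, it breaks.
URL = "http://www.example.gc.ca/report/pollution.html"

class TableScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.cells.append(data.strip())

with urllib.request.urlopen(URL) as response:
    scraper = TableScraper()
    scraper.feed(response.read().decode("utf-8"))

print(scraper.cells)
```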

As an aside, two data sets I'm surprised no one has done much with are both located on the Toronto website: Road Restriction data and DineSafe data. Given how poor the city's beta road restriction website is, and the generally high interest in traffic news, I'd have thought one of the local papers or media companies would have paid someone to develop an iPhone app or a website widget using this data. It is one thing commuters and consumers want to know more about.

As for DineSafe, I'm also surprised that no one in Toronto has approached the eatsure developers and asked them if they can port the site to Toronto. I'm still more surprised that no local restaurant review website has developed a widget that shows you the DineSafe rating of a restaurant on its review page, or that a company like Urbanspoon or Yelp hasn't hired an iPhone app developer to integrate this data into their app…

Good times for Open Data in Canada. But if the feds and provinces were on board it could be much, much better…

Apps for Climate Action Update – Lessons and some new sexy data

Okay, so I'll be the first to say that the Apps4Climate Action data catalog has not always been the easiest to navigate, and some of the data sets have not been machine readable – or even data at all.

That, however, is starting to change.

Indeed, the good news is threefold.

First, the data catalog has been tweaked: it now has better search and an improved capacity to sort out non-machine-readable data sets. A great example of a government starting to think like the web, iterating and learning as the program progresses.

Second, and more importantly, new and better data sets are starting to be added to the catalog. Most recently, the Community Energy and Emissions Inventories were released in Excel format. This data shows carbon emissions for all sorts of activities and infrastructure at a very granular level. Want to compare the GHG emissions of a duplex in Vancouver versus a duplex in Prince George? Now you can.

Moreover, this is the first time any government has released this type of data at all, not to mention making it machine readable. So not only have the app possibilities (how green is your neighborhood, rate my city, calculate my GHG emissions) all become much more realizable, but any app using this data will be among the first in the world.

Finally, probably one of the most positive outcomes of the app competition to date is largely hidden from the public: the fact that members of the public have been asking for better data – or even for data sets at all(!) – has made a number of public servants realize the value of making this information public.

Prior to the competition, making data public was a compliance problem – something you did, figuring no one would ever look at or read it. Now, for a growing number of public servants, it is an innovation opportunity. Someone may take what the government produces and do something interesting with it. Even if they don't, someone is nonetheless taking an interest in your work – something that has rewards in and of itself. This, of course, doesn't mean that things will improve overnight, but it does help advance the goal of getting government to share more machine-readable data.

Better still, the government is reaching out to stakeholders in the development community and soliciting advice on how to improve the site and the program, all in a cost-effective manner.

So even within the Apps4Climate Action project we see some of the changes the promise of Government 2.0 holds for us:

  • Feedback from community participants driving the project to adapt
  • Iterations of development conducted “on the fly” during a project or program
  • Successes and failures resulting in quick improvements (release of more data, a better website)
  • Shifting culture around disclosure and cross sector innovation
  • All on a timeline that can be measured in weeks

Once this project is over I’ll write more on it, but wanted to update people, especially given some of the new data sets that have become available.

And if you are a developer, or someone who would like to do a cool visualization with the data, check out the Apps4Climate Action website or drop me an email – I'm happy to talk you through your idea.

Two Questions on Canadian Postal Codes

I find it interesting that postal codes in Canada are not freely available. No, our postal service charges a nasty license fee to get them. This means that people who want to explore creating interesting apps that might use postal codes to locate services… don't. As one would expect, ZIP codes in the US are freely available for anyone to hack with (this great blog post really shows you all the options).

So, two questions.

First, with all the fuss the Competition Bureau has kicked up around the MLS data: are Canadian postal codes being used to extend the monopoly of Canada Post? Shouldn't this be data that, if shared, would improve competition? (Let's not forget that Canadian tax dollars created the Post Office, and that it might be nice if Canadian citizens could freely use the capital created with their tax dollars to generate further innovation, like their US counterparts can.) Sadly, UK citizens are in the same terrible boat as us.

Second, I'd heard rumours that someone was trying to crowdsource the location of postal codes in the UK – essentially asking people to simply type their address and postal code into a website to create a parallel dataset. I was wondering whether that would be legal here, or if Canada Post would launch a legal battle against it. Can you prevent someone from recreating (not copying) a data set like this? My assumption is no…

Either way, it would be nice if Canada Post joined the rest of North America and made this information freely available. It would certainly generate far more new businesses, innovations and efficiencies – and thus further tax dollars for the government and productivity for the Canadian economy… but then, the Post Office would lose a few dollars in revenue. Sigh.