Tag Archives: technology

The False Choice: Bilingualism vs. Open Government (and Accountability)

Last week a disturbing headline crossed my computer screen:

B.C. RCMP zaps old news releases from its website

2,500 releases deleted because they weren’t translated into French

1) The worst of all possible outcomes

This is a terrible outcome for accountability and open government. When we erase history we diminish accountability and erode our capacity to learn. As of today, Canadians have a poorer sense of what the RCMP has stood for, what it has claimed and what it has tried to do in British Columbia.

Consider this: the Vancouver Airport detachment is designated bilingual. As of today, all press releases that were not translated have been pulled down. This means that any press release related to the national scandal that erupted after Robert Dziekański – the Polish immigrant who was tasered five times by the RCMP – is no longer online. Given the RCMP’s shockingly poor performance in managing (and telling the truth about) this issue, that concerns me.

Indeed, I can’t imagine anyone thinks this is a good idea.

The BC RCMP does not appear to think it is a good idea. Consider their press officer’s line: “We didn’t have a choice, we weren’t compliant.”

I don’t think there are any BC residents who believe they are better served by this policy.

Nor do I think my fellow francophone citizens believe they are better served by this decision. Now no one – francophone or anglophone – can find these press releases online. (More on this below.)

I would be appalled if a similar outcome occurred in Quebec or a francophone community in Manitoba. If the RCMP pulled down all French press releases because they didn’t happen to have English translations, I’d be outraged – even if I didn’t speak French.

That’s because the one thing worse than not having the document in both official languages is not having access to the document at all. (And having it hidden in some binder in a barracks that I have to call or visit doesn’t even hint at being accessible in the 21st century.)

Indeed, I’m willing to bet almost anything that Graham Fraser, the Official Languages Commissioner – who is himself a former journalist – would be deeply troubled by this decision.

2) Guided by Yesterday, Not Preparing for Tomorrow

Of course, what should really anger the Official Languages Commissioner is an attempt to pit open and accountable government against bilingualism. This is a false choice.

I suspect that the current narrative in government is that translating these documents is too expensive. If one relies on government translators, this is probably true. The point is, we no longer have to.

My friend and colleague Luke C. pinged me after I tweeted this story, saying: “I’d help them automate translating those news releases into French using myGengo. Would be easy.”

Yes, myGengo would make it cheap at 5 cents a word (or 15 if you really want overkill). But even smarter would be to approach Google. Google Translate – especially between French and English – has become shockingly good. Perfect… no. Of course, this is what the smart and practical people on the ground at the RCMP were doing until the higher-ups got scared by a French CBC story that was critical of the practice – a practice that was ended even though it did not violate any policies.

The problem is there isn’t going to be more money for translation – not in a world of multi-billion-dollar deficits and in a province with 63,000 French speakers. But Google Translate? It is going to keep getting better and better. Indeed, the more it translates, the better it gets. If the RCMP (or the Canadian government) started putting more documents through Google Translate and correcting them, it would become still more accurate. The best part is… it’s free. I’m willing to bet that if you ran all 2,500 of the press releases through Google Translate right now, 99% of them would come out legible and of a standard good enough to share. (Again, not perfect, but serviceable.) Perhaps the CBC won’t be perfectly happy. But I’m not sure the current outcome makes them happy either. And at least we’d be building a future in which they will be happy tomorrow.
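
To make the cost comparison concrete: at 5 cents a word, 2,500 releases averaging (say) 400 words each would run about $50,000 through a paid service, while the Google route is essentially free. And the automation itself is trivial. Here is a minimal sketch of what batch-translating the releases might look like in Python, assuming the releases sit in a directory as text files and using Google’s Cloud Translation REST API (the file layout, key handling and output convention are illustrative assumptions, not the RCMP’s actual setup):

```python
import os
import requests

# Hypothetical layout: each press release saved as a plain-text file.
RELEASE_DIR = "press_releases"
API_KEY = os.environ["GOOGLE_API_KEY"]  # assumes a Cloud Translation API key
ENDPOINT = "https://translation.googleapis.com/language/translate/v2"

def translate(text, source="en", target="fr"):
    """Send one document to the translation API and return the draft translation."""
    resp = requests.post(
        ENDPOINT,
        params={"key": API_KEY},
        data={"q": text, "source": source, "target": target, "format": "text"},
    )
    resp.raise_for_status()
    return resp.json()["data"]["translations"][0]["translatedText"]

for filename in os.listdir(RELEASE_DIR):
    path = os.path.join(RELEASE_DIR, filename)
    with open(path, encoding="utf-8") as f:
        english = f.read()
    # Save the machine draft alongside the original for human review and correction.
    with open(path + ".fr.txt", "w", encoding="utf-8") as f:
        f.write(translate(english))
```

The point of the sketch is the loop at the bottom: once the releases are files in a folder, producing draft translations of all 2,500 is an afternoon of scripting plus a human review pass, not a multi-year translation budget.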

The point here is that this decision reaffirms a false binary, one based on 20th-century assumptions under which translations were expensive and laborious. It holds us back and makes our government less effective and more expensive. Worse, it ignores an option that embraces a world of possibilities – the reality of tomorrow. By continuing to automatically translate these documents today, we’d continue to learn how to use and integrate this technology now, and push it to get better, faster. Such a choice would serve the interests of both open and accountable government and bilingualism.

Sadly, no one at the head office of the RCMP – or in the federal government – appears to have that vision. So today we are a little more language, information and government poor.

Three asides:

1) I find it fascinating that the media can get mailed a press release that isn’t translated, but the public is not allowed to access it on a website until it is. This is a really interesting form of discrimination: one that supports a specific business model, has zero grounding in the law, and may even be illegal given that the media has no special status in Canadian law.

2) Still more fascinating is how the RCMP appears to be completely unresponsive to news stories about inappropriate behavior in its ranks – like, say, the illegal funding of false research to defend the war on drugs – but one story about language politics causes the organization to change practices that aren’t even in violation of its policies. It is sad to see still more evidence that the RCMP is one of the most broken agencies in the federal government.

3) Thank you to Vancouver Sun Reporter Chad Skelton for updating me on the Google aspect of this story.

Visualizing Firefox Plugins Memory Consumption

A few months ago Mozilla Labs and the Metrics Team, together with the growing Mozilla Research initiative, launched an Open Data Visualization Competition.

Using data collected from Test Pilot users (people who agreed to share anonymous usage data with Mozilla and to test new features), Mozilla asked its community to think of creative visual answers to the question: “How do people use Firefox?”

As an open data geek and Mozilla supporter, I found the temptation to do something too great to resist. So I teamed up with my old data partner Diederik Van Liere and we set out to create a visualization. Our goals were simple:

  • have fun
  • focus on something interesting
  • create something that would be useful to Firefox developers and/or users
  • advance the cause for creating a Firefox open data portal

What follows is the result.

It turns out that – in our minds – the most interesting data set revolved around plugin memory consumption. Sure, this sounds boring… but plugins (like Adobe Reader, QuickTime or Flash) are an important part of the browser experience – with them we engage with a larger, richer and more diverse set of content. Plugins, however, also impact memory consumption and, consequently, browser performance. Indeed, some plugins can really slow down Firefox (or any browser). If consumers had a better idea of how much performance would be impacted, they might be more selective about which plugins they download, and developers might be more aggressive in trying to make their plugins more efficient.

Presently, if you run Firefox you can go to the Plugin Check page to see if your plugins are up to date. We thought: wouldn’t it be great if that page ALSO showed you memory consumption rates? Maybe something like this (note the Memory Consumption column – it doesn’t exist on the real webpage – and you can see a larger version of this image here):

Firefox data visualization v2

Please understand (and we are quite proud of this): all of the data in this mockup is real. The memory consumption figures are estimates we derived by analyzing the Test Pilot data.

How, you might ask, did we (Diederik) do that?

GEEK OUT EXPLANATION: Well, we (Diederik) built a dataset of about 25,000 different Test Pilot users and parsed the data to see which plugins were installed and how much memory was consumed around the time of initialization. This data was analyzed using ordinary least squares regression, where the dependent variable is memory consumption and the different plugins are the explanatory variables. We only included results that are highly significant.
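
For those who want the mechanics, here is a stylized sketch of that kind of regression in Python, assuming a table with one row per Test Pilot user, a 0/1 indicator column per plugin, and a memory measurement taken around initialization (the file and column names are invented for illustration; this is not the actual analysis script):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per user, plugin_* columns are 0/1 indicators,
# memory_kb is the memory consumption observed around initialization.
df = pd.read_csv("testpilot_plugins.csv")

plugin_cols = [c for c in df.columns if c.startswith("plugin_")]
X = sm.add_constant(df[plugin_cols])  # intercept = baseline memory with no plugins
y = df["memory_kb"]

model = sm.OLS(y, X).fit()

# Keep only the highly significant coefficients, as in the analysis above.
# Each surviving coefficient is the estimated marginal memory cost of one plugin.
significant = model.params[model.pvalues < 0.01].drop("const", errors="ignore")
print(significant.sort_values(ascending=False))
```

The coefficient on each plugin indicator estimates the extra memory attributable to having that plugin installed, holding the others constant – which is exactly why the method needs users both with and without a given plugin, a limitation we come back to below.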

The following table shows our total results (you can download a bigger version here).

Plugin_memory_consumption_chart v2

Clearly, not all plugins are created equal.

Our point here isn’t that we have created the definitive way of assessing plugin impact on the browser; our point is that creating a solid methodology for doing so is likely within Mozilla’s grasp. More importantly, doing this could help improve the browsing experience. Indeed, it would probably be even wiser to do something like this for Add-ons, which is where I’m guessing the real lag in the browsing experience is created.

Also, with such a small data set we were only able to calculate the memory usage for a limited number of plugins, generally those that are more obscure. Our methodology required several data points from people who are, and who aren’t, using a given plugin, so for many popular plugins we didn’t have enough data from people who weren’t using them… a problem that would likely be easily solved with access to more data.

Finally, I hope this contest and our submission help make the case for why Mozilla needs an open data portal. Mozilla collects an incredible amount of data that it does not have the resources to analyze internally. Making it available to the community would do for data what Mozilla has done for code – enable others to create value that could affect the product and help advance the open web. I had a great meeting earlier this week with a number of Mozilla people about this issue; I hope that we can continue to make progress.

International Open Data Hackathon – IRC Channel and project ideas

Okay, I’m going to be blogging a lot more about the international open data hackathon over the next few days. At last count we were at 63 cities in 25 countries on 5 continents.

So first and foremost, here are three thoughts/ideas/actions I’m taking right now:

1. Communicating via IRC

First, for those who have been wondering… yes, there will be an IRC channel on Dec 4th (and as of now) that I will try to be on most of the day.

irc.oftc.net #odhd

This could be a great place for people with ideas or open-sourced projects to share them with others, or for cities that would like to present some of the work they’ve done on the day to find an audience. If, by chance, work on a specific project becomes quite intense on the IRC channel, it may be polite for those working on it to start a project-specific channel, but we’ll cross that bridge on the day.

Two additional thoughts:

2. Sharing ideas

Second, some interesting project brainstorms have been cropping up on the wiki. Others have been blogging about them – like, say, these ideas from Karen Fung in Vancouver.

Some advice to people who have ideas (which is great):

a) describe who the user(s) would be, what the application will do, why someone would use it, and what value they would derive from it

b) even if you aren’t a coder (like me) lay out what data sets the application or project will need to draw upon

c) use powerpoint or keynote to create a visual of what you think the end product should look like!

d) keep it simple. Simple things get done and can always get more complicated. Complicated things don’t get done (and no matter how simple you think it is… it’s probably more complicated than you think)

These were the basic principles I adhered to when laying out the ideas behind what eventually became VanTrash and Emitter.ca.

Look at the original post where I described what I thought a garbage reminder service could look like. Look how closely the draft visual resembles what became the final product… it was way easier for Kevin and Luke (whom I’d never met at the time) to model VanTrash after an image than just a description.

Garbage App sketch

Mockup

Vantrash screen shot

3. Some possible projects to localize:

A number of projects have been put forward as initiatives that could be localized. I wanted to highlight a few here:

a) WhereDoesMyMoneyGo?

People could create new instances of the site for a number of different countries. If you are interested, please either ping wdmmg-discuss or wdmmg (at) okfn.org.

Things non-developers could do:

  1. locate the relevant spending data on their government’s websites
  2. write up materials explaining the different budget areas
  3. help with designing the localized site.

b) OpenParliament.ca
If you live in a country with a parliamentary system (or not, and you just want to adapt it) here is a great project to localize. The code’s at github.com/rhymeswithcycle.

Things non-developers can do:

  1. locate all the contact information, twitter handles, websites, etc… of all the elected members
  2. help with design and testing

c) How’d They Vote
This is just a wonderful example of a site that creates more data that others can use. The APIs coming out of this site save others a ton of work and essentially “create” open data…

d) Eatsure
This app tracks restaurant health inspection data collected by local health authorities. Very handy. I would love to see someone create a widget or API that companies like Yelp could use to insert this data into their restaurant reviews… that would be a truly powerful use of open data.

The code is here: https://github.com/rtraction/Eat-Sure

Do you have a project you’d like to share with other hackers on Open Data Day? Let me know! I know this list is pretty North America-specific, so I would love to share some ideas from elsewhere.

Let's do an International Open Data Hackathon

Let’s do it.

Last summer, I met Pedro Markun and Daniela Silva at the Mozilla Summit. During the conversation – feeling the drumbeat vibe of the conference – we agreed it would be fun to do an international event. Something that could draw attention to open data.

A few weeks before, I’d met Edward Ocampo-Gooding, Mary Beth Baker and Daniel Beauchamp at GovCamp Ottawa. Fresh from the success of getting the City of Ottawa to see the wisdom of open data and hosting a huge open data hackathon at city hall, they were thinking “let’s do something international.” Yesterday, I tested the idea on the Open Knowledge Foundation’s listserv and a number of great people from around the world wrote back right away and said… “We’re interested.”

This idea has lots of owners, from Brazil to Europe to Canada, and so my gut tells me there is interest. So let’s take the next step. Let’s do it.

Why.

Here’s my take on three great reasons why now is a good time for a global open data hackathon:

1) Build on Success: There are a growing number of places that now have open data. My sense is we need to keep innovating with open data – to show governments and the public why it’s serious, why it’s fun, why it makes life better, and above all, why it’s important. Let’s get some great people together with a common passion and see what we can do.

2) Spread the Word: There are many places without open data. Some places have developed communities of open data activists and hackers; others have nascent communities. In either case these communities should know they are not alone – that there is an international community of developers, hackers and advocates who want to show them material and emotional support. They also need to demonstrate, to their governments and the public, why open data matters. I think a global open data hackathon can’t hurt, and could help a whole lot. Let’s see.

3) Make a Better World: Finally, there is a growing amount of global open data thanks to the World Bank’s open data catalog and its Apps for Development competition. The Bank is asking developers to build apps that, using this data (plus any other data you want), will contribute to reaching the Millennium Development Goals by 2015. No matter who, or where, you are in the world, this is a cause I believe we can all support. In addition, for communities with little available open data, the Bank has a catalog that might provide at least some data of interest.

So with that all said, I think we should propose hosting a global open data hackathon that is simple and decentralized: locally organized, but globally connected.

How.

The basic premise for the event would be simple, relying on 5 basic principles.

1. It will happen on Saturday, Dec 4th. (after a fair bit of canvassing of colleagues around the world this seems to be a date a number can make work). It can be as big or as small, as long or as short, as you’d like it.

2. It should be open. Daniel, Mary Beth and Edward have done an amazing job in Ottawa attracting a diverse crowd of people to hackathons, even having whole families come out. Chris Thorpe in the UK has done similarly amazing work getting a young and diverse group hacking. I love Nat Torkington’s words on the subject. Our movement is stronger when it is broader.

3. Anyone can organize a local event. If you are keen to help organize one in your city and/or just participate, add your name to the relevant city on this wiki page. Wherever possible, try to keep it to one event per city – let’s build some community and get new people together. Which city or cities you share with is up to you, as is how you do it. But let’s share.

4. You can hack on anything that involves open data. It could be a local app or a global Apps for Development submission; you could scrape data from a government website and make it available in a useful format for others (see the sketch after this list), or create your own catalog of government data.

5. Let’s share ideas across cities on the day. Each city’s hackathon should do at least one demo, brainstorm or proposal that it shares in an interactive way with members of a hackathon in at least one other city. This could be via video stream, Skype, chat… anything – let’s get to know one another and share the cool projects and ideas we are hacking on. There are some significant challenges to making this work: timezones, languages, culture, technology… but who cares? We are problem solvers; let’s figure out a way to make it work.
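
On the scraping idea in principle 4, here is a minimal sketch of the usual pattern in Python, assuming a hypothetical government page that publishes figures in an HTML table (the URL and column layout are invented for illustration):

```python
import csv
import requests
from bs4 import BeautifulSoup

# Hypothetical example: a government page publishing data as an HTML table.
URL = "http://example.gov/spending/table.html"

soup = BeautifulSoup(requests.get(URL).text, "html.parser")
table = soup.find("table")  # assumes the page's first table holds the data

rows = []
for tr in table.find_all("tr"):
    cells = [td.get_text(strip=True) for td in tr.find_all(["th", "td"])]
    if cells:
        rows.append(cells)

# Re-publish the scraped table as CSV, a format anyone can reuse.
with open("spending.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)
```

Even a throwaway script like this effectively “creates” open data: the page was technically public before, but only the CSV is something other hackers can easily build on.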

Again, let’s not try to boil the ocean. Let’s have a bunch of events, where people care enough to organize them, and try to link them together with a simple short connection/presentation. Above all, let’s raise some awareness, build something and have some fun.

What’s next?

1. If you are interested, sign up on the wiki. We’ll move to something more substantive once we have the numbers.

2. Reach out and connect with others in your city on the wiki. Start thinking about the logistics. And be inclusive. If someone new shows up, let them help too.

3. Share with me your thoughts. What’s got you excited about it? If you love this idea, let me know, and blog/tweet/status update about it. Conversely, tell me what’s wrong with any or all of the above. What’s got you worried? I want to feel positive about this, but I also want to know how we can make it better.

4. If there is interest, let’s get a simple website up with a basic logo that anyone can use to show they are part of this. Something like the OpenDataOttawa website comes to mind, but likely simpler still – just laying out the ground rules and providing links to where events are taking place. It might even just be a wiki. I’ve registered opendataday.org; I’m not wedded to it, but it felt like a good start. If anyone wants to help set that up, please let me know. I would love the help.

5. Localization. If there is bandwidth locally, I’d love for people to translate this blog post and repost it locally. (Let me know, as I’ll try cross-posting it here, or at least link to it.) It is important that this not be an English-language-only event.

6. If people want a place to chat with others about this, feel free to post comments below. The Open Knowledge Foundation’s Open Government mailing list is also probably a good resource.

Okay, hopefully this sounds fun to a few committed people. Let me know what you think.

The Social Network and the real villains of the old/new economy

The other week I finally got around to watching The Social Network. It’s great fun and I recommend going out and watching it whether you’re a self-professed social media expert or don’t even have a Facebook account.

Here are some of my thoughts about the movie (don’t worry, no spoilers here).

1. Remember this is a Hollywood movie: Before (or after) you watch it, read Lawrence Lessig’s fantastic critique of the movie. This review is so brilliant and devastating that I’m basically willing to say: if you only have 5 minutes to read, leave my blog right now and go check it out. If you are a government employee working on innovation, copyright or the digital economy, I doubly urge you to read it. Treble that if you happen to be (or work for) the CIO of a major corporation or organization who (still) believes that social media is a passing phase and can’t see its disruptive implications.

2. It isn’t just the legal system that is broken: What struck me about the movie wasn’t just the problems with the legal system; it was how badly the venture capitalists come off. Here is a group of people who are supposed to help support and enable entrepreneurs, and yet they’re directing lawyers to draft contracts that screw some of the original founders. If the story is even remotely true, it’s a damning and cautionary tale for anyone starting (or looking to expand) a company. Indeed, in the movie the whole success of Facebook, and the ability of (some of) the owners to retain control over it, rests on the fact that graduates of the first internet bubble who were screwed over by VCs are able to swoop in and protect this second generation of internet entrepreneurs. Of course they – personified by Sean Parker (co-founder of Napster) – are parodied as irresponsible and paranoid.

One thought I walked away with was: if, say as a result of the rise of cloud computing, the costs of setting up an online business continue to drop, at a certain point the benefits of VC capital will significantly erode or their value proposition will need to significantly change. More importantly, if you are looking to build a robust innovation cluster, having it built on the model that all the companies generated in it have the ultimate goal of being acquired by a large (American) multinational doesn’t seem like a route to economic development.

Interesting questions for policy makers, especially those outside Silicon Valley, who obsess about how to get venture capital money into their economies.

3. Beyond lawyers and VCs, the final thing that struck me about the movie was the lack of women doing anything interesting. I tweeted this right away and, of course, a quick Google search reveals I’m not the only one who noticed it. Indeed, Aaron Sorkin (the film’s screenwriter) wrote a response to questions regarding this issue on Emmy winner Ken Levine’s blog. What I noticed in The Social Network is that there isn’t a single innovating or particularly positive female character. Indeed, in both the new- and old-economy worlds shown in the film, women are largely objects to be enjoyed, whether in the elite house parties of Harvard or the makeshift start-up home offices of Palo Alto. Yes, I’m sure the situation is more complicated, but essentially women aren’t thinkers – or drivers – in the movie. It’s a type of sexism that is real, and in case you think it isn’t, just consider a TechCrunch article from the summer titled “Too Few Women In Tech? Stop Blaming The Men,” in which the author, Michael Arrington, makes the gobsmacking statement:

The problem isn’t that Silicon Valley is keeping women down, or not doing enough to encourage female entrepreneurs. The opposite is true. No, the problem is that not enough women want to become entrepreneurs.

Really? This is a country (the United States) where women start businesses at twice the rate of men and where 42% of all businesses are women-owned? To say that women don’t want to be entrepreneurs is so profoundly stupid and incorrect that it perfectly reflects the roles women are shoveled into in The Social Network. And that is something the new economy needs to grapple with.

World Bank Discussion on Open Data – lessons for developers, governments and others

Yesterday the World Bank formally launched its Apps For Development competition and Google announced that in addition to integrating the World Bank’s (large and growing) data catalog into searches, it will now do it in 34 languages.

What is fascinating about this announcement and the recent changes at the Bank is that it appears to be very serious about open data and even more serious about open development. The repercussions of this shift, especially if the Bank starts demanding that its national partners also disclose data, could be significant.

This, of course, means there is lots to talk about. So, as part of the overall launch of the competition, and in an effort to open up the workings of the World Bank, the organization hosted its first Open Forum, in which a panel of guests talked about open development and open data. The Bank was kind enough to invite me, so I ducked out of GTEC a pinch early and flew down to DC to meet some of the amazing people behind the World Bank’s changes and discuss the future of open data and what it means for open development.

Embedded below is the video of the event.

As a little backgrounder here are some links to the bios of the different panelists and people who cycled through the event.

Our host: Molly Wood of CNET.

Andrew McLaughlin, Deputy Chief Technology Officer, The White House (formerly head of Global Public Policy and Government Affairs for Google) (twitter feed)

Stuart Gill, World Bank expert, Disaster Mitigation and Response for LAC

David Eaves, Open Government Writer and Activist

Rakesh Rajani, Founder, Twaweza, an initiative focused on transparency and accountability in East Africa (twitter)

Aleem Walji, Manager, Innovation Practice, World Bank Institute (twitter)

How Governments misunderstand the risks of Open Data

When I’m asked to give a talk about, or consult on, policies around open data, I’ve noticed there are a few questions that get asked most frequently:

“How do I assess the risks to the government of doing open data?”

or

“My bosses say that we can only release data if we know people aren’t going to do anything wrong/embarrassing/illegal/bad with it”

I would argue that these questions are either flawed in their logic or have already been largely addressed.

Firstly, it seems problematic to assess the risks of open data without also assessing the opportunity. Any activity – from walking out my front door to scaling Mount Everest – carries with it risks. What needs to be measured are not the risks in isolation but the risks balanced against the opportunities and benefits.

But more importantly, the logic of the question is flawed in another manner. It suggests that the government should only take action if every possible negative use can be prevented.

Let’s forget about data for a second – imagine you are building a road. Now ask: “What are the risks that someone might misuse this road?” Well… they are significant. People are going to speed and they are going to jaywalk. But it gets worse. Someone may rob a bank and then use the road as part of their escape route. Of course, the road will also provide more efficient transportation for thousands of people; it will reduce costs, improve access, help ambulances save people’s lives and do millions of other things. But people will also misuse it.

However, at no point in any policy discussion in any government has anyone said “we can’t build this road because, hypothetically, someone may speed or use it as an escape route during a robbery.”

And yet, this logic is frequently accepted, or at least goes unchallenged, as appropriate when discussing open data.

The fact is, most governments already have the necessary policy infrastructure for managing the overwhelming majority of risks concerning open data. Your government likely has provisions dealing with privacy – applied to open data, these should address privacy concerns. Your government likely has provisions for dealing with confidential and security-related issues – applied to open data, these should address those concerns. Finally, your government likely has a legal system that outlines what is, and is not, legal – and when it comes to the use of open data, that legal system is in effect.

If someone gets caught speeding, we have enforcement officials and laws that catch and punish them. The same is true with data. If someone uses it to do something illegal we already have a system in place for addressing that. This is how we manage the risk of misuse. It is seen as acceptable for every part of our life and every aspect of our society. Why not with open data too?

The opportunities of both roads and data are significant enough that we build them and share them despite the fact that a small number of people may not use them appropriately. Should we be concerned about those who will misuse them? Absolutely. But do we allow a small amount of misuse to stop us from building roads or sharing data? No. We mitigate the concern.

With open data, I’m happy to report that we already have the infrastructure in place to do just that.

From Public Servant to Public Insurgent

Are you a public insurgent?

Today, a generation of young people is arriving in the public service familiar with all sorts of tools – especially online and social-media-driven tools – that they have become accustomed to using. Tools like wikis, SurveyMonkey, Doodle and instant messaging, or websites like Wikipedia and issue-specific blogs, enable them to be more productive, more efficient and more knowledgeable.

And yet, when they arrive in their office they are told: “You cannot use those tools here.”

In short, they are told: “Don’t be efficient.”

You can, of course, imagine the impact on morale of having a boss tell you that you must do your work in a manner that is slower and less effective than you might otherwise manage. Indeed, today, in the public service and even in many large organizations, we may be experiencing the first generation of a workforce that is able to accomplish coordination and knowledge-building tasks faster at home than at work.

Some, when confronted with this choice, simply resign themselves to the power of their organization’s rules and become less efficient. Others (and I suspect not an insignificant number) begin the process of looking for their next job. But what I find particularly interesting is a tinier segment who – as dedicated employees who love the public service and want to be as effective as possible – believe in their mission so strongly that they neither leave nor adhere to the rules. They become public insurgents and do some of their work outside the government’s infrastructure.

Having spoken about government 2.0 and the future of the public service innumerable times now, I have, on several occasions, run into individuals, or even groups, of these public insurgents. Sometimes they install a wiki behind the firewall; sometimes they grab their laptop and head to a cafe so they can visit websites that are useful but blocked by their ministry; sometimes they simply send around a SurveyMonkey survey in contravention of an IT policy. The offenses range from the minor to the significant. But in each case these individuals are motivated by the fact that this is the most effective – and sometimes the only – way to do the task they’ve been handed. More interesting is that sometimes their acts of rebellion create a debate that causes the organization to embrace the tools they secretly use; sometimes it doesn’t, and they continue to toil in secret.

I find this trend – one that I think may be growing – fascinating.

So my question to you is… are you a public insurgent? If you are I’d love to hear your story. Please post it in the comments (using an anonymous handle) or send me an email.

Links from Gov2.0 Summit talk and bonus material

My 5-minute, lightning-fast, jam-packed talk (do I do other formats? answer… yes) from yesterday’s Gov 2.0 Summit has just been posted to YouTube. I love that this year the videos have the slides integrated into them.

For those who were, and were not, there yesterday, I wanted to share links to all the great sites and organizations I cited during my talk. I also wanted to share one or two quick stories I didn’t have time to dive into:

VanTrash and 311:

As one of the more mature apps in Vancouver using open data, VanTrash keeps showing us how these types of innovations keep giving back in new and interesting ways.

In addition to being used by over 3,000 households (despite never being advertised – this is all word of mouth), it turns out that city staff are also finding a use for VanTrash.

I was recently told that 311 call staff use VanTrash to help troubleshoot incoming calls from residents who are having problems with garbage collection. The first thing one needs to do in such a situation is identify which collection zone the caller lives in – it turns out VanTrash is the fastest and most effective way to accomplish this. Simply input the caller’s address into the top-right field and presto – you know their zone and schedule. Much better than trying to find their address on a physical map that you may or may not have near your station.
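
Under the hood, a lookup like that is essentially a point-in-polygon test against the city’s collection-zone boundaries. Here is a rough sketch of the idea in Python, assuming the zones are published as GeoJSON in the open data catalog and the caller’s address has already been geocoded to a longitude and latitude (the file name and property names are assumptions for illustration):

```python
import json
from shapely.geometry import shape, Point

# Hypothetical input: collection zones as GeoJSON from the open data catalog.
with open("garbage_zones.geojson", encoding="utf-8") as f:
    zones = json.load(f)["features"]

def find_zone(lon, lat):
    """Return the name of the collection zone containing the given point."""
    point = Point(lon, lat)
    for feature in zones:
        if shape(feature["geometry"]).contains(point):
            return feature["properties"]["zone_name"]  # assumed property name
    return None

# e.g. a geocoded caller address in downtown Vancouver
print(find_zone(-123.1207, 49.2827))
```

That a 311 desk gets this for free from a citizen-built web page, rather than from an internal GIS request, is the whole point of the story.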

TaxiCity, Open Data and Game Development

Another interesting spin-off of open data. The TaxiCity development team, which recreated downtown Vancouver in 2-D using data from the open data catalog, noted that creating virtual cities in games could be a lot easier with open data. You could simply randomize the height of buildings and presto – an instant virtual city would be ready. While the buildings would still need to be skinned, one could quickly recreate cities people know, or create fake cities that feel realistic since they’d be based on real plans. More importantly, this process could help reduce the time and resources needed to create virtual cities in games – an innovation that may be of interest to those in the video game industry. Of course, given that Vancouver is a hub for video game development, it is exactly these types of innovation the city wishes to foster, and they will help sustain Vancouver’s competitive advantage.
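
As a toy illustration of what the TaxiCity team is describing, here is a sketch that takes building footprints from an open data file and assigns each one a randomized height, the raw material for an instant virtual skyline (the file name, attribute names and height range are all invented for illustration):

```python
import json
import random

# Hypothetical input: building footprints as GeoJSON from an open data catalog.
with open("building_footprints.geojson", encoding="utf-8") as f:
    footprints = json.load(f)["features"]

# Randomize a plausible height for each footprint; a game engine could then
# extrude the polygons into 3-D blocks and skin them.
buildings = [
    {
        "outline": feature["geometry"]["coordinates"],
        "height_m": random.uniform(10, 120),  # invented range for illustration
    }
    for feature in footprints
]

print(f"generated {len(buildings)} buildings for the virtual city")
```

Swap the random heights for real ones (where a city publishes them) and the same loop recreates an actual skyline instead of an invented one.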

Links (in order of appearance in my talk)

The Code for America shirt designs can be seen in all their glory here and can be ordered here. As a fun aside, I literally took that shirt off Tim O’Reilly’s back! I saw it the day before and said I’d wear it on stage. Tim overheard me and said he’d give me his if I was serious…

Vancouver’s Open Motion (or Open3, as it is internally referred to by staff) can be read in the city’s PDF version or an HTML version from my blog.

Vancouver’s Open Data Portal is here. Keep an eye on this page as new data sets and features are added. You can get RSS or email updates on the page, as well as see its update history.

VanTrash, the garbage reminder service, has its website here. There’s a distinct mobile interface if you are using your phone to browse.

ParkingMobility, an app that crowdsources the location of disabled parking spaces and enables users to take pictures of cars illegally parked in disabled spots to assist in enforcement.

TaxiCity, the Centre for Digital Media project sponsored by Bing and Microsoft, has its project page here. Links to the source code, documentation and a ton of other content are also available. Really proud of these guys.

Microsoft’s internal Vancouver Open Data Challenge fostered a number of apps. Most have been open-sourced, so you can get access to the code as well. The apps include:

The Graffiti Analysis app written by University of British Columbia undergraduate students can be downloaded from the blog post I wrote about their project.

BTA Works – the research arm of Bing Thom Architects – has a great website here. You can’t download their report about the future of Vancouver yet (it is still being peer-reviewed), but you can read about it in this local newspaper article.

Long Tail of Public Policy – I talk about this idea in some detail in my chapter of O’Reilly Media’s Open Government. There is also a brief blog post and slide on my blog here.

Vancouver’s Open Data License is here. Edmonton, Ottawa and Toronto use essentially the exact same thing. Lots could still be done on this front, mind you… Indeed, getting all these cities onto a single standard license should be a priority.

Vancouver Data Discussion Group is here. You need to sign in to join but it is open to anyone.

Okay, hope those are interesting and helpful.

Are you a Public Servant? What are your Open Data Challenges?

A number of governments have begun to initiate open data and open government strategies. With more governments moving in this direction a growing number of public servants are beginning to understand the issues, obstacles, challenges and opportunities surrounding open data and open government.

Indeed, these challenges are why many of these public servants frequent this blog.

This is precisely why I’m excited to share that, along with the Sunlight Foundation, the Personal Democracy Forum, Code for America and GovLoop, I am helping Socrata with a recently launched survey of government employees at the national, regional and local levels, in the US and abroad, about the progress of open data initiatives within their organizations.

If you are a government employee, please consider taking the time to help us understand the state of open data in government. The survey is comprehensive, but given how quickly this field, and the policy questions that come with it, is expanding, I think the collective result of our work could be useful. So, with all that said: I know you’re busy, but I hope you’ll consider taking 10 minutes to fill out the survey. You can find it at: http://www.socrata.com/benchmark-study.