Category Archives: technology

Open Government Consultation, Twitter Townhalls & Doing Advocacy Wrong

Earlier this week the Canadian Federal Government launched its consultation process on Open Government. This is an opportunity for citizens to comment on and make suggestions about what data the federal government should make open and what information it should share, and to provide feedback on how it can consult more effectively with Canadians. The survey (which, handily, can be saved midway through completion) contains a few straightforward multiple-choice questions and about eight open-ended questions, which I’ve appended to the end of this post so that readers can reflect upon them before starting to fill out the form.

In addition to the online consultations, Tony Clement – the Minister responsible for the Open Government file – will host a Twitter townhall on Open Government this Thursday (December 15). Note! The townhall will be hosted by the Treasury Board twitter accounts @TBS_Canada (English) and @SCT_Canada (French), not Minister Clement’s personal (and better known) twitter account. The townhall will first take place in French from 4-4:45pm EST using the hashtag #parlonsgouvert and then in English from 5-5:45pm EST using the hashtag #opengovchat.

Some of you may have also noticed that Democracy Watch issued a strongly worded press release last week with the (somewhat long) headline “Federal Conservatives break all of their international Open Government Partnership commitments by failing to consult with Canadians about their draft action plan before meeting in Brazil this week.” This seems to have prompted the CBC to write this article.

Now, to be clear, I’m a strong advocate for Open Government, and there are plenty of things one could criticize this government for not being open about. However, to be credible – especially around issues of transparency and disclosure – one must be factual. And Democracy Watch did more than just stretch the truth. The simple fact is that while I too wish the government’s consultations had happened sooner, this does not mean it has broken all of its Open Government Partnership commitments. Indeed, it hasn’t broken any of its commitments. A careful read of the Open Government Partnership requirements would reveal that the recent December meeting was to share draft plans (including the plans by which to consult). The deadline that Democracy Watch is screaming about does not occur until March of 2012.

It would have been fair to say the government has been slow in fulfilling its commitments, but to say it has broken any of them is flatly not true. Indeed the charge feels particularly odd given that in the past two weeks the government signed on to greater aid transparency via IATI and released an additional 4000 data sets, including virtually all of StatsCan’s data, giving Canadian citizens, non-profits, other levels of government and companies access to important data sets relevant for social, economic and academic purposes.

Again, there are plenty of things one could talk about when it comes to transparency and the government. Yes, the consultation could have gotten off the ground faster. And yes, there is much more to be done. But this screaming headline is somewhat off base. Publishing it damages both the credibility of the organization making the charge and risks hurting the credibility of open government advocates in general.


List of Open-Ended Questions in the Open Government Consultation

1. What could be done to make it easier for you to find and use government data provided online?

2. What types of open data sets would be of interest to you? Please pick up to three categories below and specify what data would be of interest to you.

3. How would you use or manipulate this data?

4. What could be done to make it easier for you to find government information online?

7. Do you have suggestions on how the Government of Canada could improve how it consults with Canadians?

8. Are there approaches used by other governments that you believe the Government of Canada could/should model?

9. Are there any other comments or suggestions you would like to make pertaining to the Government of Canada’s Open Government initiative?


Open Data Day 2011 – Recaps from Around the World

This last Saturday was International Open Data Day with hackathons taking place in cities around the world.

How many, you ask? We can’t know for certain, but organizers posted events to the wiki for over 50 cities around the world. Given the number of tweets with the #odhd hashtag, and the locations they were coming from, I don’t think we were far off that mark. If you assume 20 people at each event (some had many more – for instance, there were over 100 in Ottawa, close to 50 in Vancouver, and 120+ in New York) it’s safe to say more than 1000 people were hacking on open data projects around the world.

It’s critical to understand that Open Data Day is a highly decentralized event. All the work that makes it a success (and I think it was a big success) is in the hands of local organizers who find space, rally participants, push them to create stuff and, of course, try to make the day as fun as possible. Beyond their hard work and dedication there isn’t much, if any, organization. No boss. No central authority. No patron or sponsor to say thank you. So if you know any of the fine people who attended, or even more importantly, helped organize an event, please shake their hand or shoot them a thank you. I know I’m intensely grateful to see there are so many others out there who care about this issue, who want to connect, learn, meet new people, have fun and, of course, make something interesting. Given the humble beginnings of this event, we’ve had two very successful years.

So what about the day? What was accomplished? What happened?

Government Motivator

I think one of the biggest accomplishments of Open Data Day has been how it has become a motivator – a sort of deadline – for governments keen to share more open data. Think about this: a group of volunteers around the world is moving governments to share more data – to make public assets more open to reuse. For example, in Ireland, Fingal County Council released data around trees, parking, playing pitches and mobile libraries for the day. In Ontario, Canada, the staff of the Region of Waterloo worked extra hard to get their open data portal up in time for the event. And it wasn’t just local governments. The Government of BC launched new high-value data sets in anticipation of the event and the Federal Government of Canada launched 4000 new data sets with International Open Data Day in mind. Meanwhile, the open data evangelist of Data.gov was prepared to open up data sets for anyone who had a specific request.

While governments should always be working to make more data available, I think we can all appreciate the benefits of having a deadline, and Open Data Day has become just that for more and more governments.

In other places, Open Data Day turned into a venue where governments could converse with developers and citizens about why open data matters, and research what data the public is interested in. This is exactly what happened in Enschede in the Netherlands, where local city staff worked with participants to prioritize which data sets to make open.

Local Events & Cool Hacks

A lot of people have been blogging about, or sharing videos of, Open Data Day events around the world. I’ve seen blog posts and news articles on events in places such as Madrid, Victoria BC, Oakland, Mexico City, Vancouver, and New York City. If there are more, please email them to me or post them on the wiki.

I haven’t been able to keep track of all the projects that got worked on, but here is a sampling of some that I’ve seen via twitter, the wiki and other forums:

Hongbo: The Emergency Location Locator

In Cotonou, Benin, Open Data Day participants developed a web application called Hongbo, the Goun word for “gate.” Hongbo enables users to locate the nearest hospital, drugstore and police station. As they noted on the Open Data Day wiki, the data sets for this application were public but not easily accessible. They hope Beninese citizens can use it to quickly identify who to call or where to go in emergencies.
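The core of an app like this is straightforward: given the user’s position, compute the distance to each facility and return the closest. Here is a minimal sketch of that nearest-facility lookup in Python – the facility list is made-up sample data, not Hongbo’s actual code or data:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

# Hypothetical sample of the public-but-hard-to-access facility data.
facilities = [
    {"name": "Hopital de zone", "type": "hospital", "lat": 6.366, "lon": 2.418},
    {"name": "Commissariat central", "type": "police", "lat": 6.359, "lon": 2.428},
]

def nearest(user_lat, user_lon, kind):
    """Return the closest facility of the requested type to the user."""
    candidates = [f for f in facilities if f["type"] == kind]
    return min(candidates, key=lambda f: haversine_km(user_lat, user_lon, f["lat"], f["lon"]))

print(nearest(6.36, 2.42, "hospital")["name"])
```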

Tweet My Council

In Sydney, Australia, participants created Tweetmycouncil. A fantastically simple application that lets a user know which jurisdiction they are standing in: simply send a geotagged tweet with the hashtag #tmyc and the app will work out where you are and which council’s jurisdiction you are in, then send you a tweet with the response.
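Under the hood, a service like this only has to match the tweet’s geotag against council boundary polygons. A rough sketch of that point-in-polygon lookup using the Shapely library – the boundary data and structure here are illustrative, not TweetMyCouncil’s actual implementation:

```python
from shapely.geometry import Point, shape

# Illustrative council boundaries; a real app would load these from
# open boundary data (e.g. GeoJSON published by the government).
councils = [
    {"name": "Sydney", "geometry": {"type": "Polygon",
        "coordinates": [[[151.18, -33.89], [151.23, -33.89],
                         [151.23, -33.85], [151.18, -33.85],
                         [151.18, -33.89]]]}},
]

def council_for(lat, lon):
    """Return the name of the council whose boundary contains the point."""
    pt = Point(lon, lat)  # Shapely expects (x, y), i.e. (lon, lat)
    for c in councils:
        if shape(c["geometry"]).contains(pt):
            return c["name"]
    return None

print(council_for(-33.87, 151.21))  # -> Sydney
```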

Mexican Access to Information Tracker

In Mexico City one team created an application to compare Freedom of Information requests across different government departments. This could be a powerful tool for citizens and journalists. (Github repo)

Making it Easier for the Next Guy

In another project out of Mexico City, a team from Oaxaca created an API that generates a JSON file for any public data set. It would be great for this team to connect with Max Ogden and talk about Gut.
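I haven’t seen the Oaxaca team’s code, but the core idea – turn any tabular public data set into JSON that developers can consume – can be sketched in a few lines (the file and field names here are placeholders):

```python
import csv
import json

def csv_to_json(csv_path, json_path):
    """Convert a tabular public data set (CSV) into a JSON file:
    one object per row, keyed by the CSV's header names."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(rows, f, ensure_ascii=False, indent=2)

csv_to_json("some_public_dataset.csv", "some_public_dataset.json")
```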

Making it Even Easier for the Next Guy

Speaking of which, Max Ogden in Oakland shared more on Gut, which is less a classic app than a process that enables users to convert data between different formats. It had a number of people excited, including open data developers at other locations such as Luke Closs and Mike West.

Mapping Census Data in Barcelona

A team of hackers in Barcelona mapped census tracts so they could be visualized, showing things like, say, the number of parks per census tract. You can find the data sets they used in Google Fusion Tables here.

Foreign Aid Visualizations

In London, UK and in Seattle (and possibly other places) developers were also very keen on the growing amount of aid data being made available in a common structure thanks to IATI. In Seattle, developers created this very cool visualization of US aid over the last 50 years. I know the London team has visualizations of their own they’d like to share shortly.

Food Hacking!

One interesting thing about Open Data Day is how it bridges some very different communities. Among the most active are the food hackers, who came out in force in both New York and Vancouver.

In New York a whole series of food-related tools, apps and visualizations got developed, most of which are described here and here. The sheer quantity of participants (120+) and projects developed is astounding, but also fantastic is how inclusive their event was, with lots of people not just working on apps, but analyzing data and creating visualizations to help others understand an issue they share a common passion for: the Food Bill. Please do click on those links to see some of the fun visuals created.

The Ultimate Food API

In Vancouver, the team at FoodTree – who hosted the hackathon there – focused on shipping an API for developers interested in large food datasets. You can find their preliminary API and datasets on GitHub. You can also track the work they’ve done on their Open Food Wiki.

Homelessness

In Victoria, BC a team created a map of local walk-in community services that you can check out at http://ourservices.ca/.

BC Emergency Tweeting System

Another team in Victoria, BC focused on creating twitter hashtags for each official place in the province, in the hope that the province’s Provincial Emergency Program might adopt them for location-specific alerts.

Mapping Shell’s Oil Spills in Nigeria

The good people at the Open Knowledge Foundation worked on getting a ton more data into the Datahub, but they also had people learning how to visualize data, one of whom created this visualization of oil spills in Nigeria. Always great to see people experimenting and learning!

Mapping Vancouver’s Most Dangerous Intersections for Bikes

Open data hacking and bike accident data have a long history together, and for this hackathon I uploaded five years’ worth of bike accident data I managed to get from ICBC to BuzzData. As a result – even though I couldn’t be present in Vancouver – two different developers took it and mapped it. You can see @ngriffiths’ map here, and @ericp’s will be up soon. It was interesting to learn that Broadway and Cambie is the most dangerous intersection in the city for cyclists.

Looking Forward

Last year Open Data Day attracted individual citizens: those with a passion for an issue (like food) or who want to make their government more effective or citizens’ lives a little easier. This year, however, we already started to see the community grow – the team at Socrata hosted a hackathon at their offices in Seattle, and BuzzData had people online helping participants share their data. In addition to these private companies, some of the more established non-profits were out in force. The Open Knowledge Foundation had a team working on making openspending.org more accessible while MySociety helped a team in Canada set up a local version of MapIt.

For those who think that open data can change the world, or even build medium-sized economic ecosystems, overnight: we need to reset those expectations. But it is growing. No longer are participants just citizens and hacktivists – there are real organizations and companies participating. Few, but they are there. My hope is that this trend continues; that Open Data Day will continue to have meaning for individuals and hackers but will also be something that larger, more established organizations, non-profits and companies will use as a rallying point as well. Something to shoot for next year.

Feedback

As I mentioned at the beginning, Open Data Day is a very decentralized event. We are, of course, not wedded to that approach and I’d love to hear feedback from people, good or bad, about what worked or didn’t work. Please do feel free to email me, post it to the mailing list or simply comment below.


Postscript

Finally, some of you may have noticed I was conspicuously absent on the day. I want to apologize to everyone. My partner went into labour on Friday night, and so by early Saturday morning it was obvious that my Open Data Day was going to be spent with her. Our baby was 11 days overdue, so we really thought we’d be in the clear by Dec 3rd… but our baby had other plans. The good news is that despite 35 hours of labour, mom and baby are doing well!

StatsCan's free data costs $2M – a rant

So the other day a reader sent me an email pointing me to a story in iPolitics titled “StatsCan anticipates $2M loss from move to open data” and asked me what I thought.

Frustrated, was my response.

$2M is not a lot of money. Not in a federal budget of almost $200B. And the number may be even less. The StatsCan person quoted in the article called this expected loss of revenue a “maximum net loss.” This may mean that the loss from making the data free does not take into account the fact that StatsCan’s expenditures may also go down. For instance, if StatsCan no longer has to handle as many financial transactions or chase down invoices, the reduction in staff and other overhead (unrelated to its core mission, by the way) could result in lower operating costs not reflected in the $2M cited above.

Moreover, it is still unclear to me where the $2M figure comes from. As I noted in a blog post earlier this year, StatsCan’s own reports outlined that its online database (the one just made free) generated $559,000 in revenue (not profit) in 2007-08 and was estimated to generate $525,000 in revenue in 2010-11. Where does the extra $1.5M come from? I’m open to the possibility that I’m reading these reports incorrectly… but it is hard to see how.

But all this is really an aside.

What really, really, really frustrates me is the hard number of $2M. It is a pittance.

This is the unbearable cost that’s been holding up open StatsCan data for years? This may be the tiniest golden goose ever killed. Maybe more like a lame duck. Can anyone believe the loss of $2M (or $500K) was going to break the organization?

Give me a break.

What a colossal lack of imagination and sense of economic and social prosperity on the part of every government since Mulroney’s (which made StatsCan engage in cost recovery). In the United States, open statistical data has helped businesses, the social sector, local and state governments, as well as researchers and academics. Heck, even Canadian teachers tell me that they’ve been forced to train students on US data because they couldn’t afford to train them on Canadian data. All this lost innovation, efficiency, jobs and social benefit for a measly $2M (if that). Oh, lack of vision, at all levels! Both at the top of the political order, and within StatsCan, which has been reluctant to go down this route for years.

Now that we see the “cost” this battle seems more pathetic than ever.

Sigh. Rant over.

Using Open Data to Map Vancouver’s Trees

This week, in preparation for the International Open Data Hackathon on Saturday, the Vancouver Parks Board shared one neighborhood of its tree inventory database (which I’ve uploaded to BuzzData) so that we could at least see how it might be leveraged by citizens.

What’s interesting is how valuable this data is already (and why it should be open). As it stands this data could be used by urban landscape students and architects, environmentalists, and of course academics and scientists. I could imagine this data would even be useful for analyzing something as obscure as the impact of the trees’ albedo effect on the city’s climate. Of course, locked away in the city’s data warehouse, none of those uses are possible.

However, as I outlined in this blog post, having lat/long data would open up some really fun possibilities that could promote civic engagement. People could adopt trees, care for them, water them, and report problems about a specific tree to city hall. But to do all this we need to take the city’s data and make it better – specifically, identify the latitude and longitude of each tree. In addition to helping citizens, it might make the inventory more useful to the city (if they chose to use it) as well as help out the other stakeholders I outlined above.

So here’s what I’ve scoped out as ideal to do.

Goal

Create an app that would allow citizens to identify the latitude and longitude of trees that are in the inventory.

Data Background

A few things about the city’s tree inventory data: while the city doesn’t have an actual lat/long for each individual tree, it does register trees by street address. (Again, you can look at the data yourself here.) This means we can narrow down the number of trees based on proximity to the user.

Process

So here is what I think we need to be able to do (a rough code sketch follows the list):

  1. Convert the addresses in the inventory into a format that can be located within Google Maps.
  2. Show only the trees attached to addresses that are either near the user (in a mobile app) or near addresses currently visible within Google Maps (in a desktop app).
  3. Enable the user to add a lat/long to a specific tree’s identification number.
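To make that concrete, here is a rough Python sketch of how the three steps might fit together. The field names mirror what I’d expect from the city’s inventory (tree ID, species, address), and the geocoder is a stand-in for whatever service you’d actually use – treat this as a starting point, not a finished design:

```python
import csv

# Step 1: load the inventory and geocode each address. The geocode()
# argument is a placeholder for a real service (Google's geocoding
# API, a local gazetteer, etc.) that maps an address to a lat/long.
def load_inventory(path, geocode):
    trees = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            lat, lon = geocode(row["address"])  # approximate block location
            trees[row["tree_id"]] = {
                "species": row["species"],
                "address": row["address"],
                "approx_lat": lat, "approx_lon": lon,
                "lat": None, "lon": None,  # exact spot, filled in by users
            }
    return trees

# Step 2: show only the trees near the current map view or user,
# using a simple bounding-box filter on the approximate locations.
def trees_in_view(trees, south, west, north, east):
    return {tid: t for tid, t in trees.items()
            if south <= t["approx_lat"] <= north
            and west <= t["approx_lon"] <= east}

# Step 3: a user standing at the tree assigns its exact lat/long.
def assign_latlong(trees, tree_id, lat, lon):
    trees[tree_id]["lat"] = lat
    trees[tree_id]["lon"] = lon
```

The same assign_latlong step is all the mobile version needs: the phone’s GPS supplies the lat/long, and the tree ID comes from the shortlist the user picks from.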

Awesome local superstar coder/punk rock star Duane Nickull whipped together a web app that allows one to map lat/longs. So based on that, I could imagine a desktop app that allows you to map trees remotely. This obviously would not work for every tree, but it would work for a large number.

[Screenshot: Tree Mapper web app]

You’ll notice in the right-hand corner I’ve created an illustrative list of trees to choose from. Obviously, given the cross-section of the city we are looking at, the real list would be much longer, but if you were zoomed in all the way I could imagine it being no longer than 5-20 trees.

I’ve also taken the city’s data and parsed it in a way that I think makes it easier for users to understand.

[Screenshot: tree inventory fields, parsed into plain language]

This isn’t mind-blowing stuff, but it’s helpful. I mean, who knew that dbh (diameter at breast height) was an actual technical term for measuring tree diameters! I’ve also thrown in some hyperlinks (it would be nice to have images people can reference) so users can learn about the species and, ideally, even see a photo to compare against.

[Screenshot: Tree Mapper, assigning a lat/long to a selected tree]

So, in short, you can choose a tree, locate it in Google Maps and assign a lat/long to it. In Google Maps, where you can zoom in even closer than in ESRI, you can really pick out individual trees.

In addition to a desktop web app, I could imagine something similar for the iPhone: it locates you using GPS, identifies what trees are likely around you, and gives you a list such as the one on the right-hand side of the screenshot above. The user then picks the tree from the list that they think they’ve identified, stands next to it, and presses a button in the app that assigns the lat/long of where they are standing to that tree.

Canada’s Foreign Aid Agency signs on to IATI: Aid Data get more transparent

Last night, while speaking at the High Level Forum on Aid Effectiveness in Busan, Korea, Minister of International Cooperation Bev Oda announced that Canada would be signing on to the International Aid Transparency Initiative (IATI).

So what is IATI and why does this matter?

IATI has developed a common, open and international standard for sharing foreign aid data. By signing on to IATI, Canada is agreeing to publish all the data about its projects and who it funds in a form and structure that makes it easy to compare with others who use the IATI standard. This should make it easier to understand where Canadian aid money ends up, in turn allowing analysts to spot efficiencies and compare funding and efforts across donor and recipient countries as well as other stakeholders. In short, aid data should become easier to understand, to compare, and to use.
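To make the “easier to compare” point concrete: because every IATI publisher uses the same XML structure, a single script can aggregate, say, spending by recipient country across any number of donors. A simplified sketch – the element names follow my reading of the IATI activity standard, but treat the details as illustrative:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def spend_by_recipient(iati_files):
    """Sum transaction values per recipient country across any number
    of IATI activity files, regardless of which donor published them."""
    totals = defaultdict(float)
    for path in iati_files:
        root = ET.parse(path).getroot()
        for activity in root.iter("iati-activity"):
            country = activity.find("recipient-country")
            if country is None:
                continue
            for tx in activity.iter("transaction"):
                value = tx.find("value")
                if value is not None and value.text:
                    totals[country.get("code")] += float(value.text)
    return totals

# e.g. compare Canadian and British aid once both publish to IATI:
# print(spend_by_recipient(["cida-activities.xml", "dfid-activities.xml"]))
```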

In the medium term it should also make the data available on CIDA’s open data portal (already helpful to non-profits, development groups and students) even more useful.

This is an enormous win for the good people at Engineers Without Borders, as well as the team at Publish What You Fund. Both groups have been working hard for over a year talking Canadian politicians and public servants through the ins and outs – as well as the benefits – of signing on to IATI. I’ve been working with both groups as well, pushing IATI when meeting with federal ministers (I recommended we make it part of our Open Government Partnership goals) and writing supportive op-eds in newspapers, so needless to say I’m excited about this development.

This really is good news. As governments become increasingly aware of the power data can have in facilitating cooperation and coordination as well as in improving effectiveness and efficiency, it will be critical to push standards around structuring and sharing data so that such coordination can happen easily across and between jurisdictions. IATI is a great example of such an effort and I hope there are more of these, with Canada taking an early lead, in the months and years ahead.


International Open Data Hackathon, Dec 3rd. It's coming together.

So a number of things have started to really come together for this Saturday, Dec 3rd. I’ve noticed a number of new cities being tweeted about (hello Kuala Lumpur & Oakland!) and others adding themselves to the wiki. Indeed, we seem to be above 40 cities. It is hard to know how many people will show up in each given city, but in Vancouver I know that we already have over 20 registered, while in Ottawa they are well above 40. If other cities have similar numbers it’s a great testament to the size of the community out there interested in playing with open government data.

A few thoughts to share with people as we get ready for the big day.

1. Leverage existing projects.

I’ve mentioned a few times that there are some great existing projects out there that can be easily leveraged.

In that vein I’ve noticed the good people at the Open Knowledge Foundation, who are behind OpenSpending (the project that powers WhereDoesMyMoneyGo.org), have not only made their software easier to use but have put some helpful instructions for creating your own instance up on the wiki. One hope I have for Saturday is that a number of different places might be able to visualize local budgets in much easier-to-understand ways. OpenSpending has the potential to be an enormously helpful tool for communities trying to understand their budgets – hopefully we can provide some great examples and feedback for its creators.

In addition, the folks at MySociety have provided some helpful advice on the wiki for those interested in spinning up a version of MapIt for their country.

2. Get Data Now, Not on Saturday!

Here in Vancouver, my friend Luke C asked if we could get bicycle accident data for the city or province, as he wanted to play around with it and maybe visualize it on December 3rd. It just so happened I had a contact at the Insurance Corporation of British Columbia (ICBC), which insures every vehicle in the province. I reached out and, after going through their request process, now have the data set to share with Luke.

The key piece here: now is the time to check and see if data you are interested in is available, to investigate what is out there, and to request it from various stakeholders if it is not.

3. Share Your Code, Share Your Data

Indeed, one advantage of having the BC bicycle accident data early is that I can start sharing it with people immediately. I’ve already uploaded the data set (all 6400 lines) onto BuzzData’s site here so others can download it, clone it, and share their own work on it. That way, even if Luke and I get separated, he’s still got something to hack on!

So please do let people know where they can find the data you are hacking on, as well as the projects you’re hacking on. The Open Data Day Projects 2011 wiki page currently sits empty (as should be expected). But take a swing by the 2010 projects page and notice how it is quite full… I’d love to see us replicate this success. I’m hoping people link not just to their projects, but also to Github repos, ScraperWiki creations, BuzzData accounts and other places.

If you have a project and you think people in open data day hackathons in other cities might be interested, put it in the project page and tweet about it using the #odhd hashtag. You may discover there are people out there who feel as passionately about your project as you do!

4. Let’s Get Connected

Speaking of sharing, my friend Edward O-G, who is organizing the hackathon in Ottawa, did a great job last year setting up some infrastructure so people from different hackathons could video conference with one another. This year I think we’ll try using Google Hangouts on Google+. However, there is a non-trivial risk that this will not scale super well.

So…

Edward also suggested (brilliantly) that people create YouTube videos of whatever they create during the hackathon or in the days and weeks that follow. Please post those links to the Open Data Day Projects 2011 wiki page as well. There were a few projects last year that had YouTube videos and they were very helpful, particularly when a project isn’t quite ready for prime time. A video gives us a taste of what will be available, and it becomes something we can point people to.

5. Have Fun, Do What Is Interesting

Remember, Open Data Day is about meeting people, learning about open data, and working on something that you feel passionate about. This is all very decentralized and informal – no one is going to come and save your hackathon… it is up to you! So make sure you find something you think is worth caring about and work on it. Share your idea, and your passion, with others – that’s what makes this fun.

Can’t wait to hear what people are up to. Please feel free to email or tweet at me about what you’re working on. I’d love to hear about it and blog about it.


Here in Vancouver, the open data hackathon will be happening at the offices of FoodTree, which has some of its own developers working on open food data. (If you haven’t signed up yet, definitely do so here.)


The Canadian Government's New Web 2.0 Guidelines: the Good, the Bad & the Ugly

Yesterday, the government of Canada released its new Guidelines for External Use of Web 2.0. For the 99.99% of you unfamiliar with what this is, it’s the guidelines (rules) that govern how, and when, public servants may use Web 2.0 tools such as Twitter and Facebook.

You, of course, likely work in an organization that survives without such documents. Congratulations. You work in a place where the general rule is “don’t be an idiot” and your bosses trust your sense of judgement. That said, you probably also don’t work somewhere where disgruntled former employees and the CBC are trolling the essentially personal online statements of your summer interns so they can turn them into a scandal. (Yes, summer student border guards have political opinions, don’t like guns and enjoy partying. Shocker.) All this to say, there are good and rational reasons why the public service creates guidelines: to protect not just the government, but public servants.

So for those uninterested in reading the 31-page, 12,055-word guidelines document, here’s a review:

The Good

Sending the right message

First off, the document, for all its faults, does get one overarching piece right. Almost right off the bat (at the top of section 3.2) it shares that ministries should be using Web 2.0 tools:

Government of Canada departments are encouraged to use Web 2.0 tools and services as an efficient and effective additional channel to interact with the public. A large number of Canadians are now regularly using Web 2.0 tools and services to find information about, and interact with, individuals and organizations.

Given the paucity of Web 2.0 use in the federal government, internally or externally, this clear message from Treasury Board, and from a government minister, is the type of encouragement needed to bring government communications into 2008 (the British Government, with its amazing Power of Information Taskforce, has been there for years).

Note: there is a very, very, ugly counterpart to this point. See below.

Good stuff for the little guy

Second, the rules for professional networking and personal use are fairly reasonable. There are some challenges (noted below), but if any public servant ever finds them or has the energy to read the document, they are completely workable.

The medium is the message

Finally, the document acknowledges that the Web 2.0 world is constantly evolving and references a Web 2.0 tool by which public servants can find ways to adapt. THIS IS EXACTLY THE RIGHT APPROACH. You don’t deal with a fast-evolving social media environment by handing out decrees on stone tablets; you manage it by offering people communities of practice where they can get the latest and best information. Hence this line:

Additional guidance on the use of Web 2.0 tools and services is in various stages of development by communities of expertise and Web 2.0 practitioners within the Government of Canada. Many of these resources are available to public servants on the Government of Canada’s internal wiki, GCpedia. While these resources are not official Government of Canada policies or guidelines, they are valuable sources of information in this rapidly evolving environment.

This represents a truly exciting development in the glacially paced evolution of government procedures: the use of social media (GCPEDIA) to manage social media.

Indeed, still more exciting for me is that this was the first time I’ve seen an official government document reference GCPEDIA as a canonical source of information. And it did so twice: once, above, pointing to a community of practice, and a second time pointing to the GCPEDIA “Social media procurement process” page. Getting government to use social media internally is, I think, the biggest challenge at the moment, and this document does just that.

The Bad

Too big to succeed

The biggest problem with the document is its structure. It is so long, and so filled with various forms of compliance, that only the most dedicated public servant (read: a communications officer tasked with a social media project) will ever read it. Indeed, for a document that is supposed to encourage public servants to use social media, I suspect it will do just the opposite. Its density and list of controls will cause many who were on the fence to stay there – if not retreat further. While the directions for departments are clearer, for the little guy… (see next piece).

Sledgehammers for nails

The document’s other main problem is that it tries to address all uses of social media. Helpfully, it acknowledges there are broadly two types of use: “departmental Web 2.0 initiatives” (e.g. a Facebook group for an employment insurance program) and “personal/professional use” (e.g. an individual public servant’s use of Twitter or LinkedIn to do their job). Unhelpfully, it addresses both of them.

In my mind 95% of the document relates to departmental uses… this is about ensuring that someone claiming to represent the government in an official capacity does not screw up. The problem is, all those policies aren’t as relevant to Joe/Jane public servant in their cubicle trying to find an old colleague on LinkedIn (assuming they can access LinkedIn). It’s overkill. These should be separate documents; that way the personal use document could be smaller, more accessible and far less intimidating. Indeed, as the guidelines suggest, all it should really have to do is reference the Values and Ethics Code for the Public Service (essentially the “idiot’s guide to how not to be an idiot on the job” for public servants) and that would have been sufficient. Happily, most public servants are already familiar with this document, so simply understanding that those guidelines apply online as much as offline gets us 90% of the way there.

In summary, despite a worthy effort, it seems unlikely this document will encourage public servants to use Web 2.0 tools in their jobs. For a (Canadian) comparison, consider the BC Government’s guidelines document, the dryly named “Policy No. 33: Use of Social Media in the B.C. Public Service.” Despite engaging both use cases, it manages to cover all the bases, is straightforward and encouraging, and treats the employee with an enormous amount of respect. All this in a nifty 2 pages and 1,394 words. Pretty much exactly what a public servant is looking for.

The Ugly

Sadly, there is some ugliness.

Suggestions, not change

In the good section I mentioned that the government is encouraging ministries to use social media… this is true. But it is not mandating it. Nor do these guidelines say anything to ministerial IT staff, most of whom are blocking public servants’ access to sites like Facebook, Twitter and, in many cases, my blog. The sad fact is, there may now be guidelines that allow public servants to use these tools, but in most cases they’d have to go home, or to a local coffee shop (many do), in order to actually make use of them. For most public servants, much of the internet remains beyond their reach, causing them to fall further and further behind in understanding how technology will affect their jobs and their department or program’s function in society.

It’s not about communication, it’s about control

In his speech at PSEngage yesterday, the Treasury Board Minister talked about how technology can help the public service reinvent the way it collaborates:

The Government encourages the use of new Web 2.0 tools and technologies such as blogs, wikis, Facebook, Twitter and YouTube. These tools help create a more modern, open and collaborative workplace and lead to more “just-in-time” communications with the public.

This is great news. And I believe the Minister believes it too. He’s definitely a fan of technology in all the right ways. However, the guidelines are mostly about control. Consider this paragraph:

Departments should designate a senior official accountable and responsible for the coordination of all Web 2.0 activities as well as an appropriate governance structure. It is recommended that the Head of Communications be the designated official. This designate should collaborate with departmental personnel who have expertise in using and executing Web 2.0 initiatives, as well as with representatives from the following fields in their governance structure: information management, information technology, communications, official languages, the Federal Identity Program, legal services, access to information and privacy, security, values and ethics, programs and services, human resources, the user community, as well as the Senior Departmental Official as established by the Standard on Web Accessibility. A multidisciplinary team is particularly important so that policy interpretations are appropriately made and followed when managing information resources through Web 2.0 tools and services.

You get all that? That’s at least 11 variables that need to be managed. Or, put another way, 11 different manuals you need to have at your desk when using social media for departmental purposes. That makes for a pretty constricted hole for information to get out through, and I suspect it pretty much kills most of the spontaneity, rapid response time and personal voice that make social media effective. Moreover, with one person accountable, and this area of communications still relatively new, I suspect that the person in charge, given all these requirements, is going to have a fairly low tolerance for risk. Even I might conclude it is safer to just post an ad in the newspaper and let the phone operators at Service Canada deal with the public.

Conclusion

So it ain’t all bad. Indeed, there is much that is commendable and could be worked with. I think, in the end, 80% of the problems with the document could be resolved if the government simply created two versions, one for official departmental uses, the other for individual public servants. If it could then restrain the lawyers from repeating everything in the Values and Ethics code all over again, you’d have something that social media activists in the public service could seize upon.

My sense is that the Minister is genuinely interested in enabling public servants to use technology to do their jobs better – he knows from personal experience how helpful social media can be. This is great news for those who care about these issues, and it means that pressing for a better revised version might yield a positive outcome. Better to try now, with a true ally in the President’s office, than later with someone who probably won’t care.


The New Government of Canada Open Data License: The OGL by another name

Last week Minister Clement issued a press release announcing some of the progress the government has made on its Open Government initiatives. Three things caught my eye.

First, it appears the government continues to revise its open data license, with things trending in the right direction.

As some of you will remember, when the government first launched data.gc.ca it had a license that was so onerous it was laughable. While several provisions were problematic, my favourite was the sweeping “only-make-us-look-good” clause, which said, word for word: “You shall not use the data made available through the GC Open Data Portal in any way which, in the opinion of Canada, may bring disrepute to or prejudice the reputation of Canada.”

After I pointed out the problems with this clause to then Minister Day, he managed to have it revoked within hours – very much to his credit. But it is a good reminder of the starting point of the government’s license and of the mindset of Government of Canada lawyers.

With the new license, almost all the clauses that would obstruct commercial and non-profit reuse have effectively been eliminated. It is no longer problematic to identify individual companies and the attribution clauses have been rendered slightly easier. Indeed, I would argue that the new license has virtually the same constraints as the UK Open Government License (OGL) and even the Creative Commons CC-BY license.

All this raises the question… why not simply use the language and structure of the OGL, in much the same manner that the British Columbia Government tried to with its own BC OGL? Such a standardized license across jurisdictions might be helpful; it would certainly simplify life for think tanks, academics, developers and other users of the data. This is something I’m pushing for and hope that we might see progress on.

Second, the idea that the government is going to post completed access to information (ATIP) requests online is also a move in the right direction. I suspect that the most common ATIP request is one that someone else has already made. Being able to search through previous requests would enable you to find what you are looking for without having to wait weeks or make public servants redo the entire search and clearing process. What I don’t understand is why only post the summaries? In a digital world it would be better for citizens, and cheaper for the government, to simply post the entire request whenever privacy policies wouldn’t prevent it.

Third, and perhaps most important were the lines noting that “That number (of data sets) will continue to grow as the project expands and more federal departments and agencies come onboard. During this pilot project, the Government will also continue to monitor and consider national and international best practices, as well as user feedback, in the licensing of federal open data.”

This means that we should expect more data to hit the site. It seems as though more departments are being asked to figure out what data they can share – hopefully this means that real, interesting data sets will be made public. In particular, one hopes that data sets which legislation mandates the government collect will be high on the list of priorities. Also interesting in this statement is the suggestion that the government will consider national and international best practices. I’ve talked to both the Minister and officials about the need to create common standards and structures for open data across jurisdictions. Fostering and pushing these is an area where the government could take a leadership role, and it looks like there may be interest in this.


Open Data Day – a project I'd like to be doing

As some readers and International Open Data Hackathon participants know, I’m really keen on developers reusing each other’s code. All too often at hackathons we like to build something from scratch (which can be fun), but I’ve always liked the idea of hackathons either spurring genuine projects that others can reuse, or serving as an excuse for participants to find a project they’d like to support and contribute to.

That’s why I’ve been really encouraging people to find open source projects out there that they’d find interesting and that will support others’ efforts. This is a big reason I’ve been thinking about MapIt and the Open Knowledge Foundation’s Where Does My Money Go project.

In Vancouver, one project I’m eventually hoping we can contribute to is Adopt-a-Hydrant, a project out of Code for America. The great thing about Adopt-a-Hydrant is that it can be adapted to become an adopt-an-anything app. That’s the end goal of a project I’m hoping to plan out and start on during the hackathon.

Here in Vancouver, I’ve been talking with the Parks Board about getting a database of all the city’s trees opened up. Interestingly, this dataset does not include location data (lat/long) for each tree. So what would initially be great is to build a mobile phone app that shows users a list of trees near their current address, and then allows them to use their phone’s GPS to add the lat/long data to the database. That way we can help augment the city’s database. Once you begin to add lat/long data, you could map trees in Adopt-a-Hydrant and create an Adopt-a-Tree app. Citizens could then sign up to adopt a tree, offer to take care of it, and maybe notify the Parks Board if something is wrong.

I consider this a fairly ambitious project, but it could end up engaging a number of stakeholders – students, arborists, retirees, and others – who don’t normally engage in open data.

I know that the crew organizing a hackathon in Hamilton, Ontario are also looking to create an instance of Adopt-a-Hydrant, which is awesome. We should both track what worked and what didn’t so that the kinks in Adopt-a-Hydrant can be worked out. More users and developers like us will help refine it further.

If you are planning a hackathon for the Dec 3rd International Open Data Hackathon, please be sure to update the wiki and join the mailing list, and if you have a project you are planning on working on, please email the list, or me directly – I’d love to blog about it!


Weaving Foreign Ministries into the Digital Era: Three ideas

Last week I was in Ottawa giving a talk at the Department of Foreign Affairs about how technology, new media and open innovation will impact the department’s work internally, across Ottawa and around the world.

While there is lots to share, here are three ideas I’ve been stewing on:

Keep more citizens safe when abroad – better danger zone notification

Some people believe that open data isn’t relevant to departments like Foreign Affairs or the State Department. Nothing could be further from the truth.

One challenge the department has is getting Canadians to register with it when they visit or live in a country its travel reports label as problematic for travel (sample here). As you might suspect, few Canadians register with the embassy, likely because they are not aware of the program, or because they travel a lot and simply don’t get around to it.

There are other ways of tackling this problem that might yield broader participation.

Why not turn the Travel Report system into open data with an API? I’d tackle this by approaching a company like TripIt. Every time I book an airplane ticket or a hotel, I simply forward TripIt the reservation, which they scan and turn into events that then automatically appear in my calendar. Since they scan my travel plans they also know which country, city and hotel I’m staying in… they also know where I live and could easily ask me for my citizenship. Working with companies like TripIt (or Travelocity, Expedia, etc.), DFAIT could co-design an API into the department’s travel report data that would be useful to them. Specifically, I could imagine that if TripIt could query all my trips against those reports, then any time they noticed I was traveling somewhere the Foreign Ministry has labelled “exercise a high degree of caution” or worse, TripIt could ask me if I’d be willing to let them forward my itinerary to the department. That way I could register my travel automatically, making the service more convenient for me, and getting the department more of the information it believes to be critical as well.
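The matching logic itself is trivial, which is part of the appeal. Assuming the department published its travel reports as a simple feed mapping country codes to advisory levels – the endpoint, field names and level strings below are all invented for illustration – a service like TripIt could flag risky itineraries like this:

```python
import json
from urllib.request import urlopen

# Hypothetical open data endpoint for the department's travel reports,
# returning e.g. {"LY": "avoid-all-travel", "MX": "high-degree-of-caution"}.
ADVISORY_URL = "https://example.gc.ca/travel-reports.json"
FLAGGED_LEVELS = {"high-degree-of-caution",
                  "avoid-non-essential-travel",
                  "avoid-all-travel"}

def risky_destinations(itinerary_country_codes):
    """Return the destinations on a traveller's itinerary that carry
    an advisory serious enough to prompt them about registering."""
    advisories = json.load(urlopen(ADVISORY_URL))
    return {code: advisories[code]
            for code in itinerary_country_codes
            if advisories.get(code) in FLAGGED_LEVELS}

# If anything comes back, the travel service can offer to forward the
# itinerary to the embassy registration program on the user's behalf.
print(risky_destinations(["FR", "MX"]))
```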

Of course, it might be wise to work with the State Department so that their travel advisories used a similarly structured API (since I assume TripIt will be more interested in the larger US market than the Canadian market). But facilitating that conversation would be nothing but wins for the department.

More bang for buck in election monitoring

One question that arose during my talk came from an official interested in election monitoring. In my mind, one thing the department should be considering is a fund to help local democracy groups spin up installations of Ushahidi in countries with fragile democracies that are gearing up for elections. For those unfamiliar with Ushahidi, it is a platform developed after the disputed 2007 presidential election in Kenya that plotted eyewitness reports of violence, sent in by email and text message, on a Google map.

Today it is used to track a number of issues – but problems with elections remain one of its core purposes. The department should think about grants that would help spin up a Ushahidi install to enable citizens of a country to register concerns and allegations around fraud, violence, intimidation, etc. It could then verify and inspect issues flagged by the country’s citizens. This would allow the department to deploy its resources more effectively and ensure that its work was speaking to concerns raised by citizens.

A Developer version of DART?

One of the most popular programs the Canadian government has around international issues is the Disaster Assistance Response Team (DART). In particular, Canadians have often been big fans of DART’s work purifying water after the Boxing Day tsunami in Asia, as well as its work in Haiti. Maybe the department could have a digital DART team: a group of developers that, in an emergency, could help spin up Ushahidi, FixMyStreet, or OpenMRS installations to provide some quick but critical shared infrastructure for Canadians, other countries’ response teams and non-profits. During periods of non-crisis the team could work on these projects or support groups like CrisisCommons or OpenStreetMap, helping contribute to open source projects that can be instrumental in a humanitarian crisis.