Tag Archives: gov20

Why Social Media behind the Government Firewall Matters

This comment, posted four months ago to my blog by Jesse G. in response to this post on GCPEDIA, remains one of my favourite comments ever posted here. This is a public servant who understands the future and is trying to live it. I’ve literally had this comment sitting in my inbox this whole time because I didn’t want to forget about it.

For those opposed to the use of wikis and social media behind the government firewall, this is a must read (of course I’d say it is a must read for those in favour as well). It’s just a small example of how tiny transaction costs are killing government, and how social media can flatten them.

I wish more elements of the Canadian government got it, but despite the success of GCPEDIA and its endorsement by the Clerk there are still a ton of forces pitted against it, from the procurement officers in Public Works who’d rather pay for a bulky, expensive alternative that no one will use, to middle managers who forbid their staff from using it out of some misdirected fear.

Is GCPEDIA the solution to everything? No. But it is a cheap solution to a lot of problems – indeed, I’ll bet it’s solved more problems per dollar than any other IT solution put forward by the government.

So for the (efficient) future, read on:

Here’s a really untimely comment – GCPEDIA now has over 22,000 registered users and around 11,000 pages of content. Something like 6.5 million pageviews and around .5 million edits. It has ~2,000 visitors a week and around 15,000 pageviews a week. On average, people are using the wiki for around 5.5 minutes per visit. I’m an admin for GCPEDIA and its sister tools – GCCONNEX (a professional networking platform built using Elgg) and GCForums (a forum built using YAF). Collectively the tools are known as GC2.0.

Anyways, I’m only piping up because I love GCPEDIA so much. For me and for thousands of public servants, it is something we use every day, and I cannot emphasize strongly enough how friggin’ awesome it is to have so much knowledge in one place. It’s a great platform for connecting people and knowledge. And it’s changing the way the public service works.

A couple of examples are probably in order. I know one group of around 40 public servants from 20 departments who are collaborating on GCPEDIA to develop a new set of standards for IT. Every step of the project has taken place on GCPEDIA (though I don’t want to imply that the wiki is everything – face-to-face can’t be replaced by a wiki), from the initial project planning through producing deliverables. I’ve watched their pages transform since the day they were first created, and I can attest that they are doing some really innovative work on the wiki to support their project.

Another example, which is really a thought experiment: imagine you’re a co-op student hired on a 4-month term. Your director has been hearing some buzz about this new thing called Twitter and wants an official account right away. She asks you to find out what other official Twitter accounts are being used across all the other departments and agencies. So you get on the internet, try to track down the contact details for the comms shops of all those departments and agencies, and send an email to ask what accounts they have. Anyone who knows government can imagine that the best-case turnaround time for that kind of answer is at least 24 hours, but probably more like a few days. So you keep making calls and, if everything goes perfectly, maybe you get 8 responses a day (good luck!). There are a couple hundred departments and agencies, so you’re looking at about 100 business days to get a full inventory. But by the time you’ve finished, your research is out of date and your 4-month co-op term is over. Now a first-year co-op student makes about $14.50/hour (sweet gig if you can get it, students!), so over a 4-month term that’s about $10,000. Now repeat this process for every single department and agency that wants a Twitter account and you can see it’s a staggering cost. Let’s be conservative and say only 25 departments care enough about Twitter to do this sort of exercise – you’re talking about a quarter of a million dollars’ worth of research. Realistically, there are many more departments that want to get on the Twitter bandwagon, but the point still holds.

Anyways, did you know that on GCPEDIA there is a crowd-sourced page with hundreds of contributors that lists all of the official GC Twitter accounts? This one source is kept up to date through contributions from users that literally take a few seconds to make. The savings are enormous – and this is just one page.

Because I know GCPEDIA’s content so well, I can point anyone to almost any piece of information they want to know – or, because GCPEDIA is also a social platform, if I can’t find the info you’re looking for, I can at least find the person who is the expert. I am not an auditor, but I can tell you exactly where to go for the audit policies and frameworks, resources and tools, experts and communities of practice, and pictures of a bunch of internal auditors clowning around during National Public Service Week. There is tremendous value in this – my service as an information “wayfinder” has won me a few fans.

Final point before I stop – a couple of weeks ago, I was doing a presentation to a managers’ leadership network about unconferences. I made three pages – one on the topic of unconferences, one on the facilitation method for building the unconference agenda, and one that is a practical 12-step guide for anyone who wants to plan and organize their own (this last was a group effort with my co-organizers of Collaborative Culture Camp). Instead of preparing a PowerPoint and handouts, I brought the pages up on the projector. I encouraged everyone to check the pages out and to contribute their thoughts and ideas about how they could apply them to their own work. I asked them to improve the pages if they could. But the real value is that instead of me showing up, doing my bit, and then vanishing into the ether, I left a valuable information resource behind that other GCPEDIA users will find, use, and improve (maybe because they are searching for unconferences, or maybe it’s just serendipity). Either way, when public servants begin to change how they think of their role in government – not just as employees of department x, but as an integral part of the greater whole; not in terms of “information is power”, but rather the power of sharing information; not as cogs in the machine, but as responsible change agents working to bring collaborative culture to government – there is a huge benefit for Canadian citizens, whether the wiki is behind a firewall or not.

p.s. To Stephane’s point about approval processes – I confront resistance frequently when I am presenting about GCPEDIA, but there is always someone who “gets” it. Some departments are indeed trying to prevent employees from posting to GCPEDIA – but it isn’t widespread. Even the most security-conscious departments are using the wiki. And Wayne Wouters, the Clerk of the Privy Council, has been explicit in his support of the wiki, going so far as to say that no one requires a manager’s approval to use it. I hope that anyone whose boss says, “You can’t use GCPEDIA” plops the latest PS Renewal Action Plan down on his desk and says, “You’ve got a lot to learn”.

Shared IT Services across the Canadian Government – three opportunities

Earlier this week the Canadian federal government announced it will be creating Shared Services Canada, which will absorb the resources and functions associated with the delivery of email, data centres and network services from 44 departments.

These types of shared services projects are always fraught with danger. While they are sometimes successful, they are often disasters: highly disruptive, with little to show in results, and eventually unwound. However, I suspect there is a significant amount of savings to be had, and I remain optimistic. With luck, the analogy here is the work outgoing US CIO Vivek Kundra accomplished in seeking to close down and consolidate 800 data centres across the US, which is yielding some serious savings.

So here’s what I’m hoping Shared Services Canada will mean:

1) A bigger opportunity for Open Source

What I’m still more hopeful about – although not overly optimistic – is the role that open source solutions could play in what Shared Services Canada implements. Over on the Drupal site, one contributor claims government officials have been told to hold off buying web content management systems as the government prepares to buy a single solution for use across all departments.

If the government is serious about lowering its costs, it absolutely must rethink its procurement models so that open source solutions are at least a viable option. If not, this whole exercise may still save the government money, but the savings will be of the “we moved from five expensive solutions to one expensive solution” variety.

On the upside, some of that work has clearly taken place. Already there are several federal government websites running on Drupal, such as this Public Works website and the NRCan and DND intranets. Moreover, there are real efforts in the open source community to accommodate government. In the United States, OpenPublic has fostered a version of Drupal designed for government’s needs.

Open source solutions have the added bonus of allowing you the option of using more local talent, which, if stimulus is part of the goal, would be wise. Also, any open source solutions fostered by the federal government could be picked up by the provinces, creating further savings for taxpayers. As a bonus, you can also fire incompetent implementers – something that needs to happen a little more often in government IT.

2) More accountability

Ministers Ambrose and Clement are laser-focused on finding savings – pretty much every ministry needs to find 5 or 10% in savings across the board. I also know both speak passionately about managing taxpayers’ dollars: “Canadians work hard for their money and expect our Government to manage taxpayers’ dollars responsibly. Shared Services Canada will have a mandate to streamline IT, save money, and end waste and duplication.”

Great. I agree. So one of Shared Services Canada’s first acts should be to follow in the footsteps of another Vivek Kundra initiative and recreate his incredibly successful IT Dashboard. Indeed, it was by using the dashboard that Kundra was able to “cut the time in half to deliver meaningful [IT system] functionality and critical services, and reduced total budgeted [Federal government IT] costs by over $3 billion.” Now that’s some serious savings. It’s a great example of how transparency can drive effective organizational change.

And here’s the kicker: the White House open sourced the IT Dashboard (the code can be downloaded here). So while it will require some work to adapt, the software is there and a lot of the heavy lifting has been done. Again, if we are serious about this, the path forward is straightforward.

3) More open data

Speaking of transparency… one place shared services could really come in handy is in creating data warehouses for hosting critical government data sets (ideally in the cloud). I suspect there are a number of important datasets that are used by public servants across ministries, so getting them onto a robust, accessible platform would make a lot of sense. This, of course, would also be an ideal opportunity to engage in a massive open data project. It might be easier to create policy for making the data managed by Shared Services Canada “open.” Indeed, this blog post covers some of the reasons why now is the time to think about that issue.

So congratulations on the big move everyone and I hope these suggestions are helpful. Certainly we’ll be watching with interest – we can’t have a 21st century government unless we have 21st century infrastructure, and you’re now the group responsible for it.

Open Source Data Journalism – Happening now at BuzzData

(there is a section on this topic focused on governments below)

A hint of how social data could change journalism

Anyone who’s heard me speak in the last 6 months knows I’m excited about BuzzData. This week, while still in limited-access beta, the site is showing hints of its potential – and it still has only a few hundred users.

First, what is BuzzData? It’s a website that allows data to be easily uploaded and shared among any number of users. (For hackers – it’s essentially GitHub for data, but more social.) It makes it easy for people to copy data sets, tinker with them, share the results back with the original master, and mash them up with other data sets, all while engaging with those who care about that data.

So, what happened? Why is any of this interesting? And what does it have to do with journalism?

Exactly a month ago Svetlana Kovalyova of Reuters had her article – Food prices to remain high, UN warns – re-published in the Globe and Mail.  The piece essentially outlined that food commodities were getting cheaper because of local conditions in a number of regions.

Someone at the Globe and Mail decided to go a step further and upload the data – the annual food price indices from 1990 to the present – onto the BuzzData site, presumably so they could play around with it. This is nothing complicated; it’s a pretty basic chart. Nonetheless, a dozen or so users started “following” the dataset, and about 11 days ago one of them, David Joerg, asked:

The article focused on short-term price movements, but what really blew me away is: 1) how the price of all these agricultural commodities has doubled since 2003 and 2) how sugar has more than TRIPLED since 2003. I have to ask, can anyone explain WHY these prices have gone up so much faster than other prices? Is it all about the price of oil?

He then did a simple visualization of the data.

[Chart: Joerg’s visualization of the food price indices]

In response, someone from the Globe and Mail named Mason answered:

Hi David… did you create your viz based on the data I posted? I can’t answer your question but clearly your visualization brought it to the forefront. Thanks!

But of course, in a process that mirrors what often happens in the open source community, another “follower” of the data shows up and refines the work of the original commentator. In this case, one Alexander Smith notes:

I added some oil price data to this visualization. As you can see the lines for everything except sugar seem to move more or less with the oil. It would be interesting to do a little regression on this and see how close the actual correlation is.

The first thing to note is that Smith has added data, “mashing in” the oil price per barrel. So now the data set has been made richer. In addition, his graph is quite nice, as it makes the correlation more visible than Joerg’s graph, which did not include the oil price. It also becomes apparent, looking at this chart, how much of an outlier sugar really is.

[Chart: Smith’s visualization of the food price indices alongside the oil price]

Perhaps some regression is required, but Smith’s graph is pretty compelling. What’s more interesting is that the price of oil is not mentioned once in the article as a driver of food commodity prices. So maybe it’s not relevant. But maybe it deserves more investigation – and a significantly better piece, one that would provide better information to the public, could be written in the future. In either case, this discussion, conducted by non-experts simply looking at the data, helped surface some interesting leads.
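Smith’s suggestion of “a little regression” is easy to follow up on once the merged data is in hand. Here’s a minimal sketch of the kind of check any follower of the dataset could run; the file name and column names (food_index, sugar_index, oil_price) are hypothetical placeholders, not BuzzData’s actual export format.

```python
# Minimal sketch of the correlation check Smith suggests.
# Assumes the merged BuzzData set has been exported to a CSV with
# hypothetical columns: year, food_index, sugar_index, oil_price.
import numpy as np
import pandas as pd

df = pd.read_csv("food_and_oil_prices.csv")  # hypothetical export

# Pearson correlation between each food index and the oil price
for col in ["food_index", "sugar_index"]:
    r = df[col].corr(df["oil_price"])
    print(f"{col} vs oil_price: r = {r:.2f}")

# A simple least-squares fit gives a rough sense of how much of the
# food price movement tracks oil.
slope, intercept = np.polyfit(df["oil_price"], df["food_index"], 1)
print(f"food_index ≈ {slope:.2f} * oil_price + {intercept:.1f}")
```

Nothing fancy, but it is exactly the sort of five-minute analysis that becomes possible once the data, rather than just the article, is shared.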

And therein lies the power of social data.

With even only a handful of users, a deeper, better analysis of the story has taken place. Why? Because people are able to access the data and look at it directly. If you’re a follower of Julian Assange of WikiLeaks, you might call this scientific journalism. Maybe it is, maybe it isn’t, but it certainly is a much more transparent way of doing analysis – and a potential audience builder. Imagine if hundreds or thousands of readers were engaged in the data underlying a story. What would that do to the story? What would that do to journalism? With BuzzData it also becomes less difficult to imagine a data journalist who spends a significant amount of their time in BuzzData, working with a community of engaged pro-ams trying to find hidden meaning in the data they amass.

Obviously, this back and forth isn’t game changing. No smoking gun has been found. But I think it hints at a larger potential, one that it would be very interesting to see unlocked.

More than Journalism – I’m looking at you, government

Of course, it isn’t just media companies that should be paying attention. For years I have argued that governments – and especially politicians – interested in open data have an unhealthy appetite for applications. They like the idea of sexy apps on smartphones enabling citizens to do cool things. To be clear, I think apps are cool too. I hope that in cities and jurisdictions with open data we see more of them.

But open data isn’t just about apps. It’s about the analysis.

Imagine a city’s budget up on BuzzData. Imagine the flow rates of the water or sewage system. Or the inventory of trees. Think of how a community of interested and engaged “followers” could supplement that data, analyze it, visualize it. Maybe they would be able to explain it to others better, find savings or potential problems, or develop new forms of risk assessment.

It would certainly make for an interesting discussion. If 100 or even just 5 new analyses were to emerge, maybe none of them would be helpful, or would provide any insights. But I have my doubts. I suspect it would enrich the public debate.

It could be that the analysis would become as sexy as the apps. And that’s an outcome that would warm this policy wonk’s soul.

It's the icing, not the cake: key lesson on open data for governments

At the 2010 GTEC conference I did a panel with David Strigel, the Program Manager of the Citywide Data Warehouse (CityDW) at the District of Columbia Government. During the introductory remarks David recounted the history of Washington DC’s journey to open data.

Interestingly, that journey began not with open data, but with an internal problem. Back around 2003 the city had a hypothesis that towing away abandoned cars would reduce crime rates in the immediate vicinity, thereby saving more money in the long term than the cost of towing. In order to assess the program’s effectiveness, city staff needed to “mash up” longitudinal crime data against service request data – specifically, requests to remove abandoned cars. Alas, the data sets were managed by different departments, so this was a tricky task. As a result, the city’s IT department negotiated bilateral agreements with both departments to host their datasets in a single location. Thus the DC Data Warehouse was born.
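To make the “mash-up” concrete, here is a minimal sketch of the kind of before-and-after comparison the city’s analysts would have needed, assuming two hypothetical extracts – crime incidents and abandoned-vehicle removal requests – each tagged with a neighbourhood and a date. The file and column names are illustrative, not DC’s actual schema.

```python
# Hypothetical sketch: did crime near a tow request fall after the tow?
# File names and columns (neighbourhood, date, tow_date) are illustrative only.
import pandas as pd

crimes = pd.read_csv("crime_incidents.csv", parse_dates=["date"])
tows = pd.read_csv("abandoned_vehicle_requests.csv", parse_dates=["tow_date"])

window = pd.Timedelta(days=90)
rows = []
for _, tow in tows.iterrows():
    nearby = crimes[crimes["neighbourhood"] == tow["neighbourhood"]]
    before = ((nearby["date"] >= tow["tow_date"] - window) &
              (nearby["date"] < tow["tow_date"])).sum()
    after = ((nearby["date"] > tow["tow_date"]) &
             (nearby["date"] <= tow["tow_date"] + window)).sum()
    rows.append({"before": before, "after": after})

result = pd.DataFrame(rows)
print("Average incidents 90 days before a tow:", result["before"].mean())
print("Average incidents 90 days after a tow:", result["after"].mean())
```

The analysis itself is trivial; the hard part, as the DC story shows, was getting the two datasets into one place where a query like this could be run at all.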

Happily, the data demonstrated the program was cost-effective. Building on this success, the IT department began negotiating more bilateral agreements with other departments to host their data centrally. In return for giving up stewardship of the data, the departments retained governance rights, reduced their costs, and received additional, more advanced analytics from the IT group. Over time the city’s data warehouse became vast. As a result, when DC decided to open up its data it was, relatively speaking, easy to do. The data was centrally located and was already being shared and used as a platform internally. Extending this platform externally (while not trivial) was a natural step.

In short, the deep problem that needed to be solved wasn’t open data. It was information management. Getting the information management and governance policies right was essential for DC to move quickly. Moreover, this problem strikes at the heart of what it means to be government. Knowing what data you have, where it is, and having a governance structure that allows it to be shared internally (as well as externally) is a problem every government is going to face if it wants to be efficient, relevant and innovative in the 21st century. In other words, information management is the cake. Open data – which I believe is essential – is the sweet icing you smother on top of that dense cake you’ve put in place.

Okay, with that said, two points flow from this.

First: sometimes, governments that “do” open data start off by focusing on the icing. The emphasis is on getting the data out there and then, after the fact, figuring out a governance model that makes sense. This is a viable strategy, but it does have real risks. When sharing data isn’t a core function but rather a feature tacked on at the end, the policy and technical infrastructure may be pretty creaky. In addition, developers may not want to innovate on top of your data platform because they may (rightly) question the level of commitment. One reason DC’s data catalog works is that it has internal users. This gives the data stability and a sense of permanence. On the upside, the icing is politically sexier, so it may help marshal resources to drive a broader rethink of data governance. Either way, at some point, you’ve got to tackle the cake; otherwise, things are going to get messy. Remember, it took DC 7 years to develop its cake before it put icing on it. But that was making it from scratch. Today, thanks to new services (there are armies of consultants in this space), tools (e.g. Socrata) and models (e.g. Washington, DC), you can make that cake following a recipe, and even use cake mix. As David Strigel pointed out, today he could do it in a fraction of the time.

Second: more darkly, one lesson to draw from DC is that a government’s capacity to do open data may be a pretty good proxy for its ability to share information and coordinate across different departments. If your government can’t do open data in a relatively short time period, it may mean it simply doesn’t have the infrastructure in place to share data internally all that effectively either. In a world where government productivity needs to rise in order to deal with budget deficits, that could be worrying.

Lots of Open Data Action in Canada

A lot of movement on the open data (and not so open data) front in Canada.

Canadian International Development Agency (CIDA) Open Data Portal Launched

Some readers may remember that last week I wrote a post about the imminent launch of CIDA’s open data portal. The site is now live and has a healthy amount of data on it. It is a solid start to what I hope will become a robust site. I’m a big believer – and a supporter of the excellent advocacy efforts of the good people at Engineers Without Borders – that the open data portal would be greatly enhanced if CIDA started publishing its data in compliance with the emerging international standard of the International Aid Transparency Initiative, as these 20 leading countries and organizations have.

If anyone creates anything using this data, I’d love to see it. One simple start might be to use the Open Knowledge Foundation’s open source Where Does My Money Go code to visualize some of the spending data. I’d be happy to chat with anyone interested in doing this; you can also check out the email group to find people experienced with the code base.
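Short of standing up the full Where Does My Money Go stack, a first pass at the data can be much simpler. Here’s a minimal sketch, assuming a hypothetical CSV export from the portal with columns like recipient_country and disbursement (CIDA’s actual file and field names will differ):

```python
# Minimal sketch: total CIDA disbursements by recipient country.
# The file name and column names are hypothetical placeholders for
# whatever the portal's actual CSV export provides.
import pandas as pd

df = pd.read_csv("cida_project_disbursements.csv")

by_country = (df.groupby("recipient_country")["disbursement"]
                .sum()
                .sort_values(ascending=False))

# Ten largest recipients - a natural starting point for a chart or map.
print(by_country.head(10))
```

A ranked table like this is a modest start, but it is exactly the kind of quick exploration that tells you whether a richer visualization is worth building.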

Improved License on the CIDA open data portal and data.gc.ca

One thing I noticed with the launch of the CIDA open data portal was that its license was remarkably better than the license at data.gc.ca – which struck me as odd, since I know the feds like to be consistent about these types of things. It turns out that the data.gc.ca license has been updated as well, and the two are identical. This is good news, as some of the things that were broken in the previous license have been fixed. But not all. The best license out there remains the one at data.gov (that’s a bit of a trick, because data.gov has no license – it is all public domain! Tricky, eh? Nice!), but if you are going to have a license, the UK Open Government Licence used at data.gov.uk is more elegant, freer, and addresses a number of the concerns I have cited before and have heard people raise.

So this new data.gc.ca license is a step in the right direction, but still behind the open gov leaders (teaching lawyers new tricks sadly takes a long time, especially in government).

Great site, but not so open data: WellBeing Toronto

Interestingly, the City of Toronto has launched a fabulous new website called WellBeing Toronto. It is definitely worth checking out. The main problem, of course, is that while it is interesting to look at, the underlying data is, sadly, not open. You can’t play with the data – mash it up with your own (or another jurisdiction’s) data, for example. This is disappointing, as I believe a number of non-profits in Toronto would likely find the underlying data quite helpful, even important. I have, however, been told that the underlying data will be made open. It is something I hope to check in on again in a few months, as I fear it may never get prioritized – so it may be up to Torontonians to hold the Mayor and council’s feet to the fire to ensure it gets done.

Parliamentary Budget Office (PBO) launches (non-open) data website

It seems the PBO is also getting in on the data action with the launch of a beta site that allows you to “see” budgets from the last few years. I know the Parliamentary Budget Office has been starved of resources, so it deserves to be congratulated for taking this first, important step. Also interesting is that the data has no license on the website, which could make it the most liberally licensed open data portal in the country. The site does have big downsides, though. First, the data can only be “looked” at; there is no obvious (simple) way to download it and start playing with it. More oddly still, the PBO requires users to register with their email address to view the data. This seems beyond odd – downright creepy, actually. First, Parliament’s budget should be free and open, and one should not need to hand over an email address to access it. Second, the email addresses collected appear to serve no purpose (unless the PBO intends to start spamming us), other than to tempt bad people into hacking the site to steal a list of email addresses.

Why not create an Open311 add-on for Ushahidi?

This is not a complicated post. Just a simple idea: Why not create an Open311 add-on for Ushahidi?

So what do I mean by that, and why should we care?

Many readers will be familiar with Ushahidi, a non-profit that develops open source mapping software enabling users to collect and visualize data on interactive maps. Its history is now fairly famous; as the Wikipedia article about it outlines, “Ushahidi.com (Swahili for ‘testimony’ or ‘witness’) is a website created in the aftermath of Kenya’s disputed 2007 presidential election (see 2007–2008 Kenyan crisis) that collected eyewitness reports of violence sent in by email and text message and placed them on a Google map.” Ushahidi’s mapping software has also proved to be an important resource in a number of crises since the Kenyan election, most notably during the Haitian earthquake. Here is a great 2-minute video on how Ushahidi works.

But mapping of this type isn’t only important during emergencies. Indeed, it is essential to the day-to-day operations of many governments, particularly at the local level. While many citizens in developed economies may be unaware of it, their cities are constantly mapping what is going on around them. Broken infrastructure such as leaky pipes, water mains, clogged gutters and potholes, along with social issues such as crime, homelessness, and business and liquor license locations, are constantly being updated. More importantly, citizens are often the source of this information – their complaints are the data that end up driving these maps. The gathering of this data generally falls under the rubric of what are termed 311 systems, since in many cities you can call 311 either to tell the city about a problem (e.g. a noise complaint, a service request, or broken infrastructure) or to request information about pretty much any of the city’s activities.

This matters because 311 systems have generally been expensive and cumbersome to run. The beautiful thing about Ushahidi is that:

  1. it works: it has a proven track record of enabling citizens in developing countries to share data – using even the simplest of devices – both with one another and with agencies (like humanitarian organizations)
  2. it scales: Haiti and Kenya are pretty big places, and they generated a fair degree of traffic. Ushahidi can handle it.
  3. it is lightweight: Ushahidi’s technical footprint (yep, I’m making that term up right now) is relatively light. The infrastructure required to run it is not overly complicated.
  4. it is relatively inexpensive: as a result of (3) it is also relatively cheap to run, being both lightweight and built on a lot of open source software.
  5. Oh, and did I mention IT WORKS.

This is pretty much the spec you would want to meet if you were setting up a 311 system in a city with very few resources that is interested in starting to gather data about citizen demands and/or monitoring infrastructure it has newly invested in. Of course, to transform Ushahidi into a tool for mapping 311-type issues you’d need some sort of spec for what that would look like. Fortunately, Open311 already does just that, and it is supported by some of the large 311 system providers – such as Lagan and Motorola – as well as some of the disruptors, such as SeeClickFix. Indeed, there is an Open311 API specification that any developer could use as the basis for the add-on to Ushahidi.
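To give a sense of what such an add-on would actually do, here is a rough sketch of forwarding a single citizen report to a city’s Open311 (GeoReport v2) endpoint as a new service request. The endpoint URL, API key and the shape of the incoming report are placeholders; the parameter names (service_code, lat, long, description, address_string) follow the Open311 GeoReport specification.

```python
# Rough sketch of an Open311 "add-on": forward a citizen report (as it might
# arrive from a mapping/SMS platform) to a city's Open311 GeoReport v2 endpoint.
# The endpoint URL, api_key and report dict are hypothetical placeholders.
import requests

OPEN311_ENDPOINT = "https://city.example.org/open311/v2"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder

def submit_report(report):
    """Map a citizen report onto an Open311 service request."""
    payload = {
        "api_key": API_KEY,
        "service_code": report["service_code"],  # e.g. a pothole category
        "lat": report["lat"],
        "long": report["lon"],          # Open311 uses 'long', not 'lng'
        "description": report["text"],
        "address_string": report.get("address", ""),
    }
    resp = requests.post(f"{OPEN311_ENDPOINT}/requests.json", data=payload)
    resp.raise_for_status()
    return resp.json()  # typically includes the new service request's id

# Hypothetical incoming report:
example = {"service_code": "001", "lat": -1.2921, "lon": 36.8219,
           "text": "Large pothole blocking the road", "address": ""}
print(submit_report(example))
```

The real add-on would of course need to handle categories, duplicates and status updates, but the core of it really is this small: a mapping between two well-understood report formats.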

Already, I think many cities – even those in developing countries – could probably afford SeeClickFix, so there may already be a solution at the right price point in this space. But maybe not; I don’t know. More importantly, an Open311 module for Ushahidi could get local governments – or, better still, local tech developers in developing economies – interested in and contributing to the Ushahidi code base, further strengthening the project. And while the code would be globally accessible, innovation and implementation could continue to happen at the local level, helping drive local economies and boosting know-how. The model here, in my mind, is OpenMRS, which has spawned a number of small tech startups across Africa that manage the implementation and servicing of OpenMRS installations at medical clinics in countries across the region.

I think this is a potentially powerful idea for stakeholders in local governments and startups (especially in developing economies) and our friends at Ushahidi. I can see that my friend Philip Ashlock at Open311 had a similar thought a while ago, so the Open311 people are clearly interested. It could be that the right ingredients are already in place to make some magic happen.

The next Open Data battle: Advancing Policy & Innovation through Standards

With the possible exception of weather data, the most successful open data set out there at the moment is transit data. It remains the data with which developers have experimented and innovated the most. Why is this? Because it’s been standardized. Ever since Google and the City of Portland created the General Transit Feed Specification (GTFS), any developer who creates an application using GTFS transit data can port that application to the 100+ cities around the world that publish it, reaching tens or even hundreds of millions of potential users. Now that’s scale!
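Part of what makes GTFS so portable is that a feed is just a zip of plain CSV files (stops.txt, routes.txt, trips.txt, stop_times.txt and so on), so the same code works against any city’s feed. As a minimal sketch, here is how a developer might list the scheduled departures from a named stop; the feed file name and stop name are placeholders, and service calendars are ignored for brevity.

```python
# Minimal sketch: list scheduled departures from a named stop in any GTFS feed.
# "city_feed.zip" and "Main St Station" are placeholders; service calendars
# and exceptions are ignored to keep the example short.
import csv
import io
import zipfile

def departures(feed_path, stop_name):
    with zipfile.ZipFile(feed_path) as feed:
        def rows(name):
            # GTFS files are plain CSV; utf-8-sig handles the common BOM.
            with feed.open(name) as f:
                return list(csv.DictReader(io.TextIOWrapper(f, encoding="utf-8-sig")))

        stop_ids = {s["stop_id"] for s in rows("stops.txt")
                    if s["stop_name"] == stop_name}
        times = [t["departure_time"] for t in rows("stop_times.txt")
                 if t["stop_id"] in stop_ids]
    return sorted(times)

# Because GTFS is standardized, the same call works for any city's feed:
for t in departures("city_feed.zip", "Main St Station")[:5]:
    print(t)
```

Write this once and, data quality permitting, it runs unchanged against Portland, Vancouver or any other city that publishes a feed, which is precisely the economic point of the standard.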

All in all, the benefits of a standard data structure are clear. A public good is used more effectively, citizens enjoy better service, and companies (both Google and the numerous smaller companies that sell transit-related applications) generate revenue, pay salaries, etc.

This is why, with a number of jurisdictions now committed to open data, I believe it is time for advocates to start focusing on the next big issue: how do we get different jurisdictions to align around standard structures so as to increase the number of people to whom an application or analysis will be relevant? Having cities publish open data sets is a great start and has led to real innovation, but the next generation of open data – and the next leaps in innovation – will require more standards.

The key, I think, is to find areas that meet three criteria:

  • Government Data: Is there relevant government data about the service or issue that is available?
  • Demand: Is this a service for which there is regular demand? (this is why transit is so good, millions of people touch the service on a daily basis)
  • Business Model: Is there a business that believes it can use this data to generate revenue (either directly, or indirectly)

[Diagram: the three criteria – Government Data, Demand, Business Model – with the sweet spot at their overlap]

Two comments on this.

First, I think we should look at this model because we want to find places where the incentives are right for all the key stakeholders. The wrong way to create a data structure is to get a bunch of governments together to talk about it. That process will take 5 years… if we are lucky. Remember, the GTFS emerged because Google and Portland got together; after that, everybody else bandwagoned because the value proposition was so high. This remains, in my mind, not the perfect model, but the fastest and most efficient one for getting more common data structures. I also accept it won’t work for everything, but it can give us more successes to point to.

Which leads me to point two. Yes, at the moment, I think the target in the middle of this model is relatively small. But I think we can make it bigger. The GTFS shows cities, citizens and companies that there is value in open data. What we need are more examples, so that a) more business models emerge and b) more government data is shared in a structured way across multiple jurisdictions. The bottom and right-hand circles in this diagram can, and if we are successful will, move. In short, I think we can create this dynamic:

[Diagram: the three circles shifting to enlarge the overlapping sweet spot]

So, what does this look like in practice?

I’ve been trying to think of services that fall in various parts of the diagram. A while back I wrote a post about using open restaurant inspection data to drive down health costs – specifically, finding a government willing to work with Yelp, Bing or Google Maps, Urbanspoon or another company to integrate inspection data into their application. That, for me, is an example of something that fits in the middle. Governments have the data, it’s a service citizens could touch on a regular basis if the data appeared in their workflow (e.g. Yelp or Bing Maps), and for those businesses it either helps drive search revenue or gives their product a competitive advantage. The Open311 standard (sadly missing from my diagram) and the emergence of SeeClickFix strike me as another excellent example, right on the inside edge of the sweet spot.

Here’s a list of what else I’ve come up with at the moment:

[Diagram: candidate data sets and services mapped onto the three-criteria diagram]

You can also now see why I’ve been working on Recollect.net – our garbage pickup reminder service – and helping develop a standard around garbage scheduling data, the Trash & Recycling Object Notation. I think it is a service around which we can help explain the value of common standards to cities.

You’ll notice that I’ve put “democracy data” (e.g. agendas, minutes, legislation, hansards, budgets, etc.) in the area where I don’t think there is a business model. I’m not fully convinced of this – I could see a business model in the media space – but I’m trying to be conservative in my estimate. In any case, that is the type of data the good people at the Sunlight Foundation are trying to get liberated, so there are at least non-profit efforts concentrated there in the United States.

I also put real estate in a category where I don’t think there is real consumer demand. What I mean isn’t that people don’t want it – they do – but that they are only really interested in it maybe 2-4 times in their life. It doesn’t have the high touch point of transit or garbage schedules, or of traffic and parking. I understand that there are businesses to be built around this data – I love Viewpoint.ca, a site that mashes open data up with real estate listings to create a compelling real estate website – but I don’t think it is a service people will get attached to, because they will only use it infrequently.

Ultimately, I’d love to hear from people with ideas on what else might fit in this sweet spot (if you are comfortable sharing the idea, of course). Part of this is because I’d love to test the model more. The other reason is that I’m engaged with some governments interested in getting more strategic about their open data use, so these types of opportunities could become reality.

Finally, I just hope you find this model compelling and helpful.

New York releases road map to becoming a digital city

Yesterday, New York City released its “Road Map for the Digital City: Achieving New York City’s Digital Future.” For those who missed the announcement, especially those concerned about the digital economy, the future of government and citizen services, the document is definitely worth downloading and scanning.

At the heart of the document sits a road map, which I’ve ripped from the executive summary and pasted below. What makes me particularly interested in it is how the Open Government section is driven not only by the desire for transparency but by the goal of spurring innovation and increasing access to services. Of course, the devil is in the details, but I’m increasingly convinced that open initiatives will be more successful when the government of the day has some specific policy objectives (beyond just transparency) it wishes to drive home, with open data as part of the mix (more on this in a post coming soon).

As such, “government as platform” works best when the government also builds atop the platform. It must itself be a consumer and a stakeholder. This is why section 3 is so important and interesting. Essentially, sections 2 and 3 have parts that are strikingly similar; it’s just that section 2 outlines the platform and lays out the government’s hope that others will build on top of it, whereas parts of section 3 outline what the government itself intends to build atop it. Of course, section 3 goes further and talks as well about gathering information and data from the public, which is the big thing in the Gov 2.0 space that many governments have not gotten around to doing effectively – so this will be worth watching more closely. All of this is great news and exactly what governments should be thinking about.

It is great when a big city comes out with a document like this, because while New York is not the first to be thinking about these ideas, its profile means that others will start devoting resources to pursuing them more aggressively.

Exciting times.

1. Access

The City of New York ensures that all New Yorkers can access the Internet and take advantage of public training sessions to use it effectively. It will support more vendor choices for New Yorkers, and introduce Wi-Fi in more public areas.

  1. Connect high needs individuals through federally funded nyc Connected initiatives
  2. Launch outreach and education efforts to increase broadband Internet adoption
  3. Support more broadband choices citywide
  4. Introduce Wi-Fi in more public spaces, including parks

2. Open Government

By unlocking important public information and supporting policies of Open Government, New York City will further expand access to services, enable innovation that improves the lives of New Yorkers, and increase transparency and efficiency.

  1. Develop nyc Platform, an Open Government framework featuring APIs for City data
  2. Launch a central hub for engaging and cultivating feedback from the developer community
  3. Introduce visualization tools that make data more accessible to the public
  4. Launch App Wishlists to support a needs-based ecosystem of innovation
  5. Launch an official New York City Apps hub

3. Engagement

The City will improve digital tools including nyc.gov and 311 online to streamline service and enable citizen-centric, collaborative government. It will expand social media engagement, implement new internal coordination measures, and continue to solicit community input in the following ways:

  1. Relaunch nyc.gov to make the City’s website more usable, accessible, and intuitive
  2. Expand 311 Online through smartphone apps, Twitter and live chat
  3. Implement a custom bit.ly url redirection service on nyc.gov to encourage sharing and transparency
  4. Launch official Facebook presence to engage New Yorkers and customize experience
  5. Launch @nycgov, a central Twitter account and one-stop shop of crucial news and services
  6. Launch a New York City Tumblr vertical, featuring content and commentary on City stories
  7. Launch a Foursquare badge that encourages use of New York City’s free public places
  8. Integrate crowdsourcing tools for emergency situations
  9. Introduce digital Citizen Toolkits for engaging with New York City government online
  10. Introduce smart, a team of the City’s social media leaders
  11. Host New York City’s first hackathon: Reinventing nyc.gov
  12. Launch ongoing listening sessions across the five boroughs to encourage input

4. Industry

New York City government, led by the New York City Economic Development Corporation, will continue to support a vibrant digital media sector through a wide array of programs, including workforce development, the establishment of a new engineering institution, and a more streamlined path to doing business.

  1. Expand workforce development programs to support growth and diversity in the digital sector
  2. Support technology startup infrastructure needs
  3. Continue to recruit more engineering talent and teams to New York City
  4. Promote and celebrate nyc’s digital sector through events and awards
  5. Pursue a new .nyc top-level domain, led by DOITT


Why Does Elections Canada Hate Young People?

This weekend the New York Times had an interesting article about how the BBC and other major media organizations are increasingly broadcasting new television episodes simultaneously around the world. The reason? The internet. Fans in the UK aren’t willing to wait months to watch episodes broadcast in the United States, and vice versa. Here a multi-billion dollar industry – backed by copyright legislation, law enforcement agencies, and the world’s most powerful governments and trade organizations – is recognizing a simple fact: people want information, and it is increasingly impossible to stop them from sharing and getting it.

Someone at Elections Canada should read the article.

Last week Elections Canada took special care to warn Canadian citizens that they risk $25,000 fines if they post election results on social network sites before all the polls have closed. Sadly, Elections Canada’s approach to the rise of new internet-driven technologies speaks volumes about its poor strategy for engaging young voters.

The controversy centres on Section 329 of the Canada Elections Act, which prohibits transmitting election results before polling stations have closed. The purpose of the law is to prevent voters on the west coast from being influenced by outcomes on the east coast (or worse, choosing not to vote at all if the election has essentially been decided). Today, however, with Twitter, Facebook and blogs, everybody is a potential “broadcaster.”

Westerners may have a hard time sympathizing with Elections Canada’s quandary. It could simply do the equivalent of what the BBC is doing with its new TV shows: not post any results until after all the voting booths have closed. This is a much simpler approach than trying to police and limit the free speech of 10 million Canadian social media users (to say nothing of the hundreds of millions of users outside of Canada who do not fall under its jurisdiction).

More awkwardly, it is hard not to feel that the missive was directed at the very cohort Elections Canada is trying to get engaged in elections: young people. Sadly, chastising and scaring the few young people who want to talk about the election with threats of fines seems like a pretty poor way to increase this engagement. If voting and politics are social behaviours – and the evidence suggests they are – then you are more likely to vote and engage in politics if you know that your friends vote and engage in politics. Ironically, this might make social media the best thing to happen to voting since the secret ballot. So not only is fighting this technology a lost cause, it may also be counterproductive from a voter turnout perspective.

Of course, based on the experiences many young voters I talk to have had trying to vote, none of this comes as a surprise.

In my first two Canadian elections I lived out of the country. Both times my mail-in ballot arrived after the election and was thus ineligible. During the last election I tried to vote at an advance poll. It was a nightmare. The polling station was hard to locate on the website and ended up being a solid 15-minute walk from any of the three nearest bus routes. Total commute time? For someone without a car? Well over an hour and a half.

These are not acceptable outcomes. Perhaps you think I’m lazy? Maybe. I prefer to believe that if you want people to vote – especially in the age of a service economy – you can’t make it inconvenient. Otherwise the only people who will vote will be those with means and time. That’s hardly democratic.

Besides, it often feels like our voting infrastructure was built by and for our grandparents. Try this out. In the 1960s, if you were a “young person” (e.g. 20-30) you were almost certainly married and had two kids. You probably also didn’t move every 2 years. In the 60s the average marriage age was 24 for men and 20 for women. Thinking in terms of the 1950s and 60s: what were the 3 institutions you probably visited on a daily basis? How about A) the local community centre, B) the local elementary school, and C) the local church.

Now, if you are between the ages of 20 and 35, name me three institutions you probably haven’t visited in over a decade.

Do young people not vote because they are lazy? Maybe. But they also don’t have a voting system designed around them the way their grandparents did. Why aren’t there voting booths in subway stations? The lobbies of office towers? The local shopping mall? How about Starbucks and Tim Hortons (for both conservatives and liberals)? Somewhere, anywhere, where people actually congregate. Heaven forbid that voting booths be where the voters are.

The fact is our entire voting structure is anti-young-people. It’s designed for another era. It needs a full-scale upgrade. Call it voting 2.0 or something – I don’t care. Want young people to vote? Then build a voting system that meets their needs, and stop trying to force them into a system over half a century old.

We need voting that embraces the internet, social networks, voters without cars and voters who are transient. These changes alone won’t solve the low voter turnout problem overnight, but if even 5% more young people vote in this election, the parties will take notice and adapt their platforms accordingly. Maybe, just maybe, it could end up creating a virtuous circle.

Back to Reality: The Politics of Government Transparency & Open Data

A number of my friends and advocates in the open government, transparency and open data communities have argued that online government transparency initiatives will be permanent since, the theory goes, no government will ever want to bear the political cost of rolling them back and being perceived as “more opaque.” I myself have, at times, let this argument go unchallenged, or even run with it.

This week’s US budget negotiations between Congress and the White House should lay that theory to rest. Permanently.

The budget agreement that has emerged from the most recent round of negotiations – which is likely to be passed by Congress – slashes funding for an array of Obama transparency initiatives, such as USASpending, the IT Dashboard, and data.gov, from $34M to $8M. Agree or disagree, Republicans are apparently all too happy to kill initiatives that make the spending and activities of the US government more transparent, and that create a number of economic opportunities around open data. Why? Because they believe it has no political consequences.

So, unsurprisingly, it turns out that political transparency initiatives – even when they are online – are as bound to the realities of traditional politics as dot-coms were bound by the realities of traditional economics. It’s not enough to get a policy created or an initiative launched – it needs to have a community, a group of interested supporters, to nurture and protect it. Otherwise, it will be at risk.

Back in 2009, in the lead-up to the drafting and launching of Vancouver’s Open Data motion, I talked about creating an open-government bargain. Specifically, I argued that:

…in an open city, a bargain must exist between a government and its citizens. To make open data a success and to engage the community a city must listen, engage, ask for help, and of course, fulfill its promise to open data as quickly as possible. But this bargain runs both ways. The city must do its part, but so too must the local tech community. They must participate, be patient (cities move slower than tech companies), offer help and, most importantly, make the data come alive for each other, policy makers and citizens through applications and shared analysis.

Some friends countered that open data and transparency should simply exist because it is the right thing to do. I don’t disagree – and I wish we lived in a world where the existence of this ideal was sufficient to guarantee these initiatives. But it isn’t. It’s easy to kill something that no one uses (or, in the case of data.gov, that hasn’t been given enough time to build a vibrant user base). It’s much, much harder to kill something that has a community using it, especially if that community and the products it creates are valued by society more generally. This is why open data needs users – it needs developers, think tanks and, above all, the media to take an interest in it and to leverage it to create content. It’s also why I’ve tried to create projects like Emitter.ca, recollect.net, taxicity and others: the more value we create with open data for everyone, the more secure government transparency policies will be.

It’s use it or risk losing it. I wish this weren’t the case, but it’s the best defense I can think of.