
Learning from Libraries: The Literacy Challenge of Open Data

We didn’t build libraries for a literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have public-policy-literate citizens; we build them so that citizens may become literate in public policy.

Yesterday, in a brilliant article on The Guardian website, Charles Arthur argued that a global flood of government data is being opened up to the public (sadly, not in Canada) and that we are going to need an army of people to make it understandable.

I agree. We need a data-literate citizenry, not just a small elite of hackers and policy wonks. And the best way to cultivate that broad-based literacy is not to release data in small or measured quantities, but to flood us with it – to provide thousands of niches that will interest people in learning, playing and working with open data. But more than this, we also need to think about cultivating communities where citizens can exchange ideas, as well as involving educators who can provide support and help people move up the learning curve.

Interestingly, this is not new territory.  We have a model for how to make this happen – one from which we can draw lessons or foresee problems. What model? Consider a process similar in scale and scope that happened just over a century ago: the library revolution.

In the late 19th and early 20th century, governments and philanthropists across the western world suddenly became obsessed with building libraries – lots of them. Everything from large ones, like the New York Public Library’s main branch, to small ones, like the thousands of tiny, one-room county libraries that dot the countryside. Big or small, these institutions quickly became treasured and important parts of any city or town. At the core of this project was the belief that literate citizens would be both more productive and more effective citizens.

But like open data, this project was not without controversy. It is worth noting that at the time some people argued libraries were dangerous: that they could spread subversive ideas – especially about sexuality and politics – and that giving citizens access to knowledge out of context would render them dangerous to themselves and to society at large. Remember, ideas are a dangerous thing. And libraries are full of them.

Cora McAndrews Moellendick, a Master of Library Studies student, draws on the work of Geller and sums up the challenge beautifully:

…for a period of time, censorship was a key responsibility of the librarian, along with trying to persuade the public that reading was not frivolous or harmful… many were concerned that this money could have been used elsewhere to better serve people. Lord Rodenberry claimed that “reading would destroy independent thinking.” Librarians were also coming under attack because they could not prove that libraries were having any impact on reducing crime, improving happiness, or assisting economic growth, areas of keen importance during this period… (Geller, 1984)

Today when I talk to public servants, think tank leaders and others, most grasp the benefit of “open data” – of having the government share the data it collects. A few, however, talk about the problem of just handing data over to the public. Some question whether the activity is “frivolous or harmful.” They ask, “What will people do with the data?” “They might misunderstand it,” or “They might misuse it.” Ultimately they argue we can only release this data “in context”. Data, after all, is a dangerous thing. And governments produce a lot of it.

As in the 19th century, these arguments must not prevail. Indeed, we must do the exact opposite. Charges of “frivolousness” or a desire to ensure data is only released “in context” are code for obstructing or shaping data portals so that they only support what public institutions or politicians deem “acceptable”. Again, we need a flood of data, not only because it is good for democracy and government, but because it increases the likelihood of more people taking an interest and becoming literate.

It is worth remembering: We didn’t build libraries for an already literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have a data- or public-policy-literate citizenry; we build them so that citizens may become literate in data, visualization, coding and public policy.

This is why coders in cities like Vancouver and Ottawa come together for open data hackathons, to share ideas and skills on how to use and engage with open data.

But smart governments should not rely only on small groups of developers to make use of open data. Forward-looking governments – those that want an engaged citizenry, a 21st-century workforce and a creative, knowledge-based economy in their jurisdiction – will reach out to universities, colleges and schools and encourage them to get their students using, visualizing, writing about and generally engaging with open data – not only to help others understand its significance, but to foster a sense of empowerment and opportunity among a generation that could create the public policy hacks that will save lives, make public resources more efficient and effective, and make communities more livable and fun. The recent paper published by University of British Columbia students who used open data to analyze graffiti trends in Vancouver is a perfect early example of this phenomenon.

When we think of libraries, we often just think of a building with books. But 19th-century libraries mattered not only because they had books, but because they offered literacy programs, book clubs and other resources to help citizens become literate and thus more engaged and productive. Open data catalogs need to learn the same lesson. While they won’t require the same centralized and costly approach as 19th-century libraries, governments that help foster communities around open data, that encourage their school systems to use it as a basis for teaching, and that support their citizens’ efforts to write and suggest their own public policy ideas will, I suspect, benefit from happier and more engaged citizens, along with better services and stronger economies.

So what is your government/university/community doing to create its citizen army of open data analysts?

ChangeCamp Vancouver, GovCamp Toronto & Open Data Hackathon

For those in Vancouver, ChangeCamp will be taking place Saturday at the W2 Storyeum at 151 W Cordova. You can register here and propose sessions in advance here. I know I’ll be there and I am looking forward to hearing about interesting local projects and trying to find ways to contribute to them.

I’ll probably submit a brainstorming session on datadotgc.ca – there are some exciting developments in the work around the website that I’d like to test out on an audience. I’d also be interested in a session that asks people about apps they’d like to create using open data. It would be interesting to get a better sense of the additional data sets people would like to request from the city.

Out in Toronto, I’ll be speaking at GovCamp Toronto on June 17th at the Toronto Reference Library. I’m not sure how registration is going to work but I would keep an eye on this page if you are interested.

Finally, on the 17th SAP will be hosting an open data hackathon for developers in Vancouver. It’s a great opportunity to come out and work on projects related to Apps 4 Climate Action or open data projects using City of Vancouver data (or both!). I was really impressed to hear that 130 people came to the first open data hackathon in Ottawa – I would love to help foster a community like that here in Vancouver. You can RSVP for this event here.

Hope to see you at these events!

Apps for Climate Action Update – Lessons and some new sexy data

Okay, so I’ll be the first to say that the Apps4Climate Action data catalog has not always been the easiest to navigate and that some of the data sets have not been machine readable, or even data at all.

That however, is starting to change.

Indeed, the good news is threefold.

First, the data catalog has been tweaked: it now has better search and an improved capacity to filter out non-machine-readable data sets. This is a great example of a government starting to think like the web – iterating and learning as the program progresses.

Second, and more importantly, new and better data sets are starting to be added to the catalog. Most recently, the Community Energy and Emissions Inventories were released in Excel format. This data shows carbon emissions for all sorts of activities and infrastructure at a very granular level. Want to compare the GHG emissions of a duplex in Vancouver versus a duplex in Prince George? Now you can.
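To make that concrete, here is a minimal sketch of the kind of comparison the data now allows. It assumes the inventory has been exported to a CSV file and uses hypothetical file and column names (“community”, “building_type”, “tco2e”), so the details will differ from the actual release:

    import pandas as pd

    # Load the (hypothetical) CSV export of the Community Energy and Emissions Inventory.
    ceei = pd.read_csv("ceei_buildings.csv")

    # Keep only duplex records for the two communities we want to compare.
    duplexes = ceei[(ceei["building_type"] == "Duplex") &
                    (ceei["community"].isin(["Vancouver", "Prince George"]))]

    # Total tonnes of CO2-equivalent emissions per community.
    print(duplexes.groupby("community")["tco2e"].sum())

The app ideas mentioned below – how green is your neighbourhood, rate my city – are essentially queries like this one with a friendlier interface on top.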

Moreover, this is the first time any government has released this type of data at all, not to mention making it machine readable. So not only have the app possibilities (how green is your neighborhood, rate my city, calculate my GHG emissions) all become much more realizable, but any app using this data will be among the first in the world.

Finally, probably one of the most positive outcomes of the app competition to date is largely hidden from the public. The fact that members of the public have been asking for better data or even for data sets at all(!) has made a number of public servants realize the value of making this information public.

Prior to the competition, making data public was a compliance problem – something you did while figuring no one would ever look at or read it. Now, for a growing number of public servants, it is an innovation opportunity. Someone may take what the government produces and do something interesting with it. Even if they don’t, someone is nonetheless taking an interest in your work – something that has rewards in and of itself. This, of course, doesn’t mean that things will improve overnight, but it does help advance the goal of getting government to share more machine-readable data.

Better still, the government is reaching out to stakeholders in the development community and soliciting advice on how to improve the site and the program, all in a cost-effective manner.

So even within the Apps4Climate Action project we see some of the changes the promise of Government 2.0 holds for us:

  • Feedback from community participants driving the project to adapt
  • Iterations of development conducted “on the fly” during a project or program
  • Successes and failures resulting in quick improvements (release of more data, a better website)
  • A shifting culture around disclosure and cross-sector innovation
  • All on a timeline that can be measured in weeks

Once this project is over I’ll write more on it, but wanted to update people, especially given some of the new data sets that have become available.

And if you are a developer or someone who would like to do a cool visualization with the data, check out the Apps4Climate Action website or drop me an email – I’m happy to talk you through your idea.

How to Engage with Social Media: An Example

The other week I wrote a blog post titled Canadian Governments: How to Waste millions online ($30M and Counting) in which I argued that OpenID should be the cornerstone of the government’s online identification system. The post generated a lot of online discussion, much of which was of very high quality and deeply thoughtful. On occasion, comments can enhance and even exceed a post’s initial value, and I’d argue this is one of those cases – something that is always a joy when it happens.

There was, however, one comment that struck me as particularly important – not only because it was thoughtful, but because this type of comment is so rare: it came from a government official. In this case, from Dave Nikolejsin, the CIO of the Government of British Columbia.

Everything about Mr. Nikolejsin’s comment deserves to be studied and understood by those in the public and private sector seeking to understand how to engage the public online. His comment is a perfect case of how and why governments should allow public servants to comment on blogs that tackle issues they are themselves addressing.

What makes Mr. Nikolejsin’s comment (which I’ve reprinted below) so effective? Let me break out the key components:

  1. It’s curious: Given the nature of my blog post, a respondent could easily have gone on the offensive and merely countered claims they disagreed with. Instead, Mr. Nikolejsin remains open and curious about the ideas in the post and its claims. This makes readers and other commentators less likely to attack and more likely to engage and seek to understand.
  2. It seeks to return to first principles: The comment is effective because it is concise and it tackles the specific issues raised by the post. But part of what really makes it shine is how it seeks to identify first principles by talking about different approaches to online IDs. Rather than ending up arguing about solutions, the comment engages readers to identify what assumptions they may or may not have in common with one another. This won’t necessarily make people more likely to agree, but they’ll end up debating the right things (goals, assumptions) rather than the wrong things (specific solutions).
  3. It links to further reading: Rather than trying to explain everything in his response, the comment instead links to relevant work. This keeps the comment shorter and more readable, while also providing those who care about this issue (like me) with resources to learn more.
  4. It solicits feedback: “I really encourage you to take a look at the education link and tell me what you think.” Frequently, comments simply rebut the points in the original post they disagree with. This can reinforce the sense that the two parties are in opposition. Mr. Nikolejsin and I actually agree far more than we disagree: we both want a secure, cost-effective, and user-friendly online ID management system for government. By asking for feedback he implicitly recognizes this and is asking me to be a partner, not an antagonist.
  5. It is light: One thing about the web is that it is deeply human. Overly formal statements look canned and cause people to tune out. This comment is intelligent and serious in its content, but remains light and human in its style. I get the sense a human wrote this, not a communications department. People like engaging with humans. They don’t like engaging with communications departments.
  6. Community feedback: The comment has already sparked a number of responses containing supportive thoughts, suggestions and questions, including some from people working in municipalities, experts in the field and citizen users. It’s actually a pretty decent group of people there – the kind a government would want to engage.

In short, this is a comment that sought to engage. And I can tell you, it has been deeply, deeply successful. I know that some of what I wrote might have been difficult to read but after reading Mr. Nikolejsin’s comments, I’m much more likely to bend over backwards to help him out. Isn’t this what any government would want of its citizens?

Now, am I suggesting that governments should respond to every blog post out there? Definitely not. But there were a number of good comments on this post, and the readership it attracted made commenting likely worthwhile.

I’ve a number of thoughts on the comment that I hope to post shortly. But first, I wanted to repost the comment, which you can also read in the original post’s thread here.

Dave Nikolejsin <dave.nikolejsin@gov.bc.ca> (unregistered) wrote: Thanks for this post David – I think it’s excellent that this debate is happening, but I do need to set the record straight on what we here in BC are doing (and not doing).

First and foremost, you certainly got my attention with the title of your post! I was reading with interest to see who in Canada was wasting $30M – imagine my surprise when I saw it was me! Since I know that we’ve only spent about 1% of that so far I asked Ian what exactly it was he presented at the MISA conference you mentioned (Ian works for me). While we would certainly like someone to give us $30M, we are not sure where you got the idea we currently have such plans.

That said I would like to tell you what we are up to and really encourage the debate that your post started. I personally think that figuring out how we will get some sort of Identity layer on the Internet is one of the most important (and vexing) issues of our day. First, just to be clear, we have absolutely nothing against OpenID. I think it has a place in the solution set we need, but as others have noted we do have some issues using foreign authentication services to access government services here in BC simply because we have legislation against any personal info related to gov services crossing the border. I do like Jeff’s thinking about whom in Canada can/will issue OpenID’s here. It is worth thinking about a key difference we see emerging between us and the USA. In Canada it seems that Government’s will issue on line identity claims just like we issue the paper/plastic documents we all use to prove our Identities (driver’s licenses, birth certificates, passports, SIN’s, etc.). In the USA it seems that claims will be issued by the private sector (PayPal, Google, Equifax, banks, etc.). I’m not sure why this is, but perhaps it speaks to some combination of culture, role of government, trust, and the debacle that REALID has become.

Another issue I see with OpenID relates to the level of assurance you get with an OpenID. As you will know if you look at the pilots that are underway in US Gov, or look at what you can access with an OpenID right now, they are all pretty safe. In other words “good enough” assurance of who you are is ok, and if someone (either the OpenID site or the relying site) makes a mistake it’s no big deal. For much of what government does this is actually an acceptable level of assurance. We just need a “good enough” sense of who you are, and we need to know it’s the same person who was on the site before. However, we ALSO need to solve the MUCH harder problem of HIGH ASSURANCE on-line transactions. All Government’s want to put very high-value services on-line like allowing people access to their personal health information, their kids report cards, driver’s license renewals, even voting some day, and to do these things we have to REALLY be sure who’s on the other end of the Internet. In order to do that someone (we think government) needs to vouch (on-line) that you are really you. The key to our ability to do so is not technology, or picking one solution over the other, the key is the ID proofing experience that happens BEFORE the tech is applied. It’s worth noting that even the OpenID guys are starting to think about OpenID v.Next (http://self-issued.info/?p=256) because they agree with the assurance level limitation of the current implementation of OpenID. And OpenID v.Next will not be backward compatible with OpenID.

Think about it – why is the Driver’s License the best, most accepted form of ID in the “paper” world. It’s because they have the best ID proofing practices. They bring you to a counter, check your foundation documents (birth cert., Card Card, etc.), take your picture and digitally compare it to all the other pictures in the database to make sure you don’t have another DL under another name, etc. Here in BC we have a similar set of processes (minus the picture) under our Personal BCeID service (https://www.bceid.ca/register/personal/). We are now working on “claims enabling” BCeID and doing all the architecture and standards work necessary to make this work for our services. Take a look at this work here (http://www.cio.gov.bc.ca/cio/idim/index.page?).

I really encourage you to take a look at the education link and tell me what you think. Also, the standards package is getting very strong feedback from vendors and standards groups like the ICF, OIX, OASIS and Kantara folks. This is really early days and we are really trying to make sure we get it right – and spend the minimum by tracking to Internet standards and solutions wherever possible.

Sorry for the long post, but like I said – this is important stuff (at least to me!) Keep the fires burning!

Thanks – Dave.

Saving Millions: Why Cities should Fork the Kuali Foundation

For those interested in my writing on open source, municipal issues and technology, I want to be blunt: I consider this to be one of the most important posts I’ll write this year.

A few months ago I wrote an article and blog post about “Muniforge,” an idea based on a speech I’d given at a conference in 2009 in which I advocated that cities with common needs should band together and co-develop software to reduce procurement costs and better meet requirements. I continue to believe in the idea, but have recognized that cultural barriers would likely make it difficult to realize.

Last month that all changed. While at Northern Voice I ended up talking to Jens Haeusser, an IT strategist at the University of British Columbia, and confirmed something I’d long suspected: that some people much smarter than me had already had the same idea and had made it a reality… not among cities but among academic institutions.

The result? The Kuali foundation. “…A growing community of universities, colleges, businesses, and other organizations that have partnered to build and sustain open-source administrative software for higher education, by higher education.”

In other words, for the past five years more than 35 universities in the United States, Canada, Australia and South Africa have been successfully co-developing software.

For cities everywhere interested in controlling spending or reducing costs, this should be an earth-shattering revelation – a wake-up call – for several reasons:

  • First, a viable working model for Muniforge has existed for five years and has been a demonstrable success, both in creating high-quality software and in saving the participating institutions significant money. Devising a methodology to calculate how much a city could save by co-developing software with an open source license is probably very, very easy.
  • Second, what is also great about universities is that they suffer from many of the same challenges as cities. Both have conservative bureaucracies, limited budgets and significant legacy systems. In addition, neither has IT as a core competency and both are frequently concerned with licenses, liability and “owning” intellectual property.
  • Third, and this is possibly the best part: the Kuali Foundation has already addressed all the critical obstacles to such an endeavour and has developed the licensing agreements, policies, decision-making structures and workflow processes necessary for success. Moreover, all of this legal, policy and work infrastructure is itself available to be copied. For free. Right now.
  • Fourth, the Kuali Foundation is not a bunch of free-software hippies who depend on the kindness of strangers to patch their software (a stereotype that really must end). Quite the opposite. The Kuali Foundation has helped spawn 10 different companies that specialize in implementing and supporting (through SLAs) the software the foundation develops. In other words, the universities have created a group of competing firms dedicated to serving their niche market. Think about that. Rather than dealing with vendors who specialize in serving large multinationals and who’ve tweaked their software to (somewhat) work for cities, the foundation has fostered competing service providers (to say it again) within the higher education niche.

As a result, I believe a group of forward-thinking cities – perhaps starting with those in North America – should fork the Kuali Foundation. That is, they should copy Kuali’s bylaws, its structure, its licenses and pretty much everything else – possibly even the source code for some of its projects – and create a Kuali for cities. Call it Muniforge, or Communiforge, or CivicHub, or whatever… but create it.

We can radically reduce the cost of software to cities, improve support by creating the right market incentives to foster companies whose interests are directly aligned with cities, and create better software that meets cities’ unique needs. The question is… will we? All that is required is for CIOs to begin networking and for a few to discover some common needs. One idea I have immediately: the City of Nanaimo could apply the Kuali-modified Apache license to the council monitoring software package it developed in-house and upload it to GitHub. That would be a great start – one that could collectively save cities millions.

If you are a city CIO/CTO/Technology Director and are interested in this idea, please check out these links:

The Kuali Foundation homepage

Open Source Collaboration in Higher Education: Guidelines and Report of the Licensing and Policy Framework Summit for Software Sharing in Higher Education by Brad Wheeler and Daniel Greenstein (key architects behind Kuali)

Open Source 2010: Reflections on 2007 by Brad Wheeler (a must read, lots of great tips in here)

Heck, I suggest looking at all of Brad Wheeler’s articles and presentations.

Another overview article on Kuali by University Business

Philip Ashlock of Open Plans has an overview article on where some cities are heading re open source.

And again, my original article on Muniforge.

If you aren’t already, consider reading the OpenSF blog – these guys are leaders and one way or another will be part of the mix.

Also, if you’re on twitter, consider following Jay Nath and Philip Ashlock.

Half victory in making BC local elections more transparent

Over the past few months the British Columbia government (my home province – or, for my American friends, state) has had a taskforce looking at reforming local (municipal) election rules.

During the process I submitted a suggestion to the taskforce outlining why campaign finance data should be made available online and in a machine-readable format (i.e. so you can open it in Microsoft Excel or Google Docs, for example).
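To show what a machine-readable release makes possible, here is a minimal sketch of the kind of analysis anyone could run. It assumes the disclosure statements were published as a single CSV file with hypothetical columns (“candidate”, “donor”, “amount”) – the actual format, if and when Elections BC publishes one, will differ:

    import pandas as pd

    # Load the (hypothetical) CSV of campaign finance disclosure statements.
    donations = pd.read_csv("local_election_donations.csv")

    # Total contributions per candidate, largest first – a query that takes seconds
    # with structured data and hours with scanned PDFs.
    totals = donations.groupby("candidate")["amount"].sum().sort_values(ascending=False)
    print(totals.head(10))

Nothing here is sophisticated, and that is the point: with structured data the analysis is a few lines of code, while with scanned ledgers it starts with manual data entry.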

Yesterday the taskforce published their conclusions and… they kind of got it right.

At first blush, things look great… The press release and the taskforce homepage list the following as one of the core recommendations:

Establish a central role for Elections BC in enforcement of campaign finance rules and in making campaign finance disclosure statements electronically accessible

Looks promising… yes? Right. But note the actual report (which, ironically, is only available as a PDF, so I can’t link to the specific recommendations… sigh). The recommendation around disclosure reads:

Require campaign finance disclosure information to be published online and made centrally accessible through Elections BC

and the explanatory text reads:

Many submissions suggested that 120 days is too long to wait for disclosure reports, and that the public should be able to access disclosure information sooner and more easily. Given the Task Force’s related recommendations on Elections BC’s role in overseeing local campaign finance rules, it is suggested that Elections BC act as a central repository of campaign finance disclosure statements. Standardizing disclosure statement forms is of practical importance if the statements are to be published online and centrally available, and would help members of the public, media and academia analyze the information. [my italics]

My take? The spirit of the recommendation is that campaign finance data be machine readable – that you should be able to download it, open it, and play with it on your own computer. However, a literal reading of this text suggests that simply scanning account ledgers and sharing them as image files or unstructured PDFs might suffice.

This would essentially be the same thing that generally happens today and so would not mark a step forward. Another equally bad outcome? That the information gets shared in a manner similar to the way federal MP campaign data is shared on the Elections Canada website, where it cannot be easily downloaded and you are only allowed to look at one candidate’s financial data at a time. (The Elections Canada site is almost designed to prevent you from effectively analyzing campaign finance data.)

So, in short, the Taskforce members are to be congratulated, as I think their intentions were bang on: they want the public to be able to access and analyze campaign finance data. But we will need to continue to monitor this issue carefully, as the language is vague enough that the recommendation may not produce the desired outcome.