Tag Archives: gov20

Lessons from Michigan's "Innovation Fund" for Government Software

So it was with great interest that I read this news article out of Michigan, which a reader emailed me several weeks ago. Turns out the state recently approved a $2.5 million innovation fund that will be disbursed in $100,000 to $300,000 chunks to fund about 10 projects. As Government Technology reports:

The $2.5 million innovation fund was approved by the state Legislature in Michigan’s 2012 budget. The fund was made formal this week in a directive from Gov. Rick Snyder. The fund will be overseen by a five-person board that includes Michigan Department of Technology, Management and Budget (DTMB) Director John Nixon and state CIO David Behen.

There are lessons in this for other governments thinking about how to spur greater innovation in government while also reducing the cost of software.

First up: the idea of an innovation fund – particularly one that is designed to support software that works for multiple governments – is a laudable one. As I’ve written before, many governments overpay for software. I shudder to think of how many towns and counties in Michigan alone are paying to have the exact same software developed for them independently. Rather than writing the same piece of software over and over again for each town, getting a single version that is usable by 80% (or heck, even just 25%) of cities and counties would be a big win. We have to find a way to get governments innovating faster, and getting them back in the driver’s seat on the software they need (as opposed to adapting stuff made for private companies) would be a fantastic start.

Going from this vision – of getting something that works in multiple cities – to reality is not easy. Read the Executive Directive more closely. What’s particularly interesting (from my reading) is the flexibility of the program:

In addition to the Innovation Fund and Investment Board, the plan may include a full range of public, private, and non-profit collaborative innovation strategies, including resource sharing…

There is good news and bad news here.

The bad news is that all this money could end up as loans to mom-and-pop software shops that serve a single city or jurisdiction, because their products were never designed from the beginning to be usable across multiple jurisdictions. In other words, the innovation fund could go to fund a bunch of vendors who already exist and who, at best, do okay or, at worst, do mediocre work and, in either case, will never be disruptive and blow up the marketplace with something that is both radically helpful and radically low cost.

What makes me particularly nervous about the directive is that there is no reference to open source licensing. If a government is going to directly fund the development of software, I think it should be open source; otherwise, taxpayers are acting as venture capitalists to develop software that they are also going to pay licenses to use. In other words, they’re absorbing the risk of a VC in order to have the limited rights of being a client; that doesn’t seem right. An open source requirement would be the surest way to ensure an ROI on the program’s money. It assures that Michigan governments that want access to what gets developed can use it at the lowest possible cost. (To be clear, I’ve no problem with private vendors – I am one – but their software can be closed because they should be absorbing the risk of developing it themselves. If the government is giving out grants to develop software for government use, the resulting software should be licensed open.)

Which brings us to the good. My interest in the line of the executive directive cited above was piqued by the reference to public and non-profit “collaborative innovation strategies.” I read that and I immediately think of one of my favourite organizations: Kuali.

Many readers have heard me talk about Kuali, an organization in which a group of universities collectively sets the specs for a piece of software they all need and then shares in the costs of developing it. I’m a big believer that this model could work for local and even state-level governments. This is particularly true for enterprise management software packages (like financial management), for which cities usually buy over-engineered, feature-rich bloatware from organizations like SAP. The savings in all this could be significant, particularly for the mid-sized cities for which this type of software is overkill.

My real hope is that this is the goal of this fund – to help provide some seed capital to start 10 Kuali-like projects. Indeed, I have no idea if the governor and his CIO’s staff heard of or talked to the Kuali team before signing this directive, but if they haven’t, they should now. (Note: It’s only a five-hour drive from the capital, Lansing, Michigan, to the home of Kuali in Bloomington, Indiana.)

So, if you are a state, provincial or national government and you are thinking about replicating Michigan’s directive – what should you do? Here’s my advice:

  • Require that all the code created by any projects you fund be open source. This doesn’t mean that just anyone gets to control the specs – those can still reside in the hands of a small group of players – but it does mean that a variety of companies can get involved in implementation so that there is still competition and innovation. This was the genius of Kuali – in the space of a few months, 10 different companies emerged that serviced Kuali software. In other words, the universities created an entire industry niche that served them and their specific needs exclusively. Genius.
  • Only fund projects that have at least 3 jurisdictions signed up. Very few enterprise open source projects start off with a single entity. Normally they are spec’ed out with several players involved. This is because if just one player is driving the development, they will rationally always choose to take shortcuts that will work for them, but cut down on the likelihood the software will work for others. If, from the beginning, you have to balance lots of different needs, you end up architecting your solution to be flexible enough to work in a diverse range of environments. You need that if your software is going to work for several different governments.
  • Don’t provide the funds, provide matching funds. One way to ensure governments have skin in the game and will actually help develop the software is to make them help pay for its development. If a city or government agency is devoting $100,000 towards helping develop a software solution, you’d better believe they are going to try to make it work. If the State of Michigan is paying for everything, the other parties may contribute and be helpful, or they may sit back and see what happens. Ensure they do the former and not the latter – make sure the other parties have skin in the game.
  • Don’t just provide funds for development – provide funds to set up the organization that will coordinate the various participating governments and companies, set out the specs, and project manage the development. Again, to understand what that is like – just fork Kuali’s governance and institutional structure.
  • Ignore government agencies or jurisdictions that believe they are a special unique flower. One of the geniuses of Kuali is that it abstracted the process/workflow layer. That way universities could quickly and easily customize the software so that it worked with how their university does its thing. This was possible not because the universities recognized they were each a unique and special flower but because they recognized that for many areas (like library or financial management) their needs are virtually identical. Find partners that look for similarities, not those who are busy trying to argue they are different.

There is of course more, but I’ll stop there. I’m excited for Michigan. This innovation fund has real promise. I just hope that it gets used to be disruptive, and not to simply fund a few slow and steady (and stodgy) software incumbents that aren’t going to shake up the market and help change the way we do government procurement. We don’t need to spend $2.5 million to get software that is marginally better (or not even). Governments already spend billions every year for that. If we are going to spend a few million to innovate, let’s do it to be truly disruptive.

Mainstreaming The Gov 2.0 Message in the Canadian Public Service

A couple of years ago I wrote a Globe op-ed, “A Click Heard Across the Public Service,” that outlined the significance of the clerk using GCPEDIA to communicate with public servants. It was a message – or even more importantly, an action – to affirm his commitment to change how government works. For those unfamiliar, the Clerk of the Privy Council is the head of the public service for the federal government; a crude analogy would be that he is the CEO and the Prime Minister is the Chairman (yes, I know that analogy is going to get me in trouble with people…)

Well, the clerk continues to broadcast that message, this time in his Nineteenth Annual Report to the Prime Minister on the Public Service of Canada. As an observer in this space what is particularly exciting for me is that:

  • The Clerk continues to broadcast this message. Leadership and support at the top is essential on these issues. It isn’t sufficient, but it is necessary.
  • The role of open data and social media is acknowledged on several occasions.

And as a policy entrepreneur, what is doubly exciting is that:

  • Projects I’ve been personally involved in get called out; and
  • Language I’ve been using in briefs, blog posts and talks to public servants is in this text.

You can, of course, read the whole report here. There is much more in it than just talk of social media and rethinking the public service; there is obviously talk about the budget and other policy areas as well. But both the continued prominence given to renewal and technology, and the explicit statements about the failure to move fast enough to keep up with the speed of change in society at large, suggest that the clerk continues to be worried about this issue.

For those less keen to read the whole thing, here are some juicy bits that mattered to me:

In the section “The World in Which We Serve,” which basically provides context…

At the same time, the traditional relationship between government and citizens continues to evolve. Enabled by instantaneous communication and collaboration technologies, citizens are demanding a greater role in public policy development and in the design and delivery of services. They want greater access to government data and more openness and transparency from their institutions.

Under “Our Evolving Institution” which lays out some of the current challenges and priorities we find this as one of the four areas of focus mentioned:

  • The Government expanded its commitment to Open Government through three main streams: Open Data (making greater amounts of government data available to citizens), Open Information (proactively releasing information about Government activities) and Open Dialogue (expanding citizen engagement with Government through Web 2.0 technologies).

This is indeed interesting. The more this government talks about openness in general, the more interesting it will be to see how the public reacts, particularly with regard to its treatment of certain sectors (e.g. environmental groups). Still more interesting is what appears to be a growing recognition of the importance of data (from a government that cut the long-form census). Just yesterday the Health Minister, while talking about a controversial multiple sclerosis vein procedure, stated that:

“Before our government will give the green light to a limited clinical trial here in Canada, the proposed trial would need to receive all necessary ethical and medical approvals. As Minister of Health, when it comes to clinical issues, I rely on advice from doctors and scientists who are continually monitoring the latest research, and make recommendations in the best interests of patient health and safety.”

This is, to put it mildly, an interesting statement from a government that called doctors “unethical” because of their support for the Insite injection site which, the evidence shows, is the best way to save lives and get drug users into detox programs.

For evidence-based policy advocates – such as myself – the adoption of the language of data could help refocus debates onto more productive terrain.

Then towards the bottom of the report there is a call-out that mentions the Open Policy conference at DFAIT that I had the real joy of helping convene and serving as host and facilitator for.

Policy Built on Shared Knowledge
The Department of Foreign Affairs and International Trade (DFAIT) has been experimenting with an Open Policy Development Model that uses social networking and technology to leverage ideas and expertise from both inside and outside the department. A recent full-day event convened 400 public and private sector participants and produced a number of open policy pilots, e.g., an emergency response simulation involving consular officials and a volunteer community of digital crisis-mappers.

DFAIT is also using GCConnex, the Public Service’s social networking site, to open up policy research and development to public servants across departments.

This is a great and much-deserved win for the team at DFAIT, which went out on a limb to run this conference and was rewarded with participation from across the public service.

Finally, anyone who has seen me speak will recognize a lot of this text as well:

As author William Gibson observed, “The future is already here, it’s just unevenly distributed.” Across our vast enterprise, public servants are already devising creative ways to do a better job and get better results. We need to shine a light on these trailblazers so that we can all learn from their experiments and build on them. Managers and senior leaders can foster innovation—large and small—by encouraging their teams to ask how their work can be done better, test out new approaches and learn from mistakes.

So much innovation in the 21st century is being made possible by well-developed communication technologies. Yet many public servants are frustrated by a lack of access to the Web 2.0 and social media tools that have such potential for helping us transform the way we work and serve Canadians. Public servants should enjoy consistent access to these new tools wherever possible. We will find a way to achieve this while at the same time safeguarding the data and information in our care.

I also encourage departments to continue expanding the use of Web 2.0 technologies and social media to engage with Canadians, share knowledge, facilitate collaboration, and devise new and efficient services.

To fully attribute: the William Gibson quote, which I use a great deal, was something I first saw used by my friend Tim O’Reilly who is, needless to say, a man with a real ability to understand a trend and explain an idea to people. I hope his approach to thinking is reflected in much of what I do.

What, in sum, all these call outs really tell us is that the Gov 2.0 message in the federal public service is being mainstreamed, at the very least among the most senior public servants. This does not mean that our government is going to magically transform, it simply means that the message is getting through and people are looking for ways to push this type of thinking into the organization. As I said before, this is not sufficient to change the way government works, but it is necessary.

Going to keep trying to see what I can do to help.

Canada's Action Plan on Open Government: A Review

The other day the Canadian Government published its Action Plan on Open Government, a high-level document that both lays out the Government’s goals on this file and fulfills its pledge to create tangible goals as part of its participation in next week’s Open Government Partnership 2012 annual meeting in Brazil.

So what does the document say and what does it mean? Here is my take.

Take Away #1: Not a breakthrough document

There is much that is good in the government’s action plan – some of which I will highlight later. But for those hoping that Canada was going to get the Gov 2.0 bug and try to leapfrog leaders like the United States or the United Kingdom, this document will disappoint. By and large this document is not about transforming government – even at its most ambitious it appears to be much more about engaging in some medium sized experiments.

As a result the document emphasizes a number of things that the UK and US started doing several years ago, such as adopting a license that adheres to international norms, posting government resource allocation and performance management information online in machine-readable formats, and refining the open data portal.

What you don’t see are explicit references to efforts to rethink how government leverages citizens’ experience and knowledge with a site like Challenge.gov, engage experts in innovative ways such as with Peer to Patent, or work with industries or provinces to generate personal open data as the US has done with the Blue Button (for healthcare) or the Green Button (for utilities).

Take Away #2: A Solid Foundation

This said, there is much in the document that is good. Specifically, in many areas, it does lay a solid foundation for some future successes. Probably the most important statements are the “foundational commitments” that appear on this page. Here are some key points:

Open Government Directive

In Year 1 of our Action Plan, we will confirm our policy direction for Open Government by issuing a new Directive on Open Government. The Directive will provide guidance to 106 federal departments and agencies on what they must do to maximize the availability of online information and data, identify the nature of information to be published, as well as the timing, formats, and standards that departments will be required to adopt… The clear goal of this Directive is to make Open Government and open information the ‘default’ approach.

This last sentence is nice to read. Of course the devil will be in the details (and in the execution) but establishing a directive around open information could end up being as important (although admittedly not as powerful – an important point) as the establishment of Access to Information. Done right, such a directive could vastly expand the range of documents made available to the public, something that should be very doable as more and more government documentation moves into digital formats.

For those complaining about the lack of ATI reform in the document, this directive and its creation will be worth exploring further. There is an enormous opportunity here to reset how government discloses information – and the “default to open” line creates a public standard that we can try to hold the government to account on.

And of course the real test for all this will come in years 2-3 when it comes time to disclose documents around something sensitive to the government… like, say, the issue of the Northern Gateway Pipeline (or something akin to the Afghan prisoner issue). In theory this directive should make all government research and assessments open; when that moment comes, we’ll have a real test of the robustness of any such directive.

Open Government License:

To support the Directive and reduce the administrative burden of managing multiple licensing regimes across the Government of Canada, we will issue a new universal Open Government License in Year 1 of our Action Plan with the goal of removing restrictions on the reuse of published Government of Canada information (data, info, websites, publications) and aligning with international best practices… The purpose of the new Open Government License will be to promote the re-use of federal information as widely as possible...

Full Disclosure: I have been pushing (in an unpaid capacity) for the government to reform its license and helping out in its discussions with other jurisdictions around how it can incorporate the best practices and most permissive language possible.

This is another important foundational piece. To be clear, this is not about an “open data” license. This is about creating a license for all government information and media. I suspect this appeals to this government in part because it ends the craziness of having lawyers across government constantly re-inventing new licenses and creating a complex set of licenses to manage. Let me be clear about what I think this means: this is functionally about neutering crown copyright. It’s about creating a licensing regime that makes very clear what the user’s rights are (which crown copyright does not do) and that is as permissive as possible about re-use (which crown copyright, because of its lack of clarity, is not). Achieving such a license is a critical step towards doing many of the more ambitious open government and gov 2.0 activities that many of us would like to see happen.

Take Away #3: The Good and Bad Around Access to Information

For many, I think the biggest disappointment may be that the government has chosen not to try to update the Access to Information Act, even though this is what the Access to Information Commissioners from across the country recommended in an open letter (recommendation #2 in their letter). Opening up the act likely carries a number of political risks – particularly for a government that has not always been forthcoming with documents (the Afghan detainee issue and the F-35 contract both come to mind) – however, I again propose that it may be possible to achieve some of the objectives around improved access through the Open Government Directive.

What I think shouldn’t be overlooked, however, is the government’s “experiment” around modernizing the administration of Access to Information:

To improve service quality and ease of access for citizens, and to reduce processing costs for institutions, we will begin modernizing and centralizing the platforms supporting the administration of Access to Information (ATI). In Year 1, we will pilot online request and payment services for a number of departments allowing Canadians for the first time to submit and pay for ATI requests online with the goal of having this capability available to all departments as soon as feasible. In Years 2 and 3, we will make completed ATI request summaries searchable online, and we will focus on the design and implementation of a standardized, modern, ATI solution to be used by all federal departments and…

These are welcome improvements. As one colleague – James McKinney – noted, the fact that you currently have to pay with a check means that only people with Canadian bank accounts can make ATIP requests. This largely means just Canadian citizens. This is ridiculous. Moreover, the process is slow and painful (who uses checks! The Brits are phasing them out by 2018 – good on em!). The use of checks creates a real barrier – particularly, I think, for young people.

Also, being able to search summaries of previous requests is a no-brainer.

Take Away #4: This is a document of experiments

As I mentioned earlier, outside the foundational commitments, the document reads less like a grand experiment and more like a series of small experiments.

Here the Virtual Library is another interesting commitment – certainly during the consultations the number one complaint was that people have a hard time finding what they are looking for on government websites. Sadly, even if you know the name of the document you want, it is still often hard to find. A virtual library is meant to address this concern – obviously it is all going to be in the implementation – but it is a response to a genuine expressed need.

Meanwhile the Advancing Recordkeeping in the Government of Canada and User-Centric Web Services feel like projects that were maybe already in the pipeline before Open Government came on the scene. They certainly do conform with the shared services and IT centralization announced by Treasury Board last year. They could be helpful but honestly, these will all be about execution since these types of projects can harmonize processes and save money, or they can become enormous boondoggles that everyone tries to work around since they don’t meet anyone’s requirements. If they do go the right way, I can definitely imagine how they might help the management of ATI requests (I have to imagine it would make it easier to track down a document).

I am deeply excited about the implementation of the International Aid Transparency Initiative (IATI). This is something I’ve campaigned for and urged the government to adopt, so it is great to see. I think these types of cross-jurisdictional standards have a huge role to play in the open government movement, so joining one, figuring out what about the implementation works and doesn’t work, and assessing its impact is important both for Open Government in general and for Canada, as it will let us learn lessons that, I hope, will become applicable in other areas as more of these types of standards emerge.

Conclusion:

I think it was always going to be a stretch to imagine Canada taking a leadership role in the Open Government space, at least at this point. Frankly, we have a lot of catching up to do just to draw even with places like the US and the UK, which have been working hard to keep experimenting with new ideas in the space. What is promising about the document is that it does present an opportunity for some foundational pieces to be put into play. The bad news is that real efforts to rethink government’s relationship with citizens, or even the role of the public servant within a digital government, have not been taken very far.

So… a C+?


Additional disclaimer: As many of my readers know, I sit on the Federal Government’s Open Government Advisory Panel. My role on this panel is to serve as a challenge function to the ideas that are presented to us. In this capacity I share with them the same information I share with you – I try to be candid about what I think works and doesn’t work around ideas they put forward. Interestingly, I did not see even a draft version of the Action Plan until it was posted to the website and was (obviously by inference) not involved in its creation. Just want to share all that to be, well, transparent, about where I’m coming from – which remains as a citizen who cares about these issues and wants to push governments to do more around gov 2.0 and open gov.

Also, sorry for the typos, but I’m sick and it is 1am. So I’m checking out. Will proofread again when I awake.

Using BHAGs to Change Organizations: A Management, Open Data & Government Mashup

I’m a big believer in the ancillary benefits of a single big goal. Set a goal that has one clear objective but that, as a result, forces a bunch of other things to change as well.

So one of my favourite Big Hairy Audacious Goals (BHAGs) for an organization is to go paperless. I like the goal for all sorts of reasons. Much like a true BHAG it is clear, compelling, and has an obvious “finish line.” And while hard, it is achievable.

It has the benefit of potentially making the organization more “green” but, what I really like about it is that it requires a bunch of other steps to take place that should position the organization to become more efficient, effective and faster.

This is because paper is dumb technology. Among many, many other things, information on paper can’t be tracked, changes can’t be noted, pageviews can’t be recorded, data can’t be linked. It is hard to run a lean business when you’re using paper.

Getting rid of it often means you have to get a better handle on workflow and processes so they can be streamlined. It means rethinking the tools you use. It means getting rid of checks and moving to direct deposit, moving off letters and onto email, getting your documents, agendas, meeting minutes, policies and god knows what else out of MS Word and onto wikis, and shifting from printed product manuals to PDFs or, better still, YouTube videos. These changes in turn require a rethinking of how your employees work together and the skills they require.

So what starts off as a simple goal – getting rid of paper – pretty soon requires some deep organizational change. Of course, rallying cries like “more efficient processes!” or “better understanding our workflow!” have pretty limited appeal and can be hard for everyone to wrap their head around. But “getting rid of paper”? It is simple, clear and, frankly, something that everyone in the organization can probably contribute an idea towards achieving. And it will achieve many of the less sexy but more important goals.

Turns out some governments may be thinking this way.

The State of Oklahoma has a nice website that talks about all their “green” initiatives. Of course, it just so happens that many of these initiatives – reducing travel, getting rid of paper, etc. – also reduce costs and improve service, but are easier to measure. I haven’t spoken with anyone at the State of Oklahoma to see if this is the real goal, but the website seems to acknowledge that it is:

OK.gov was created to improve access to government, reduce service-processing costs and enable state agencies to provide a higher quality of service to their constituents.

So for Oklahoma, going paperless becomes a way to get at some larger transformations. Nice BHAG. Of course, as with any good BHAG, you can track these changes and share them with your shareholders, stakeholders or… citizens.

And behold! The Oklahoma go green website invites different state agencies to report data on how their online services reduce paper consumption and/or carbon emissions. Data that they in turn track and share with the public via the state’s Socrata data portal. This graph shows how much agencies have reduced their paper output over the past four years.

Notice how some departments have no data – if I were an Oklahoma taxpayer, I’m not too sure I’d be thrilled with them. But take a step back. This is a wonderful example of how transparency and open data can help drive a government initiative. Not only can that data make it easier for the public to understand what has happened (and so be more readily engaged) but it can help cultivate a culture of accountability as well as – and perhaps more importantly – promote a culture of metrics that I believe will be critical for the future of government.

I often say to governments: “be strategic about how you use some of the data you make open.” Don’t just share a bunch of stuff; use what you share to achieve policy or organizational objectives. This is a great example. It’s also potentially a great example of organizational change in a large and complex environment. Interesting stuff.



Next Generation Open Data: Personal Data Access

Background

This Monday I had the pleasure of being in Mexico City for the OECD’s High Level Meeting on e-Government. CIOs from a number of countries were present – including Australia, Canada, the UK and Mexico (among others). But what really got me going was a presentation by Chris Vein, the Deputy United States Chief Technology Officer for Government Innovation.

In his presentation he referenced work around the Blue Button and the Green Button – both efforts I was previously familiar with. But my conversation with Chris sparked several new ideas and reminded me of just how revolutionary these initiatives are.

For those unacquainted with them, here’s a brief summary:

The Blue Button Initiative emerged out of the US Department of Veterans Affairs (VA) with a simple goal: create a big blue button on their website that would enable a logged-in user to download their health records. That way they can share those records with whomever they wish – a new doctor, a hospital, an application – or even just look at them themselves. The idea has been deemed so good, so important and so popular that it is now being championed as an industry standard, something that not just the VA but all US health providers should adopt.

The Green Button Initiative is similar. I first read about it on ReadWriteWeb under the catchy and insightful title “Green Button” Open Data Just Created an App Market for 27M US Homes. Essentially the Green Button would enable users to download their energy consumption data from their utility. In the United States 9 utilities have already launched Green Buttons and an app ecosystem – applications that would enable people to monitor their energy use – is starting to emerge. Indeed Chris Vein talked about one app that enabled a user to see their thermostat in real time and then assess the financial and environmental implications of raising and/or lowering it. I personally see the Green Button evolving into an API that you can give others access to… but that is a detail.
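To make this concrete, here is a minimal sketch of what an app consuming a Green Button download might do. Green Button data is published as an Atom feed using the ESPI schema; the namespace URIs and element names below reflect my reading of that schema and should be treated as assumptions, not a tested client.

    # A minimal sketch: summing the interval readings in a Green Button
    # (ESPI) XML export. Namespace URIs and element names are assumptions
    # based on the published ESPI schema -- verify against a real export.
    import xml.etree.ElementTree as ET

    NS = {
        "atom": "http://www.w3.org/2005/Atom",
        "espi": "http://naesb.org/espi",
    }

    def total_usage(path):
        """Sum all interval readings (in the feed's native units, typically Wh)."""
        tree = ET.parse(path)
        total = 0
        for reading in tree.getroot().iterfind(".//espi:IntervalReading", NS):
            value = reading.find("espi:value", NS)
            if value is not None:
                total += int(value.text)
        return total

    print(total_usage("greenbutton_export.xml"))

A few dozen lines like these, multiplied across every utility that adopts the standard, is exactly the app market the ReadWriteWeb piece was describing.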

Why it Matters

Colleagues like Nigel Shadbolt in the UK have talked a lot about enabling citizens to get their data out of websites like Facebook. And Google has its own very laudable Data Liberation Front run by great guy and werewolf expert Brian Fitzpatrick. But what makes the Green Button and Blue Button initiatives unique and important is that they create a common industry standard for sharing consumer data. This creates incentives for third parties to develop applications and websites that can analyze this data, because these applications will scale across jurisdictions – hence the ReadWriteWeb article’s focus on a new market. It also makes the data easy to share. Healthcare records downloaded using the Blue Button are easily passed on to a new doctor or a new hospital, since people can now design systems to consume these healthcare records. Most importantly, it gives individuals the option of sharing these records themselves so they don’t have to wait for lumbering bureaucracies.

This is a whole new type of open data. Open not to the public but to the individual to whom the data really belongs.

A Proposal

I would love to see the Blue Button and Green Button initiatives spread to companies and jurisdictions outside the United States. There is no reason why, for example, there cannot be Blue Buttons on provincial health care websites in Canada or in the UK. Nor is there any reason why provincial energy corporations like BC Hydro or Bullfrog Energy (there’s a progressive company that would get this) couldn’t implement the Green Button. Doing so would enable Canadian software developers to create applications that use this data, help citizens, and tap into the US market. Conversely, Canadian citizens could tap into applications created in the US.

The opportunity here is huge. Not only could this revolutionize citizens’ access to their own health and energy consumption data, it would reduce the costs of sharing health care records, which in turn could potentially create savings for the industry at large.

Action

If you are a consumer, tell your local health agency, insurer and energy utility about this.

If you are an energy utility or Ministry of Health and are interested in this – please contact me.

Either way, I hope this is interesting. I believe there is huge potential in Personal Open Data, particularly around data currently held by crown corporations and in critical industries, like healthcare.

Data.gc.ca – Data Sets I found that are interesting, and some suggestions

Yesterday was the one-year anniversary of the Canadian federal government’s open data portal. Over the past year government officials have been continuously adding to the portal, but as it isn’t particularly easy to browse data sets on the website, I’ve noticed a lot of people aren’t aware of what data is now available (myself included!). Consequently, I want to encourage people to scan the available data sets and blog about ones that they think might be interesting to them personally, to others, or to communities of interest they may know.

Such an undertaking has been rendered MUCH easier thanks to the data.gc.ca administrators’ decision to publish a list of all the data sets available on the site. Turns out, there are 11,680 data sets listed in this file. Of course, reviewing all this data took me much longer than I thought it would (and to be clear, I didn’t explore each one in detail), but the process has been deeply interesting. Below are some thoughts, ideas and data sets that have come out of this exploration – I hope you’ll keep reading, and that it will be of interest to ordinary citizens, prospective data users and managers of open government data portals.


A TagCloud of the Data Sets on data.gc.ca

Some Brief Thoughts on the Portal (and for others thinking about exploring the data)

Trying to review all the data sets on the portal is an enormous task, and trying to do it has taught me some lessons about what works and what doesn’t. The first is that, while the search function on the website is probably good if you have a keyword or a specific data set you are looking for, it is much easier to browse the data in an Excel file than on the website. What was particularly nice about this is that, in Excel, the data was often clustered by type. This made it easy to spot related data sets – a great example: when I found the data on “Building permits, residential values and number of units, by type of dwelling,” I could immediately see there were about 12 other data sets on building permits available.

Another issue that became clear to me is the problem of how a data set is classified. For example, because of the way the data is structured (really as a report) the Canadian Dairy Exports data has a unique data file for every month and year (you can look at May 1988 as an example). That means each month is counted as a unique “data set” in the catalog. Of course, French and English versions are also counted as unique. This means that what I would consider to be a single data set “Canadian Dairy Exports Month Dairy Year from 1988 to present” actually counts as 398 data sets. This has two outcomes. First, it is hard to imagine anyone wants the data for just one month. This means a user looking for longitudinal data on this subject has to download 199 distinct data sets (very annoying). Why not just group it into one? Second, given that governments like to keep score about how many data sets they share – counting each month as a unique data set feels… unsportsmanlike. To be clear, this outcome is an artifact of how Agriculture Canada gathers and exports this data, but it is an example of the types of problems an open data catalog needs to come to grips with.
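As an aside, the workaround a user faces today is scripted stitching: fetch each monthly file and combine them. Here is a rough sketch of what that looks like in Python – the URL pattern is hypothetical (the real per-month links would come from the catalog listing), so treat this as an illustration of the burden, not a working downloader.

    # Stitching 25 years of monthly "data sets" back into one table.
    # The URL pattern below is hypothetical -- substitute the real
    # per-month links from the data.gc.ca catalog listing.
    import io

    import pandas as pd
    import requests

    BASE = "https://example.gc.ca/dairy-exports/{year}-{month:02d}.csv"  # hypothetical

    frames = []
    for year in range(1988, 2013):
        for month in range(1, 13):
            resp = requests.get(BASE.format(year=year, month=month))
            if resp.ok:
                frames.append(pd.read_csv(io.StringIO(resp.text)))

    combined = pd.concat(frames, ignore_index=True)
    combined.to_csv("dairy_exports_1988_to_present.csv", index=False)

That every user has to write (and re-run) something like this, rather than the catalog offering one grouped file, is precisely the problem.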

Finally, many users – particularly, but not exclusively, developers – are looking for data that is up to date. Indeed, real-time data is particularly sexy, since its dynamic nature means you can do interesting things with it. Thus it was frustrating to occasionally find data sets that were no longer being collected. A great example of this was the Provincial allocation of corporate taxable income, by industry. This data set jumped out at me as I thought it could be quite interesting. Sadly, StatsCan stopped collecting this data in 1987, so any visualization will have limited use today. This is not to say data like this should be pulled from the catalog, but it might be nice to distinguish between data sets that are being collected on an ongoing basis and those that are no longer being updated.

Data Sets I found Interesting

Just quickly before I begin, some thoughts on my very unscientific methodology for identifying interesting data sets.

  • First, browsing the data sets really brought home to me how many will be interesting to different groups – we really are in the world of the long tail of public policy. As a result, there is lots of data that I think will be interesting to many, many people that is not on this list.
  • Second, I tried to not include too much of StatsCan’s data. StatsCan data already has a fairly well developed user base. And while I’m confident that base is going to get bigger still now that its data is free, I figure there are already a number of people who will be sharing/talking about it.
  • Finally, I’ve tried to identify some data sets that I think would make for good mashups or apps. This isn’t easy with federal government data sets since they tend to be more aggregate and high-level than, say, municipal data sets… but I’ve tried to tease out what I can. That said, I’m sure there is much, much more.

New GeoSpatial API!

So the first data set is a little bit of a cheat since it is not on the open data portal, but I was emailed about it yesterday and it is so damn exciting, I’ve got to share it. It is a recently released public BETA of a new RESTful API from the very cool people at GeoGratis that provides a consolidated access point to several repositories of geospatial data and information products, including GeoGratis, GeoPub and Mirage. (Huge thank you to the GeoGratis team for sending this to me.)

Documentation can be found here (and in French here) and a sample search client that demonstrates some of its functionality and how to interact with the API can be found here. Formats include ATOM, HTML Fragment, CSV, RSS, JSON, and KML, so you can see results – for example – in Google Earth by using the KML format (example here).

I’m also told that these fine folks have been working on a geolocation service, so you can do sexy things like search by place name, by NTS map or by the first three characters of a postal code. Documentation will be posted here in English and French. Super geeks may notice that there is a field in the JSON called CGNDBkey. I’m also told you can use this key to select an individual place name according to the Canadian Geographic names board. Finally, you can also search all their metadata through search engines like Google (here is a sample search for gold they sent me).

All data is currently licensed under the GeoGratis license.
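For developers wanting to poke at the API, a request might look something like the sketch below. I haven’t verified the endpoint path, parameter names or response shape against the beta documentation, so all three are assumptions.

    # A sketch of querying the GeoGratis search API for JSON results.
    # The endpoint path, parameters and response fields are assumptions,
    # not a verified client -- check the beta documentation.
    import requests

    resp = requests.get(
        "https://geogratis.gc.ca/api/beta/search",  # hypothetical path
        params={"q": "gold", "format": "json"},     # hypothetical parameters
    )
    resp.raise_for_status()
    for product in resp.json().get("products", []):  # hypothetical field
        print(product.get("title"))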

The National Pollutant Release Inventory

Description: The National Pollutant Release Inventory (NPRI) is Canada’s public inventory of pollutant releases (to air, water and land), disposals and transfers for recycling.

Notes: This is the same data set (but updated) that we used to create emitter.ca. I frankly feel like the opportunities around this data set – for environmentalists, investors (concerned about regulatory and lawsuit risks), the real estate industry, and others – are enormous. The public could be very interested in this.

Greenhouse Gas Emissions Reporting Program

Description: The Greenhouse Gas Emissions Reporting Program (GHGRP) is Canada’s legislated, publicly-accessible inventory of facility-reported greenhouse gas (GHG) data and information.

Notes: What’s interesting here is that while it doesn’t have lat/longs, it does have facility names and addresses. That means you should be able to cross-reference it with the NPRI (which does have lat/longs) to plot where the big greenhouse gas emitters are on a map. I think the same people as with the NPRI might be interested in this data. A sketch of that cross-reference follows below.
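The cross-reference itself is a simple join. Here is a rough pandas sketch, assuming you have exported both inventories to CSV; the file and column names are hypothetical, and the real extracts would need inspecting (and the name matching would need fuzzier logic) before trusting the result.

    # Joining GHGRP facilities (no coordinates) to NPRI facilities
    # (which have lat/longs) on a normalized facility name.
    # File names and column names are hypothetical placeholders.
    import pandas as pd

    ghgrp = pd.read_csv("ghgrp_facilities.csv")  # hypothetical extract
    npri = pd.read_csv("npri_facilities.csv")    # hypothetical extract

    # Crude normalization; real data would need fuzzier matching.
    ghgrp["key"] = ghgrp["facility_name"].str.upper().str.strip()
    npri["key"] = npri["facility_name"].str.upper().str.strip()

    mapped = ghgrp.merge(
        npri[["key", "latitude", "longitude"]], on="key", how="inner"
    )
    mapped.to_csv("ghg_emitters_with_coords.csv", index=False)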

The Canadian Ice Thickness Program

Description: The Ice Thickness program dataset documents the thickness of ice on the ocean. Measurements begin when the ice is safe to walk on and continue until it is no longer safe to do so. This data can help gauge the impact of global warming and is relevant to shipping data in the north of Canada.

Notes: Students interested in global warming… this could make for some fun visualization.

Argo: Canadian Tracked Data

Description: Argo Data documents some of the approximately 3,000 profiling floats deployed around the world. Once at sea, a float sinks to a preprogrammed target depth of 2000 meters for a preprogrammed period of time. It then floats to the surface, taking temperature and salinity values during its ascent at set depths. The Canadian Tracked Argo Data describes the Argo programme in Canada and provides data and information about Canadian floats.

Notes: Okay, so I can think of no use for this data, but I just thought it was so awesome that people are doing this that I totally geeked out.

Civil Aircraft Register Database

Description: Civil Aircraft Register Database – this file contains the current mark, aircraft and owner information of all Canadian civil registered aircraft.

Notes: Here I really think there could be a geeky app. Just a simple app that you can type an aircraft’s number into and it will tell you the owner and details about the plane. I actually think the government could do a lot of work with this data. If regulatory and maintenance data were made available as well, then you’d have a powerful app that would tell you a lot about the planes you fly in. At a minimum it would be of interest to flight enthusiasts.

Real Time Hydrometric Data Tool

Description: Real Time Hydrometric Data Tool – this site provides public access to real-time hydrometric (water level and streamflow) data collected at over 1700 locations in Canada. These data are collected under a national program jointly administered under federal-provincial and federal-territorial cost-sharing agreements. It is through partnerships that the Water Survey of Canada program has built a standardized and credible environmental information base for Canada. This dataset contains both current and historical datasets. The current month can be viewed in an HTML table, and historical data can be downloaded in CSV format.

Notes: So ripe for an API! What is cool is that the people at Environment Canada have integrated it into Google Maps. I could imagine fly fishermen and communities at risk of flooding being interested in this data set.
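Since the historical data comes down as CSV, even a few lines of scripting gets you from download to chart. A sketch, assuming a direct CSV link per station – the URL and column names here are hypothetical; the real ones come from the Water Survey of Canada site.

    # Plotting a station's historical water levels from a CSV download.
    # The URL and column names are hypothetical placeholders.
    import matplotlib.pyplot as plt
    import pandas as pd

    url = "https://example.gc.ca/hydrometric/station_08MF005.csv"  # hypothetical
    levels = pd.read_csv(url, parse_dates=["date"])
    levels.plot(x="date", y="water_level_m", title="Station 08MF005 water level")
    plt.show()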

Access to information data sets

Description: 2006-2010 Access to Information and Privacy Statistics (With the previous years here, here and here.) is a compilation of statistical information about access to information and privacy submitted by government institutions subject to the Access to Information Act and the Privacy Act for 2006-2010.

Notes: I’d love to crunch this stuff again and see who’s naughty and nice in the ATIP world…

Poultry and Forestry data

No links, BECAUSE THERE IS SO MUCH OF IT. Anyone interested in the poultry or forestry industries will find lots of data… obviously this stuff is useful to people who analyze these industries, but I suspect there are a couple of “A” university-level papers hidden in that data as well.

Building Permits

There are tons of data sets on building permits and construction. Actually, one of the benefits of looking at the data in a spreadsheet is that it’s easy to see related data sets.

StatsCan

It really is amazing how much Statistics Canada data there is. Even reviewing something like the supply and demand of natural gas liquids got me thinking about the wealth of information trapped in there. One thing I do hope StatsCan starts to do is geolocate its data whenever possible.

Crime Data

As this has been in the news I couldn’t help but include it. It’s nice that any citizen can look at the crime data direct from StatsCan to see how our crime rate is falling (which is why we should build more expensive prisons): Crime statistics, by detailed offences. Of course unreported crime, which we all know is climbing at 3000% a year, is not included in these stats.

Legal Aid Applications

Legal aid applications, by status and type of matter. This was interesting to me since, here in BC there is much talk about funding for the Justice system and yet, the number of legal aid applications has remained more or less flat over the past 5 years.

National Broadband Coverage data

Description: The National Broadband Coverage Data represents broadband coverage information, by technology, for existing broadband service providers as of January 2012. Coverage information for Broadband Canada Program projects is included for all completed projects. Coverage information is aggregated over a grid of hexagons, which are each 6 km across. The estimated range of unserved/underserved population within each hexagon is included.

Notes: What’s nice is that there is lat/long data attached to all this, so mapping it, and potentially creating a heat map, is possible. I’m certain the people at OpenMedia would appreciate such a map.

Census Consolidated Subdivision

Description: The Census Consolidated Subdivision Cartographic Boundary Files portray the geographic limits used for the 2006 census dissemination. The Census Consolidated Subdivision Boundary Files contain the boundaries of all 2,341 census consolidated subdivisions.

Notes: Obviously this one is on every data geek’s radar, but just in case you’ve been asleep for the past 5 months, I wanted to highlight it.

Non-Emergency Surgeries, distribution of waiting times

Description: Non-emergency surgeries, distribution of waiting times, household population aged 15 and over, Canada, provinces and territories

Notes: Would love to see this at the hospital and clinic level!

Border Wait Times

Description: Estimates Border Wait Times (commercial and travellers flow) for the top 22 Canada Border Services Agency land border crossings.

Notes: Here I really think there is an app that could be made. At the very least there is something that could tell you historical averages and, ideally, it could be integrated into Google and Bing maps when calculating trip times… I can also imagine a lot of companies that export goods to the US are concerned about this issue and would be interested in better data to predict the costs and times of shipping goods. Big potential here.

Okay, that’s my list. Hope it inspires you to take a look yourself, or play with some of the data listed above!

Access to Information, Open Data and the Problem with Convergence

In response to my post yesterday one reader sent me a very thoughtful commentary that included this line at the end:

“Rather than compare [Freedom of Information] FOI legislation and Open Gov Data as if it’s “one or the other”, do you think there’s a way of talking about how the two might converge?”

One small detail:

So before diving into the meat, let me start by saying I don’t believe anything in yesterday’s post claimed open data was better or worse than Freedom of Information (FOI, often referred to in Canada as Access to Information, or ATI). Seeing FOI and open data as competing suggests they are similar tools. While they have similar goals – improving access – and there may be some overlap, I increasingly see them as fundamentally different tools. This is also why I don’t see an opportunity for convergence in the short term (more on that below). I do, however, believe open data and FOI processes can be complementary. Indeed, I’m hopeful open data can alleviate some of the burden placed on FOI systems, which are often slow. Indeed, in Canada, government departments regularly violate rules around disclosure deadlines. If anything, this complementary nature was the implicit point in yesterday’s post (which I could have made more explicit).

The Problem with Convergence:

As mentioned above, the overarching goals of open data and FOI systems are similar – to enable citizens to access government information – but the two initiatives are grounded in fundamentally different approaches to dealing with government information. From my view, FOI has become a system of case-by-case review while open data seeks to engage in an approach of “pre-clearance.”

Part of this has to do with what each system is reacting to. FOI was born, in part, out of a reaction to scandals in the mid-20th century, which fostered public support for a right to access government information.

FOI has become a powerful tool for accessing government information. But the infrastructure created to manage it has also had some perverse effects. In some ways FOI has, paradoxically, made it harder to gain access to government information. I remember talking to a group of retired reporters who talked about how it was easier to gain access to documents in a pre-FOI era, since there were no guidelines and many public servants saw most documents as “public” anyways. The rules around disclosure today – thanks in part to FOI regimes – mean that governments can make closed the “default” setting for government information. In the United States the Ashcroft Memo serves as an excellent example of this problem. In this case the FOI legislation actually becomes a tool that helps governments withhold documents, rather than one that enables citizens to gain legitimate access.

But the bigger problem is that the process by which access to information requests are fulfilled is itself burdensome. While relevant and necessary for some types of information it is often overkill for others. And this is the niche that open data seeks to fill.

Let me pause to stress: I don’t share the above to disparage FOI. Quite the opposite. It is a critical and important tool and I’m not advocating for its end. Nor am I arguing that open data can – in the short or even medium term – solve the problems raised above.

This is why, over the short term, open data will remain a niche solution – a fact linked to its origins. Like FOI, open data has its roots in government transparency. However, it also evolved out of efforts to tear down antiquated intellectual property regimes to facilitate the sharing of data/information (particularly between organizations and governments). Thus the emphasis was not on case-by-case review of documents, but rather on clearing rights to categories of information, both already created and to be created in the future. In other words, this is about granting access to the outputs of a system, not access to individual documents.

Another way of thinking about this is that open data initiatives seek to leverage the benefits of FOI while jettisoning its burdensome process. If a category of information can be pre-cleared, in advance and in perpetuity, for privacy, security and IP concerns, then FOI processes – essential for individual documents and analysis – become unnecessary and one can reduce the transaction costs to citizens wishing to access the information.

Maybe, in the future, the scope of these open data initiatives could become broader, and I hope they will. Indeed there is ample evidence to suggest that technology could be used to pre-clear or assess the sensitivity of any government document. An algorithm that assesses a mixture of who the author is, the network of people who reviewed the document, and a scan of the words it contains could probably ascertain in seconds, rather than weeks, whether a document could be released in response to an ATIP request. It could at least give a risk profile and/or strip out privacy-related information. These types of reforms would be much more disruptive (in the positive sense) to FOI legislation than open data.
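To make the idea concrete, here is a toy sketch of what such a scorer might look like. Everything in it – the signals, the weights, the keyword list – is illustrative only; a real system would need trained models and actual clearance metadata rather than hand-picked numbers.

    # A toy sketch of the pre-clearance idea: combine signals about the
    # author, the review network, and the text into a release-risk score.
    # Signals, weights and terms are illustrative assumptions only.
    SENSITIVE_TERMS = {"cabinet confidence", "classified", "solicitor-client"}

    def release_risk(author_clearance, reviewer_clearances, text):
        """Return a 0-1 score; higher means less safe to auto-release."""
        # Signal 1: the author's clearance level (assume levels 0-5).
        author_signal = author_clearance / 5
        # Signal 2: the highest clearance among people who reviewed it.
        network_signal = max(reviewer_clearances, default=0) / 5
        # Signal 3: fraction of sensitive terms appearing in the text.
        lowered = text.lower()
        term_signal = sum(t in lowered for t in SENSITIVE_TERMS) / len(SENSITIVE_TERMS)
        return 0.3 * author_signal + 0.3 * network_signal + 0.4 * term_signal

    # A routine statistical report touched only by junior staff scores low.
    print(release_risk(1, [2, 1], "Monthly dairy export statistics."))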

But all that said, just getting the current focus of open data initiatives right would be a big accomplishment. And, even if such initiatives could be expanded, there are limits. I am not so naive as to believe that government can be entirely open. Nor am I sure that would be an entirely good outcome. When trying to foster new ideas or assess how to balance competing interests in society, a private place to initiate and play with ideas may be essential. And despite the ruminations above, the limits of government IT systems mean there will remain a lot of information – particularly non-data information like reports and analysis – that we won’t be able to “pre-clear” for sharing and downloading. Consequently an FOI regime – or something analogous – will continue to be necessary.

So rather than replace or converge with FOI systems, I hope open data will, for the short to medium term, actually divert information out of the FOI system – not because it competes, but because it offers a simpler and more efficient means of sharing (for both government and citizens) certain types of information. That said, open data initiatives offer none of the protections or rights of FOI, and so that legislation will continue to serve as the fail-safe mechanism should a government choose to stop sharing data. Moreover, FOI will continue to be a necessary tool for documents and information that, for all sorts of reasons (privacy, security, cabinet confidence, etc.), cannot fall under the rubric of an open data initiative. So convergence… not for now. But co-existence feels both likely and helpful for both.

Let's Hack data.gc.ca

In just under two weeks data.gc.ca will celebrate its one-year anniversary. This will also mark the point at which the pilot project is officially supposed to end.

Looking at data.gc.ca three things stand out. First, the license has improved a great deal since its launch. Second, a LOT of data has been added to the site over the last year. And finally, the website is remarkably bad at searching for data and enabling a community of users.

Indeed, I believe that a lot of people have stopped visiting the site and don’t even know what data is available. My suspicion is that almost none of us know what is actually available since a) there is a lot, b) much of it is not sexy and c) it is very hard to search.

Let’s do something about that.

I have managed to create, and upload to BuzzData, a list of all the data sets in data.gc.ca – both geographic and non-geographic.
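For the technically inclined, here is a rough sketch of how an inventory like this could be assembled if a portal exposed a paged JSON catalogue. To be clear, the endpoint and field names below are hypothetical – data.gc.ca had no such public API at the time – so treat this as a pattern, not a recipe:

```python
# Illustrative sketch: building a dataset inventory from a portal that
# exposes a paged JSON catalogue. The endpoint and field names are
# hypothetical; data.gc.ca offered no such API at the time.

import csv
import json
import urllib.request

CATALOGUE_URL = "https://data.example.gc.ca/api/datasets?page={page}"  # hypothetical

def fetch_all_datasets(max_pages: int = 1000) -> list[dict]:
    """Walk the paged catalogue until an empty page comes back."""
    rows: list[dict] = []
    for page in range(1, max_pages + 1):
        with urllib.request.urlopen(CATALOGUE_URL.format(page=page)) as resp:
            batch = json.load(resp)
        if not batch:
            break
        rows.extend(batch)
    return rows

def write_inventory(rows: list[dict], path: str = "datagcca_datasets.csv") -> None:
    """Dump title/department/url columns to a CSV for sharing."""
    fields = ["title", "department", "url"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for row in rows:
            writer.writerow({k: row.get(k, "") for k in fields})

if __name__ == "__main__":
    write_inventory(fetch_all_datasets())
```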

I’m proposing that we go through the data.gc.ca data sets and find what is interesting to each of us, and then, on March 15th, find a way to highlight or talk about it so that other people learn about it. Maybe you tweet about it (use the hashtag #gcdata) or blog about it.

Even more interesting would be if we could find a way to do it collaboratively – to have a way of collectively marking which data sets are interesting (in, say, a piratepad somewhere). If someone has a clever proposal for how to go through all the datasets, I’d love for us to collectively highlight the high-value datasets (if there are any) available in data.gc.ca.

Speaking with the great community of open data activists in Ottawa, we brainstormed about organizing an event after work on the 15th where people might get together and do this. We could call it “The Big Search” – an effort, in any city where people are interested, to gather and comb through the data. All with the goal of signaling to developers, non-profits, journalists and others what, if any, data in data.gc.ca might be of interest for analysis, applications or other uses. This exercise would also help us write supportive and critical comments about the government’s open data trial.

Finally, and most ambitiously, I’ve heard some people say they’d like to design an alternative data portal – I’m definitely game for that and am happy to offer up the datadotgc.ca url for that too.

So, I’m throwing this out there. If there is interest, please comment below. I would love to hear your thoughts and hope we can organize some events on March 15th, or at least post the data sets people find interesting on blogs, Facebook and Twitter.

Adapting KUALI financials for cities: Marin County is looking for Partners

Readers of my blog will be familiar with Kuali – the coalition of universities that co-creates a suite of software core to their operations – which I’ve blogged about several times, arguing that it is a powerful model for local governments interested in rethinking how they procure (or really, co-create) their software.

For some time now I’ve heard rumors that some local governments have been playing with Kuali’s software to see if they can adapt it to their needs. Yesterday, David Hill of Marin County posted the comment below on a blog post I’d written about Kuali, in which he openly states that he is looking for other municipalities to partner with as they try to fork Kuali Financials and adapt it to local government.

<dhill@marincounty.org> (unregistered) wrote:

I completely agree.  It is a radical change for government in at least four ways:

1)  Government developers (are there any?) have little experience with open source
2)  CIOs have no inherent motivation to leave the commercial market model
3)  Governments have little experience in sharing
4)  CIOs are losing their staff due to budget cuts, and have no excess resources to take on a project that appears risky

But, let’s not waste a crisis.  Now is the best time to get KUALI financials certified for government finance and accounting and into production.

Please contact me if you are planning to upgrade or replace your financial system and would like to look at KFS.
Randy Ozden, VivanTech CEO, is a great commercial partner
David Hill,
CIO
County of Marin

David’s offer is an exciting opportunity, and I definitely encourage any municipal and county government officials interested in finding a cheap alternative to their financial management software to reach out to David Hill and at least explore this option (or, if you know any local government officials, please forward this to them). I would love nothing more than to see some Kuali-style projects start to emerge at the local level.

Algorithmic Regulation Spreading Across Government?

I was very, very excited to learn that the City of Vancouver is exploring implementing a program, started in San Francisco, in which “smart” parking meters adjust their price to reflect supply and demand (the story is here in the Vancouver Sun).

For those unfamiliar with the program, here is a breakdown. In San Francisco, the city has a goal of ensuring at least one free parking spot is available on every block in the downtown core. As I learned during San Francisco’s presentation at the Code for America summit, such a goal has several important consequences. Specifically, it reduces the likelihood of people double parking, and it reduces smog and greenhouse gas emissions because people don’t troll for parking as long; and because trolling time is reduced, people searching for parking don’t slow down other traffic and buses by driving around slowly looking for a spot. In short, it has a very helpful impact on traffic more broadly.

So how does it work? The city’s smart parking meters are networked together and constantly assess how many spots on a given block are free. If, at the end of the week, it turns out that all the spaces were frequently in use, the cost of parking on that block is increased by 25 cents. Conversely, if many of the spots sat free, the price is reduced by 25 cents. Over time, each block finds an equilibrium point where the cost meets the demand, while the system remains able to adjust to changing trends.
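In code, the adjustment rule is a simple feedback loop. Here is a minimal sketch: only the 25-cent step comes from the description above – the occupancy thresholds and the rate floor and ceiling are my own illustrative assumptions, not San Francisco’s actual parameters:

```python
# A minimal sketch of the demand-responsive pricing loop described above.
# Only the 25-cent step comes from the article; the occupancy thresholds
# and the rate floor/ceiling are illustrative assumptions.

STEP = 0.25                      # dollars per weekly adjustment
MIN_RATE, MAX_RATE = 0.25, 6.00  # assumed bounds on the hourly rate

def adjust_block_rate(current_rate: float, avg_occupancy: float) -> float:
    """Return next week's hourly rate for one block.

    avg_occupancy is the share of the block's metered spots in use
    (0.0 to 1.0), averaged over the week, as reported by the
    networked meters.
    """
    if avg_occupancy > 0.80:      # spots almost always full: raise price
        current_rate += STEP
    elif avg_occupancy < 0.60:    # many spots free: lower price
        current_rate -= STEP
    # Between the thresholds the block is near its equilibrium point,
    # so the rate is left alone.
    return min(MAX_RATE, max(MIN_RATE, current_rate))

# Example: a block that averaged 92% occupancy at $2.00/hr moves to $2.25/hr.
print(adjust_block_rate(2.00, 0.92))  # 2.25
```

Run week after week, a rule like this walks each block toward the price at which roughly one spot stays free – which is the whole point of the program.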

Technologist Tim O’Reilly has referred to these types of automated systems in the government context as “algorithmic regulation” – a phrase I think will become more popular over the coming decade. As software is deployed into more and more systems, algorithms will be creating marketplaces and resource-allocation systems – in effect, regulating us. A little over a year ago I argued that, contrary to what many open data advocates believe, open data will make data political – that is, open data isn’t going to depoliticize public policy and make it purely evidence-based; quite the opposite, it will make the choices around what data we collect more contested (Canadians, think long-form census). The same is also – and already – true of the algorithms, the code, that will increasingly regulate our lives. Code is political.

Personally, I think the smart parking meter plan is exciting, and I hope the city will consider it seriously. But be prepared: I’m confident that, much as with smart electrical meters, an army of naysayers will emerge who simply don’t want a public resource (roads and parking spaces) to be used efficiently.

It’s like the Spirit of the West said: Everything is so political.