Tag Archives: open source

International Open Data Hackathon – IRC Channel and project ideas

Okay, I’m going to be blogging a lot more about the international open data hackathon over the next few days. Last count had us at 63 cities in 25 countries across five continents.

So first and foremost, here are three thoughts/ideas/actions I’m taking right now:

1. Communicating via IRC

First, for those who have been wondering… yes, there will be an IRC channel on Dec 4th (and as of now) that I will try to be on most of the day.

irc.oftc.net #odhd

This could be a great place for people with ideas or open source projects to share them with others, or for cities that would like to present some of the work they’ve done on the day to find an audience. If, by chance, work on a specific project becomes quite intense on the IRC channel, it may be polite for those working on it to start a project-specific channel, but we’ll cross that bridge on the day.

Two additional thoughts:

2. Sharing ideas

Second, some interesting project brainstorms have been cropping up on the wiki. Others have been blogging about them, like these ideas from Karen Fung in Vancouver.

Some advice for people who have ideas (which is great):

a) describe who the user(s) would be, what the application will do, why someone would use it, and what value they would derive from it.

b) even if you aren’t a coder (like me) lay out what data sets the application or project will need to draw upon

c) use powerpoint or keynote to create a visual of what you think the end product should look like!

d) keep it simple. Simple things get done and can always get more complicated. Complicated things don’t get done (and no matter how simple you think it is… it’s probably more complicated than you think).

These were the basic principles I adhered to when laying out the ideas behind what eventually became Vantrash and Emitter.ca.

Look at the original post where I described what I thought a garbage reminder service could look like. Look how closely the draft visual resembles what became the final product… it was way easier for Kevin and Luke (whom I’d never met at the time) to model Vantrash after an image than after just a description.

[Mockup of the garbage reminder app from the original post]

[Vantrash screenshot]

3. Some possible projects to localize:

A number of projects have been put forward as initiatives that could be localized. I wanted to highlight a few here:

a) WhereDoesMyMoneyGo?

People could create new instances of the site for a number of different countries. If you are interested, please either ping wdmmg-discuss or wdmmg (at) okfn.org.

Things non-developers could do:

  1. locate the relevant spending data on their government’s websites
  2. write up materials explaining the different budget areas
  3. help with designing the localized site.

b) OpenParliament.ca
If you live in a country with a parliamentary system (or not, and you just want to adapt it) here is a great project to localize. The code’s at github.com/rhymeswithcycle.

Things non-developers can do:

  1. locate all the contact information, twitter handles, websites, etc… of all the elected members
  2. help with design and testing

c) How’d They Vote
This is just a wonderful example of a site that creates more data that others can use. The APIs coming out of this site save others a ton of work and essentially “create” open data…

d) Eatsure
This app tracks restaurant health inspection data collected by local health authorities. Very handy. I would love to see someone create a widget or API that companies like Yelp could use to insert this data into their restaurant reviews… that would be a truly powerful use of open data.

The code is here: https://github.com/rtraction/Eat-Sure

Do you have a project you’d like to share with other hackers on Open Data Day? Let me know! I know this list is pretty North America-centric, so I would love to share some ideas from elsewhere.

Launching datadotgc.ca 2.0 – bigger, better and in the clouds

Back in April of this year we launched datadotgc.ca – an unofficial open data portal for federal government data.

At a time when only a handful of cities had open data portals and the words “open data” were not even being talked about in Ottawa, we saw the site as a way to change the conversation and demonstrate the opportunity in front of us. Our goals were to:

  • Be an innovative platform that demonstrates how government should share data.
  • Create an incentive for government to share more data by showing ministers, public servants and the public which ministries are sharing data, and which are not.
  • Provide a useful service to citizens interested in open data by bringing all the government data together in one place, making it easier to find.

In every way we have achieved this goal. Today the conversation about open data in Ottawa is very different. I’ve demoed datadotgc.ca to the CIOs of the federal government’s ministries and numerous other stakeholders, and an increasing number of people understand that, in many important ways, the policy infrastructure for doing open data already exists, since datadotgc.ca shows the government is already doing open data. More importantly, a growing number of people recognize it is the right thing to do.

Today, I’m pleased to share that thanks to our friends at Microsoft & Raised Eyebrow Web Studio and some key volunteers, we are taking our project to the next level and launching Datadotgc.ca 2.0.

So what is new?

In short, rather than just pointing to the 300 or so data sets that exist on federal government websites, members may now upload datasets to datadotgc.ca, where we can both host them and offer custom APIs. This is made possible since we have integrated Microsoft’s Azure cloud-based Open Government Data Initiative (OGDI) into the website.

So what does this mean? It means people can add government data sets, or even mash up government data sets with their own data, to create interesting visualizations, apps or websites. Already some of our core users have started to experiment with this feature. London, Ontario’s transit data can be found on datadotgc.ca, making it easier to build mobile apps, and a group of us have taken Environment Canada’s facility pollution data, uploaded it, and are using the API to create an interesting app we’ll be launching shortly.
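
For developers wondering what building against this kind of API might look like, here is a minimal sketch in Python. The service URL, dataset name and response shape below are hypothetical placeholders (the real values depend on how the datadotgc.ca/OGDI service is deployed); the only assumption is that the service returns JSON over HTTP.

```python
import json
import urllib.request

# Hypothetical values: substitute the actual datadotgc.ca service URL and
# dataset name; the response structure also depends on the deployment.
BASE_URL = "https://example-ogdi-service.example.com/v1"
DATASET = "FacilityPollutionData"


def fetch_dataset(base_url: str, dataset: str) -> list:
    """Fetch a dataset from a JSON-over-HTTP endpoint and return its records."""
    url = f"{base_url}/{dataset}?format=json"
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    # Some services wrap the records in a container object; unwrap if needed.
    if isinstance(payload, dict):
        return payload.get("d", payload.get("results", []))
    return payload


if __name__ == "__main__":
    records = fetch_dataset(BASE_URL, DATASET)
    print(f"Fetched {len(records)} records from {DATASET}")
```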

So we are excited. We still have work to do around documentation and tracking down some more federal data sets we know are out there, but we’ve gone live because nothing helps us develop like having users and people telling us what is, and isn’t, working.

But more importantly, we want to go live to show Canadians and our governments what is possible. Again, our goal remains the same – to push the government’s thinking about what is possible around open data by modeling what should be done. I believe we’ve already shifted the conversation – with luck, datadotgc.ca v2 will help shift it further and faster.

Finally, I can never thank our partners and volunteers enough for helping make this happen.

Rethinking Wikipedia contributions rates

About a year ago news stories began to surface that Wikipedia was losing more contributors than it was gaining. These stories were based on the research of Felipe Ortega, who had downloaded and analyzed the data of millions of contributors.

This is a question of importance to all of us. Crowdsourcing has been a powerful and disruptive force socially and economically in the short history of the web. Organizations like Wikipedia and Mozilla (at the large end of the scale) and millions of much smaller examples have destroyed old business models, spawned new industries and redefined the idea of how we can work together. Understanding how these communities grow and evolve is of paramount importance.

In response to Ortega’s research, the Wikimedia Foundation posted a reply on its blog that challenged the methodology and offered some clarity:

First, it’s important to note that Dr. Ortega’s study of editing patterns defines as an editor anyone who has made a single edit, however experimental. This results in a total count of three million editors across all languages.  In our own analytics, we choose to define editors as people who have made at least 5 edits. By our narrower definition, just under a million people can be counted as editors across all languages combined.  Both numbers include both active and inactive editors.  It’s not yet clear how the patterns observed in Dr. Ortega’s analysis could change if focused only on editors who have moved past initial experimentation.

This is actually quite fair. But the specifics are less interesting than the overall trend described by the Wikimedia Foundation. It’s worth noting that no open source or peer production project can grow infinitely. There is (a) a finite number of people in the world and (b) a finite amount of work that any system can absorb. At some point participation must stabilize. I’ve tried to illustrate this trend in the graphic below.

[Chart: open source project participation lifecycle, with a “maintenance threshold” line]

As luck would have it, my friend Diederik Van Liere was recently hired by the Wikimedia Foundation to help them get a better understanding of editor patterns on Wikipedia – how many editors are joining and leaving the community at any given moment, and over time.

I’ve been thinking about Diederik’s research, and three things come to mind when I look at the above chart:

1. The question isn’t how to ensure continued growth, nor is it always how to stop decline. It’s about ensuring the continuity of the project.

Rapid growth should probably be expected of an open source or peer production project in its early stages, when it has LOTS of buzz around it (like Wikipedia back in 2005). There’s lots of work to be done (so many articles HAVEN’T been written).

Decline may also be reasonable after the initial burst. I suspect many open source projects lose developers after the product moves out of beta. Indeed, some research Diederik and I have done on the Firefox community suggests this is the case.

Consequently, it might be worth inverting his research question. In addition to figuring out participation rates, figure out the minimum critical mass of contributors needed to sustain the project. For example, how many editors does Wikipedia need to, at a minimum, (a) prevent vandals from destroying the current article inventory and/or, at a maximum, (b) sustain an article update and growth rate that supports the current traffic (which notably continues to grow significantly)? The purpose of Wikipedia is not to have many or few editors; it is to maintain the world’s most comprehensive and accurate encyclopedia.

I’ve represented this minimum critical mass in the graphic above with a “maintenance threshold” line. Figuring out the metric for that feels more important than participation rates on their own, as such a metric could form the basis for a dashboard that would tell you a lot about the health of the project.

2. There might be an interesting equation describing participation rates

Another thing that struck me was that each open source project may have a participation quotient: a number that describes the amount of participation required to sustain a given unit of work in the project. For example, in Wikipedia, it may be that every new page that is added needs 0.000001 new editors in order to be sustained. If page growth exceeds editor growth (or the community shrinks), at a certain point the project size outstrips the capacity of the community to sustain it. I can think of a few variables that might help ascertain this quotient – and I accept it wouldn’t be a fixed number. Change the technologies or rules around participation and you might increase the effectiveness of a given participant (lowering the quotient), or you might make it harder to sustain work (raising the quotient). Indeed, the trend of a participation quotient would itself be interesting to monitor… projects will have to continue to find innovative ways to keep it constant even as the project’s article archive or code base gets more complex.
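
To make this concrete, here is a minimal sketch of how a participation quotient and maintenance threshold might be tracked. All the field names, numbers and the 0.012 threshold are invented for illustration; the point is only that the quotient is a ratio of sustaining contributors to units of work, watched over time.

```python
from dataclasses import dataclass


@dataclass
class Snapshot:
    """Community state at a point in time (all fields illustrative)."""
    month: str
    active_editors: int  # contributors making sustaining edits
    articles: int        # units of work the community must maintain


def participation_quotient(snap: Snapshot) -> float:
    """Observed editors per article: the 'participation quotient'."""
    return snap.active_editors / snap.articles


# Purely hypothetical requirement: suppose the project needs at least
# 0.012 active editors per article to hold back vandalism and keep
# articles current. That estimate is the "maintenance threshold".
REQUIRED_QUOTIENT = 0.012

history = [
    Snapshot("2009-01", active_editors=42_000, articles=2_800_000),
    Snapshot("2010-01", active_editors=38_000, articles=3_300_000),
]

for snap in history:
    q = participation_quotient(snap)
    verdict = "sustainable" if q >= REQUIRED_QUOTIENT else "below maintenance threshold"
    print(f"{snap.month}: quotient = {q:.4f} ({verdict})")
```

The trend of that printed quotient, rather than raw participation numbers, is the kind of dashboard metric described above.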

3. Finding a test case – study a wiki or open source project in the decline phase

One thing about open source projects is that they rarely die. Indeed, there are lots of open source projects out there that are walking zombies: a small, dedicated community struggles to keep a code base intact and functioning that is much too large for it to manage. My sense is that peer production/open source projects can collapse (would MySpace count as an example?) but they rarely collapse and die.

Diederik suggested that maybe one should study a wiki or open source project that has died. The fact that they rarely do is actually a good thing from a research perspective, as it means that the infrastructure (and thus the data about the history of participation) is often still intact – ready to be downloaded and analyzed. By finding such a community we might be able to (a) ascertain what the “maintenance threshold” of the project was at its peak, (b) see how its “participation quotient” evolved (or didn’t evolve) over time and, most importantly, (c) see if there are subtle clues or actions that could serve as predictors of decline or collapse. Obviously, in some cases these might be exogenous forces (e.g. new technologies or processes made the project obsolete), but these could probably be controlled for.

Anyways, hopefully there is lots here for metric geeks and community managers to chew on. These are only some preliminary thoughts so I hope to flesh them out some more with friends.

Rethinking Freedom of Information Requests: from Bugzilla to AccessZilla

Last week I gave a talk at the Conference for Parliamentarians hosted by the Office of the Information Commissioner as part of Right to Know Week.

During the panel I noted that, if we are interested in improving response times for Freedom of Information (FOI) requests (or, in Canada, Access to Information (ATIP) requests), why doesn’t the Office of the Information Commissioner use Bugzilla-type software to track requests?

Such a system would have a number of serious advantages, including:

  1. Requests would be public (although the identity of the requester could remain anonymous); this means that if numerous people request the same document, they could bandwagon onto a single request
  2. Requests would be searchable – this would make it easier to find documents already released and requests already completed
  3. You could track performance in real time – you could see how quickly different ministries, individuals, groups, etc… respond to FOI/ATIP requests, you could even sort performance by keywords, requester or time of the year
  4. You could see who specifically is holding up a request

In short such a system would bring a lot of transparency to the process itself and, I suspect, would provide a powerful incentive for ministries and individuals to improve their performance in responding to requests.

For those unfamiliar with Bugzilla, it is an open source software application used by a number of projects to track “bugs” and feature requests in their software. So, for example, if you notice the software has a bug, you register it in Bugzilla, and then, if you are lucky and/or the bug is really important, some intrepid developer will come along and develop a patch for it. Posted below, for example, is a bug I submitted for Thunderbird, an email client developed by Mozilla. It’s not as intuitive as it could be, but you can get the general sense of things: when I submitted the bug (2010-01-09), who developed the patch (David Bienvenu), its current status (Fixed), etc…

[Screenshot: a Thunderbird bug tracked in Bugzilla]

Interestingly, an FOI or ATIP request really isn’t that different from a “bug” in a software program. In many ways, Bugzilla is just a complex and collaborative “to do” list manager. I could imagine it wouldn’t be that hard to reskin it so that it could be used to manage and monitor access to information requests. Indeed, I suspect there might even be a community of volunteers who would be willing to work with the Office of the Information Commissioner to help make it happen.

Below I’ve done a mock-up of what I think a revamped Bugzilla (renamed AccessZilla) might look like. I’ve put numbers next to some of the features so that I can explain them in detail below.

[Mockup: the AccessZilla interface]

So what are some of the features I’ve included?

1. Status: Now an ATIP request can be marked with a status; these might be as simple as submitted, in process, under review, fixed and verified fixed (meaning the submitter has confirmed they’ve received it). This alone would allow the Information Commissioner, the submitter, and the public to track how long an individual request (or an aggregate of requests) stays in each part of the process (there’s a rough sketch of what this could look like in code after this feature list).

2. Keywords: Wouldn’t it be nice to search for other FOI/ATIP requests with similar keywords? Perhaps someone has submitted a request for a document that is similar to your own, but not something you knew existed or had thought of… Keywords could be a powerful way to find government documents.

3. Individual accountability: Now you can see who is monitoring the request on behalf of the Office of the Information Commissioner and who the ATIP officer within the ministry is. If the rules permitted, then potentially the public servants involved in the document might have their names attached here as well (or maybe this option would only be available to those who log in as ATIP officers).

4. Logs: You would be able to see the last time the request was modified. This might include getting the documents ready, expressing concern about privacy or confidentiality, or simply asking for clarification about the request.

5. Related requests: Like keywords, but more sophisticated. Why not have the software look at the words and people involved in the request and suggest other, completed requests that it thinks might be similar in type and therefore of interest to the user? Seems obvious.

6. Simple and reusable resolution: Once the ATIP officer has the documentation, they can simply upload it as an attachment to the request. This way not only can the original user quickly download the document, but any subsequent user who stumbles upon the request during a search could download the documents. Better still, any public servant who has unclassified documents that might relate to the request can simply upload them directly as well.

7. Search: This feels pretty obvious… it would certainly make citizens’ lives much easier and is the basic ante for any government that claims to be interested in transparency and accountability.

8. Visualizing it (not shown): The nice thing about all of these features is that the data coming out of them could be visualized. We could generate real-time charts showing average response time by ministry, a list of respondents sorted from slowest to fastest, even something as mundane as most-searched keywords. The point is that with visualizations, a government’s performance around transparency and accountability becomes more accessible to the general public.
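
To tie features 1 and 8 together, here is a minimal sketch (in Python, with invented names and sample data, not real Bugzilla code) of how recording each status change per request would make the dashboard numbers, like average response time by ministry, fall out almost for free.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Status(Enum):
    SUBMITTED = "submitted"
    IN_PROCESS = "in process"
    UNDER_REVIEW = "under review"
    FIXED = "fixed"                    # documents released
    VERIFIED_FIXED = "verified fixed"  # requester confirms receipt


@dataclass
class AccessRequest:
    request_id: int
    ministry: str
    summary: str
    history: list = field(default_factory=list)  # (Status, datetime) pairs

    def set_status(self, status: Status, when: datetime) -> None:
        """Record a status change so time-in-stage can be reported later."""
        self.history.append((status, when))

    def days_open(self) -> int:
        """Days between submission and the documents being released."""
        submitted = self.history[0][1]
        released = next(ts for st, ts in self.history if st is Status.FIXED)
        return (released - submitted).days


def average_response_days(requests) -> dict:
    """Average days-to-release per ministry: the chart described in feature 8."""
    buckets = defaultdict(list)
    for req in requests:
        buckets[req.ministry].append(req.days_open())
    return {ministry: sum(days) / len(days) for ministry, days in buckets.items()}


# Invented sample data, purely for illustration.
r1 = AccessRequest(1, "Environment", "Facility pollution records")
r1.set_status(Status.SUBMITTED, datetime(2010, 1, 4))
r1.set_status(Status.IN_PROCESS, datetime(2010, 1, 11))
r1.set_status(Status.FIXED, datetime(2010, 2, 18))

r2 = AccessRequest(2, "Health", "Inspection backlog report")
r2.set_status(Status.SUBMITTED, datetime(2010, 1, 15))
r2.set_status(Status.FIXED, datetime(2010, 5, 2))

for ministry, avg in sorted(average_response_days([r1, r2]).items(), key=lambda kv: kv[1]):
    print(f"{ministry}: {avg:.0f} days on average")
```

The same status histories could also drive the per-stage timings, keyword searches and “who is holding this up” views described above.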

It may be that there is much better software out there for doing this (like JIRA); I’m definitely open to suggestions. What I like about Bugzilla is that it can be hosted, it’s free and it’s open source. Mostly, however, software like this creates an opportunity for the Office of the Information Commissioner in Canada, and access to information managers around the world, to alter the incentives for governments to complete FOI/ATIP requests, as well as make it easier for citizens to find out information about their government. It could be a fascinating project to reskin Bugzilla (or some other software platform) to do this. Maybe Information Commissioners from around the world could even pool their funds to sponsor such a reskinning of Bugzilla…

Getting Government Right Behind the Firewall

The other week I stumbled on this fantastic piece by Susan Oh of Ragan.com about a 50-day effort by the BC government to relaunch its intranet site.

Yes, 50 days.

If you run a large organization’s intranet site, I encourage you to read the piece. (Alternatively, if you are forced (or begged) to use one, forward this article to someone in charge.) The measured results are great – essentially a doubling in pretty much all the things you want to double (like participation) – but what is really nice is how quick and affordable the whole project was, something rarely seen in most bureaucracies.

Here is an intranet for 30,000 employees that “was rebuilt from top to bottom within 50 days with only three developers who were learning the open-source platform Drupal as they went along.”

I beg someone in the BC government to produce an example of such a significant rollout being accomplished with so few resources. Indeed, it sounds eerily similar to GCPEDIA (available to 300,000 people using open source software and 1 FTE, plus some begged and borrowed resources) and OPSpedia (a test project also using open source software with tiny rollout costs). Notice a pattern?

Across our governments (not to mention a number of large conservative companies) there are tiny pockets where resourceful teams find a leader or project manager willing to buck the idea that a software implementation must be a multi-year, multimillion-dollar rollout. And they are making the lives of public servants better. God knows our public servants need better tools, and quickly. Even the set of tools offered in the BC example wasn’t that mind-blowing; it’s pretty basic stuff for anyone operating as a knowledge worker.

I’m not even saying that what you do has to be open source (although clearly, the above examples show that it can allow one to move speedily and cheaply), but I suspect that the number of people (and the type of person) interested in government would shift quickly if, internally, they had this set of tools at their disposal. (I would love to talk to someone at Canada’s Food Inspection Agency about their experience with Socialtext.)

The fact is, you can. And, of course, this quickly gets us to the real problem… most governments and large corporations don’t know how to deal with the cultural and power implications of these tools.

Well, we’d better get busy experimenting and trying, because knowledge workers will go where they can use their own and their peers’ brains most effectively. Increasingly, that isn’t government. I know I’m a fan of the long tail of public policy, but we’ve got to fix government behind the firewall, otherwise there won’t be a government behind the firewall to fix.

Collaborate: "Governments don't do that"

The other day, while enjoying breakfast with a consultant friend, I heard her talk about how smaller local governments didn’t have the resources to afford her, or her firm’s, services.

Hogwash, I thought! Fresh from the launch of CivicCommons.com at the Gov 2.0 Summit, I jumped in and asked: surely a couple of smaller municipalities with similar needs could come together, jointly spec out a project and pool their budgets? It seems like a win-win-win: budgets go further, better services are offered and, well, of less interest but still nice, my friend gets to work on rolling out some nice technologies in the community in which she lives.

The response?

“Governments don’t work that way.”

Followed up by…

“Why would we work with one of those other communities, they are our competitors.”

Once you’ve stopped screaming at your monitor… (yes, I’m happy to give you a few seconds to vent that frustration) let me try to explain, in as cool a manner as possible, why this makes no sense. And while I don’t live in the numerous municipalities that border Vancouver, if you do, consider writing your local councillor/mayor. I think your IT department is wasting your tax dollars.

First, governments don’t work that way? Really? So an opportunity arises for you to save money and offer better services to your citizens and you’re going to say no because the process offends you in some way? I’m pretty sure there’s a chamber full of council people and a mayor who feel pretty differently about that.

The fact is that governing a city is going to get more complicated. The expectations of citizens are going to become greater. There is going to be a gap, and no amount of budget is going to cover it. Citizens increasingly have access to top-tier services on the web – they know what first-class systems look like. They look like Google, Amazon, Travelocity, etc… and very rarely like your municipal website and the services it offers. It almost doesn’t matter where you are reading this from; I’m willing to bet your city’s site isn’t world class. Thanks to the web, however, your citizens, even the ones who never leave your bedroom community, are globe-traveling super-consumers of the web. They are getting faster, better and more effective service on and off the web. You might want to consider this because, as the IT director in a city of 500,000 people, you probably don’t have the resources to keep up.

Okay, so sharing a budget to build better online infrastructure (or whatever) for your city makes sense. But now you’re thinking – we can’t work with that neighboring community… they’re our competitors.

Stop. Stop right there.

That community is not your competitor. Let me tell you right now: no one is moving to West Van over Burnaby because its website is better, or its garbage service is more efficient. They certainly aren’t moving because you offer web-based forms on your city’s website and the other guys (annoyingly) make you print out a PDF. That’s not influencing the $250K–500K decision about where to live. Basically, if it doesn’t involve the quality of the school, it probably isn’t factoring in.

Hell, even other cities like Toronto, Calgary or Seattle aren’t your competitors. If anyone is moving there it’s likely because of family or a job. Maybe if you really got efficient then a marginally lower municipal tax would help, but if that were the case, then partner with as many cities as possible and benefit from some collaborative economies of scale… because now you’re kicking the butt of the 99% of cities that aren’t collaborating and sharing costs.

And, of course, this isn’t limited to cities. Pretty much any level of government could benefit from pooling budgets to sponsor some commonly specced-out projects.

It’s depressing to see that the biggest challenge to driving down the costs of running a city (or any government) isn’t going to be technological, but cultural: an obsession with the belief that everybody else is different, competing, and not as good as us.

Saving Cities Millions: Introducing CivicCommons.com

Last year, after speaking at the MISA West conference, I blogged about an idea I’d called Muniforge (it was also published in the Municipal Information Systems Association’s journal, Municipal Interface, but behind a paywall). The idea was to create a repository like SourceForge that could host open source software code developed by and/or for cities to share with one another. A few months later I followed it up with another post, Saving Millions: Why Cities Should Fork the Kuali Foundation, which chronicled how a coalition of universities has been doing something similar (they call it community source) and has been saving its members millions of dollars.

Last week at the Gov 2.0 Summit in Washington, DC, my friends over at OpenPlans, with whom I’ve exchanged many thoughts about this idea, along with the City of Washington, DC, brought this idea to life with the launch of Civic Commons. It’s an exciting project that has involved the work of a lot of people: Phillip Ashlock at OpenPlans, who isn’t in the video below, deserves a great deal of congratulations, as does the team over at Code for America, who were also not on the stage.

At the moment Civic Commons is a sort of white pages for open source civic government applications and policies. It doesn’t actually host the software; it just points you to where the licenses and code reside (say, for example, at GitHub). There are lots of great tools out there for collaborating on software that don’t need replicating; instead, Civic Commons is trying to foster community, a place where cities can find projects they’d like to leverage or contribute to.

The video below outlines it all in more detail. If you find it interesting (or want to skip it and get to the action right away), take a look at the CivicCommons.com website; there are already a number of applications being shared and worked on. I’m also thrilled to share that I’ve been asked to be an adviser to Civic Commons, so more on that, and what it means for non-American cities, after the video.

One thing that comes through when looking at this video is the sense that this is a distinctly American project. Nothing could be further from the truth. Indeed, during a planning meeting on Thursday I mentioned that a few Canadian cities have contacted me about software applications they would like to make open source to share with other municipalities; everyone, especially Bryan Sivak (CTO for Washington, DC), was keen that other countries join and partake in Civic Commons.

It may end up that municipalities in other countries wish to create their own independent projects. That is fine (I’m in favour of diverse approaches), but in the interim I’m keen to have some international participation early on so that the processes and issues it raises get addressed and baked into the project from the start. If you work at a city and are thinking that you’d like to add a project, feel free to contact me, but also don’t be afraid to just go straight to the site and add it directly!

Anyway, just to sum up, I’m over-the-moon excited about this project and hope it will work out. I’ve been hoping something like this would be launched since writing about Muniforge, and I’m excited to both see it happening and be involved.