Category Archives: technology

Why not create an Open311 add-on for Ushahidi?

This is not a complicated post. Just a simple idea: Why not create an Open311 add-on for Ushahidi?

So what do I mean by that, and why should we care?

Many readers will be familiar with Ushahidi, a non-profit that develops open source mapping software enabling users to collect and visualize data on interactive maps. Its history is now fairly famous, as the Wikipedia article about it outlines: “Ushahidi.com (Swahili for “testimony” or “witness”) is a website created in the aftermath of Kenya’s disputed 2007 presidential election (see 2007–2008 Kenyan crisis) that collected eyewitness reports of violence sent in by email and text message and placed them on a Google map.” Ushahidi’s mapping software has also proved to be an important resource in a number of crises since the Kenyan election, most notably during the Haitian earthquake. Here is a great 2 minute video on how Ushahidi works.

But mapping of this type isn’t only important during emergencies. Indeed, it is essential for the day to day operations of many governments, particularly at the local level. While many citizens in developed economies may be unaware of it, their cities are constantly mapping what is going on around them. Broken infrastructure such as leaky pipes, water mains, clogged gutters and potholes, along with social issues such as crime, homelessness, and business and liquor license locations, are constantly being updated. More importantly, citizens are often the source of this information – their complaints are the data that end up driving these maps. The gathering of this data generally falls under the rubric of what are termed 311 systems – since in many cities you can call 311 either to tell the city about a problem (e.g. a noise complaint, a service request, or broken infrastructure) or to request information about pretty much any of the city’s activities.

This matters because 311 systems have generally been expensive and cumbersome to run. The beautiful thing about Ushahidi is that:

  1. it works: it has a proven track record of enabling citizens in developing countries to share data using even the simplest of devices both with one another and agencies (like humanitarian organizations)
  2. it scales: Haiti and Kenya are pretty big places, and they generated a fair degree of traffic. Ushahidi can handle it.
  3. it is lightweight: Ushahidi’s technical footprint (yep, making that term up right now) is relatively light. The infrastructure required to run it is not overly complicated
  4. it is relatively inexpensive: as a result of (3) it is also relatively cheap to run, being both lightweight and leveraging a lot of open source software
  5. Oh, and did I mention IT WORKS.

This is pretty much the spec you would want to meet if you were setting up a 311 system in a city with very few resources that is interested in starting to gather data about citizen demands and/or trying to monitor newly invested-in infrastructure. Of course, to transform Ushahidi into a tool for mapping 311-type issues you’d need some sort of spec to understand what that would look like. Fortunately, Open311 already does just that and is supported by some of the large 311 system providers – such as Lagan and Motorola – as well as some of the disruptors – such as SeeClickFix. Indeed, there is an Open311 API specification that any developer could use as the basis for the add-on to Ushahidi.
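To make that concrete, here is a minimal sketch of what the core of such an add-on might look like: pulling service requests via the Open311 GeoReport v2 endpoints and reshaping them into report-like records an Ushahidi deployment could ingest. The base URL, the jurisdiction and the Ushahidi field names are placeholders rather than a real deployment.

```python
import requests  # third-party HTTP library

# Open311 GeoReport v2 exposes service requests as simple JSON resources.
# The base URL and jurisdiction_id below are placeholders for whatever city
# is being targeted.
OPEN311_BASE = "https://city.example.org/open311/v2"

def fetch_open311_requests(jurisdiction_id):
    """Pull recent 311 service requests (potholes, broken lights, etc.)."""
    resp = requests.get(
        f"{OPEN311_BASE}/requests.json",
        params={"jurisdiction_id": jurisdiction_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def to_ushahidi_report(request):
    """Translate one Open311 request into the fields an Ushahidi report needs.

    The target field names are illustrative; the actual mapping would depend
    on the Ushahidi deployment's API and form configuration.
    """
    return {
        "title": request.get("service_name", "Service request"),
        "description": request.get("description", ""),
        "latitude": request.get("lat"),
        "longitude": request.get("long"),
        "date": request.get("requested_datetime"),
    }

if __name__ == "__main__":
    for req in fetch_open311_requests("city.example.org"):
        print(to_ushahidi_report(req))
```

The point of the sketch is simply that the spec already defines both ends of the pipe; the add-on is mostly translation work.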

Already, I think many cities – even those in developing countries – could probably afford SeeClickFix, so there may already be a solution at the right price point in this space. But maybe not, I don’t know. More importantly, an Open311 module for Ushahidi could get local governments, or better still, local tech developers in developing economies, interested in and contributing to the Ushahidi code base, further strengthening the project. And while the code would be globally accessible, innovation and implementation could continue to happen at the local level, helping drive the local economy and boosting know-how. The model here, in my mind, is OpenMRS, which has spawned a number of small tech startups across Africa that manage the implementation and servicing of OpenMRS installations at medical clinics in countries across the region.

I think this is a potentially powerful idea for stakeholders in local governments and startups (especially in developing economies) and our friends at Ushahidi. I can see that my friend Philip Ashlock at Open311 had a similar thought a while ago, so the Open311 people are clearly interested. It could be that the right ingredients are already in place to make some magic happen.

Mind. Prepare to be blown away. Big Data, Wikipedia and Government.

Okay, super psyched about this. Back at the Strata Conference in Feb (in San Diego) I introduced my long time uber-quant friend and now Wikimedia Foundation data scientist Diederik Van Liere to fellow Gov2.0 thinker Nicholas Gruen (Chairman) and Anthony Goldbloom (Founder and CEO) of an awesome new company called Kaggle.

As usually happens when awesome people get together… awesomeness ensued. Mind. Be prepared to be blown.

So first, what is Kaggle? They’re a company that helps companies and organizations post their data and run competitions with the goal of having it scrutinized by the world’s best data scientists towards some specific goal. Perhaps the most powerful example of a Kaggle competition to date was their HIV prediction competition, in which they asked contestants to use a data set to find markers in the HIV sequence which predict a change in the severity of the infection (as measured by viral load and CD4 counts).

Until Kaggle showed up, the best science to date had a prediction rate of 70% – a feat that had taken years to achieve. In 90 days, contributors to the contest were able to achieve a prediction rate of 77% – a 10% improvement. I’m told that achieving a similar increment had previously taken something close to a decade. (Data geeks can read how the winner did it here and here.)

Diederik and Anthony have created a similar competition, but this time using Wikipedia participation data. As the competition page outlines:

This competition challenges data-mining experts to build a predictive model that predicts the number of edits an editor will make in the five months after the end date of the training dataset. The dataset is randomly sampled from the English Wikipedia dataset from the period January 2001 – August 2010.

The objective of this competition is to quantitatively understand what factors determine editing behavior. We hope to be able to answer questions, using these predictive models, why people stop editing or increase their pace of editing.

This is, of course, a subject matter that is dear to me, as I’m hoping we can do similar analysis in open source communities – something Diederik and I have tried to theorize about with Wikipedia data and actually do with Bugzilla data.

There is a grand prize of $5000 (along with a few others) and, amazingly, already 15 participants and 7 submissions.
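For readers who want a feel for what a submission involves, here is a hedged sketch of the simplest possible baseline: predict that an editor’s activity over the next five months will mirror their recent activity. The column names and CSV layout are invented for illustration and are not the competition’s actual schema.

```python
import csv
from collections import defaultdict

# Naive baseline for an "edits in the next N months" prediction task:
# assume each editor keeps editing at their recent pace. Real entries would
# use far richer features (tenure, reverts, talk-page activity, etc.).
def baseline_predictions(training_csv, recent_months=5):
    """training_csv is assumed to have columns: editor_id, month, edits."""
    history = defaultdict(list)
    with open(training_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            history[row["editor_id"]].append((row["month"], int(row["edits"])))

    predictions = {}
    for editor, months in history.items():
        months.sort()  # chronological order, assuming sortable month strings
        last = months[-recent_months:]
        predictions[editor] = sum(edits for _, edits in last)
    return predictions

# Usage (hypothetical file):
# print(baseline_predictions("wikipedia_training_sample.csv"))
```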

Finally, I hope public policy geeks, government officials and politicians are paying attention. There is power in data and an opportunity to use it to find efficiencies and opportunities. Most governments probably don’t even know how to approach an organization like Kaggle or how to run a competition like this, despite (or because of?) the fact that it is so fast, efficient and effective.

It shouldn’t be this way.

If you are in government (or any org), check out Kaggle. Watch. Learn. There is huge opportunity here.

12:10pm PST – UPDATE: More Michael Bay sized awesomeness. Within 36 hours of the Wikipedia challenge being launched, the leading submission has improved on internal Wikimedia Foundation models by 32.4%.

Links on Social Media & Politics: Notes from "We Want Your Thoughts #4"

Last night I had a great time taking the stage with Alexandra Samuel in Vancouver for “We Want Your Thoughts” at the Khafka coffee house on Main St. The night’s discussion was focused on “Social Media – from chit chat to election winner – what next?” (with a little on the social media driven response to the riots thrown in for good measure).

Both Alex and I promised to post some links from our blogs for attendees, so what follows is a list of thoughts on the subject that I hope everyone finds engaging.

On Social Media generally, probably the most popular post on this blog is this piece: Twitter is my Newspaper: explaining twitter to newbies. Thinking more broadly about the internet and media, this essay I wrote with Taylor Owen is now a chapter in this university textbook on journalism, along with this post as a sidebar note (in a different textbook), which has been one of my most read.

On the riots, I encourage you to read Alexandra Samuel’s post on the subject (After a Loss in Vancouver, Troubling Signals of Citizen Surveillance) and my counter-thoughts (Social Media and Rioters) – a blogging debate! You can also hear me talk about the issue in an interview on CBC’s Cross Country Checkup (around hour 1).

On social media and politics, maybe some of the most notable pieces include a back and forth between myself and Michael Valpy, who felt that social media was ending our social cohesion and destroying democracy (obviously, this was pre-Middle East riots and the proroguing of Parliament debate). I responded with a post on why his arguments were flawed and why, in fact, the reverse was true. He responded to that post in The Mark. And I posted a response to that as well. It all makes for a good read.


Rob Cottingham’s Visual Notes of the first 15 minutes

Then there were some pieces on Social Media and the Proroguing of Parliament. I had this piece in the Globe and then this post talking a little more about the media’s confused relationship with social media and politics.

Finally, one of the points I referred to several times yesterday was the problem of assuming social values won’t change when talking about technology adoption and its impact. Probably the most explicit post I’ve written on the subject is this one: Why the Internet Will Shape Social Values (and not the other way around).

Finally, some books and articles I mentioned or that are on topic:

Everything Bad is Good for You by Steven Johnson

What Technology Wants by Kevin Kelly

Here Comes Everybody by Clay Shirky

The Net Delusion: How Not to Liberate the World by Evgeny Morozov

The Inside Story of How Facebook Responded to Tunisian Hacks an article in the Atlantic by Alexis Madrigal

I hope this is interesting.

The next Open Data battle: Advancing Policy & Innovation through Standards

With the possible exception of weather data, the most successful open data set out there at the moment is transit data. It remains the data with which developers have experimented and innovated the most. Why is this? Because it’s been standardized. Ever since Google and the City of Portland created the General Transit Feed Specification (GTFS), any developer who creates an application using GTFS transit data can port that application to more than 100 cities around the world, with tens and even hundreds of millions of potential users. Now that’s scale!
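That portability comes from the fact that a GTFS feed is just a zip file of plain CSV tables with fixed names and columns. As a rough illustration (the feed path and stop name below are placeholders, and real schedule logic would also have to respect service calendars and time zones), the same few lines work against any city’s feed:

```python
import csv
import io
import zipfile

# GTFS is a zip of plain CSV files (stops.txt, stop_times.txt, ...), which is
# exactly why one app can be pointed at any city's feed.
def next_departures(gtfs_zip_path, stop_name, limit=5):
    """List scheduled departure times at a named stop from a GTFS feed."""
    with zipfile.ZipFile(gtfs_zip_path) as feed:
        def rows(filename):
            with feed.open(filename) as f:
                return list(csv.DictReader(io.TextIOWrapper(f, "utf-8-sig")))

        stops = {s["stop_id"]: s["stop_name"] for s in rows("stops.txt")}
        times = [
            t["departure_time"]
            for t in rows("stop_times.txt")
            if stops.get(t["stop_id"]) == stop_name
        ]
    return sorted(times)[:limit]

# Usage (hypothetical feed and stop name):
# print(next_departures("portland_gtfs.zip", "SW 5th & Oak"))
```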

All in all, the benefits of a standard data structure are clear. A public good is more effectively used, citizens enjoy better service, and companies (both Google and the numerous smaller companies that sell transit-related applications) generate revenue, pay salaries, etc…

This is why, with a number of jurisdictions now committed to open data, I believe it is time for advocates to start focusing on the next big issue: how do we get different jurisdictions to align around standard structures so as to increase the number of people to whom an application or analysis will be relevant? Having cities publish open data sets is a great start and has led to real innovation, but the next generation of open data and the next leaps in innovation will require more standards.

The key, I think, is to find areas that meet three criteria:

  • Government Data: Is there relevant government data about the service or issue that is available?
  • Demand: Is this a service for which there is regular demand? (this is why transit is so good – millions of people touch the service on a daily basis)
  • Business Model: Is there a business that believes it can use this data to generate revenue (either directly or indirectly)?

[Diagram: the three criteria – Government Data, Demand and Business Model – as overlapping circles, with the sweet spot where they intersect]

Two comments on this.

First, I think we should look at this model because we want to find places where the incentives are right for all the key stakeholders. The wrong way to create a data structure is to get a bunch of governments together to talk about it. That process will take 5 years… if we are lucky. Remember, GTFS emerged because Google and Portland got together; after that, everybody else bandwagoned because the value proposition was so high. This remains, in my mind, not the perfect model, but the fastest and most efficient one for getting to more common data structures. I also respect that it won’t work for everything, but it can give us more successes to point to.

Which leads me to point two. Yes, at the moment, I think the target in the middle of this model is relatively small. But I think we can make it bigger. GTFS shows cities, citizens and companies that there is value in open data. What we need are more examples, so that a) more business models emerge and b) more government data is shared in a structured way across multiple jurisdictions. The bottom and right-hand circles in this diagram can, and if we are successful will, move. In short, I think we can create this dynamic:

[Diagram: the demand and business-model circles shifting so that the sweet spot in the middle grows]

So, what does this look like in practice?

I’ve been trying to think of services that fall in various parts of the diagram. A while back I wrote a post about using open restaurant inspection data to drive down health costs – specifically, about finding a government willing to work with Yelp!, Bing or Google Maps, Urban Spoon or another company to integrate inspection data into their application. That, for me, is an example of something that fits in the middle. Governments have the data, it’s a service citizens could touch on a regular basis if the data appeared in their workflow (e.g. Yelp! or Bing Maps), and for those businesses it either helps drive search revenue or gives their product a competitive advantage. The Open311 standard (sadly missing from my diagram) and the emergence of SeeClickFix strike me as another excellent example, right on the inside edge of the sweet spot.
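As a toy illustration of the integration itself, the core of the work is little more than joining inspection records to listings on a normalized address. The field names below are invented, since real inspection schemas vary city by city.

```python
# Sketch: attach the latest inspection result to a map/search listing by
# matching on a normalized address. All field names are illustrative.
def normalize(address):
    return " ".join(address.lower().replace(".", "").split())

def attach_inspections(listings, inspections):
    latest = {}
    for insp in sorted(inspections, key=lambda i: i["date"]):
        latest[normalize(insp["address"])] = insp  # keep most recent per address
    for listing in listings:
        listing["inspection"] = latest.get(normalize(listing["address"]))
    return listings

listings = [{"name": "Cafe Example", "address": "123 Main St."}]
inspections = [{"address": "123 main st", "date": "2011-04-02", "result": "Pass"}]
print(attach_inspections(listings, inspections))
```

The hard part, in other words, is not the code but getting the data published in a consistent, open form.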

Here’s a list of what else I’ve come up with at the moment:

[Diagram: candidate services and data sets placed on the model]

You can also now see why I’ve been working on Recollect.net – our garbage pick up reminder service – and helping develop a standard around garbage scheduling data – the Trash & Recycling Object Notation. I think it is a service around which we can help explain the value of common standards to cities.

You’ll notice that I’ve put “democracy data” (e.g. agendas, minutes, legislation, hansards, budgets, etc…) in the area where I don’t think there is a business plan. I’m not fully convinced of this – I could see a business model in the media space for this – but I’m trying to be conservative in my estimate. In either case, that is the type of data the good people at the Sunlight Foundation are trying to get liberated, so there are, at least, non-profit efforts concentrated there in America.

I also put real estate in a category where I don’t think there is real consumer demand. What I mean by this isn’t that people don’t want it – they do – but that they are only really interested in it maybe 2-4 times in their life. It doesn’t have the high touch point of transit or garbage schedules, or of traffic and parking. I understand that there are businesses to be built around this data – I love Viewpoint.ca, a site that mashes open data up with real estate data to create a compelling real estate website – but I don’t think it is a service people will get attached to because they will only use it infrequently.

Ultimately, I’d love to hear from people with ideas on what else might fit in this sweet spot (if you are comfortable sharing the idea, of course). Part of this is because I’d love to test the model more. The other reason is that I’m engaged with some governments interested in getting more strategic about their open data use, so these types of opportunities could become reality.

Finally, I just hope you find this model compelling and helpful.

If the Prime Minister Wants Accountable Healthcare, let's make it Transparent too

Over at the Beyond the Commons blog Aaron Wherry has a series of quotes from recent speeches on healthcare by Canadian Prime Minister Stephen Harper in which the one constant keyword is… accountability.

Who can blame him?

Take everyone promising to limit growth to a still unsustainable 6% (gulp) and throw in some dubiously costly projects ($1 billion spent on e-health records in Ontario when an open source solution – VistA – could likely have been implemented at a fraction of the cost) and the obvious question is… what is the country going to do about healthcare costs?

I don’t want to claim that open data can solve the problem. It can’t. There isn’t going to be a single solution. But I think it could help spread best practices, improve customer choice and service, and possibly yield other benefits.

Anyone who’s been around me for the last month knows about my restaurant inspection open data example (which could also yield healthcare savings), but I think we can go bigger. A Federal Government that is serious about accountability in Healthcare needs to build a system where that accountability isn’t just between the provinces and the feds; it needs to be between the Healthcare system and its users: us.

Since the feds usually attach several provisions to their healthcare dollars, the one I’d like to see is an open data provision: one where provinces and hospitals are required to track and make open a whole set of performance data, in machine-readable formats and a common national standard, that anyone in Canada (or around the world) can download and access.

Some of the data I’d love to see mandated to be tracked and shared includes:

  • Emergency Room wait times – in real time.
  • Wait times, by hospital, for a variety of operations
  • All budget data, down to the hospital or even unit level – let’s allow the public to do a cost/patient analysis for every unit in the country
  • Survival rates for various surgeries (obviously controversial since some hospitals that have the lowest rates are actually the best since they get the hardest cases – but let’s trust the public with the data)
  • Inspection data – especially if we launched something akin to the Institute for Health Management’s Protecting 5 Million Lives Campaign
  • I’m confident there is much, much more…

I can imagine a slew of services and analyses emerging from these – if nothing else, a citizenry that is better informed about the true state of its healthcare system. Even something as simple as being able to check ER wait times at all the hospitals near you, so you can drive to the one where the wait is shortest. That would be nice.
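If that wait-time data were published as even a simple machine-readable feed, the “which ER should I drive to” check becomes a few lines of code. The CSV columns here are hypothetical – no such national feed exists yet – but they show how little stands between the data and the service.

```python
import csv

# Sketch: rank nearby hospitals by published ER wait time.
# Assumed columns in a hypothetical open feed: name, wait_minutes.
def shortest_waits(csv_path, limit=3):
    with open(csv_path, newline="", encoding="utf-8") as f:
        hospitals = list(csv.DictReader(f))
    hospitals.sort(key=lambda h: int(h["wait_minutes"]))
    return hospitals[:limit]

# for h in shortest_waits("er_wait_times.csv"):
#     print(h["name"], h["wait_minutes"], "minutes")
```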

Of course, if the Prime Minister wants to go beyond accountability and think about how data could directly reduce costs, he might take a look at one initiative launched south of the border.

If he did, he might be persuaded to demand that the provinces share a set of anonymized patient records to see if academics or others in the country might be able to build better models for how we should manage healthcare costs. In January of this year I witnessed the launch of the $3 million Heritage Health Prize at the O’Reilly Strata Conference in San Diego. It is a stunningly ambitious, but realistic, effort. As the press release notes:

Contestants in the challenge will be provided with a data set consisting of the de-identified medical records of 100,000 patients from the 2008 calendar year. Contestants will then be required to create a predictive algorithm to predict who was hospitalized during the 2009 calendar year. HPN will award the $3 million prize (more than twice what is paid for the Nobel Prize in medicine) to the first participant or team that passes the required level of predictive accuracy. In addition, there will be milestone prizes along the way, which will be awarded to teams leading the competition at various points in time.

In essence Heritage Health is doing to patient management what Netflix (through the $1M Netflix prize) did to movie selections. It’s crowdsourcing the problem to get better results.
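To make the shape of the task concrete, here is a hedged sketch of the kind of baseline a contestant might start from: a logistic regression over a handful of claims-derived features predicting whether a patient is hospitalized the following year. The features and numbers are invented for illustration and bear no relation to the actual competition data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for de-identified claims features: one row per patient with
# (age, number of claims last year, number of chronic conditions).
X = np.array([
    [34, 2, 0],
    [71, 9, 3],
    [55, 4, 1],
    [63, 12, 2],
])
# 1 = hospitalized in the following year, 0 = not (made-up labels).
y = np.array([0, 1, 0, 1])

model = LogisticRegression()
model.fit(X, y)

# Predicted probability of hospitalization for a new (hypothetical) patient.
print(model.predict_proba([[60, 7, 2]])[0, 1])
```

Real entries use far richer features and models, of course; the point is that once the data is shared, this kind of work can start immediately.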

The problem is, any algorithm developed by the winners of the Heritage Health Prize will belong to… Heritage Health. This means the benefits of that innovation won’t flow to Canadians (nor anyone else). So why not launch a prize of our own? We have more data, I suspect our data is better (not limited to a single state) and we could place the winning algorithm in the public domain so that it can benefit all of humanity. If Canadian data helped find efficiencies that lowered healthcare costs and improved healthcare outcomes for everyone in the world… it could be the biggest contribution to global healthcare by Canada since Frederick Banting discovered insulin and rescued diabetics everywhere.

Of course, open data and sharing (even anonymized) patient data would be a radical experiment for government – something new, bold and different. But 6% growth is itself unsustainable and Canadians need to see that their government can do something bold, new and innovative. These initiatives would fit the bill.

Birthday, technology adoption and my happiness

Yesterday I was reminded of the fact that I have great friends – friends who are far better to me than I deserve. You see, yesterday was my birthday and I was overwhelmed by the number of well-wishers who sent me a little note. I’m so, so lucky – something I should never forget.

It was also an illustrative guide to technology adoption and to how technology is and isn’t impacting my life.

I was struck by the way people got in touch with me. I’m a heavy twitter user and so I don’t spend a lot of time on facebook, but yesterday was a huge reminder of how much in the minority I am. While I received maybe 10 mentions or DMs wishing me a happy birthday via twitter (all deeply appreciated), I received somewhere around 100 wall postings and/or facebook messages. Good old email came in at around 15-20 messages. Facebook is simply big. Huge, even. I know that on an intellectual level, but it is great to have these visceral reminders every once in a while. They hit home much harder.

Of course, the results are not a perfect metric of adoption. One thing facebook has going for it that email and twitter don’t is that it reminds you of your friends’ birthdays on its landing page. This is just plain smart on facebook’s part. But it is also interesting in that knowing this fact had no impact on how happy or grateful I was to get messages from people. The fact that technology reminded people – and so they weren’t simply remembering on their own – didn’t matter a lick in how happy I was to hear from them. Indeed, it was wonderful to hear from people – such as old high school friends – I haven’t seen or heard from in ages.

All of this is to say, I continue to read how social media sites and social networks specifically are creating more superficial connections and reducing the quality or intensity of who counts as a “friend.” My birthday was a great reminder of how ridiculous this talk is. My close friends still reached out, and I got to spend a great day on the weekend with a number of them. Facebook has not displaced them. What it has done, however, is keep me connected with people who can’t always be close to me, either because of the constraints of geography or because of the evolution of time. Ultimately, these technologies don’t create binary choices between having close intimate friends or lots of weak ties; they are complementary. My close friends who move away can stay connected to me, and those with whom I form “loose” ties can migrate into my strong ties.

In both cases – for those I get to see frequently and those I don’t – I’m grateful to have them in my life, and the fact that Facebook, twitter and email make this easier has, frankly, made my life richer.

How to Unsuck Canada’s Internet – creating the right incentives

This week at the Mesh conference in Toronto (where I’ll be talking Open Data), the always thoughtful Jesse Brown, of TVO’s Search Engine, will be running a session titled How to Unsuck Canada’s Internet.

As part of the lead up to the session he asked me if I could write him a sentence or two about my thoughts on how to unsuck our internet. In his words:

The idea is to take a practical approach to fixing Canada's lousy Internet (policies/infrastructure/open data/culture – interpret the suck as you will).

So my first thought is that we should prevent anyone who owns any telecommunications infrastructure from owning content. Period. Delivery mechanisms should compete with delivery mechanisms and content should compete with content. But don’t let them mix, cause it screws up all the incentives.

A second thought would be to allocate the freed-up broadcast spectrum to new internet providers (which is really what all the cell phone providers are about to become anyway). I’m actually deeply confident that we may be 5 years away from this problem becoming moot in the main urban areas. Once our internet access is freed from cables and the last mile, then all bets are off. That won’t help rural areas, but it may end up transforming urban access and costs. Just as cities clustered around seaports and key nodes along trade networks, cities (and workers) will cluster around better telecommunication access.

But the longer thought comes from some reflections on the timely recent release of OpenMedia.ca/CIPPIC’s second submission to the CRTC’s proceedings on usage-based billing (UBB), which I think is actually fairly aligned with the piece I wrote back in February titled Why the CRTC was right about User Based Billing (please read the piece and the comments below it before freaking out).

Here, I think our goal shouldn’t be punitive (that will only encourage the telcos to do “just enough” to comply). What we need to do is get the incentives right (which is, again, why they shouldn’t be allowed to own content, but I digress).

An important part of getting the incentives right is understanding what the actual constraints on internet access are. One of the main problems is that people often get confused about what is scarce and what is abundant when talking about the internet. I think what everyone realizes is that content is abundant. There are probably over a trillion web pages out there, billions of videos and god knows what else. There is no scarcity there.

This is why any description of access that uses an image like the one below will, in my mind, fail.

Charging per byte shouldn’t be permitted if the pipe has infinite capacity (or at least it wouldn’t make sense in a truly competitive market). What should happen is that companies would be able to charge the cost of the infrastructure plus a reasonable rate of return.

But while the pipe may have infinite capacity over time, at any given moment it does not. The issue isn’t how many bytes you consume; it’s the capacity to deliver those bytes at a given moment when you have lots of competing users. This is why it isn’t “where the data is coming from or going to” that matters, but rather how much of it is in the pipe at a given moment. What matters is not the cable but its cross-section.

A cable that is empty or only at 40% capacity should deliver rip-roaring internet to anyone who wants it. My understanding is that the problem is when the cable is at 100% or more capacity. Then users start crowding each other out and performance (for everyone) suffers.

Indeed, this is where the OpenMedia/CIPPIC document left me confused. On the one hand, they correctly argue that the internet’s content is not a limited resource (such as natural gas). But they also seem to argue that network capacity is not a finite resource (sections 21 and 22) while at the same time accepting that there may be constraints on capacity during peak hours (sections 27 and 30, where they seem to accept that off-peak users should not be subsidizing peak-time users, and again in the conclusion, where they state “As noted in far greater detail above, ISP provisioning costs are driven primarily by peak period usage”). If you have peak period usage then, by definition, you have scarcity. The last two points seem to be in conflict: network capacity cannot be both infinite and constrained during peak hours, can it?

Now, it may be that there is more network capacity in Canada than there is demand – even at peak times – at which point any modicum of sympathy I might have felt for the telcos disappears immediately. However, if there is a peak consumption period that does stress the network’s capacity, I’d be relatively comfortable adopting a pricing mechanism that allocates the “scarce” slice of the broadband pie. Maybe there are users – especially many BitTorrenters – whose activities are not time sensitive. Having a system in place that encourages them to BitTorrent during off-peak hours would create a network that is better utilized.
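To show what such a mechanism might look like in the simplest possible terms, here is a hedged sketch of a two-tier tariff: usage during a defined peak window costs more per gigabyte than usage off-peak. The rates and the peak window are invented numbers purely for illustration, not a proposal for actual prices.

```python
# Toy two-tier (peak / off-peak) usage tariff. All numbers are illustrative.
PEAK_HOURS = range(17, 23)          # assume 5pm-11pm is the congested window
PEAK_RATE_PER_GB = 0.50             # hypothetical $/GB during peak
OFFPEAK_RATE_PER_GB = 0.05          # hypothetical $/GB off-peak

def monthly_bill(usage_log):
    """usage_log: list of (hour_of_day, gigabytes) tuples for the month."""
    bill = 0.0
    for hour, gb in usage_log:
        rate = PEAK_RATE_PER_GB if hour in PEAK_HOURS else OFFPEAK_RATE_PER_GB
        bill += gb * rate
    return bill

# A heavy downloader who shifts the same 200 GB from 8pm to 3am pays far
# less, which is exactly the incentive this kind of pricing is meant to create.
print(monthly_bill([(20, 200)]))   # all at peak
print(monthly_bill([(3, 200)]))    # all off-peak
```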

So the OpenMedia piece seems to be open to the idea of peak usage pricing (which was what I was getting at in my UBB piece) so I think we are actually aligned (which is good since I like the people at OpenMedia.ca).

The question is, does this create the right incentives for the telcos to invest more in capacity? My hope would be yes – that competition would cause users to migrate to networks that provide high speeds and competitive flat and/or peak-usage-time fees. But I’m open to the possibility that it wouldn’t. It’s a complicated problem and I don’t pretend to think that I’ve solved it in one blog post. I’m just trying to work it through in my head.


Applications and Hardware Already Running On Open Data

Yesterday, Gerry T shared a photo he snapped at the University of Alberta in Edmonton of a “departure board” in the university’s Student Union building that uses open transportation data from the city’s website.

Essentially, the display board is a simple application shown on a large flat screen TV turned vertically.

It’s exactly the kind of thing that I imagine university students in many cities around the world wish they had – especially on a campus that is cold and/or wet. Wouldn’t it be nice to wait inside that warm student union building rather than at the bus stop?

Of course in Boston they’ve gone further than just providing the schedule online. They provide real-time data on bus locations which some students and engineers have used to create $350 LED signs in coffee houses to let users know when the next bus is coming.
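The departure-board idea itself is almost trivially small once the data is open. Here is a hedged sketch of the core loop, assuming a generic real-time feed that returns predicted arrivals as JSON; the URL and field names are placeholders rather than any particular city’s actual API.

```python
import time
import requests  # third-party HTTP library

# Placeholder endpoint: a real deployment would point at the city's (or the
# transit agency's) published schedule or real-time predictions feed.
FEED_URL = "https://transit.example.org/stops/1234/predictions.json"

def show_departures(refresh_seconds=30):
    """Continuously print upcoming departures for one stop."""
    while True:
        data = requests.get(FEED_URL, timeout=10).json()
        print("\n--- Next departures ---")
        for arrival in data.get("predictions", [])[:5]:
            # Assumed fields: route name and minutes until arrival.
            print(f"{arrival['route']:>10}  {arrival['minutes']:>3} min")
        time.sleep(refresh_seconds)

# show_departures()  # would run forever on the display screen
```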

It’s the kind of simple innovation you wish you’d see in more places: governments letting people help themselves to make their lives a little easier. Yes, this isn’t changing the world, but it’s a start, and an example of what more could happen.

Mostly, it’s nice to see innovators in Canada playing with the technology. Hopefully governments will catch up and let the even bigger ideas students around the country have become more than just visions in their heads.

Not sure who at the University created this, but nice work.

New York releases road map to becoming a digital city

Yesterday, New York City released its “Road Map for the Digital City: Achieving New York City’s Digital Future.” For those who missed the announcement, especially those concerned about the digital economy, the future of government and citizen services, the document is definitely worth downloading and scanning.

At the heart of the document sits a road map, which I’ve ripped from the executive summary and pasted below. What makes me particularly interested in it is how the Open Government section is not driven solely by the desire for transparency, but also by the goal of spurring innovation and increasing access to services. Of course, the devil is in the details, but I’m increasingly convinced that open initiatives will be more successful when the government of the day has some specific policy objectives (beyond just transparency) it wishes to drive home, with open data as part of the mix (more on this in a post coming soon).

As such, “government as platform” works best when the government also builds atop the platform. It must itself be a consumer and stakeholder. This is why section 3 is so important and interesting. Essentially, sections 2 and 3 have parts that are strikingly similar; it’s just that section 2 outlines the platform and lays out what the government hopes others will build on top of it, whereas parts of section 3 outline what the government intends to build atop it. Section 3 also goes further and talks about gathering information and data from the public, which is the big thing in the Gov 2.0 space that many governments have not gotten around to doing effectively – so this will be worth watching more closely. All of this is great news and exactly what governments should be thinking about.

It is great when a big city comes out with a document like this because, while New York is not the first to be thinking about these ideas, its profile means that others will start devoting resources to pursuing them more aggressively.

Exciting times.

1. Access

The City of New York ensures that all New Yorkers can access the Internet and take advantage of public training sessions to use it effectively. It will support more vendor choices to New Yorkers, and introduce Wi-Fi in more public areas.

  1. Connect high needs individuals through federally funded nyc Connected initiatives
  2. Launch outreach and education efforts to increase broadband Internet adoption
  3. Support more broadband choices citywide
  4. Introduce Wi-Fi in more public spaces, including parks

2. Open Government

By unlocking important public information and supporting policies of Open Government, New York City will further expand access to services, enable innovation that improves the lives of New Yorkers, and increase transparency and efficiency.

  1. Develop nyc Platform, an Open Government framework featuring APIs for City data
  2. Launch a central hub for engaging and cultivating feedback from the developer community
  3. Introduce visualization tools that make data more accessible to the public
  4. Launch App Wishlists to support a needs-based ecosystem of innovation
  5. Launch an official New York City Apps hub

3. Engagement

The City will improve digital tools including nyc.gov and 311 online to streamline service and enable citizen-centric, collaborative government. It will expand social media engagement, implement new internal coordination measures, and continue to solicit community input in the following ways:

  1. Relaunch nyc.gov to make the City’s website more usable, accessible, and intuitive
  2. Expand 311 Online through smartphone apps, Twitter and live chat
  3. Implement a custom bit.ly url redirection service on nyc.gov to encourage sharing and transparency
  4. Launch official Facebook presence to engage New Yorkers and customize experience
  5. Launch @nycgov, a central Twitter account and one-stop shop of crucial news and services
  6. Launch a New York City Tumblr vertical, featuring content and commentary on City stories
  7. Launch a Foursquare badge that encourages use of New York City’s free public places
  8. Integrate crowdsourcing tools for emergency situations
  9. Introduce digital Citizen Toolkits for engaging with New York City government online
  10. Introduce smart, a team of the City’s social media leaders
  11. Host New York City’s first hackathon: Reinventing nyc.gov
  12. Launch ongoing listening sessions across the five boroughs to encourage input

4. Industry

New York City government, led by the New York City Economic Development Corporation, will continue to support a vibrant digital media sector through a wide array of programs, including workforce development, the establishment of a new engineering institution, and a more streamlined path to do business.

  1. Expand workforce development programs to support growth and diversity in the digital sector
  2. Support technology startup infrastructure needs
  3. Continue to recruit more engineering talent and teams to New York City
  4. Promote and celebrate nyc’s digital sector through events and awards
  5. Pursue a new .nyc top-level domain, led by DOITT


Just a Click Away Keynote Slides

A little over two months ago I gave a keynote at the Just a Click Away Conference in Vancouver. The conference was a gathering for legal information and education experts – for example the excellent people that provide legal aid. My central challenge to them was thinking about how they could further collapse the transaction costs around getting legal assistance and/or completing common legal transactions.

I had a great time at the event and it was a real pleasure to meet Allan Seckel – the former head of British Columbia’s public service. I was deeply impressed by his comments and commitment to both effective and open government. As one of the key forces behind the Citizens at the Centre report he’s pushed a number of ideas forward that I think other governments should be paying attention to.

So, back to the presentation… I’ve been promising to get my slides from the event up and so here they are: