Tag Archives: google maps

How Yelp Could Help Save Millions in Health Care Costs

Okay, before I dive in, a few things.

1) Sorry for the lack of posts last week. Life’s been hectic. Between Code for America, a number of projects and a few articles I’m trying to get through, the blogging slipped. Sorry.

2) I’m presenting on Open Data and Open Government to the Canadian Parliament Access to Information, Privacy and Ethics Committee today – more on that later this week.

3) I’m excited about this post

When it comes to opening up government data many of us focus on governments: we cajole, we pressure, we try to persuade them to open up their data. It’s an approach we will continue to have to take for a great deal of the data our tax dollars pay to collect and that governments continue to not share. There is, however, another model.

Consider transit data. This data is sought after, intensely useful, and probably the category of data most experimented with by developers. Why is this? Because it has been standardized. Why has it been standardized? Because local governments (responding to citizen demand) have been desperate to get their transit data integrated with Google Maps (see image).

It turns out that to get your transit data into Google Maps, Google insists you submit the data in a single structured format, something that has come to be known as the General Transit Feed Specification (GTFS). The great thing about the GTFS is that it isn’t just Google that can use it. Anyone can play with data converted into the GTFS. Better still, because the data structure is standardized, an application someone develops, or analysis they conduct, can be ported to other cities that share their transit data in the GTFS format (like, say, my home town of Vancouver).
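To make the standardization point concrete: a GTFS feed is just a bundle of plain CSV files with agreed-upon column names, so the same few lines of code work against any city's feed. Here is a minimal sketch using Python's standard csv module (the two sample stops and their coordinates are invented for illustration; real feeds are published by the transit agencies themselves):

```python
import csv
import io

# A tiny sample in the GTFS stops.txt format. The column names
# (stop_id, stop_name, stop_lat, stop_lon) come from the GTFS spec;
# the rows themselves are made up for this example.
stops_txt = """stop_id,stop_name,stop_lat,stop_lon
1001,Granville & Broadway,49.2634,-123.1385
1002,Main & Hastings,49.2813,-123.1002
"""

# Because the format is standardized, this exact parsing code works
# for Vancouver, Portland, or any other city that publishes a feed.
stops = list(csv.DictReader(io.StringIO(stops_txt)))
for stop in stops:
    print(stop["stop_id"], stop["stop_name"], stop["stop_lat"], stop["stop_lon"])
```

That portability is the whole point: an app built on one city's feed runs unchanged against another's.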

In short, what we have here is a powerful model both for creating open data and standardizing this data across thousands of jurisdictions.

So what does this have to do with Yelp! and Health Care Costs?

For those not in the know, Yelp! is a location-based rating service for mobile phones. I’m a particular fan of its restaurant locator: it will show you what is nearby and how it has been rated by other users. Handy stuff.

But think bigger.

Most cities in North America inspect restaurants for health violations. This is important stuff. Restaurants with more violations are more likely to transmit diseases and food-borne illnesses, give people food poisoning and god knows what else. Sadly, in most cases the results of these inspections are posted in the most useless place imaginable: the local authority’s website.

I’m willing to wager almost anything that the only time anyone visits a food inspection website is after they have been food poisoned. Why? Because they want to know if the jerks have already been cited.

No one checks these agencies’ websites before choosing a restaurant. Consequently, one of the biggest benefits of the inspection data – shifting market demand to more sanitary options – is lost. And of course, there is real evidence that restaurants will improve their sanitation, and people will discriminate against restaurants that get poor ratings from inspectors, when the data is conveniently available. Indeed, in the book Full Disclosure: The Perils and Promise of Transparency, Fung, Graham and Weil noted that after Los Angeles required restaurants to post food inspection results, “Researchers found significant effects in the form of revenue increases for restaurants with high grades and revenue decreases for C-graded (poorly rated) restaurants.” More importantly, the study Fung, Graham and Weil reference also suggested that making the rating system public positively impacted health care costs. Again, after inspection results in Los Angeles were posted on restaurant doors (not on some never-visited website), the county experienced a reduction in emergency room visits, the most expensive point of contact in the system. As the study notes, these were:

an 18.6 percent decline in 1998 (the first year of program operation), a 4.8 percent decline in 1999, and a 5.4 percent decline in 2000. This pattern was not observed in the rest of the state.

This is a stunning result.

So now imagine that rather than just offering contributor-generated reviews of restaurants, Yelp! actually shared real food inspection data! Think of the impact this would have on the restaurant industry. Suddenly, everyone with a mobile phone and Yelp! (it’s free) could make an informed decision not just about the quality of a restaurant’s food, but also about its sanitation. Think of the millions (hundreds of millions?) that could be saved in the United States alone.

All that needs to happen is a simple first step: Yelp! needs to approach one major city – say New York, or San Francisco – and work with them to develop a sensible way to share food inspection data. This is what happened with Google Maps and the GTFS; it all started with one city. Once Yelp! develops the feed, call it something generic, like the General Restaurant Inspection Data Feed (GRIDF), and tell the world you are looking for other cities to share the data in that format. If they do, you promise to include it in your platform. I’m willing to bet anything that once one major city has it, other cities will start to clamor to get their food inspection data shared in the GRIDF format. What makes it better still is that it wouldn’t just be Yelp! that could use the data. Any restaurant review website or phone app could use it – be it Urban Spoon or the New York Times.
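To be clear, GRIDF is a proposal, not an existing standard, so any schema is speculative. But a minimal sketch of what a record in such a feed might look like (every field name, restaurant, and violation code below is invented purely for illustration):

```python
import json

# A purely hypothetical GRIDF inspection record. None of these field
# names exist anywhere; the real work would be agreeing on a schema
# with that first major city.
inspection = {
    "restaurant_name": "Example Diner",
    "address": "123 Main St",
    "inspection_date": "2010-06-01",
    "grade": "A",
    "violations": [
        {"code": "V-12", "description": "Improper food storage temperature"}
    ],
}

# Publish it as a versioned feed any review site or phone app could consume.
feed = json.dumps({"gridf_version": "0.1", "inspections": [inspection]}, indent=2)
print(feed)
```

The point of the exercise is the same as with the GTFS: once the structure is fixed and versioned, Yelp!, Urban Spoon, and the New York Times can all parse the same feed with the same few lines of code.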

The opportunity here is huge. It’s also a win for everyone: Consumers, Health Insurers, Hospitals, Yelp!, Restaurant Inspection Agencies, even responsible Restaurant Owners. It would also be a huge win for Government as platform and open data. Hey Yelp. Call me if you are interested.

The Three Laws of Open Government Data

Yesterday, at the Right To Know Week panel discussion – Conference for Parliamentarians: Transparency in the Digital Era – organized by the Office of the Information Commissioner, I shared three laws for Open Government Data that I’d devised on the flight from Vancouver.

The Three Laws of Open Government Data:

  1. If it can’t be spidered or indexed, it doesn’t exist
  2. If it isn’t available in open and machine readable format, it can’t engage
  3. If a legal framework doesn’t allow it to be repurposed, it doesn’t empower

To explain, (1) basically means: Can I find it? If Google (and/or other search engines) can’t find it, it essentially doesn’t exist for most citizens. So you’d better ensure that you are optimized to be crawled by all sorts of search engine spiders.

After I’ve found it, (2) notes that, to be useful, I need to be able to play with the data. Consequently, I need to be able to pull or download it in a useful format (e.g. an API, subscription feed, or a documented file). Citizens need data in a form that lets them mash it up with Google Maps or other data sets, or analyze it in Excel. This is essentially the difference between VanMaps (look, but don’t play) and the Vancouver Data Portal (look, take and play!). Citizens who can’t play with information are citizens who are disengaged or marginalized from the discussion.
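Law (2) can be made concrete with a few lines of code. If a city publishes, say, its drinking fountains as a machine-readable CSV, a citizen can convert it to GeoJSON and drop it straight onto a map; if the data is only viewable on a website, none of this is possible. A minimal sketch (the fountain names and coordinates below are invented for illustration):

```python
import csv
import io
import json

# Hypothetical open-data CSV; the rows are made up for this example.
fountains_csv = """name,lat,lon
Stanley Park fountain,49.3017,-123.1417
City Hall fountain,49.2609,-123.1139
"""

# Convert the rows to GeoJSON, a standard format mapping tools understand.
# Note GeoJSON coordinates are ordered [longitude, latitude].
features = []
for row in csv.DictReader(io.StringIO(fountains_csv)):
    features.append({
        "type": "Feature",
        "geometry": {
            "type": "Point",
            "coordinates": [float(row["lon"]), float(row["lat"])],
        },
        "properties": {"name": row["name"]},
    })

geojson = json.dumps({"type": "FeatureCollection", "features": features})
print(geojson)
```

Ten lines of code turn a downloadable file into a mashup; no amount of code turns a look-but-don’t-touch web page into one.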

Finally, even if I can find it and play with it, (3) highlights that I need a legal framework that allows me to share what I’ve created, to mobilize other citizens, provide a new service or just point out an interesting fact. This is the difference between Canada’s House of Parliament’s information (which, due to crown copyright, you can take, play with, but don’t you dare share or re-publish) and say, Whitehouse.gov which “pursuant to federal law, government-produced materials appearing on this site are not copyright protected.”

Find, Play and Share. That’s what we want.

Of course, a brief scan of the internet has revealed that others have been thinking about this as well. There are these excellent 8 Principles of Open Government Data, which are more detailed and, admittedly, better, especially for a conversation at the CIO level and below. But for talking to politicians (or Deputy Ministers or CEOs), like those in attendance during yesterday’s panel or, later that afternoon, the Speaker of the House, I found the simplicity of three resonated more strongly; it is a simpler list they can remember and demand.

Creating a City of Vancouver that thinks like the web

Last November my friend Mark Surman – Executive Director of the Mozilla Foundation – gave this wonderful speech entitled “A City that Thinks Like the Web” as a lunchtime keynote for 300 councillors, tech staff and agency heads at the City of Toronto’s internal Web 2.0 Summit.

During the talk the Mayor of Toronto took notes, blackberried his staff to find out what had been done and what was still possible, and committed the City of Toronto to follow Mark’s call to:

  1. Open our data. transit. library catalogs. community centre schedules. maps. 311. expose it all so the people of Toronto can use it to make a better city. do it now.
  2. Crowdsource info gathering that helps the city.  somebody would have FixMyStreet.to up and running in a week if the Mayor promised to listen. encourage it.
  3. Ask for help creating a city that thinks like the web. copy Washington, DC’s contest strategy. launch it at BarCamp.

The fact is every major city can and should think like the web. The first step is to get local governments to share (our) data. We, collectively as a community, own this data and could do amazing things with it, if we were allowed. Think of how Google Maps is now able to use Translink data to show us where bus stops are, what buses stop there and when the next two are coming!

Google Map Transit YVR

Imagine if anyone could create such a map, mashing up a myriad of data from local governments, provincial ministries, StatsCan? Imagine the services that could be created, the efficiencies gained, the research that would be possible. The long tail of public policy analysis could flourish with citizen coders, bloggers, non-profits and companies creating ideas, services, and solutions the government has neither the means nor the time to address.

If data is the basic food source of such an online ecosystem, then having it categorized, structured and known is essential. The second step is making it available via APIs. Interestingly, the City of Vancouver appears to have taken that first step. VanMaps is a fascinating project undertaken by the City of Vancouver and I encourage people to check it out. It is VERY exciting that the city has done this work and, more importantly, made it visible to the public. This is forward-thinking stuff. The upside is that, in order to create VanMaps, all the data has been organized. The downside is that – as far as I can tell – the public is restricted to looking at, but not accessing, the data. That means integrating these data sets with Google Maps, or mashing them up with other data sets, is not possible (please correct me if I’ve got it wrong).

Indeed, VanMaps’ Terms of Use suggest that even if the data were accessible, you aren’t allowed to use it.

VanMaps EULA

Item 4 is worth noting: VanMap may only be used for internal business or personal purposes. My interpretation is that any mashup using VanMap data is verboten.

But let’s not focus on that for the moment. The key point is that creating a Vancouver that thinks like the web is possible. Above all, it increasingly looks like the IT infrastructure to make it happen may already be in place.

Google Walk

No, it is not the swagger of a recently bought-out start-up founder; it is the very cool new feature Google Maps just threw in.

Normally, when you get directions on Google Maps it assumes you are in a car, so it shows you the fastest route as if you are driving. This means it takes detours around one-way streets and the like.

Now there is a “walking” function, so Google Maps computes the fastest route as though you are on foot. Very cool. Now, what would be really nice is if it “balanced” distance against vertical height so you could pick the flattest walking route. I tend to gravitate to railway tracks. The cool thing about railways is that they can never exceed a 3.5 degree grade (or so I read somewhere once), so I always like walking along tracks because I know I’ll never hit too steep a hill.
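The "balance distance against height" idea is really just a shortest-path problem with a different edge cost. A minimal sketch: run Dijkstra over a toy street graph, but charge a penalty for every metre of climb on top of every metre of distance (the graph, distances, and climbs below are invented for illustration; Google's actual routing is of course far more sophisticated):

```python
import heapq

# Toy street graph: each edge is (neighbor, distance_m, climb_m).
# All numbers are made up for this example.
graph = {
    "home":   [("hill", 400, 30), ("tracks", 600, 0)],
    "hill":   [("cafe", 400, 30)],
    "tracks": [("cafe", 700, 5)],
    "cafe":   [],
}

def flattest_route(start, goal, climb_penalty=20.0):
    """Dijkstra where each metre of climb costs `climb_penalty` extra metres."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nbr, dist, climb in graph[node]:
            heapq.heappush(
                queue, (cost + dist + climb_penalty * climb, nbr, path + [nbr])
            )
    return None, float("inf")

path, cost = flattest_route("home", "cafe")
print(path, cost)
```

With the penalty in place the router prefers the 1,300 m flat route along the tracks over the 800 m route up the hill; set `climb_penalty` to zero and you get ordinary shortest-distance routing back.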

Very excited to try this feature out. For an avid walker like me having this feature in my blackberry is key. Very psyched.

H/T to Jeremy V for emailing me the link.

Urban Public Transit Done Right

Metronauts, eat your hearts out. :)

Was back in Vancouver yesterday. It was a glorious day – the kind you write about in your blog. Anyway, I rode the bus downtown for several meetings and noticed this sign:

text a bus sched

In short, you can now text “33333” + the identifying number found on every bus stop in Vancouver and… the arrival times of the next 6 scheduled buses will be texted to you.

Now, this schedule is probably static and does not adjust for the fact that a specific bus may be running late, caught in traffic, have blown a tire, etc. But it is a start.

Anything that gives transit users more information is a good thing, especially if it raises their expectations around the timeliness and predictability of service (as I suspect this will). A ridership that is more demanding of its public transport is more invested in its public transport.

I can already see the logical next step… Imagine a transit user sends a text to find out when the next bus will arrive. When that bus (and possibly the subsequent bus) fails to show up, he or she starts looking for a complaints or information line to call. Their expectation is going to be that the person on the other end of the line can answer the question: “Where is my bus?” The obvious conclusion to this scenario: take the GPS emitters that are on every bus and open up their APIs so that we can all see where they are. It is going to rock transit users’ worlds when they can open up Google Maps on their phones, search “Vancouver, Transit, 22” and see the current location of all the 22 buses.
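No such public feed exists yet, so here is only a sketch of what consuming one might look like: a hypothetical JSON feed of bus positions, filtered down to a single route. Every vehicle ID, route, and coordinate below is invented for illustration:

```python
import json

# Hypothetical real-time GPS feed for buses. The structure, vehicle IDs,
# and positions are all made up; Translink publishes no such feed (yet).
feed_json = """[
  {"route": "22", "vehicle": "V1041", "lat": 49.2640, "lon": -123.1050},
  {"route": "99", "vehicle": "V2210", "lat": 49.2627, "lon": -123.1380},
  {"route": "22", "vehicle": "V1087", "lat": 49.2420, "lon": -123.1010}
]"""

# The "Vancouver, Transit, 22" search: filter the feed to one route's buses,
# ready to be plotted on a map.
buses = [b for b in json.loads(feed_json) if b["route"] == "22"]
for bus in buses:
    print(bus["vehicle"], bus["lat"], bus["lon"])
```

That is the entire client side; the hard part is the policy decision to open the feed, not the technology.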

Translink, you’ve opened a Pandora’s box of expectations for this user. It is a good first step.

[BTW: Transit geeks in Vancouver should already be reading this blog, which, of course, was on the case long before me. Long live the long tail of blogs.]