Tag Archives: public policy

Minister Moore and the Myth of Market Forces

Last week was a bad week for the government on the copyright front. The government recently tabled legislation to reform copyright, and the man in charge of the file, Heritage Minister James Moore, gave a speech at the International Chamber of Commerce in which he decried those who questioned the bill as “radical extremists.” The comment was a none-too-veiled attack on people like University of Ottawa Professor Michael Geist who have championed reasonable copyright reform and who, like many Canadians, are concerned about some aspects of the proposed bill.

Unfortunately for the Minister, things got worse from there.

First, the Minister denied making the comment in messages to two different individuals who inquired about it:

Still worse, the Minister got into an online debate with Cory Doctorow, a bestselling writer (he won the Ontario White Pine Award for best book last year and his current novel For the Win is on the Canadian bestseller lists) and the type of person whose interests the Heritage Minister is supposed to engage and advocate on behalf of, not get into fights with.

In a confusing 140-character back and forth that lasted a few minutes, the Minister oddly defended Apple and insulted Google (I’ve captured the whole debate here thanks to the excellent people at bettween). But unnoticed in the debate is an astonishing fact: the Minister seems unaware of both the task at hand and the implications of the legislation.

The following innocuous tweet summed up his position:

Indeed, in the Minister’s 22 tweets in the conversation he uses the term “market forces” six times, and the theme of “letting the market or consumers decide” appears in over half his tweets.

I too believe that consumers should choose what they want. But if the Minister were a true free market advocate he wouldn’t believe in copyright reform. Indeed, he wouldn’t believe in copyright at all. In a true free market, there’d be no copyright legislation because the market would decide how to deal with intellectual property.

Copyright law exists in order to regulate and shape a market precisely because we don’t think market forces work on their own. In short, the Minister’s legislation is creating the marketplace. Normally I would celebrate his claims of being in favour of “letting consumers decide.” But this legislation will determine what those choices will and won’t be, and the Twitter debate should leave Canadians concerned, since the bill limits consumer choices long before products reach the shelves.

Indeed, as Doctorow points out, the proposed legislation actually kills concepts created by the marketplace – like Creative Commons – that give creators control over how their works can be shared and re-used:

But advocates like Cory Doctorow and Michael Geist aren’t just concerned about the Minister’s internal contradictions in defending his own legislation. They have practical concerns that the bill narrows the choice for both consumers and creators.

Specifically, they are concerned with the legislation’s handling of what are called “digital locks.” A digital lock is software embedded in a DVD of your favourite movie, or in a music file you buy from iTunes, that prevents you from making a copy. Previously it was legal for you to make a backup copy of your favourite tape or CD, but with a digital lock this not only becomes practically more difficult, it becomes illegal.

Cory Doctorow outlines his concerns with digital locks in this excellent blog post:

They [digital locks] transfer power to technology firms at the expense of copyright holders. The proposed Canadian rules on digital locks mirror the US version in that they ban breaking a digital lock for virtually any reason. So even if you’re trying to do something legal (say, ripping a CD to put it on your MP3 player), you’re still on the wrong side of the law if you break a digital lock to do it.

But it gets worse. Digital locks don’t just harm content consumers (the very people Minister Moore says he is trying to provide with “choice”); they harm content creators even more:

Here’s what that means for creators: if Apple, or Microsoft, or Google, or TiVo, or any other tech company happens to sell my works with a digital lock, only they can give you permission to take the digital lock off. The person who created the work and the company that published it have no say in the matter.

So that’s Minister Moore’s version of “author’s rights” — any tech company that happens to load my books on their device or in their software ends up usurping my copyrights. I may have written the book, sweated over it, poured my heart into it — but all my rights are as nothing alongside the rights that Apple, Microsoft, Sony and the other DRM tech-giants get merely by assembling some electronics in a Chinese sweatshop.

That’s the “creativity” that the new Canadian copyright law rewards: writing an ebook reader, designing a tablet, building a phone. Those “creators” get more say in the destiny of Canadian artists’ copyrights than the artists themselves.

In short, the digital lock provisions reward neither consumers nor creators. Instead, they give the greatest rights and rewards to the one group of people in the equation whose rights are least important: distributors.

That a Heritage Minister doesn’t understand this is troubling. That he would accuse those who seek to point out this fact and raise awareness of it of being “radical extremists” is scandalous. Canadians have entrusted this person with the responsibility for creating a marketplace that rewards creativity, content creation and innovation while protecting the rights of consumers. At the moment, we have a minister who shuts out the very two groups he claims to protect while wrapping himself in a false cloak of the “free market.” It is an ominous start for the debate over copyright reform, and the minister has only himself to blame.

The Myth of the Open Data Mob: a response to Mike Ananny

I recently discovered that Mike Ananny wrote this response to a piece I initially posted here and then on The Mark titled Let Us Audit Parliament’s Books. I encourage you to read both my piece and Ananny’s thoughtful response. And, in the spirit of dialogue, I have two thoughts in response.

First, Ananny misrepresents the thrust of my argument. He suggests that my goal is to replace public institutions with amorphous “crowds.” Nothing could be further from the truth. Indeed, I say, at the end of the article, that the Auditor General should do her own audit – using the same information that is available to everyone. I’m not in favour of replacing institutions with crowds, or democracy with populism. What I am in favour of is ensuring there are checks on institutions.

Second, Ananny creates a straw man of my argument, painting the picture of a single monolithic crowd. This misrepresentation can be found in lines such as this from his piece:

It’s okay that we do this. But in the kind of crowd-sourced audit Eaves describes, who are the “others” that we trust to discover on our behalf and teach us what they learn? At least we know who the auditor general is and how – cumbersome as it might be – she and the government can be replaced.

This is certainly not what I sought to describe, nor what I think I did, but as an author I share responsibility for being clear.

Do I believe there will be a single amorphous crowd? No. I believe there will be the public, much like today. And it will discern the debate in the same way it currently does. What does this mean? I suspect that if the expenses were public there might be numerous audits, and that the ones that find it easiest to earn the public’s trust will be those conducted by “others” who first and foremost declare who they are. The most obvious candidate for this would be the Globe and Mail. (Wouldn’t it be nice if they had access to MP expenses?) Of course, the Globe may not have the resources to go through every line of every MP’s expenses, so they may ask people to flag lines that seem to be of particular importance. This is, of course, how The Guardian newspaper in the UK exposed some of the most problematic expenses in their MP expense scandal. In short, this isn’t a single faceless mob; this is about allowing numerous actors – from public institutions to the media to self-interested private citizens – to take part. Some will self-organize, others will not. But there will be a diversity of perspectives.

Second, and more importantly, these competing audits would be good for democracy and for public institutions. I completely agree with Ananny’s quote from Bentham. A perfectly knowledgeable public is a myth. Yes, most of us, on most issues, knowingly or not, delegate responsibility for forming our beliefs to others. The challenge is: to whom do we delegate? Ananny seems confident he knows exactly who it should be (an AG who, in fact, only has the power to shame). He wants us to place our faith in a crowd of one – the AG – whom no one gets to choose and who herself has no oversight. I’m interested in a different outcome. We live in a world where it is easier than ever to offer more than one resource to which citizens can delegate their trust. More importantly, by sharing the expenses, different parties could assess how others conduct their audits – exposing biases, differing assumptions and flaws, and enabling clearer comparisons. In short, a public debate could take place. Giving everyone access to MP expenses will, admittedly, be messy, but then so is democracy. The point is, you either believe in public debate or you don’t.

Encouragingly, this is ultimately what Ananny seems to want as well, as he states:

I know we don’t have to choose between crowds or experts – I want both – but if it’s a question of emphasis, I’d much rather be the constituent of an AG who can be legally reprimanded and dramatically fired than an unwilling patron of a crowd that may or may not know what it’s doing.

I want both as well. I’d also love to see a supportive infrastructure that helps people contribute to audits. Indeed, this was the thrust of my June 10th piece Learning from Libraries: The Literacy Challenge of Open Data. But you don’t create that infrastructure by not sharing the accounts openly. As my libraries piece argues, sharing is a precondition to developing such an infrastructure.

So if, as suggested, this is a question of emphasis, why did Ananny choose to use my piece as a launchpad for his own? We seem to be on the same page (we both appear to want to improve public institutions and public debates). I think the ultimate reason lies in this last point. Ananny’s examples refer to crowds or institutions that are deemed expert by somebody. But the public’s trust in an institution or resource or even a crowd isn’t granted or ordained; it is earned. Ananny’s solutions keep returning to the notion that we need to ordain trust and delegate, whereas mine is that we need to enable emergent systems so that many actors can attempt to earn trust and we can debate. This is why I agree that the AG’s office should, as he suggests, provide a program to help people learn how to do audits. But I also think society will be best served when a diversity of (particularly emergent) approaches is possible, perhaps involving actors like accounting firms and universities. This would allow others to be a check on the AG, which will enhance, not destroy, confidence. But again, this is only possible if we all have access to the information.

And that ultimately is my point. Access to information is a precondition that enables us to engage in better debates, foster systems that support alternative perspectives and also provides a check on public institutions. It is these checks and debate, not blind delegation, that will improve confidence.

Gun Registry vs. "Truth in Sentencing" – when policy is divorced from evidence

So, this morning Kevin Page, Parliamentary Budget Officer, reports that the “Truth in Sentencing” legislation will cost Canadians $5.1B to implement between now and 2016. Essentially, Canadians will pay an extra billion dollars a year on a program that most experts agree will do nothing to reduce crime or make Canadians safer.

It seems worth looking back at this point at the Gun Registry, which the Conservatives bemoaned as being an ineffective boondoggle, but which of course also ended up costing an additional billion dollars a year to manage, and it is equally unclear if it has made Canadians any safer (that said, it has helped police officers, who use it 10,000 times a day, but whether this help is worth $1B, or if that money could have been better spent elsewhere, is unclear).

The point here is that fear and ideology can do terrible things to budgets (not to mention social outcomes: a billion dollars – let alone TWO billion dollars – a year spent on certain social programs could do a lot to prevent crime). In both cases we have policies that appeal to the values of a base (tough on guns vs. tough on crime) when in reality this is merely posturing (let’s spend money to look like we are stopping gun violence vs. let’s spend money to look like we are stopping crime). Tracking guns and locking away criminals feels like it has a direct impact, so it must be effective, whereas offering more drug and alcohol rehab programs or after-school programs is indirect, so it must be less effective. But notice it’s what we feel, not what is actually empirically demonstrable. So should we lose all hope in the ability of politicians to pursue effective public policy?

I say no…

While billion-dollar lessons are painful to learn (Ontario, e-health and proprietary software is another that comes to mind), they can be salutary in getting everybody to refocus on what’s effective, which means getting back to the evidence. What will be interesting is to see if the Conservative government can adapt. For this government the challenge will be greater, as ideology has trumped evidence for most of the past few years. Remember, this is the government that has decried Insite, the supervised injection site, despite the evidence that it works, and even had a Minister (Clement, when he was at Health) rant about how the Canadian Medical Association and its doctors were unethical for supporting it. They were, of course, just supporting the best medical practice and outcome. So this could be a long road to travel.

Of course, if they fight the tide, I suspect that the prison boondoggle could turn into their version of the gun registry (especially coming on the heels of the G20 fake lake). But I suspect that ideological fervor in the face of budget realities has a much shorter road… to opposition. So structurally, they’ll be pushed down the road whether they want it or not. Either way, it will make for some interesting political and policy watching.

On Policy Alpha geeks, network thinking and foreign policy

In the past few weeks the Liberal Party of Canada (LPC) and the Canadian International Council (CIC) both launched new visions for Canada’s foreign policy. Reading each, I’m struck by how much overlap both documents have with Middle to Model Power, the Canada25 report written 5 years ago by over 500 young Canadians from across the country and around the world.

With Middle to Model Power, a group of young people largely self-organized to lay out a vision and selection of ideas around how Canada could rethink its foreign policy. Take a look at this selection from its executive summary, including an overview and the first recommendation:

We submit that Canada should cease assessing its influence on the basis of its size or position within an obsolete global hierarchy. Instead, Canada25 calls on Canadians to look at the world as a network, where influence is based on the capacity of an individual, company, non-governmental organization (NGO) or country to innovate and collaborate. Building on this perspective, we propose that Canada become a Model Power—a country whose influence is linked to its ability to innovate, experiment, and partner; a country that, by presenting itself as a model, invites the world to assess, challenge, borrow from, and contribute to, its efforts.
In pursuit of our vision of Canada as a Model Power, we outline three priorities for action. These, accompanied by some of our recommendations, include:

MAKE CANADA A NETWORK NODE. Enhance the ability of Canadians to create, nurture, and tap into international networks:
• Issue five-year work visas to foreign graduates of Canadian universities
• Reach out to Canada’s expatriate community by creating an international network of Canadian leaders…

You can download the full report here, but you get the idea. Remember, this is a group of 23-35 year-olds writing in 2005.

Now, quickly compare this to the summaries of both the LPC and CIC’s new reports.

The LPC report, called the Global Networks Strategy, opens by stating:

Networks define how the world works today, as hierarchies did in the past. Influence is gained through connectedness, and by being at the centre of networks. That is good news for Canada, because we have a reputation for being able to work with others, we have shaped many multilateral organizations, and our population today reflects the diversity of the world. The Global Networks Strategy is designed to leverage these assets. It sets priority areas in which the federal government must collaborate with the full range of players who contribute vigorously – and most often in networks – to Canada’s presence in the world: other governments, non-governmental organizations, the private sector, young Canadians, academia, faith- based groups, artists and others.

And the CIC report, titled Open Canada: A Global Positioning Strategy for a Networked World, has as one of its opening paragraphs:

Canada will never be the most powerful nation on Earth. But we live in a digital age, where might is measured in knowledge rather than muscularity. If we keep building on our openness—attracting the best and the brightest citizens, generating and exchanging new ideas and new ways of doing things and welcoming investment in our economy—Canada can position itself at the centre of the networked world that is emerging in the 21st century.

And, unsurprisingly, the deeper details of the reports offer many similar prescriptions.

So how, on a shoestring budget, could a group of young Canadians, many of whom were not foreign policy experts, write a report that identified an organizing principle that, 5 years later, both a major political party and one of the country’s newest and best-funded think tanks would put at the heart of their own reports?

A few ideas come to mind:

1) The Medium is the Message: Middle to Model Power was not written on a wiki (in 2005 none of us knew what a wiki was!) but it was written over email. The authors were scattered across the country and the process of organizing local events was relatively decentralized. People raised whatever topics mattered to them, and during the drafting phase they simply sent me their ideas and we batted them around. There was structure, but we were a pretty flat organization and… we were very connected. For Canada25 a network wasn’t just an idea that emerged out of the process, it was the process. It should hardly be surprising that the way we saw the world reflected how we organized ourselves. (When I say that Canada’s digital economy strategy will fail unless written on GCPEDIA, this is part of what I’m hinting at.) The medium is the message. It’s hard (but not impossible) to write about networks deep in hierarchy.

2) Look for Policy Alpha Geeks in resource-poor environments: So why did Canada25 think in terms of networks? How was it that, before Wikinomics or GPS or pretty much anything else I’ve seen, Canada25 organized itself this way? Well, it wasn’t because we were strategic or young. It was because we had very little money. We couldn’t afford to organize any other way. To get 500 Canadians around the world to think about foreign policy we had to let them self-organize – we didn’t have an org structure or facilitators to do it for them. We had to take the cheapest tools (email) and overuse them. Don’t get me wrong, Canada25 was not poor. Our members were generally very well educated; we had access to computers and the internet and access to interesting people to interview and draw ideas from. But the raw infrastructure we had at our disposal was not significant, and it forced us to adopt what I now see were disruptive technologies and processes. We became Policy Alpha Geeks because we had to innovate, not to be relevant, but to ensure the project survived.

3) It’s not about the youth: People presume that our thinking emerged because we were young. This is not entirely correct. Again, I submit that we got to thinking about networks because we were operating in a resource-weak environment and had exposure to new tools (email) and a risk tolerance to try using them in an ambitious way. This isn’t about age; it just happens that generally it is young people who don’t have lots of resources and are willing to experiment with new tools. Older people, who frequently have more senior titles, generally have access to more resources and so can rely on more established, but more resource-intensive, tools and processes. But again, this is about mindset, not about age. Indeed, it is really about the innovator’s dilemma in policy making. Don’t believe me? Well, as lead author of Middle to Model Power I can tell you that the most influential book on my thinking was Alvin Toffler’s Future Shock, which I read in the month preceding the drafting of the report. It was written in 1970 by an author who was, at the time, 42. In sum, young people can be a good guide, but it is environmental factors that you can replicate, not intrinsic qualities of being young, that allow you to innovate.

Both the LPC and the CIC’s documents are good and indeed more up to date than Middle to Model Power. But in terms of core organizing principles the three documents are similar. So if you are genuinely interested in this, take a look at all three. I do think they put forward what could become an emerging centrist consensus regarding organizing principles for Canadian foreign policy. Certainly that was the ambition back in 2005.

Learning from Libraries: The Literacy Challenge of Open Data

We didn’t build libraries for a literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have public policy literate citizens, we build them so that citizens may become literate in public policy.

Yesterday, in a brilliant article on The Guardian website, Charles Arthur argued that a global flood of government data is being opened up to the public (sadly, not in Canada) and that we are going to need an army of people to make it understandable.

I agree. We need a data-literate citizenry, not just a small elite of hackers and policy wonks. And the best way to cultivate that broad-based literacy is not to release in small or measured quantities, but to flood us with data. To provide thousands of niches that will interest people in learning, playing and working with open data. But more than this we also need to think about cultivating communities where citizens can exchange ideas as well as involve educators to help provide support and increase people’s ability to move up the learning curve.

Interestingly, this is not new territory.  We have a model for how to make this happen – one from which we can draw lessons or foresee problems. What model? Consider a process similar in scale and scope that happened just over a century ago: the library revolution.

In the late 19th and early 20th century, governments and philanthropists across the western world suddenly became obsessed with building libraries – lots of them. Everything from large ones like the New York Main Library to small ones like the thousands of tiny, one-room county libraries that dot the countryside. Big or small, these institutions quickly became treasured and important parts of any city or town. At the core of this project was the belief that literate citizens would be both more productive and more effective citizens.

But like open data, this project was not without controversy. It is worth noting that at the time some people argued libraries were dangerous: they could spread subversive ideas – especially about sexuality and politics – and giving citizens access to knowledge out of context would render them dangerous to themselves and society at large. Remember, ideas are a dangerous thing. And libraries are full of them.

Cora McAndrews Moellendick, a Masters of Library Studies student who draws on the work of Geller sums up the challenge beautifully:

…for a period of time, censorship was a key responsibility of the librarian, along with trying to persuade the public that reading was not frivolous or harmful… many were concerned that this money could have been used elsewhere to better serve people. Lord Rodenberry claimed that “reading would destroy independent thinking.” Librarians were also coming under attack because they could not prove that libraries were having any impact on reducing crime, improving happiness, or assisting economic growth, areas of keen importance during this period… (Geller, 1984)

Today when I talk to public servants, think tank leaders and others, most grasp the benefit of “open data” – of having the government share the data it collects. A few, however, talk about the problem of just handing data over to the public. Some question whether the activity is “frivolous or harmful.” They ask “what will people do with the data?” “They might misunderstand it” or “They might misuse it.” Ultimately they argue we can only release this data “in context.” Data, after all, is a dangerous thing. And governments produce a lot of it.

As in the 19th century, these arguments must not prevail. Indeed, we must do the exact opposite. Charges of “frivolousness” or a desire to ensure data is only released “in context” are code to obstruct or shape data portals to ensure that they only support what public institutions or politicians deem “acceptable”. Again, we need a flood of data, not only because it is good for democracy and government, but because it increases the likelihood of more people taking interest and becoming literate.

It is worth remembering: We didn’t build libraries for an already literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have a data or public policy literate citizenry, we build them so that citizens may become literate in data, visualization, coding and public policy.

This is why coders in cities like Vancouver and Ottawa come together for open data hackathons, to share ideas and skills on how to use and engage with open data.

But smart governments should not only rely on small groups of developers to make use of open data. Forward-looking governments – those that want an engaged citizenry, a 21st-century workforce and a creative, knowledge-based economy in their jurisdiction – will reach out to universities, colleges and schools and encourage them to get their students using, visualizing, writing about and generally engaging with open data. Not only to help others understand its significance, but to foster a sense of empowerment and sense of opportunity among a generation that could create the public policy hacks that will save lives, make public resources more efficient and effective and make communities more livable and fun. The recent paper published by the University of British Columbia students who used open data to analyze graffiti trends in Vancouver is a perfect early example of this phenomenon.

When we think of libraries, we often just think of a building with books. But 19th-century libraries mattered not only because they had books, but because they offered literacy programs, book clubs, and other resources to help citizens become literate and thus more engaged and productive. Open data catalogs need to learn the same lesson. While they won’t require the same centralized and costly approach as 19th-century libraries, governments that help foster communities around open data, that encourage their school systems to use it as a basis for teaching, and that support their citizens’ efforts to write and suggest their own public policy ideas will, I suspect, benefit from happier and more engaged citizens, along with better services and stronger economies.

So what is your government/university/community doing to create its citizen army of open data analysts?

Apps for Climate Action Update – Lessons and some new sexy data

Okay, so I’ll be the first to say that the Apps4Climate Action data catalog has not always been the easiest to navigate and some of the data sets have not been machine readable, or even data at all.

That however, is starting to change.

Indeed, the good news is three fold.

First, the data catalog has been tweaked and has better search and an improved capacity to sort out non-machine readable data sets. A great example of a government starting to think like the web, iterating and learning as the program progresses.

Second, and more importantly, new and better data sets are starting to be added to the catalog. Most recently the Community Energy and Emissions Inventories were released in Excel format. This data shows carbon emissions for all sorts of activities and infrastructure at a very granular level. Want to compare the GHG emissions of a duplex in Vancouver versus a duplex in Prince George? Now you can.

Moreover, this is the first time any government has released this type of data at all, not to mention making it machine readable. So not only have the app possibilities (how green is your neighborhood, rate my city, calculate my GHG emissions) all become much more realizable, but any app using this data will be among the first in the world.
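For developers wondering what working with this kind of release looks like once it is machine readable, here is a minimal sketch in Python. Note the column names and emission figures below are invented for illustration; the actual inventory's schema and values differ.

```python
import csv
import io

# Hypothetical sample mimicking the shape of a community emissions inventory.
# Column names and numbers are illustrative only, not the real data set.
SAMPLE = """community,building_type,co2e_tonnes
Vancouver,duplex,4.2
Prince George,duplex,6.8
Vancouver,single_detached,5.9
"""

def emissions_by_community(csv_text, building_type):
    """Return a {community: tonnes of CO2e} map for one building type."""
    result = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["building_type"] == building_type:
            result[row["community"]] = float(row["co2e_tonnes"])
    return result

# Compare duplex emissions across communities.
duplexes = emissions_by_community(SAMPLE, "duplex")
print(duplexes)  # {'Vancouver': 4.2, 'Prince George': 6.8}
```

An app like “how green is your neighborhood” is essentially this loop plus a map or chart on top, which is why granular, machine-readable releases matter so much more than PDFs.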

Finally, probably one of the most positive outcomes of the app competition to date is largely hidden from the public. The fact that members of the public have been asking for better data or even for data sets at all(!) has made a number of public servants realize the value of making this information public.

Prior to the competition, making data public was a compliance problem – something you did but figured no one would ever look at or read. Now, for a growing number of public servants, it is an innovation opportunity. Someone may take what the government produces and do something interesting with it. Even if they don’t, someone is nonetheless taking interest in your work – something that has rewards in and of itself. This, of course, doesn’t mean that things will improve overnight, but it does help advance the goal of getting government to share more machine-readable data.

Better still, the government is reaching out to stakeholders in the development community and soliciting advice on how to improve the site and the program, all in a cost-effective manner.

So even within the Apps4Climate Action project we see some of the changes the promise of Government 2.0 holds for us:

  • Feedback from community participants driving the project to adapt
  • Iterations of development conducted “on the fly” during a project or program
  • Successes and failures resulting in quick improvements (release of more data, a better website)
  • Shifting culture around disclosure and cross-sector innovation
  • All on a timeline that can be measured in weeks

Once this project is over I’ll write more on it, but I wanted to update people, especially given some of the new data sets that have become available.

And if you are a developer or someone who would like to do a cool visualization with the data, check out the Apps4Climate Action website or drop me an email – I’m happy to talk you through your idea.

Saving Millions: Why Cities should Fork the Kuali Foundation

For those interested in my writing on open source, municipal issues and technology, I want to be blunt: I consider this to be one of the most important posts I’ll write this year.

A few months ago I wrote an article and blog post about “Muniforge,” an idea based on a speech I’d given at a conference in 2009 in which I advocated that cities with common needs should band together and co-develop software to reduce procurement costs and better meet requirements. I continued to believe in the idea but recognized that cultural barriers would likely make it difficult to realize.

Last month that all changed. While at Northern Voice I ended up talking to Jens Haeusser, an IT strategist at the University of British Columbia, and confirmed something I’d long suspected: some people much smarter than me had already had the same idea and had made it a reality… not among cities but among academic institutions.

The result? The Kuali Foundation: “…a growing community of universities, colleges, businesses, and other organizations that have partnered to build and sustain open-source administrative software for higher education, by higher education.”

In other words, for the past five years, over 35 universities in the United States, Canada, Australia and South Africa have been successfully co-developing software.

For cities everywhere interested in controlling spending or reducing costs, this should be an earth-shattering revelation – a wake-up call – for several reasons:

  • First, a viable working model for muniforge has existed for 5 years and has been a demonstrable success, both in creating high quality software and in saving the participating institutions significant money. Devising a methodology to calculate how much a city could save by co-developing software with an open source license is probably very, very easy.
  • Second, universities share many of the challenges of cities. Both have conservative bureaucracies, limited budgets, and significant legacy systems. In addition, neither has IT as a core competency, and both are frequently concerned with licenses, liability and “owning” intellectual property.
  • Third, and possibly best of all, the Kuali Foundation has already addressed the critical obstacles to such an endeavour: it has developed the licensing agreements, policies, decision-making structures, and workflow processes necessary for success. Moreover, all of this legal, policy and work infrastructure is itself available to be copied. For free. Right now.
  • Fourth, the Kuali Foundation is not a bunch of free-software hippies who depend on the kindness of strangers to patch their software (a stereotype that really must end). Quite the opposite. The Kuali Foundation has helped spawn 10 different companies that specialize in implementing and supporting (through SLAs) the software the foundation develops. In other words, the universities have created a group of competing firms dedicated to serving their niche market. Think about that. Rather than deal with vendors who specialize in serving large multinationals and who’ve tweaked their software to (somewhat) work for cities, the foundation has fostered competing service providers (to say it again) within the higher education niche.

As a result, I believe a group of forward-thinking cities – perhaps starting with those in North America – should fork the Kuali Foundation. That is, they should copy Kuali’s bylaws, its structure, its licenses and pretty much everything else – possibly even the source code for some of its projects – and create a Kuali for cities. Call it Muniforge, or Communiforge or CivicHub or whatever… but create it.

We can radically reduce the costs of software to cities, improve support by creating the right market incentives to foster companies whose interests are directly aligned with cities, and create better software that meets cities’ unique needs. The question is… will we? All that is required is for CIOs to begin networking and for a few to discover some common needs. One idea I have immediately: the City of Nanaimo could apply the Kuali modified Apache license to the council-monitoring software package it developed in-house and upload it to GitHub. That would be a great start – one that could collectively save cities millions.

If you are a city CIO/CTO/Technology Director and are interested in this idea, please check out these links:

The Kuali Foundation homepage

Open Source Collaboration in Higher Education: Guidelines and Report of the Licensing and Policy Framework Summit for Software Sharing in Higher Education by Brad Wheeler and Daniel Greenstein (key architects behind Kuali)

Open Source 2010: Reflections on 2007 by Brad Wheeler (a must read, lots of great tips in here)

Heck, I suggest looking at all of Brad Wheeler’s articles and presentations.

Another overview article on Kuali by University Business

Phillip Ashlock of Open Plans has an overview article of where some cities are heading re open source.

And again, my original article on Muniforge.

If you aren’t already, consider reading the OpenSF blog – these guys are leaders and one way or another will be part of the mix.

Also, if you’re on twitter, consider following Jay Nath and Philip Ashlock.

Articles I'm digesting – 25/5/2010

Been a while since I’ve done one of these. A couple of good ones from the last few months. Big thank-yous to those who sent me these pieces. Always enjoy.

The Meaning of Open by Jonathan Rosenberg, Senior Vice President, Product Management

Went back and re-read this. Every company makes mistakes and Google is no exception (privacy settings on Buzz being everyone’s favourite), but this statement, along with Google’s DataLiberation.org (which, unlike Facebook, is designed to ensure you can extract your information from Google’s services), shows why Google enjoys greater confidence than Facebook, Apple or any number of its competitors. If you’re in government, the private sector or the non-profit sector, read this post. This is how successful 21st century organizations think.

Local Governments Offer Data to Software Tinkerers by Claire Cain Miller (via David Nauman & Andrew Medd)

Another oldie (December 2009 is old?) but a goodie. Describes a little of the emerging ecosystem for open local government data, along with some of the tensions it is creating. Best head-in-the-sand line:

Paul J. Browne, a deputy commissioner of the New York City Police Department, said it releases information about individual accidents to journalists and others who request it, but would not provide software developers with a regularly updated feed. “We provide public information, not data flow for entrepreneurs,” he said.

So… if I understand correctly, the NYPD will only give data to people who ask, and prefers to tie up valuable resources filling individual requests rather than just provide a constant feed that anyone can use. Got it. Uh, and just for the record, those “entrepreneurs” are the next generation of journalists and the people who will make the public information useful. The NYPD’s “public information” is effectively useless, much like what my hometown police department offers. Does anyone actually look at PDFs and pictures of crimes? That you can only get on a weekly basis? Really? In an era of spreadsheets and Google Maps… no.
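To make the contrast concrete, here is a minimal sketch of what a regularly updated, machine-readable feed enables. Everything below is hypothetical – the field names and records are invented, and a real feed would be fetched from a URL rather than embedded as a string – but the point stands: a few lines of code replace a queue of individual requests.

```python
import json

# Hypothetical accident feed, as a police department might publish it:
# one JSON record per incident, updated continuously. Field names and
# figures are invented for illustration.
feed = """[
  {"borough": "Manhattan", "date": "2010-05-01", "injuries": 2},
  {"borough": "Brooklyn",  "date": "2010-05-01", "injuries": 0},
  {"borough": "Manhattan", "date": "2010-05-02", "injuries": 1}
]"""

incidents = json.loads(feed)

# Anyone - a journalist, an entrepreneur, a researcher - can now slice
# the data however they need, e.g. total injuries per borough.
by_borough = {}
for record in incidents:
    borough = record["borough"]
    by_borough[borough] = by_borough.get(borough, 0) + record["injuries"]

print(by_borough)  # {'Manhattan': 3, 'Brooklyn': 0}
```

That aggregation took four lines; extracting the same numbers from weekly PDFs means re-typing every figure by hand.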

Didacticism in Game Design by Clint Hocking (via Lauren Bacon)

eaves.ca readers, meet Clint Hocking. My main sadness in introducing you is that you’ll discover how a truly fantastic, smart blog reads. The only good news for me is that you are hopefully more interested in public policy, open source and the things I dwell on than in video games, so Clint won’t steal you all away. Just many of you.

A dash from a long post that is worth reading:

As McLuhan says: the medium is the message. When canned, discrete moral choices are rendered in games with such simplicity and lack of humanity, the message we are sending is not the message specific to the content in question (the message in the canned content might be quite beautiful – but it’s not a ludic message) – it is the message inherent in the form in which we’ve presented it: it effectively says that ‘being moral is easy and only takes a moment out of each hour’. To me, this is almost the opposite of the deeper appreciation of humanity we might aim to engender in our audience.

Clint takes video games seriously. And so should you.

The Analytic Mode by David Brooks (via David Brock)

These four lines alone make this piece worth reading. Great lessons for students of policy and politics:

  • The first fiction was that government is a contest between truth and error. In reality, government is usually a contest between competing, unequal truths.
  • The second fiction was that to support a policy is to make it happen. In fact, in government power is exercised through other people. It is only by coaxing, prodding and compromise that presidents [or anyone!] actually get anything done.
  • The third fiction was that we can begin the world anew. In fact, all problems and policies have already been worked by a thousand hands and the clay is mostly dry. Presidents are compelled to work with the material they have before them.
  • The fourth fiction was that leaders know the path ahead. In fact, they have general goals, but the way ahead is pathless and everything is shrouded by uncertainty.

The case against non-profit news sites by Bill Wyman (via Andrew Potter)

Yes, much better that news organizations be beholden to a rich elite than to paying readers… Finally someone takes on the idea that a bunch of enlightened rich people, or better, rich corporate donors, are going to save “the news.” Sometimes it feels like media organizations will do anything they can to avoid actually having to deal with paying customers. Be it relying on advertisers or on rich people to subsidize them, anything appears to be better than actually fighting for customers.

That’s what I love about Demand Media. Some people decry it for creating tons of cheap content, but at least it looked at the marketplace and said: this is a business model that will work. Moreover, it is responding to a real customer demand – searches on Google.

Wyman’s piece also serves as a good counterpoint to the recent Walrus advertising campaign, which essentially boiled down to: Canada needs the Walrus and so you should support it. The danger here is that people at the Walrus believe this line: that they are of value and essential to Canada even if no one (or very few people) bought them or read them. I think people should buy The Walrus not because it would be good for the country but because it is engaging, informative and interesting to Canadians (or citizens of any country). I think the Walrus can have great stories (Gary Stephen Ross’s piece A Tale of Two Cities is a case in point), but if you have a one-year lead time for an article, that’s going to be hard to pull off in the internet era, foundation or no foundation. I hope the Walrus stays with us, but Wyman’s article serves up some arguments worth contemplating.

Open Data: An Example of the Long Tail of Public Policy at Work

As many readers know, Vancouver passed what has locally been termed the Open3 motion a year ago and has had an open data portal up and running for several months.

Around the world, much of the attention on open data initiatives has focused on the development of applications like Vancouver’s Vantrash, Washington DC’s Stumble Safely or Toronto’s Childcare locator. But the other use of data portals is to better understand and analyze phenomena in a city – all of which can potentially lead to a broader diversity of perspectives, better public policy and a more informed public and decision makers.

I was thus pleased to find out about another example of what I’ve been calling the Long Tail of Public Policy when I received an email from Victor Ngo, a student at the University of British Columbia who just completed his 2nd year in the Human Geography program with an Urban Studies focus (He’s also a co-op student looking for a summer job – nudge to the City of Vancouver).

It turns out that last month, he and two classmates did a project on graffiti occurrence and its relationship to land use, crime rates, and socio-economic variables. As Victor shared with me:

It was a group project I did with two other members in March/April. It was for an introductory GIS class and given our knowledge, our analysis was certainly not as robust and refined as it could have been. But having been responsible for GIS analysis part of the project, I’m proud of what we accomplished.

The “Graffiti sites” shapefile was very instrumental to my project. I’m a big fan of the site and I’ll be using it more in the future as I continue my studies.

So here we have university students in Vancouver using real city data to work on projects that could provide some insights, all while learning. This is another small example of why open data matters. This is the future of public policy development. Today Victor may be a student, less certain about the quality of his work (don’t underestimate yourself, Victor), but tomorrow he could be working for government, a think tank, a consulting firm, an insurance company or a citizen advocacy group. Wherever he is, the open data portal will be a resource he will want to turn to.

With Victor’s permission I’ve uploaded his report, Graffiti in the Urban Everyday – Comparing Graffiti Occurrence with Crime Rates, Land Use, and Socio-Economic Indicators in Vancouver, to my site so anyone can download it. Victor has said he’d love to get people’s feedback on it.

And what was the main drawback of using the open data? There wasn’t enough of it.

…one thing I would have liked was better crime statistics, in particular, the data for the actual location of crime occurrence. It would have certainly made our analysis more refined. The weekly Crime Maps that the VPD publishes is an example of what I mean:

http://vancouver.ca/police/CrimeMaps/index.htm

You’re able to see the actual location where the crime was committed. We had to tabulate data from summary tables found at:

http://vancouver.ca/police/organization/planning-research-audit/neighbourhood-statistics.html

To translate: essentially, the city releases this information in a non-machine-readable format, meaning that citizens, public servants at other levels of government and (I’m willing to wager) City of Vancouver public servants outside the police department have to recreate the data in a digital format. What a colossal waste of time and energy. Why not just share the data in a structured, digital way? The city already makes it public; why not make it useful as well? This is what Washington DC (search crime) and San Francisco have done.
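Had the neighbourhood statistics been released as, say, a CSV file, Victor’s re-tabulation step would collapse into a couple of lines of code. A minimal sketch, with column names and numbers invented purely for illustration:

```python
import csv
import io

# Hypothetical machine-readable release of the same neighbourhood crime
# statistics the VPD publishes as summary tables on a web page. All
# values here are made up.
data = """neighbourhood,offence,count
Downtown,Theft from Auto,120
Downtown,Mischief,45
Kitsilano,Theft from Auto,30
"""

rows = list(csv.DictReader(io.StringIO(data)))

# Answering a question takes one line, instead of hours of re-keying
# numbers out of a PDF or HTML table.
theft_total = sum(int(r["count"]) for r in rows if r["offence"] == "Theft from Auto")
print(theft_total)  # 150
```

Structured data like this is exactly what lets a second-year student (or a public servant in another department) spend their time on analysis rather than data entry.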

I hope that more apps get created in Vancouver, but as a public policy geek, I’m also hoping that more reports like these (and the one Bing Thom architects published on the future of Vancouver also using data from the open data catalog) get published. Ultimately, more people learning, thinking, writing and seeking solutions to our challenges will create a smarter, more vibrant and more successful city. Isn’t that what you’d want your city government (or any government, really…) to do?

Canada 3.0 & The Collapse of Complex Business Models

If you haven’t already, I strongly encourage everyone to go read Clay Shirky’s The Collapse of Complex Business Models. I just read it while finishing up this piece and it articulates much of what underpins it in the usual brilliant Shirky manner.

I’ve been reflecting a lot on Canada 3.0 (think SXSWi meets government and big business) since the conference’s end. I want to open by saying there were a number of positive highlights. I came away with renewed respect and confidence in the CRTC. My sense is net neutrality and other core internet issues are well understood and respected by the people I spoke with. Moreover, I was encouraged by what some public servants had to say regarding their vision for Canada’s digital economy. In many corners there were some key people who seemed to understand what policy, legal and physical infrastructure needs to be in place to ensure Canada’s future success.

But these moments aside, the more I reflect on the conference the more troubled I feel. I can’t claim to have attended every session but I did attend a number and my main conclusion is striking: Canada 3.0 was not a conference primarily about Canada’s digital future. Canada 3.0 was a conference about Canada’s digital commercial future. Worse, this meant the conference failed on two levels. Firstly, it failed because people weren’t trying to imagine a digital future that would serve Canadians as creators, citizens and contributors to the internet and what this would mean to commerce, democracy and technology. Instead, my sense was that the digital future largely being contemplated was one where Canadians consumed services over the internet. This, frankly, is the least important and interesting part of the internet. Designing a digital strategy for companies is very different than designing one for Canadians.

But, secondly, even when judged in commercial terms, the conference, in my mind, failed. This is not because the wrong people were there, or that the organizers and participants were not well-intentioned. Far from it. Many good and many necessary people were in attendance (at least as one could expect when hosting it in Stratford).

No, the conference’s main problem was that at the core of many conversations lay an untested assumption: that we can manage the transition of broadcast media (by this I mean movies, books, newspapers & magazines, television), as well as other industries, from (a) a broadcast economy to (b) a networked/digital economy. Consequently, the central business and policy challenge becomes how to help these businesses survive the transition period and get “b” happening asap so that the new business models work.

But the key assumption is that the institutions – private and public – that were relevant in the broadcast economy can transition. Or that the future will allow for a media industry that we could even recognize. While I’m open to the possibility that some entities may make it, I’m more convinced that most will not. Indeed, it isn’t even clear that a single traditional business model, even radically adapted, can adjust to a network world.

What no one wants to suggest is that we may not be managing a transition. We may be managing death.

The result: a conference that doesn’t let those who have let go of the past roam freely. Instead they must lug around all the old modes like a ball and chain.

Indeed, one case in point was listening to managers of the Government of Canada’s multimedia fund share how, to get funding, a creator would need to partner with a traditional broadcaster. To be clear, if you want to kill content, give it to a broadcaster: they’ll play it once or twice, then put it in a vault and no one will ever see it again. Furthermore, a broadcaster has all the infrastructure, processes and overhead that make it unworkable and unprofitable in the online era. Why saddle someone new with all this? Ultimately this is a program designed to create failures and, worse, pollute the minds of emerging multimedia artists with all sorts of broadcast baggage. All in the belief that it will help bridge the transition. It won’t.

The ugly truth is that just as the big horse-buggy makers didn’t survive the transition to the automobile, and many of the makers of large, complex mainframe computers didn’t survive the arrival of the personal computer, our traditional media environment is loaded with the walking dead. Letting them control the conversation, influence policy and shape the agenda is akin to asking horse-drawn carriage makers to write the rules for the automobile era. But this is exactly what we are doing. Copyright law, the pillar of this next economy, is being written not by the PMO but by the losers of the last economy. Expect it to slow our development down dramatically.

And that’s why Canada 3.0 isn’t about planning for 3.0 at all. More like trying to save 1.0.