Thursday, June 5th, 2008
Let’s face it, every city likes to know where it places in the league tables. And there is no end of survey data to satisfy this appetite. Hardly a week goes by that somebody doesn’t issue a report comparing cities on various factors. These frequently generate some media buzz. But media reports are almost always just a recitation of the survey’s conclusions. What I find more interesting is to dig behind the rankings and see the methods these groups actually use to calculate their ratings. Believe it or not, it is often very difficult to get an understanding of the survey methodology from the published reports or the publisher’s web site. And often there is no raw data available either. But you can usually glean enough to make some judgments.
Let’s have fun with a couple of examples. The first is the recently issued “American Fitness Index”, which rated the top 15 metro areas in the country, plus Indianapolis, on their general health and fitness. No reports are yet loaded on the web site, but the Indianapolis Star had the data sheet for Indianapolis [dead link], which ranked #12 out of the 16 metros in the survey. This sheet reveals that the ranking has three portions: personal health indicators (11 items), community/environmental factors (15 items), and health care providers (1 item). It’s not clear how these are weighted, but the preponderance of the items are in the community/environmental factors section, which looks suspect to me. It focuses almost exclusively on government parks and recreation facilities. There’s nothing about, for example, air pollution. Some of the items appear to have a tenuous relationship to health and fitness, for example public transit commuters per capita and dog parks per capita. I like transit. Transit might be good. But why does sitting on the bus make you so much more fit that it deserves to be on this survey? And dog parks? The health care providers index just listed primary care physicians per capita. There are certainly other measures that would be useful. Additionally, this is a quantity measure, not a quality measure. And it isn’t clear to me whether more doctors is a good thing or a bad thing. It might mean greater access to health care, or it might mean more sick people.
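To put numbers on that “preponderance” point: if every item counted equally (an assumption on my part; the index’s actual weighting isn’t published), the section shares would break down like this:

```python
# Item counts from the Indianapolis data sheet. The equal-item weighting
# below is my assumption, not something the AFI publishes.
sections = {
    "personal health indicators": 11,
    "community/environmental factors": 15,
    "health care providers": 1,
}
total = sum(sections.values())  # 27 items in all

for name, items in sections.items():
    print(f"{name}: {items}/{total} items = {items / total:.0%} of the score")
```

On that assumption, more than half the score would ride on the parks-and-recreation-heavy community section, which is why the item mix matters so much.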
The other survey I was looking at was the metro competitiveness survey published by the Beacon Hill Institute at Suffolk University. This was inspired by the Global Competitiveness Report published by the World Economic Forum. Their scorecard groups its criteria under the following headings:
- Government and Fiscal Policies
- Human Resources
- Business Incubation
- Environmental Policy
I think this is a pretty solid list of categories. The only possible quibble is Environmental Policy. Does having a stricter environmental protection policy help your competitiveness or hurt it? Having very lax environmental laws appears to be one thing that has really made China very competitive – at least in the short term. We’ll see, but in my view the jury is still out.
It is below the category level that some of the individual ratings start to look suspect. The first thing that pops out at me is the method of calculation. They rate the factors on a scale of 0-10 by setting the mean to 5, the standard deviation to 1, and the range to 0-10. Then there is further normalizing and combining. This sounds impressive, but why do it? Forcing the scores into a normal distribution means that outliers are heavily rewarded or punished. To me the very word “index” implies a simple arithmetic calculation, not a fancy statistical one. The report did not provide any rationale for this type of calculation.
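Here is a sketch of what that kind of rescaling does to a clustered field with one outlier. The report doesn’t spell out its exact procedure, so the clamping and rescaling details below are my assumptions:

```python
import statistics

def index_scores(values, target_mean=5, target_sd=1, lo=0, hi=10):
    """Rescale raw values so the mean is 5 and the SD is 1, clamped to 0-10.

    This loosely mimics the Beacon Hill-style normalization; the exact
    clamping behavior is an assumption, not taken from the report.
    """
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [min(hi, max(lo, target_mean + target_sd * (v - mean) / sd))
            for v in values]

# Five metros cluster together; one outlier dominates the spread.
raw = [10, 11, 12, 13, 14, 60]
print([round(s, 2) for s in index_scores(raw)])
```

Notice that the five clustered metros land within about a quarter of a point of each other, while the outlier leaps more than two points clear: the outlier inflates the standard deviation, squashing the ordinary differences between everyone else.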
Then we consider the actual measures, some of which are good and some not so good. Consider “toxic releases per 1000 sq mi” in the environmental section. That’s a good measure of the environment, except that MSAs are made up of whole counties and vary hugely in size. Some have large counties that extend for many miles into the countryside, which can skew the calculations. A better measure would be emissions per sq. mi. of the urbanized area, a more restricted footprint, though it is probably impossible to get data sliced that way. The Bond Rating looks like a nice figure to list under fiscal policy. But I notice that Indianapolis, which has a AAA rating from S&P, doesn’t have that listed as one of its competitive advantages. That doesn’t seem right. “High speed lines per 1000” wouldn’t crack my top five infrastructure items, though it does for these guys. Nor would “NIH support to institutions” be on my top five technology list.
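To see the skew, consider two hypothetical metros with identical emissions and identical urbanized footprints, where one MSA’s counties sprawl far into empty countryside (all figures invented for illustration):

```python
# Two hypothetical metros. Same emissions, same urbanized footprint;
# only the MSA's total land area differs. All figures are invented.
metros = {
    # name: (total releases in lbs, MSA land area sq mi, urbanized area sq mi)
    "Compact MSA": (400_000, 2_000, 1_000),
    "Sprawling MSA": (400_000, 8_000, 1_000),
}

for name, (releases, msa_area, urban_area) in metros.items():
    # The survey-style measure divides by the whole MSA's land area...
    per_msa = releases / msa_area
    # ...while an urbanized-area measure divides by where people actually live.
    per_urban = releases / urban_area
    print(f"{name}: {per_msa:.0f} lbs/sq mi over the MSA, "
          f"{per_urban:.0f} lbs/sq mi over the urbanized area")
```

Both metros are equally dirty where people actually live (400 lbs/sq mi of urbanized area), yet the sprawling one scores four times cleaner on the per-MSA measure simply because its counties include more empty land.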
The point is not to dismiss these surveys as worthless. They actually contain a lot of good data. But they are also limited by their methods. The key is not to pay too much attention to the headline ratings, but to dig into the details to find out where you really stand on the things that matter, or where a complete picture might not be drawn.
For an example of what I consider a really great city comparison report, check out the Columbus “Benchmarking Central Ohio 2008” survey, which has a wealth of great data about that city and its competitors.