Thursday, June 5th, 2008

City Rankings: Behind the Surveys

Let’s face it, every city likes to know where it places in the league tables. And there is no end of survey data to satisfy this appetite. Hardly a week goes by that somebody doesn’t issue a report comparing cities on various factors. These frequently generate some media buzz. But media reports are almost always just a recitation of the survey’s conclusions. What I find more interesting is to dig behind the rankings and see the methods these groups actually use to calculate their ratings. Believe it or not, it is often very difficult to get an understanding of the survey methodology from the published reports or the publisher’s web site. And often there is no raw data available either. But you can usually get enough to make some judgments.

Let’s have fun with a couple of examples. The first is the recently issued “American Fitness Index”, which rated the top 15 metro areas in the country, plus Indianapolis, on general health and fitness. No reports are yet loaded on the web site, but the Indianapolis Star had the data sheet for Indianapolis [dead link], which ranked #12 out of the 16 metros in the survey. This sheet reveals that the ranking has three portions: personal health indicators (11 items), community/environmental factors (15 items), and health care providers (1 item). It’s not clear how these are weighted, but the preponderance of the items falls in the community/environmental factors section, which looks suspect to me. It focuses almost exclusively on government parks and recreation facilities. There’s nothing about, for example, air pollution. Some of the items would appear to have a tenuous relationship to health and fitness, for example public transit commuters per capita and dog parks per capita. I like transit. Transit might be good. But why does sitting on the bus make you so much more fit that it deserves a place in this survey? And dog parks? The health care providers section just lists primary care physicians per capita. There are certainly other measures that would be useful. Additionally, this is a quantity measure, not a quality one. And it isn’t clear to me whether more doctors is a good thing or a bad thing. It might mean greater access to health care, or it might mean more sick people.

The other survey I was looking at was the metro competitiveness survey published by the Beacon Hill Institute at Suffolk University. This was inspired by the Global Competitiveness Report published by the World Economic Forum. Their scorecard includes criteria grouped under the following headings:

  • Government and Fiscal Policies
  • Security
  • Infrastructure
  • Human Resources
  • Technology
  • Business Incubation
  • Environmental Policy
  • Openness

I think this is a pretty solid list of categories. The only possible quibble is Environmental Policy. Does having a stricter environmental protection policy help your competitiveness or hurt it? Having very lax environmental laws appears to be one thing that has really made China very competitive – at least in the short term. We’ll see, but in my view the jury is still out.

It is below the category level that some of the individual ratings start to look suspect. The first thing that pops out at me is the method of calculation. They rate the factors on a 0-10 scale by standardizing each one: the mean is set to 5, the standard deviation to 1, and the results are constrained to the 0-10 range. Then there is further normalizing and combining. This sounds impressive, but why do it? Enforcing a standardized distribution like this means that outliers are heavily rewarded or punished. To me the very word “index” implies a simple arithmetic calculation, not a fancy statistical one. The report did not provide any rationale for this type of calculation.
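To see what I mean about outliers, here is a quick sketch of how that kind of standardization appears to work. This is my reconstruction, not the report’s actual formula, and the raw numbers are made up:

```python
# Rescale raw metro values to z-scores, shift so the mean is 5 and the
# standard deviation is 1, then clip to the 0-10 range. This is an
# assumed reading of the report's method, with hypothetical data.
import statistics

def index_score(values):
    """Convert raw values to 0-10 scores with mean 5 and std dev 1."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population standard deviation
    return [min(10.0, max(0.0, 5 + (v - mean) / sd)) for v in values]

# Five hypothetical metros, one of which is an extreme outlier:
raw = [10, 11, 12, 13, 60]
scores = index_score(raw)
print([round(s, 2) for s in scores])
```

Notice what happens: the four ordinary metros all bunch up between roughly 4.4 and 4.6 despite real differences between them, while the outlier rockets to about 7. One unusual data point dominates the whole factor.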

Then we consider the actual measures, some of which are good and some not so good. Consider “toxic releases per 1,000 sq mi” in the environmental section. That’s a good measure of the environment, except that MSA’s are made up of counties and vary hugely in size. Some have large counties that extend for many miles into the countryside, which can skew the calculations. A better measure would be emissions per sq. mi. of the urbanized area, a more restricted geography, though it is probably impossible to get data sliced that way. The bond rating looks like a nice figure to list under fiscal policy. But I notice that Indianapolis, which has a AAA rating from S&P, doesn’t have that listed as one of its competitive advantages. That doesn’t seem right. “High speed lines per 1,000” wouldn’t crack my top five infrastructure items, though it does for these guys. Nor would “NIH support to institutions” be on my top five technology list.
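The land-area distortion is easy to see with some toy arithmetic. All numbers here are hypothetical, just to illustrate the point:

```python
# Two hypothetical metros with identical total toxic releases. One MSA
# roughly matches its urbanized area; the other includes large rural
# counties, inflating its land area and diluting its per-area figure.
emissions_lbs = 500_000          # same total releases in both metros
compact_msa_sq_mi = 1_500        # MSA close to the urbanized area
sprawling_msa_sq_mi = 6_000      # MSA padded with rural counties

compact_rate = emissions_lbs / (compact_msa_sq_mi / 1_000)
sprawling_rate = emissions_lbs / (sprawling_msa_sq_mi / 1_000)

print(round(compact_rate))    # ~333,333 lbs per 1,000 sq mi
print(round(sprawling_rate))  # ~83,333 lbs per 1,000 sq mi
```

The sprawling metro looks four times “cleaner” on this measure even though its urbanized area pollutes just as much, purely because its counties happen to be big.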

The point is not to dismiss these surveys as worthless. They actually contain a lot of good data. But they also are limited by their methods. The key is not to pay too much attention to the headline ratings, but to dig into the details to find out where you really stand on the things that matter, or where a complete picture might not be drawn.

For an example of what I consider a really great city comparison report, check out the Columbus “Benchmarking Central Ohio 2008” survey, which has a wealth of great data about that city and its competitors.

Topics: Economic Development, Sustainability

3 Responses to “City Rankings: Behind the Surveys”

  1. Anonymous says:

“Statistics, damn statistics”

- do agree these rankings need to always be read with a discerning eye; do also agree the Columbus study appears to be pretty straightforward, however, stats can be interpreted in lots of different ways…the more granular the better otherwise you can draw incorrect conclusions

  2. Anonymous says:

    73% of all surveys are inaccurate.

  3. Lynn Stevens says:

    Chicago was just named Fast Company’s U.S. (Fast) City of the Year. The editor willingly admits to “fast” being a loose term of creativity, vibrancy and economic dynamism. Happily the article included quirky neighborhood highlights including Hot Doug’s in the Avondale community and the Hideout, a truly hidden gem, and I loved the imagery of author Alex Kotlowitz dancing to house music. That kind of picture is worth a thousand stats.



About the Urbanophile


Aaron M. Renn is an opinion-leading urban analyst, consultant, speaker, and writer on a mission to help America’s cities thrive and find sustainable success in the 21st century.
