From Atlantic Cities:

Besides its coverage of a wide spectrum of urban issues, from mystifying crosswalks to the false economics of sprawl, TheAtlanticCities.com aims to share ideas about best practices. And for those who think about cities, one important way to do that is to find innovative ways of ranking urban performance indicators to determine what makes some regions succeed. A pertinent recent example: the Global Creativity Index, developed by our own Richard Florida and his team at the Martin Prosperity Institute at the University of Toronto.

Academics who do this work know that reliable and current data is crucial but elusive. For example, national or state/provincial-level economic data may not be sufficiently granular to reveal trends at the urban or municipal level. There are also boundary issues: for the purposes of collecting statistics to generate indicators, does “the city” mean the political entity, the census metropolitan area, or, in the case of large urbanized regions, the agglomeration of several CMAs?

The Global City Indicators Facility, a research unit at the University of Toronto founded by director Patricia McCarney in 2009, is building a city-by-city database of up-to-date, meticulously delineated statistical information on a wide range of urban metrics—everything from crime rates to building permits—that can be used by municipal officials to benchmark their decisions. The project, overseen by a board representing member cities, traces its roots to concerns from World Bank officials about sketchy data on urban regions.

Loughborough University’s Globalization and World Cities Research Network adopts yet another approach, measuring a broad range of economic and private-sector indicators to determine the commercial interconnectedness of large cities and, from that, the global hierarchy of urban regions.

Despite such efforts to inject academic rigor into the business of measuring urban performance, the rankings that tend to get the most public attention often suffer from serious methodological shortcomings, according to Zack Taylor, a Toronto-based planning consultant and Ph.D. candidate whose trenchant study, “Lies, Damn Lies and Statistics: A Critical Examination of City Ranking Studies,” critiques the highly publicized city rankings produced by private organizations and extensively promoted in the media.

His 78-page report, commissioned by Canada’s Intergovernmental Committee on Economic and Labour Force Development, aims to shine an analytical light on the growing number of privately commissioned urban rankings, including those produced by Mercer Consulting, Mastercard, KPMG, UBS, PricewaterhouseCoopers and others. These studies attract widespread media attention and find their way into the promotional material produced by municipal and regional economic development agencies and local chambers of commerce.

Taylor, founder of Metapolis Consulting, points out that the true purpose of such rankings is widely overlooked: the sponsors aim to sell their proprietary data to multinational corporations as a means of helping HR teams set cost-of-living allowances for senior managers and executives posted to overseas offices. “They can’t tell the story of the actual experience of the people who live there because they are meant to evaluate the cities from the perspective of outsiders,” says Taylor, who describes that discovery as a “revelation.”

The way these studies are promoted to the media also suggests the sponsoring firms recognize the brand bounce that comes with the inevitable coverage.

The problems aren’t limited to mandate. After examining several leading studies over a period of years, Taylor discovered that annual scores can shift unpredictably due to methodological issues: fluctuations in global currency markets, changes in evaluation criteria, incomplete data, inconsistent definitions of what constitutes a particular urban region (e.g., the census metropolitan area versus political boundaries), and the choice of cities being evaluated. The currency effect alone can reorder a list, as the sketch below illustrates.
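A minimal sketch makes the currency point concrete: in the toy comparison below, each city’s basket costs exactly the same in local currency in both years, yet the U.S.-dollar ranking still changes. The cities, prices, and exchange rates are all invented for illustration and are not drawn from Taylor’s report:

```python
# Hypothetical illustration: each city's basket costs the same in local
# currency in both years, yet a currency swing alone reorders the USD ranking.
basket_local = {"Toronto": 2900, "Zurich": 3400, "Sydney": 3100}

rates_2010 = {"Toronto": 0.97, "Zurich": 0.96, "Sydney": 0.92}  # USD per local unit (invented)
rates_2011 = {"Toronto": 1.05, "Zurich": 1.13, "Sydney": 0.95}  # after exchange-rate moves (invented)

def most_expensive_first(rates):
    # Convert each city's unchanged local-currency basket into USD, then rank.
    usd = {city: basket_local[city] * rates[city] for city in basket_local}
    return sorted(usd, key=usd.get, reverse=True)

print(most_expensive_first(rates_2010))  # ['Zurich', 'Sydney', 'Toronto']
print(most_expensive_first(rates_2011))  # ['Zurich', 'Toronto', 'Sydney']
```

Nothing about life in any of these cities changed between the two runs; only the exchange rates did.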

In the report, he also notes there’s an important but often unacknowledged distinction between a city’s score and its ranking: because of very small shifts in the data, a city’s ranking may slip relative to the others on the list, magnifying the impression that something has gone seriously amiss, even though its overall score has barely budged.
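A toy calculation, again with invented numbers, shows how little daylight there can be between the two measures: when the field is tightly bunched, a half-point dip is enough to drop a city three places:

```python
# Toy illustration of score vs. rank: when scores are tightly bunched,
# a tiny score change can move a city several places. All numbers invented.
year1 = {"A": 74.6, "B": 74.1, "C": 73.9, "D": 73.8, "E": 73.7, "F": 73.5}
year2 = dict(year1, C=73.4)  # city C slips by just half a point

def rank_of(city, scores):
    # Sort cities by score, highest first, and report the city's 1-based rank.
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(city) + 1

print(rank_of("C", year1))  # 3
print(rank_of("C", year2))  # 6, on a score change of only 0.5
```

A headline reporting that city C “plunged from 3rd to 6th” would be technically accurate and substantively misleading, which is precisely Taylor’s point.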

So when it comes to publicizing such findings, Taylor has this bit of sage advice for journalists, readers, and economic development officials alike: “Be skeptical.”