State Economic Rankings Don’t Measure Up
This post was co-written with Yasuyuki Motoyama and Jared Konczal.
A few weeks ago, a new ranking was published measuring “opportunity” on a state-by-state basis. Vermont came out on top as the friendliest state for upward mobility, scoring well on both the Community and Education metrics.
Someone from Vermont e-mailed us here at the Kauffman Foundation, wondering how this could be in light of Vermont's demographics. And indeed, the state's demographic direction does not scream opportunity. By median age, Vermont is four years older than the nation as a whole. Today, residents over the age of 65 make up 15 percent of the population; by 2030, that share will be 20 percent and rising. Our correspondent had watched many young people leave the state. Vermont does, however, have a high rate of self-employment, so perhaps the report envisioned the opportunities created for everyone who stays?
Checking other state rankings doesn't clear things up: it's difficult to get a grasp on how Vermont, or any other state, truly measures up. In a ranking of personal and economic freedom, Vermont ranks 43rd overall, with black marks for regulatory restrictions. In another ranking of top states "for business" (otherwise undefined), Vermont comes in 39th, yet it ranks third in quality of life and fourth in education in that same report, two qualities that would seem to be important for business growth. In fact, if one sorts these "top states for business" rankings along a second dimension, education, a strange picture emerges: of the top 10 states for education, the highest overall rank for business friendliness is 17th (Wisconsin).
We don't mean to pick on Vermont. It scores well in several categories of the State New Economy Index from the Information Technology and Innovation Foundation (ITIF), and it was the first state in the nation to adopt digital firm formation (creating and operating your company in the cloud), something the Kauffman Foundation has helped promote.
But these state (or city or metropolitan) rankings are generally not helpful—and in some cases, are quite unhelpful—in gauging a state’s economic position. At worst, because they masquerade as policy-relevant, they can distort policy discussions and decisions.
We recently assembled a set of indicators specific to entrepreneurship and innovation and ran 1,000 simulations, randomly varying the weights assigned to the different indicators (most rankings weight their component indicators differently). Across these simulations, 22 different states could claim to be a "top 10" state, depending on the weights. Five different states could claim the mantle of "number one" in entrepreneurship and innovation: Colorado, Oregon, Florida, New Hampshire, and Montana.
These are not differences of degree, either—in many instances, depending on the importance assigned to different indicators, states can zoom up or down the rankings. Depending on what they want to see, policymakers and promoters and others can, quite literally, create whatever ranking they choose.
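To make the weight-sensitivity point concrete, here is a minimal sketch of the kind of simulation described above. The indicator names and scores are made up for illustration; the actual Kauffman exercise used real entrepreneurship and innovation data across many more indicators.

```python
import random

# Hypothetical indicator scores (0-100) for the five states named above.
# These numbers are illustrative only, not real data.
scores = {
    "Colorado":      {"startups": 90, "patents": 70, "self_employment": 60},
    "Oregon":        {"startups": 75, "patents": 85, "self_employment": 65},
    "Florida":       {"startups": 85, "patents": 55, "self_employment": 80},
    "New Hampshire": {"startups": 65, "patents": 75, "self_employment": 85},
    "Montana":       {"startups": 70, "patents": 50, "self_employment": 95},
}

def rank_states(weights):
    """Return states ordered by weighted composite score, best first."""
    composite = {
        state: sum(weights[name] * value for name, value in inds.items())
        for state, inds in scores.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

random.seed(0)
winners = set()
for _ in range(1000):
    # Draw random weights and normalize them to sum to 1.
    raw = {name: random.random()
           for name in ("startups", "patents", "self_employment")}
    total = sum(raw.values())
    weights = {name: value / total for name, value in raw.items()}
    winners.add(rank_states(weights)[0])

# Several different states end up on top, depending only on the weights.
print(sorted(winners))
```

Even with this toy data, the "number one" state changes as the weights shift: heavy emphasis on startups favors one state, heavy emphasis on self-employment favors another, which is exactly the instability the simulations exposed.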
The reasons for this rankings-o-matic madness aren't hard to identify. For one thing, even though any given ranking purports to be based on data, the data themselves are shaped by assumptions, and those assumptions are usually opaque to everyone outside the organization constructing the ranking. Moreover, ideological bias is rife in the rankings industry. One need not read a ranking or report to know what it says: you just need to know who produced it.
All this means that rankings can be arbitrary. Several scholarly analyses have found that state and local rankings generally do not correspond with various measures of economic performance, including job creation and business creation. If you're a policymaker, citizen, or company basing decisions on these rankings, that's a problem. For the producers, it's a feature. When it comes to rankings, there is no such thing as "that's what the data say." Instead, it's "that's what I made my data say."
Well, so what? What can or should be done about this?
We favor a scorecard approach rather than an overall rank. A scorecard, with a distribution of grades or other scores across a variety of indicators, allows consumers of rankings to balance different elements rather than accepting a third-party conclusion that "my state is bad for business" or "my state lacks opportunity."
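As an illustration of the difference, a scorecard can be as simple as reporting each indicator on its own grade scale instead of collapsing everything into one composite rank. The indicator names, values, and grade cutoffs below are hypothetical.

```python
# Hypothetical per-indicator scores (0-100) for one state.
# Illustrative only: not real data about Vermont.
indicators = {
    "Vermont": {
        "education": 92,
        "quality_of_life": 88,
        "regulation": 40,
        "demographics": 35,
    },
}

def grade(score):
    """Map a 0-100 indicator score to a letter grade (assumed cutoffs)."""
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"

for state, inds in indicators.items():
    card = {name: grade(value) for name, value in inds.items()}
    print(state, card)
```

A scorecard like this preserves the tension a single rank hides: a state can earn an A in education and an F on demographics at the same time, and readers can weigh those however they see fit.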
Perhaps, too, a more useful tool for states and communities would be a dynamic collection of data indicators that lets policymakers and residents choose their own weights. One state might place a larger emphasis on entrepreneurship than its neighbor, which might prefer to support more established companies. Such a "choose your own ranking" approach does not solve the political problem: anyone could still use the data for their own narrative. It would also require greater transparency about where the data came from and what assumptions shaped even the data collection in the first place. Not all data are created equal, so simply making them available won't, unfortunately, fix the problem.
This still might be better than an arbitrary rank assigned by a third party claiming to be independent. We need to keep assembling data and research to reach a more informed understanding of what drives economic dynamism and growth at the state and local levels. Take entrepreneurship data: when we say that one state or city has more new businesses than another, we still lack the empirical ability to describe the composition of those new businesses or to identify their economic effects, such as the types of jobs created, the incomes generated, and the impact on other businesses.
This might be the next state ranking to be conducted: Which state does the best in pioneering new modes of data collection and measurement? Perhaps Vermont will top that list as well.