Category: Econometrics

Why avoid panel data in examining social mobility?

Last week at Politics and Prose, I had the opportunity to hear Robert Putnam’s book talk for Our Kids: The American Dream in Crisis. In his book, he focuses on data that purport to show widening income inequality and social unraveling.

Putnam told a personal anecdote about his deteriorating hometown in the Rust Belt. If he could show that the economic decline from the loss of manufacturing jobs in the Rust Belt was representative of the broader economy across all other sectors, he might have a strong data point, but a personal anecdote is not enough. The availability heuristic is not strong evidence.

So much of the talk concerning income inequality rests on an unstated premise about social mobility. The widespread fear is not just that the rich are getting richer, but that the rich are getting richer at the expense of the poor. The mental model assumes a fixed pool of wealth in the world that should be divvied up fairly so as to avoid predation by the strong on the weak. Often, the evidence presented for this “fixed pie” theory is the shrinking share of income among the lower quintiles and the growing share of income among the higher quintiles. The problem with this methodology is that it doesn’t actually account for social mobility. To prove that capital is flowing from individuals in the bottom quintile to individuals in the top quintile, we need panel data.

If we don’t analyze panel data, we might observe the top quintile, profiting from some entirely new high-tech sector, drastically increasing its income by 20% while the lower quintiles increase by only 2%. The top quintile’s share of total income would grow, but the additional wealth generated at the top wouldn’t imply any material loss for the bottom quintile.
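To make the arithmetic concrete, here is a minimal sketch with hypothetical numbers (nothing here comes from Putnam’s data): the bottom quintile’s share of income falls even as its absolute income rises.

```python
# Minimal sketch with hypothetical numbers: growth concentrated at the top
# shrinks the bottom quintile's *share* of income even though its absolute
# income still rises.

bottom, top = 20_000, 200_000        # illustrative average incomes today
bottom_next = bottom * 1.02          # bottom quintile grows 2%
top_next = top * 1.20                # top quintile grows 20%

share_before = bottom / (bottom + top)
share_after = bottom_next / (bottom_next + top_next)

print(f"bottom share before: {share_before:.1%}")               # 9.1%
print(f"bottom share after:  {share_after:.1%}")                # 7.8%
print(f"bottom absolute change: {bottom_next - bottom:+,.0f}")  # +400
```

A falling share is exactly what the quintile-share charts show, yet in this toy example nobody at the bottom is worse off in absolute terms.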

In the Q&A, I pressed Professor Putnam on his methodology, specifically, to what extent he used panel data to show decreased social mobility. After his book signing, he elaborated for me.

Putnam claimed that there was some good panel data for income, but that it couldn’t be used to show current trends in social mobility.

We might suppose that the relevant panel data to measure social mobility would include income y_{it} for individuals i ∈ {Poor, Rich} observed at ages t = 20 and t = 40.

At t = 40 or so, we might expect people to be earning close to the most income of their lives.

The methodological issue Putnam pointed out was that individual incomes over a lifetime are highly nonlinear. If you were to track a random sample of individuals starting at t = 20, very different kinds of individuals would look very similar: both of the individuals below would appear in the bottom quintile. Specifically, y_{Rich,20} could be -$100,000 for a Harvard pre-law student who’s taking out student loans, and y_{Poor,20} could be $16,000 for a minimum wage job. However, y_{Rich,40} might be $450,000, while y_{Poor,40} might be something like $25,000.
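A toy sketch using the figures above (the individuals and incomes are purely illustrative, not drawn from any dataset) shows how a cross-sectional snapshot at t = 20 ranks the two identically, while the panel reveals their diverging trajectories.

```python
# Toy illustration with the figures from the text (hypothetical individuals).
# A snapshot at age 20 puts both near the bottom of the income distribution;
# only following the same people to age 40 reveals the divergence.

panel = {
    "Harvard pre-law student": {20: -100_000, 40: 450_000},
    "Minimum-wage worker":     {20:   16_000, 40:  25_000},
}

# Cross-sectional view: rank by income observed in the single year t = 20.
snapshot_20 = sorted(panel.items(), key=lambda kv: kv[1][20])
print("Ranking at age 20 (lowest first):", [name for name, _ in snapshot_20])

# Panel view: follow each individual from t = 20 to t = 40.
for name, income in panel.items():
    print(f"{name}: {income[20]:+,} at 20 -> {income[40]:+,} at 40")
```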

Putnam pointed out that, because of how this panel data is collected, it is always intrinsically 30–40 years out of date. Wait, is this a cop-out? Is this methodological laziness?

Just because any one particular study requires 40 years doesn’t mean we couldn’t run multiple concurrent, staggered studies with different individuals to build up panel data over time. We can imagine Study A starting its cohort at t = 0 in 1945, Study B starting its cohort in 1950, Study C in 1955, and so on, as sketched below. Then, despite the nonlinearity in lifetime earnings, we would still be able to see trends in how individuals are or aren’t moving up and out of their birth quintiles.
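Here is a rough sketch of the staggered design (the start years and follow-up length are placeholders, not an actual survey program): because the cohorts start five years apart, several of them have already been followed to age 40 at any given point, so mobility can be compared across cohorts without waiting 40 years for a single new study.

```python
# Sketch of a staggered-cohort panel design (all values are placeholders).
# Each study follows its own cohort for ~40 years; because the cohorts start
# five years apart, trends across cohorts become visible long before any
# single new study has run its full course.

COHORT_START_YEARS = [1945, 1950, 1955, 1960, 1965]
FOLLOW_UP_YEARS = 40   # roughly birth (t = 0) to peak earnings (t = 40)

def cohorts_with_complete_panels(as_of_year):
    """Cohorts whose members have already been followed to t = 40."""
    return [start for start in COHORT_START_YEARS
            if start + FOLLOW_UP_YEARS <= as_of_year]

print(cohorts_with_complete_panels(1990))  # [1945, 1950]
print(cohorts_with_complete_panels(2005))  # [1945, 1950, 1955, 1960, 1965]
```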


Invasive monitoring for discounted health insurance policies

What is called “health insurance” in the United States often isn’t actually health insurance, but a kind of imperfect prepayment plan for medical services.

If “insurance” companies were ever again to become actual insurance companies, seeking profit by assessing and pricing the risk of payouts, how much producer and consumer surplus might be unlocked by invasive health monitoring? Suppose, for instance, that they could monitor their customers’ risk factors far more comprehensively by requiring monthly blood tests or shared access to a 23andMe profile.

Surely there’s potential producer surplus, because insurance companies would be able to keep more money if they knew certain kinds of healthy customers would require fewer expenditures. Surely there’s potential consumer surplus, because healthy customers would be rewarded with lower prices for their good health. Pricing could even be dynamic, depending on the particular monitoring technology.
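As a purely illustrative sketch (the risk factors, weights, and base premium are all invented; this is not an actuarial model), dynamic pricing could amount to a base premium adjusted each month by monitored risk signals:

```python
# Toy dynamic-pricing sketch (all factors and weights are invented for
# illustration; a real actuarial model would be far more involved).

BASE_MONTHLY_PREMIUM = 400.0

# Hypothetical multiplicative adjustments from monitored risk signals.
RISK_MULTIPLIERS = {
    "elevated_blood_glucose": 1.15,
    "smoker_biomarkers":      1.40,
    "high_genetic_risk":      1.20,
}

def monthly_premium(observed_risks):
    """Price a policy from this month's monitored risk factors."""
    price = BASE_MONTHLY_PREMIUM
    for risk in observed_risks:
        price *= RISK_MULTIPLIERS.get(risk, 1.0)
    return round(price, 2)

print(monthly_premium([]))                           # 400.0 (no flags: base rate)
print(monthly_premium(["smoker_biomarkers"]))        # 560.0
print(monthly_premium(["elevated_blood_glucose",
                       "high_genetic_risk"]))        # 552.0
```

The point is only that the price responds to the monitoring: customers with no flagged signals keep the discounted base rate.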

Aside from gains in producer and consumer surplus, there would be an even greater benefit. Prices would serve as a kind of check on biased medical research. Medical academics politicking for research money might continue to make wild and untrue claims about different pathologies, but insurance companies would have skin in the game to evaluate medical research.

As far as I know, privacy regulations and price regulations make this idea completely impossible today.

Direct democracy aggravates anti-immigration policy in Switzerland

Brett Gall recently alerted me to Jens Hainmueller and Dominik Hangartner’s fascinating study of Swiss naturalization decisions. They exploited changes in law across Swiss municipalities for a fairly sound regression discontinuity design to assess the effects of direct democracy on immigration.

Foreigners don’t simply apply to live in Switzerland; they apply to live in specific Swiss municipalities. Historically, some municipalities used representative democracy to admit immigrants, but others managed immigration requests via direct democracy, actually sending the résumés of all applicants to all citizens of the municipality to approve or reject.

Hainmueller and Hangartner’s methodology is stunningly beautiful. There was a series of rulings by the Swiss Federal Court from 2003 to 2005 that required different municipalities to transition from direct democracy to representative democracy for their immigration decisions. The immigration application process takes about 4-5 years, so applicants weren’t able to anticipate any institutional changes to the municipalities to which they had applied.

Hainmueller and Hangartner collected panel data on the Swiss municipalities forced to make this transition, and used a regression discontinuity design to compare similar applicants whose immigration applications were processed by the same municipalities at almost the same time, some under direct democracy and some under representative democracy.
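A hedged sketch of that comparison (simulated data and a simplified local-linear specification; the paper’s actual estimation is more sophisticated) treats the court-ordered switch date as the cutoff of a regression discontinuity in time:

```python
# Sketch of a regression-discontinuity-in-time comparison (simulated data;
# the actual Hainmueller-Hangartner estimation is more sophisticated).
import numpy as np

rng = np.random.default_rng(0)
n = 2_000

# Running variable: decision date in months relative to the court-ordered
# switch from direct to representative democracy at 0.
months = rng.uniform(-24, 24, n)
post_switch = (months >= 0).astype(float)

# Simulated outcomes: a jump in approval probability at the switch.
true_jump = 0.25
p_approve = 0.40 + true_jump * post_switch + 0.002 * months
approved = rng.binomial(1, np.clip(p_approve, 0, 1))

# Local linear regression within a bandwidth around the cutoff.
bandwidth = 12.0
in_window = np.abs(months) <= bandwidth
X = np.column_stack([
    np.ones(n),            # intercept
    post_switch,           # discontinuity (the estimate of interest)
    months,                # baseline trend in decision dates
    months * post_switch,  # allow the trend to change after the switch
])[in_window]
y = approved[in_window]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated jump in approval rate at the switch: {coef[1]:.3f}")
```

The coefficient on the post-switch indicator is the estimated jump in approval rates from moving the decision out of citizens’ hands.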

The least xenophobic municipalities that switched saw no change in their immigration patterns. The most xenophobic municipalities that switched saw immigration increase drastically.

Why does this happen? Xenophobic citizens voting privately don’t need to justify their xenophobia to anyone. In contrast, public servants facing public scrutiny hesitate to reject applications on the basis of poorly reasoned xenophobia.