Monday, 30 May 2016

The incredible miracle in poor country development


The amazing improvement in the quality of life of the world's poor people should be common knowledge by now. For example, you have the now-famous "elephant graph", by Branko Milanovic, showing recent income growth at various levels of the global income distribution:


This graph shows that over the last three decades or so, the global poor and middle class reaped huge gains, the rich-country middle class stagnated, and the rich-country rich also did quite well for themselves.

You also have the poverty data from Max Roser, showing how absolute poverty has absolutely collapsed in the last couple of decades, both in percentage terms and in raw numbers of humans suffering under its lash:


This is incredible - nothing short of a miracle. Nothing like this has ever happened before in recorded history. With the plunge in global poverty has come a precipitous drop in global child mortality and hunger. The gains have not been even - China has been a stellar outperformer in poverty reduction - but they have been happening worldwide:


The fall in poverty has been so spectacular and swift that you'd think it would be a stylized fact - the kind of thing that everyone knows is happening, and everyone tries to explain. But on Twitter, David Rosnick strongly challenged the very existence of a rapid recent drop in poverty outside of China. At first he declared that the poor-country boom was purely a China phenomenon. That is, of course, false, as the graphs above clearly show. 

But Rosnick insisted that poor-country development has slowed in recent years, rather than accelerated, and insisted that I read a paper he co-authored for the think tank CEPR, purporting to show this. Unfortunately this paper is from 2006, and hence is now a decade out of date. Fortunately, Rosnick also pointed me to a second CEPR paper from 2011, by Mark Weisbrot and Rebecca Ray, that acknowledges how good the 21st century has been for poor countries:
The paper finds that after a sharp slowdown in economic growth and in progress on social indicators during the second (1980-2000) period, there has been a recovery on both economic growth and, for many countries, a rebound in progress on social indicators (including life expectancy, adult, infant, and child mortality, and education) during the past decade. 
Weisbrot and Ray, averaging growth across country quintiles, find the following:


By their measure, the 2000-2010 decade exceeds or ties the supposed golden age of the 60s and 70s, for all but the top income quintile.

I'm tempted to just stop there. First of all, because Weisbrot and Ray are averaging across countries, rather than across people (as Milanovic does), China is merely one single data point among hundreds in their graph above. So the graph clearly shows that Rosnick is wrong, and the recent unprecedented progress of global poor countries is not just a China story. Case closed.

But I'm not going to stop there, because I think even Weisbrot and Ray are giving the miracle short shrift, especially when it comes to the 1980s and 1990s. 

See, as I mentioned, Weisbrot and Ray weight across countries, not across people:
Finally, the unit of analysis for this method is the country—there is no weighting by population or GDP. A small country such as Iceland, with 300,000 people, counts the same in the averages calculated as does China, with 1.3 billion people and the world’s second largest economy. The reason for this method is that the individual country government is the level of decision-making for economic policy. 
Making Iceland equal to China might allow a better analysis of policy differences (I have my doubts, since countries are all so different). But it certainly gives a very distorted picture of the progress of humankind. Together, India and China contain well over a third of humanity, and almost half of the entire developing world. 

And when you look at the 1980s and 1990s, you see that the supergiant countries of India and China did extremely well during those supposedly disastrous decades. Here's Indian real per capita GDP:


As you can see, during the 1960s, India's GDP increased by a little less than a third - a solid if unspectacular performance. From 1970 to 1980 it increased by perhaps a tenth - near-total stagnation. In the 1980s, it increased by a third - back to the same performance as the 60s. In the 1990s, it did even better, increasing by around 40 percent. And of course, in the 2000s, it zoomed ahead at an even faster rate.
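(To make those decade figures easier to compare, here's a quick back-of-the-envelope conversion into compound annual growth rates. The decade multipliers below are my rough readings of the graph, matching the approximate figures quoted above, not exact data.)

```python
# Convert rough decade-over-decade multipliers for Indian real per capita GDP
# (approximate readings of the graph above) into compound annual growth rates.
decade_multipliers = {
    "1960s": 1.30,  # "a little less than a third"
    "1970s": 1.10,  # "perhaps a tenth"
    "1980s": 1.33,  # "a third"
    "1990s": 1.40,  # "around 40 percent"
}

for decade, m in decade_multipliers.items():
    cagr = m ** (1 / 10) - 1  # annualized growth over a ten-year span
    print(f"{decade}: roughly {cagr:.1%} per year")
```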

So India had solid gains in the 80s and 90s - only the new century has seen more progress in material living standards. Along with Indian growth, as you might expect, has come a huge drop in poverty (and here's yet more data on that).

Now China:


The 60s were a disaster for China, with GDP essentially not increasing at all. It's hard to see from the graph, but the 70s were actually great, with China's income nearly doubling. The 80s, however, were even better, with GDP more than doubling. The 90s were similarly spectacular. And of course, rapid progress has continued in the new century.

So for both of the supergiant countries, the 80s and 90s were good years - better than the 60s and 70s. Collapsing these billions of people into two data points, as Weisbrot and Ray do, turns these miracles into a seeming disaster, but the truth is that as go India and China, so goes the developing world. 

Now let's talk about policy.

My prior is that the 80s and 90s look bad for poor countries in Weisbrot and Ray's dataset - and the 70s look great - because of natural resource prices. Metals prices rose steadily in the 60s, surged in the 70s, then collapsed in the 80s and 90s:

Oil prices didn't rise in the 60s, but boy did they soar in the 70s and collapse in the 80s and 90s:


The same story is roughly true of other commodities.

My prior is that the developing world contains a large number of small countries whose main industry is the export of natural resources, and whose economic fortunes rise and fall with commodity prices. Just look at this map of commodity importers and exporters, via The Economist:


Yep. Most of the countries in the developing world are commodity exporters...with the huge, notable exceptions of China and India.

So I strongly suspect that Weisbrot and Ray's growth numbers are mostly just reflections of rising and falling commodity prices. Averaging across countries, rather than people, essentially guarantees that this will be the case.

Do Weisbrot and Ray recognize this serious weakness in their method? The authors mention commodity prices as an explanation for the fast developing-country growth of 2000-2010, but completely fail to bring them up as an explanation for the growth during 1960-1980. In fact, here is what they say:
[T]he period 1960-1980 is a reasonable benchmark. While the 1960s were a period of very good economic growth, the 1970s suffered from two major oil shocks that led to world recessions—first in 1974-1975, and then at the end of the decade. So using this period as a benchmark is not setting the bar too high. 
But the oil shocks, and the general sharp rise in commodity prices, should have helped most developing countries hugely, not hurt them! Weisbrot and Ray totally ignore this key fact about their "benchmark" historical period.

So I think the Weisbrot and Ray paper is seriously flawed. It claims to be able to make big, sweeping inferences about policy by averaging across countries and comparing across decades, but the confounding factor of global commodity prices basically makes a hash of this approach. (And I'm sure that's not the only big confound, either. Rich-country recessions and booms, spillovers from China and India themselves, etc. etc.)

As I see it, here is what has happened with poor-country development over the last 55 years:

1. Start-and-stop growth in China and India in the 60s and 70s, followed by steady, rapid, even accelerating growth following 1980.

2. Seesawing growth in commodity exporters as commodity prices rose and fell over the decades.

Of course, this means that some of the miraculous growth we've seen in the developing world since 2000 is also on shaky ground! Commodity prices have fallen dramatically in the last year or two, and if they stay low, this spells trouble for countries in Africa, Latin America, and the Middle East. Their recent gains were real, but they may not be repeated in the years to come.

But the staggering development of China and India - 37 percent of the human race - seems more like a repeat of the industrialization accomplished by Europe, Japan, Korea, etc. And although China is now slowing somewhat, India's growth has remained steady or possibly even accelerated.

So the miracle is real, and - for now, at least - it is continuing. 

Thursday, 26 May 2016

101ism, overtime pay edition


John Cochrane wrote a blog post criticizing the Obama administration's new rule extending overtime pay to low-paid salaried employees. Cochrane thinks about overtime in the context of an Econ 101 type model of labor supply and demand. I'm not going to defend the overtime rule, but I think Cochrane's analysis is an example of what I've been calling "101ism".

One red flag indicating that 101 models are being abused here is that Cochrane applies the same model in two different ways. First, he models overtime pay as a wage floor:


Then he alternatively models it as a negative labor demand shock:


Well, which is it? A wage floor, or a negative labor demand shock? The former makes wages go up, while the latter makes wages go down, so the answer is clearly important. If using the 101 model gives you two different, contradictory answers, it's a clue that you shouldn't be using the 101 model.

In fact, overtime rules are not quite like either wage floors or negative labor demand shocks. Overtime rules stipulate not a wage level but a ratio: hours a worker puts in beyond a certain weekly threshold must be paid at a multiple of that worker's base wage.

In the Econ 101 model of labor supply and demand, there's no distinction between the extensive and the intensive margin - hiring the same number of employees for fewer hours each is exactly the same as hiring fewer employees for the same number of hours each. But with overtime rules, those two are obviously not the same. For a given base wage, under overtime rules, hiring 100 workers for 40 hours each is cheaper than hiring 40 workers for 100 hours each, even though the total number of labor hours is the same. That breaks the 101 model.
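Here's a minimal sketch of that asymmetry, assuming a standard time-and-a-half rule on hours above 40 and a purely illustrative $20 base wage (both numbers are placeholders, not anything in the new rule):

```python
def weekly_wage_bill(workers, hours_each, base_wage, ot_multiplier=1.5, threshold=40):
    """Total weekly pay when hours above the threshold earn an overtime premium."""
    regular_hours = min(hours_each, threshold)
    overtime_hours = max(hours_each - threshold, 0)
    pay_per_worker = base_wage * (regular_hours + ot_multiplier * overtime_hours)
    return workers * pay_per_worker

base_wage = 20  # dollars per hour, illustrative
print(weekly_wage_bill(100, 40, base_wage))  # 100 workers x 40 hours = 4,000 hours -> $80,000
print(weekly_wage_bill(40, 100, base_wage))  # 40 workers x 100 hours = 4,000 hours -> $104,000
```

Same 4,000 labor hours, very different wage bills - which is exactly the margin the S-D graph can't see.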

With overtime rules, weird things can happen. First of all, base wages can fall while keeping employment the same, even if labor demand is elastic. Why? Because if companies fix the hours that their employees work, they can just set the base wage lower so that overall compensation stays the same, leading to the exact same equilibrium as before.

Overtime rules can also raise the level of employment. Suppose a firm is initially indifferent between A) hiring a very productive worker for 60 hours a week at $50 an hour, and B) hiring a very productive worker for 40 hours a week at $50 an hour, and hiring 2 less productive workers at 40 hours a week each for $25 an hour. Overtime rules immediately change that calculation, making option (B) cheaper. In general equilibrium, in a model with nonzero unemployment (because of reservation wages, or demand shortages, etc.), overtime rules should cut hours for productive workers and draw some less-productive workers into employment. In fact, this is exactly what Goldman Sachs expects to happen.
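To put rough numbers on the last two paragraphs, here's a small sketch (time-and-a-half above 40 hours is assumed throughout; the wages are the ones from the example above). The first part shows the base-wage offset; the second redoes the option A / option B comparison. Since the firm was assumed indifferent between A and B before the rule, the rise in A's wage cost is what tips it toward B.

```python
OT, THRESH = 1.5, 40  # time-and-a-half above 40 hours per week (assumed)

def weekly_pay(hours, base_wage):
    """One worker's weekly pay under the overtime rule."""
    return base_wage * (min(hours, THRESH) + OT * max(hours - THRESH, 0))

# 1) Base-wage offset: keep a 60-hour worker's pay at the old 60 * $50 = $3,000
old_pay = 60 * 50
new_base = old_pay / (THRESH + OT * (60 - THRESH))  # 3000 / 70 ~= $42.86
print(new_base, weekly_pay(60, new_base))           # total pay is unchanged at $3,000

# 2) Option A vs. option B, counting wage costs only
option_a = weekly_pay(60, 50)                        # one productive worker, 60 hours: $3,000 -> $3,500
option_b = weekly_pay(40, 50) + 2 * weekly_pay(40, 25)  # still $4,000, since no one works overtime
print(option_a, option_b)
```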

Now, to understand the true impact of overtime rules, we probably have to include more complicated stuff, like unobservable effort (what if people work longer but less hard?), laws regarding number of work hours, unobservable hours (since the new rule is for salaried employees), sticky wages, etc. But even if we want to think about the very most simple case, we can't use the basic 101 model, since the essence of overtime rules is to force firms to optimize over 2 different margins, and S-D graphs represent optimization over only 1 margin.

Using 101 models where they clearly don't apply is 101ism!

Sunday, 22 May 2016

Theory vs. Evidence: Unemployment Insurance edition


The argument over "theory vs. evidence" is usually oversimplified and silly, since you need both to understand the world. But there is a sense in which I think evidence really does "beat" theory most of the time, at least in econ. Basically, I think empirical work without much theory is usually more credible than the reverse.

To show what I mean, let's take an example. Suppose I was going to try to persuade you that extended unemployment insurance has big negative effects on employment. But suppose I could only show you one academic paper to make my case. Which of these two papers, on its own, would be more convincing?


Paper 1: "Optimal unemployment insurance in an equilibrium business-cycle model", by Kurt Mitman and Stanislav Rabinovich

Abstract:
The optimal cyclical behavior of unemployment insurance is characterized in an equilibrium search model with risk-averse workers. Contrary to the current US policy, the path of optimal unemployment benefits is pro-cyclical – positively correlated with productivity and employment. Furthermore, optimal unemployment benefits react nonmonotonically to a productivity shock: in response to a fall in productivity, they rise on impact but then fall significantly below their pre-recession level during the recovery. As compared to the current US unemployment insurance policy, the optimal state-contingent unemployment benefits smooth cyclical fluctuations in unemployment and deliver substantial welfare gains.

Some excerpts:
The model is a Diamond–Mortensen–Pissarides model with aggregate productivity shocks. Time is discrete and the time horizon is infinite. The economy is populated by a unit measure of workers and a larger continuum of firms...Firms are risk-neutral and maximize profits. Workers and firms have the same discount factor β...Existing matches [i.e., jobs] are exogenously destroyed with a constant job separation probability δ...All worker–firm matches are identical: the only shocks to labor productivity are aggregate shocks...[A]ggregate labor productivity...follows an AR(1) process...The government can insure against aggregate shocks by buying and selling claims contingent on the aggregate state...The government levies a constant lump sum tax τ on firm profits and uses its tax revenues to finance unemployment benefits...The government is allowed to choose both the level of benefits and the rate at which they expire. Benefit expiration is stochastic...
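Just to make two of those ingredients concrete, here is a tiny simulation of the kind of stochastic structure the excerpt describes: an AR(1) process for log aggregate productivity, and benefits that expire each period with a constant probability. The persistence, volatility, and expiration probability below are placeholders I made up, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) log productivity: z_t = rho * z_{t-1} + eps_t (illustrative parameters)
rho, sigma, T = 0.95, 0.01, 200
z = np.zeros(T)
for t in range(1, T):
    z[t] = rho * z[t - 1] + sigma * rng.standard_normal()

# Stochastic benefit expiration: benefits lapse each period with probability e,
# so the expected benefit duration is 1 / e periods.
e = 0.10
durations = rng.geometric(e, size=10_000)
print(durations.mean())  # close to 1 / e = 10 periods
```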


Paper 2: "The Impact of Unemployment Benefit Extensions on Employment: The 2014 Employment Miracle?", by Marcus Hagedorn, Iourii Manovskii, and Kurt Mitman

Abstract:
We measure the aggregate effect of unemployment benefit duration on employment and the labor force. We exploit the variation induced by Congress' failure in December 2013 to reauthorize the unprecedented benefit extensions introduced during the Great Recession. Federal benefit extensions that ranged from 0 to 47 weeks across U.S. states were abruptly cut to zero. To achieve identification we use the fact that this policy change was exogenous to cross-sectional differences across U.S. states and we exploit a policy discontinuity at state borders. Our baseline estimates reveal that a 1% drop in benefit duration leads to a statistically significant increase of employment by 0.019 log points. In levels, 2.1 million individuals secured employment in 2014 due to the benefit cut. More than 1.1 million of these workers would not have participated in the labor market had benefit extensions been reauthorized.

Some excerpts:
[W]e exploit the fact that, at the end of 2013, federal unemployment benefit extensions available to workers ranged from 0 to 47 weeks across U.S. states. As the decision to abruptly eliminate all federal extensions applied to all states, it was exogenous to economic conditions of individual states. In particular, states did not choose to cut benefits based on, e.g. their employment in 2013 or expected employment growth in 2014. This allows us to exploit the vast heterogeneity of the decline in benefit duration across states to identify the labor market implication of unemployment benefit extensions. Note, however, that the benefit durations prior to the cut, and, consequently, the magnitudes of the cut, likely depended on economic conditions in individual states. Thus, the key challenge to measuring the effect of the cut in benefit durations on employment and the labor force is the inference on labor market trends that various locations would have experienced without a cut in benefits. Much of the analysis in the paper is devoted to the modeling and measurement of these trends. 
The primary focus of the formal analysis in the paper is on measuring the counterfactual trends in labor force and employment that border counties would have experienced without a cut in benefits...The first one...allows for permanent (over the estimation window) differences in employment across border counties which could be induced by the differences in other policies (e.g., taxes or regulations) between the states these counties belong to. Moreover, employment in each county is allowed to follow a distinct deterministic time trend. The model also includes aggregate time effects and controls for the effects of unemployment benefit durations in the pre-reform period...The second and third models...reflect the systematic response of underlying economic conditions across counties with different benefit durations to various aggregate shocks and the heterogeneity is induced by differential exposure of counties to these aggregate disturbances. 

These two papers have results that agree with each other. Both conclude that extended unemployment insurance causes unemployment to go up by a lot. But suppose I only showed you one of these papers. Which one, on its own, would be more effective in convincing you that extended UI raises U a lot?

I submit that the second paper would be a lot more convincing. 

Why? Because the first paper is mostly "theory" and the second paper is mostly "evidence". That's not totally the case, of course. The first paper does have some evidence, since it calibrates its parameters using real data. The second paper does have some theory, since it relies on a bunch of assumptions about how state-level employment trends work, as well as having a regression model. But the first paper has a huge number of very restrictive structural assumptions, while the second one has relatively few. That's really the key.

The first paper doesn't test the theory rigorously against the evidence. If it did, it would easily fail all but the most gentle first-pass tests. The assumptions are just too restrictive. Do we really think the government levies a lump-sum tax on business profits? Do we really think unemployment insurance benefits expire randomly? No, these are all obviously counterfactual assumptions. Do those false assumptions severely impact the model's ability to match the relevant features of reality? They probably do, but no one is going to bother to check, because theory papers like this are used to "organize our thinking" instead of to predict reality.

The second paper, on the other hand, doesn't need much of a structural theory in order to be believable. Unemployment insurance discourages people from working, eh? Duh, you're paying people not to work! You don't need a million goofy structural assumptions and a Diamond-Mortensen-Pissarides search model to come up with a convincing individual-behavior-level explanation for the empirical findings in the second paper.

Of course, even the second paper isn't 100% convincing - it doesn't settle the matter. Other mostly-empirical papers find different results. And it'll take a long debate before people agree which methodology is better. 

But I think this pair of papers shows why, very loosely speaking, evidence is often more powerful than theory in economics. Humans are wired to be scientists - we punish model complexity and reward goodness-of-fit. We have little information criteria in our heads.
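(That quip maps onto something concrete. A criterion like AIC scores a model as fit minus a penalty for parameter count, so a slightly worse-fitting but much sparser model can win. The log-likelihoods and parameter counts below are made up purely for illustration.)

```python
def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2*logL, lower is better."""
    return 2 * k - 2 * log_likelihood

# A heavily parameterized structural model that fits a bit better, versus a sparse
# empirical design that fits a bit worse (numbers are purely illustrative).
print(aic(log_likelihood=-100.0, k=30))  # 260.0
print(aic(log_likelihood=-105.0, k=5))   # 220.0 -> the sparser model is preferred
```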


Update: Looks like I'm not the only one that had this thought... :-)

Also, Kurt has a new discussion paper with Hagedorn and Manovskii, criticizing the methodology of some empirical papers that find only a small effect of extended UI. In my opinion, Kurt's team is winning this one - the method of identifying causal effects of UI on unemployment using data revisions seems seriously flawed.

Thursday, 19 May 2016

What's the difference between macro and micro economics?


Are Jews for Jesus actually Jews? If you ask them, they'll surely say yes. But go ask some other Jews, and you're likely to hear the opposite answer. A similar dynamic tends to prevail with microeconomists and macroeconomists. Here is labor economist Dan Hamermesh on the subject:
The economics profession is not in disrepute. Macroeconomics is in disrepute. The micro stuff that people like myself and most of us do has contributed tremendously and continues to contribute. Our thoughts have had enormous influence. It just happens that macroeconomics, firstly, has been done terribly and, secondly, in terms of academic macroeconomics, these guys are absolutely useless, most of them.
Ouch. But not too different from lots of other opinions I've heard. "I went to a macro conference recently," a distinguished game theorist confided a couple of years back, sounding guilty about the fact. "I couldn't believe what these guys were doing." A decision theorist at Michigan once asked me, "What's the oldest model macro guys still use?" I offered the Solow model, but what he was really suggesting is that macro, unlike other fields, is driven by fads and fashions rather than, presumably, hard data. Macro folks, meanwhile, often insist rather acerbically that there's actually no difference between their field and the rest of econ. Ed Prescott famously refuses to even use the word "macro", stubbornly insisting on calling his field "aggregate economics".

So who's right? What's the actual distinction between macro and "micro"? The obvious difference is the subject matter - macro is about business cycles and growth. But are the methods used actually any different? The boundary is obviously going to be fuzzy, and any exact hyperplane of demarcation will necessarily be arbitrary, but here are some of what I see as the relevant differences.


1. General Equilibrium vs. Game Theory and Partial Equilibrium

In labor, public, IO, and micro theory, you see a lot of Nash equilibria. In papers about business cycles, you rarely do - it's almost all competitive equilibrium. Karthik Athreya explains this in his book, Big Ideas in Macroeconomics:
Nearly any specification of interactions between individually negligible market participants leads almost inevitably to Walrasian outcomes...The reader will likely find the non-technical review provided in Mas-Colell (1984) very useful. The author refers to the need for large numbers as the negligibility hypothesis[.]
Macro people generally assume that there are too many firms, consumers, and other agents in the economy for strategic interactions to matter. Makes sense, right? Macro = big. Of course there are some exceptions, like in search-and-matching models of labor markets, where the surplus of a match is usually divided up by Nash bargaining. But overall, Athreya is right.

You also rarely see partial equilibrium in macro papers, at least these days. Robert Solow complained about this back in 2009. You do, however, see it somewhat in other fields, like tax and finance (and probably others).


2. Time-Series vs. Cross-Section and Panel

You see time-series methods in a lot of fields, but only in two areas - macro and finance - is it really the core empirical method. Look in a business cycle paper, and you'll see a lot of time-series moments - the covariance of investment and GDP, etc. Chris Sims, one of the leading empirical macroeconomists, won a Nobel mainly for pioneering the use of SVARs in macro. The original RBC model was compared to data (loosely) by comparing its simulated time-series moments side by side with the empirical moments - that technique still pops up in many macro papers, but not elsewhere. 
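If you haven't seen what "comparing time-series moments" looks like in practice, here's a minimal sketch: HP-filter log output and log investment, then report the usual business-cycle statistics. The two random-walk series below are just stand-ins for real data (with actual US data, investment is several times as volatile as GDP and strongly procyclical).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
T = 200
log_gdp = np.cumsum(0.005 + 0.010 * rng.standard_normal(T))  # stand-in for log real GDP
log_inv = np.cumsum(0.005 + 0.040 * rng.standard_normal(T))  # stand-in for log investment

# Keep the cyclical component of each series (lambda = 1600 is the usual quarterly choice)
gdp_cycle, _ = sm.tsa.filters.hpfilter(log_gdp, lamb=1600)
inv_cycle, _ = sm.tsa.filters.hpfilter(log_inv, lamb=1600)

print(np.std(inv_cycle) / np.std(gdp_cycle))    # relative volatility of investment
print(np.corrcoef(gdp_cycle, inv_cycle)[0, 1])  # comovement with output
```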

Why are time-series methods so central to macro? It's just the nature of the beast. Macro deals with intertemporal responses at the aggregate level, so for a lot of things, you just can't look at cross-sectional variation - everyone is responding to the same big things, all at once. You can't get independent observations in cross section. You can look at cross-country comparisons, but countries' business cycles are often correlated (and good luck with omitted variables, too). 

As an illustration, think about empirical papers looking at the effect of the 2009 ARRA stimulus. Nakamura and Steinsson - the best in the business - looked at this question by comparing different states, and seeing how the amount of money a state got from the stimulus affected its economy. They find a large effect - states that got more stimulus money did better, and the causation probably runs in the right direction. Nakamura and Steinsson conclude that the fiscal multiplier is relatively large - about 1.5. But as John Cochrane pointed out, this result might have happened because stimulus represents a redistribution of real resources between states - states that get more money today will not have to pay more taxes tomorrow, to cover the resulting debt (assuming the govt pays back the debt). So Nakamura and Steinsson's conclusion of a large fiscal multiplier is still dependent on a general equilibrium model of intertemporal optimization, which itself can only be validated with...time-series data.

In many "micro" fields, in contrast, you can probably control for aggregate effects, as when people studying the impact of a surge of immigrants on local labor markets use methods like synthetic controls to control for business cycle confounds. Micro stuff gets affected by macro stuff, but a lot of times you can plausibly control for it.


3. Few Natural Experiments, No RCTs

In many "micro" fields, you now see a lot of natural experiments (also called quasi-experiments). This is where you exploit a plausibly exogenous event, like Fidel Castro suddenly deciding to send a ton of refugees to Miami, to identify causality. There are few events that A) have big enough effects to affect business cycles or growth, and B) are plausibly unrelated to any of the other big events going on in the world at the time. That doesn't mean there are none - a big oil discovery, or an earthquake, probably does qualify. But they're very rare. 

Chris Sims basically made this point in a comment on the "Credibility Revolution" being trumpeted by Angrist and Pischke. The archetypical example of a "natural experiment" used to identify the impact of monetary policy shocks - cited by Angrist and Pischke - is Romer & Romer (1989), which looks at changes in macro variables after Fed announcements. But Sims argues, persuasively, that these "Romer dates" might not be exogenous to other stuff going on in the economy at the time. Hence, using them to identify monetary policy shocks requires a lot of additional assumptions, and thus they are not true natural experiments (though that doesn't mean they're useless!). 

Also, in many fields of econ, you now see randomized controlled trials. These are especially popular in development econ and in education policy econ. In macro, doing an RCT is not just prohibitively difficult, but ethically dubious as well.


So there we have three big - but not hard-and-fast - differences between macro and micro methods. Note that they all have to do with macro being "big" in some way - either lots of actors (#1), shocks that affect lots of people (#2), or lots of confounds (#3). As I see it, these differences explain why definitive answers are less common in macro than elsewhere - and why macro is therefore more naturally vulnerable to fads, groupthink, politicization, and the disproportionate influence of people with forceful, aggressive personalities.

Of course, the boundary is blurry, and it might be getting blurrier. I've been hearing about more and more people working on "macro-focused micro," i.e. trying to understand the sources of shocks and frictions instead of simply modeling the response of the economy to those shocks and frictions. The first time I heard that exact phrase was in connection with this paper by Decker et al. on business dynamism. Another example might be the people who try to look at price changes to tell how much sticky prices matter. Another might be studies of differences in labor market outcomes between different types of workers during recessions. I'd say the study of bubbles in finance also qualifies. This kind of thing isn't new, and it will never totally replace the need for "big" macro methods, but hopefully more people will work on this sort of thing now (and hopefully they'll continue to take market share from "yet another DSGE business cycle model" type papers at macro conferences). As "macro-focused micro" becomes more common, things like game theory, partial equilibrium, cross-sectional analysis, natural experiments, and even RCTs may become more common tools in the quest to understand business cycles and growth. 

Monday, 16 May 2016

Images of Japanese People


As reported in the Softbank Group's online newspaper ITmedia (2012), the Japanese advertising and public relations company Dentsu asked 3,772 non-Japanese people in 16 regions what they thought of the Japanese.


The results given above show that the words associated with the Japanese are predominantly positive, despite the fact that, it seems to me, the Japanese pay little attention to public relations. Positive associations such as "creative" were strongest in South East Asian countries, where, I presume, there is especially high consumption of Japanese-produced media such as popular music and manga. Despite these positive images, I rather fear that the Japanese are going to throw all this love away.


I would like to rate "meek" as positive, but I have kept to the positive/negative ratings in the original article. Several of the items, such as "kodawari no aru," which I have translated as "discriminating," were rather difficult to translate. "Solidarity" is short for "have a sense of solidarity," which may be coextensive with words that have a negative connotation in the West, such as "conformist" or "a herd." I wonder how the items were selected, and fear that perhaps they were selected by the Japanese.


The Japanese are very interested in what people from other countries think about them since, it seems to me, and as argued by Mori (1995), the Japanese lack a linguistic Other and so find it difficult to narrate themselves objectively. The Japanese have instead an autoscopic mirror-mind, so they know what they look like, and they know that Japan is beautiful.

The original Dentsu News Release can be downloaded from the Dentsu website in pdf form here (in Japanese). The translations are mine.


If you would like this post taken down, please contact me in the comments or via the email link at nihonbunka.com.

Mori, Arimasa (森有正). (1999). 森有正エッセー集成〈5〉. 筑摩書房 (Chikuma Shobō).

How bigoted are Trump supporters?


Jason McDaniel and Sean McElwee have been doing great work analyzing the political movement behind Donald Trump. For example, they've shown pretty conclusively that Trump support is driven at least in part by what they call "racial resentment" - the notion that the government unfairly helps nonwhites.

But "racial resentment" is not the same thing as outright bigotry. Believing that the government unfairly helps black people doesn't necessarily mean you dislike black people. So McDaniel and McElwee did another survey asking about people's attitudes toward various groups. Here's a graph summarizing their basic findings:



So, from this graph, I gather:

1. Trump supporters, on average, say they like Blacks, Hispanics, Scientists, Whites, and Police. On average, they say they dislike Muslims, Transgenders, Gays, and Feminists.

2. Trump supporters, on average, say they like Whites a bit more than average, Muslims a lot less, and Transgenders a bit less. They also might say they like Hispanics, Gays, and Feminists somewhat less, though the statistical significance is borderline.

Now here's how Sean McElwee interpreted this same graph:



This interpretation doesn't appear to be supported by Sean's own data. In fact, his data appear to support the opposite of what he claims.

Now, the main caveat to all this is that surveys like this almost certainly don't do a good job of measuring people's real attitudes toward other groups. When someone calls you on the phone (or hands you a piece of paper) and asks you if you like Hispanics, whether you say "yes" or "no" is probably much more dependent on what you think you ought to say than what you really feel. So this survey is probably mainly just measuring differences in how Trump supporters feel they ought to answer surveys.

But even if this survey really did measure people's true attitudes, it still wouldn't tell us what Sean claims it does. Trump supporters, overall, say they like Blacks. And the degree to which they say they like Blacks is not statistically significantly different from the national average. Only when it comes to Muslims and Transgender folks do Trump supporters appear clearly more bigoted than the national average.

But going back to the main problem with surveys like this, it might be that Trump supporters are simply more willing to express their dislike of Muslims and Transgender people in a survey. This may just reflect their general lack of education. More educated people are plugged into the mass media culture, which generally discourages overt expressions of bigotry toward any group. Less educated folks are less likely to have gotten the message that you're not supposed to say bad things about Muslims and Trans people.

So in conclusion, this survey doesn't seem to support the narrative that Trump supporters are driven by bigotry. That narrative might still be true, of course - there are certainly some very loud and visible bigots within Trump's support base (and within his organization). But after looking at this data, my priors, which were pretty ambivalent about that narrative to begin with, haven't really been moved at all.

Sunday, 15 May 2016

Russ Roberts on politicization, humility, and evidence


The Wall Street Journal has a very interesting interview with Russ Roberts about economics and politicization. Lots of good stuff in there, and one thing I disagree with. Let's go through it piece by piece!

1. Russ complains about politicization of macroeconomic projections:
He cites the Congressional Budget Office reports calculating the effect of the stimulus package...The CBO gnomes simply went back to their earlier stimulus prediction and plugged the latest figures into the model. “They had of course forecast the number of jobs that the stimulus would create based on the amount of spending,” Mr. Roberts says. “They just redid the estimate. They just redid the forecast."
I wouldn't be quite so hard on the CBO. It's their job to forecast the effect of policy. They have to choose a model in order to do that. And it's their job to evaluate the impact of policy. They have to choose a model to do that. And of course they're going to choose the same model, even if that makes the evaluation job just a repeat of the forecasting job. I do wish, however, that the CBO would try a variety of alternative models, and show how the estimates differ across them. That would be better than what they currently do.

I think a better example of politicization of policy projections was given not by Russ, but by Kyle Peterson, who wrote up the interview for the WSJ. Peterson cited Gerald Friedman's projection of the impact of Bernie Sanders' spending plans. Friedman also could have incorporated model uncertainty, and explored the sensitivity of his projections to his key modeling assumptions. And unlike the CBO, he didn't have a deadline, and no one made him come up with a single point estimate to feed to the media. And some of the people who defended Friedman's paper from criticism definitely turned it into a political issue.

So I think Russ is on point here. There's lots of politicization of policy projections.


2. Peterson (the interviewer) cites a recent survey by Haidt and Randazzo, showing politicization of economists' policy views. This is really interesting. Similar surveys I've seen in the past haven't shown a lot of politicization. A more rigorous analysis found a statistically significant amount of politicization, though the size of the effect didn't look that large to me. So I'd like to see the numbers Haidt and Randazzo get. Anyway, it's an interesting ongoing debate.


3. Russ highlights the continuing intellectual stalemate in macroeconomics:
The old saw in science is that progress comes one funeral at a time, as disciples of old theories die off. Economics doesn’t work that way. “There’s still Keynesians. There’s still monetarists. There’s still Austrians. Still arguing about it. And the worst part to me is that everybody looks at the other side and goes ‘What a moron!’ ” Mr. Roberts says. “That’s not how you debate science.”
Russ is right. But it's very important to draw a distinction between macroeconomics and other fields here. The main difference isn't in the methods used (although there are some differences there too), it's in the type of data used to validate the models. Unlike most econ fields, macro relies mostly on time-series and cross-country data, both of which are notoriously unreliable. And it's very hard, if not impossible, to find natural experiments in macro. That's why none of the main "schools" of macro thought have been killed off yet. In other areas of econ, there's much more data-driven consensus, especially recently. 

I think it's important to always make this distinction in the media. Macro is econ's glamour division, unfortunately, so it's important to remind people that the bulk of econ is in a very different place.


4. Russ makes a great point about econ and the media:
If economists can’t even agree about the past, why are they so eager to predict the future? “All the incentives push us toward overconfidence and to ignore humility—to ignore the buts and the what-ifs and the caveats,” Mr. Roberts says. “You want to be on the front page of The Wall Street Journal? Of course you do. So you make a bold claim.” Being a skeptic gets you on page A9.
Absolutely right. The media usually hypes bold claims. It also likes to report arguments, even where none should exist. This is known as "opinions on the shape of the Earth differ" journalism. This happens in fields like physics - people love to write articles with headlines like "Do we need to rewrite general relativity?". But in physics that's harmless and fun, because the people who make GPS systems are going to keep on using general relativity. In econ, it might not be so harmless, because policy is probably more influenced by public opinion, and public opinion can be swayed by the news.


5. Russ makes another good point about specification search:
Modern computers spit out statistical regressions so fast that researchers can fit some conclusion around whatever figures they happen to have. “When you run lots of regressions instead of just doing one, the assumptions of classical statistics don’t hold anymore,” Mr. Roberts says. “If there’s a 1 in 20 chance you’ll find something by pure randomness, and you run 20 regressions, you can find one—and you’ll convince yourself that that’s the one that’s true.”...“You don’t know how many times I did statistical analysis desperately trying to find an effect,” Mr. Roberts says. “Because if I didn’t find an effect I tossed the paper in the garbage.”
Yep. This is a big problem, and probably a lot bigger than in the past, thanks to technology. Most of science, not just econ, is grappling with this problem. It's not just social science, either - biology is having similar issues.
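Russ's "1 in 20" arithmetic is easy to check with a little simulation: regress pure noise on 20 unrelated regressors, one at a time, and see how often at least one of them clears the 5% bar (the sample size and number of trials below are arbitrary).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_regressions, n_trials = 100, 20, 1000
runs_with_a_hit = 0

for _ in range(n_trials):
    y = rng.standard_normal(n)               # the outcome is pure noise
    hits = 0
    for _ in range(n_regressions):
        x = rng.standard_normal(n)           # a regressor unrelated to y
        result = stats.linregress(x, y)
        if result.pvalue < 0.05:
            hits += 1
    if hits >= 1:
        runs_with_a_hit += 1

# With 20 independent tests at the 5% level, at least one "significant" result
# turns up in roughly 1 - 0.95**20, or about 64%, of runs.
print(runs_with_a_hit / n_trials)
```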


6. Russ calls for more humility on the part of economists:
Roberts is saying that economists ought to be humble about what they know—and forthright about what they don’t...When the White House calls to ask how many jobs its agenda will create, what should the humble economist say? “One answer,” Mr. Roberts suggests, “is to say, ‘Well we can’t answer those questions. But here are some things we think could happen, and here’s our best guess of what the likelihood is.” That wouldn’t lend itself to partisan point-scoring. The advantage is it might be honest.
I agree completely. People are really good at understanding point estimates, but bad at understanding confidence intervals, and really bad at understanding confidence intervals that arise from model uncertainty. "Humility" is just a way of saying that economists should express more uncertainty in public pronouncements, even if their political ideologies push them toward presenting an attitude of confident certainty. A "one-handed economist" is exactly what we have too much of these days. Dang it, Harry Truman!


7. Russ does say one thing I disagree with pretty strongly:
Economists also look for natural experiments—instances when some variable is changed by an external event. A famous example is the 1990 study concluding that the influx of Cubans from the Mariel boatlift didn’t hurt prospects for Miami’s native workers. Yet researchers still must make subjective choices, such as which cities to use as a control group. 
Harvard’s George Borjas re-examined the Mariel data last year and insisted that the original findings were wrong. Then Giovanni Peri and Vasil Yasenov of the University of California, Davis retorted that Mr. Borjas’s rebuttal was flawed. The war of attrition continues. To Mr. Roberts, this indicates something deeper than detached analysis at work. “There’s no way George Borjas or Peri are going to do a study and find the opposite of what they found over the last 10 years,” he says. “It’s just not going to happen. Doesn’t happen. That’s not a knock on them.”
It might be fun and eyeball-grabbing to report that "opinions on the shape of the Earth differ," but that doesn't mean it's a good thing. Yes, it's always possible to find That One Guy who loudly and consistently disagrees with the empirical consensus. That doesn't mean there's no consensus. In the case of immigration, That One Guy is Borjas, but just because he's outspoken and consistent doesn't mean that we need to give his opinion or his papers anywhere close to the same weight we give to the many researchers and studies that find the opposite.


Anyway, it's a great interview write-up, and I'd like to see the full transcript. Overall, I'm in agreement with Russ, but I'll continue to try to convince him of the power of empirical research!

Friday, 13 May 2016

Review: Ben Bernanke's "The Courage to Act"


I wrote a review of Ben Bernanke's book, The Courage to Act, for the Council on Foreign Relations. Here's an excerpt:
Basically, Bernanke wants the world to understand why he did what he did, and in order to understand we have to know everything.  
And the book succeeds. Those who are willing to wade through 600 pages of history, and who know something about the economic theories and the political actors involved, will come away from this book thinking that Ben Bernanke is a good guy who did a good job in a tight spot. 
But along the way, the book reveals a lot more than that. The most interesting lessons of The Courage to Act are not about Bernanke himself, but about the system in which he operated. The key revelation is that the way that the U.S. deals with macroeconomic challenges, and with monetary policy, is fundamentally flawed. In both academia and in politics, old ideas and prejudices are firmly entrenched, and not even the disasters of crisis and depression were enough to dislodge them.
The main points I make in the review are:

1. Bernanke was the right person in the right place at the right time. He was almost providentially well-suited to the task of steering America through both the financial crisis and the Great Recession that followed. A lot of that had to do with his unwillingness to downplay the significance of the Great Depression (as Robert Lucas and others did), and with his unwillingness to ignore the financial sector (as other New Keynesians did).

2. However, the institutional, cultural, and intellectual barriers against easy monetary policy that were created in the 1980s, as a reaction to the inflation of the 70s, held firm, preventing Bernanke and the Fed from taking more dramatic steps to boost employment, and preventing a thorough rethink of conventional macroeconomic wisdom.

3. Fiscal Keynesianism, however, has also survived, despite generations of efforts by monetarists, New Classicals, Austrians, and others to kill it off. Deep down, Americans still believe that stimulus works.

4. The political radicalism of the Republican party was a big impediment to Bernanke's efforts to revive the economy. Anti-Fed populism, from both the right (Ron Paul) and the left (Bernie Sanders) also interfered with the goal of putting Americans back to work.


You can read the whole thing here!

Michael Strain and James Kwak debate Econ 101


Very interesting debate over Econ 101, between Michael Strain and James Kwak. Strain attempts to defend Econ 101 from the likes of Paul Krugman and Yours Truly. He especially criticizes my call for more empirics in 101:
Critics suggest that introductory textbooks should emphasize empirical studies over these models. There are many problems with this suggestion, not the least of which that economists’ empirical studies don’t agree on many important policy issues. For example, it is ridiculous to suggest that economists have reached consensus that raising the minimum wage won’t reduce employment. Some studies find non-trivial employment losses; others don’t. The debates often hinge on one’s preferred statistical methods. And deciding which methods you prefer is way beyond the scope of an introductory course. 
As you might predict, I have some problems with this. First of all, I don't like the idea that if the empirics aren't conclusively settled, we should just teach theories and forget about the facts. I agree with Kwak, who writes:
I don’t understand this argument. The minimum wage may or may not increase unemployment, depending on a host of other factors. The fact that economists don’t agree reflects the messiness of the world. That’s a feature, not a bug.
Totally! This clearly seems like the intellectually honest thing to do. It seems bad to give kids too strong of a false sense of certainty about the way the world works. When a debate is unresolved, I think you shouldn't simply ignore the evidence in favor of a theory that supports one side of the debate.

As a side note, I think the evidence on short-term employment effects of minimum wage is more conclusive than Strain believes, though also more nuanced than is often reported in the media and in casual discussions.

Strain also writes this, which I disagree with even more:
Even more problematic, some of the empirical research most celebrated by critics of economics 101 contradicts itself about the basic structure of the labor market. The famous “Mariel boatlift paper” finds that a large increase in immigrant workers doesn’t lower the wages of native workers. The famous “New Jersey-Pennsylvania minimum wage paper” finds that an increase in the minimum wage doesn’t reduce employment. If labor supply increases and wages stay constant — the Mariel paper — then the labor demand curve must be flat. But if the minimum wage increases and employment stays constant — New Jersey-Pennsylvania — then the labor demand curve must be vertical. Reconciling these studies is, again, way beyond the scope of an intro course. (emphasis mine)
Strain is using the simplest, most basic Econ 101 theory - a single S-D graph applying to all labor markets - to try to understand multiple results at once. He finds that this super-simple theory can't simultaneously explain two different empirical stylized facts, and concludes that we should respond by not teaching intro students about one or both of the empirical stylized facts.

But what if super-simple theory is just not powerful enough to describe both these situations at once? What if there isn't just one labor demand curve that applies to all labor markets at once? Maybe in the case of minimum wage, monopsony models are better than good old supply-and-demand. Maybe in the case of immigration, general equilibrium effects are important. Maybe search frictions are a big deal. There are lots of other possibilities too.
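To see how the "one labor demand curve" assumption can fail, here's a toy monopsony example; every functional form and number in it is mine, purely for illustration, not an estimate of anything. One employer faces an upward-sloping labor supply curve, so a wage floor set between the monopsony wage and the competitive wage raises employment instead of lowering it.

```python
# Toy monopsony labor market (illustrative functional forms, not estimates):
#   inverse labor supply:      w(L) = L         (hiring L workers requires a wage of L)
#   marginal revenue product:  MRP(L) = 100 - L
# Without a floor, the monopsonist sets marginal labor cost (2L) equal to MRP:
#   2L = 100 - L  ->  about 33.3 workers at a wage of about 33.3.
# The competitive outcome (w = MRP) would be 50 workers at a wage of 50.

def employment_with_wage_floor(w_min):
    """Employment when a binding wage floor is imposed on the monopsonist."""
    supply_at_floor = w_min        # workers willing to work at the floor wage
    demand_at_floor = 100 - w_min  # hiring the firm wants at that wage (MRP = w_min)
    return min(supply_at_floor, demand_at_floor)

for w_min in [35, 40, 50, 60]:
    print(w_min, employment_with_wage_floor(w_min))
# Floors between ~33.3 and 50 raise employment above the monopsony level of ~33.3;
# floors above 50 start to reduce it again.
```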

Strain's implicit assumption - that there's just one labor demand curve - seems like an example of what I call "101ism". A good 101 class, in my opinion, should teach monopoly models, and at least give a brief mention of general equilibrium and search frictions. And even more importantly, a good 101 class should stress that models are situational tools, not Theories of Everything. Assuming that there's one single labor demand curve that applies to all labor markets is a way of taking a simple model and trying to make it function as a Theory of Everything; no one should be surprised when that attempt fails. And our response to that failure shouldn't be to just not teach the empirics. It should be to rethink the way we use the theory.

Anyway, I agree with what Kwak says here:
People like Krugman and Smith (and me) aren’t saying that Economics 101 is useless; we all think that it teaches some incredibly useful analytical tools. The problem is that many people believe (or act as if they believe) that those models are a complete description of reality from which you can draw policy conclusions [without looking at evidence].
Exactly.

Tuesday, 10 May 2016

Mirrors Make Westerners Want to Kill Themselves


According to the central theory of this blog, the "comforter" of the self in Japan and the West observes the self in different media. In either case the comforter is based upon the mother. Before we become aware of our mirror image or our names, we identify our subjectivity with that of our mother, and later learn to see (Japanese) or hear (Westerners) ourselves as an object of her subjectivity, or in her frame of reference. In Japan's more matriarchal society, the mother looks and the self is seen. In the West the mother is more passive: she listens. Self-expression in the dominant medium is enjoyable and enhanced, while self-expression in the non-dominant medium is in each case fraught. The Japanese enjoy taking pictures of themselves, posing, and "making (visual, corporeal) things." Westerners love expressing themselves, shooting their mouths off, and (as I am doing now) making up theories.



Japanese have a problem with linguistic self-expression, and Westerners have a problem with mirrors. Baumeister (1990) theorised that mirrors would increase the desire to escape from the self in the most drastic way: suicide. Selimbegović & Chatard (2013) demonstrated this greater tendency towards suicide by testing how long it took subjects to distinguish suicide-related words (suicide, rope, wrist, hang, and attempt) from nonsense words, in front of a mirror and in a control condition. It was found that subjects became quicker at recognising these words in front of the mirror, and that this effect increased when the subjects were encouraged to think about how far they were from their own ideals.

In Japan, mirrors are used on train platforms to prevent suicide. In the West, telephones are used in the same way, but, as Butler, Lee, and Gross (2009) demonstrate, linguistic expression tends to make Japanese people MORE stressed.



The recent use of telephones at Japanese suicide spots, and Abe's initiative to make all 100 million Japanese verbalise their objectives and become "active" like Westerners (!), are, from this perspective, very bad ideas.

www.flickr.com/photos/nihonbunka/11959516956

Bibliography
Baumeister, R. F. (1990). Suicide as escape from self. Psychological Review, 97(1), 90.
Butler, E. A., Lee, T. L., & Gross, J. J. (2009). Does expressing your emotions raise or lower your blood pressure? The answer depends on cultural context. Journal of Cross-Cultural Psychology, 40(3), 510-517.
Selimbegović, L., & Chatard, A. (2013). The mirror effect: Self-awareness alone increases suicide thought accessibility. Consciousness and Cognition, 22(3), 756-764.

Sunday, 8 May 2016

Regulation and growth


As long as we're on the topic of regulation and growth, check out this post I recently wrote for Bloomberg View:
I’m very sympathetic to the idea that regulation holds back growth. It’s easy to look around and find examples of regulations that protect incumbent businesses at the expense of the consumer -- for example, the laws that forbid car companies from selling directly to consumers, creating a vast industry of middlemen. You can also find clear examples of careless bureaucratic overreach and inertia, like the total ban on sonic booms over the U.S. and its territorial water (as opposed to noise limits). These inefficient constraints on perfectly healthy economic activity must reduce the size of our economy by some amount, acting like sand in the gears of productive activity. 
The question is how much...If regulation is less harmful than the free-marketers would have us believe, we risk concentrating our attention and effort on a red herring... 
[F]ocusing too much on deregulation might actually hurt our economy. Many government rules, such as prohibitions on pollution, tainted meat, false advertising or abusive labor practices, are things that the public would probably like to keep in place. And reckless deregulation, like the loosening of restrictions on the financial industry in the period before the 2008 credit crisis, can hurt economic growth in ways not captured by most economic models. Although burdensome regulation is certainly a worry, a sensible approach would be to proceed cautiously, focusing on the most obviously useless and harmful regulations first (this is the approach championed by my Bloomberg View colleague Cass Sunstein). We don’t necessarily want to use a flamethrower just to cut a bit of red tape.

Also, on Twitter I wrote a "tweetstorm" (series of threaded tweets) about the regulation debate. Here are the tweets:

The regulation issue is really a multifaceted, complex, and important set of distinct issues. It's an important area of policy debate, but it can't be boiled down to one simple graph - and shouldn't be boiled down to one simple slogan.

Friday, 6 May 2016

Fun Ramen Restaurant


Tanoshii Ramen San (楽しいラーメンやさん) is a Japanese confectionery kit which allows children to make soda- and cola-flavoured Chinese dumplings (gyoza) and extruded gelatinous "ramen noodles" in a soda "soy sauce" soup. There is even a candy "naruto fish paste twirl." This is one of a series that includes cake and candy sushi-making sets. It is educational, especially in its 'hidden curriculum' that teaches children that food preparation is fun, meritorious, and to be praised and respected. This, and all the other 'activity foods' for adults, suggests to me, even if my wife did not make it non-verbally clear, that mother rules, and implicitly that men are the "second sex". http://flic.kr/p/GMmm4E

Brad DeLong pulpifies a Cochrane graph


When Bob Lucas, Tom Sargent, and Ed Prescott remade macroeconomics in the 70s and 80s, what they were rebelling against was reduced-form macro. So you think you have a "law" about how GDP affects consumption? You had better be able to justify that with an optimization problem, said Lucas et al. Otherwise, your "law" is liable to break down the minute you try to take advantage of it with government policy.

Lots of people are unhappy with what Lucas et al. invented to replace the "old macro". But few would argue that the old approach didn't need replacing. Identifying correlations in aggregate data really doesn't tell you a lot about what you can accomplish with policy.

Because of this, I've always been highly skeptical of John Cochrane's claim that if we simply launched a massive deregulatory effort, it would make us many times richer than we are today. Cochrane typically shows a graph of the World Bank's "ease of doing business" rankings vs. GDP, and claims that this graph essentially represents a menu of policy options - that if we boost our World Bank ranking slightly past the (totally hypothetical) "frontier", we can make our country five times as rich as it currently is. This always seemed like the exact same fallacy that Lucas et al. pointed out with respect to the Phillips Curve. 

You can't just do a simple curve-fitting exercise and use it to make vast, sweeping changes to national policy. 

Brad DeLong, however, has done me one better. In a short yet magisterial blog post, DeLong shows that even if Cochrane is right that countries can move freely around the World Bank ranking graph, the policy conclusions are incredibly sensitive to the choice of functional form. 

Here is Cochrane's graph, unpacked from its log form so you can see how speculative it really is:


DeLong notes that this looks more than a little bit crazy, and decides to do his own curve-fitting exercise (which for some reason he buries at the bottom of his post). Instead of a linear model for log GDP, he fits a quadratic polynomial, a cubic polynomial, and a quartic polynomial. Here's what he gets:


Cochrane's conclusion disappears entirely! As soon as you add even a little curvature to the function, the data tell us that the U.S. is actually at or very near the optimal policy frontier. DeLong also posts his R code in case you want to play with it yourself. This is a dramatic pulpification of a type rarely seen these days. (And Greg Mankiw gets caught in the blast wave.)

DeLong shows that even if Cochrane is right that we can use his curve like macroeconomists thought we could use the Phillips Curve back in 1970, he's almost certainly using the wrong curve. You'd think Cochrane would care about this possibility enough to at least play around with slightly different functional forms before declaring in the Wall Street Journal that we can boost our per capita income to $400,000 per person by launching an all-out attack on the regulatory state. I mean, how much effort does it take? Not much.
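If you want to see how fragile this kind of extrapolation is, a few lines of curve fitting will do it. The data below are synthetic (a made-up 0-100 "business climate" score and noisy log income with mild diminishing returns), so none of the numbers mean anything; the point is only that the implied "frontier" income depends heavily on which functional form you happened to pick.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in data - NOT the World Bank numbers.
score = rng.uniform(10, 90, size=120)
log_gdp = 7 + 0.06 * score - 0.0003 * score**2 + 0.3 * rng.standard_normal(score.size)

linear = np.polyfit(score, log_gdp, deg=1)
cubic = np.polyfit(score, log_gdp, deg=3)

# Extrapolate both fits to a hypothetical "frontier" score of 100, past the sample edge.
frontier = 100
for name, coeffs in [("linear", linear), ("cubic", cubic)]:
    implied_income = float(np.exp(np.polyval(coeffs, frontier)))
    print(f"{name} fit implies a frontier income of about ${implied_income:,.0f}")
```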

And this is an important issue. An all-out attack on the regulatory state would inevitably destroy many regulations that have a net social benefit. The cost would be high. Economists shouldn't bend over backwards to try to show that the benefits would be even higher. That's just not good policy advice.

(Also, on a semi-related note, Cochrane's WSJ op-ed (paywalled) uses China's nominal growth as a measure of the rise in China's standard of living. That's just not right - he should have used real growth. If that's just an oversight, it should be corrected.)


Updates

Cochrane responds to DeLong. His basic responses are 1) drawing plots with log GDP is perfectly fine, and 2) communist regimes like North Korea prove that the relationship between regulation and growth is causal.

Point 1 is right. Log GDP on the y-axis might mislead 3 or 4 people out there, but those are people who have probably been so very misled by so very many things that this isn't going to make a difference.

Point 2 is not really right. Sure, if you go around shooting businesspeople with submachine guns, you can tank GDP by making it really hard to do business. No one doubts that. But that's a far, far, far cry from being able to boost GDP to $400k per person by slashing regulation and taxes. Cochrane's problem isn't just causality, it's out-of-sample extrapolation. DeLong shows that if you fit a cubic or quartic polynomial to the World Bank data, you find that too much "ease of doing business" is actually bad for your economy, and doing what Cochrane suggests would reduce our GDP substantially. Is that really true? Who knows. Really, what this exercise shows is that curve-fitting-and-extrapolation exercises like the one Cochrane does in the WSJ are silly sauce.

Anyway, if you're interested to read more stuff I wrote about regulation and growth, see this post.