Friday, December 20, 2013

Minimum morality: Walmart wages war


Lately it has become popular to talk about how Walmart pays its employees so little that they have to organize food drives for themselves and depend on public assistance, burdening taxpayers. Journalists have railed against Walmart's moral depravity for providing such low compensation. Set aside questions about how to define "low" wages (e.g., why do so many people apply for these low-wage jobs? How do they stack up in international context? etc.). Even if we all agree to call Walmart's wages "low," the system of morality that is motivating these attacks is puzzling. If paying low wages is a sin, it's a sin of omission--and we're all guilty.

Walmart employs 1.2 million people in the US, more than any other private firm. Why is Walmart any more obligated to pay high wages to 1.2 million people than you or I? Does Walmart's decision to provide jobs for these people automatically obligate them to provide pay above a certain level?

What makes this complicated is that you, these journalists, and I employ zero people (or close enough by comparison), which means we effectively pay 1.2 million people a wage of $0/hour.

Walmart critics embrace two moral standards: in the first, morality requires payment of high wages to 1.2 million people. In the second, morality can be achieved without employing anyone at all--that is, by paying zero wages. Most of us have chosen to live by the second standard, and from our lofty moral position we can criticize Walmart for not meeting the first standard. How convenient!

In other words, according to the system of morality embraced by the Walmart critics, Walmart could "rise" to our level of morality by either (a) raising pay to some arbitrary level preferred by the critics or (b) reducing the wages of their 1.2M employees to $0/hour, thus choosing the standard of morality that the rest of us prefer to apply to ourselves. Of course, option (b) means that those employees would leave Walmart--but that's the point. Then Walmart would be equivalent to us. Somehow I don't think the workers would be any better off, and it seems likely that even more costs would be passed to taxpayers as the ranks of the unemployed swell, but at least then the Walmart shareholders would no longer be the target of the critics and could instead join us in sanctimoniously raising awareness of some other huge employer's moral depravity.

It's a funny sort of logic that says that Walmart "transfers" poverty assistance costs to taxpayers by paying workers less than some journalist thinks they should be paid. On an employment-weighted basis, Walmart is less guilty of paying low wages than is anyone else on the planet. If taxpayers have an obligation to provide a safety net (and I think we do), then the system works exactly as it should: Walmart pays people an amount roughly commensurate with their marginal contribution to the firm's revenue, keeping them off of unemployment and enabling the miracle that is the Walmart business model to deliver goods to the poor and middle class at prices lower than they would otherwise be. Meanwhile, the government picks up the residual of the workers' needs. It's just as accurate to say that Walmart is picking up part of the tab for the safety net (by providing jobs for the otherwise unemployed, not to mention low prices) as it is to say that taxpayers are picking up part of the tab for Walmart wage policies. The two statements describe the same reality.

This is the social safety net. This is the setup that progressives have demanded, yet they complain when it is used. Focus on making a more efficient and effective safety net, and let Walmart make its own factor payment decisions.

Thursday, December 19, 2013

Freshwater, saltwater, and the econ blogosphere

Update: Noah Smith was very kind and responded on Twitter; most of the conversation is here.

In this post I raised questions about the Kimball & Smith article on freshwater vs. saltwater macroeconomics. We had a follow-up conversation on Twitter (Kimball is always very gracious, and Smith indulges my complaining from time to time). People seem to still be talking about this, so it's worth another shot.

Here's the problem: This distinction does not exist in any meaningful way, even if it did 20 years ago. Eli Dourado suggests that freshwater critics probably can't pass an ideological Turing test, but the problem is worse than that. I still haven't seen anyone define the distinction in a way that is nontrivial. These are terms without definitions; if we can talk these people into providing definitions, chances are the whole argument will fall to pieces.

I'm going to explain what I mean, but first: This has relevance for larger issues. The caricature of the economics discipline that we read about in blogs and newspapers is neither accurate nor productive. Perhaps the reason this freshwater vs. saltwater debate has caught on in the blogosphere is that anyone can participate in it without having to engage directly with the literature. As long as we stick to politically loaded caricatures and straw men, anyone can have an opinion. Figure out which team you're on--based on political preferences--then start dishing out straw men. But the distinction falls apart once we talk about specific models.

In my Twitter conversation with Miles Kimball (read most of it here), he said two important things. First, their article was focusing on the "purist" end of the freshwater spectrum. Second, most macro work is done in "brackish" waters. This is one of the issues I was trying to raise in my other post.

The distinction only makes sense if you can clearly articulate modeling approaches that characterize each side of the debate and, therefore, come up with a list of papers or researchers that fall into each category most of the time. This can't really be done. Nobody is writing papers with bare-bones RBC models. People are incorporating all sorts of frictions into those models, and the RBC types are producing remarkable economic dynamics on just a few frictions. How do we decide whether these frictions are "saltwater" or "freshwater"? How do we categorize agency mechanisms, money/labor search, or factor adjustment costs? We can't very well just say that any kind of friction makes a model saltwater, or else there would be no freshwater papers and the debate would be irrelevant. But once we admit that everyone is working in the grey area, the debate becomes, well, irrelevant. This is not a rhetorical or conceptual dichotomy with any continuing usefulness.

Instead, we need to focus on how best to incorporate realistic frictions. On that issue, the Minnesota types have some important things to say. There is a difference between frictions arising from clearly defined behavioral or institutional constraints and frictions arising only from a desire to force the model to replicate the data. That said, curve fitting has its purposes. I got hooked on macro as an undergrad when Kerk Phillips, one of my awesome mentors, made me replicate CEE; and the New Keynesian model generally is a powerful source of intuition for monetary economics. Many models without nominal rigidities have severe limitations when it comes to monetary analysis. But the NK model has some big weaknesses. It can't do meaningful heterogeneity (you'd think the bloggers who are upset about inequality would find that relevant). It assumes nominal rigidities that have nothing to do with how prices are set, not to mention a bunch of other band-aids. It is almost always solved only locally. It by no means has a monopoly on shedding light on financial crises (see e.g., here or here). The point is that we have plenty of specific modeling issues to talk about. We don't have time to resort to cliches that lost their meaning more than a decade ago.

I think the most significant thing the econ blogosphere has done is convince a lot of people that macro is really easy. Yesterday my Twitter feed was full of people wondering why the Fed couldn't see the obvious solution of doing more QE (see also this), as if we have definitively mastered the theoretical and empirical task of understanding every aspect of policy at the ZLB. Presumably there are people who actually believe Krugman's nonsense about all the important macro advances happening in blogs now. I mention this stuff because the freshwater vs. saltwater heuristic is yet another way of giving people license to treat macro as a set of easy, obvious solutions to policy problems, with no adverse consequences, that can be applied in practice by anybody in the driver's seat.

This is a political heuristic, not an economic one, and it's a bad heuristic at that. I just don't find it very productive.

Tuesday, December 10, 2013

My five favorite books of 2013



Last year's list is here. I read these books in 2013, but I'll allow books published in 2012 (the best book I read this year was published in 2010, so I mention it below the list). In no order:

1. The New Geography of Jobs (Enrico Moretti, 2012). Cities matter. Mobility matters. Innovation jobs matter, and not just for the innovators. Manufacturing is overrated as a source of job "multipliers". Geography may explain a large portion of wage and employment puzzles, at least in a mechanical sense. There are secular trends in the US economy that complicate the standard growth vs. cycle dichotomy. This book has directed my thinking on several subjects and probably inspired this post and related ones. Read this book.

2. The Bankers' New Clothes (Anat Admati & Martin Hellwig, 2013). This book persuaded me that equity requirements are probably the best answer.

3. The Alchemists (Neil Irwin, 2013). A good narrative description of the crisis, with a focus on specific central bankers. My opinion of Bernanke was high before, and this did not change that. Irwin tells some mind-blowing stories, like this one. In the comments to that post you'll see Matt Nolan's complaint, which I think is fair. I recall having complaints about some of the economics in this book, but I don't remember what they were (I lost my notes).

4. Misunderstanding Financial Crises (Gary Gorton, 2012). Gorton understands banking; that is probably why Bernanke reads him. I learned much. Recall that Gorton's focus is on (a) inability of the private sector to produce riskless collateral, and (b) the financial crisis as a run on repo, prime broker balances, and asset-backed commercial paper. I am still digesting Gorton as it relates to Admati and Hellwig.

5. Stalin's Curse (Robert Gellately, 2013). Gellately attempts to counter the popular narrative that Stalin was not a committed communist. I do not know this history or literature well enough to evaluate the argument. I simply found the narrative engaging and informative, and it brought me back to issues I have not thought about for some time.


Truly my favorite reading this year was The Big Ditch (Noel Maurer & Carlos Yu, 2010), but it didn't fit my rules for publication date. When we decided to vacation in Panama, I of course downloaded the latest IMF Article IV review (as all tourists do) then went looking for Tyler Cowen's book recommendation (which I found here). After reading and visiting, I concur with Cowen that "people do not spend enough time thinking about the Panama Canal," and I would say Panama more generally (go read all of Cowen's Panama posts). I can't explain the degree to which my Panama experience--of which this book was a key component--has blown my mind this year.

Maurer and Yu are economic historians, which is not the same as historians who write about economics. Their approach to causal inference and measurement is economic and therefore much more satisfying than the hand-waving you find in many histories. Their purpose:

The specific question that this book seeks to shed light on, then, is whether a relatively modern democratic nation, operating inside the strictures of Westphalian sovereignty, was able to leverage its ability to impose military and economic sanctions into sustainable economic gains. Conversely, why did that selfsame nation decide to withdraw from its imperial commitment a few decades later? (5)

The authors show that the answer to the first question is yes ("imperialism paid"), and the answer to the second question is that it was no longer worth the cost. Read this book.

Panama City

Tuesday, November 26, 2013

What constitutes Freshwater macro?


UPDATE 12/19/13: A follow-up to this post is here.

By now everyone knows about events at the Minnesota Fed. Miles Kimball and Noah Smith have a note trying to explain the Freshwater/Saltwater divide (is that still a thing?) to outsiders. Paul Krugman piles on. For the most part, I don't really want to get into this fight, but I'm curious about a few things. This is probably due to my inability to identify the line between Freshwater and Saltwater economics. I use a shorthand of New Keynesian vs. non-New Keynesian. Maybe someone will help me out.

Kimball and Smith write,

In the case of Freshwater macroeconomics there is a large set of purity taboos prohibiting certain model elements (elements Saltwater economists think are necessary to explain the world).

What are we talking about here? The main difference between standard RBC and New Keynesian models is nominal rigidities. Calvo pricing, while useful, does not "explain" anything. It's an assumption, not a model outcome. It's an extremely useful assumption, but it is ad hoc. (There is another big difference--local vs. global solution methods, in many cases--and I don't think that one comes down in favor of the New Keynesians...). If it would be useful for "Freshwater" types to take "Saltwater" ideas more seriously, maybe it would also be useful for New Keynesians to think hard about some of the weaknesses of that model (e.g., here--yes, I went there).

Maybe we're not talking about Calvo pricing.

There is also a conventional wisdom among many people that the Freshwater school has no ability to explain the Great Recession. K&S: "The Saltwater New Keynesians--who included Fed chairman Ben Bernanke--had an answer." Here is where I don't think the Freshwater vs. Saltwater divide is very useful. What kind of insights was Bernanke drawing on? New Keynesian ones? Bernanke's actions could easily have been motivated by Bernanke/Gertler/Gilchrist financial accelerator and/or Kiyotaki-Moore concerns. Are those ideas that "Freshwater" people would reject? I suppose I have no idea, but a lot of RBC models are incorporating these kinds of frictions.

I happen to have attended the Midwest Macro conference in Minnesota a few weeks ago. That by no means makes me an expert on the subject, but I saw lots of papers--including some by Minnesota people--that incorporated a variety of deviations from the bare bones RBC model that had a lot of explanatory power. Is Fisherian debt-deflation a Freshwater thing, or a Saltwater thing? The non-New Keynesian research agenda is very dynamic, while K&S suggest that the Freshwater "template is seldom reexamined."

Again, I probably just have no idea what Freshwater means.

Then we have Krugman, who mentions RBC inflation predictions in the Great Recession. But has inflation matched New Keynesian predictions? Doesn't the standard NK model say we should have had a lot more deflation? That's a tricky model. Oh, and by the way, are the events in Minnesota consistent with Krugman's complaints about practitioners of demand-side macro being oppressed in the discipline?

I'm just trying to articulate my confusion. I'm wondering if our assessment of the usefulness of "Freshwater" macro depends on how narrowly we define it. It might be helpful for Kimball and Smith to give us a more precise definition of the distinction.


Monday, October 14, 2013

GMM and the structural vs. reduced form debate

stolen from Wikipedia
Among this year's economics Nobel laureates is Lars Hansen, who pioneered the generalized method of moments approach for estimating economic models. Tyler Cowen gives a nice summary of Hansen's contributions to economics and finance, and Alex Tabarrok attempts to explain GMM to the layman.

I think that one underappreciated aspect of GMM is that it illustrates how silly the structural vs. reduced form debate is. GMM allows us to estimate equations derived from structural models at low computational cost and with minimal assumptions. We don't have to assume that the entire structural model is "true"; we only have to assume that the functional form of the estimated equation is meaningful relative to the parameters being estimated.
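To make this concrete, consider the canonical example, Hansen and Singleton's consumption-based asset pricing application (a minimal sketch; the notation is generic rather than taken from any particular paper). The household's problem delivers an Euler equation,

    E_t\left[ \beta (c_{t+1}/c_t)^{-\gamma} R_{t+1} - 1 \right] = 0,

and GMM estimates \theta = (\beta, \gamma) by driving the sample moments

    g_T(\theta) = \frac{1}{T} \sum_{t=1}^{T} z_t \left[ \beta (c_{t+1}/c_t)^{-\gamma} R_{t+1} - 1 \right]

as close to zero as possible, minimizing g_T(\theta)' W g_T(\theta) for some weighting matrix W, where z_t is any vector of variables in the agent's information set at time t. Notice what is not required: preferences over other goods, the production side, and the full stochastic environment are all left unspecified. The structural parameters are estimated while remaining agnostic about the rest of the model.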

Vocal opponents of structural approaches seem to think that estimating a structural model requires far more heroic assumptions than estimating the typical linear model used in reduced form work. GMM shows that this is not necessarily the case.

Ultimately the difference between reduced form work and a lot of structural estimation work boils down to functional form. Structural approaches choose the functional form of the estimated equation based on a derivation from a structural model. Reduced form approaches choose based on treatment effect concerns, and they typically choose from within the universe of linear or nearly linear functional forms. They both fit the equation to data by minimizing error. I fail to see how one approach is more realistic than the other. It's not immediately obvious that a linear model of anything is a more or less accurate representation of the real world than any other functional form; rather, it likely depends on the research question and the items being measured. It's nice to approach the discipline with a variety of tools so we can find the right tool for each job.
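Schematically (a sketch, not anyone's actual specification): the reduced form approach picks a linear form and estimates

    \hat{\beta} = \arg\min_{\alpha, \beta} \sum_i (y_i - \alpha - \beta x_i)^2,

while the structural approach derives some function f from a model and estimates

    \hat{\theta} = \arg\min_{\theta} \sum_i (y_i - f(x_i; \theta))^2.

Both are curve fitting; the difference is where the curve comes from--economic theory in one case, a linearity convention in the other.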

Monday, October 7, 2013

The recent decline in employment dynamics

That is the title of an excellent paper by Henry Hyatt and James Spletzer, recently published here (ungated!). The abstract reads,

We document and attempt to explain the recent decline in employment dynamics in the U.S. We have four major empirical findings. First, each measure exhibits a “stair step” pattern, with the declines concentrated in recessions and little increase during subsequent expansions. Second, changes in the composition of workers and businesses can explain only a small amount of the decline. Third, any explanation for the decline in job creation and job destruction will account for no more than one-third of the decline in hires and separations. Fourth, the decline in hires and separations is driven by the disappearance of short-duration jobs.

A major contribution of this paper is the documentation of decline in measures of dynamics from four different datasets:

  • LEHD: A quarterly longitudinal household dataset based on state unemployment insurance records and the Quarterly Census of Employment and Wages
  • BED: The quarterly establishment-level administrative dataset based on the BLS business register (I used this dataset here)
  • JOLTS: The monthly BLS establishment survey data on hires and separations
  • CPS: The monthly BLS household survey on which monthly headline unemployment rate estimates are based

I've discussed the decline in job creation and job destruction rates many times. This paper also looks at hires and separations and job-to-job flows (see chart, which is Figure 3 in the paper). Note the stair-step pattern.

click for larger image


Composition effects do not explain the declines. The aging and increasingly educated workforce and the aging of the firm distribution have some explanatory power but are insufficient. In a related working paper, coauthors and I conduct a thorough investigation of job creation and destruction in the BDS (which I've described here and elsewhere). We likewise find insufficient explanatory power from composition effects. Factors such as the evolving sectoral and regional composition of employment or the changing racial makeup of the population do not have explanatory power and, if anything, act in the "wrong" direction. In short: the composition of the US economy has changed, but these changes cannot explain the decline in dynamics.
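For readers wondering what a composition exercise involves: the standard approach is a shift-share decomposition, which splits the change in an aggregate rate into a within-group piece and a between-group (composition) piece. As a generic sketch (not the exact implementation of either paper),

    \Delta R = \sum_g \bar{s}_g \, \Delta R_g + \sum_g \bar{R}_g \, \Delta s_g,

where R_g is the dynamics measure for group g (an age, education, industry, or firm-age cell), s_g is group g's employment share, and bars denote averages across the two periods. The second term is all that changing workforce and firm composition can explain, and in these data it turns out to be small.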

Further, H&S provide evidence that "the explanation for the decline in hires and separations will be different than the explanation for the decline in job creation and job destruction." So the puzzle is multifaceted.

What could be causing these trends? The authors discuss a few ideas, none of which provide a smoking gun:

  • Changes in employment adjustment costs, either technological or legal, could conceivably work in either direction. These could include the decline in unionization rates, discharge laws, cost of vacancy postings and job search, and others.
  • Changes in the nature of job matches could extend employment duration and have similar effects on other dynamics measures.
  • Changes in the level of uncertainty about match quality, productivity, future profits, etc., could matter. But this can be a tricky idea, and developing a theory of uncertainty that can explain both the secular and cyclical path of dynamics measures would be difficult.
  • Changes in the production process, such as an "outsourcing" of volatility to international labor markets or the rise of temporary help agencies, may have some explanatory power.
  • Job- and house-lock, in principle, could drive declining dynamics, but the authors note that evidence linking the two adequately has yet to be found.
Each of these potential explanations represents an avenue for future research. It is nice to see that this topic is getting some attention, as it may provide clues to the arrival of "jobless recoveries" and other labor market anomalies. Ben Casselman at the Wall Street Journal is more aware of the issue than any other journalist and has a nice new note on the topic (gated; try Google News).


Thursday, September 26, 2013

How much is that in today's dollars?


Suppose a friend tells you that in 1945 their grandfather paid $5,000 for 150 acres of land in Telluride, Colorado. Then they ask you, “how much is that in today’s dollars?” I think to most people this is a really straightforward question. A quick Google search should give you the answer (I came up with this link). To most people, there is a single multiplier that allows you to convert 1945 dollars to 2013 dollars; you just have to look it up. The link I found tells us explicitly: “$1 worth of 1945 dollars is now worth $12.99.” By that arithmetic, grandpa’s $5,000 purchase comes to about $65,000 today. But I find this question really difficult.

I recently read Titan, Ron Chernow’s excellent biography of John D. Rockefeller. The book has lots of passages like the following:

Rockefeller gave $100 million to the Rockefeller Foundation in its first year, bolstered by another $82.8 million by 1919. In current dollars, that would translate into a $2 billion gift during the foundation’s inaugural decade. (566)

Chernow does not explain how he makes this or other calculations. But every time I read a phrase like that I thought, what does that even mean? What does it mean to “translate” 1919 dollars to 2013 dollars? What we care about is the claim of money on real resources, but it’s rarely clear which resources are the relevant ones.

I had similar thoughts last week when Census released median income numbers for 2012. “Wonks” dashed off blog posts proclaiming that the middle class was better off in 1989 than now and that recent decades represent a “lost generation” of economic progress. These comparisons were based not only on ignorance of composition effects but also on the assumption that CPI-U-RS, which the Census Bureau uses to provide constant-dollar time series, is the correct way to “translate” 1989 dollars to 2013 dollars.

Ultimately, my problem is with how the question is posed. The wonks who abused the income data think they are answering a question about economic welfare; they are therefore assuming that CPI-U-RS can turn dollars into utils--measures of welfare. That’s a bad assumption, of course--welfare is about the benefits people get from things, not just the price they pay. No price index can capture changes in welfare over time. Most of our price indexes, to some degree, allow the consumption basket to change over time; so they measure the cost of buying the type of consumption basket people want to buy--or the market value of total output produced--at each point in time. If people get richer over time (in truly real terms), they are likely to change their consumption basket in a way that could make their “cost of living” go up even while their welfare increases. To get even an ordinal approximation of the change in welfare over time we must either fix the consumption basket in 1989 and measure its cost in 2013, or fix the consumption basket in 2013 and find its cost in 1989. This latter exercise will be impossible because many of the things we consume now did not exist in 1989--but that’s the point. It’s hard to compare welfare across time when many of the goods that give us utility now weren't in the choice set 25 years ago (or even five years ago). At the very least, these numbers should be converted into labor hours for the relevant worker, in the style of Don Boudreaux and others; but even that won’t tell the whole story.
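In index-number terms, the two comparisons in that last sentence are just the Laspeyres and Paasche price indexes (a sketch, with p denoting prices and q quantities):

    P_L = \frac{\sum p_{2013} \, q_{1989}}{\sum p_{1989} \, q_{1989}}, \qquad
    P_P = \frac{\sum p_{2013} \, q_{2013}}{\sum p_{1989} \, q_{2013}}.

The Laspeyres index prices the 1989 basket in both years; the Paasche index prices the 2013 basket in both years. The Paasche version is the one that breaks down: the 2013 basket includes goods that had no 1989 price because they did not exist, so its denominator cannot be computed.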

To decide whether you really believe the wonks, ask yourself the Garett Jones question: if Doc Brown’s DeLorean shows up at your door and offers to take you to 1989 and give you a job and income that would put you in the same national percentile as your current income, would you take the offer? Was the quality of medical care, transportation equipment, computers, and communications technology in 1989 good enough to make you better off? I think the honest answer for most people--probably not all, but most--is no. And if the answer is no, then our price deflators aren't delivering welfare measures, at least over many decades (by the way, I played with some other deflators on the income data, and some do make the data look better--but none make the data look great).

What about the other questions--the land prices, or the 2013 value of Rockefeller’s charitable activities? Again, we must be more careful about the question.

Suppose I were to respond to my friend’s question about land by using a local land deflator. In some contexts, that would be the correct approach--but it would defeat the point of the question at hand, which is to illustrate that the old man got a great deal on the land since Telluride real estate is now out of reach for everyone but Tom Cruise. Maybe a wage denominator is the right tool here--how many hours did grandpa have to work to make the purchase, and how many hours would a person in the same occupation have to work to buy that same land now? The person asking the question needs to be more specific than just, “how many dollars is that today?”

The Rockefeller question is really tricky. For a time span that long, and for the transaction in question, I don’t think there is a standard deflator that will tell us anything useful. In fact, I don’t think there are many questions that can be answered here. A price index would tell us very little--we don’t have great price indexes going back that far, for starters, but also the consumption basket of 1920 is nothing like the consumption basket of 2013. Few of the jobs people do are even similar. And no, Ron Paul has no solutions for us here--even if the U.S. dollar had been on a constant gold peg this entire time, knowing how many pounds of gold Rockefeller donated would tell us almost nothing important.

Really the question we want to answer is, “is that a lot?” We can only answer that question in rough, vague terms. With Rockefeller-sized numbers, maybe we can put nominal GDP in the denominator and see how much of national output Rockefeller gave away. Maybe we could use an outcome-based result: Rockefeller’s charity led directly to several disease cures and universities; how much do we value those things now? (By the way, during his lifetime Rockefeller gave away much more money than just his Rockefeller Foundation endowment).

I don’t find any of these approaches satisfactory; at best they can hint at magnitude. Maybe we should just convert everything into apples or pounds of (constant-quality) beef. The point is that I don’t think dollar comparisons are very useful when they’re made over long periods of time. It depends on the question, of course--we can often deflate figures with hours of labor--but many questions don’t have answers. At the least, I think writers should be more aware of how their measurement of the cost of living affects their conclusions.

Thursday, August 22, 2013

A comment on math and economics

Well, it's that time again. Econ bloggers are debating the merits of math in economics, particularly macro; and--surprise!--the people who don't use math say it's not useful. I've noted my support for math before, and in the same place I suggested that a wide variety of methodological approaches should be encouraged. I have a few more quick thoughts.

Some of the criticisms of mathematical modeling in economics apply equally to narrative or heuristic modeling, such as this one by Arnold Kling. He says,

Macroeconomic equations are not proven and tested. They are instead tentative and speculative.

Yep. And this is also true of any other approach to macroeconomics (more below). So Kling isn't helping us sort out the relative merits of the two approaches. He's usually much more insightful. Another offering is from Noah Smith; of his grad school macro classes, he says:

We just assumed a bunch of equations and wrote them down. Then we threw them all together, got some kind of answer or result, and compared the result to some subset of real-world stuff that we had decided we were going to "explain".

This is not fair to typical macro modeling, which is not just throwing things together but rather consists of starting with a few assumptions about behavior and the structure of markets then drawing them out to a conclusion. That's no different from narrative or heuristic theorizing, except that in math everyone can clearly see the assumptions and proposed mechanisms so we can fight about them. (By the way, most of the Smith post is good; it's more about the difference between math in physics and econ rather than simply a criticism of econ math). I very much doubt that Smith's graduate macro professors were such poor teachers that the whole process looked like throwing a bunch of silly equations together.

Bryan Caplan has weighed in as well. He makes a false distinction between economic intuition and math, since each can usefully inform the other. He also focuses too much on explaining economics and not enough on developing new insights. Yes, a lot of insights provided by math can be explained without it; every good paper follows its math section with a narrative explaining the intuition derived from the math. This is part of the reason for Caplan's observation that many economists skip the theory section of papers and just read the conclusions; the conclusions typically expand the insights from the math into words. That doesn't mean the math was unnecessary. So Caplan's empirical test of this debate isn't enough; of course he can explain Krugman's math discoveries in words! That's what makes them great (though I still think that a few equations can profitably replace a lot of words when explaining economics--go look at an intertemporal first-order condition, as Claudia Sahm notes, then write it down in words, and see which is simpler to study).

The other question is whether math facilitates new insights that would have required a lot more work if done in a heuristic or narrative fashion; and the answer to that question probably depends on the person doing the research (math has worked well for me).

Caplan also assumes too much by calling math a cost/benefit failure; the costs of acquiring math skills probably depend on the individual, and in any case most macro papers can be understood if you have a basic grasp of differentiation and can conceptualize market clearing. The latter skill should be possessed even by economists who shun math. And is the ability to take a derivative really that costly to obtain, given that it can then be used to read thousands of papers over the course of a career?
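To illustrate the Sahm point, take a standard intertemporal first-order condition (a generic consumption Euler equation, not any particular paper's):

    u'(c_t) = \beta \, E_t\left[ (1 + r_{t+1}) \, u'(c_{t+1}) \right].

In words: the marginal utility of spending a dollar today must equal the discounted, expected marginal utility of saving that dollar and spending it with interest tomorrow, where the expectation averages over everything that could happen tomorrow. The equation is one line; a faithful verbal rendering takes a paragraph and is much harder to manipulate when you want to derive something from it.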

Also, as I've noted before, most people think in partial equilibrium. In macro, we often need general equilibrium, which is a concise way of saying that your theory should try to account for feedback effects and aggregate resource constraints. For me, at least, that's much easier to do with math than without it. Maybe keeping track of it all using words is a piece of cake for others, but I doubt it.

A lot of people seem to be deluding themselves into thinking that narrative/heuristic economics can be done without a lot of assumptions. You're lying to yourself if you think that. Macroeconomics is really, really difficult. You can't do it without simplifying assumptions. If you think you can, you're either arrogant or naive. Show me a narrative essay that accounts for all the heterogeneity and cognitive biases and informational issues and idiosyncratic market structures and time variation and legal frameworks that exist in the real world.

Using math is an act of intellectual humility: I admit that I cannot keep track of a lot of moving parts in my head, and I'm willing to subject my reasoning to the scorn of others, in a decent-sized package, so that my model can be easily criticized. That doesn't mean that discarding math is arrogant in and of itself, but it does mean that discarding math may force others to wade through a big narrative to isolate your assumptions and shortcuts. And that can be very costly.

If your main criticism of math is that it requires lots of assumptions, it's time for you to abandon economics; you can't do it without assumptions that are often unrealistic. And the thing about math is that it provides a really nice way to think about what happens when assumptions are relaxed one-by-one.

My bigger question is, why can't we all just get along? Does Caplan really think that the people who use math have Stockholm Syndrome? Is he really unwilling to consider the possibility that a lot of economists have found math to be really useful for providing the insights they need, and that those insights would have been costlier without the intellectual leverage and simplicity that math can provide? Why can't he just say that he doesn't find math useful, but that others do, so we should allow for diversity of approaches?

Yes, a lot of the math out there is just showing off; but the other side of that coin is that maybe some of the anti-math sentiment we see is driven by the complainers' unwillingness to learn the few math tools they would need to read papers [edit: I'm not suggesting that Caplan, Smith, or Kling fall into this category]. I see no reason that their laziness should be used to guilt math users into abandoning a useful tool.

If you find narrative style useful, do that. If you find math useful, do that (and explain your math afterwards). Don't assume your approach is without flaws, and don't pretend that your learning style is appropriate for everyone.


* I have to note that Caplan's observation about math for undergrads is myopic; my undergrad institution used math aplenty--here's a taste of what they're up to now.

Wednesday, July 10, 2013

Business dynamics and the US job market: Are secular trends slowing the recovery?

On this blog, I've been trying to describe what's going on in the US economy. This entails looking at two things: long-term secular trends and cyclical issues. Before you stop reading, just note that a lot of the current US policy debate relates to questions about whether our current problems are structural or cyclical. I'll have some comments on that at the end of this post (you can skip to that if you want).

The BED* provides data on job creation from entering, exiting, expanding, and contracting establishments.** Readers of this blog already know that entry is pretty important, since startups typically account for all net job creation. The rich BED data provide insights into the related issue of establishment dynamics.

Figure 1 plots employment flows for opening and closing establishments, respectively, since 1992 (the beginning of the BED). Figure 2 plots the number of establishments that opened or closed, respectively, by quarter. In both charts, the grey bars indicate NBER recession dates. Click either chart for a larger image.

Figure 1

Figure 2

In each chart, the blue line corresponds with opening establishments, and the red line corresponds with closing establishments. Speaking very roughly, in Figure 1 you can see the net employment flow for entry/exit as the blue line minus the red line; and in Figure 2 you can see the net change in the number of business establishments as blue minus red (I say "roughly" because each series is individually seasonally adjusted, and this can lead to very rough comparisons).

What do you see in the figures? The downward trend in job-flow quantities that I've described in other posts can be seen clearly in Figure 1 in the post-1998 period (and it's interesting that the peak occurred before the 2001 recession). My best guess--based on BDS data--is that the 1990s data show a temporary leveling off of the flows data in the middle of a decades-long downward trend (BED data only go back to 1992).

Also in Figure 1, observe that the gap between opening and closing establishment job flows was large in the 1990s and did not again reach that magnitude in the 2000s or 2010s. In both Figures 1 and 2, we see that the post-Great Recession period has exhibited the tightest entry/exit gap of the last two decades. There has been no return to a world of much more entry than exit of businesses. In the post-Great Recession world so far, entry and exit basically offset each other, including in their job market contributions. And the long-term trends in both charts have been moving sideways for several years now.

Now we'll consider existing establishments that are expanding or contracting. Figure 3 shows job flows at existing expanding and contracting establishments, respectively. Figure 4 shows the number of expanding and contracting establishments, respectively. Click either chart for a larger image.

Figure 3

Figure 4

Again, blue line minus red line gives (roughly) net job creation (Figure 3) or net new establishments (Figure 4). In employment terms, the gap between growth and contraction during the 2000s is not so different from the gap during the 1990s (which is different from what we saw in entry/exit). Have net job flows recovered to pre-Great Recession levels? Maybe. But we do see something in Figure 4: the net gap was larger in the 1990s than in the 2000s or 2010s. In Figure 3 we see a marked stair-step pattern in gross flows. It's as if the last two recessions have caused gross flows to drop to permanently lower levels without signs of linear growth or decline.

Figures 3 and 4 say a lot about the Great Recession. In particular, the Great Recession involved both a massive spike in contractions and a large drop in expansions. The post-Great Recession period, though, is more about a pullback in contractions than it is a recovery in expansions. In both figures, the expansion/contraction gap closed rapidly, then moved sideways. And we have that troubling final data point (3rd quarter 2012) where expansions and contractions are almost equal; let's hope that's just temporary.

In general, the four figures show the following: Gross flows have not recovered to pre-Great Recession levels (this is not surprising given other data we've seen). Contraction of establishments has fallen back to low levels, but expansion has barely recovered. Establishment entry has generally been barely enough to stay ahead of establishment exit, leaving existing expanding establishments to do all of the work of expanding the labor market. The Great Recession recovery period is worse than the 2001 recession period in this respect.

Note also that recessions are characterized by the red and blue lines flipping places (that is, exit exceeds entry, or contraction exceeds expansion). The entry/exit margin flipped later than the expansion/contraction margin going into the Great Recession (which is somewhat different from the 2001 recession). Both margins re-flipped several months after the Great Recession ended, which is consistent with total employment numbers (which bottomed in late 2009/early 2010).

Some comments on the cyclical vs. structural debate

In light of these data and others I've shown on this blog, how credible is the claim that current troubles are totally cyclical in nature? I haven't done anything causal here, of course, but secular trends are highly suggestive that something structural is going on. I don't know what it is. I'm not trying to stake a strong claim in this debate--I'm persuaded that cyclical things are still playing a large role--but I do think the evidence is complicated.

The US economy is always changing. The standard approach in macro is to separate growth questions from business cycle questions, but growth isn't the only trending variable in an economy. Other variables have been undergoing big changes in recent decades: industrial composition, worker/job search and match relationships, entry and the age composition of firms, international stuff, the regulatory environment, geographic patterns of economic activity ([1], [2]), and others.

It's not surprising, then, that this recovery doesn't look like previous ones. We can think of some of the reasons for this, but we don't have satisfactory explanations for the specific trends I've talked about here. Why is dynamism declining? As I noted here, we don't even know if it's a good thing or a bad thing. Do we really need the high (but declining) pace of job and worker reallocation we observe? Or is the US economy just a Rube Goldberg machine, engaging in a bunch of wasteful churning that we'd be better off without? If so, we should applaud the decline we're seeing. But if that reallocation is essential to productivity and growth in living standards, then we should be worried. Knowing what's driving it is crucial. There are some good ideas bouncing around out there, but nothing conclusive yet. The answer will at least have to account for entry and exit dynamics, for both establishments and firms.

My reading of the data suggests that secular trends may be interacting with cyclical forces in a toxic way, but I don't have any great ideas about precise mechanisms or causal factors.

*The BED are quarterly data provided from the BLS based on state UI data. They are released with a lag of about 8 months. Like the BDS (the dataset I usually use here), the BED basically covers the universe of private nonfarm employers; unlike the BDS, the BED is available at higher frequency and is released more quickly. BED has other drawbacks compared to the BDS, such as a more limited ability to track firms.

**An establishment is a single business location. A firm is a collection of one or more establishments. Costco is a firm; your local Costco store is an establishment. Usually on this blog I talk about firms (that includes all of my posts on startups, which are new firms). A count of entering establishments in the BED includes both establishments of startup firms and new establishments of existing firms. The two are conceptually different in many ways, but the public BED data do not allow us to disaggregate them.

Sunday, June 23, 2013

The geography of startups II: Cities

Part I, in which I sliced the data by state, is here. The state-level approach has some nice intuitive appeal, but in general I don't think state lines are always great ways to divide economic activity (but they do have a few advantages since they make their own laws).

This time I'll look at Metropolitan Statistical Areas, or cities (data for these are now available in the newest vintage of the BDS; other city definitions, like commuting zones, are not). The nice thing about MSAs is that they can cross state lines; for example, Washington, DC includes parts of Virginia and Maryland. Philadelphia includes parts of Delaware and New Jersey. MSAs are county based; that is, an MSA is a collection of counties. The BDS uses 2009 definitions, provided here (the definitions can change over time, but for time series work we need constant definitions). I'll also briefly look at one region that does not map to a single MSA, Silicon Valley.

Figure 1

Figure 1 (click for larger image) plots startup rates by MSA (actually, the map template is for counties, so the MSA dividers aren't always ideal--sorry). I have used the average of the 10-year period of 1997-2007, hoping to avoid cyclical issues. Darker means higher startup rates, where the startup rate is the percent of all MSA firms that are startups (new firms). From my post on state-level startup rates, some of this is unsurprising (e.g., high startup rates in Utah and Florida). Note that this is a within-city measure of startup activity; it says nothing about how much the city contributes to national startup activity, but it does say something about the nature of firm dynamics within a city. Figure 2 shows the top 10 cities for internal startup activity (click for larger image):

Figure 2

These are mostly small cities. Interestingly, Florida has five of the top ten (Palm Coast, Miami, Orlando, Cape Coral/Fort Myers, and Naples). Utah has three of the top ten (Provo/Orem, St. George, and Ogden). In all of the top ten, 12 to 15 percent of all firms are startups in a given year (on average for the 10-year period).

In my state-level post, I found that California, Florida, New York, and Texas are the top contributors to national startup activity. Are the top startup cities in those states? Figure 3 shows cities' share of national startups; that is, I sum all startups in the city for 1997-2007 then divide it by all startups for the US for 1997-2007 (click for larger image).

Figure 3

Few cities are significant contributors to national startup activity. Observe that, despite the state's large contribution to startup activity, Texas has no large contributing cities. The big contributors can be clearly seen on the map: New York City, Chicago, Miami, and Los Angeles (Atlanta shows up, but it is just barely above several other cities; I chose the cutoffs poorly). Figure 4 shows the top ten cities for contribution to national totals (click for larger image):

Figure 4

Despite New York only being the #3 state, New York City contributes an astounding 8 percent of startups in the US. This breakdown shows the value of looking at cities instead of states. In my state-level analysis, Texas earned its place high on this list--but here we learn that both California and Texas have lots of cities with moderate contributions (plus one big city for California), whereas New York City actually provides more startup activity than New York State. The chart also shows that Washington DC does its part for startup activity, but this could not show up in state-level analysis (treating DC as a state is always odd). In any case, the utility of examining cities that cross state lines is shown here.
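Both of these statistics are straightforward to compute from firm counts. Here is a minimal sketch in Python (the column names and numbers are invented for illustration; this is not the actual BDS file layout):

    import pandas as pd

    # Hypothetical BDS-style data: one row per city per year.
    df = pd.DataFrame({
        "msa":      ["Provo", "Provo", "Miami", "Miami"],
        "year":     [1997, 1998, 1997, 1998],
        "startups": [900, 950, 5200, 5300],    # firms of age zero
        "firms":    [7000, 7100, 47000, 48000] # all firms
    })

    # Within-city startup rate: startups as a percent of all firms,
    # averaged over the sample years (1997-2007 in the post).
    df["rate"] = 100 * df["startups"] / df["firms"]
    rates = df.groupby("msa")["rate"].mean()

    # Contribution to national activity: city startups summed over the
    # period, divided by total startups everywhere (here, just two cities).
    shares = 100 * df.groupby("msa")["startups"].sum() / df["startups"].sum()

The two measures answer different questions, which is why a small city can rank high on the within-city rate while contributing little to the national total.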

An aside on Silicon Valley: According to this BLS document, Silicon Valley doesn't map to a specific MSA but instead includes counties in the San Francisco, San Jose, and Santa Cruz MSAs. Unfortunately, the BDS does not provide county-level data. An upper bound can be obtained by looking at all three MSAs as a group. Using this definition, I found that Silicon Valley accounts for 2.45 percent of national startups, placing it below Miami (2.69 percent) and above Atlanta (2.01 percent). That's a respectable quantity, but it may be smaller than some would expect. Of course, not all startups are created equal, and it's possible that growth-weighted data would look different.

Finally, as with my state analysis, my city analysis shows that some areas are increasing their startup activity faster than others. Figure 5 shows the change in share of national startup activity by MSA from the average of the 1987-89 period to the 2004-06 period (click for larger image):

Figure 5

Florida again shows itself to be a hotbed of startup activity, increasing its share of national activity by more than 1.5 percentage points. Many cities show positive growth, which is not surprising given the shift of economic activity from rural to urban areas. Figure 6 shows the share of startup activity accounted for by metropolitan areas (as opposed to non-metropolitan areas) since 1978 (click for larger image):

Figure 6

Metro areas accounted for around 80 percent of startups in 1980 but now account for more than 86 percent. This trend shows no signs of abating. 

I think the above analysis shows a few things. 

First, common perceptions about the concentration of startup activity may be flawed; I wonder who among startup enthusiasts would have predicted that Miami and Chicago produce more startups than Silicon Valley. But this is a very general definition of startup that includes everything from the new local dentist to the next Google; things may be different if we use a more fashionable definition.

Second, there is a difference between having startup activity that is quantitatively significant in national terms and having startup activity that is quantitatively significant in local terms. Both tell us something about cities. Some places are both highly dynamic in a local sense and important for national activity (e.g., Florida), while others may lack national significance but clearly have strong startup activity as a share of the local economy (e.g., several Utah cities). Still other regions contribute much to national activity by nature of the fact that they are large economies, but startups are not a large share of local activity.

Third, the way we slice geography matters; some of the results from this analysis of metropolitan areas are surprising in light of my previous analysis of states, and I suspect that dividing cities up in different ways (e.g., commuting zones) could provide other surprises as well. There are also strong implications of industry trends for this kind of analysis, since industry and geography are closely linked. The broader point is that the way we aggregate matters. 

Finally, it's important to keep in mind that this regional activity is occurring against the backdrop of a secular decline in startup activity (see charts here). Further, I have not examined the job market implications here; we know that startups basically account for all net job creation, but I haven't looked at how that fact interacts with geography.




Wednesday, June 19, 2013

Is "voluntary unemployment" a myth?

I don't know, and this is not my area, but John Aziz is pretty sure that it's a myth under current circumstances. Hopefully I understand his argument. He uses as evidence this chart:



I'm not sure what data we're looking at here, but I assume its definition of Available Labor Supply is something like the CPS workforce definition. Aziz's argument makes a lot of sense in a representative agent or homogeneous agent world. If there are 100 job applicants, and there are 50 job openings, then obviously none of the unemployment is voluntary because the job market wouldn't absorb any new job applicants even if they tried. This depends a bit on semantics, but it's good enough for me, to a first approximation.

But consider a model with two types of skillsets, carpentry and blogging. Suppose there are 50 people who count as unemployed (they are "seeking work"): 25 carpenters and 25 bloggers. Now suppose there are 45 job openings. By Aziz's logic, there is no voluntary unemployment. Now I tell you that 25 of the job openings are for carpenters, but only 20 of the carpenters are applying for those jobs, while the other 5 carpenters are just "talking to friends about job opportunities," so they count as unemployed people. So it's possible that 5 carpenters are voluntarily unemployed (in anything but a semantic debate). And this argument can extend to geographic heterogeneity or anything else; it's pretty hard to pin down how much it matters.
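To make the arithmetic explicit, here is a toy version in Python using the made-up numbers above:

    # Aggregate slack can mask type-specific conditions.
    seekers  = {"carpenters": 25, "bloggers": 25}  # counted as unemployed
    openings = {"carpenters": 25, "bloggers": 20}

    agg_gap = sum(seekers.values()) - sum(openings.values())
    print(f"aggregate gap: {agg_gap}")  # 5: openings fall short overall

    for skill in seekers:
        print(f"{skill} gap: {seekers[skill] - openings[skill]}")
    # carpenters gap: 0 (an opening exists for every carpenter, so the 5
    # who aren't really applying look voluntary, not crowded out)
    # bloggers gap: 5 (all of the aggregate shortfall is here)

The aggregate chart sees only the top number; the type-specific gaps are invisible to it.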

Is this kind of mismatch happening now? I don't know, but if there is evidence that reservation wages are high or job-search intensity is low, I don't think we can rule it out. If you acknowledge heterogeneity and admit that transfers affect incentives to search for jobs (or start businesses), then I don't think you can be as confident as Aziz. We at least need more evidence about mismatch and other things.

The broader point is that too much aggregation can get you into trouble with some questions. Simply knowing aggregate numbers for job seeking and openings doesn't tell us all we need, since we don't know how many potential openings exist for every kind of worker. Further, we should keep in mind possible general equilibrium effects; what would happen to wages or entrepreneurship if people had a smaller safety net, and how would that affect labor demand? Again, this chart doesn't tell us.

I'm certainly not suggesting that transfers are the dominant driver of the current employment situation, but I think Aziz goes too far in suggesting that there's no voluntary unemployment out there.

Monday, June 17, 2013

The geography of startups I: States

Given the cross-industry heterogeneity in startup activity, it should not be a surprise that some regions see more startups than others. The importance of geography for economic activity does not seem to be going away, and startups are really important, so it's useful to look at state differences in new firm formation (I'll try to look at metropolitan areas later). As always, I define a startup as a new firm. Figure 1 shows startup rates by state, where the startup rate is the ratio of startups to total firms within a state (click for larger image). 


Figure 1

Here I have taken the 10-year average, roughly peak-to-peak on the business cycle. As expected, the rates vary pretty widely, from DC at 3.5 percent to Nevada at 7.3 percent (for those who don't like seeing DC treated as a state, North Dakota is next-lowest at 3.8 percent). Next, Figure 2 shows state startup shares--startup firms by state as a percent of the national total (click for larger image).

Figure 2

Again, I have used the 10-year average. Most states produce few startups. Figure 3 shows the top ten states (click for larger image):

Figure 3

Note that the top ten states account for more than half of economywide startup formation, with California alone providing more than 12 percent. But this is a snapshot in time, and things are on the move. Figure 4 considers the change in state shares of national startup activity. Here I have used the difference between three-year averages (1987-89 and 2004-06), capturing the business cycle peak-to-peak change (click for larger image).

Figure 4

Florida has increased its share of national startup activity by nearly 2 percentage points, while California has slightly declined. The "rust belt" and the northeast have declined as well, with many western states and Texas gaining. I haven't looked at whether these trends held up through the Great Recession.

Keeping in mind that geography can be tricky in the internet age, startup activity may be undergoing a major regional transition. It is likely driven in part by industry trends. Data can be noisy, and I have not here considered job creation quantities from startups, but these facts may be worth considering in the context of other discussions of geography and economics.

Friday, June 7, 2013

What's the Bundesbank counterfactual?

Neil Irwin describes Axel Weber as looking like Tony Soprano

On May 10, 2010, the ECB announced that it would buy peripheral sovereign debt, ostensibly to "restore an appropriate monetary policy transmission mechanism" but also to address the debt crises in Greece, Portugal, and Ireland. Axel Weber (then-president of the Bundesbank and, therefore, member of the ECB Governing Council) was strongly opposed to the program. The following occurred prior to the announcement but after the decision.

Shortly after the Governing Council meeting Sunday evening, Weber convened a conference call of the Bundesbank Executive Board. . . . Officially he wasn't supposed to tell anyone of what the Governing Council had just decided, but this was so momentous that he posed a quite serious question to the board members: Should we do it? Should the Bundesbank follow its marching orders from the ECB and buy billions of euros' worth of Greek and Portuguese bonds, violating its long-cherished principle of not using the printing press to fund governments? . . .

Staring at that precipice, the Bundesbank concluded it was better to hold its nose and violate orthodoxy than to unleash such dangerous consequences.

This jarring revelation is from page 231 of Neil Irwin's book, The Alchemists, which I have very much enjoyed.

Friday, May 24, 2013

Monetary policy doesn't require borrowing, and macro is hard

From Nick Rowe:
Monetary policy does not work by increasing actual borrowing. That is not the causal channel of the monetary policy transmission mechanism. Monetary policy works by increasing spending, not borrowing. And one person's spending is another person's income, so people in aggregate do not need to borrow more in order to spend more. Their increased spending finances itself.
This is model-specific, of course. Read the whole post, or at least section #1. This is a very nice example of the need for macro analysts to avoid the temptation to simply aggregate micro intuition. Feedback and resource constraints matter. General equilibrium matters. Says Rowe,
Yep. Macro is hard. You can't just sit back and think "how would I react if my rate of interest fell?" You have to think about how my reactions would affect others, and how their reactions would affect me, and so on.
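
A toy illustration of the accounting point, with all numbers invented: if two agents each spend more, and each agent's extra spending is the other's extra income, then aggregate spending rises with no new borrowing required.

```python
# Toy two-agent economy: one person's spending is another's income.
# All numbers are invented for illustration.
income = {"A": 100.0, "B": 100.0}
extra = 10.0  # each agent decides to spend 10 more per period

for period in range(3):
    # A's extra spending is B's extra income, and vice versa.
    income["A"] += extra
    income["B"] += extra
    # Each agent's extra spending is exactly covered by extra income,
    # so required new borrowing is zero.
    new_borrowing = extra - extra
    print(period, income, "new borrowing per agent:", new_borrowing)
```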


Wednesday, May 15, 2013

More on housing and startups



People are starting to notice the epic collapse of startup activity in recent years. Via Arnold Kling, see this great note by Glenn Reynolds and a reader suggesting that the decline in housing collateral could be a large factor.

I think so too, as I've discussed before. I'm working on a paper that I hope will shed some light on this question. A few things to keep in mind:

1. Part of the decline is secular; I noted this here. This has coincided with a more general secular decline in business dynamism, and we still don't know what's driving it. The startup problem seems to matter for the broader dynamism decline, though. It's difficult to disentangle the secular component from the Great Recession component.

2. I note here that (a) national house price indices and home equity peak about the same time as startup activity (2006, before the "recession" started) and (b) residential investment peaks about that time as well despite other investment series peaking at least a year later. That's far from a smoking gun, but it is suggestive.

3. More formal empirical evidence for this link is emerging. I discussed one paper here (this one exploits the famous Saiz housing supply elasticity instrument). Another paper, this one exploiting time series and regional variation, obtains similar results. Both of these papers cast doubt on the notion that the Mian and Sufi channel (the household balance sheet channel) is sufficient for understanding the full consequences of house prices (in part because the two papers I mentioned find effects in tradeables in addition to nontradeables).

4. The full details of a housing collateral/startup channel require some unpacking. For example, you could build a really simple model with frictionless housing markets and housing collateral that would not give you a clear house price/startup relationship. To see this, suppose housing and nondurable consumption enter utility as Cobb-Douglas, so expenditure shares are fixed. Then a house price decline just causes people to buy more houses. You need something more; lots of housing market frictions (which is reasonable) or a simultaneous decline in loan-to-value ratios will probably do it (see the sketch after this list).

5. It would be nice to be able to quantitatively compare the consequences of the main channels through which housing collapse smashed the economy. These include this housing startup channel; the Mian and Sufi consumption channel; the standard residential investment/construction industry channel; and the bank balance sheet channel. Someone should write a paper about this... (working on it).

6. Figuring out the cause of the startup collapse is important since startups account for almost all net job creation.
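
To make point 4 concrete, here is the frictionless Cobb-Douglas benchmark in two lines (the notation is mine, not from any particular paper):

```latex
\max_{C,H}\; C^{1-\alpha} H^{\alpha}
\quad \text{s.t.} \quad C + pH = W
\;\Longrightarrow\;
pH^{*} = \alpha W, \qquad H^{*} = \frac{\alpha W}{p}
```

Housing expenditure, and hence the value of housing collateral, is pinned at \(\alpha W\) regardless of the price \(p\): when \(p\) falls, the household simply buys proportionally more housing, so a price decline alone generates no collateral channel. Frictions or a falling loan-to-value ratio have to do the work.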


Crazy prices on Deep Space Nine


I broadly agree with Matthew Yglesias about Star Trek. But his post reminded me of a complicated aspect of the series: its economy. Particularly when watching Deep Space Nine, it can be difficult to reconcile the apparent plenty provided by replicator technology with the "profits" obsession of the Ferengi. The Star Trek universe has a currency--typically gold-pressed latinum--and it appears to have value and uses even in an environment with little scarcity.

Also, some of the prices don't make any sense.

Memory Alpha provides this discussion of the currency with examples of prices mentioned during the series. I'm going to convert some of these prices to dollar values by using a very lucky mention--wages. "Quark pays his Bajoran employees one slip of latinum a day during the Cardassian Occupation." We can probably assume that these are low-skilled wages, and we can use what we know about low-skill wages in dollars to build exchange rates.

We're also given a set of conversion rates between latinum denominations: 1 bar = 20 strips = 2000 slips.

I don't know what sort of labor market is supposed to have existed during the Cardassian Occupation; maybe all employers had monopsony power in labor markets, or maybe labor was scarce. I'll try three different specifications:
  • Low wage conversion assumption: $1/hour or $8/day
  • Minimum wage conversion assumption: $7.25/hour or $58/day
  • High wage conversion assumption: $20/hour or $160/day
Of course, they may not be working 8-hour days, but I think these specifications cover reasonable scenarios. The conversion arithmetic is sketched below; the table that follows applies it to a few of the mentions of latinum.
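
Here's the arithmetic as a minimal sketch (the function and scenario names are mine; the only substantive assumption is that one slip equals one day of low-skill wages):

```python
# Latinum denominations: 1 bar = 20 strips = 2,000 slips,
# so 1 strip = 100 slips.
SLIPS_PER_STRIP = 100
SLIPS_PER_BAR = 2000

# Dollars per day under each wage scenario; 1 slip = 1 day's wages.
WAGE_SCENARIOS = {"low": 8, "minimum": 58, "high": 160}

def slips_to_dollars(slips, scenario):
    """Convert a price quoted in slips of latinum to dollars."""
    return slips * WAGE_SCENARIOS[scenario]

# Example: pajamas at 300 slips.
for name in WAGE_SCENARIOS:
    print(name, slips_to_dollars(300, name))  # 2400, 17400, 48000
```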


| Item | Slips | Low wage conversion ($) | Minimum wage conversion ($) | High wage conversion ($) |
|---|---|---|---|---|
| Crate of root beer | 10 | 80 | 580 | 1,600 |
| Pajamas | 300 | 2,400 | 17,400 | 48,000 |
| Cadet's uniform | 500 | 4,000 | 29,000 | 80,000 |
| Dress | 1,700 | 13,600 | 98,600 | 272,000 |
| Wreckage of a ship | 6,000 | 48,000 | 348,000 | 960,000 |
| Nog's life savings | 10,000 | 80,000 | 580,000 | 1,600,000 |
| Quark's wager on Sisko vs. Q fight | 10,000 | 80,000 | 580,000 | 1,600,000 |
| A day's revenue at Quark's | 10,000 | 80,000 | 580,000 | 1,600,000 |
| Morica Bilby, shipping consultant, weekly wages | 10,000 | 80,000 | 580,000 | 1,600,000 |
| Someone's bar tab at Quark's | 44,000 | 352,000 | 2,552,000 | 7,040,000 |
| 2,000 tons of Kohlanese barley | 378,000 | 3,024,000 | 21,924,000 | 60,480,000 |
| Quark's evacuation stash | 1,200,000 | 9,600,000 | 69,600,000 | 192,000,000 |
| Offer to buy Quark's bar | 10,000,000 | 80,000,000 | 580,000,000 | 1,600,000,000 |

These are some pretty startling numbers. A cadet's uniform costs between $4,000 and $80,000. A dress is between $13,600 and $272,000. Quark is a very wealthy man (or else has a serious gambling problem); he wagers half a million dollars on a fight and keeps tens of millions in cash under his bed for emergencies.

I would say that these prices are pretty inconsistent. Note that this observation does not depend on my dollar conversion choices; just look at the "Slips" column and observe that pajamas cost 300 days of wages in the food service industry. Maybe the post-scarcity economy leads to strange preferences and relative prices. Or maybe the writers didn't think very hard about latinum mentions.

It almost makes Star Trek seem unrealistic!

Monday, May 13, 2013

Employment services and misclassification

Lately there has been some talk about temporary help services (see here and here). This industry, and the employment services industry more generally, is interesting not only for its potential business cycle implications but also for its implications for economic measurement.

In Census and BLS data, "employment services" is an industry category (4-digit NAICS 5613) that includes job placement services, temp agencies, and other services that allow businesses to outsource HR and other tasks. We have seen some interesting activity in employment services during the last 20 years.


Here I've plotted "employment services" employment as a share of "professional and business services" employment (red line). Observe that this ratio has risen by almost 5 percentage points since 1990. Because services generally have made huge employment gains over this period, I also plot "employment services" employment as a share of total private nonfarm employment (blue line).

What interests me is that, in effect, many employees in this industry are misclassified by industry codes: people on the payrolls of temp agencies could actually be working in any industry. This may become a measurement problem if employment services resume their gains of recent decades; to the extent that these workers are misclassified, US data overstate employment in this narrow services industry and understate employment in the industries where temporary workers actually work.

Consider an example. If I have a manufacturing plant with a big HR department, but I decide to close the HR shop and pay an HR services firm to do that work, very little has actually changed in the industry composition of the US economy--but the data will record that the manufacturing sector shrank and the services sector grew.

Consider another example. Suppose a change occurs among retailers that makes them want to fill existing jobs with temporary, rather than permanent, workers, and they do this by contracting with temp agencies. Again, the actual industry composition of employment hasn't changed, but the data will indicate a smaller retail sector and a larger services sector.
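
A stylized version of the second example, with all numbers invented:

```python
# "measured": industry of the employer of record (what the data capture).
# "actual": industry where the work is physically performed.
measured = {"retail": 1000, "employment services": 100}
actual   = {"retail": 1000, "employment services": 100}

# Retailers convert 200 permanent jobs into temp jobs staffed via agencies.
shifted = 200
measured["retail"] -= shifted               # payrolls move off retailers' books
measured["employment services"] += shifted  # ...and onto temp agencies' books
# "actual" is unchanged: the same people still work on retail floors.

print(measured)  # {'retail': 800, 'employment services': 300}
print(actual)    # {'retail': 1000, 'employment services': 100}
```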

Something to keep in mind for those watching the evolution of the US industry composition.

Monday, April 29, 2013

How to think like Eric Falkenstein

I've become less enamored with trying to change opinions, because ideas need a zeitgeist, and if that's not fertile nothing you say will matter. . . . I don't see a lot of value to being an advocate, though I know someone needs to sit down and write thoughtful things to counteract all the instinctive first-order solutions people think are great ideas (poor? give them money!). The problem is that it's hard not to become a partisan hack if you write too much, to pick on the other side's worst arguments, which, regardless of what side you are on, will be indefensible and so prove nothing.

The original post is here. This, along with the fact that the number of political opinions I have has been on a steady decline, is why I don't post about politics anymore.

Friday, April 26, 2013

Inside the GDP sausage factory


It's useful to be aware of the difficulty of measuring the US economy. Those interested in today's advance GDP report might do well to look through this, a handbook about the concepts and methods behind US income and product accounting. GDP estimates are constructed from a combination of many datasets, most of which are produced by the Census Bureau but some of which come from BLS, BEA, the Department of Agriculture, Treasury, IRS, OMB, and state governments. Putting everything together is quite a task. "The source data available to BEA are not always ideal for the preparation of the NIPAs" (3-2).

The most reliable GDP estimates are based on the Economic Census, which occurs every five years. Between censuses, statisticians must rely on surveys that have sampling properties chosen based on, well, the most recent Economic Census (and the Business Register, which forms the backbone of many business datasets). Advance estimates are basically built entirely on survey data.

Today we see the advance estimate for 2013Q1 GDP. Here's what the handbook says:

For most of the product-side components, the [advance] estimate is based on source data for either 2 or 3 months of the quarter. In most cases, however, the source data for the second and third months of the quarter are subject to revision by the issuing agencies. Where source data are not available, the estimate is based primarily on BEA projections. (3-7)

The components for which only 2 months of data are typically available include several categories of construction, inventories, exports, and imports. Missing data have to be filled in somehow, and the solution will probably be something based on trends--so it may sometimes be difficult for advance estimates to catch turning points. Also, a rough rule of thumb might be that data frequency and data quality are negatively related (not to mention that survey quality may decline as time since the last Economic Census increases). In short, advance estimates require a lot of guesswork.
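
A toy illustration of why trend-based fill-ins can miss turning points (the numbers and method here are invented for illustration, not BEA's actual procedure):

```python
# Fill in a missing third month by extrapolating the trend of the first two.
m1, m2 = 100.0, 102.0          # observed months
m3_true = 98.0                 # actual third month: a downturn begins
m3_proj = m2 + (m2 - m1)       # linear extrapolation: 104.0

advance_quarter = m1 + m2 + m3_proj    # 306.0
true_quarter = m1 + m2 + m3_true       # 300.0
print(advance_quarter - true_quarter)  # 6.0: the advance estimate overshoots
```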

And this is to say nothing of the microdata. Survey microdata can be pretty nasty.

The people at BEA have a pretty tough job. It is therefore not surprising that GDP sometimes receives pretty big revisions. Advance data should probably be taken with a grain of salt.