Wednesday, July 29, 2015

And now for something completely different

My brother and I installed some pavers for my parents a couple weeks ago. In the past I've had a lot of floor installation jobs, but I've never done pavers. It's not too hard. Most of the work is prep.

First, focus on getting level ground at the height you need. In our case, this meant that the pavers would end up flush with the adjacent concrete. Account for the sand layer, which in our case would be 3/4 inch. Think about water flow if relevant--we wanted a slight tilt away from the house.
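To put a rough number on "slight tilt" (these figures are illustrative, not our actual measurements): a common rule of thumb is about 1/8 to 1/4 inch of drop per foot of run, so a hypothetical 12-foot patio would fall roughly 1.5 to 3 inches from the house to the far edge.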

Rent a dirt compactor. Use your old skateboard to transport it from truck to work site (Figure 1).

Figure 1

Go around the area several times (Figure 2).

Figure 2

You will want your sand layer to be flat and at a consistent grade. A good way to do this is to get some pipes with a diameter that matches your sand depth, lay them on the compacted ground, and drag a straight 2x4 across them to screed the sand (Figures 3-4).

Figure 3

This can be tricky if you have obstacles in the patio, like the tree above.

Figure 4

As you lay the pavers, you can fill in the grooves left by the pipes. Lay the pavers straight down; if you have to drag them into position, you will get sand between them and a bad fit. Getting flush with adjacent concrete may require some improvisation; in our case we had to bring the sand up a bit, which was fine since we wanted water flowing away from the house (actually, we laid the pavers slightly higher than flush to allow for compacting).

Figure 5

The carts pictured in Figure 5 are by Racatac. I used them a lot when I installed carpet for a living; they are also good for tile and, obviously, pavers. These will make a huge difference for your back and knees. If you ever do a flooring job, buy or borrow some. For jobs like tile and pavers, you need a board like the one pictured so the cart wheels don't screw up individual bricks.

Figure 6

Figure 7

In the early part of the job, we kept the pavers square against the house, which gave us a (somewhat) right angle to work from. You can see in Figures 6 and 7 that we had to build a bridge to the sidewalk and that we lost the house as a reference on the right side because of a flower bed. Use a string to keep your lines straight. You may think you can lay bricks straight by eye, but you can't.

Our biggest obstacle was the tree in the middle of the patio. We had to wrap the bricks around it and have them meet behind it, which always presents a challenge. Again, we used a string to keep lines straight (Figure 8).

Figure 8

Figure 9 shows things coming together.

Figure 9

My mom wanted rounded corners around the tree and flower beds. This part takes a lot of time. You can do it with a tile saw, but it's cheaper and just as easy to use a grinder with a masonry blade (Figure 10). If the blade isn't large enough to cut all the way through, you can score the bricks and then break them (carefully).

Figure 10

Figure 11 shows the result.

Figure 11

Around flower beds, insert a plastic border (paver edging) to hold the bricks in place. This is harder than it looks (Figure 12).

Figure 12

The last step is to sweep fine sand over the bricks. It falls into the cracks and acts like grout, holding the pavers where you laid them and preventing wobbling. It will take many, many coats to fill all the cracks: spread, spray with water, allow to dry, repeat. Figures 13-15 show the process.

Figure 13

Figure 14

Figure 15

The grouting isn't quite finished, but it's just a matter of a few more coats.

Pavers look nice, but a key advantage over concrete is that they can be spot repaired. This particular patio has a tree in the middle that must be watered; as a result, the concrete that was there before was constantly settling and buckling. Repairing concrete is expensive; replacing a few pavers is cheap.

UPDATE: My dad finished the grouting and sealing; here's the finished product:

Figure 16

Thursday, July 2, 2015

Theory and empirics in cancer research

A couple years ago I read The Emperor of All Maladies, a fantastic book on the history of cancer research and treatment. I think a lot of economists would like this book because, among other things, it focuses on the differing roles of empirical and theoretical progress in a discipline that is constantly asked to serve the real world. In particular, both fields seem to display a constant tension between quick, "credible" fixes to problems and a deeper theoretical understanding of driving forces.

Reader beware--medicine and biology are way outside my wheelhouse, and it's been some time since I've read this book.

If you've known someone with cancer, there's a good chance that the treatment consisted of some combination of chemotherapy, radiation, and surgery. In a sense these are very brute-force ways to treat the problem, and as you may have noticed, the collateral damage can be massive. For the most part, the brute-force approach to treating cancer comes to us from an empirical (actually experimental) tradition.

For a long time, surgery was the preferred method of treating cancer. The tricky thing is that simply cutting out a visible tumor doesn't always eliminate cancer from the body. This fact, discovered empirically, led to improvements in surgical technique but also to some extreme practices among some doctors. William Halsted performed "radical mastectomies" which left breast cancer patients with severed collarbones and gaping holes in their bodies. "Halsted and his disciples would rather evacuate the contents of the body than be faced with cancer recurrences" (65). This was part of a broader milieu in the world of surgery. "By 1898, it had transformed into a profession booming with self-confidence" (66).

Over the short term, these extreme surgical methods seemed to work reasonably well. But they reflected a woefully incomplete understanding of cancer. They could often cure the problem in women with small, localized cancers, but at the cost of destroying the body without cause. On the other hand, cancer that had metastasized throughout the body was not eliminated by the surgery. The approach did not save nearly as many lives as its proponents hoped, and it left a lot of people tragically disfigured.

Radiation was a more precise approach, allowing specific targeting (see page 75). It could often work well for localized tumors, saving many lives. Sometimes, though, it could actually cause cancer. And in any case, the collateral damage could still be huge, leaving patients "scarred, blinded, and scalded." It destroys cells indiscriminately.

Chemotherapy has roots in experiments with cloth dyes in the 1800s (85) and mustard gas in WWI (87). In the 1940s, treatments with combinations of chemicals were employed in increasingly well-designed experiments. This is chemotherapy, and while it is more advanced than surgery, it is similar in that it employs brute-force methods with huge collateral damage. Its advancement was driven by experiment rather than by a growing understanding of cancer: treating each specific cancer was just a matter of finding the right combination of toxins. The approach could lengthen lives, but sometimes by only a few months (208) (and sometimes for much longer!). Much of cancer research funding went toward these experiments rather than toward deeper research:

They wanted a Manhattan Project for cancer. Increasingly, they felt that it was no longer necessary to wait for fundamental questions about cancer to be solved before launching an all-out attack on the problem (121). 

This approach had some success in treating cancer, but at high cost (page 330 reviews cancer treatment's results through the mid-1980s). So there was some backlash:

As the armada of cytotoxic therapy readied itself for even more aggressive battles against cancer, a few dissenting voices began to be heard along its peripheries. These voices were connected by two common themes. 
First, the dissidents argued that indiscriminate chemotherapy, the unloading of barrel after barrel of poisonous drugs, could not be the only strategy by which to attack cancer. Contrary to prevailing dogma, cancer cells possessed unique and specific vulnerabilities that rendered them particularly sensitive to certain chemicals that had little impact on normal cells. 
Second, such chemicals could only be discovered by uncovering the deep biology of every cancer cell. Cancer-specific therapies existed, but they could only be known from the bottom up, i.e., from solving the basic biological riddles of each form of cancer, rather than from the top down, by maximizing cytotoxic chemotherapy or by discovering cellular poisons empirically.

This biology-based approach gained some traction, and there were some major breakthroughs in the 1980s that identified cancer-causing mechanisms at the molecular level. There has been a lot of progress since then. Understanding cancer better has made us better at early detection, which has significantly reduced mortality. More chemotherapy has played a role too. One cancer researcher looks back at a pioneer of experimental cancer treatment, Sidney Farber (from the 1940s), and writes,

Farber's generation had tried to target cancer cells empirically, but had failed because the mechanistic understanding of cancer was so poor. Farber had had the right idea, but at the wrong time. (433)

The author concludes that "an integrated approach" is needed (457). The case of breast cancer is particularly illustrative of this point (402).

So basically we spent a century treating cancer with methods discovered through experiment, with results ranging from tragic to pretty good (the book reviews some studies). In a sense the approach served us well, providing ways to treat a terrible disease without having to invest a lot of time and money in understanding it. Then we focused more on understanding the disease, which has produced good progress of its own. At any given point in time, focusing more on basic research may have denied effective treatment to existing patients, but it may have sped the process of finding better approaches.

I don't really have any big conclusions other than to say that I think the conflict between theory and empirics is complicated and may be unavoidable, particularly in the case of economics. The "credibility revolution" is a huge step forward for the field, with its ability to provide quick answers to policy questions. But what does it tell us about a $15 minimum wage? Nothing, really. Theory, meanwhile, is a messier business, and it can result in a lot of wasted effort. So neither the theorists nor the empiricists are in a position to feel overly important. We need both, and sometimes one will make progress faster than the other.

This book is a really good read with a lot of other insights relevant to economics (page 211 rings a bell, for example); recommended.

Wednesday, June 10, 2015

Dire predictions about firm failure

John Chambers made a prediction that CBS News journalists find really noteworthy:

In one of his last appearances as Cisco (CSCO) CEO, John Chambers offered a stark view of the future for many businesses. He estimated that 40 percent of them wouldn't exist in 10 years because of the rapidly changing technological landscape. 

The article proceeds to pile on, with quotes from eminent scholars:

"He is right in ... characterizing the nature of transformative change that's talking place in the economy," said Rahul Kapoor, assistant professor of management at the Wharton School of Business at the University of Pennsylvania.

Of course, the "transformative change" that we're talking about is actually going on all the time; please read your Schumpeter.

The annual firm failure rate in the US ranges from 8 to 10 percent. We can't quite get a 10-year exit rate without microdata, but we can get a rough, ballpark estimate from the BDS. Suppose I do the following: count how many firms exist in a given year, count how many firms aged 11+ exist 11 years later, then take one minus the ratio of the latter to the former. This won't capture the actual failure rate, due to things like M&A, but it should give us a magnitude to work from. Figure 1 plots this 10-year exit rate (click for larger image). Interpret it as follows: the line at year t gives the percent of year-t firms that won't exist in year t+11.

Figure 1
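For concreteness, here is a minimal Python sketch of that calculation. It assumes the BDS firm-age counts sit in a CSV with year, age-group, and firm-count columns; the file name and column names are placeholders, not the actual BDS schema.

    import pandas as pd

    # Hypothetical BDS-style table: one row per (year, firm age group), with a
    # count of firms in the "firms" column. Names are placeholders.
    bds = pd.read_csv("bds_firm_age.csv")

    firms_total = bds.groupby("year")["firms"].sum()
    firms_old = bds.loc[bds["age_group"] == "11+"].groupby("year")["firms"].sum()

    # Rough 10-year exit rate: share of year-t firms with no age-11+ counterpart
    # eleven years later (M&A and other non-failure exits are lumped in).
    exit_rate = {t: 1 - firms_old[t + 11] / firms_total[t]
                 for t in firms_total.index if t + 11 in firms_old.index}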

This rate ranges from 50 to 60 percent, and it has been falling. Chambers may get the prediction right, but if so, it will be because the next ten years will be less "transformative" than the last 30. If the 40 percent figure is cause for alarm, that's only because it would mean the US economy has become even less dynamic (which seems plausible given current trends [pdf]); and even then it's unclear whether we should worry. These numbers are nice because they remind us that young firms make a big contribution to labor markets. (Caveat: as I mentioned above, some of this is M&A, so these data provide an upper bound).

I want to be fair to Chambers: it's possible that, contrary to CBSN's interpretation of his comment, he was actually referring to large, established firms that we generally don't think of as being at high risk of failure. So it's useful to think of these numbers in employment-weighted terms. That is, what will be the employment of firms in existence in year t when we get to year t+11, and how does that compare to year t employment of those firms? Figure 2 plots 10-year exit rates on an employment-weighted basis (click for larger image).

Figure 2
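Continuing the sketch above (and assuming the same hypothetical table also has an "emp" employment column), the employment-weighted version is a small tweak:

    # Employment-weighted version: compare total year-t employment with the
    # year t+11 employment of firms old enough to have existed in year t.
    emp_total = bds.groupby("year")["emp"].sum()
    emp_old = bds.loc[bds["age_group"] == "11+"].groupby("year")["emp"].sum()
    weighted_exit = {t: 1 - emp_old[t + 11] / emp_total[t]
                     for t in emp_total.index if t + 11 in emp_old.index}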

You can see that this rate ranges from 10 to 20 percent, but really it was 10 percent until the Great Recession. 

A similar way to interpret Chambers is to focus only on established firms, and ignore startups. Figure 3 plots 10-year exit rates when we only look at firms age 5 and above (unweighted; click for larger image).

Figure 3

These numbers range from less than 5 to about 15 percent, with a big rise associated with the Great Recession. So in this sense, Chambers' prediction is noteworthy. If employment among today's older firms falls by 40 percent over the next decade, that would be a pretty big deal and would require a lot of reallocation. In some sense this could be a plausible prediction: you can see that the Great Recession really moved this number, and we know that said recession was not cleansing in a productivity sense, so our current stock of firms could be a bit weaker than we're used to. But I am going to take a risky stand and predict that 40 percent is an overestimate for this quantity. What moves these numbers that much is a huge recession, not ongoing technological progress, and you can see what even a recession the size of our last one does.

In any case, the way Chambers stated the prediction, and the way the journalists reported it, betrays a serious lack of familiarity with the pace of pedestrian reallocation that occurs in the US economy. Economic transformation is not a big, discrete event; it is a continual process. Journalists reporting on stories like this would do their viewers a big service by providing some quantitative context.

Sunday, May 24, 2015

Beating dead horses

The econ twittersphere has erupted in response to a provocative (but very blog-like) essay by the great Paul Romer, published in the AER Papers & Proceedings. Romer is annoyed that certain old freshwater econ guys use math in an annoying way. Romer follows up here; Tony Yates has some thoughts here. Noah Smith, always up for a good bashing of said old guys, opines here, making the same points he usually makes (to wit: those silly freshwater guys just build models in which government can't be good, physicists and engineers have models that perform better empirically, here are some examples of silly things predicted by some DSGE models, etc.).

In recent years, guys like Noah have made very clear that they don't like bare bones RBC and that they don't like certain old guys in macroeconomics. But during these years, while the blogosphere has obsessed over this stuff, macroeconomists have been doing a ton of interesting work for which the blog debate is an uninteresting sideshow.

I've said this before and I will say it again: whatever one might think of the contributions of these old guys to macroeconomics, the field has moved light-years beyond that stuff. Nobody is using bare-bones RBC. The "freshwater vs. saltwater" distinction is a redundant taxonomy--as best I can tell, it's really about Calvo pricing vs. flexible-price models, and the sticky-price assumption is just one of hundreds of ways that people add frictions to the RBC model. If you use the water-based terms instead of just describing specific frictions, you're just facilitating mood affiliation.

Few, if any, of the people writing models with flexible prices (but other frictions) would say that nominal frictions don't matter. It's just that nominal stickiness is one among many ways in which the real world deviates from bare-bones RBC, and every model must assume away something, and sometimes nominal stickiness is that something for good reason (meanwhile, a lot of good Calvo pricing papers ignore important financial sector frictions, not to mention heterogeneity and tons of other stuff, and that's ok). Sure, you can always find an absurd element of any model, as Noah does with relish in his post. But we're stuck with a world in which no model can explain everything, and in any case a paper that's good at some things and bad at others is an opportunity for another paper that's good at a few more things and bad at slightly fewer things. That's the nature of the discipline. It will always be easy to make the discipline look silly to outsiders who haven't confronted the magnitude of the problems we face.

Let me also say this: if there is anyone out there who criticizes the absurd oversimplification that is the representative agent model* but also criticizes mathiness, here's a newsflash: deviations from rep agent require hard math and/or nasty computation. The Mian and Sufi critique requires models in which agents differ at least along a wealth distribution. The Geanakoplos stuff requires hard math. So be careful how you use the term "mathiness" (I think Romer is using it in a reasonably precise way, and a lot of people are misinterpreting him and using it too broadly). More realistic models are going to require harder math, though I agree completely with both Romer and Roger Farmer that adding more math isn't always productive.

So while the blogosphere keeps restating 1970s fights, practicing macroeconomists are doing a lot of really interesting research that makes the freshwater/saltwater taxonomy irrelevant or at least useless. Bashing caricatures of the economics profession is a great way to get followers and sell books, but it doesn't advance the discipline.

UPDATE: This post is more snarky than I intended or am comfortable with. I think Noah and I are actually closer on this than it would seem from this text and the comments below. My basic point is that I hope people do not use the blogosphere as a sufficient statistic for what is going on in modern macroeconomics.

*I actually think rep agent is remarkably useful, particularly compared to how much it costs.

Saturday, May 9, 2015

Mobilizing and upgrading idle, depreciated capital

Here's a Bloomberg article:

Real estate buyers seeking money to renovate and flip U.S. houses are getting help from some of the world’s biggest investment firms. 
Colony Capital Inc., Blackstone Group LP and Cerberus Capital Management are among the companies that have started making bridge loans to investors who buy homes to sell them quickly for a profit. 

The title of the article (likely not chosen by the author) is heavy on mood affiliation: "House flippers are back together with Wall St. What could possibly go wrong?"

Lately I've watched a few episodes of Flip or Flop, an HGTV show (on Netflix) that follows a couple who flip houses for a living. The show takes a fair amount of artistic license (producers: you can't portray your stars as living on the financial edge if they drive a custom Escalade), but it is a nice illustration of what flipping can do for the economy.

The houses are typically in pretty bad shape. Many of them were foreclosures. They have been sitting empty for some time. In some cases, the previous residents stole things or poured concrete down drains. Generally they are unlivable (well, by modern American middle class standards). The flippers buy the houses, do very nice renovations on them, then put them on the market within a month or two.

This is really good! Idle capital is a waste. Houses with concrete in their drains don't do us any good. Depreciation is bad. These people are making large additions to the US capital stock, so it's efficient to allocate resources to them (with all the usual caveats about overborrowing externalities, potential policy distortions, etc.). The economy needs this stuff; hence:

Home flippers are benefiting from rising prices, limited new construction and a shortage of inventory on the market.

And it makes sense for big, risk-neutral firms to play this game (see my brief conversation with Lucas Goodman about this).

There's also this:

The new lenders are focused on more experienced investors, many of whom have established companies, rather than the amateurs that proliferated during the housing boom a decade ago. Today’s flippers are more sophisticated after the crash weeded out most of the weaker investors, Lewis said.

Friday, May 1, 2015

Why manufacturing?

From Dietz Vollrath:

One of my continuing questions about research in economic growth is why it insists on remaining so focused on manufacturing to the exclusion of the other 70-95% of economic activity in most economies. 

He mentions work by Chad Syverson and others. It's true--much of the productivity literature focuses on manufacturing.

Why do we persist in focusing on this particular subset of industries, sectors, and firms? I think one of the main reasons is that our data collection is skewed towards manufacturing, and we end up with a “lamppost” problem.

That is almost certainly the main reason. I guarantee that the people using the microdata would love to be able to study productivity carefully outside of manufacturing. Vollrath describes how industry code schemes are heavily tilted toward detail in manufacturing. Read his post; it's very instructive. (And hey--he's not the only one who blogs about industry codes!) I think the switch from SIC to NAICS was a huge improvement on this front, though the problem hasn't disappeared.

It's not just about industry codes. The big microdata sources are typically limited to employment information for businesses outside of manufacturing--that is, there are no capital measures. And the lack of coverage has persistent consequences: recent improvements to the data often can't be extended back to earlier years, which makes it hard to study the time series.

The other issue, though, is that even our primary productivity concept--TFP--is really designed for a manufacturing world. The Census Bureau actually has survey microdata for firms in retail and services. But how do we measure capital in those industries? What other inputs are important? This is a much bigger problem than simple lack of coverage. (Obviously, there is a literature).
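To see where the measurement problem bites, here is a minimal Solow-residual sketch (my own illustration, not any agency's methodology); the variable names and the capital share alpha are assumptions:

    import numpy as np

    # Log TFP as a Solow residual for one establishment-year: output y,
    # capital k, labor l, and a capital share alpha. Without a defensible
    # capital measure (or a sensible alpha) for a retail or services
    # establishment, this residual is not well defined.
    def log_tfp(y, k, l, alpha=0.3):
        return np.log(y) - alpha * np.log(k) - (1 - alpha) * np.log(l)

For a factory, y, k, and l map reasonably well to shipments, machines, and hours; for a law firm or a retailer, it is much less obvious what k should even be.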

The further you get from producing widgets with machines, the harder it is to map the TFP concept to the real world. And that's just at the firm (or establishment) level! Facing this challenge in the microdata colors one's views of the TFP concept at the macro level. There is no clean mapping from micro to macro; go down this rabbit hole if you won't take my word for it. Aggregate TFP isn't actually a thing, even if it's still a useful fiction.

Wednesday, April 29, 2015

BED: 7.2 million jobs created, 6.6 million destroyed in Q3 2014

From the BLS:

From June 2014 to September 2014, gross job gains from opening and expanding private sector establishments were 7.2 million, a decrease of 259,000 jobs from the previous quarter, the U.S. Bureau of Labor Statistics reported today. Over this period, gross job losses from closing and contracting private sector establishments were 6.6 million, an increase of 115,000 jobs from the previous quarter. 

I like this data series, with some caveats.* If you're not familiar with it, note that gross flows are large relative to net flows. Roughly speaking, think of the Great Recession as involving about 8.5 million net job losses. Entering and expanding business establishments create at least half that many jobs even in terrible quarters, but a recession is characterized by even larger numbers of jobs destroyed by shrinking or closing establishments. This release is typical: roughly 7.2 million gross gains against 6.6 million gross losses, netting out to a gain of only about 600,000 jobs.

I like to slice the data by extensive margin (opening or closing business establishments) and intensive margin (expanding or contracting existing establishments). Figure 1 reports the flows of employment associated with opening and closing establishments, and Figure 2 reports actual numbers of establishments that opened or closed (click for larger images).

Figure 1

Figure 2

The last time I blogged this series was the 1q2014 release, and that release didn't look great. But, confirming the usual caution against reading too much into one quarter, it did not turn out to mark a new trend. Reallocation on the extensive margin is moving more or less sideways, with year-over-year job gains from openings slightly down and losses from closings barely changed.

Next, the intensive margin. Figure 3 reports employment flows from expanding and contracting establishments, and Figure 4 reports establishment counts for these categories (click for larger images).

Figure 3

Figure 4

On the intensive margin, reallocation seems to be sticking to its gentle upward trend since the Great Recession. Both jobs created and jobs destroyed on the intensive margin are slightly up year over year.

So there isn't a lot going on along the establishment entry margin, with things pretty much moving sideways (which means fairly constant positive net job creation from entry). Reallocation associated with growth or contraction of existing establishments is steadily rising and may soon approach pre-Great Recession levels.

Now some usual thoughts: gross flows give us an idea of where jobs are being created and destroyed, which fleshes out the net job numbers that are more popular (and timely). More broadly, these data help dissuade us from always thinking in representative agent terms, which is what the net numbers incline people to do. It's tempting to think that net numbers tell us about the experience of most businesses, but in reality there is a lot of heterogeneity among firms, and reallocation proceeds at a high pace.

Some previous BED posts are here.

*The BED is a quarterly data series produced by the BLS from state UI data, released with a lag of about 8 months. Like the BDS (the dataset I usually use here), the BED basically covers the universe of private nonfarm employers; unlike the BDS, it is available at higher frequency and released more quickly. The BED has other drawbacks relative to the BDS, such as a more limited ability to track firms.

The BLS effectively expanded the sample definition in the first quarter of 2013. The 2013q1 observation was the most obviously affected, since every establishment added to the sample showed up as an establishment opening. For the openings data, I have replaced the 2013q1 observation with the average of 2012q4 and 2013q2. I haven't dug into the data enough to know whether users can manually correct for this over the longer run.
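The patch itself is trivial; here is a sketch, assuming the openings series is loaded into a pandas Series indexed by quarter labels (the file and column names are made up):

    import pandas as pd

    # Replace the 2013q1 break with the average of the neighboring quarters.
    openings = pd.read_csv("bed_openings.csv", index_col="quarter")["gross_gains"]
    openings.loc["2013q1"] = (openings.loc["2012q4"] + openings.loc["2013q2"]) / 2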

It is also important to note that these numbers are seasonally adjusted, and any guess at net numbers based on the difference between two seasonally adjusted series is very, very rough. Non-SA numbers are available on the BLS website.

These numbers track business establishments, which are different from firms. Costco is a firm; your local Costco store is an establishment. Most firms consist of only one establishment. The BED is not ideal for tracking firms, as it has limited ability to correctly link establishments to the firm level.