Fri, 25 August 2017
Samuel Hammond is returning to the podcast today to discuss the relationship between capitalism and social justice.
We begin with the controversial 2017 Pepsi ad featuring Kendall Jenner sharing a Pepsi with generic protesters and police. While the ad was insensitive and more than a little absurd, Sam points out that Pepsi has a history of promoting social justice and racial harmony through its marketing.
Back in 1940, Pepsi was a small player compared to Coca-Cola, which sold 25 sodas for every one that Pepsi sold. To compete, Pepsi's CEO Walter S. Mack Jr. decided to do something the other soda companies weren't doing: market specifically to African Americans:
"In 1947, [Mack Jr.] hired Edward F. Boyd, an African-American adman who later became known as one of the fathers of niche-marketing. Boyd crafted ads for Pepsi that celebrated black cultural and professional achievements, and above all portrayed African-Americans as normal, middle-class consumers. It was this marketing push that ultimately drove Pepsi’s rise to the number two soda company in America."
In pre-civil-rights America, this was a major achievement, and it served a deeper social purpose by extending social recognition to black Americans.
We also discuss some other interesting niche marketing campaigns, like Subaru's marketing strategy targeting lesbians in the 1990s. Finally, we tie it all back to capitalism as a force promoting diversity and inclusion, with references to Becker's work on taste-based discrimination.
Fri, 18 August 2017
This episode’s guest is Vincent Geloso, here to talk about his work on Cuban healthcare statistics. He recently released a working paper with coauthor Gilbert Berdine titled "The Paradox of Good Health and Poverty: Assessing Cuban Health Outcomes under Castro." The abstract reads as follows:
In spite of being poor and lacking in economic opportunities, the population of Cuba enjoyed significant improvements in health outcomes under the Castro regime. Many have praised the ability of the regime to overcome the barriers of poverty and economic stagnation in order to improve health outcomes. Many have also argued that efficient features of Cuba’s health policy should be imported regardless of political considerations. In this paper, we argue that these improvements are probably overestimated, but that they are real nonetheless. We also argue that some of these improvements were an integral part of health policy and could only have been realized by the use of extremely coercive institutions. While efficient at fighting certain types of diseases, coercive institutions are generally unable to generate economic growth. On the other hand, the poverty such coercive institutions engender may have actually helped improve health outcomes, providing us with a false impression of the efficacy of the health care system in Cuba.
We have a wide-ranging discussion about Cuban health statistics, what they mean and don't mean, how good health could be achieved by forcing people into healthy behaviours, and how well other Latin American countries have done in comparison to communist Cuba.
Photo credit: Eric Marshall
Fri, 28 July 2017
This episode's guest is Noel Johnson, who recently released a working paper titled "The Effects of Land Redistribution: Evidence from the French Revolution," coauthored with Theresa Finley and Raphael Franck. The paper examines the consequences of the land auctions held by the Revolutionary government in France. The abstract reads as follows:
This study exploits the confiscation and auctioning off of Church property that occurred during the French Revolution to assess the role played by transaction costs in delaying the reallocation of property rights in the aftermath of fundamental institutional reform. French districts with a greater proportion of land redistributed during the Revolution experienced higher levels of agricultural productivity in 1841 and 1852 as well as more investment in irrigation and more efficient land use. We trace these increases in productivity to an increase in land inequality associated with the Revolutionary auction process. We also show how the benefits associated with the head-start given to districts with more Church land initially, and thus greater land redistribution by auction during the Revolution, dissipated over the course of the nineteenth century as other districts gradually overcame the transaction costs associated with reallocating the property rights associated with the feudal system.
What's so interesting about this particular instance of land redistribution is the fact that it was all sold to the highest bidder rather than being given to the poor. This breaks with the pattern of most attempts at land reform throughout history. People have been trying to take land away from the rich and give it to the poor since at least Tiberius Gracchus in the second century BCE. But the Revolutionary government needed money and they needed it fast. So they concocted a plan to seize and auction off all French lands owned by the Catholic Church, which comprised about 6.5 percent of the country.
Land auctions take time though, and the government desperately needed funds in the short term, so they issued a monetary instrument known as the assignat that could be used in these land auctions. The land was eventually auctioned off and then traded in secondary markets, where much of it was consolidated into large estates that could employ capital-intensive agricultural practices on a large scale.
The evidence suggests that these land auctions added to the productivity of the regions where they occurred. Noel argues that this occurred because the reduction in transaction costs allowed for a more efficient allocation of property rights. One could argue, however, that the Church might have simply owned more productive land to begin with, and the paper uses a series of identification strategies to show that this is not the main driver of their results.
Rachel Laudan discusses the history of potatoes and other foods on EconTalk.
Photo credit: Early French banknote issued during the French Revolution (assignat) for 400 livres (1792), from the National Numismatic Collection at the Smithsonian Institution.
Thu, 20 July 2017
My guest for this episode is Ekaterina Jardim of the University of Washington. Ekaterina is one of the authors of the new minimum wage study that has been making headlines recently, "Minimum Wage Increases, Wages, and Low-Wage Employment: Evidence from Seattle." One reason this study is so interesting is that it was funded by the City of Seattle; governments aren't obligated or expected to fund evaluations like this when they enact major policy changes like these minimum wage hikes.
There was a broad theoretical and empirical consensus in the 1980s that higher minimum wages have disemployment effects on the low skilled, and then Card and Krueger (1994) started a new empirical literature that found no evidence of disemployment effects.
A major problem with Card and Krueger (1994) and with many of the other studies conducted over the past quarter century was their use of proxy measures for low-skilled workers. Instead of looking at workers who actually earned less than the new minimum wage, these studies looked at groups that they knew to contain many minimum-wage workers: generally teenagers or restaurant workers. This new study does not face this limitation because Washington State requires firms to report both the hours worked and the wages of all workers.
One criticism I'm seeing a lot in response to the media coverage of this study is that the authors had to drop multi-location firms from the sample. The reason is that the data show only which firm a person works for, not where they work. So if a firm has locations both inside and outside Seattle, you can't tell whether a given worker belongs in the treatment group or the control group. Still, despite this limitation, the study's sample included over 60 percent of workers in Seattle. Furthermore, the authors surveyed employers and found that the excluded multi-site firms actually reported more reductions in work hours than did the firms that remained in the sample. So if anything, this omission understates rather than overstates the effect of the minimum wage increase.
One big concern people have is just how much this study's results deviate from the established literature. The authors address this by repeating their analysis using employment in the restaurant industry as a proxy for low-skilled labour. They find that using this proxy for low-skilled labour reduces the measured impact of the minimum wage to near zero, consistent with past studies that have looked only at the restaurant industry.
It seems that this apparently robust finding, replicated in study after study over the past few decades, was actually a quirk of studying the restaurant industry, which tends to substitute high-skilled labour for low-skilled labour rather than cutting total labour hours as a short-run response to minimum wage hikes.
Kevin Grier explains the synthetic control method, which the minimum wage study uses to construct a control group.
Fri, 14 July 2017
My guest today is Kevin Erdmann, who blogs about economics and finance at Idiosyncratic Whisk.
Kevin has written a ton about housing, as evidenced by the titles of his blog posts; a recent one is titled Housing: Part 239. This series is part of a larger book project that Kevin is publicly drafting on his blog.
We discuss the housing bubble of the 2000s and the post-2008 housing market. I took my first undergraduate economics class in 2008, just as the financial crisis was beginning, so there's never been a time in my economics career when people weren't talking about this. And yet, I still have so much to learn!
Kevin makes an interesting distinction between "open-access cities" and "closed-access cities." Closed-access cities are places like San Francisco, New York, and San Jose that have restricted their housing supplies. Open-access cities are places like Houston and Phoenix with more elastic housing supplies. We talk about these factors and how they relate to the housing boom and bust, liquidity, and central bank policy.
Kevin points out that supply-side restrictions on housing construction are necessary for demand-side factors to cause housing bubbles. That's because in a market with an elastic housing supply, more demand doesn't result in higher prices; it just causes more homes to be built.
Fri, 7 July 2017
The guest for this episode is Jonathan Morduch, a professor of public policy and economics at NYU and the author of The Financial Diaries: How American Families Cope in a World of Uncertainty, co-authored with Rachel Schneider.
The book looks at the financial situations of ordinary American families. It centers on a detailed survey of 235 households that recorded what they earned and what they spent at an extremely granular level.
From a truck mechanic whose income depends on bad weather wearing out the parts on trucks to a blackjack dealer whose tips literally depend on her customers' winnings at the blackjack table, the surveys reveal a huge amount of variance in the incomes and expenses of these households. This variance is not captured in annualized statistics, but it has profound implications for the way these households spend and save.
We discuss financial literacy in the context of the real problems people face and relate the stories to some results from behavioural and experimental economics.
Fri, 30 June 2017
Kewei Hou, Chen Xue, and Lu Zhang have coauthored a paper titled "Replicating Anomalies," a large-scale replication study that re-tests hundreds of so-called "anomalies" in financial markets. An anomaly is a predictable pattern in stock returns, or stated differently, a deviation from the efficient markets hypothesis. Their abstract reads as follows:
The anomalies literature is infested with widespread p-hacking. We replicate the entire anomalies literature in finance and accounting by compiling a largest-to-date data library that contains 447 anomaly variables. With microcaps alleviated via New York Stock Exchange breakpoints and value-weighted returns, 286 anomalies (64%) including 95 out of 102 liquidity variables (93%) are insignificant at the conventional 5% level. Imposing the cutoff t-value of three raises the number of insignificance to 380 (85%). Even for the 161 significant anomalies, their magnitudes are often much lower than originally reported. Out of the 161, the q-factor model leaves 115 alphas insignificant (150 with t < 3). In all, capital markets are more efficient than previously recognized.
We discuss the process of replicating these anomalies, issues involving the use of equal-weighted vs value-weighted returns, and the problems of p-hacking in finance research.
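To see why the weighting scheme matters so much, here's a toy calculation (all numbers are hypothetical, not from the paper). With equal weighting, a handful of tiny microcap stocks with extreme returns can dominate a portfolio's measured return; value weighting lets the large caps, which make up most of the market's actual value, dominate instead:

```python
# Toy illustration of equal- vs value-weighted portfolio returns.
# Three stocks: two tiny microcaps with extreme returns and one large cap.
market_caps = [10, 20, 5000]           # market caps in $ millions (hypothetical)
returns = [0.30, 0.25, 0.02]           # monthly returns (hypothetical)

# Equal-weighted: every stock counts the same, so the microcaps dominate.
ew = sum(returns) / len(returns)

# Value-weighted: each stock is weighted by market cap, so the large cap dominates.
total_cap = sum(market_caps)
vw = sum(r * c / total_cap for r, c in zip(returns, market_caps))

print(f"equal-weighted: {ew:.3f}")     # prints: equal-weighted: 0.190
print(f"value-weighted: {vw:.3f}")     # prints: value-weighted: 0.021
```

An "anomaly" that shows up only under equal weighting may be driven almost entirely by illiquid microcaps that real investors could never trade at scale, which is one reason so many anomalies vanish under value weighting.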
Fri, 16 June 2017
My guest on this episode is Kevin B. Grier of the University of Oklahoma.
Our topic for today is a paper Kevin wrote on the economic consequences of Hugo Chavez along with coauthor Norman Maynard.
I had Francisco Toro on the show last year to discuss Venezuela's economic history, so you can listen to that episode if you want a refresher on Chavez. For this episode, our main topic is the empirical method Kevin used to quantify Chavez's effect on Venezuela: synthetic control.
Synthetic control is a relatively new empirical technique. It grew out of an older technique called difference in differences (or diff-in-diff). Diff-in-diff is simple and intuitive: Given two statistics with parallel trends, we can compare their changes before and after some intervention affecting only one of them to see the effect of the intervention. So for instance, if you wanted to know the effect of Seattle's minimum wage increase, you could compare the employment trend among low-skilled workers in Seattle to the same trend in Portland. Then assuming Seattle and Portland would have had similar trends if not for the minimum wage hike, we say the difference between the employment growth in the two cities is attributable to the minimum wage hike.
But what if Seattle and Portland don't have similar trends? What if there's no labour market similar enough to Seattle's to provide a valid comparison? That's where synthetic control comes in. Seattle might not be like Portland, but it might be like a weighted average of Portland, San Francisco, and several counties just outside Seattle. We could construct this weighted average and call it a synthetic Seattle; it is designed to mimic the dynamics of Seattle's labour market before the minimum wage hike. Then if the synthetic Seattle deviates from the real Seattle after the wage hike, we can attribute that difference to the hike.
This is what Kevin has done to study the impact of Hugo Chavez on Venezuela. Listen to the episode to find out his results!
Fri, 9 June 2017
My guest for this episode is Mark Koyama of George Mason University. Our topic is a recent paper titled, "States and Economic Growth: Capacity and Constraints," which Mark coauthored with Noel Johnson.
As stated in the paper, "state capacity describes the ability of a state to collect taxes, enforce law and order, and provide public goods." That said, state capacity does not mean big government. A state may have the power to impose rules across its territory, but it doesn't have to use that power in a tyrannical way. Another way of saying that is to say that having a high state capacity is compatible with Adam Smith's desire for "peace, easy taxes, and a tolerable administration of justice."
One metric that researchers use to measure state capacity is tax revenue per capita. But as Mark is careful to point out, a state with low capacity can still sometimes collect relatively high tax revenues through tax farming: the practice, common in many pre-modern states, of auctioning off to local elites the right to extract tax revenues from different regions.
We discuss the rise of modern nation-states in various regions, and why some states developed more state capacity than others going into the twentieth century. In particular, we discuss Europe's transition away from a feudal system ruled in a decentralized way by monarchs who held power based on their personal relationships with local lords. England's Glorious Revolution of 1688 allowed it to develop its state capacity earlier than other European nations, with a centralized tax system controlled by parliament.
By contrast, continental powers like the French Ancien Régime and the Hapsburg Empire were legally and fiscally fragmented, leading them to develop their state capacity much later than England.
We also discuss the development of state capacity in Asia, and why Meiji Japan was able to develop its state capacity much faster than Qing Dynasty China.
Fri, 2 June 2017
My guest for this episode is Nuno Palma, an assistant professor of economics, econometrics, and finance at the University of Groningen.
Our discussion begins with the monetary history of England. Nuno has authored a study that reconstructs England's money supply from 1270 to 1870. We discuss his methods and findings. We also discuss the influx of precious metals into European markets after the discovery of the New World.
Later in the conversation, we discuss the effect of trade on economic growth during the industrial revolution. Nuno places greater importance on international trade than do McCloskey and Mokyr, but less than historians like Wallerstein. Although gross trade flows were not particularly large, trade created new domestic industries, such as the porcelain industry that arose to compete with Chinese imports. Imports also encouraged urbanization among the European population, which created many positive spillover effects over the long term.