According to an issue brief recently released by the Council of Economic Advisers, dynamic pricing algorithms are reducing competition in the housing market. The brief’s authors contend that landlords who use these algorithms tacitly collude to raise prices above competitive levels, leaving renters worse off. 

This argument and others like it are part of a broader push by Democrats to blame rising prices on RealPage and other sellers of dynamic pricing algorithms, rather than on the reckless fiscal and monetary policies Democrats themselves enacted.

Rather than address inflation’s underlying causes, Democrats are vilifying landlords. President Biden stated during his State of the Union address last year, “We’re cracking down on big landlords who break antitrust laws by price-fixing and driving up rents.” 

Mr. Biden’s remarks weren’t just empty political rhetoric. A few weeks after the President’s address, Sen. Ron Wyden (D-OR) and several Democratic co-sponsors introduced the Preventing the Algorithmic Facilitation of Rental Housing Cartels Act. This legislation would create several undesirable consequences, and before banning these algorithms, Congress should consider how costly these consequences might be.

Firms routinely adopt flexible pricing strategies to deal with fluctuations in demand. Movie theaters, for example, charge lower prices during the day than at night to avoid having empty theaters. As a result, they can operate at higher capacity and lower average cost.

Dynamic pricing algorithms reduce the cost of detecting changes in demand, making it easier for firms to adopt flexible pricing strategies. Uber and Lyft use algorithmic pricing to adjust the number of drivers on the road by signaling surges in demand. As a result, consumers don’t need to wait as long for a ride, and drivers spend less time driving without passengers.

Companies like RealPage do something similar for landlords. The pricing information they provide reduces landlords’ costs, increasing the supply of rental housing and putting downward pressure on rents. Of course, these algorithms will recommend higher rents when demand for rental housing rises sufficiently, just as Uber and Lyft charge surge prices during periods of high demand like New Year’s Eve.

While no one likes paying more, higher prices serve three crucial functions:

They prevent shortages by ensuring that the demand for rental units does not exceed the number of units available. 

They ensure the available rental units go to the consumers who value the housing most. 

They create a powerful incentive for developers to build more rental units.

Preventing prices from adjusting quickly to changes in demand will lead to housing shortages, waste, and fewer rental units. This is hardly good for consumers.

The broader point is that markets work well when prices adjust to reflect changes in supply and demand. Going after landlords who use products like RealPage’s will only slow down this critical price-adjustment process, leaving us poorer overall.

Not content with targeting landlords who use dynamic pricing algorithms, the Biden Justice Department filed a civil antitrust lawsuit against RealPage itself last August, alleging that the company facilitated collusion among landlords.

There are several issues with this argument. For one, collusion is difficult to sustain because the colluding parties are all incentivized to lower their prices. If RealPage’s product facilitated collusion, landlords could increase profits by ignoring the company’s rental price recommendations. Essentially, they would be undercutting other landlords using RealPage’s product. If this were happening, landlords would regularly ignore RealPage’s pricing suggestions. But that is not happening. Instead, 90 percent of RealPage’s clients use the company’s recommended pricing.

The lawsuit’s claim is also inconsistent with rental price and vacancy data. First, landlords must keep some rental units off the market to raise prices above competitive levels. If RealPage had made it easier for landlords to collude to raise prices, vacancies would have risen. That is not what we observe. Between the first quarter of 2020 and the second quarter of 2022, the vacancy rate fell from 6.6 to 5.6 percent. More importantly, real (inflation-adjusted) rental prices declined six percent nationally between June 2020 and June 2022. While they began rising again in July of that year, today, they’re only three percent higher than in January 2020 and still well below their pre-pandemic trend. 

If Democrats want to make housing more affordable, they should make it easier to build housing. Zoning restrictions, impact fees, building codes, property taxes, energy efficiency standards, and permitting fees all restrict the housing supply, driving rental prices up. Easing these restrictions would go a long way to making housing more affordable. 

Blaming landlords and corporations for high rental prices may be good politics, but it is bad economics. Not only will restricting dynamic pricing algorithms not make housing more affordable, it will potentially stifle the development of this valuable technology in other sectors of the economy. Policymakers should keep this in mind.

However much you plan and pray,
Alas, alack, tant pis, oy vey,
Now — heretofore — til Judgment Day,
The drunken driver has the right of way.
~Ethan Coen

What principle should guide a wife and husband concerning the number of children they bring into the world? A dictate of Thomas Malthus, namesake of the Malthusian trap of merely arithmetically enlarging resources set against geometrically expanding human populations, who regarded war, famine, and epidemic as the only effective checks on massive imbalance? Or perhaps the couple pays mind to the God of Genesis 1:28, who blessed the first humans and urged them to “Be fruitful and multiply?” Are decisions about human reproduction of such great national and civilizational interest that they should be made only by a society’s elite, economists and demographers with PhDs, and politicians with the power of violence to enforce them? Or should they be left to the consciences of couples, who are far more cognizant of their own circumstances than a far-removed ruling elite in a national capital?

For more than 35 years, beginning in 1979, the People’s Republic of China (an increasingly inapt misnomer) determined that such decisions were simply too societally momentous to be left to individual discretion. Instead, the state needed to intervene and tell people how many children to have, or (more to the point) not have. The “one-child policy,” grounded in a Malthusian fever dream of impending overpopulation, sought to reduce the fertility rate from about six children per woman in 1970 to one child per woman, and to do so overnight. Officials used both persuasion and coercion that extended in some cases to mandatory contraception, sterilization, abortion, and even infanticide to ensure that the dictates of the National Population and Family Planning Commission were followed, backed up by substantial penalties for failure to comply.

In one sense, the “one-child policy” may have worked. For many years, the Chinese Communist Party loudly touted its success, pointing to a fertility rate that did indeed decline to one birth per woman, crediting itself with preventing some 400 million births that would have threatened the nation’s “economic miracle.” But there were many unintended and unforeseen effects. For one, many women, especially in rural areas, were forced to hide their pregnancies and to avoid public healthcare facilities for prenatal and pediatric care. The cultural preference for sons meant that many female fetuses were aborted and female infants abandoned, resulting in today’s excess of at least 34 million males who, in a monogamous society, are left without a mate. Moreover, mothers who had given birth to girls experienced a 43 percent higher divorce rate than women who had a boy.

In addition to these horrors, the “one-child policy” was not nearly so efficacious as its proponents have argued. For example, in 1950, the fertility rate in China’s neighbor Taiwan was about seven births per woman. By the mid-1970s, it had declined to about three. By 2020, it was around one, far below the “replacement rate” at which a population is maintaining its size. In fact, both China and Taiwan now have among the lowest fertility rates in the world. And yet the Taiwanese government never instituted any “one-child policy.” It is reasonable to suppose that many non-governmental forces were at work reducing birth rates in both nations. Such factors include massive urbanization, increased levels of educational attainment, the availability of contraceptives, and rapid economic development, all of which generally track with declining fertility worldwide.

The consequences of this shift in reproductive patterns in China are likely to be severe and enduring. Tens of millions of “surplus males,” known culturally as “bare branches,” may turn out to be less socially content and more prone to criminal activity and political unrest. Less speculative are two additional mutually reinforcing changes: a dramatically higher proportion of older people in the population and a growing shortage of younger adults, who typically constitute the workforce. Preventing 400 million births means there are 400 million fewer laborers, with the result that a declining number of workers supports each pensioner. Today this ratio is about five to one, but it will decline to about two to one by 2040. The problem is compounded by the fact that China largely relies on the production of labor-intensive, low-end commodities, an unsustainable approach absent a large, skilled workforce.

Faced with such projections and the social unrest that may accompany them, how is China responding? Predictably, it is not decentralizing decision making and relying more on the discretion of individual couples and communities. Instead, it is doubling down on centralized expertise. Beginning in 2016, Chinese couples were “allowed” to have two children, and in 2021, this number was increased to three. Such approaches betray a mistaken view that the government controls fertility rates, which it can manipulate by fiat, a misapprehension further exemplified by even newer governmental programs to incentivize childbearing. For example, the government is providing increasing subsidies and tax breaks for families with small children, extending maternity leave to six months, and doubling tax breaks for childcare.

Yet the downward spiral continues. The fertility rate is not rising, many Chinese women report no desire to have more than one child, and only about 30 percent of couples with one child express a desire to have a second. New marriage registrations are in sharp decline, having fallen 25 percent year-on-year. The governmental slogan “later, longer, fewer” — delaying the first birth, waiting at least four years between births, and producing fewer offspring — appears to have become deeply embedded in the Chinese psyche, although it is debatable whether this is a direct result of government policy. Loneliness, and the mental and physical health consequences associated with it, is on the rise: aging parents have fewer adult children and receive fewer visits; many young adults have no siblings, aunts, uncles, or cousins; and people lead more socially isolated lives.

Headlines naturally focus on the macroscopic demographic collapse — the fact that, according to United Nations estimates, China will lose half its population by the year 2100. But the real story is far more nuanced and instructive. In a society that values the collective over the individual and allows a few rulers at the top to direct the lives of a vast and diverse populace, persons tend to be treated as mere statistics. The state does not see individuals or families, but only population trends. As a result, its power-drunk rulers implement befuddled policies that distort personal spheres of life that they cannot even perceive, let alone regulate responsibly.

And instead of learning from their failures, they simply double down on autocratic methods, wielding bulldozers and wrecking balls when they should be entrusting their citizens with needles and thread.

In one of his final acts as President, Joe Biden blocked Nippon Steel’s acquisition of US Steel, citing national security concerns. This controversial decision, executed through executive action on January 3, 2025, has reignited debates over protectionism and its impact on economic stability. The Executive Order requires US Steel and Nippon Steel to submit documents to the Committee on Foreign Investment in the United States (CFIUS) within 30 days to terminate the merger. It also authorizes Attorney General Merrick Garland to enforce the order. The stated reasoning is national security: “I hereby reserve my authority to issue further orders with respect to the Purchasers or US Steel as shall in my judgment be necessary to protect the national security of the United States.”

In the run-up to the 2024 Presidential Election, Biden promised to shield the industry from foreign competition. My analysis suggests the decision was a political ploy to win stronger support in that election among union workers and industry leaders in the Rust Belt. The Keystone State, Pennsylvania, is a swing state that Biden won by razor-thin margins in 2020. Roughly a year ago, the US Steel decision was put on ice pending a review by CFIUS, which acts as a gatekeeper, determining who can and cannot invest in the United States in order to protect “national security.” On September 22, 2022, President Biden became, through Executive Order, the first President to expand the scope of CFIUS, on the grounds that certain foreign investments “present risks to the national security of the United States, and it is for this reason that the United States maintains a robust foreign investment review process focused on identifying and addressing such risks.”

The order directed CFIUS to focus on five emerging risk areas: supply chain security, advanced technology, industry investment trends, cybersecurity, and personal data. In short, Biden armed CFIUS with new mandates and has now used an undisclosed CFIUS report to intervene in domestic and international markets, citing national security as justification for blocking the merger between Nippon Steel and US Steel.

While Biden’s decision may seem tailored to present, unique challenges, it continues a recurring trend in US trade policy. The Reagan administration’s protectionist measures in the 1980s offer a cautionary tale of economic inefficiency and unintended consequences. On July 19, 1983, President Ronald Reagan issued Proclamation 5074 – Temporary Duty Increases and Quantitative Limitations on the Importation Into the United States of Certain Stainless Steel and Alloy Tool Steel. Months prior, the United States International Trade Commission (USITC) had issued a report finding that various steel imports were harming domestic industries: “all the foregoing of stainless steel or certain alloy tool steel; and round wire of high speed tool steel… are being imported into the United States in such increased quantities as to be a substantial cause of serious injury to the domestic industries producing articles like or directly competitive with the imported articles.” In reaction to this report, Reagan opted to raise tariff rates on steel imports: “I am providing import relief through the temporary imposition of increased tariffs and quantitative restrictions on certain stainless steel and alloy tool steel.”

As the 1980s progressed, the US and other nations signed “Voluntary Restraint Agreements.” These agreements sought to protect US domestic industries without becoming a fixed quota system; fixed import quotas were prohibited under international trade law by the General Agreement on Tariffs and Trade (GATT), the predecessor of the World Trade Organization (WTO). Voluntary Restraint Agreements became a lever of power in Reaganomics while still aiding the declining domestic steel industry. Steel imports into the United States were to be kept to 18.5 percent market penetration, although “Japan and the European Economic Community each have about 5 to 6 percent of the American steel market.”

In the late 1990s, the Cato Institute released research on the steel industry and the protectionist measures undertaken by the Reagan Administration (1981-1989). Authors Brink Lindsey, Daniel T. Griswold, and Aaron Lukas questioned steel protectionism’s effectiveness at retaining jobs, declaring, “Employment in the steel sector has declined by more than 60 percent since 1980 largely because of rising productivity, and employment will continue to fall even if trade barriers are imposed.” In addition, the Voluntary Restraint Agreements, among other protectionist measures, cost the US economy roughly $7 billion in the 1980s.

Brookings Institution author Robert Crandall agreed. In his examination of the lack of firm growth under the 1980s import restrictions, he writes, “Despite the trade protection of the late 1970s and 1980s, the integrated steelmakers were forced to launch a major retrenchment in the early 1980s. These companies began closing plants, reducing capacity from 138 million tons in 1980 to 90 million tons in 1987.” 

Jobs evaporated, companies downsized, and the protectionist measures of the 1980s did more harm than good. Why would 2025 be any different?

Clearly, both the current and incoming administrations have failed to learn the repeated lessons of previous decades. The government wants to protect the steel industry, but the industry remains a world powerhouse, with the US ranking fourth in global steel production. On top of this, Nippon Steel promised to invest in the Rust Belt, “including at least $1 billion to Mon Valley Works and approximately $300 million to Gary Works as a part of $2.7 billion in investment that it has already committed.” Nippon Steel even agreed that the merged company’s board would be made up of US citizens, going so far as to reserve key positions, such as CEO, for US citizens. Just prior to the prohibition, on December 23, 2024, US Steel and Nippon Steel described the merger as an act of “friendshoring,” bringing together not only two strong companies but also the governments of Japan and the US.

The Biden Administration’s interventions in the steel industry reflect a pattern of prioritizing short-term political gains over sustainable economic strategies. The market manipulation of the 1980s through Voluntary Restraint Agreements, mirrored in the recent prohibition of Nippon Steel’s acquisition of US Steel, demonstrates a troubling reluctance to learn from past mistakes. Rather than fostering investment, innovation, and competition, these protectionist policies risk stifling growth and ignoring the potential for revitalization in regions like the Rust Belt, where Nippon Steel’s proposed investments could have delivered much-needed economic benefits.

In an increasingly interconnected global economy, clinging to outdated protectionist measures is a losing strategy. To secure its economic future, the United States must adopt policies that encourage openness and collaboration while leveraging its position as a global leader in innovation. A commitment to fostering competition and welcoming investment will strengthen not only the steel industry but the broader economy — paving the way for long-term growth and stability, a far more important goal than fleeting political favoritism.

Philanthropic contributions are a quiet but integral part of American education funding.

In 2016 (the most recent year for which good data are available), philanthropists contributed nearly $5 billion to US public schools, with $800 million stemming from  the Bill and Melinda Gates Foundation and the Walton Family Foundation alone. Private schools are not much different — the National Association for Independent Schools reported in 2023 that their member schools secured $4.87 billion in philanthropic funds to continue their missions. 

This might seem like a drop in the bucket. After all, American schools received nearly $800 billion from all sources last year. But most education spending gets tied up in paying off debt, subsidizing pension funds, or paying salaries. Schools never see the vast majority of revenue their districts collect. Philanthropic donations, on the other hand, carry real value. If the Kennedy Center comes in and says “here’s $20 million, create an arts program for disadvantaged students,” more often than not, it gets done. That said, the evidence on whether educational philanthropy really works is, at best, mixed. 

This state of affairs has generated a lot of criticism. Many scholars claim that educational philanthropy harms “democracy” because it directs schools’ attention away from liberal or pluralist values and more toward market interests. Others attribute educational philanthropy’s woes to the knowledge problem, as many foundations naively assume that they alone can solve education’s intractable problems. Others still argue that philanthropy harms liberal education by injecting wokeness and other shenanigans into America’s schools. 

To a certain extent, all these critiques are valid, but they miss the bigger picture. Of course educational philanthropists are going to pursue programs and policies that benefit them — whether or not their activities are good for those of us who value human freedom depends on a given philanthropist’s values and aptitudes. Neither is this a new problem; schools and communities have been dealing with this for as long as there have been schools and communities. 

Take, for example, San Antonio, Texas back when it was under Spanish and Mexican administration. In 1811, after a failed revolution against the Spanish crown, a group of the town’s well-to-do citizens decided to found a school together. The school’s purpose was to take the riff-raff’s children and turn them into loyal Spanish citizens who would never even think to question the crown’s authority. Spain, so the school would teach, was a benevolent actor who only had their best interests at heart. The Spanish royalists would meet any resistance or misbehavior with harsh discipline. 

One Don Bicente Travieso — a prominent cattle rancher who had business interests with the Spanish government — offered to invest his own wealth, as well as take custody of public funds, to make the school a reality. What happened next is unclear. Some historians claim that Travieso ran off with the funds, purchasing only the worst-quality materials in order to retain his wealth. Another view is that Travieso made an honest effort to supply the school, and encountered unforeseen difficulties in planning the endeavor. Regardless, the school failed, and it may have never even operated. 

This school would not have been liberalism’s friend. It was meant to indoctrinate a revolutionary community’s youth to secure an authoritarian regime’s political and economic interests. In that sense, educational philanthropy can harm a liberal democracy. But that’s not the only reason it failed — and many of the reasons for its failure are still reflected in modern educational philanthropy. 

The royalists failed because they overpromised, assumed they could control all possibilities, and didn’t care what anybody in the surrounding community had to say. Modern educational philanthropists do exactly the same thing, and yet are surprised when they experience similar outcomes. 

Look no further than the Bill & Melinda Gates Foundation, which contributes hundreds of millions of dollars a year to charter schools, accountability schemes, and teacher performance metrics. When parents, schools, and teachers complained that the Gates Foundation was not listening to them, and misunderstood the situation on the ground, the Foundation ignored them, arguing instead that their models and know-how held all the answers. 

The results were predictable — little of what the Foundation has tried has worked. The Gates Foundation took some responsibility, but mostly blamed others. For example, when technology investments failed to generate meaningful results, Bill Gates called students “unmotivated.” When the group’s Common Core project met resistance and performance failures, Bill’s response was to double down. In effect, the Gates Foundation repeatedly attempted to force a square peg into a round hole, much to the chagrin of every other stakeholder.

Fortunately, San Antonio’s history offers another path forward. In 1828, after the Mexican War of Independence, the city wished to promote its new Catholic-democratic bona fides. To that end, six leading men, along with the powerful General Anastasio Bustamante, resolved to found a new school. The school’s students, they reasoned, would be the ones to steward the young, fragile democracy into a new age. In this effort, they had widespread community support and also received funding from the state government. Everybody shared the same fundamental values, and everyone wanted their new, democratic society to prosper.

Unlike the Spanish royalist school, this one had a firm, independent curriculum — reading, writing, arithmetic, and the values and virtues needed to live a good life in a liberal-democratic society. The school’s teacher was mostly left alone, and discipline was no harsher than what might be expected in the United States at the time. The goal (a free society) and the means (a solid education) were clear, but the execution lacked resources. In other words, the philanthropists’ role was to facilitate what the community already wanted.

The school operated from 1828 to 1834, a good run by nineteenth century Mexican standards. Towards the end, after funding from the state government dried up, the school sustained itself almost entirely on donor funds. Many of its alumni became fervent liberals who actively resisted tyranny in Mexico and beyond. If the school’s goal was to cultivate a liberal body of citizens, it was an outrageous success. In this sense, it greatly benefited liberal democracy by creating citizens who actively wanted to live in a free society. 

This model worked because the philanthropists acknowledged and acted within the community’s values. It also worked because the philanthropists did not try to coerce outcomes, nor did they intervene in the school’s day-to-day operations. Even today, a handful of educational philanthropists get this stuff right. Those investing in leadership development, for instance, often garner a significant amount of community support — something that’s helped kids in Columbus, Ohio, across Florida, and even in China. 

Educational philanthropy is a complex but important field, and there’s a great deal of ambiguity as to what’s happening now and what comes next. When misapplied, educational philanthropy can do a lot of damage to human freedom. But those of us worried about the future of liberalism need not despair. There are ways to do it right, and ways to fix the damage — even within the same community.

With President Trump’s return to office after the Biden interregnum, we can be sure of one thing: tariffs are going to be a major part of his policy. He has touted tariffs as revenue generators, as ways to bring back manufacturing, and as negotiating tactics. The trouble is that all of these are in tension with each other, and none are particularly effective at what they purport to do. Indeed, their likely failure will result in harm to the people Trump claims to care for the most — working-class families.

It has become commonplace among tariff supporters to point out that the US government used tariffs as its main source of revenue in the nineteenth century, implying that we would do well to replicate that model and perhaps even abolish the income tax. The latter idea is fanciful, but even on the narrower question of how much money tariffs could raise, the notion that they will be a significant source of revenue is inaccurate. As the Tax Foundation explains in its detailed analysis of the revenue effects of the Trump tariff proposals, a 10 percent universal tariff would raise $2.7 trillion in customs revenue and a 20 percent tariff would generate $4.5 trillion over a 10-year period (2025 to 2035). Income taxes, by contrast, contribute $2.2 trillion annually.
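To put these figures on the same footing, the decade-long tariff totals cited above can be annualized and compared with the yearly income-tax take. This is a rough back-of-the-envelope sketch: the dollar amounts come from the estimates above, while the even-per-year annualization is a simplifying assumption of my own.

```python
# Back-of-the-envelope comparison of the revenue figures cited above.
# All amounts in trillions of dollars, per the Tax Foundation estimates
# referenced in the text.

tariff_10pct_decade = 2.7   # 10% universal tariff, 2025-2035 total
tariff_20pct_decade = 4.5   # 20% universal tariff, 2025-2035 total
income_tax_annual = 2.2     # income tax receipts, per year

years = 10
tariff_10pct_annual = tariff_10pct_decade / years  # simple even spread
tariff_20pct_annual = tariff_20pct_decade / years

print(f"10% tariff: ~${tariff_10pct_annual:.2f}T/yr")
print(f"20% tariff: ~${tariff_20pct_annual:.2f}T/yr")
print(f"Income tax: ~${income_tax_annual:.2f}T/yr "
      f"({income_tax_annual / tariff_20pct_annual:.1f}x the 20% tariff)")
```

Even the aggressive 20 percent tariff would replace only about a fifth of annual income-tax revenue, which is why the replace-the-income-tax idea does not survive the arithmetic.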

This is why the idea that tariffs could replace income taxes is fanciful. The spending cuts required would be wonderful to see, as any free-market economist will tell you, but in a political reality where getting any spending cuts at all is more painful than pulling teeth without anesthetic, it requires purely wishful thinking to imagine them happening.

But even those revenue figures for tariffs miss the mark. Because the costs of tariffs are ultimately borne by American consumers (a point tariff supporters seem unable to grasp), they reduce income and therefore spending, which means fewer imports, which means less tariff revenue. After accounting for this, projected revenues fall to $1.7 trillion and $2.8 trillion for the lower and higher rates, respectively. The higher rate depresses spending more and therefore loses more revenue.
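The size of that behavioral offset is easy to quantify from the figures above. A small illustrative calculation (my own arithmetic on the cited estimates, not part of the Tax Foundation's model):

```python
# Compare the "static" revenue projections with the estimates that
# account for reduced consumer spending and hence reduced imports.
# All figures in trillions of dollars, taken from the text above.

static_rev = {"10% tariff": 2.7, "20% tariff": 4.5}
adjusted_rev = {"10% tariff": 1.7, "20% tariff": 2.8}

for rate, naive in static_rev.items():
    lost = naive - adjusted_rev[rate]      # revenue that never materializes
    share = lost / naive                   # fraction of the static estimate
    print(f"{rate}: ${lost:.1f}T of projected revenue disappears "
          f"({share:.0%} of the static estimate)")
```

The 20 percent tariff forfeits more revenue in absolute terms ($1.7 trillion versus $1.0 trillion), with both estimates shrinking by a bit under 40 percent.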

Nor do tariffs occur in a vacuum. Countries generally respond to higher tariffs with retaliatory tariffs on American exports. These, too, would reduce consumer income and therefore tariff revenues, knocking those figures down by around $190 billion over the decade.

Because tariffs are ultimately paid by consumers, it is worth considering which households bear the brunt of those costs. Research has repeatedly found that because lower-income households typically spend a larger share of their income on tradable goods (such as everyday household items) than higher-income households, those poorer households end up bearing a disproportionately large share of the cost increases. In other words, the tariffs hurt the working class household more. As revenue generators go, tariffs are regressive.

Moreover, as suggested above, revenue is not the only purpose attributed by their supporters to raising tariffs. Another purpose that is frequently cited is that tariffs will cause industries to re-shore their operations to maintain sales, thus bringing good manufacturing jobs back to the United States and thereby increasing household income.

Yet if this happens, tariff revenue will fall. If all goods that can be produced in the United States are in fact produced here, imports will shrink, and the revenue collected on them will fall accordingly. Tariffs bringing jobs back and tariffs providing a consistent revenue stream are mutually incompatible.

Moreover, tariffs are at least as much a threat to manufacturing as they are an opportunity. Much that America imports is not finished goods, but inputs, i.e. raw materials and parts, that domestic manufacturing needs to operate efficiently. With tariffs, fewer of these inputs will be imported, or the price of finished goods will go up. With fewer imports, American manufacturing jobs will suffer. With higher prices, American middle class households will see their standard of living fall. Neither is good for the American middle class.

Of course, tariff proponents will say that these duties will cause American industries to onshore supply chains, which means more jobs for Americans. Up to a point, Lord Copper, as Evelyn Waugh would put it. Some things must be sourced offshore, because either a lack of natural resources or constraints like American environmental law makes domestic sourcing impossible. For those things that could be made here, the price differential will have to be addressed. In most cases, that means automation is the best option for keeping labor costs down, which means that once facilities are constructed, they will employ few workers. And the industrial robots that will staff those factories are mostly made in Japan, Germany, or Switzerland.

The idea that tariffs will bring back jobs and therefore benefit the middle class puts the interests of citizens as producers ahead of their interests as consumers. Yet while all producers are consumers, not all consumers are producers. Tariff supporters often say they want wages high enough to support a worker and his family — in that case, the housewife is a non-producing consumer, one motivated to keep costs low. This is why Margaret Thatcher often pitched her economic reforms directly at the housewife.

This is the problem that Adam Smith recognized as being the main flaw of the mercantilist system when he wrote in Book IV of The Wealth of Nations,

Consumption is the sole end and purpose of all production; and the interest of the producer ought to be attended to, only so far as it may be necessary for promoting that of the consumer. …. But in the mercantile system, the interest of the consumer is almost constantly sacrificed to that of the producer; and it seems to consider production, and not consumption, as the ultimate end and object of all industry and commerce.

In other words, the misconception that economic policy should focus on producers is not a new idea, but one that resurfaces again and again. Just as with Britain’s Corn Laws, the Smoot-Hawley tariffs that worsened the Great Depression, or current policy in countries like Nigeria, it is the working class that suffers most from the imposition of tariffs that are supposed to help it.

Yet what about the other justification — that tariffs are useful tools in negotiations? Once again, a moment’s thought reveals that this isn’t compatible with the other justifications. If tariffs are just a negotiating tool, they won’t bring in significant revenue, nor will they re-shore massive amounts of industry. Even the respectable academic argument for “optimal tariffs,” which suggests that large countries can drive down the price of imports through tariff policy, puts them in the low single-digit range once other factors are accounted for, and optimal tariffs are about lowering import prices, not revenue generation or reshoring.

Those other factors include the fact that the other party in the negotiations is rarely a vassal. It may make countermoves, which might be conciliatory, in which case the threat worked, or retaliatory, in which case both sides suffer. Sadly, the history of trade wars shows that retaliation is by far the most common outcome. We are still stuck with the “chicken tax” on imported light trucks because of a trade war in the 1960s — one in which America was the retaliating party.

The most comprehensive analysis of the effects of American trade restrictions aimed at pressuring foreign countries was written back in 1994, just as the real era of open trade was starting. It found that America actually achieved its objectives in only 17 percent of cases.

This certainly appears to have been the case with the recent rounds of Trump-Biden tariffs.

American export growth slowed in the face of retaliation, demonstrating that demands for reciprocity are usually answered with reciprocal tariffs. This hurts both sides, and, as always, it is the working-class family that suffers the most in terms of reduced income and higher prices.

There are other factors, too. Do we really want to antagonize allies, for instance (although the answer to this appears to be depressingly often yes)? What are the cascading effects on other supply chains? And so on.

All of this is why the major nations after the Second World War collectively agreed to stop using tariffs to advance parochial interests. There is a reason the precursor to the World Trade Organization was called the General Agreement on Tariffs and Trade. Tariffs are self-defeating as a negotiating strategy, despite their attractiveness to politicians in need of an electoral boost.

To sum up, the three arguments for tariffs can be seriously advanced only by someone suffering from advanced cognitive dissonance. Tariffs are poor revenue raisers, they cause net harm to manufacturing, and they backfire as negotiating tools. The three objectives are mutually incompatible, and each is ineffective even taken on its own. A better policy would focus on reducing costs to the consumer.

A familiar excuse for protective tariffs and other trade restrictions goes like this: It would be all well and good for our government to follow a policy of free trade if other governments did the same. But other governments don’t do the same. Other governments use tariffs and subsidies to give producers in their countries unfair advantages over producers in our country. Unless and until other governments embrace complete free trade, our government must “retaliate” with its own protective measures to counter the protective measures imposed by foreign governments.

Every competent undergraduate who has passed a well-taught course in Econ 101 can identify a significant problem that lurks in this excuse for protectionism — namely, protective tariffs and subsidies are a net cost to the people of any country whose government intervenes in these ways. Tariffs and subsidies distort the allocation of resources in ways that reduce the overall wealth of the nation, a fact that is true regardless of the trade policies of foreign governments. “Why,” this undergrad will ask rhetorically, “should we let our government inflict harm on us just because other governments inflict harm on their citizens?”

The undergrad is wise and right to ask this question. But immediately after asking it, this undergraduate will be corrected by a first-year economics graduate student strutting his advanced knowledge of the subject.

“Silly girl,” the grad student tells the undergrad, “you fail to see the real benefit of retaliatory tariffs and subsidies. It’s true that the bulk of the burdens created by tariffs and subsidies fall on the people of any country whose government intervenes in these ways. But it’s also true that those of us in the home country also suffer some harm from those same interventions done by foreign governments. After all, trade is mutually beneficial, and in 2025 most of us are part of one global economy. And this economy is distorted by those foreign tariffs and subsidies. So our government, in its wisdom, can use its own tariffs and subsidies to retaliate against foreign governments to pressure them to end those harmful interventions. When these interventions end, global trade is made freer and resources are more efficiently allocated. While our government’s retaliatory tariffs and subsidies impose net costs on us in the short run, by eventually making global trade freer this retaliation works to our net benefit in the long run!”

The undergraduate knows enough to realize that she cannot disagree in principle. The graduate student’s hypothetical description of the operation of retaliatory protectionism is logical. It is indeed possible that such retaliation will work as described in the real world and redound to the benefit of almost everyone abroad and at home.

But is the graduate student’s case for retaliatory protectionism practical? No.

One practical obstacle to the success of retaliatory protectionism is that governments typically dispense tariff protection and industrial subsidies in response to interest-group pressures. Politically well-organized producer groups are usually the main drivers of such policies. Unless retaliatory tariffs by the home government neutralize the particular interest groups that are the motive force in foreign countries behind the foreign tariffs or subsidies, such retaliation will likely fail to pressure foreign governments into ending their protectionist interventions. Indeed, it’s not unlikely that foreign governments will respond by imposing their own retaliatory tariffs or subsidies against the home-government’s retaliatory measures, with the end result being a trade war.

A second practical obstacle is that special-interest groups in the home country will cunningly take advantage of any inclination to use retaliatory measures, resulting in overuse and abuse of such measures.

Opportunities for such cunning are especially numerous when talk turns to subsidies dispensed by foreign governments. Home-country tariffs and subsidies marketed to the public as well-meaning devices for pressuring foreign governments to end various subsidies will too often simply be protectionist measures meant to do nothing more than line the pockets of powerful producer groups in the home country.

Such subterfuge is too easy, not least because in the real world what is and isn’t a subsidy is often unclear. Not all subsidies are as straightforward as is the doling out of cash to European farmers through the EU’s Common Agricultural Policy. Far too many innocent policies pursued by foreign governments can be deviously portrayed by home-country protectionists as subsidies that justify retaliation.

Consider each of the following actions by a foreign government:

it establishes and operates engineering schools

it extends government-backed loans to students enrolled in engineering schools

it spends more money training people to work in factories

it increases the amount of money it spends to build and maintain infrastructure such as electrical grids and extensive networks of highways

it changes the designation of some public land from protected wilderness to land available for economic development

it dispenses more money to fund basic scientific research

Each of these foreign-government actions likely would be undertaken without much thought of the effect it would have on that country’s international commerce. Yet each of these actions also arguably does improve the ability of producers in the foreign country to churn out more and better outputs, and to export these outputs at lower prices.

So are these government actions subsidies to industry? Whatever your answer, you’d be naïve to think, if voters in the home country are groomed to tolerate protectionist measures as retaliation for foreign subsidies, that home-country producers will not portray such foreign-government actions as subsidies that justify home-country retaliation.

A consequence of the ambiguity of subsidies is that, if retaliatory protectionism is encouraged (or even just tolerated) as a response to foreign governments’ subsidization of their exports, sightings of such subsidies by home-country producers, politicians, and administrative officials will be too numerous. Rent-seekers at home will routinely leap to the conclusion that foreign exporters’ gains in market share are the result of “unfair” subsidies allegedly enjoyed by those exporters. With no way to separate the vast majority of government expenditures into objectively agreed-upon classes of “subsidies” and “not subsidies,” the best rule of thumb is a policy of free trade followed regardless of foreign governments’ subsidization of producers within their jurisdictions.

Advertising has never been popular among pundits. It is depicted as an attempt to manipulate consumers into desiring things they do not (or should not) need. Others depict it — as often seen in series like Mad Men, popular a decade ago — as “selling dreams” to consumers by perpetuating unrealistic standards of success and well-being tied to a product or a service.

These criticisms have existed since the late nineteenth century, remaining largely unchanged except for the language and imagery used in public discourse. The most recent iteration comes from President-elect Donald Trump’s prospective nominee for Secretary of Health and Human Services, Robert F. Kennedy Jr. (RFK Jr.), who is on record advocating a ban on pharmaceutical companies advertising on television. The reasoning is exactly like those late-nineteenth-century criticisms — the ads are meant to prop up, by appeals to emotion rather than reason, demand for drugs that are not needed or have little added value.

Just as the criticisms are merely a rebranding of the same nineteenth-century argument, the responses that economists have marshalled remain the same. Advertising helps consumers by providing information and by creating competition between firms. It also encourages firms to invest in product quality by creating brands that consumers value for that quality. More importantly, the relationship very often runs in reverse: innovating firms develop new products to meet demanding features of consumer demand, and advertising follows to inform the public of the innovation. If advertising is unavailable, innovation becomes less appealing in the first place and may never happen.

All of this is exactly what the much-maligned advertising by pharmaceutical companies actually does.

First, when direct-to-consumer advertising was allowed in the late 1990s, it created greater product competition. Drugs that were close substitutes for each other began exhibiting greater signs of price competition rather than competition via brand loyalty. By weakening brand loyalty, advertising also allowed generic medications (even those that were less advertised) to break through.

It also offered valuable information to consumers by pointing to treatment options they were previously unaware of and by assisting them in identifying their symptoms and seeking medical care. One study found that direct-to-consumer advertising stimulated new diagnoses in 25 percent of patients who came in after viewing ads. At the same time, physicians were able to provide further detail on whether a treatment was truly needed. By levelling the playing field between patients and physicians in terms of knowledge, patients are better able to identify trade-offs that affect their well-being. This helps achieve better health outcomes.

For example, one study found that direct-to-consumer advertising of relevant drugs increased the likelihood of attaining cholesterol-management goals within certain groups of patients. A more recent study focusing on advertising for antidepressants found that such advertising led to an increase in prescriptions and, subsequently, lower rates of workplace absenteeism.

Second, the research and development process for new drugs is lengthy, costly, and risky. Only a tiny fraction of all molecules that are researched end up being marketed. From that fraction, we must subtract those that fail to cover their development and regulatory-approval costs (in excess of 10 years and more than $1 billion per drug). Given these features, pharmaceutical companies must promote drugs to generate the revenues needed to cover costs and risks. Restricting advertising cuts potential sales revenues, and cutting potential sales revenues dampens the incentive to innovate in the first place. In other words, boosting revenues via advertising is vital because it motivates continued investment in R&D for new drug development.

Slowing the rate of research and development into new drugs would be immensely damaging to improvements in living standards. For example, one study found that 73 percent of the increase in life expectancy at birth after 1990 across high-income countries could be explained by the introduction of new drugs. Another study found that for the earlier period from 1986 to 2000, new drugs explained 46 percent of the improvement in life expectancy at birth. Additionally, new drugs indirectly reduce healthcare costs by substituting drug-based treatments for in-facility care, such as treatments requiring a hospital stay or visit, thereby helping to mitigate the effects of rising healthcare demand.

Advertising bans could thus hurt patients now (by reducing their information) and patients later (by reducing innovation). It is easy to fall prey to populist contempt for “Big Pharma,” but this is a bad policy that could hurt many.

As beltway watchers intently debate the incoming administration’s cabinet picks and White House appointments, the more jaded observers of our economic condition have good cause to wonder: what difference does it make? The political order is dominated by high-dollar special-interest lobbyists and power-hungry bureaucrats, and lacks the incentives to reduce intervention and return decisions to the people. Can bureaucracy be made useful by better bureaucrats, or only restrained by the resurgence of individual choice? 

How will Robert Kennedy Jr., if confirmed in his proposed role as Secretary of Health and Human Services, reduce obesity in America? Will he promote new regulations and taxes, which will invariably look after entrenched interests, at the expense of individual and experimental solutions? Will he, as others have, look to regulators to ban this or that ingredient, without knowing the economic or health costs of replacing them? Will he reduce the barriers to healthy food options and stop the subsidies that contribute to obesity? Will he reduce the unintended consequences generated by previous “solutions” to public health and diet fears? Will he, in short, battle political sclerosis with exactly the tools that brought us here, asking advice from those who benefit from the status quo? Or will he challenge bureaucratic-corporate collusion and reawaken a competitive market?

If he does, he’ll face a stiff fight. The building blocks of ultra-processed food — corn, wheat, soybeans, and sugar — are subsidized by the US Department of Agriculture to the tune of about $6 billion per year. High-calorie foods loaded with cheap soybean oil and sweeteners appear cheaper than they are relative to more nutritious and less calorie-dense foods. The magnitude of the impact on consumer choice is unclear, but here is one thing we can count on: If Kennedy, with the backing of the Trump administration, goes after subsidies, lobbyists for corn, wheat, soybeans, and sugar will turn Congress upside down to thwart him.

Trump’s nominee for energy secretary is Chris Wright. Will he be able to help reduce government spending by eliminating subsidies for solar and wind energy? I would not bet on that occurring without an epic battle in Congress.

Many might wish it were different, but in reality, crony firms will continue to exist during Trump’s presidency. The revolving door between industry and regulators will continue to revolve. Past programs’ policy blunders will continue to be propped up, often with bigger budgets. Whether Trump’s presidency is successful will hinge, in part, on his ability to break longstanding government alliances with crony firms and resist new appeals for cronyism.

Let’s ask the big question. Can genuine change be achieved through political means?

In his classic book The State, the late German sociologist Franz Oppenheimer observed that there are two ways to wealth: the peaceful “economic means” and the coercive “political means.” Non-coercive wealth creation is an economic process in which businesses and people fulfill consumer needs. Wealth through political maneuvering involves firms and individuals using government power to obtain unearned riches. For Oppenheimer, the economic means are “work,” while the political means are “robbery.”

In “Profit and Loss,” Ludwig von Mises reflected on how the “ballot of the market” forces entrepreneurs into an endless process of working to serve consumers: “The ballot of the market elevates those who in the immediate past have best served the consumers.”

Unlike politics, in a market process, people freely and easily change their minds. Mises added, “Choice is not unalterable and can daily be corrected. The elected who disappoints the electorate is speedily reduced to the ranks.”

Some businesses, unable or unwilling to adapt and serve, rely on the government to restrict consumers’ choices as a means to gain profits they could not have earned otherwise. Rather than compete to win the “election” in the “ballot of the market,” they seek to elect politicians who will support their schemes to forcibly appropriate the wealth of others, and that is robbery.

Oppenheimer’s choice of the word robbery wouldn’t have surprised Ralph Waldo Emerson. 

In his essay Politics, Emerson wrote, “Every actual State is corrupt.” He then added, “What satire on government can equal the severity of censure conveyed in the word politic, which now for ages has signified cunning, intimating that the State is a trick?” 

Emerson was writing in 1844, when the government was a tiny fraction of its current size. The exact size of the federal budget in 1844 is hard to come by, but in 1837 the budget was approximately $39 million (roughly $1.6 billion in 2024 dollars, since the dollar has lost about 98 percent of its value since then). Federal spending in fiscal year 2024 is around $6.75 trillion.

In short, federal spending in Emerson’s day was, even in inflation-adjusted terms, about 0.024 percent of what it is today. But, if Emerson is right, politics had already become irredeemable.
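The spending comparison above is simple arithmetic, and can be checked in a couple of lines (a sketch using the article’s own figures; the 2024-dollar conversion is the article’s rough approximation, not an official series):

```python
# Sanity-check of the federal-spending comparison (figures as given in the text).
budget_1837 = 39e6            # federal budget, ~1837 dollars
budget_1837_in_2024 = 1.6e9   # the article's inflation-adjusted figure
spending_2024 = 6.75e12       # FY2024 federal spending

share = budget_1837_in_2024 / spending_2024 * 100
print(f"{share:.3f}%")  # → 0.024%, matching the figure in the text
```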

Emerson observed, “Of all debts, men are least willing to pay the taxes. What a satire is this on government! Everywhere they think they get their money’s worth, except for these.” Remember, there was no federal income tax in 1844.

Emerson railed against taxes: “A man who cannot be acquainted with me, taxes me; looking from afar at me, ordains that a part of my labor shall go to this or that whimsical end, not as I, but as he happens to fancy.” 

Emerson was clear: “The less government we have, the better — the fewer laws, and the less confided power.” 

Similar to other classical liberals, Emerson advocated for voluntary cooperation to solve mutual problems: 

Whilst I do what is fit for me, and abstain from what is unfit, my neighbor and I shall often agree in our means, and work together for a time to one end. But whenever I find my dominion over myself not sufficient for me, and undertake the direction of him also, I overstep the truth, and come into false relations to him.

Those who used coercion met with Emerson’s disapproval. He always advised working towards “self-control.” It was wrong “to make somebody else act after [our] views.” When others “tell me what I must do,” their commands are absurd. “Therefore,” Emerson wrote, “all public ends look vague and quixotic beside private ones.”

Kennedy would be wise to consider those words, ending subsidies while preserving consumer choice. But removing self-seeking interest from government power may be among the most quixotic goals we could undertake. Can Leviathan restrain itself?

Instead of demanding government solutions, Emerson expected us to attend to our spiritual growth: “The antidote to this abuse of formal Government, is, the influence of private character, the growth of the Individual.”

You cannot change an effect without changing its cause. The consciousness of Americans is the cause; government robbery and overspending are the effects.

Emerson wrote: “Cause and effect, means and ends, seed and fruit, cannot be severed.” He argued we need “a reliance on the moral sentiment, and a sufficient belief in the unity of things to persuade [people] that society can be maintained without artificial restraints.” 

Do we get upset at the behavior of politicians? It’s unwise to be angry about the predictable. Emerson chided, “We might as wisely reprove the east wind, or the frost, as a political party, whose members, for the most part, could give no account of their position, but stand for the defense of those interests in which they find themselves.”

Emerson would agree with the old saying, we get the government we deserve. He said, “The State must follow, and not lead the character and progress of the citizen… the form of government which prevails, is the expression of what cultivation exists in the population which permits it.”

Individual spiritual evolution is a prerequisite for political change. Emerson wrote, “Under the dominion of an idea, which possesses the minds of multitudes… the powers of persons are no longer subjects of calculation. A nation of men unanimously bent on freedom… can easily confound the arithmetic of statists.”

In 1837, Emerson spoke to the Phi Beta Kappa Society at Harvard. His talk was later published as The American Scholar. He ended it with a rousing call to stand for principles and not cave to expediency. He warned, “The spirit of the American freeman is already suspected to be timid, imitative, tame. Public and private avarice make the air we breathe thick and fat.”

Emerson, of course, couldn’t have imagined how much avarice would bloat government and make our political discourse “thick and fat.” The consequences are severe when private interests exploit political processes for theft. 

Some people, Emerson said, “very naturally seek money or power; and power because it is as good as money — the ‘spoils,’ so called, ‘of office.’” Such people are “sleep-walking.” Emerson advised, “Wake them, and they shall quit the false good and leap to the true, and leave governments to clerks and desks.”

Emerson knew the choice to awaken is a choice made by an individual. 

Is political change a realistic hope? Let’s start change with ourselves; societal flourishing doesn’t come from politics. The time to begin is now. For, as Emerson reflected, “This time, like all times, is a very good one, if we but know what to do with it.”

As we enter 2025, inflation remains a primary concern, frustrating consumers and confounding policymakers and economists alike. From 2019 to 2023, the Consumer Price Index (CPI) rose by 25.7 percent, reflecting a broad increase in the cost of goods and services. Grocery prices alone rose by more than 25 percent, putting additional strain on household budgets.

Amid this economic strain, the term “greedflation” has gained traction, fueling public frustration and leading even PhD economists to misread inflation’s roots and offer misguided solutions. The paper Prices, Profits, and Power: An Analysis of 2021 Firm-Level Markups, by economists at the Roosevelt Institute, found that market power is a key driver of inflation, with corporate markups and profits skyrocketing in 2021 to their highest levels since the 1950s. Their proposed “all-of-government administrative, regulatory, and legislative approach to tackling inflation” — encompassing interventions in demand, supply, and market power — reads more like a recipe for an economic nightmare than sound policy.

This narrative overstates the role of corporate markups in inflation, grossly underplays the impact of government fiscal and monetary policies, and disregards basic economic principles. Worse still, the proposed interventions risk exacerbating economic instability. It’s time to move past the sensationalized ‘greed’ narrative and focus on the true drivers of inflation and why free market solutions are key to recovery.

Markups Only Explain 10 Percent of Grocery Inflation

The greedflation argument claims that markups are the primary driver of inflation, but a closer examination shows their role, particularly in rising grocery prices during and after the pandemic, was relatively minor. 

To determine whether market power significantly contributed to grocery price inflation, it is crucial to analyze the relationship between prices and markups before and after the pandemic. Data shows that while grocery markups did increase during the pandemic, their impact on overall price inflation was limited. Had grocery stores maintained their 2019 markup levels, prices would have risen by 22 percent rather than 23.5 percent, indicating that markups accounted for less than 10 percent of the total price increase. 
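The markup decomposition above reduces to simple arithmetic. A minimal sketch, using only the figures stated in the text (not the underlying study’s data or methodology):

```python
# Decompose the grocery price increase into markup-driven and cost-driven parts
# (illustrative arithmetic using the figures quoted in the text).
actual_rise = 23.5          # percent price increase, with markup growth
counterfactual_rise = 22.0  # percent increase had 2019 markups been maintained

markup_share = (actual_rise - counterfactual_rise) / actual_rise * 100
print(f"Markups explain {markup_share:.1f}% of the total increase")
# → about 6.4%, i.e. under the 10 percent threshold stated in the text
```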

Instead, the primary drivers of grocery price inflation were supply chain disruptions, increased production costs, and shifting consumer behavior. Shipping costs soared by as much as 200 percent during the pandemic, while transportation labor shortages pushed wages up by 25 percent. Rising input costs, such as a 50 percent surge in fertilizer prices and a 30 percent increase in packaging material costs, further strained supply chains. Pandemic-induced bottlenecks, like prolonged port delays, exacerbated supply constraints, contributing significantly to higher prices.

Additionally, the perception of increased profits and markups in the grocery sector is partly explained by evolving consumer behavior. For example, the growing demand for private-label brands — which are more affordable for consumers but offer higher margins for retailers — has influenced profit margins. These shifts in consumer purchasing patterns may create the illusion of higher profits, but they underscore how evolving shopping habits and preferences shape the grocery sector’s economics.

Attributing inflation in grocery prices to corporate greed oversimplifies a complex issue driven by structural and economic factors beyond just markup adjustments.

Greedflation Distracts from the True Driver of Inflation — Government Policies

The greedflation argument wildly underplays the government’s significant role in driving inflation, particularly through its expansive fiscal and monetary policies. Beyond corporate markups, fiscal policies injected massive amounts of money into the economy, fueling excessive demand while supply chains were still constrained. For instance, the CARES Act, costing approximately $2.2 trillion, not only carried a hefty price tag but also incentivized Americans to stay home longer than necessary, delaying workforce recovery. Similarly, the $1.9 trillion American Rescue Plan further flooded the economy with stimulus. Adding to this, $1,400 checks were sent to 165 million Americans, totaling over $230 billion, further increasing demand pressures. 
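The stimulus-check total cited above follows directly from the per-check amount and recipient count. A quick check of that figure (as given in the text):

```python
# Verify the cited stimulus-check total (figures as stated in the text).
per_check = 1_400          # dollars per recipient
recipients = 165_000_000   # Americans who received checks

total = per_check * recipients
print(f"${total / 1e9:.0f} billion")  # → $231 billion, i.e. "over $230 billion"
```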

What’s more, while large government spending bills significantly contributed to inflation, the Federal Reserve amplified these effects through its monetary policies. To accommodate the influx of fiscal spending, the Fed increased the money supply, flooding the economy with liquidity that allowed consumers to bid up prices — more money chasing fewer goods. At the same time, the Fed slashed interest rates to historically low levels, making borrowing cheaper for consumers and businesses, which boosted demand for goods, services, and housing. 

This surge in aggregate demand came at a time when supply chains were crippled by lockdowns, worker shortages, and regulatory bottlenecks. 

The combination of ultra-loose monetary policy, excessive liquidity, and delayed tightening created a perfect storm of inflationary pressures. Expansive fiscal and monetary measures, coupled with supply chain disruptions, injected unprecedented amounts of money into an economy with constrained production capacity. 

Centralized Inflation ‘Solutions’ Disregard Basic Economic Principles

Finally, the call for an “all-of-government administrative, regulatory, and legislative approach to tackling inflation,” developed by some economists, stems from the observation that global markups — an indicator of market power — have risen by 33 percent since the 1980s. In their view, the surge in “excess profits” following the COVID-19 pandemic underscores the need for policy interventions to curb inflation.

Their proposed solutions focus on limiting profit-driven price increases through measures such as excess-profits taxes, stronger competition policies, and price controls in key sectors. What makes this proposal particularly troubling is that it comes from economists — those expected to understand basic economic principles — who not only ignore fundamental market mechanisms and the true drivers of inflation but prescribe policies that could lead to economic collapse. These interventions aim to force businesses to absorb rising costs instead of passing them on to consumers, intending to stabilize inflation and ease its burden on wage earners. However, this approach disregards fundamental market dynamics.

Claims that market power drives inflation imply that one side of the market controls prices, a fundamental misunderstanding of the price system and market mechanisms. In a free market, prices emerge from the interaction of consumers’ willingness to pay and producers’ willingness to supply. No single player, including so-called ‘greedy’ corporations, controls prices. To mischaracterize this process is to disregard the very mechanics of how markets function. 

Their proposal disregards market mechanisms and disrupts the price system, leading to consequences that chill investment, stifle innovation, and create long-term instability. History offers stark warnings: in countries like Cuba, government interventions aimed at making prices “fair” or “cheaper” destroyed the very incentives suppliers need to produce the goods and services people rely on daily. Sweeping government controls and centralized interference destabilize markets, replacing the efficient allocation of resources with arbitrary standards and government overreach. Allowing the government to define acceptable profit levels undermines the free market’s ability to function, risking shortages, market collapses, and widespread suffering. 

Inflation was not driven by greedflation but was primarily a government-made phenomenon. The greedflation argument and its proposed remedies are not merely flawed — they are dangerous, repeating the mistakes of failed policies that have devastated economies and left entire populations starving.

Inflation Solutions Start with Free Markets

The greedflation narrative collapses entirely when tested against even the most basic economic principles. Rather than corporate greed, rising grocery prices reflect market mechanisms, evolving cost structures, fiscal and monetary policies, and the unique pressures industries encountered during and after the pandemic.

Profit motivation is not the problem — it is the engine of economic progress, driving innovation and ensuring goods and services reach consumers efficiently. The real issue arises when government intervenes, attempting to dictate prices and vilify profit. 

To lower inflation effectively, we must address supply chain issues, reduce inefficiencies, and foster supply-side growth through incentives, not burdens like taxes and price controls, while reducing the fiscal and monetary interventions that have fueled inflation. 

Free markets, not government controls, provide the best path to stability and growth. 

In 2016, near the conclusion of his second term, President Barack Obama was asked by Chris Wallace about his greatest mistake as president. Obama didn’t hesitate to respond. He said his “worst mistake” was “probably failing to plan for the day after what I think was the right thing to do in intervening in Libya.”

Five years earlier, a coalition of NATO members, led by France, Britain, and the United States, intervened in the Libyan civil war and overthrew the government of Muammar Gaddafi. The intervention resulted in Gaddafi’s death and the transformation of Libya into a failed state, a condition that persists thirteen years later and has produced an ongoing civil war, countless civilian deaths, and a humanitarian and refugee crisis. The US-led intervention in Libya was strategically misguided and ultimately harmful, providing a cautionary example for future US foreign policy. It would have been better for the United States to have done nothing than to trigger such a calamitous descent into chaos.

Today, Libya is essentially split between two rival factions: the Government of National Unity (GNU), based in Tripoli, which controls parts of western Libya, and the Government of National Stability (GNS), backed by the eastern-based House of Representatives (HoR), which operates in the east and south of Libya. 

Efforts to hold national elections have repeatedly failed. Numerous armed groups, militias, and foreign mercenaries regularly clash. In mid-December 2024, a battle between two rival groups caused a major fire at, and severe damage to, the country’s second-largest oil refinery, destruction that will deepen the economic turmoil, since Libya’s economy depends almost entirely on oil production.

Reports of arbitrary detentions, torture, and extrajudicial killings by various armed factions are widespread. The situation has been exacerbated by the aftermath of natural disasters, notably the floods in Derna in September 2023, which resulted in thousands of deaths and displacements. Libya’s economy, heavily reliant on oil exports, has been disrupted by the conflict, and counterfeiting is widespread. Ordinary Libyans face a deteriorating economic situation. Many have fled toward Europe, despite severe human rights abuses against migrants, including in detention centers where forced labor, extortion, and sexual assault have been reported. In short, the thirteen years since the United States and NATO intervened in Libya have been nothing short of a disaster.

The US-led intervention that overthrew Muammar Gaddafi in 2011 is the direct cause of Libya becoming a failed state. If the goal was simply to overthrow a tyrant, it achieved that. If the goal was to put an end to the ongoing Libyan civil war, or to alleviate the suffering of civilians, or to transform a dictatorship into a democracy, or to demonstrate that Western military power could be a force for good in the world — all goals claimed by the Western powers involved — then it failed disastrously.

Libya, post-intervention, remains undemocratic and war-torn, with countless civilian casualties. It can reasonably be argued that the quality of life of the average Libyan, among those still alive, is far lower today than it was under Gaddafi. Millions have been displaced, many of whom have flooded into Europe; poverty has increased (more than 800,000 people need humanitarian assistance out of a population of under seven million); and food security and the availability of basic services have dramatically decreased. These criticisms collectively paint a picture of an intervention that, while initially justified on humanitarian grounds, led to unintended and severe negative consequences for Libya and the wider region.

It is clear that the United States should never have allowed France or Britain to talk it into intervening in Libya. There was a complete disconnect between the use of military force to overthrow a minor regional power — that was the easy part — and fostering an environment in which violence would cease and an effective and democratic government would emerge. The use of military power could never achieve the desired end goals. No US or NATO plan existed for the aftermath of Gaddafi’s overthrow. This lack of planning led to a power vacuum that was filled by competing factions and militias rather than a stable, democratic government. (Keep in mind just how poor the US track record of democracy promotion is.) Critics, including Obama himself, have acknowledged that they failed to “plan for the day after” the intervention. Even had the United States and its feckless NATO allies been willing to invade and occupy Libya for years, the cases of Iraq and Afghanistan demonstrate the folly of such an approach.

By no means should this critique be read as a defense of Gaddafi. He was an odious tyrant, a proliferator of weapons of mass destruction, a once-active sponsor of international terrorism, and a perpetual thorn in the side of the United States and Western Europe. But the US and NATO military intervention in Libya has been an unmitigated disaster for the people of Libya. Like Iraq (and many other locales before it), the military intervention in Libya began with positive, humanitarian intentions. To some Europeans and Americans, the prospect of overthrowing a violent tyrant was worth the cost of intervening militarily. But that military intervention has had, predictably, a host of negative unintended consequences.

In Libya, just like in Iraq, ideology — the desire to promote democracy and humanitarianism — trumped realism and American national interests, and has resulted in a series of costly failures for all concerned. The interests of the United States and Western Europe, not to mention those of ordinary Libyans, were in no way served by overthrowing Gaddafi.

The problem is that the US military has been, and continues to be, used to conduct operations that far exceed American national interests. The intervention in Libya was not only unnecessary and costly in resources and lives, but it also destabilized the country and the region and opened the door to increased Russian influence in Libya.

US national interests would have been much better served by simply taking no action in 2011 and beyond. It is also worth noting that Congress never authorized the use of military force in Libya; the Obama administration did not seek congressional authorization, justifying the intervention on the president’s constitutional powers as Commander in Chief and an international mandate from the UN Security Council. The House of Representatives even voted against a resolution (H.J.Res. 68) that would have authorized continued US involvement, but this did not legally bind the administration to stop the operation.

Libya should be seen as a cautionary tale for future American policymakers and strategists. In deciding whether or not to use US military force to affect some outcome abroad in the future, we should return to first principles. Does taking military action, and expending our precious, finite military resources, meaningfully advance significant US interests? If not, we should take no military action. 

The best arguments mustered by supporters of military intervention, including Bill Kristol, were that Gaddafi was committing humanitarian violations and that he had once sought nuclear weapons and supported terrorism (though we should note that long before 2011, Gaddafi seems to have given up his weapons of mass destruction programs and support for international terrorism). In this neoconservative mindset, such actions justified US military action. Ironically, deposing Gaddafi only seems to have increased the amount of violence and suffering within Libya, which is likely only to exacerbate the terrorist threat. 

We should always follow the advice of past American leaders and strategic thinkers like George Washington and George Kennan: avoid unnecessary wars, defend and maintain our constitutional order, and ensure that every American has the opportunity to achieve economic prosperity. 

We can do that, placing the national interest at the core of everything we do as a nation, and remain perfectly secure, while doing no harm abroad.
