Pavel Durov, Russian-born founder of Telegram Messenger, at a tech conference in Berlin, 2013.

Moments after stepping off his private jet at Le Bourget airport outside Paris last week, tech titan Pavel Durov was arrested by French police. The Russian-born billionaire’s alleged crime, the BBC reported, stemmed from a “lack of moderation” of Telegram, the cloud-based social media platform Durov owns and operates, which boasts nearly one billion monthly users.

Though the arrest did not take place on US soil, George Washington University law professor Jonathan Turley said the event poses a direct threat not just to free speech in Europe, but to free speech in the United States.

“This is a global effort to control speech,” Turley said on Fox News, noting that European regulators have also put pressure on Elon Musk, a US citizen, to censor Americans, including presidential candidate Donald Trump, through the Digital Services Act.

While the extent to which the US government was involved in Durov’s arrest is unclear, reports indicate that the FBI has for years attempted to penetrate Telegram. 

In 2017, Wired reported on alleged attempts to “bribe Durov and his staff to install backdoors into its service.” More recently, the New York Times reported on Durov’s allegation that the FBI attempted to hire a Telegram programmer in order to help the US government breach user data.

“The FBI did not respond to a request for comment,” the Times reported. 

‘A Refusal to Communicate’

Following the arrest of Durov, who was released on bail equivalent to some $5.5 million and is prohibited from leaving France, Time magazine reported that the event had ignited “fierce global debates about the limits of digital freedom of speech, and how much responsibility social media companies should bear over the content on their platforms.”

The preliminary charges against Durov primarily stem from allegations that Telegram users are using the messaging platform in harmful ways, including “crime, illicit transactions, drug trafficking, and fraud.”

In this narrow sense, the prosecution’s allegations may well be true. But as Turley points out, criminals use all kinds of tools and technologies for illicit purposes, and it is not the practice of civilized countries to place corporate executives behind bars over crimes committed by their customers.

“It’s like arresting AT&T’s CEO because the mob used a telephone,” Turley said. 

Part of Durov’s alleged crime, quite literally, was that he wasn’t cooperating sufficiently with government officials, who charged him with a “refusal to communicate, at the request of competent authorities… .”

Though French President Emmanuel Macron said the arrest wasn’t political and the nation remains “deeply committed” to free expression, Durov’s arrest looks very much like the latest development in what Turley described as the “global effort to control speech.” 

European leaders have made it clear they are quite happy to censor speech they don’t like, whether it’s a comment about a politician’s weight or a criticism of immigration policy, all in the name of protecting people from hate speech, crime, or “disinformation.” 

Outsourcing Censorship

Americans might be inclined to shrug their shoulders and attribute this to “those crazy Europeans.” But this would be a mistake.

It’s apparent that many in Washington also want to censor speech and control the flow of information. Just two years ago, many will recall, the Department of Homeland Security (DHS) announced the creation of a new “Disinformation Governance Board.” 

“The spread of disinformation can affect border security, Americans’ safety during disasters, and public trust in our democratic institutions,” DHS announced at the time.  

This sounds similar to the language of Věra Jourová, the European Commission’s Vice President for Values and Transparency, who, just months before the EU adopted the Digital Services Act, made the case during a speech at the Atlantic Council for cracking down on disinformation to protect democracy.

While the Biden Administration pulled the plug on the Disinformation Governance Board shortly after its rollout due to public uproar, it’s clear that many in Washington don’t begrudge the EU’s censorship power; they envy it. 

Fortunately for Americans — and unfortunately for federal lawmakers — the First Amendment and decades of court precedents make it much more difficult to suppress speech in the US than in Europe. Because of this, the Censorship Industrial Complex (to borrow a term from investigative journalist Matt Taibbi) has had to get creative. 

Lacking the constitutional authority to censor Americans directly, powers in Washington have, in recent years, outsourced censorship to others. In 2022, the Twitter Files exposed for the first time the government’s sprawling censorship apparatus, which involved government officials leaning heavily on social media companies to get them to do the dirty work of censoring problematic information (sometimes, even when the information was true). 

Indeed, just days after Durov’s arrest, Meta founder and CEO Mark Zuckerberg told the House Judiciary Committee that the White House “repeatedly pressured our teams for months to censor certain COVID-19 content, including humor and satire.” 

‘A Robust Anti-Trust Program’

The First Amendment, which states “Congress shall make no law…abridging the freedom of speech, or of the press,” applies to the government, not private entities. But numerous Supreme Court precedents make it clear that it is unconstitutional for government agencies or officials to coerce private actors to suppress speech on their behalf.

The line between “permissible attempts to persuade and impermissible attempts to coerce” is not always clear, but it’s a distinction the high court has explored in many cases in recent decades, including Bantam Books, Inc. v. Sullivan (1963), which concluded that government officials may not use the “threat of invoking legal sanctions and other means of coercion…to achieve the suppression” of disfavored speech. 

That the Biden Administration crossed this line is difficult to deny. The White House’s high-pressure campaign against Facebook to coerce the social media company to censor speech is well-documented. 

Those efforts include veiled threats from Andy Slavitt, the White House Senior Advisor for the COVID–19 Response, who told Facebook on March 15, 2021 that the administration was not impressed with Facebook’s censorship efforts and “[i]nternally we have been considering our options on what to do about it.” It also includes threats that are less veiled, like when White House Press Secretary Jen Psaki was asked in May 2021 about Facebook’s censorship practices, including its decision to keep former President Donald Trump off its platform. Psaki replied that it was the president’s view that platforms had a responsibility to protect Americans from “untrustworthy content, disinformation, and misinformation, especially related to COVID-19, vaccinations, and elections” and that Biden also supported “a robust anti-trust program.”

In other words, while Slavitt was putting immense pressure on Facebook behind the scenes to censor “untrustworthy content,” the White House was publicly saying Facebook needed to more aggressively monitor and remove problematic content — and, by the way, the federal government has the power to break up your company if you don’t comply.  

All of this explains why Facebook, in 2021, began to remove and suppress content that ran afoul of the COVID State. Some of the content that was removed was no doubt false, of course. Some of it was simply problematic, like claims that COVID-19 might have emerged from the Wuhan Institute of Virology, something most US agencies today, including the FBI and CIA, believe is true. 

What’s clear is that the White House’s pressure and threats had an impact on Facebook’s policies. Zuckerberg, who in leaked conversations with employees in 2019 acknowledged the “existential” threat of antitrust regulators, admits he caved under the White House’s coercion, something he today regrets.  

“I believe the government pressure was wrong, and I regret that we were not more outspoken about it,” Zuckerberg wrote in an August 26 letter to House Judiciary Chairman Jim Jordan. “I also think we made some choices that, with the benefit of hindsight and new information, we wouldn’t make today.”

‘Not a Message This Court Should Send’

In June, in Murthy v. Missouri, the Supreme Court had the opportunity to address what the US District Court for the Western District of Louisiana described as “a far-reaching and widespread censorship campaign” by federal officials. 

Sadly, the high court punted on the matter, arguing that the plaintiffs of the case lacked legal standing. 

“We begin — and end — with standing,” Justice Amy Coney Barrett wrote in the majority opinion. “At this stage, neither the individual nor the state plaintiffs have established standing to seek an injunction against any defendant. We therefore lack jurisdiction to reach the merits of the dispute.”

Among those who lacked standing, according to Barrett, was Jill Hines, a healthcare activist who advocates that people have the right to say no to medical procedures. Hines, the co-director of Health Freedom Louisiana, a group opposed to mask and vaccine mandates, had her Facebook page suppressed during the pandemic. Barrett stated that Hines had the strongest case for standing, but had failed to demonstrate that the restrictions against her “are likely traceable to the White House and the CDC.”

Not all justices agreed. 

In his dissenting opinion, which was joined by Justices Clarence Thomas and Neil Gorsuch, Justice Samuel Alito argued the evidence of the case “was more than sufficient to establish Hines’s standing.”

Standing aside, Alito warned that by shirking its responsibility to hold government actors accountable for their coercive efforts to pressure private entities into censoring information on their behalf, the high court was sending a dangerous signal to those in power attempting to control and suppress political speech. 

“Officials who read today’s decision…will get the message,” Alito wrote. “If a coercive campaign is carried out with enough sophistication, it may get by. That is not a message this Court should send.”

‘The Indispensable Right’?

The extent to which Pavel Durov’s arrest stems from the involvement of any US agency or official is unclear. But it could be the latest example of what Turley describes as censorship by surrogate.

US agencies and officials cannot legally censor users themselves, but they can get others to do their dirty work for them. We know today that the FBI was deeply involved in efforts to control the flow of information on Twitter, as well as Facebook. This is an obvious example of censorship by surrogate, but leaning on social media companies is hardly the only method US officials have at their disposal to control or suppress speech. 

US officials can lean on advertisers and ad organizations — like the Global Alliance for Responsible Media, an initiative run by the World Federation of Advertisers that was abruptly shut down after Elon Musk sued it over alleged antitrust activities. And they can lean on other governments — and they probably are. 

It would be naive to believe that Macron arrested Durov without at least the blessing of US officials, who had taken a deep interest in both the Russian billionaire and Telegram. Nor is Durov the only person being targeted by those behind the global effort to control speech. Elon Musk, who bought Twitter (now rebranded as X) and exposed the FBI’s shenanigans there, has become a target not just in the EU but also in Brazil, where the social media platform is reportedly at risk of being banned. 

“The question for Americans is whether we’re going to allow these global censors to basically control speech from Europe,” says Turley, author of The Indispensable Right: Free Speech in an Age of Rage.

All of this censorship, of course, is taking place in the name of some “greater good.” Indeed, it’s not uncommon for those most loudly supporting censorship to do so in the name of protecting democracy. But as I’ve pointed out, democracy without free speech is like a picnic without food. It’s not merely pointless; it’s a charade. 

The whole point of a constitutional system is to protect the rights of individuals, and when governments themselves become the primary transgressors of these rights, you no longer have a benevolent government; you have a tyrannical one. And free speech is arguably the most fundamental of these rights. 

“If the freedom of speech is taken away then dumb and silent we may be led, like sheep to the slaughter,”  George Washington famously said in “The Newburgh Address.” 

The Supreme Court had an opportunity to put some of the bad government actors in their place in Murthy v. Missouri, a case that clearly showed government officials coercing private companies to censor information on their behalf, conduct Justice Alito described as “blatantly unconstitutional.” 

The United States may come “to regret the Court’s failure to say so,” Alito observed, and he might be right. 

The right to free speech is indeed indispensable, but it appears that those who believe otherwise may have already found bigger fish than Facebook and X to act as their censorship surrogates — and perhaps more stringent methods.

If you doubt this, just ask Pavel Durov, who in 2014 fled Russia after the Kremlin “tightened its grip over the Internet” — only to find himself a prisoner in the West a decade later.  

Gourds and vegetables for sale on the honor system, characteristic of high-trust societies.

It’s hard to imagine a starker difference in political visions than between Trump-Vance and Harris-Walz. This could get ugly, so now is a good time to remind ourselves of what it is that holds us together as a nation and a people.  

America is a nation of immigrants who had very different ideas about all sorts of things, but no group was able to impose its culture on the others to a significant degree. We naturally presume this produced a melting pot that united us by creating a new alloy out of many different metals.  

But the real key to America’s success was not uniting us by homogenizing us. It was the emergence of a uniquely American culture that held us together through shared moral beliefs and principles, while allowing us to retain our personal individuality. 

In America, heavy investment into our civic culture through these shared moral beliefs and principles produced the freest thinking minds in human history. The founders recognized this and worked hard to preserve it. This is why they wrote a constitution that provided a formula for a government that was to serve the citizens and not the other way around.  

When Alexis de Tocqueville published the first installment of Democracy in America in 1835, he argued that America had a distinctive culture that made it especially capable of self-government. There was something about American culture that led to the proliferation of mediating institutions, which in turn led to an extraordinary level of organic (uncoerced) cooperation. That, in turn, made Americans uniquely well-suited to practice democracy.  

But just exactly how did that happen?  

As America grew, specific religious beliefs became increasingly subordinated to an overarching moral belief structure. In short, not doing the moral don’ts (not lying, not stealing, etc.) became increasingly viewed as a universal moral duty and a public matter, while doing the moral dos (being conscientious, being generous, etc.) became things that were encouraged but otherwise viewed as a purely private matter.  

This was not by design. It happened because, as the scale and scope of economic activity increased, it became increasingly impractical to abide by moral standards for behavior based on promoting, rather than protecting, the welfare of others around us. 

This shift in moral thinking began long ago in the West. As people in the West lived in ever larger groups, religious wisdom began to reflect and reinforce this shift. As but one example, Hillel (הלל), a towering figure in first-century Talmudic thought, proclaimed: 

That which is hateful to you, do not do to your fellow [man]. That is the whole Torah; the rest is the explanation; go and learn.

Hillel was effectively saying that avoidance of harm is what the Torah is about, not benevolence, which is consistent with not doing the moral don’ts taking precedence over doing the moral dos.  

Because of America’s extraordinary diversity, the idea that we should concern ourselves with not doing the moral don’ts above all flowered most fully. This was also consistent with America’s early Protestant nature, which stressed that one’s conscience should guide moral decisions rather than any kind of religious formulary.  

This was very important, because our ability to trust others we don’t know has nothing to do with hoping they’ll be nice to us by doing the moral dos to promote our welfare. In a large society it can’t. Small group trust is lovely, but it doesn’t scale up.   

When you walk the streets of Manhattan, it is not your belief that everyone you pass is so inclined to do nice things for everyone else that it makes you feel safe enough to go about your business. It is your belief that they won’t do the moral don’ts.  

Since not doing moral don’ts involves not taking actions, it doesn’t require resources. This means we can all obey all the moral don’ts at the same time. The moral don’ts therefore provide a basis for trust that can scale up.  

The rise of civilization is the story of people living in ever-larger groups. In places like America, culture evolved even further, producing the moral belief that we should never do moral don’ts and use government, if necessary, to enforce them. Meanwhile, obeying the moral dos is to be treated as a purely private matter. In other words, we should mind our own business. This is so deeply ingrained in the American ethic that for us it’s like water to fish. 

Being confident that, in most contexts, no harm would come to us led to a habit of extending trust to strangers unless there was a good reason not to. That is the essence of a high trust society. Since trust is a powerful catalyst to voluntary cooperation, this unleashed the power of freely directed cooperation as never before in human history.  

Tocqueville’s own thesis for American success notes that many of our mediating institutions are highly trust-dependent. These institutions were voluntary associations, which is why they were epiphenomenal within a culture of freedom. It is difficult to imagine that such voluntary associations would last long if everyone in them were highly suspicious of everyone else.  

But America’s cultural glue, which makes all of this possible, is weakening. Today’s civic and moral educators don’t stress the primacy of not doing the don’ts over doing the moral dos.  

Instead, they preach that certain kinds of positive moral actions are duties – like driving an electric car. This is a prescription for a virtue-signaling arms race wherein people indulge their moral vanity by doing whatever they can to appear morally superior to everyone else.  

Not so long ago in America it was considered rude to ask anyone other than one’s inner social circle which positive moral actions they undertook. But it now happens every second of every day on social media, in our grade schools, on our campuses, and even at work. 

What really matters for trust is not what you do, but what you don’t do. But since inactions are not observed, they cannot be rewarded with social approval. Just imagine the reaction you’d get by bragging about the lies you didn’t tell, the property you didn’t steal, and the people you didn’t murder.  

To earn explicit social approval, one must do the moral dos. So today, Americans loudly tout their doing of moral dos – whether that’s using the “right” pronouns or boycotting the “wrong” people. But they are basically touting that they are following the company line, so the price of social approval is steadfast conformity that can hardly be described as genuine freedom.  

In most American schools today, children are taught that they should care enough about everyone else to be willing to think, say, and do approved things to produce conformity sufficient to unite us. But that’s not what made America a free and prosperous country. Getting along well enough to freely cooperate even with strangers, while preserving our individuality, is.  

Unless we return to prioritizing not doing the moral don’ts over doing the moral dos, our cultural glue will weaken further, and we will become less trusting and therefore less willing to cooperate outside our most intimate social circles. We will increasingly be unable to do that which made America the envy of the world.  

Figures walk through the slums in Washington, DC, captured by the Farm Security Administration, 1935.

“Slumming it” is a slang expression describing the practice of young people from families of means visiting (or temporarily living in) impoverished areas to experience lifestyles foreign to their upbringing. The practice is often deemed exploitative, the expression offensive. Still, it may well be that slumming it (pardon the historical expression) played an important role in making the world rich.

Economic historians have told and tested a great many tales of how the world got rich — or specifically, how innovation surged in about 1760 in England, then elsewhere, and never let up. In the last few decades, Deirdre McCloskey has promoted a compelling, qualitative origin story broadly subversive to these.

McCloskey’s story is one of ideas and perspectives. Before institutional protections could arise, England had to first find a way to overcome strong moral prejudices against profiteering lifestyles. England somehow did. The damnable pursuit of wealth — when squinting just right — became the courageous spirit of commerce by about 1700.  

McCloskey’s epic effort is remarkable, but her “somehow” remains hazy. Dan Klein has admirably stepped up and suggested it may be found in Hugo Grotius’s philosophy of 1625. Grotius helped establish that commerce only had to be honest — not virtuous — to be acceptable. “Having a go” broke wide open.

I would like to suggest a different origin. Instead of philosophical tomes, it may have been salacious plays and vulgar urban dictionaries — a pop culture from the same era which derived from slumming it around London’s hawkers, slop sellers, and bunters.

A Story of Two Journeys

Two types of immigrants to the rapidly growing city of London are the protagonists of this economic story, both arriving due to legal conundrums.

Primogeniture was conundrum number one. Primogeniture required inheritance to go primarily to the firstborn. By the 1590s, following a post-plague baby boom, the elite landowners had a surplus of cadets (the younger siblings). Many cadets had to leave the countryside and go to vicious London to try to make their way through education or apprenticeship.  

Restrictions against vagrants and vagabonds were conundrum number two. Wandering theater troupes found their way of life in jeopardy, so they came to set up permanent playhouses in metro London, the first opening in 1576.

It is in England’s first “theatre district” (co-located with the marketplaces, alehouses, and brothels in the suburbs known as the Liberties) that our protagonists meet. The playwrights had to appeal to their given audience. They did so by writing tales of validation about this coterie of young cadets who were in the slums (and, yes, enjoying them) but who refused to be of the slums.

From Vicious to Gallant

These cadets were in a difficult social position. They yearned to return to the gentleman’s social status but to do so they needed to engage in those profiteering acts shunned by the gentleman. Playwrights came to their aid by portraying these young men as “Gallants.”

The Gallant was a new version of the traditional British outsider, the Trickster. Where Tricksters were deplorable in their carnal schemes, Gallants were appealing in their designs for love, honor, and money. They embodied their names of, say, Witgood and Possibility against lamentable elder elite such as Lucre and Hoard. More importantly, where prior Tricksters would be cast out when discovered, Gallants would always be forgiven and accepted into the social circle, their guile revalued as cleverness, their crimes as “human follies.”

For three decades, the “City Comedy” genre of the Gallant defined the London theater scene. Night after night it favorably recast the messy primordial stuff of McCloskey’s bourgeois virtues — ambition, opportunism, calculation, and the wily destruction which would in time become “creative destruction.” In addition, it authorized London’s “constant mingling of blood, class, and occupation” and deprecated its fuddy-duddy hierarchies and world views.

Whereas prominent men of science and letters, such as those in “Hartlib’s circle,” used reason to overcome mistrust of social change and experimentation, playwrights steered with what may have been the more powerful stuff of emotion, sympathy, and humor.

From Vulgar to Estimable

Evidence of this genre’s effect can be seen in the curiosity it created regarding the people of the slums and their slang — also known as cant, vulgar tongue, flash, and conny-catching. For the next two centuries, slang dictionaries became a popular purchase, a tantalizing sort of Fodor’s guidebook through the underbelly of London.  

These dictionaries credibly assisted a cultural transformation. First slang became fashionable. Then the dictionaries and their prose spinoffs began to tentatively characterize slang as that of “the people,” then to associate it with the emergent concept of British liberty, then to relish in its British free-spirit (via what must rightly be called an early form of gonzo journalism).

That spirit, it turns out, was foremost the hustle for money — or, I should say, the raising wind for ribben, rhino, cole, colliander, crap, crop, spans, quidds, ready, lowre, balsam, plate, prey, gelt, iron, mulch, gingerbread, dust, and darby. (Not to mention curles, shavings, pairings, and nigs, of course.)

Nothing is more represented in this lexicon than the pursuit of money — from the old professions of bully backs, pot coverts, cutpursers, cole fencers, Covent Garden Nuns, Fidlam Bens, jarke-men, and Figgers, to the new ones of gullgropers, impost takers, sealers, sleeping partners, Grub-street writers, duffers, and stock jobbers.  

By the eighteenth century, the great defenders of the market economy would make use of these familiar portraits of lowly markets — so prominent were these portraits in the public consciousness. Bernard Mandeville would didactically assert that their private vices produced public benefits and, as an acerbic inversion, that the true speakers of criminal cant were traditional authorities. And Adam Smith would assert that the “higgling and bargaining of the market” was a natural and beneficial expression of a human propensity. We are all higglers; a philosophy of bourgeois equality had finally come into its own.

Conclusion

England, adhering to the Great Chain of Being, cast its detritus toward London. London cast back a provocative new pop culture. This pop culture helped society negotiate the ambiguity of hawker ethics and come to terms with the messiness of the emergent commercial order. It helped carefully determine where to cut Grotius’s honest commercial practices from unvirtuous cloth.

I propose, then, that the miracle of the modern economy owes as much to the accidental playhouse of 1576 as to the intentional jurisprudence of 1625 and the late-to-the-game Glorious Revolution in 1689. My proposal does not lend itself to a positivist research agenda, but I put it forward — in the spirit of Deirdre McCloskey — as a charge to read widely, explore deeply, and be willing to slum it a bit in the disciplines of others.

Modern capitalism has no virgin birth; of that one should be sure. And if it so happens that it received its just form, direction, and salvation through a slumming voyeurism, so be it. We would be strengthened in our defense of it to recognize how closely commerce once communed with sin and, in the minds of some, still does.

Attorney General Merrick Garland delivers remarks to Department of Justice employees. 2021.

Consumer interests are paramount in business decisions, but business growth is largely determined by the individuals at the helm of the organizations they serve. Some firms grow to serve a wider consumer base, some to be reactive to demand levels, some to be proactive regarding market trends, and some to be responsive to competitive pressures. Regardless of the ‘why’ of a firm’s growth, the ‘when’ and ‘how’ will impact (or impede) the efficacy of expansion. And sometimes the aspirations of business leaders don’t go according to plan — an acquisition can go awry, a business deal could go bust, or supply chain networks for scaling could shift. Moreover, a competitor, new technology, or a new situation could render a business obsolete. Uncertainties abound in the business realm, which is why most firms look to control what they can, when they can, and advance themselves when able. Google is a case in point.  

At the outset, Google had plans to be bought, not to be one of today’s biggest businesses receiving heat from the DOJ for its search engine prominence. In 1998, Google pitched itself to the premier search engine at that time, Yahoo, with a price tag of $1 million — but Yahoo declined. A year later, Google tried again and pitched itself to Excite, for $750,000. Once again, Google was turned down. (Excite was later bought by Ask Jeeves, now Ask.com).

Google pushed forward and prevailed, and by the start of the 2000s, Yahoo did an about-face and adopted Google as its own search engine provider, then sought to acquire Google for $3 billion. This time, Google said no; Google wanted $5 billion. What a difference a few years can make, paired with a determination to be the best. And, as fate would have it, Yahoo was later bought by Verizon in 2017 for just shy of $5 billion.

Google had some impressive milestones early on that helped it expand, thanks to internal ingenuity as well as external acquisition opportunities. Major initiatives included the 2004 start of Gmail, the 2005 launch of Google Maps, the 2005 purchase of Android, the 2006 acquisition of YouTube, the 2007 acquisition of DoubleClick, and the 2008 introduction of Google Chrome. Today, Google is a juggernaut. Google is to Gen Z what AOL was to Gen X.

Google’s growth is truly an impressive feat, and we have all benefited from its services — something we should remember while government agencies grill Google for its current market dominance in the search engine sector. The reason Google has a dominant position in search is that it took steps to secure itself as the default. But the only way it could achieve default status was by deserving it (in addition to any dollars paid and contracts made). If consumers weren’t satisfied with Google Search, no amount of money could cover the cost for smart device providers to keep Google as the featured choice. Recall the backlash that ensued when Apple made its own map app the default on iPhones in 2012. Had Apple Maps initially worked well, there wouldn’t have been an issue with it being featured over Google Maps, but that wasn’t the case. And, as The Guardian put it, Apple Maps was a $30 billion mistake. Apple had to relinquish its own app on its own smart devices and allow consumers to revert to the better option they preferred — Google.

Google’s superior ability to provide relevant search results is a huge value for users. I know that if something unexpected happens and I need to get a loved one to a hospital, Google Search will find me the closest and best care providers and Google Maps will get me there — and this information is provided in an instant at no cost. Now some will say the cost is my data, to which I say: go ahead, take my data. I want relevant search results, and I like ads and promotions curated to my needs and interests. The last thing I want is to have my smart device ask me which app I prefer. I want the best one as the default, and if I learn of a better one, then I or the provider of my smart device will switch. And the fact that Google is willing to pay for its default placement, even when it knows it is the best, demonstrates how vulnerable it is.

If the government’s alphabet agencies could get out of the way of market mechanisms, an alternative brand or product could unseat Google just as Google displaced Yahoo’s dominance in the early 2000s. Another supposed monopoly that toppled around the same time as Yahoo was Motorola. The 2007 launch of the iPhone ended Motorola’s impressive run of market power. A few years later, in 2012, Google bought Motorola, hoping to make a dent in the mobile sector and revive the brand. That turned out to be a big mistake, and Google would later offload Motorola in a sale to Lenovo.

Yes, Google has surpassed all in search, but it has yet to make a mark in the smartphone space, or the messaging space (remember Google Wave, Google Talk, and Google Allo), or even the social media space (sorry, Google+ and Google Buzz). Is it any wonder, then, that Google aims to safeguard its default search status on smart devices?

Google competes in an international world of ideas, and there is no way of knowing which product or service may suddenly surpass Google’s own offerings. Indeed, Apple Maps has recently relaunched in beta to have another go at Google Maps, and, as for search, consumers are already toying with new tools for their queries and questions thanks to AI advancements. Perplexity AI has been positioning itself to take on Google Search, and now OpenAI is looking to topple Perplexity in the AI search race. Moreover, search engine startups that have reached unicorn status thanks to venture capital investments span the globe and show no signs of slowing down. The DOJ’s case against Google is a moot point and a waste of taxpayer dollars.

As stated earlier, Google is to Gen Z what AOL was to Gen X. And look at what came of AOL. During its heyday, there was no escaping AOL, which controlled more than 90 percent of the market for instant-messaging software. AOL was the undisputed default, and its merger with Time Warner in 2001 was one of the largest corporate consolidations at that time. Together, AOL and Time Warner appeared poised to control mass media and internet activity alike. Yet the dotcom bubble burst, and the difficulty of managing such a massive organization hastened the megamerger’s demise.

Clearly, even the best of the best don’t always know what is best in a dynamic marketplace. Government agencies and officials thinking they know better is truly absurd. The pervasive distaste coming from political elites for American firms that do well by their customers and do well to further their business growth is baffling, particularly when businesses like Google are a huge benefit to not only our economy but also our national security. It’s a shame that the DOJ isn’t on the same page as the Department of Defense, which contracts with Google Support Services (coincidentally, the Army uses Google Workspace and US Special Ops is looking to leverage Google Glass).

Business leaders who continue to surpass global competitors and their capabilities, by taking risks and growing their firms, should be admired (not attacked) for their productivity and progress. And members on the Hill, who take more than they make for our economy, should realize that the bashing of big businesses will dull the dynamism this country is known for. Innovative entrepreneurs will start to shy away. Congress should Google the benefits of economic freedom and also search up how to curtail the gargantuan levels of government spending; the history of economic progress and federal budget restraints can shed light on the mismanagement we all see today. Without question, a bulging bureaucratic state is more costly for Americans than the growing success of our most innovative firms.

Empty supermarket shelves in Caracas, Venezuela, during widespread shortages of food and medicine caused by price controls and hyperinflation. 2018.

Things are rarely so bad that decisive action by government officials can’t make things worse.

In the current election, the Republicans are trying to outdo each other by proposing larger and more restrictive tariffs. The Democrats have just come out with a remarkably bad plan to outlaw “price gouging,” particularly for groceries.

Such proposals get more attention from politicians at election time, because to get votes you have to show you did something.  The fact that the right thing is to do nothing is hard for politicians to accept, because no one can claim credit for the market.

I’m not trying to make a partisan point, because as I noted above there are ill-advised proposals on both sides of the party divide. And I’m not claiming markets are perfect. The problem is that asking voters what they want prices to be is a recipe for becoming… well, Venezuela.

In 1981, about half of Venezuela’s population was living on the equivalent of $10 per day or less (the number for the US was less than 5 percent). That number was flat until 1992, the year that Hugo Chavez launched his unsuccessful coup attempt against the corrupt regime of President Carlos Andrés Pérez. Pérez was forcibly removed from office in 1993; officially, he was removed for embezzlement — which he did in fact do — but even more for showing a near-total inability to deal with social unrest over the collapse of the economic system, even with substantial oil revenues to fill government coffers.

Chavez was pardoned in 1994, and in 1998 he was elected to the Presidency. He immediately worked to deepen and expand the “Bolivarian Revolution,” focused on social welfare programs, nationalizing key industries, and “democratizing” the market system. As long as oil prices were high, and people were satisfied with essentially free electricity as a handout, the “Chavismo” regime was politically successful.

But Chavez died in 2013. His successors tightened and expanded the grip of their socialist philosophy, and GDP went into free fall. Where GDP per capita had been well over $18,000 US in 2013, today it has fallen to around $5,000, a decline of more than 70 percent for an oil-rich nation.

The situation eroded quickly, reaching an early head in the summer of 2015. Prices were skyrocketing because of inflation, caused by the government using newly printed money to pay off debts and make payroll. The government, meanwhile, had accused the corporations that ran large grocery chains of “price-gouging.”

I remember reading about this at the time, and in a way I still can’t quite believe it, nearly ten years later. In July 2015, a massive police contingent raided a hoard of food and grocery products in Caracas. They found tons of food and groceries, which they then distributed for free to people in the street, thereby “liberating” the necessities from the hoarders.

The unbelievable part is that the “hoarder” was Empresas Polar, a giant grocery and food retail conglomerate. The “hoard” was a warehouse, a large distribution center where trucks delivered pallets of wholesale food items, and from which shipments went out to local retailers. It’s not really surprising that an enormous building designed to store food would have a lot of food in it.

But once the warehouses were raided, and the contents donated to the public, prices of food immediately tripled, or more, if food could be obtained at all. Grocery stores all closed, because their supply chains were cut off by the anti-gouging order. Seizing a warehouse full of food meant that a few thousand people got food for “free” for one day, but suppliers immediately tried to get their shipments sent elsewhere, before they could be “liberated” by the “representatives of the people” working for President Nicolas Maduro.

I should emphasize again that I am not trying to make a partisan point. Venezuela, at a time when it was having trouble feeding its population, also imposed very large tariffs on agricultural and other imports, thereby raising prices for consumers even further. But those policies pale in significance when compared to the price control fiasco.

Now, for the bad surprise: the US seems to be well on its way this summer, traveling down “The Road to Venezuela.” In a speech right here in my home town of Raleigh, North Carolina, Vice President Harris announced on August 16 that she would place controls on grocery prices.

“As attorney general in California, I went after companies that illegally increased prices, including wholesalers that inflated the price of prescription medication and companies that conspired with competitors to keep prices of electronics high. I won more than $1 billion for consumers. (Applause.)

“So, believe me, as president, I will go after the bad actors. (Applause.) And I will work to pass the first-ever federal ban on price gouging on food.”

Problems with a federal law on “price-gouging” have been pointed out by others. Such a law would require a benchmark of what the price should be, and a limit on how much grocers could charge. The proposal is also likely a violation of the Tenth Amendment, which reserves “police power” (which surely includes retail point-of-sale prices) to the states rather than the federal government. On the other hand, the interstate commerce power might be stretched to encompass these kinds of sales, for large companies at least.

The real problem is the merits of price controls, rather than problems with enforcement. This description, from X (née Twitter), spells out the generic step-by-step process, accurately identifying what happened in Venezuela and what could happen in the US.

This article in the New York Times argues that intentional price manipulation by grocery chains in the US played at most a minor role:

Consumer demand was very strong. Fed and congressional efforts to boost households and businesses during the pandemic, like the $1,400 payments for individuals Mr. Biden signed as part of the economic rescue plan early in 2021, fueled consumption.

“If prices are rising on average over time and profit margins expand, that might look like price gouging, but it’s actually indicative of a broad increase in demand,” said Joshua Hendrickson, an economist at the University of Mississippi who has written skeptically of claims that corporate behavior is driving prices higher. “Such broad increases tend to be the result of expansionary monetary or fiscal policy — or both.”

Ten years ago, Venezuela set out on a path to economic ruin and grave shortages of basic consumer goods, because of price controls on groceries and other products. Is the US really going to travel on the same road?

Uncle Sam as Tantalus, frustrated in seeking Prosperity by posts representing high protective tariffs and political agitation, divided from his goal by an “ocean of politics.” Puck. 1897.

The Republican Party wages an internal battle while the US economy teeters on the edge of a potential recession, marked by a weakening labor market and volatile financial conditions. The debate within the party is not just over political leadership, but the very economic principles that will define the nation’s future. With inflation-adjusted wages down since January 2021 and savings rates at historic lows, a pro-growth economic agenda is urgently needed. 

Yet some within the GOP or right-leaning groups are pushing for a dramatic shift away from the free-market capitalism that has historically driven American prosperity. This deviation threatens to undermine decades of economic success.

The “New Right,” represented by groups like American Compass, advocates for a return to big-government policies. Under the guise of a new form of conservatism, this faction argues for increased government intervention in the economy, protectionist measures, and the strengthening of monopoly labor unions.

Oren Cass, who leads American Compass, pushed this interventionist approach in his article “Free Trade’s Origin Myth” at Law & Liberty:

“As the American people, and American policymakers, rediscover the importance of promoting domestic industry and protecting the domestic market, economists have a vital role to play in analyzing how best to accomplish the nation’s goals.”  

Cass claims these policies will benefit workers and domestic industries, yet history and economics tell us otherwise. The New Deal, Great Society, and more recent Obama-Biden policies, all rooted in similar principles, have repeatedly demonstrated the failure of such approaches to deliver sustainable economic growth.

This misguided movement threatens the free-market policies that have been the hallmark of much of GOP economic policy. American Compass and its allies call adherents to these principles “free-market fundamentalists,” suggesting that the time has come for the GOP to abandon the policies that have lifted millions of Americans out of poverty and spurred innovation and economic growth.

Consider the economic successes of the Trump administration during its first term — a period characterized by substantial deregulation and tax cuts. 

The American Action Forum compares final rule costs at the same point in each of the last three administrations. Through August 23 of the fourth year of each term, final rule costs totaled $311.7 billion under Obama and $1.67 trillion under Biden, while under Trump they declined by $100.6 billion. The cost of doing business was simply lower under Trump. And according to the Competitive Enterprise Institute, there is always room for more cuts: the high costs of regulations currently top $2.1 trillion per year.

The Tax Cuts and Jobs Act helped boost the economy by lowering tax rates, strengthening incentives to work and invest. The Trump tax cuts and deregulation empowered more economic growth, more job creation, and more broadly shared income gains by allowing the private sector to thrive. Before the destructive pandemic-related lockdowns, real median household income increased by $5,000, wages increased by nearly 5 percent, and the poverty and unemployment rates reached their lowest in 50 years.

These gains, however, are now at risk as key provisions of the tax cuts are set to expire in 2025, and the fiscal crisis driven by government overspending threatens to reverse this progress.

The GOP must resist the allure of the “New Right” and reaffirm its commitment to pro-growth policies that prioritize economic freedom and limited government. This begins with reducing government spending, which is essential to making the Trump tax cuts permanent and preventing a tax hike that would stifle economic recovery. Simplifying the tax code by eliminating special provisions that pick winners and losers would further enhance economic efficiency and equity.

In addition to tax reform, the GOP must focus on streamlining welfare programs and enforcing work requirements. These policies would reduce dependency on government assistance, encourage labor force participation, and strengthen families by promoting self-sufficiency. The economic benefits of such reforms are clear: a more robust labor market, higher productivity, and greater economic mobility.

Embracing free trade is likewise crucial for maintaining America’s competitive edge in the global economy. Protectionist measures, as advocated by the “New Right,” may offer short-term relief to specific industries but ultimately harm consumers, reduce innovation, and weaken the broader economy. On the other hand, free trade fosters competition that drives technological advancement and delivers lower prices and more consumer choices.

The GOP should also prioritize fostering innovation, particularly in the technology sector. The US can lead the next economic revolution by reducing regulatory barriers and promoting a pro-innovation environment, driving productivity and economic growth for decades.

The alternative — a retreat into the big-government policies championed by the “New Right” — would be disastrous. Higher tariffs, increased taxes, and greater government and union control over the economy would exacerbate economic stagnation, fuel inflation, and increase poverty. They echo the failed strategies of progressive leaders like Woodrow Wilson, Franklin D. Roosevelt, and Lyndon B. Johnson — policies that expanded government power at the expense of economic freedom and prosperity for ordinary Americans.

The path forward for sound policy is to embrace the pro-growth approach championed by the American Institute for Economic Research, the Club for Growth’s Freedom Forward Policy Handbook, and Americans for Tax Reform’s Sustainable Budgeting, among others. Reducing government spending, taxes, regulations, and the money supply will unleash abundance.

By recommitting to pro-growth principles, the GOP can present a compelling alternative to the electorate and pave the way for a more prosperous future. Or, it can follow the “New Right” down the progressive road to serfdom.

Federal Reserve Chair Powell participated in a discussion at the Economic Club of Washington last month. 2024.

The Federal Reserve’s efforts to bring down inflation appear to have worked. Indeed, the latest data from the Bureau of Economic Analysis (BEA) suggests the Fed may have reduced inflation even more than it intended. The Personal Consumption Expenditures Price Index (PCEPI), which is the Fed’s preferred measure of inflation, grew at a continuously compounding annual rate of 1.9 percent in July 2024. It has averaged just 0.9 percent over the last three months.

Core inflation, which excludes volatile food and energy prices, also came in low. Core PCEPI grew at a continuously compounding annual rate of 1.9 percent in July 2024, and 1.7 percent over the last three months.

Despite the recent low inflation, prices remain elevated. Headline PCEPI is around 8.8 percentage points higher than it would have been had the Fed hit its 2-percent inflation target since January 2020. Core PCEPI is 7.9 percentage points higher.
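The size of those gaps can be checked with quick arithmetic. A minimal sketch, assuming (for illustration only) that headline PCEPI rose a cumulative 19 percent between January 2020 and July 2024; the 2-percent trend uses continuous compounding over those 4.5 years:

```python
import math

# Illustrative inputs (assumptions, not official BEA values):
years = 4.5               # January 2020 through July 2024
cumulative_growth = 0.19  # assumed ~19% rise in headline PCEPI over the period

# Index level implied by a continuously compounded 2% trend
trend_level = math.exp(0.02 * years)  # ~1.094

# Actual index level under the assumed cumulative growth
actual_level = 1 + cumulative_growth  # 1.19

# Percentage-point gap between the actual level and the 2% trend
gap = (actual_level / trend_level - 1) * 100
print(round(gap, 1))  # ~8.8 percentage points above trend
```

Under these assumed inputs, the gap works out to roughly the 8.8 percentage points cited for headline PCEPI.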

Figure 1. Headline and Core Personal Consumption Expenditures Price Index with 2-percent Trend, January 2020 – July 2024

The Fed increased its federal funds rate target range by 525 basis points between March 2022 and July 2023, and has held its target steady since. With inflation running slightly below target, the Fed now looks poised to begin cutting its target rate.

Speaking at the annual Jackson Hole symposium earlier this month, Federal Reserve Chair Jerome Powell suggested rate cuts would begin in September. “The time has come for policy to adjust,” he said.

It may even be past time for policy to adjust. Remember: monetary policy works with a lag. Today’s inflation reflects the stance of monetary policy months ago. Correspondingly, today’s monetary policy will affect inflation months from now. With inflation already running below target, today’s tight monetary policy will likely see inflation fall further still.

Additionally, disinflation tends to passively tighten monetary policy. Recall that the implied real (inflation-adjusted) federal funds rate target is equal to the nominal federal funds rate target minus expected inflation. Since inflation expectations tend to move in line with inflation, falling inflation typically causes the implied real federal funds rate target to rise. Ideally, the Fed would gradually reduce its nominal federal funds rate target as inflation falls, in order to prevent monetary policy from passively tightening. It hasn’t. Instead, it has maintained its nominal federal funds rate target.
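The passive-tightening point is simple arithmetic. A minimal sketch, taking the 5.25-to-5.5-percent target range from above and using illustrative (assumed) expected-inflation figures:

```python
def implied_real_rate(nominal_target: float, expected_inflation: float) -> float:
    """Implied real federal funds rate: nominal target minus expected inflation."""
    return nominal_target - expected_inflation

nominal = 5.375  # midpoint of the 5.25-5.5% target range

# As expected inflation falls while the nominal target stays put,
# the implied real rate -- the true stance of policy -- rises.
print(implied_real_rate(nominal, 3.0))  # 2.375
print(implied_real_rate(nominal, 2.0))  # 3.375
print(implied_real_rate(nominal, 1.0))  # 4.375
```

With the nominal target frozen, each percentage point of disinflation mechanically adds a percentage point to the real rate.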

To recap: monetary policy is already too tight given observed inflation in recent months and will likely tighten further as inflation continues to decline unless the Fed course corrects quickly.

A September rate cut would certainly be a step in the right direction. But the Fed has a long way to go. Its federal funds rate target range is currently set at 5.25 to 5.5 percent. In order to achieve a neutral policy stance and 2-percent inflation, the Fed must set its nominal federal funds rate target 2 percentage points above the natural rate of interest. Estimates from the New York Fed would put the neutral nominal policy rate at 2.7 to 3.2 percent. Similarly, in the June Summary of Economic Projections, the median Federal Open Market Committee member thought the midpoint of the (nominal) federal funds rate target range would eventually return to 2.8 percent.
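Those estimates pin down how far the Fed has to travel. A minimal sketch; the natural-rate values of 0.7 and 1.2 percent are assumptions backed out of the New York Fed neutral range quoted above:

```python
INFLATION_TARGET = 2.0   # percent
current_midpoint = 5.375  # midpoint of the 5.25-5.5% target range

# Natural (real) rate values implied by the New York Fed's neutral
# nominal range of 2.7-3.2 percent (assumed, for illustration)
for natural_rate in (0.7, 1.2):
    neutral_nominal = natural_rate + INFLATION_TARGET
    cuts_needed = current_midpoint - neutral_nominal
    print(f"neutral nominal {neutral_nominal:.1f}% -> "
          f"about {cuts_needed:.2f} pp of cuts to get there")
```

That puts the required cuts at roughly 2.2 to 2.7 percentage points, consistent with the ballpark figure of 2.5.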

How quickly will the Fed shave 2.5 percentage points off of its nominal federal funds rate target? Markets think it could move fast. The CME Group reports a 69.2 percent chance that the federal funds rate target range is at least a full percentage point lower by the end of the year. That would significantly reduce the distance the Fed needs to travel in order to return monetary policy to neutral.

Alas, history suggests the Fed will move slower than markets currently project. Fed officials were notoriously slow to react when inflation picked up in 2021; slow to reach a tight policy stance once they began raising rates in March 2022; and slow to respond to the disinflation experienced over the last year. Absent a severe economic contraction, it is difficult to believe the Fed would now pick up the pace.

The Fed will almost certainly cut its federal funds rate target by 25 basis points in September, and it will likely continue to cut its target rate by 25 basis points every month or every other month thereafter, until the stance of monetary policy has returned to neutral. Such an approach would shave 50 to 75 basis points off the federal funds rate target this year, not the 100 basis points or more that futures markets are currently pricing in.

Let’s hope that’s enough.

Vice President and candidate Kamala Harris addresses the American Federation of Teachers’ labor union convention in July 2024.

Vice President Kamala Harris recently announced an economic plan for her presidential campaign. A centerpiece is the transformation of the Child Tax Credit (CTC) into a child allowance. If it became reality, the policy would discourage parental employment and risk harming the long-run prospects of children. These unintended consequences together with the plan’s cost should lead voters to reject it. 

The existing CTC provides up to $2,000 per child and is only available to parents with a tax liability or earnings. The Harris plan would increase the credit to $6,000 for newborn children, $3,600 for children age 1 to 5, and $3,000 for children age 6 to 17. Just as important, Harris would delink the CTC from work by delivering the full amount to families who pay no taxes and have no earnings. 

Delinking the CTC from work would turn back the clock on decades of progress improving the safety net. In the 1990s, bipartisan welfare reform moved the country away from unconditional cash welfare to a safety net that required and rewarded work. Defying the predictions of skeptics, the policy shift was tremendously successful in leading single mothers in particular to go to work. Child poverty fell as more resources were brought into homes, and children’s long-run outcomes — as later research demonstrated — improved as well. 

Harris’ CTC plan would risk undoing this progress by going a long way toward bringing back welfare as we knew it. A non-working single parent with two children would receive between $6,000 and $9,600 from Harris’ child allowance. This is in addition to the $9,000 they currently receive in food stamps, totaling around $15,000 to $19,000 in guaranteed assistance not tied to work. This would exceed the combined (inflation-adjusted) value of food stamps and cash welfare the same family would have received in 1996 in the majority of states. In other words, the Harris plan would increase the amount of guaranteed cash or near-cash assistance paid to non-working families beyond what they received the year prior to welfare reform, even before accounting for the growth in the rest of the safety net over the past 30 years.  
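The guaranteed-assistance figures follow directly from the proposed schedule. A minimal sketch; the helper function is hypothetical, with the per-child amounts taken from the plan as described above:

```python
def harris_ctc(child_ages):
    """Proposed per-child credit: $6,000 for newborns (under 1),
    $3,600 for ages 1-5, and $3,000 for ages 6-17."""
    total = 0
    for age in child_ages:
        if age < 1:
            total += 6_000
        elif age <= 5:
            total += 3_600
        elif age <= 17:
            total += 3_000
    return total

# Non-working single parent with two children:
print(harris_ctc([8, 12]))  # 6000 (low end: both school-age)
print(harris_ctc([0, 3]))   # 9600 (high end: a newborn and a toddler)
```

Adding roughly $9,000 in food stamps to these credit amounts yields the $15,000-to-$19,000 range of guaranteed assistance untied to work.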

Making it easier to get by without working is not the only problem; the bigger concern is that the Harris plan would diminish the reward to work — that is, a family’s resources would not increase as much as a result of working. Economists have generally attributed most of the pro-employment success of welfare reform to expansion of the Earned Income Tax Credit, which provides a work reward of several thousand dollars per year. The CTC is structured the same way, providing up to a $6,000 work reward for a family with three children. The Harris plan would eliminate that work reward by making the credit a guarantee for everyone regardless of work effort.

The best prediction is that the Harris plan could lead well over a million parents to exit employment, an effect concentrated among single parents. This was the conclusion of a study I coauthored on the effects of making the 2021 CTC permanent. The Harris plan adopts the same policy with the exception of an even higher $6,000 benefit for newborn children, which would tend to slightly magnify the employment loss we found in our study. 

Employment exit is not the only risk voters should consider. The effect on children is at least as important. In the short run, the greater amount of resources sent to low-income families via Harris’ child allowance would reduce child poverty. But in the long run, employment exit could deprive some children of resources and undo the non-financial benefits of having a parent who works.  

Research suggests that the long-run risks to children are real. A large body of evidence finds that work-rewarding tax credits drive academic improvements among children which translate into gains in employment, earnings and self-sufficiency upon reaching adulthood. The evidence for positive long-run effects of government aid that does not require work is weaker. So turning the CTC from a work-rewarding tax credit into unconditional government aid could risk reversing some of the gains children experienced as a result of welfare reform.  

Proponents of a child allowance may respond that some amount of employment loss — and the associated risks to children’s long run prospects — are a worthwhile tradeoff for a safety net that provides a basic level of protection to poor families with children. That’s a valid point.  

But we should keep in mind the fairly robust set of assistance programs that we already have. A family of four bringing in no income of its own receives around $12,000 in food stamps plus benefits from other nutrition programs, free health insurance coverage via Medicaid, and is eligible for (though may or may not actually receive) cash welfare, energy assistance, and rental housing assistance. We do not need to create a child allowance to ensure families have a floor of government aid. 

The final and arguably most important concern with Harris’ child allowance is its cost. According to the Committee for a Responsible Federal Budget, the proposal would cost over a trillion dollars over the next decade. Given the lack of political will to control the cost of existing government programs to tackle the $35 trillion federal debt, now is not the time to add even more spending to future taxpayers’ tab. The very Americans who the Harris plan seeks to help — children — are the ones who will ultimately face the burden of repaying it in the form of higher taxes and dampened economic growth. 

The Harris child allowance is not worth the costs. More resources would help children in the short run. But the risks to parental employment and the long-term wellbeing of children, not to mention the fiscal costs, are too big a price to pay. We learned from welfare reform that a pro-work safety net helps lift up families. We owe it to families and taxpayers not to forget that lesson.

Liz Truss, former Prime Minister of the United Kingdom, speaks at an American conference. 2024.

“The Old Lady of Threadneedle Street” is the affectionate nickname of the Bank of England, as respected an institution as Britain ever had. Calling something as “safe as the Bank of England” was the highest praise of surety and soundness. Should any financial institutions get out of line, it was said that a simple rise of the Governor of the Bank’s eyebrow would get them back in line. It was a symbol of British tradition and stability. 

Because of the Bank’s stalwart reputation, the incoming Labour government of Tony Blair in 1997 announced that it would hand over responsibility for monetary policy to the Bank. This was meant to reduce the risk of politicized decision making. As an institution above politics, the Bank seemed to be the model for a new form of governing body: the respected, impartial, independent agency. Governments of left, right, and center have followed suit by increasingly turning over contentious decisions from Ministers to independent bodies. 

Yet, as anyone who has read the Federalist Papers could tell you, democratic and judicial checks and balances are important. Without them, power tends, as Lord Acton noted, to corrupt. In the Bank’s case, that fall from nobility is most apparent in its role in the fall of former Prime Minister Liz Truss. The consequences of its actions may be in the process of destroying Britain. 

The received wisdom about the fall of Truss was that she proposed an irresponsible “mini budget” that would have been fiscally disastrous, prompting “the markets” to respond, sending a clear signal that her sort of supply-side policy was unacceptable and leaving her position untenable. This story just doesn’t stand up to scrutiny. All her policies were either expected or well-signaled in advance. The main fiscal issue was the cancelling of scheduled tax rises and a reduction in the top rate of income tax. None of this should have caused financial Armageddon. So what did?

As the Wall Street Journal reported this week, the Bank is tacitly admitting to its role in the whole business. Unlike other central banks, including the Fed, the Bank had doggedly held on to low interest rates until even it could not credibly do so in the face of COVID-caused inflation. The trouble was that Britain’s legacy pension funds, which paid out guaranteed benefits, had followed a risky, highly leveraged hedging strategy during the low-interest-rate era. Once low interest rates evaporated, the funds were left with no alternative but to sell off government bonds. The Bank estimates that most of the rise in bond yields that followed the mini-budget was due to this sell-off, rather than to Truss’s announced policies.

The Bank’s actions were compounded by the rest of what we can term “the economic blob” – officials insulated from effective oversight, just like the Bank. According to Truss’s autobiography, officials at the Treasury didn’t even know these hedges existed. At the Office of Budget Responsibility, another independent agency set up, this time by David Cameron, to ensure the depoliticization of fiscal matters, officials sent out critical letters to Truss and her Chancellor, containing an analysis that has since proved incorrect, and which were immediately leaked to the press. The damage was done – the Bank and the blob had their fall guy. 

The consequences of the blob’s actions have proved to be significant. The Conservative Party lost its reputation for economic competence, free-market policies became anathema, and the subsequent Tory government of Rishi Sunak plunged headlong toward its worst defeat ever. 

This meant the election of a Labour government with an enormous majority and virtually no mandate. It has presided over the introduction of what many regard as a two-tier justice system, with native Britons sent to jail for Facebook posts while ethnic minority violent offenders get much lighter sentences or are let off entirely. The actual situation is more complicated, but the nuances are probably less important than the perception. 

As far as the economy goes, Prime Minister Keir Starmer has announced that things are going to get worse and his budget will need to be tough – this from the party that condemned “austerity” after the financial crisis. What Starmer has not done is show any sign of tackling the blobs that rule Britain. 

No wonder. As Stephen Davies of the Institute of Economic Affairs has noted, the school of politics that produced these blobs “combines designed and regulated markets with social engineering and government-by-experts.” That model is coming apart together with the country it tries to govern. The Bank of England may survive Britain’s potential collapse. Its reputation should not. 

A commissioner of the Bureau of Weights and Measures investigates food prices in a New York City store at the beginning of World War I. Bain News Service. 1914.

Rising grocery costs continue to put the squeeze on families. Overall, the cost of a trip to fill the pantry has risen nearly 22 percent since the beginning of 2021. Many specific staples rose far more — eggs are up 110 percent, flour up 29 percent, orange juice up 82 percent. A family of four spending $1,000 per month just three and a half years ago is now spending an additional $2,640 annually for the same shopping list.  

Unfortunately, Vice President Harris misdiagnosed the source of the problem as “bad actors” seeing their “highest profits in two decades.” She blames the initial surge in food prices on supply chain issues during the pandemic — certainly a major contributor to the shortages and price increases on many items early in the pandemic.  

However, Harris mixes this truth with falsehood by claiming businesses are now pocketing the savings after these supply-chain issues have subsided. Her proposed solution — “the first-ever federal ban on price gouging on food” — will compound the misery.  

First, the faulty diagnosis. A look at the data easily counters this.  

An insightful way of analyzing whether price increases are due to “gouging” is to focus on the variable production costs of the goods sold plus the selling, general, and administrative expenses. Tyson Foods — the world’s largest chicken, beef, and pork processor — saw its margin drop from 8.4 percent in 2020 to just 1.1 percent last year. Kraft Heinz and General Mills — food processors with combined revenue nearly equal to Tyson’s — suffered similar results. Kraft Heinz’s margin declined from 21.4 percent to 20.2 percent. General Mills’s shrank from 17.8 percent to 16.8 percent. Far from “gouging,” these industry leaders are failing to pass along the entirety of their own cost surges to consumers. Expenses relative to sales increased during the past three and a half years of elevated inflation.  

After accounting for all expenses — including extraordinary items, taxes, and interest — margins are even tighter. Notably, Tyson Foods posted a net profit margin last year of negative 1.23 percent. Kraft Heinz realized a 10.72 percent net profit margin last year, and General Mills a 12.91 percent margin.  

What about the industry as a whole? Profit margins have shrunk as food manufacturing costs have risen 28.4 percent since January 2020, exceeding the 26.3 percent increase in retail food prices over the same period. Grocery store profit margins sank to 1.6 percent in 2023, the third consecutive year of decline after peaking at 3.0 percent in 2020.  

In other words, grocer profit on $100 of sales is just $1.60. Profit margins contracted as overall food inflation totaled 20.6 percent in those three years. The biggest grocers have experienced this margin crunch. The Kroger Co. — the nation’s largest traditional supermarket — eked out an operating margin of 1.93 percent this past year, lower than it was pre-pandemic. These trends are the opposite of gouging.  

History provides endless proof that prices set by governments below the market price result in shortages. Demand expands as supply shrinks. What good is a lower price if the shelves become empty?  

Venezuela, Cuba, and the Soviet Union provide ample examples of the dangers of price controls. But the United States embarked on its own failed experiment just five decades ago. In August 1971, President Nixon ordered an initial 90-day freeze on prices and wages, with future price increases to be subject to federal approval. The program initially proved wildly popular, with 75 percent public support and a landslide re-election the following year. President Nixon even ordered IRS audits of companies breaching the ceilings.  

Ultimately, the program ended in disaster. As explained by Daniel Yergin and Joseph Stanislaw, “Ranchers stopped shipping their cattle to the market, farmers drowned their chickens, and consumers emptied the shelves of supermarkets.” In April 1974, the administration dismantled most of the program.  

Importantly, the inflation of the early 1970s resulted largely from easy money. From the beginning of 1970 through the demise of the price-fixing program in April 1974, the M2 money supply expanded by 48 percent. In just over four years, prices rose by nearly 27 percent — an increase equivalent to that of the entire prior decade!  

Does this sound familiar? It should. The inflationary surge of the post-COVID era is largely a direct result of the explosion of government spending beginning in 2020. The Federal Reserve financed much of this spending by ginning up its digital printing presses to purchase government bonds alongside a myriad of other assets — from mortgage-backed securities to corporate debt.  

The flood of new money coursed through the economy. The M2 money supply swelled by 40 percent in just two years. More dollars chasing goods and services ultimately resulted in dramatic price hikes.  

Harris appears to have forgotten the important lessons from this episode. Based on her insistence that price gouging is responsible for high grocery prices — when it clearly is not — the Vice President’s proposal would more likely function as a price freeze or command pricing. As such, the existence of state laws currently prohibiting dramatic price increases during emergencies should not assuage concerns about Harris’s proposal. Of course, even these state laws may result in the unintended consequence of shortages — but these temporary interventions in the market are rarely activated.  

With deficits looming even larger in the years ahead, the threat that the central bank will finance this spending with another bond-purchasing spree only increases. The food production industry is not immune from the ravages of this reckless monetary policy: the spiral of rising costs for labor, insurance, and equipment. In addition, the sector is particularly sensitive to the assault on affordable fuel vital to the cultivation and transportation of food.  

It’s time political leaders admit their own culpability in the shrinking purchasing power of the dollar at the grocery store. Blaming painful price increases on the very entities responsible for the most bountiful, readily accessible supply of sustenance in human history is woefully misleading. Imposing price controls is a demagogic solution harmful to farmers, processors, grocers, and families.  
