Concept of Bitcoin being used as currency for small purchases, like your morning cappuccino.

The federal government taxes cryptocurrencies as “property.” Income, if there is any, is taxed at regular income tax rates, and changes in prices are treated as capital gains or losses. This treatment means that every transaction requires computing the capital gain or loss in terms of US dollars.

Transactions using foreign currencies, in contrast, do not require paying capital gains tax for gains under $200. There is no reason not to treat cryptocurrencies the same way. Indeed, there have been several proposals to do exactly that.

For example, Robert F. Kennedy, Jr. proposes eliminating capital gains taxes on de minimis transactions in cryptocurrencies. “De minimis” is a legal term from Latin that means “sufficiently unimportant that it can be ignored.” A $200 gain is regarded as de minimis for foreign currencies. Why not for cryptocurrencies?

The Virtual Currency Tax Fairness Act has been introduced in Congress several times in recent years, including in the current session. The 2024 bill would eliminate the tax on gains up to a de minimis amount of $200 and index that amount to inflation.

Taxing cryptocurrencies as property is no more of a problem for cryptocurrencies held as an investment than for corporate stock. Corporate stock has had this tax treatment for many years.

Taxing cryptocurrencies as property makes it more costly to use cryptocurrencies to buy or sell goods and services. If a buyer pays dollars to purchase a gallon of milk, he does not incur a tax on the dollars. (He may incur a sales tax on the value of the milk, but this is a separate issue.) If, instead, the buyer pays with a cryptocurrency, he must compute the capital gain or loss on the cryptocurrency and determine his tax. First, he must identify the dollar price of the cryptocurrency at the time it was acquired. Then, he must determine the dollar price of the cryptocurrency when the milk was purchased. The change in the value of the cryptocurrency in dollars is the capital gain or loss. Finally, he must determine the capital gains tax rate that applies to the transaction.
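The calculation the buyer faces can be sketched in a few lines of Python. All figures here (the coin amount, the acquisition price, the spot price, and the tax rate) are hypothetical, chosen only to illustrate the arithmetic:

```python
def capital_gain(amount, basis_price, spot_price):
    """Dollar capital gain (or loss) on crypto spent in a purchase."""
    return amount * (spot_price - basis_price)

# Hypothetical example: spending 0.0001 BTC on a $4 gallon of milk.
amount = 0.0001          # BTC spent on the milk (assumed)
basis_price = 30_000.0   # USD/BTC when the coins were acquired (assumed)
spot_price = 60_000.0    # USD/BTC at the time of purchase (assumed)

gain = capital_gain(amount, basis_price, spot_price)
tax_rate = 0.15          # assumed long-term capital gains rate
tax_owed = gain * tax_rate

print(f"Gain: ${gain:.2f}, tax owed: ${tax_owed:.2f}")
```

Even this toy case requires a record of the acquisition price and the spot price at the moment of sale, repeated for every purchase.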

That’s a lot of calculating to purchase a gallon of milk. Moreover, he must perform a similar calculation for every cryptocurrency transaction. This extra work raises the cost of using cryptocurrencies in transactions and limits their use. Even today, some people have long lists of gains and losses on cryptocurrencies to send to the Internal Revenue Service.

It is not hard to improve this situation: eliminate capital gains taxes on cryptocurrencies used in smaller transactions. A capital gains tax on small gains, for example a gain of one dollar, is absurd: the tax rounds to zero dollars because tax forms ignore pennies.
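A de minimis rule of the kind these proposals describe is simple to state. The sketch below applies a per-transaction $200 exemption; whether the actual bills apply the threshold per transaction or aggregate gains per year is a drafting detail this illustration leaves aside:

```python
DE_MINIMIS = 200.00  # exemption threshold, as in the proposed 2024 bill

def taxable_gain(gain):
    """Gain subject to capital gains tax under a de minimis rule.

    Gains at or below the threshold are ignored entirely; larger
    gains remain taxable as before.
    """
    return gain if gain > DE_MINIMIS else 0.0

print(taxable_gain(3.00))     # the milk purchase generates no tax
print(taxable_gain(1500.00))  # a large gain is still fully taxable
```

Under such a rule, everyday purchases drop out of the tax computation entirely, while investment-sized gains are treated exactly as they are today.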

A common complaint by those who would like to eliminate cryptocurrencies is that cryptocurrencies seem more like financial assets than monies. Treating cryptocurrencies like property for tax purposes discourages people from using them like monies. By reducing the cost of using cryptocurrencies in small transactions, treating them like foreign currencies for tax purposes would encourage people to use them like monies.

Treating cryptocurrencies like foreign currencies might seem like an unimportant change, but it isn’t. Early cryptocurrency proponents suggested they might be used (among other ways) to make micro-payments on the Internet. For example, the Basic Attention Token lets people pay for content and advertisers pay people for viewing advertisements. This and similar schemes might well be more widely used if cryptocurrencies were treated like foreign currencies for tax purposes. Instead, they are treated like property. That means the associated taxes are either a pain if computed or a gray area if ignored.

Cryptocurrencies should be taxed in the United States on the same basis as foreign currencies. This would be a big change in the taxation of cryptocurrencies and might have big effects on how much they are used.

Federal Reserve Bank of St. Louis. 2019.

Fed-watching has become a hobby for many, and an occupation for some. When will the Fed cut interest rates? (September) How much will it cut interest rates? (25 – 50 basis points) How will this affect credit card and mortgage rates? (They will start coming down)

These are the kinds of questions most people ask. But what people wrongly ignore is the Fed’s huge balance sheet and its related consequences: subsidizing explosive government deficit spending and creating a bailout culture.

During the Global Financial Crisis of 2008, the Federal Reserve engaged in unprecedented interventions into the financial system. It opened Pandora’s Box, and out of it came large-scale asset purchase (LSAP) programs – better known as quantitative easing (QE). QE caused the Fed’s balance sheet to expand dramatically.

Quantitative easing allowed the Fed to expand its securities purchases into other asset markets besides US Treasuries – namely mortgage-backed securities (MBS). Although the first QE program was justified as a crisis measure, once the Fed crossed that line it would do so again and again for what could hardly be called emergencies. The legacy of quantitative easing continues to this day.

 Let’s take a brief look at QE programs over the past fifteen years and speculate about what lies ahead. Table 1 shows five distinct periods of quantitative easing and their magnitudes. It also highlights the past two years of quantitative tightening. 

Table 1: Quantitative Easing Programs

Program    Dates           Initial Assets   Ending Assets   Change
QE I       9/08 – 5/09     $0.9 Trillion    $2.2 Trillion   $1.3 Trillion
QE II      11/10 – 7/11    $2.3 Trillion    $2.9 Trillion   $0.6 Trillion
QE III     11/12 – 10/14   $2.8 Trillion    $4.5 Trillion   $1.7 Trillion
QE Covid   2/20 – 6/20     $4.2 Trillion    $7.1 Trillion   $2.9 Trillion
QE IV      8/20 – 4/22     $6.9 Trillion    $9.0 Trillion   $2.1 Trillion
QT         4/22 – Today    $9.0 Trillion    $7.2 Trillion   -$1.8 Trillion
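The change column in Table 1 is just ending assets minus initial assets, which a few lines of Python can verify (figures transcribed from the table, in trillions of dollars):

```python
# Initial and ending balance-sheet assets for each program, from Table 1.
programs = {
    "QE I":     (0.9, 2.2),
    "QE II":    (2.3, 2.9),
    "QE III":   (2.8, 4.5),
    "QE Covid": (4.2, 7.1),
    "QE IV":    (6.9, 9.0),
    "QT":       (9.0, 7.2),
}

for name, (initial, ending) in programs.items():
    change = round(ending - initial, 1)  # the table's change column
    print(f"{name:8s} {change:+.1f} trillion")

# Cumulative expansion across the five easing programs (excluding QT).
total_qe = sum(round(e - i, 1) for n, (i, e) in programs.items() if n != "QT")
print(f"Total QE expansion: {total_qe:.1f} trillion")
```

The five easing episodes sum to $8.6 trillion of gross expansion, against which the roughly $1.8 trillion of tightening since 2022 is comparatively modest.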

Figure 1 shows the effects of all this QE on the Fed’s balance sheet.

Figure 1: Assets on the Federal Reserve Balance Sheet 2003 – 2024

In November of 2008, financial markets around the world were distressed. In the US, stocks were down almost 50 percent from a year earlier. Bear Stearns had failed in March. Lehman had failed in September. American International Group (AIG) had been bailed out not once but twice. Then-Treasury Secretary Paulson had testified to Congress in late September that another great depression was imminent if they failed to authorize the $700 billion Troubled Asset Relief Program (TARP) to rescue the financial system.

Bernanke and the FOMC had been actively reducing the federal funds rate, from 5.25 percent in August 2007 to less than 0.25 percent by December 2008. They had also launched a variety of temporary liquidity facilities to dampen the fire-sale pressure on financial assets. By late 2008, their target interest rate was nearly at the zero lower bound, but they wanted to do more to shore up financial markets.

Quantitative Easing I

In November 2008 the Federal Reserve opted to create more liquidity using the Fed’s balance sheet. It announced that it would buy $600 billion of securities to provide more liquidity to the market. $100 billion would go towards buying “GSE direct obligations” or agency debt, and the other $500 billion would go towards buying MBS backed by GSEs. 

But that was just the beginning. 

In March of 2009, citing weakness in the economy, the Fed doubled down on this strategy. It announced that it would expand its bond-buying program in three ways. First, it authorized the purchase of an additional $750 billion in agency-backed MBS. Second, it authorized the purchase of up to another $100 billion of agency debt. Third, it authorized adding up to $300 billion in purchases of long-term Treasury bonds.

At the end of this bond-buying spree, now known as Quantitative Easing I (QE I), the Fed’s balance sheet stood at $2.3 trillion in June 2010 – roughly two and a half times its size in early 2008.

Figure 2: Federal Reserve Balance Sheet Assets 2008 – 2010

The Fed had provided massive liquidity to financial institutions and financial markets. But they had to figure out how to avoid an inflationary boom from that additional liquidity multiplying the money supply. To do this, the Fed utilized a new tool to “sterilize” all this new liquidity: interest on reserves (IOR). The Fed started paying banks interest on their reserves to encourage them to hold more reserves by issuing fewer new loans. The money supply (M1, M2, etc.) did not grow in nearly the same proportion as the monetary base did.

By late 2010, the FOMC indicated it would let its balance sheet shrink naturally as its assets matured and “rolled off.” It estimated that the balance sheet would shrink back to $1.7 trillion by 2012. And so the first major foray into quantitative easing by the US ended with a whimper, and with shifting goals, more than a year and a half after the 2008 financial crisis.

Yet subsequent years suggest that the Fed had no interest in going back to pre-crisis levels. Though June 2010 was the end of QE I, it turns out QE I was only the overture of the Fed’s asset-buying spree of the past decade and a half.

Quantitative Easing II

Only a few months after indicating it would let assets start rolling off its balance sheet, the FOMC shifted course. With unemployment remaining stubbornly high throughout 2010, Bernanke and the FOMC decided to engage in another round of QE to put more liquidity in the economy. Although the financial crisis was well over by late 2010, the Fed still felt responsible for “fixing” the American economy – especially reducing the elevated unemployment rate. This would be its ever-weakening justification for buying huge quantities of securities whenever it felt like doing so.

This second round of QE involved buying another $600 billion of Treasury securities by the middle of 2011. By July 2011 when the Fed ended QE II, its balance sheet stood at roughly $2.8 trillion, half a trillion greater than its $2.3 trillion balance in 2010.

Quantitative Easing III

But as economies tend to do, the US economy did not respond to the half-trillion-dollar purchase of Treasury securities the way Bernanke wanted. So in September 2012, Bernanke decided to pull out all the stops. He launched QE III, or what some have called “QE-Infinity.” Under this third round of quantitative easing, the Fed began buying $40 billion of MBS every month. Within a few months, it had increased its monthly purchases to $85 billion.

The program continued until October 2014, with the Fed tapering its purchases beginning in January 2014. By then the Fed’s balance sheet had grown by almost two trillion dollars to $4.5 trillion. These numbers would have been unfathomable just a decade earlier.

Figure 3: Federal Reserve Asset Growth During QE III

Yet it appears that Fed officials came to believe that they had the authority and power to do whatever they deemed necessary to “fix” the economy. That belief was on full display in the Fed’s response to the pandemic, its response to the Silicon Valley Bank and Signature Bank failures, and its recent tapering of the quantitative tightening program.

Quantitative Easing Covid

When the global pandemic took off in early 2020, the Federal Reserve didn’t miss a beat. With governments shutting down economic activity and millions of people dropping out of the labor force, the Fed promised to inject as much liquidity as it took to keep financial markets stable. It turns out that number was just shy of $3 trillion – injected over just three to four months.

Quantitative Easing IV

In a pattern surprisingly similar to the post-GFC years, the Federal Reserve abruptly stopped injecting liquidity and expanding its balance sheet in July of 2020, and even let it start shrinking again briefly. Yet, despite the worst of the pandemic being behind us, in August 2020 the Fed began another round of quantitative easing (QE IV), injecting liquidity and expanding its balance sheet for a year and a half and by an additional $2 trillion.

Quantitative Tightening

In the wake of the Fed’s balance sheet exploding to nearly $9 trillion, and the accompanying high inflation, in April of 2022 the Federal Reserve embarked on a significant round of quantitative tightening to reduce the size of its balance sheet. Since then, the Fed’s balance sheet has contracted by almost two trillion dollars. But as I’ve written elsewhere, it slowed its rate of QT in May to a trickle.

What Lies Ahead

We are on the cusp of a monetary loosening cycle. Market participants believe an interest rate cut in September is a done deal. More rate cuts will almost assuredly follow. The Fed will also announce relatively early in the rate-cutting cycle that they have reached an “ample reserve” level and will no longer allow their assets to roll off the balance sheet. And depending on economic conditions around employment levels and GDP growth, the Fed could very well start a new round of quantitative easing to supplement their interest rate cuts.

The Federal Reserve likes having a large balance sheet that they can expand or contract at will to massage financial markets. The open question is whether they will be able to keep exercising discretion in the face of ever mounting federal debt and the increasing political pressure to purchase large quantities of that debt.

It’s fitting that Ronald Reagan spent many years of his all-American life in Hollywood. His trajectory — from small-town boy in Illinois to lifeguard to Eureka College football player to actor to TV pitchman to governor to president — somehow both embodies and transcends his era. It demands a feature-length biopic adaptation.

Yet Hollywood being what it is, there wasn’t one. Audiences have instead mostly been treated to Reagan in cameos and bit parts, typically (though sometimes at least amusingly) caricatured beyond recognition. Take Lee Daniels’s The Butler, which presents a fictionalized version of the life of Eugene Allen, a black man who served as a White House butler for 34 years, through multiple presidents, including Reagan. But Reagan (Alan Rickman) in The Butler is a mischievous somnambulist who is also racially retrograde (a fabrication).

We can dispense quickly with what Reagan, the new feature film starring Dennis Quaid and directed by Sean McNamara, does right. Quaid is passable in the title role. He adequately portrays the mannerisms and sometimes approximates the aura of Reagan himself. His vocal mimicry of Reagan is (mostly) not distracting, though it has been done better. The rest of the cast is fine. It is full of performers best described as “that guy”: serviceable character actors of sufficient ubiquity to inspire a vague sense of recognition. They might have been the only ones, outside of known conservatives in Reagan such as Jon Voight and Nick Searcy, willing to appear in the rare movie that is not merely not outright hostile to Reagan, but on his side. That is, itself, a kind of relief for the conservative moviegoer, but . . . more on that later.  

Yet this only makes its failures more frustrating. Start with the structure.

Reagan, bizarrely, directs attention away from Reagan himself. It is presented as a frame narrative, told by the fictional former KGB agent Viktor Petrovich (Voight). Paul Kengor, the Reagan historian on whose books The Crusader: Ronald Reagan and the Fall of Communism and God and Ronald Reagan: A Spiritual Life the movie is based, described Viktor as a composite of the many Soviet agents known to have monitored Reagan over the years, and “a smart way for the producers to keep the integrity of the story and yet make it entertaining.”  

It’s possible to conceive of a way this might have worked. Imagine a Viktor at the end of the Cold War, flashing back to when he first received his Reagan ‘assignment.’ To understand Reagan, he examines his life up to that point, bringing us back to his upbringing. He then continues to advise his Soviet superiors on Reagan as Reagan’s rise continues, all the way up to and through the presidency. Viktor’s study of Reagan, at first detached and clinical, compromises his objectivity, until he ultimately realizes Reagan was right about communism and about the Soviet Union. Think It’s a Wonderful Life crossed with The Lives of Others.  

What Reagan does instead makes no sense. It shows Viktor in Russia in the present day, visited by a young politician (Alex Sparrow) being groomed as Vladimir Putin’s successor. Viktor tells Reagan’s life story to this young man so that he can understand what went wrong for Russia during the Cold War. Conversations between the two periodically interrupt depictions of Reagan’s life, and Voight’s Russian-accented English forcibly hovers over many of the biographical events the two don’t interrupt, providing contrived assessments and explanations of what is going on. And this frame narrative ends with the apparent implication that the young politician now aspires to become Russia’s Reagan, a thematic takeaway of baffling intention, if meant deliberately. Dropping this aspect entirely could have improved the movie significantly.  

But it could not have salvaged Reagan. That is, in part, because it attempts to do too much. A complete depiction of Reagan’s life over a little more than two hours is an ambitious task. Reagan does not rise to it. It moves so quickly that it becomes a staccato collection of moments. Hey, there’s Reagan as a lifeguard! Oh, there he is playing football at Eureka! Look, now he’s in Hollywood! Wait, now he’s doing commercials! Also, his mom died! You’d be better off turning to any of the excellent biographies of Reagan to learn about his life with any meaningful depth.

The treatment of Reagan’s presidency is similar. It provides a superficial highlight reel that does not capture his political skills. On the whole, this all-of-the-above approach not only shortchanges individual scenes, even those that show or reference rightly famous moments from Reagan’s life, but also forces these shortened sequences to over-rely on the tropes and cliches that serve as crutches for the strained screenwriter. Overly explanatory dialogue, always a risk even for the best biopics, is abundant throughout. Nancy Reagan (Penelope Ann Miller) is saddled with much of it.  

Reagan aims to present its subject sympathetically, if not flatter him outright. But its approach for doing so fails. Despite all the voiceovers, all the explanations, all the helpful on-screen text telling us where and when we are, the movie cannot overcome the fact that it somehow at once asks too much of viewers and trusts them too little. It assumes that, with its clunky help, we can overcome its deficient provision of context and its failure to give us anything more than surface-level impressions of what is being depicted (an inevitable byproduct of its unwieldy scope). The end result, however, is that while some people might come out of the movie having learned a few interesting facts about Reagan’s life, they will not have come to understand him any better. Up to a certain point, he is shown, in rote fashion, as a mere product of inputs around him; after that, he is driven by an unexplained core the movie does not meaningfully explore or elaborate upon.

Reagan is like a glimpse into some alternate-universe Hollywood, dominated by right-wing hacks instead of left-wing ones, who reliably produce nigh-propagandistic films about subjects and people that suit their interests instead. You’d need to be pretty stubborn, as a conservative, not to take some joy in seeing Reagan presented positively, in hearing the name Whittaker Chambers spoken aloud without condemnation, or in witnessing the Soviet Union depicted as the “evil empire” it was. It may feel natural to do so.

But these are cheap thrills, and false joys.

Reagan doesn’t even succeed as propaganda, as it doesn’t know how to transmit its message or even what that message really is. It leaves us only with warm fuzzies about Reagan (reinforced by a credits sequence I found emotionally manipulative and possibly a bit tasteless). But vibes are a disservice to one of the greatest men of the twentieth century, a man who restored America’s faith in freedom and in itself while peaceably vanquishing its most fearsome adversary.  

There will be — indeed, there already is — an attempt to persuade the Right to watch Reagan as a kind of political obligation. Hostile reviews in the mainstream press will likely motivate this effort through negative partisanship and a divide between elites (herein represented by film critics) and the common people (moviegoers). But to criticize Reagan is not to criticize Reagan, a figure of enduring relevance today, despite what some argue. In fact, to have squandered a chance to convey his greatness — honestly, skillfully, and in full — to the world is the real offense. Conservatives should not accept propaganda (especially when it’s poorly done). Those who have longed for a satisfactory depiction of Reagan’s made-for-the-movies life will have to keep waiting.  

Senator Elizabeth Warren makes outreach calls beside her campaign’s ‘billionaire tears’ coffee mug. 2019.

Auburn University’s football team lost every game in 1950. As Bill Cromartie put it in his book Braggin’ Rights, a game-by-game account of the Alabama-Auburn football rivalry, “Alabama fans laughed, poked fun at and cracked jokes about Auburn.” The Auburn Tigers got their revenge in 1955 when they beat the Alabama Crimson Tide 26-0 to hand the Tide a winless season of their own. The Auburn faithful were ecstatic, and they reveled in their rivals’ misery.

In a conflict-riddled world, your gain is their loss, and vice versa. In a world infected with the zero-sum fallacy, there can be only one conclusion: whatever they have, they must have stolen.

This zero-sum mindset has always puzzled me with respect to college football. The NFL is a zero-sum game across the board: if you’re the Cincinnati Bengals competing for the AFC North championship, then of course you want the Browns, Ravens, and Steelers to lose every week, because their losses are your gains. It’s a little more complicated in college football, however, because the reward structure relies so heavily on impressions. The College Football Playoff is an invitational affair, and strength of schedule matters. No one was seriously advocating that the Liberty Flames get a spot in the 2023 four-team CFP, even though they won all their games, because they played a weaker schedule against which most teams in the Power Five conferences probably would have gone unbeaten.

If you ask a lot of college football fans, every Saturday brings the possibility of two great outcomes:

Our team wins.

Our hated rival loses.

A bit of good-natured ribbing for your neighbor flying the rivals’ flag can be fun and all, but this doesn’t make a lot of sense if you want to win championships. A more rational scheme would be:

Our team wins.

Any outcome that makes our team look better.

That means rooting for the rivals. Alabama got into the 2023 College Football Playoff on the strength of a decisive win over Georgia that wasn’t as close as the score indicated. If Georgia hadn’t won 29 straight games and back-to-back national championships, Alabama probably would have been passed over. Indeed, one of the main arguments against including Alabama was that they needed a last-second miracle play to beat an Auburn team that had been absolutely massacred on their home field by New Mexico State — a fine team, but a team Auburn had paid $1.8 million to serve as a punching bag.

So what does this have to do with political economy, and especially the upcoming presidential election? More than you might think. So much political discourse is about ensuring the bad guys suffer, even when their suffering hurts those who hate them. The people buying “Billionaire Tears” coffee cups and tumblers from Elizabeth Warren’s website in 2020 didn’t seem to realize that this is not a zero-sum game.

Jeff Bezos, Sam Walton, Bill Gates, and so many others became mega-wealthy not by stealing from people but by providing them with goods and services they liked at prices they were willing to pay. It’s ghoulish to think they should suffer.

College football fandom is a microcosm of the problem of political economy. It shows that people are willing to pay a price to make their enemies suffer. This is all well and good in something innocuous like college sports — better that this all happens on the vicarious battlefields of college football rather than actual battlefields — but it is positively destructive in a world where we revel in others’ misery.

No one is poor because Jeff Bezos is rich. Unlike crowned heads and royal families, Jeff Bezos created his fortune by creating what is simply the greatest store that has ever existed: Amazon. Consuming his wealth — which includes a lot of Amazon stock — might briefly fund redistributive programs, but it reduces society’s stock of valuable productive assets, and thereby reduces everyone’s standard of living in the long run. In his article “Taxation as Social Justice,” Michael Munger refers to a song by Ten Years After that includes the lyric “Tax the rich/feed the poor/’til there are no rich no more.” He notes that it’s remarkable that it isn’t “‘til there are no poor no more.” Consuming Jeff Bezos’s capital out of spite and hamstringing Amazon might not be such a big deal if you’re reasonably healthy and financially comfortable and can maybe skip this latest iPhone. It’s a much bigger deal if you’re counting coins in the grocery store checkout line to see if you can afford to buy that last can of soup. The “billionaire tears” mug is funny and all, but there’s a lot of collateral damage. Being willing to lose a championship to hurt your sports rivals is one thing. Accepting lower living standards for everyone to hurt your political rivals is something else entirely.

View of the stage at the Democratic National Convention in Chicago. Lorie Shaull. 2024.

The Democratic National Convention featured the word freedom over and over and over. It used Beyoncé’s song “Freedom.” Kamala Harris said Democrats “choose freedom” and defined her campaign as “a fight for freedom,” to create “a country of freedom, compassion and the rule of law,” which would offer “the freedom not to just get by, but to get ahead.” The media also played along. One Los Angeles Times headline I saw proclaimed that “Harris offers freedom.”

The first thing all that hard selling brought to my mind was, “Why did someone from an administration that has been in office three and a half years need to promise such a freedom revolution now, when they have had plenty of time to deliver it already?” Was it just the hope that Will Rogers was a prophet when he said, “The short memories of the American voters is what keeps our politicians in office”?

My second thought was that what was defined in Chicago as freedom was certainly “interesting,” to use the word my Mom would tell me to use when she couldn’t say something good about something. Pushing for ever-stricter gun control, ignoring both its highly questionable efficacy and its violation of the freedom specified in the Second Amendment, was redefined as the “freedom to live without fear of gun violence.” The promise of compassion and “the freedom not to just get by, but to get ahead” ignored all the freedoms that would be infringed to raise the trillions of dollars to fund their plans to advance such freedoms. The promise to deliver “the rule of law” ignored not only the striking difference from the Biden-Harris administration’s behavior, but also that the laws they overwhelmingly favored involved massive special treatment for favorites at other Americans’ expense, a far cry from the Constitution’s call for the federal government to advance our “general Welfare.” Governor Shapiro even asserted that what Democrats were hawking was “real freedom.”

Then what really struck me was how, for all the freedom talk, there was very little liberty on offer in Chicago. And the distinction was important, explaining among other things why I have always liked the word liberty better than the word freedom. 

“Liberty” seems clearer to me about what it is liberty from: man-imposed coercion. “Freedom” is more agnostic about what it is freedom from. For instance, I can take your money and call it an increase in my freedom. Perhaps Ludwig von Mises stated what has become my view most clearly when he wrote in Liberty and Property, “Government is essentially the negation of liberty. Liberty is always freedom from the government. It is the restriction of the government’s interference.” And if there is anything the Democrats were not offering, it was less government interference and imposition.

This episode reminds me of FDR’s “Four Freedoms” speech, in which his first two listed freedoms — freedom of expression and freedom of worship — were consistent with liberty, because those freedoms for you do not take away from the same freedoms for me. The only government role created is preventing others’ intrusions on our equal rights. They are aspects of liberty for all, defending citizens’ rights against man-imposed coercion.

However, FDR’s third and fourth freedoms were inconsistent with liberty, because they provided what he called freedoms for some, but took away from others’ freedoms. 

His “freedom from want” (“compassion,” in the language used in Chicago) cannot be similarly universal. It commits government to provide some people more goods and services than they would have gotten through voluntary interactions (including voluntary charity) with others. But expanding a recipient’s “freedom” in that sense necessarily constricts others’ equal freedom to attain their desired goods and services with their resources. That is, it must violate liberty.

And his “freedom from fear” was also insufficiently generalized. It proposed protection against international aggression. But it said nothing about constraining a nation’s freedom to aggress against its own citizens. And FDR’s third freedom requires domestic government aggression to obtain the resources for its “compassion,” so his freedoms omit the most significant agency most people must fear when it comes to their liberty. That is quite different from liberty for all.

I have written in defense of Americans’ liberty for decades. In many specific instances, I have substituted the word freedom for the word liberty. But I have come to more clearly distinguish between a specific freedom or privilege for some and liberty as a universally enjoyed freedom from government coercion. “Freedom” can be used to mean “liberty,” but it can also be used to mean freedom for some that denies the same freedom for others, enforced through government coercion. As the DNC has just demonstrated so well, a host of rhetorical abuses can find a foothold in offering so many freedoms but so little liberty. 

Googling “liberty” turned up similar distinctions. Liberty was defined as “the state of being free within society from oppressive restrictions imposed by authority on one’s way of life, behavior, or political views.” Independence, autonomy, sovereignty, self-government, self-rule, and self-determination were common synonyms, and constraint was cited as an antonym. That is generalized liberty. And it is no wonder that it played such a central role for America’s founders, as when John Dickinson asserted that “liberty…her sacred cause ought to be espoused by every man on every occasion, to the utmost of his power,” and Patrick Henry argued that “Liberty is the greatest of all earthly blessings.” But it is not what Democrats are offering.

Pavel Durov, Russian-born founder of Telegram Messenger, at a tech conference in Berlin, 2013.

Moments after stepping off his private jet at Le Bourget airport outside Paris last week, tech titan Pavel Durov was arrested by French police. The Russian-born billionaire’s crime, the BBC reported, stemmed from an alleged “lack of moderation” of Telegram, a cloud-based social media platform Durov owns and operates that boasts nearly one billion monthly users.

Though the arrest did not take place on US soil, George Washington University law professor Jonathan Turley said the event poses a direct threat not just to free speech in Europe, but to free speech in the United States as well.

“This is a global effort to control speech,” Turley said on Fox News, noting that European regulators have also put pressure on Elon Musk, a US citizen, to censor Americans, including presidential candidate Donald Trump, through the Digital Services Act.

While the extent to which the US government was involved in Durov’s arrest is unclear, reports indicate that the FBI has for years attempted to penetrate Telegram. 

In 2017 Wired reported on alleged attempts to “bribe Durov and his staff to install backdoors into its service.” More recently, the New York Times reported on Durov’s allegation that the FBI attempted to hire a Telegram programmer in order to help the US government breach user data.

“The FBI did not respond to a request for comment,” the Times reported. 

‘A Refusal to Communicate’

Following the arrest of Durov, who was released on bail equivalent to some $5.5 million and is prohibited from leaving France, Time magazine reported that the event had ignited “fierce global debates about the limits of digital freedom of speech, and how much responsibility social media companies should bear over the content on their platforms.”

The preliminary charges against Durov primarily stem from allegations that Telegram users are using the message platform in harmful ways, including “crime, illicit transactions, drug trafficking, and fraud.”

In this sense, the prosecution’s allegations may well be true. But as Turley points out, criminals use all kinds of tools and technologies for illicit purposes, and it is not the practice of civilized countries to place corporate executives behind bars over crimes committed by their customers.

“It’s like arresting AT&T’s CEO because the mob used a telephone,” Turley said. 

Part of Durov’s alleged crime, quite literally, was that he wasn’t cooperating sufficiently with government officials, who charged him with a “refusal to communicate, at the request of competent authorities….”

Though French President Emmanuel Macron said the arrest wasn’t political and the nation remains “deeply committed” to free expression, Durov’s arrest looks very much like the latest development of what Turley described as the “global effort to control speech.” 

European leaders have made it clear they are quite happy to censor speech they don’t like, whether it’s a comment about a politician’s weight or criticism of immigration policy, all in the name of protecting people from hate speech, crime, or “disinformation.” 

Outsourcing Censorship

Americans might be inclined to shrug their shoulders and attribute this to “those crazy Europeans.” But this would be a mistake.

It’s apparent that many in Washington also want to censor speech and control the flow of information. Just two years ago, many will recall, the Department of Homeland Security (DHS) announced the creation of a new “Disinformation Governance Board.” 

“The spread of disinformation can affect border security, Americans’ safety during disasters, and public trust in our democratic institutions,” DHS announced at the time.  

This sounds similar to the language of Věra Jourová, the European Commission’s Vice President for Values and Transparency, who, just months before the EU adopted the Digital Services Act, made a case in a speech at the Atlantic Council for cracking down on disinformation to protect democracy. 

While the Biden Administration pulled the plug on the Disinformation Governance Board shortly after its rollout due to public uproar, it’s clear that many in Washington don’t begrudge the EU’s censorship power; they envy it. 

Fortunately for Americans — and unfortunately for federal lawmakers — the First Amendment and decades of court precedents make it much more difficult to suppress speech in the US than in Europe. Because of this, the Censorship Industrial Complex (to borrow a term from investigative journalist Matt Taibbi) has had to get creative. 

Lacking the constitutional authority to censor Americans directly, powers in Washington have, in recent years, outsourced censorship to others. In 2022, the Twitter Files exposed for the first time the government’s sprawling censorship apparatus, which involved government officials leaning heavily on social media companies to get them to do the dirty work of censoring problematic information (sometimes, even when the information was true). 

Indeed, just days after Durov’s arrest, Meta founder and CEO Mark Zuckerberg told the House Judiciary Committee that the White House “repeatedly pressured our teams for months to censor certain COVID-19 content, including humor and satire.” 

‘A Robust Anti-Trust Program’

The First Amendment, which states “Congress shall make no law…abridging the freedom of speech, or of the press,” applies to the government, not private entities. But numerous Supreme Court precedents make it clear that it is unconstitutional for government agencies or officials to coerce private actors to suppress speech on their behalf.

The line between “permissible attempts to persuade and impermissible attempts to coerce” is not always clear, but it’s a distinction the high court has explored in many cases in recent decades, including  Bantam Books, Inc. v. Sullivan (1963), which concluded that government officials may not use the “threat of invoking legal sanctions and other means of coercion…to achieve the suppression” of disfavored speech. 

That the Biden Administration crossed this line is difficult to deny. The White House’s high-pressure campaign against Facebook to coerce the social media company to censor speech is well-documented. 

Those efforts include veiled threats from Andy Slavitt, the White House Senior Advisor for the COVID–19 Response, who told Facebook on March 15, 2021 that the administration was not impressed with Facebook’s censorship efforts and “[i]nternally we have been considering our options on what to do about it.” It also includes threats that are less veiled, like when White House Press Secretary Jen Psaki was asked in May 2021 about Facebook’s censorship practices, including its decision to keep former President Donald Trump off its platform. Psaki replied that it was the president’s view that platforms had a responsibility to protect Americans from “untrustworthy content, disinformation, and misinformation, especially related to COVID-19, vaccinations, and elections” and that Biden also supported “a robust anti-trust program.”

In other words, while Slavitt was putting immense pressure on Facebook behind the scenes to censor “untrustworthy content,” the White House was publicly saying Facebook needed to more aggressively monitor and remove problematic content — and, by the way, the federal government has the power to break up your company if you don’t comply.  

All of this explains why Facebook, in 2021, began to remove and suppress content that ran afoul of the COVID State. Some of the content that was removed was no doubt false, of course. Some of it was simply problematic, like claims that COVID-19 might have emerged from the Wuhan Institute of Virology, something most US agencies today, including the FBI and CIA, believe is true. 

What’s clear is that the White House’s pressure and threats had an impact on Facebook’s policies. Zuckerberg, who in leaked conversations with employees in 2019 acknowledged the “existential” threat of antitrust regulators, admits he caved under the White House’s coercion, something he today regrets.  

“I believe the government pressure was wrong, and I regret that we were not more outspoken about it,” Zuckerberg wrote in an August 26 letter to House Judiciary Chairman Jim Jordan. “I also think we made some choices that, with the benefit of hindsight and new information, we wouldn’t make today.”

‘Not a Message This Court Should Send’

In June, in Murthy v. Missouri, the Supreme Court had the opportunity to address what the US District Court for the Western District of Louisiana described as “a far-reaching and widespread censorship campaign” by federal officials. 

Sadly, the high court punted on the matter, arguing that the plaintiffs of the case lacked legal standing. 

“We begin — and end — with standing,” Justice Amy Coney Barrett wrote in the majority opinion. “At this stage, neither the individual nor the state plaintiffs have established standing to seek an injunction against any defendant. We therefore lack jurisdiction to reach the merits of the dispute.”

Among those who lacked standing, according to Barrett, was Jill Hines, a healthcare activist who advocates that people have the right to say no to medical procedures. Hines, the co-director of Health Freedom Louisiana, a group opposed to mask and vaccine mandates, had her Facebook page suppressed during the pandemic. Barrett stated that Hines had the strongest case for standing, but had failed to demonstrate that the restrictions against her “are likely traceable to the White House and the CDC.”

Not all justices agreed. 

In his dissenting opinion, which was joined by Justices Clarence Thomas and Neil Gorsuch, Justice Samuel Alito argued the evidence of the case “was more than sufficient to establish Hines’s standing.”

Standing aside, Alito warned that by shirking its responsibility to hold government actors accountable for their coercive efforts to pressure private entities into censoring information on their behalf, the high court was sending a dangerous signal to those in power attempting to control and suppress political speech. 

“Officials who read today’s decision…will get the message,” Alito wrote. “If a coercive campaign is carried out with enough sophistication, it may get by. That is not a message this Court should send.”

‘The Indispensable Right’?

The extent to which Pavel Durov’s arrest stems from the involvement of any US agency or official is unclear. But it could be the latest example of what Turley describes as censorship by surrogate.

US agencies and officials cannot legally censor users themselves, but they can get others to do their dirty work for them. We know today that the FBI was deeply involved in efforts to control the flow of information on Twitter, as well as Facebook. This is an obvious example of censorship by surrogate, but leaning on social media companies is hardly the only method US officials have at their disposal to control or suppress speech. 

US officials can lean on advertisers and ad organizations — like the Global Alliance for Responsible Media, an initiative run by the World Federation of Advertisers that was abruptly shut down after Elon Musk sued it over alleged antitrust activities. And they can lean on other governments — and they probably are. 

It would be naive to believe that Macron arrested Durov without at least the blessing of US officials, who had taken a deep interest in both the Russian billionaire and Telegram. Nor is Durov the only person being targeted by those behind the global effort to control speech. Elon Musk, who bought Twitter (now rebranded as X) and exposed the FBI’s shenanigans there, has become a target not just in the EU but also in Brazil, where the social media platform reportedly risks being banned. 

“The question for Americans is whether we’re going to allow these global censors to basically control speech from Europe,” says Turley, author of The Indispensable Right: Free Speech in an Age of Rage.

All of this censorship, of course, is taking place in the name of some “greater good.” Indeed, it’s not uncommon for those most loudly supporting censorship to do so in the name of protecting democracy. But as I’ve pointed out, democracy without free speech is like a picnic without food. It’s not merely pointless; it’s a charade. 

The whole point of a constitutional system is to protect the rights of individuals, and when governments themselves become the primary transgressors of these rights, you no longer have a benevolent government; you have a tyrannical one. And free speech is arguably the most fundamental of these rights. 

“If the freedom of speech is taken away then dumb and silent we may be led, like sheep to the slaughter,”  George Washington famously said in “The Newburgh Address.” 

The Supreme Court had an opportunity to put some of the bad government actors in their place in Murthy v. Missouri, a case that clearly showed government officials coercing private companies to censor information on their behalf, conduct Justice Alito described as “blatantly unconstitutional.” 

The United States may come “to regret the Court’s failure to say so,” Alito observed, and he might be right. 

The right to free speech is indeed indispensable, but it appears that those who believe otherwise may have already found bigger fish than Facebook and X to act as their censorship surrogates — and perhaps more stringent methods.

If you doubt this, just ask Pavel Durov, who in 2014 fled Russia after the Kremlin “tightened its grip over the Internet” — only to find himself a prisoner in the West a decade later.  

Gourds and vegetables for sale on the honor system, characteristic of high-trust societies.

It’s hard to imagine a starker difference in political visions than between Trump-Vance and Harris-Walz. This could get ugly, so now is a good time to remind ourselves of what it is that holds us together as a nation and a people.  

America is a nation of immigrants who had very different ideas about all sorts of things, but no group was able to impose its culture on the others to a significant degree. We naturally presume this produced a melting pot that united us by creating a new alloy out of many different metals.  

But the real key to America’s success was not uniting us by homogenizing us. It was the emergence of a uniquely American culture that held us together through shared moral beliefs and principles, while allowing us to retain our personal individuality. 

In America, heavy investment into our civic culture through these shared moral beliefs and principles produced the freest thinking minds in human history. The founders recognized this and worked hard to preserve it. This is why they wrote a constitution that provided a formula for a government that was to serve the citizens and not the other way around.  

When Alexis de Tocqueville published the first installment of Democracy in America in 1835, he argued that America had a distinctive culture that made it especially capable of self-government. There was something about the American culture that led to the proliferation of mediating institutions that in turn led to an extraordinary level of organic (uncoerced) cooperation. That, in turn, made Americans uniquely well-suited to practice democracy.  

But just exactly how did that happen?  

As America grew, specific religious beliefs became increasingly subordinated to an overarching moral belief structure. In short, not doing the moral don’ts (not lying, not stealing, etc.) became increasingly viewed as a universal moral duty and a public matter, while doing the moral dos (being conscientious, being generous, etc.) became things that were encouraged but otherwise viewed as a purely private matter.  

This was not by design. It happened because, as the scale and scope of economic activity increased, it became increasingly impractical to abide by moral standards for behavior based on promoting, rather than protecting, the welfare of others around us. 

This shift in moral thinking began long ago in the West. As people in the West lived in ever-larger groups, religious wisdom began to reflect and reinforce this shift. As but one example, Hillel (הלל), a towering figure in first-century Talmudic thought, proclaimed: 

That which is hateful to you, do not do to your fellow [man]. That is the whole Torah; the rest is the explanation; go and learn.

Hillel was effectively saying that avoidance of harm is what the Torah is about, not benevolence, which is consistent with not doing the moral don’ts taking precedence over doing the moral dos.  

Because of America’s extraordinary diversity, the idea that we should concern ourselves with not doing the moral don’ts above all flowered most fully. This was also consistent with America’s early Protestant nature, which stressed that one’s conscience should guide moral decisions rather than any kind of religious formulary.  

This was very important, because our ability to trust others we don’t know has nothing to do with hoping they’ll be nice to us by doing the moral dos to promote our welfare. In a large society it can’t. Small group trust is lovely, but it doesn’t scale up.   

When you walk the streets of Manhattan, it is not your belief that everyone you pass is so inclined to do nice things for everyone else that it makes you feel safe enough to go about your business. It is your belief that they won’t do the moral don’ts.  

Since not doing moral don’ts involves not taking actions, it doesn’t require resources. This means we can all obey all the moral don’ts at the same time. The moral don’ts therefore provide a basis for trust that can scale up.  

The rise of civilization is the story of people living in ever-larger groups. In places like America, culture evolved even further, producing the moral belief that we should never do the moral don’ts, and that government may be used, if necessary, to enforce them. Meanwhile, doing the moral dos is to be treated as a purely private matter. In other words, we should mind our own business. This is so deeply ingrained in the American ethic that for us it’s like water to fish. 

Being confident that, in most contexts, no harm would come to us led to a habit of extending trust to strangers unless there was a good reason not to. That is the essence of a high trust society. Since trust is a powerful catalyst to voluntary cooperation, this unleashed the power of freely directed cooperation as never before in human history.  

Tocqueville’s own thesis for American success notes that many of our mediating institutions are highly trust-dependent. These institutions were voluntary associations, which is why they were epiphenomenal within a culture of freedom. It is difficult to imagine that such voluntary associations would last long if everyone in them was highly suspicious of everyone else.  

But America’s cultural glue, which makes all of this possible, is weakening. Today’s civic and moral educators don’t stress the primacy of not doing the don’ts over doing the moral dos.  

Instead, they preach that certain kinds of positive moral actions are duties – like driving an electric car. This is a prescription for a virtue-signaling arms race wherein people indulge their moral vanity by doing whatever they can to appear morally superior to everyone else.  

Not so long ago in America it was considered rude to ask anyone other than one’s inner social circle which positive moral actions they undertook. But it now happens every second of every day on social media, in our grade schools, on our campuses, and even at work. 

What really matters for trust is not what you do, but what you don’t do. But since inactions are not observed, they cannot be rewarded with social approval. Just imagine the reaction you’d get by bragging about the lies you didn’t tell, the property you didn’t steal, and the people you didn’t murder.  

To earn explicit social approval, one must do the moral dos. So today, Americans loudly tout their doing of moral dos – whether that’s using the “right” pronouns or boycotting the “wrong” people. But they are basically touting that they are following the company line, so the price of social approval is steadfast conformity that can hardly be described as genuine freedom.  

In most American schools today, children are taught that they should care enough about everyone else to be willing to think, say, and do approved things to produce conformity sufficient to unite us. But that’s not what made America a free and prosperous country. Getting along well enough to freely cooperate even with strangers, while preserving our individuality, is.  

Unless we return to prioritizing not doing the moral don’ts over doing the moral dos, our cultural glue will weaken further, and we will become less trusting and therefore less willing to cooperate outside our most intimate social circles. We will increasingly be unable to do that which made America the envy of the world.  

Figures walk through the slums in Washington, DC, captured by the Farm Security Administration, 1935.

“Slumming it” is a slang expression describing the practice of young people from families of means visiting (or temporarily living in) impoverished areas to experience lifestyles foreign to their upbringing. The practice is often deemed exploitative, the expression offensive. Still, it may well be that slumming it (pardon the historical expression) played an important role in making the world rich.

Economic historians have told and tested a great many tales of how the world got rich — or specifically, how innovation surged in about 1760 in England, then elsewhere, and never let up. In the last few decades, Deirdre McCloskey has promoted a compelling, qualitative origin story broadly subversive to these.

McCloskey’s story is one of ideas and perspectives. Before institutional protections could arise, England had to first find a way to overcome strong moral prejudices against profiteering lifestyles. England somehow did. The damnable pursuit of wealth — when squinting just right — became the courageous spirit of commerce by about 1700.  

McCloskey’s epic effort is remarkable, but her “somehow” remains hazy. Dan Klein has admirably stepped up and suggested it may be found in Hugo Grotius’s philosophy of 1625. Grotius helped establish that commerce only had to be honest — not virtuous — to be acceptable. “Having a go” broke wide open.

I would like to suggest a different origin. Instead of philosophical tomes, it may have been salacious plays and vulgar urban dictionaries — a pop culture from the same era which derived from slumming it around London’s hawkers, slop sellers, and bunters.

A Story of Two Journeys

Two types of immigrants to the rapidly growing city of London are the protagonists of this economic story, both arriving due to legal conundrums.

Primogeniture was conundrum number one: it required inheritance to go primarily to the firstborn. By the 1590s, following a post-plague baby boom, the elite landowners had a surplus of cadets (the younger siblings). Many cadets had to leave the countryside and go to vicious London to try to make their way through education or apprenticeship.  

Restrictions against vagrants and vagabonds were conundrum number two. Wandering theater troupes found their way of life in jeopardy, so they came to set up permanent playhouses in metro London, the first opening in 1576.

It is in England’s first “theatre district” (co-located with the marketplaces, alehouses, and brothels in the suburbs known as the Liberties) that our protagonists meet. The playwrights had to appeal to their given audience. They did so by writing tales of validation about this coterie of young cadets who were in the slums (and, yes, enjoying them) but who refused to be of the slums.

From Vicious to Gallant

These cadets were in a difficult social position. They yearned to return to the gentleman’s social status but to do so they needed to engage in those profiteering acts shunned by the gentleman. Playwrights came to their aid by portraying these young men as “Gallants.”

The Gallant was a new version of the traditional British outsider, the Trickster. Where Tricksters were deplorable in their carnal schemes, Gallants were appealing in their designs for love, honor, and money. They embodied their names of, say, Witgood and Possibility against lamentable elder elite such as Lucre and Hoard. More importantly, where prior Tricksters would be cast out when discovered, Gallants would always be forgiven and accepted into the social circle, their guile revalued as cleverness, their crimes as “human follies.”

For three decades, the “City Comedy” genre of the Gallant defined the London theater scene. Night after night it favorably recast the messy primordial stuff of McCloskey’s bourgeois virtues — ambition, opportunism, calculation, and the wily destruction which would in time become “creative destruction.” In addition, it authorized London’s “constant mingling of blood, class, and occupation” and deprecated its fuddy-duddy hierarchies and world views.

Whereas prominent men of science and letters, such as those in “Hartlib’s circle,” used reason to overcome mistrust of social change and experimentation, playwrights steered with what may have been the more powerful stuff of emotion, sympathy, and humor.

From Vulgar to Estimable

Evidence of this genre’s effect can be seen in the curiosity it created regarding the people of the slums and their slang — also known as cant, vulgar tongue, flash, and conny-catching. For the next two centuries, slang dictionaries became a popular purchase, a tantalizing sort of Fodor’s guidebook through the underbelly of London.  

These dictionaries credibly assisted a cultural transformation. First slang became fashionable. Then the dictionaries and their prose spinoffs began to tentatively characterize slang as that of “the people,” then to associate it with the emergent concept of British liberty, then to revel in its British free spirit (via what must rightly be called an early form of gonzo journalism).

That spirit, it turns out, was foremost the hustle for money — or, I should say, raising the wind for ribben, rhino, cole, colliander, crap, crop, spans, quidds, ready, lowre, balsam, plate, prey, gelt, iron, mulch, gingerbread, dust, and darby. (Not to mention curles, shavings, pairings, and nigs, of course.)

Nothing is more represented in this lexicon than the pursuit of money — from the old professions of bully backs, pot coverts, cutpursers, cole fencers, Covent Garden Nuns, Fidlam Bens, jarke-men, and Figgers, to the new ones of gullgropers, impost takers, sealers, sleeping partners, Grub-street writers, duffers, and stock jobbers.  

By the eighteenth century, the great defenders of the market economy would make use of these familiar portraits of lowly markets — so prominent were these portraits in the public conscience. Bernard Mandeville would didactically assert that their private vices produced public benefits and, as an acerbic inversion, that the true speakers of criminal cant were traditional authorities. And Adam Smith would assert that the “higgling and bargaining of the market” was a natural and beneficial expression of a human propensity. We are all higglers; a philosophy of bourgeois equality had finally come into its own.

Conclusion

England, adhering to the Great Chain of Being, cast its detritus toward London. London cast back a provocative new pop culture. This pop culture helped society negotiate the ambiguity of hawker ethics and come to terms with the messiness of the emergent commercial order. It helped carefully determine where to cut Grotius’s honest commercial practices from unvirtuous cloth.

I propose, then, that the miracle of the modern economy owes as much to the accidental playhouse of 1576 as to the intentional jurisprudence of 1625 and the late-to-the-game Glorious Revolution in 1689. My proposal does not lend itself to a positivist research agenda, but I put it forward — in the spirit of Deirdre McCloskey — as a charge to read widely, explore deeply, and be willing to slum it a bit in the disciplines of others.

Modern capitalism has no virgin birth; of that one should be sure. And if it so happens that it received its just form, direction, and salvation through a slumming voyeurism, so be it. We would be strengthened in our defense of it to recognize how closely commerce once communed with sin and, in the minds of some, still does.

Attorney General Merrick Garland delivers remarks to Department of Justice employees. 2021.

Consumer interests are paramount in business decisions, but business growth is largely determined by the individuals at the helm of the organizations they serve. Some firms grow to serve a wider consumer base, some to be reactive to demand levels, some to be proactive regarding market trends, and some to be responsive to competitive pressures. Regardless of the ‘why’ of a firm’s growth, the ‘when’ and ‘how’ will impact (or impede) the efficacy of expansion. And sometimes the aspirations of business leaders don’t go according to plan — an acquisition can go awry, a business deal can go bust, or supply chain networks for scaling can shift. Moreover, a competitor, a new technology, or a new situation could render a business obsolete. Uncertainties abound in the business realm, which is why most firms look to control what they can, when they can, and advance themselves when able. Google is a case in point.  

At the outset, Google had plans to be bought, not to be one of today’s biggest businesses receiving heat from the DOJ for its search engine prominence. In 1998, Google pitched itself to the premier search engine at that time, Yahoo, with a price tag of $1 million — but Yahoo declined. A year later, Google tried again and pitched itself to Excite, for $750,000. Once again, Google was turned down. (Excite was later bought by Ask Jeeves, now Ask.com).

Google pushed forward and prevailed, and by the start of the 2000s, Yahoo did an about-face and adopted Google as its own search engine provider, then sought to acquire Google for $3 billion. This time, Google said no; Google wanted $5 billion. What a difference a few years can make, paired with a determination to be the best. And, as fate would have it, Yahoo was later bought by Verizon in 2017 for just shy of $5 billion.

Google had some impressive milestones early on that helped it expand, thanks to internal ingenuity as well as external acquisition opportunities. Major initiatives included the 2004 start of Gmail, the 2005 launch of Google Maps, the 2005 purchase of Android, the 2006 acquisition of YouTube, the 2007 acquisition of DoubleClick, and the 2008 introduction of Google Chrome. Today, Google is a juggernaut. Google is to Gen Z what AOL was to Gen X.

Google’s growth is truly an impressive feat, and we have all benefited from its services — something we should remember while government agencies grill Google for its current market dominance in the search engine sector. The reason Google has a dominant position in search is that it took steps to secure itself as the default. But the only way it could achieve default status was by deserving it (in addition to any dollars paid and contracts made). If consumers weren’t satisfied with Google Search, no amount of money could cover the cost for smart device providers to keep Google as the featured choice. Remember the backlash that ensued when Apple had its own map app installed as the default on iPhones in 2012. Had Apple Maps initially worked well, there wouldn’t have been an issue with it being featured over Google Maps, but that wasn’t the case. And, as The Guardian put it, Apple Maps was a $30 billion mistake. Apple had to relinquish its own app on its own smart devices and allow consumers to revert to the better option they preferred — Google.

Google’s superior ability to provide relevant search results is a huge value for users. I know that if something unexpected happens and I need to get a loved one to a hospital, Google Search will find me the closest and best care providers and Google Maps will get me there, all in an instant and at no cost. Some will say the cost is my data, to which I say: go ahead, take my data. I want relevant search results, and I like ads and promotions curated to my needs and interests. The last thing I want is for my smart device to ask me which app I prefer. I want the best one as the default, and if I learn of a better one, then I or the provider of my smart device will switch. And the fact that Google is willing to pay for its default placement, even when it knows it is the best, demonstrates how vulnerable it is.

If the government’s alphabet agencies could get out of the way of market mechanisms, an alternative brand or product could unseat Google’s supposedly monopolistic position just as Google displaced Yahoo’s dominance in the early 2000s. Another supposed monopoly that toppled around the same time as Yahoo was Motorola. The 2007 launch of the iPhone overturned Motorola’s once-commanding market position. A few years later, in 2012, Google bought Motorola, hoping to make a dent in the mobile sector and revive the brand. That turned out to be a big mistake, and Google later offloaded Motorola in a sale to Lenovo.

Yes, Google has surpassed all in search, but it has yet to make a mark in the smartphone space, or the messaging space (remember Google Wave, Google Talk, and Google Allo), or even the social media space (sorry, Google+ and Google Buzz). Is it any wonder, then, that Google aims to safeguard its default search status on smart devices?

Google competes in an international world of ideas, and there is no way of knowing which product or service may suddenly surpass Google’s own offerings. Indeed, Apple Maps has recently relaunched in beta to have another go at Google Maps, and, as for search, consumers are already toying with new tools for their queries and questions thanks to AI advancements. Perplexity AI has been positioning itself to take on Google Search, and now OpenAI is looking to topple Perplexity in the AI search race. Moreover, search engine startups with unicorn status, thanks to venture capital investments, span the globe and show no signs of slowing down. The DOJ’s case against Google is a moot point and a waste of taxpayer dollars.

As stated earlier, Google is to Gen Z what AOL was to Gen X. And look at what came of AOL. During its heyday, there was no escaping AOL, which controlled more than 90 percent of the market for instant-messaging software. AOL was the undisputed default, and its merger with Time Warner in 2001 was one of the largest corporate consolidations of its time. Together, AOL and Time Warner seemed positioned to control mass media and internet activity alike. Yet the dotcom bubble burst, and the difficulty of managing such a massive organization hastened the megamerger’s demise.

Clearly, even the best of the best don’t always know what is best in a dynamic marketplace. Government agencies and officials thinking they know better is truly absurd. The pervasive distaste among political elites for American firms that do well by their customers and grow their businesses is baffling, particularly when businesses like Google are a huge benefit not only to our economy but also to our national security. It’s a shame that the DOJ isn’t on the same page as the Department of Defense, which contracts with Google Support Services (coincidentally, the Army uses Google Workspace and US Special Ops is looking to leverage Google Glass).

Business leaders who continue to surpass global competitors and their capabilities, by taking risks and growing their firms, should be admired (not attacked) for their productivity and progress. And members on the Hill, who take more from our economy than they make for it, should realize that bashing big businesses will dull the dynamism this country is known for. Innovative entrepreneurs will start to shy away. Congress should Google the benefits of economic freedom and also search up how to curtail the gargantuan levels of government spending; the history of economic progress and federal budget restraint can shed light on the mismanagement we all see today. Without question, a bulging bureaucratic state is more costly for Americans than the growing success of our most innovative firms.

Empty supermarket shelves in Caracas, Venezuela, during widespread shortages of food and medicine caused by price controls and hyperinflation. 2018.

Things are rarely so bad that decisive action by government officials can’t make things worse.

In the current election, the Republicans are trying to outdo each other by proposing larger and more restrictive tariffs. The Democrats have just come out with a remarkably bad plan to outlaw “price gouging,” particularly for groceries.

Such proposals get more attention from politicians at election time, because to get votes you have to show you did something. The fact that the right thing to do may be nothing is hard for politicians to accept, because no one can claim credit for the market.

I’m not trying to make a partisan point; as I noted above, there are ill-advised proposals on both sides of the party divide. And I’m not claiming markets are perfect. The problem is that asking voters what they want prices to be is a recipe for becoming… well, Venezuela.

In 1981, about half of Venezuela’s population was living on the equivalent of $10 per day or less (the number for the US was less than 5 percent). That number was flat until 1992, the year that Hugo Chavez launched his unsuccessful coup attempt against the corrupt regime of President Carlos Andrés Pérez. Pérez was forcibly removed from office in 1993; officially, he was removed for embezzlement — which he did in fact do — but even more for showing a near-total inability to deal with social unrest over the collapse of the economic system, even with substantial oil revenues to fill government coffers.

Chavez was pardoned in 1994, and in 1998 he was elected to the Presidency. He immediately worked to deepen and expand the “Bolivarian Revolution,” focused on social welfare programs, nationalizing key industries, and “democratizing” the market system. As long as oil prices were high, and people were satisfied with essentially free electricity as a handout, the “Chavismo” regime was politically successful.

But Chavez died in 2013. His successors tightened and expanded the grip of their socialist philosophy, and GDP went into free fall. Where GDP per capita had been well over $18,000 US in 2013, today it has fallen to around $5,000, a decline of more than 70 percent for an oil-rich nation.

The situation eroded quickly, coming to an early head in the summer of 2015. Prices were skyrocketing because of inflation, caused by the government using newly printed money to pay off debts and make payroll. The government, meanwhile, accused the corporations that ran large grocery chains of “price gouging.”

I remember reading about this at the time, and in a way I still can’t quite believe it, nearly ten years later. In July 2015, a massive police contingent raided a hoard of food and grocery products in Caracas. They found tons of food and groceries, which they then distributed for free to people in the street, thereby “liberating” the necessities from the hoarders.

The unbelievable part is that the “hoarder” was Empresas Polar, a giant grocery and food retail conglomerate. The “hoard” was a warehouse, a large distribution center where trucks delivered pallets of wholesale food items, and from which shipments went out to local retailers. It’s not really surprising that an enormous building designed to store food would have a lot of food in it.

But once the warehouses were raided and the contents donated to the public, food prices immediately tripled or more, when food could be obtained at all. Grocery stores closed because their supply chains were cut off by the anti-gouging order. Seizing a warehouse full of food meant that a few thousand people got food for “free” for one day, but suppliers immediately tried to reroute their shipments elsewhere, before they could be “liberated” by the “representatives of the people” working for President Nicolas Maduro.

I should emphasize again that I am not trying to make a partisan point. Venezuela, at a time when it was having trouble feeding its population, also imposed very large tariffs on agricultural and other imports, thereby raising prices for consumers even further. But those policies pale in significance when compared to the price control fiasco.

Now, for the bad surprise: the US seems to be well on its way this summer down “The Road to Venezuela.” In a speech right here in my hometown of Raleigh, North Carolina, Vice President Harris announced on August 16 that she would place controls on grocery prices:

As attorney general in California, I went after companies that illegally increased prices, including wholesalers that inflated the price of prescription medication and companies that conspired with competitors to keep prices of electronics high. I won more than $1 billion for consumers. (Applause.)

So, believe me, as president, I will go after the bad actors. (Applause.) And I will work to pass the first-ever federal ban on price gouging on food.

Problems with a federal law on “price gouging” have been pointed out by others. Such a law would require a benchmark of what the price should be, and a limit on how much grocers could charge. The proposal is also likely a violation of the Tenth Amendment, which reserves “police power” (which surely includes regulating retail point-of-sale prices) to the states rather than the federal government. On the other hand, interstate commerce might be interpreted to encompass these kinds of sales, for large companies at least.

The real problem is the merits of price controls, rather than problems with enforcement. This description, from X (née Twitter), spells out the generic step-by-step process, accurately identifying what happened in Venezuela and what could happen in the US.

This article in the New York Times points out that there are good reasons to believe intentional price manipulation by grocery chains in the US played at most a minor role:

Consumer demand was very strong. Fed and congressional efforts to boost households and businesses during the pandemic, like the $1,400 payments for individuals Mr. Biden signed as part of the economic rescue plan early in 2021, fueled consumption.

“If prices are rising on average over time and profit margins expand, that might look like price gouging, but it’s actually indicative of a broad increase in demand,” said Joshua Hendrickson, an economist at the University of Mississippi who has written skeptically of claims that corporate behavior is driving prices higher. “Such broad increases tend to be the result of expansionary monetary or fiscal policy — or both.”

Ten years ago, Venezuela set out on a path to economic ruin and grave shortages of basic consumer goods, because of price controls on groceries and other products. Is the US really going to travel on the same road?
