AI Is the Technocratic Elite’s New Excuse for a Power Grab

Funny how the ‘existential’ threats always justify the same solutions: more control and bureaucracy.


By Gerard Baker


What’s the bigger threat to humanity: artificial intelligence or experts demanding that something be done about it?


As warnings about the menace to human existence get louder, and calls for action on a global scale more urgent, it seems increasingly likely that whatever else it may be, the AI menace, like every other supposed extinction-level threat man has faced in the past century or so, will prove a wonderful opportunity for the big-bureaucracy, global-government, all-knowing-regulator crowd to demand more authority over our freedoms and to transfer more sovereignty from individuals and nations to supranational experts and technocrats.


If I were cynical I’d speculate that these threats are, if not manufactured, at least hyped precisely so that the world can be made to fit with the technocratic mindset of those who believe they should rule over us, lest the ignorant whims of people acting without supervision destroy the planet.


Nuclear weapons, climate change, pandemics, and now AI—the remedies are always, strikingly, the same: more government, more control over free markets and private decisions, more borderless bureaucracy.


Last week hundreds of scientists, technology entrepreneurs, lawmakers, Davos luminaries and others issued a 23-word statement under the auspices of the Center for AI Safety, demanding unspecified action: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”


It is arrestingly concise, but in its brevity—and its provenance—it offers hints of where this is coming from and where its signatories want it to go. “Risk of extinction” leaps straight to the usual Defcon 1 hysteria that demands immediate action. “Global priority” establishes the proper regulatory geography. Bracketing AI with the familiar nightmares of “pandemics and nuclear war” points to the sorts of authority required.


The list of signatories is also something of a giveaway: oodles of Google execs, Bill Gates, a Democratic politician or two, and many of the same people who have breathed the rarefied West Coast air of progressive technocratic orthodoxy for decades.


To be fair, I am sure many of the signatories, and many of those who share their sentiments, are genuinely concerned about the risks of AI and are simply trying to raise a red flag about a matter of real concern—though we should note that techno-hysteria has rarely proved justified throughout history. But we also know that the thrust of these alarms always pushes in the same ideological direction.


No less than Albert Einstein famously believed that the only way to prevent a humanity-extinguishing nuclear war was through the creation of a “world government” that would be “able to solve conflicts between nations by a judicial decision.” Einstein was perhaps the greatest scientist of the last century, but I respectfully submit that this was and remains intellectual hooey.


Since he spoke those words almost 80 years ago, the number of nuclear powers in the world has grown significantly. Those powers have engaged in kinetic military conflicts on continents from Asia to Latin America. One of them is now waging an all-out war in Europe after invading a neighbor. Yet nuclear annihilation has failed to materialize.


I suspect attempts to impose a world government would have been much more likely to result in an extinction-level nuclear war than the exercise by nations of their right to self-determination to resolve conflicts through the usual combination of diplomacy and force.


Climate change is the ne plus ultra of justifications for global regulation. It probably isn’t a coincidence that climate extremism and the demands for mandatory global controls exploded at exactly the moment old-fashioned Marxism was discredited for good in the 1990s. Having failed to impose effective authority over free markets through collectivist dogma, the left suddenly found in the climate threat a golden opportunity to regulate economic activity on a scale larger than anything Karl Marx could have imagined.


As for pandemics, our public-health masters showed by their actions over the past three years that they would like to encase us in a rigid panoply of rules to remediate a supposed extinction-level threat.


None of this is to diminish the challenges posed by AI. Thorough investigation into it, and healthy debate about how to maximize its opportunities and minimize its risks, are essential. We should listen to the concerns of those most intimately familiar with its capabilities. Neither do I dispute that international cooperation has proved a valuable way to mitigate the risks of nuclear war, climate change and pandemics and will surely be necessary as the frontiers of AI advance.


But as we hear the usual demands to regulate “misinformation” and “disinformation,” and the warnings about the nefarious ways in which unscrupulous populists will use AI, it seems this latest panic is inducing primarily a familiar, Pavlovian response from those with a predilection for worldwide rule over our private endeavors.


When confronted with yet another spectacle of self-anointed experts and technocrats demanding global action to create massive new bureaucratic opportunities for themselves and their like-minded friends, here’s my advice: Beware geeks bearing grifts.


So where exactly is AI today? See CBS’s 60 Minutes report from spring 2023 (43 min).