by Bill Frezza
I’ve recently become hooked on Michael Solana’s new podcast series, “Anatomy of Next: Utopia,” a fascinating analysis of our most advanced and rapidly developing technologies that attempts to debunk the tech-gone-wrong dystopian nightmares that dominate much of the public imagination.
You all know how the story goes. Man invents an artificial intelligence (AI) smarter than himself. The AI goes on to invent even smarter AIs. These ever-smarter AIs grow exponentially until they become godlike, then turn on mankind and either reduce us to slaves or drive us to extinction.
Implausible and Impossible
It ain’t gonna happen. The reason is something called the distributed knowledge problem. Although Solana hasn’t yet addressed it in his series, the Nobel Prize-winning economist F. A. Hayek explained it years ago in his last book, The Fatal Conceit. In short, it is a practical impossibility for anyone to capture and assess the sum total of information generated by the countless millions of economic decisions made by independent agents distributed around the world, each seeking to maximize his or her own well-being.
The root of the error, made by all central planners, is the belief that the world is some sort of Newtonian clockwork mechanism. If science could only deduce its operating principles, enlightened rulers could guide humanity toward utopia, or something approaching it. More sophisticated versions of this view add the proviso that effective planning requires the collection of enough data to inform central planners where to apply their wisdom, and adapt their plan as conditions evolve.
Well, aha! What if that evolution spun out of human control, with superintelligent AIs deducing the operating principles of the world and Big Data telling them everything they need to know to control it? If they turned against us, how could we humans stand in the way?
To read the rest of the column, click here.