Blast from the Past
And now for something completely different…a blast from the past. More specifically, a blast from my past.
Because it’s from the past, it’s less topical than many of my other postings. It remains relevant, however, because it provides significant insights into my analytic methodology. I take pride in seeing interconnections that many far-more-prominent commentators miss. I try to write only when I think I have something to add to the public pool of knowledge. Perhaps the primary reason I believe that I’m able to make such a contribution is that I’ve built myself an exceptionally large toolkit and crafted my own methodology (applicable at various levels of formality).
That said, here’s a teaser: This essay connects the Western canon and Artificial Intelligence (AI) to the political, economic, and social upheavals defining 21st-century life.
In 2005, I launched my first blog, “The Informationist,” as a vehicle for the first of my two books about the information economy.
I had written Digital Phoenix: Why the Information Economy Collapsed and How It Will Rise Again (MIT Press, 2005) between late 2001 and early 2003 to try to make sense of the rubble that had been the Dotcom bubble and the three other “first front page” tales of the young information age: the Microsoft trial, the rise of Open Source software, and the Napster-driven music wars.
Digital Phoenix showed that these stories were all manifestations of the same phenomenon: An unprepared world shifting uneasily and unevenly from advanced industrial age to information age.
My basic message was straightforward. A world in transition produces both pain and opportunity. Clever innovators who master the new technologies enrich the lives of millions. At the same time, however, they challenge some very powerful entrenched interests and widespread expectations. Those tensions produce a predictable pattern of struggles implicating technology, business, law, economics, and public policy.
Music provided an early lesson. Though music is one of the world’s oldest businesses, producing some of the world’s most popular products, it remains a relatively minor industry. Its contributions to GDP and employment are small, and it doesn’t factor into any critical supply chains. It was, however, the very first non-information industry to transition entirely into the information age. Understanding the music wars leading up to the Digital Millennium Copyright Act (DMCA) of 1998 and the 1999 launch of Napster should have taught us what to expect as other areas of life made the same transition.
I considered it critical that our leaders pay attention to those lessons because in the information age, every resource but one would flow faster and at lower cost. That one exception? Human skills. It’s one thing to say that the economy benefits if it replaces 100,000 mining jobs in Appalachia with 100,000 higher-paying tech jobs in the Sun Belt. It’s another thing entirely to retrain and relocate 100,000 miners.
Needless to say, my observations, predictions, warnings, and strategies fell upon deaf ears. They did, however, motivate my blog (I bet you were wondering when we’d get back to that). I launched The Informationist with an essay interleaving my view of the world with the skill set that has allowed me to build a lucrative consulting practice as a strategist, troubleshooter, and problem solver.1
Looking at the challenges facing the world today, I’ve decided that it’s time to revive my 2005 essay “Why The Informationist.” I present it with only light editing for your reading pleasure:
2005 Essay: Why “The Informationist”?
The seeds of The Informationist were planted during Columbia’s 1980 freshman orientation when I was still bright-eyed and bushy-tailed. The professor chosen to share the rich history and tradition driving the College’s required dive into the Western canon—affectionately known to generations of Columbians as Contemporary Civilization, or CC—told us that it had begun as an inquiry into the causes of the First World War.
He paused, leaned forward, looked at the audience, and sympathized: “Now, you’re probably thinking: I may not know what caused World War I, but I’m pretty sure that it wasn’t Plato.”
I was hooked.
Over the next few years, it all flowed together in my mind, as Plato, Jesus, Rav Ashi, Descartes, Locke, Jefferson, Smith, Marx, Mill, Darwin, von Neumann, and numerous others defined the civilization into which I had been born.
About the same time, I discovered computers, computing, and Computer Science. I was never much of a gadget freak, but I fell in love with the stark beauty of algorithmic logic. The ability to focus entirely on process—and to convert an arbitrary set of inputs into logically necessary outputs—just felt right. It struck me as the inherently right way to think about complex issues and solve challenging problems.
I next discovered heuristic programming, AI, and Bayesian Statistics, three related fields devoted to expanding algorithmic thinking from the logically necessary to the merely likely.
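The shift those fields represent—from deduction to inference under uncertainty—is easy to illustrate with Bayes’ rule. The sketch below is my own illustration, not anything from the original essay, and the numbers are made up for the example:

```python
# Bayes' rule: update belief in a hypothesis after seeing evidence.
# This moves reasoning from the logically necessary (true/false)
# to the merely likely (degrees of belief).
def bayes_update(prior, likelihood, evidence_prob):
    """Return P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

prior = 0.01          # P(H): belief before seeing evidence
likelihood = 0.9      # P(E|H): evidence probability if H is true
false_alarm = 0.05    # P(E|not H): evidence probability if H is false

# Total probability of the evidence, P(E):
evidence_prob = likelihood * prior + false_alarm * (1 - prior)

posterior = bayes_update(prior, likelihood, evidence_prob)
print(round(posterior, 3))  # roughly 0.154: likelier, but far from certain
```

Note how the evidence raises the belief from 1% to about 15%—a conclusion no purely deductive algorithm could express.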
The Universal Explanation of Everything
Algorithmic thinking gave me a lens through which to view the philosophical big picture. It dawned upon me that every one of history’s great philosophers had asserted the commonality of the various areas of human inquiry. Some found the common source in theology, some in physics, some in biology, and some in economics—but all attempted to persuade their readers of the centrality of their preferred source.
I also realized that our great philosophers had become increasingly formal over time. Plato hung his insights on a weak framework; Aristotle did much better on that score. Jesus, Paul, and Augustine were informal; Aquinas injected formalism. Maimonides did the same for Rav Ashi. Jefferson operationalized Locke. And so on and so on and so on.
Throughout most of this history, two unanswered questions hung in the air: How much formalism is necessary to gain the insights we seek? How much formalism is possible? In the early 20th Century, Russell and Whitehead set out to solve these dilemmas once and for all. They would devise a set of logical rules sufficiently expressive and formal to reduce all human reasoning to mathematical formulae.
By the middle of the 20th Century though, we understood that their goals were unachievable. Gödel’s incompleteness theorem, Heisenberg’s uncertainty principle, and numerous related discoveries taught us that no matter how hard we tried, some things would elude formal treatment and remain unknowable.
Some like to attribute the unknowable to God, others prefer to assign it less weighty titles, but one way or another, the ancient quest for the Universal Explanation of Everything had come to a screeching halt. We were just going to have to learn to live with uncertainty.
Approaching the Information Age
About the time that we achieved that insight, technologists invented the digital computer. In short order (less than a decade into the computer age), a number of these technologists glommed onto the idea of growing their “computing machines” into “thinking machines.”
Our ancient philosophical quest had been reborn! Rather than trying to explain everything, it would work backwards. The “knowledge representation” tools of AI would define the starting point by telling us how formal our treatment had to be. Any area of human inquiry that we could translate into a formal knowledge base could drift into the realm of our computational thinking machines. Algorithms, typically augmented by probabilistic heuristics, could then manipulate the information thus represented and unlock its implications.
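That two-step recipe—represent knowledge formally, then let algorithms augmented by probabilistic heuristics unlock its implications—can be sketched in a few lines. This is my own minimal construction for illustration; the propositions, rules, and weights are invented:

```python
# A toy knowledge base: propositions with degrees of belief,
# plus weighted if-then rules that a simple algorithm can chain.
facts = {"has_fever": 0.9, "has_cough": 0.8}   # proposition -> belief
rules = [
    # (premises, conclusion, rule strength)
    (("has_fever", "has_cough"), "has_flu", 0.7),
    (("has_flu",), "should_rest", 0.95),
]

# Forward chaining: repeatedly apply rules until nothing new emerges.
changed = True
while changed:
    changed = False
    for premises, conclusion, strength in rules:
        if all(p in facts for p in premises):
            # Heuristic: belief in the conclusion is the weakest
            # premise's belief, discounted by the rule's strength.
            belief = min(facts[p] for p in premises) * strength
            if facts.get(conclusion, 0) < belief:
                facts[conclusion] = belief
                changed = True

print(facts)  # now includes "has_flu" and "should_rest" with beliefs
```

Everything the system “knows” lives in the formal knowledge base; the algorithm itself is domain-blind, which is precisely what made the approach feel like a reborn universal method.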
This newborn approach was truly powerful. It promised to be useful even for problems that eluded the requisite level of formalization, for the simple reason that it imposed a new discipline on our own thinking. Algorithms familiarized many of us with the centrality of logical thinking derived from compact axiom sets—an approach that had rarely before extended beyond academic mathematics.
How extreme was this revolution in disciplined thinking? Try this on for size: A research style long known as “exegesis” and associated most closely with Talmudic study was recast as “surfing the web.” An arcane, centuries-old method of creating tree-like structures of references and commentaries surged into public awareness and everyday use.
I spent most of the 1980s trying to forge these observations and lessons into something coherent. I saw AI, probability, algorithms, and logic combining into a powerful philosophical methodology with the potential to change the world. I wanted to understand how this methodology could help people make better decisions, businesses devise better strategies, and governments craft better policies.
I applied this methodology to inform my own areas of passionate interest—religion, politics, and foreign affairs (or as I like to style them, God, America, and the World). I saw a world undergoing a confusing and often painful transition from industrial age to information age grasping for a way to understand the scope of the consequent changes. At one level, it all seemed so simple—we had computers. At a far deeper level, we had entered a profoundly new age of human development.
A single sentence defines the information age and reveals how it differs from all earlier epochs: Information is abundant, easy to collect and manipulate, and inexpensive to share. Everything else derives from this single change. Never before has any individual, no matter how erudite, had as ready access to as much information about as many topics as does the least-connected member of the information age. We, a thinking species forever mired in a world of information scarcity, have suddenly found ourselves thrust into a world of information abundance.
That simple twist changes everything. Every aspect of life that involves the collection, combination, or communication of information—in other words, every aspect of life—will have to change to accommodate our new reality.
By the mid-1980s I found myself thinking: “I may not know what caused the information age, but I’m pretty sure that it wasn’t Plato.” Unless maybe it was.
By the early 1990s (I can’t remember exactly when), I realized that I’d derived my own philosophical approach: An information-centered, probabilistic, algorithmic view of the world. I coined the term “informationism,” declared myself “an informationist,” and proceeded to do nothing with either label for many years.
Because there’s nothing new under the sun,2 I set out to find those who had derived and developed similar approaches long before I arrived on the scene. By 2005,3 I had identified several relevant intellectual traditions. I fell easily and naturally into the intersection of cognitive psychology, economics, law, and management. I found many of my own thoughts reflected in the work of the classic economic liberals; the psychologists who study heuristics and biases; the decision analysts who apply Bayesian probability to formal modeling and decision-making; and the legal scholars of the “law and economics” school.
I worked my way into each of these fields, learned their basics, and made my own modest contributions. I consider them all to be dead-on right about many of their essential claims. At the same time though, I remain opposed to fundamentalism in all of its forms. I refuse to adopt every position that a brilliant, insightful writer advocated just because I find his writings brilliant and insightful. Besides, the mere fact that he got to write first means that I have access to more data than he did. I can see what he saw, contemplate what he said, and see what happened next. That should buy me something—and I try to use that which I’ve bought.
Informationist Theory in Political Practice
A few months ago,4 I left the Democratic Party to become an independent. It was tough. It required a lot of soul searching. What drove it was my observation that many of those who call themselves liberals are really illiberal social democrats, while those who consider themselves devotees of the great liberal writers tend to call themselves conservatives. My thinking is far closer to the latter group than to the former.
Though I do have some thoughts about the linguistic confusion, I defer them to a different essay. For present purposes, I’ve distilled what most matters to me: A focus on individual freedom tempered with personal responsibility.
In practical terms, this belief makes me unabashedly pro-market, pro-democracy, and pro-republic.
[2021 Update: In the years since 2005, I have come to appreciate that there are often tradeoffs among markets, republics, and democracy. Republicanism guarantees individual freedom, minority rights, and equality under the law. Markets provide dynamism, prosperity, opportunity, and optimism. Democracy provides a voice in government. None of the three can ever be eliminated—but none can ever be allowed to squelch the other two. In today’s world, it is republicanism that is most threatened—which provides this 2005 essay with a clear hook into the broader theme of American Restoration.]
I see the government’s primary role as the developer of infrastructures within which free citizens can make meaningful decisions. As a general rule, I support policies that spread opportunity, that increase choice, and/or that provide the information necessary for choices to have meaning; I typically oppose policies that do none of the three.
I advocate policies that make individual choice either available or meaningful. I favor social safety nets that help people temper their natural risk aversion by avoiding the full consequences of catastrophe—though never all such consequences. Without at least some sting of failure, education and growth become impossible. Love and respect for humanity compel us to teach people to appreciate the consequences of their decisions and actions; compassion compels us to ensure that a particularly bad day or a bad run of luck educates rather than destroys.
I favor investments in infrastructures that free private actors to improve the efficiency of their transactions and regulations that improve markets by increasing transparency, information flow, and robust competition.
In the areas closest to my own specialization, I advocate regulations that promote innovation by harnessing technology and oppose those that lock in obsolete technologies and the business models based upon them.
Though the centrality I place on individual freedom likely brands me as something of a “classical liberal,” my broad definition of necessary and enabling infrastructures probably means that I don’t fully qualify (it also likely puts me at odds with many of my contemporaries who label themselves libertarian).
Then again, that makes sense. I find myself in only partial agreement with Ludwig von Mises’s definition, “liberalism is applied economics; it is social and political policy based on a scientific foundation.” The science upon which I draw is not economics—at least not in its strictest sense. My political philosophy derives from applied information science. Perhaps that’s why I needed to coin a new word. Informationism is applied information science; it is social and political policy based on a scientific foundation.
After many years of publishing scholarly journal articles, I finally poured myself into a project geared toward a broad audience. Digital Phoenix, which hit the stands May 1, 2005, is a thoroughly informationist book that keeps the philosophy in the background. My book release motivated me to launch The Informationist.
Vanity of vanities, all is vanity.5
I hope you’ve enjoyed this dive into the development of my analytic methodology. If you have, look for additional methodological essays in the future. If you haven’t, don’t worry. I won’t bombard you with methodology. My next Substack entry will focus on current events. I promise.
For more information about Bruce D. Abramson & American Restorationism, visit: www.BruceDAbramson.com
To learn more about how America’s elites destroyed the republic, see: The New Civil War: Exposing Elites, Fighting Utopian Leftism, and Restoring America (RealClear Publishing, 2021).
To learn more about the ideology driving today’s anti-American leftism, see: American Restoration: Winning America’s Second Civil War (Kindle, 2019).
Should you find yourself in need of any of those services, please note that I’m still in business.
Not that I stopped in 2005, but that’s when I wrote this essay.
I made that move in January 2005. I remained an independent until 2010, when I registered as Republican. Perhaps I will tell those stories in a future essay.