Toward a Classical Education for the Information Age
America needs great institutions of higher learning. We don't have them. Fortunately, interest is rising. It's time to reclaim and update Classical Education for the Information Age.
Reforming Education
Over the past couple of months, I’ve found myself drawn back into two related areas of longstanding interest: Reforming higher education and reinvigorating classical education. I believe that interest in both questions is rising. My friends Eric Cohen and Mitch Rocklin, over at Tikvah, recently proposed a wonderful Jewish spin on the latter topic.
On my end, I’ve been living with these issues for decades. My central argument in The New Civil War is that the polarized battle lines dividing America are less left/right than elite/everyone else. More than half the book details the corrupt structures and incentive systems that have destroyed American education. Much of the rest shows how that destruction has emanated outward to corrupt the media, the government, and a growing number of additional critical American institutions.
The current discussions have moved my thinking in two concrete directions: The first, motivated by the Supreme Court’s recent oral arguments on the Biden Plan to forgive massive student debt, led to today’s column for RealClearEducation. The basic argument is simple: Making the universities that induce the debt responsible for ensuring payment would go a long way towards aligning academic incentives with the goals of a quality education.
The second direction reminded me of an issue that got me into lots of trouble back when I was trying to operate within the corrupt academic incentive system: My belief that computer, information, and data science should be central to a Classical Education for the Information Age.
In 2005, in anticipation of launching “The Informationist,” my blog about “life during the transition from the industrial age to the information age,” I pulled together some thoughts that had been gelling for over twenty years. Now, nearly twenty years later, I’ve begun to wonder whether a responsive audience might finally be building.
What follows is my 2005 essay “Why the Informationist?,” edited only lightly to maintain relevance, which casts modern information science squarely within the classical tradition:
Why “The Informationist?”
This blog’s seeds were planted in my mind in 1980, during my freshman orientation at Columbia. The core curriculum of Columbia’s undergraduate program introduced us to the great formative works of Western Civilization. At the time, required classes were considered passé. Most universities had dispensed with the notion entirely, and those few who clung to such outdated educational notions felt compelled to justify their tenacity.
At my orientation, one of Columbia’s many professors dedicated to the program spoke to us about its rich history and tradition. He explained that “Contemporary Civilization” (CC, as generations of Columbians affectionately call it) began as an inquiry into the causes of the First World War. He paused, looked at the audience, and sympathized: “Now, you’re probably thinking: I may not know what caused World War I, but I’m pretty sure that it wasn’t Plato.”
I was hooked. Over the next few years, it all flowed together in my mind—some from my coursework, some from my own reading—as Plato, Jesus, Rav Ashi, Descartes, Locke, Jefferson, Madison, Hamilton, Smith, Marx, Mill, Darwin, von Neumann, and numerous others taught me to appreciate the civilization into which I’d been born.
About the same time, I discovered computers, computing, and computer science. I was never much of a gadget freak, but I fell in love with the stark beauty of algorithmic logic. The ability to focus entirely on process, and to convert an arbitrary set of inputs into logically necessary outputs, just felt right. It struck me as the inherently correct way to think about complex issues and to solve challenging problems.
I next discovered heuristic programming, artificial intelligence, and Bayesian statistics, three related fields devoted to expanding algorithmic thinking from the logically necessary to the merely likely. Algorithmic thinking gave me a lens through which to view the philosophical big picture. It dawned upon me that every one of history’s great philosophers had asserted the commonality of the various areas of human inquiry. Some found the common source in theology, some in physics, some in biology, and some in economics—but all attempted to persuade their readers of the centrality of their preferred source.
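A minimal sketch, written in Python with numbers invented purely for illustration, makes that contrast concrete: a deterministic algorithm whose output follows necessarily from its inputs, beside a Bayesian update in which evidence merely shifts how likely a conclusion is.

```python
# Two modes of algorithmic thinking; the scenario and numbers are invented.

def sort_numbers(values):
    """Deterministic: the output follows necessarily from the inputs."""
    return sorted(values)

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Probabilistic: evidence only revises how likely a hypothesis is.

    Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) = P(E|H) * P(H) + P(E|not H) * P(not H).
    """
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

print(sort_numbers([3, 1, 2]))                   # always [1, 2, 3]
print(round(bayes_update(0.10, 0.80, 0.20), 3))  # 0.308: likelier, never certain
```

The first function cannot give a different answer on the same inputs; the second can only move a probability, which is exactly the step from the logically necessary to the merely likely.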
I also realized something else: our philosophers became increasingly formal over time. Plato hung his insights on a weak framework; Aristotle did much better on that account. Jesus, Paul, and Augustine were informal; Aquinas restored formalism. Maimonides did the same for Rav Ashi. Jefferson, Madison, and Hamilton operationalized Locke. And so on and so on and so on.
Throughout most of this history, two unanswered questions hung in the air: How much formality is necessary to gain the insights we seek? and How much formality is possible? At the turn of the 20th Century, Russell and Whitehead set out to solve these dilemmas once and for all. They would devise a set of logical rules sufficiently expressive and formal to reduce all human reasoning to mathematical formulae.
By the middle of the 20th Century, we understood that their goals were unachievable. Gödel’s incompleteness theorem, Heisenberg’s uncertainty principle, and numerous related discoveries taught us that no matter how hard we tried, some things would elude formal treatment and remain unknowable. Some like to attribute the unknowable to God, others prefer to assign it less weighty titles, but one way or another, the ancient quest for the Universal Explanation of Everything came to a screeching halt. We were just going to have to learn to live with a certain amount of uncertainty.
Just about the time that we achieved that insight, technologists invented the digital computer. In short order (less than a decade into the computer age), a number of these technologists glommed onto the idea of growing their “computing machines” into “thinking machines.”
Our ancient philosophical quest was reborn. Rather than trying to explain everything, it would work backwards. The “knowledge representation” tools of AI defined the starting point by telling us how formal our treatment had to be. Any area of human inquiry that we could translate into a formal knowledge base could drift into the realm of our computational thinking machines. Algorithms, typically augmented by probabilistic heuristics, could then manipulate the basic represented information and unlock its implications.
Even for problems that eluded that level of formalization, however, this newborn approach could prove useful, for the simple reason that it imposed a new discipline on our own thinking. Algorithms familiarized many of us with the centrality of logical thinking derived from compact axiom sets, an approach that had rarely before extended beyond academic mathematics.
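A toy sketch, again in Python and with facts invented solely for illustration, shows the pattern described above: a handful of represented facts and rules standing in for a formal knowledge base, plus a simple forward-chaining loop that unlocks their implications.

```python
# A toy knowledge base and inference loop; the facts and rules are invented.

facts = {"socrates_is_a_man"}

# Each rule: if every premise is already known, assert the conclusion.
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

# Forward chaining: keep applying rules until nothing new can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['socrates_is_a_man', 'socrates_is_mortal', 'socrates_will_die']
```

Nothing in the loop knows what “mortal” means; all of the meaning lives in how the facts and rules were represented, which is why the representation step sets the limit on what such machines can do.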
I spent most of the 1980s getting these various observations and strands of my thinking to gel into something coherent. I saw AI, probability, algorithms, and logic combining into a powerful philosophical methodology with the potential to change the world.
I wanted to understand how this methodology could help people make better decisions, businesses devise better strategies, and governments craft better public policies. I kept seeing ways that this methodology could inform my own areas of substantive interest—religion, politics, and foreign affairs (or if you prefer, God, America, and the World).
I saw a world undergoing a confusing and often painful transition from the industrial age to the information age, grasping for a way to understand the scope of the consequent changes. At one level, it all seemed so simple—we had computers. At a deeper level, though, we had entered a profoundly new era.
A single sentence defines the information age and reveals how it differs from all earlier epochs: Information is abundant, easy to collect and manipulate, and inexpensive to share. Everything else derives from this single change. Never before has any individual, no matter how erudite, had as ready access to as much information about as many topics as does the least connected member of the information age. We, a thinking species mired forever in a world of information scarcity, have suddenly found ourselves thrust into a world of information abundance.
That simple twist changes everything. Every aspect of life that involves the collection, combination, or communication of information—in other words, every aspect of life—must change to accommodate our new reality. By the mid-1980s I found myself thinking: “I may not know what caused the information age, but I’m pretty sure that it wasn’t Plato.” Unless maybe it was.
By the end of the decade (I can’t remember exactly when), I realized that I’d derived my own philosophical approach: An information-centered, probabilistic, algorithmic view of the world. I began searching far and wide for others who had derived and developed similar approaches, coined the term “informationism,” declared myself “an informationist,” and proceeded to do nothing with either label until today. I ran a quick Google search to see if someone else had snagged my words in the intervening years. They don’t appear to have assumed a conventional meaning likely to cause confusion or anguish. And quite frankly, I still like them.
So why “The Informationist?”
There’s nothing new under the sun. (That’s Ecclesiastes 1:9, for those keeping score at home). As the years have gone by, I’ve expanded both my training and my reading. I’ve discovered intellectual traditions whose axioms I generally accept—and many of whose fundamental insights I’ve derived on my own.
I fell easily and naturally into the netherworld at the intersection of cognitive psychology, economics, law, and management. I found many of my own thoughts reflected in the work of the classic economic liberals; the cognitive psychologists who study heuristics and biases; the decision analysts who apply Bayesian probability to formal modeling and decision-making; and the legal scholars who defined the “law and economics” school of analysis. I worked my way into each of these fields, learned their basics, and made my own modest contributions. I consider them all to be dead-on right about many of their essential claims.
At the same time though, I remain opposed to fundamentalism in all of its forms. I refuse to adopt every position that a brilliant, insightful writer advocated just because I find his or her writings to be brilliant and insightful. Besides, the mere fact that they wrote first means that I have access to more data than they did. I can see what they saw, contemplate what they said, and see what happened next. That should buy me something—and I tend to use that which I’ve bought.
To pick but one example, in the words of Ludwig von Mises, “liberalism is applied economics; it is social and political policy based on a scientific foundation.” Though I believe strongly in social and political policy based on a scientific foundation, the science upon which I draw is not economics—at least not in its strictest sense. My political philosophy derives from applied information science. Perhaps that’s why I needed to coin a new word. Could any word be more apropos than “informationism?” Informationism is applied information science; it is social and political policy based on a scientific foundation.
So why “The Informationist?”
After many years of publishing scholarly journal articles while refusing to do the legwork necessary to get my general-interest essays published, I finally bit the bullet a few years ago and poured myself into a project geared toward a broad audience. Digital Phoenix: Why the Information Economy Collapsed and How it Will Rise Again (MIT Press, 2005) hit the stands on May 1, 2005.
Digital Phoenix is an informationist book, but I tried to keep the philosophy in the background. By and large, it tells the formative stories of the information economy: the Internet investment bubble, the Microsoft trial, the rise and fall of Napster, and the advent of open source.
Along the way, it describes the fundamentals of intellectual property and antitrust law; industrial organization and network economics; and artificial intelligence and software engineering, the background that lets these formative stories make sense.
Nevertheless, I tried hard to make all of this material accessible to a general audience. A large part of my pride in the book stems from my conviction that I succeeded. Now that the book is available, I may soon learn whether or not such pride is warranted. More to the point though, Digital Phoenix motivated me to get my act together and to launch The Informationist.
So why “The Informationist?”
Ecclesiastes answered that one as well, right up front in 1:2. Vanity of vanities, all is vanity.
Please subscribe to receive weekly installments of The American Spirit vs. The Great Awokening.
For more information about Bruce D. Abramson & American Restorationism, visit: www.BruceDAbramson.com
To learn more about how America’s elites destroyed the republic, see: The New Civil War: Exposing Elites, Fighting Utopian Leftism, and Restoring America (RealClear Publishing, 2021).
To learn more about the ideology driving today’s anti-American leftism, see: American Restoration: Winning America’s Second Civil War (Kindle, 2019).
To learn more about our work at the American Coalition for Education and Knowledge, visit us at The Coalition for America.
To learn more about how I turn the ideas I discuss here into concrete projects that serve the interests of my clients, donors, and society at large, please e-mail me at bdabramson@pm.me.