Making My Case: Reality Makes Sense
If it's happening, it's happening for a reason. Wokeism is happening. What's the reason?
(continued from Abundance Doesn’t Bring Meaning)
The Home Stretch
I’ve been posting these essays for the better part of a year now. Early on, I told you a bit about myself. I did so because it’s only fair to let you know who I am and where I’m coming from. It might be nice if all arguments stood or fell entirely on merit, but they don’t. Everything embodies bias. My experiences, background, and personal beliefs all helped shape my arguments. As we near the end of this series, it’s appropriate for me to close with a bit more personal context.
In assessing my case that Wokeism really is a new spiritual tradition, there are a few additional things you should know about me and about the analytic methods I deploy. I tend to believe that reality makes sense. I dislike insanity as an explanation for anything major; that distaste covers explanations rooted in the insanity of influential world figures as much as those rooted in some form of mass psychosis. I also believe that large conspiracies are rare.
I do, however, believe in mass movements. There’s no doubt that large numbers of people can quickly come to share a set of beliefs. Examples of such movements include religions, political and economic ideologies, investment bubbles, fashions, styles, and pop culture trends. Some of those mass movements are extraordinarily durable, having already lasted centuries or millennia. Others come and go in weeks. Either way, they represent a widely held set of shared beliefs capable of directing individual behavior and the course of history, even though many believers would struggle to explain or justify those beliefs if asked.
When I see things happening that I don’t understand—and that happens a lot—I make certain assumptions about the people driving them. I assume that both the key players and their mass followers are rational. As someone who has studied rational decision-making, I like to back out the two components of each decision. In the technical language of Bayesian analysis, these components are probability and utility. In casual conversation, they translate into beliefs and values. Stated simply, when you—or anyone else—choose to do one thing over another, you base your decision both on your beliefs about what is likely to follow your action and on the values you bring to the table.
To pick a simple example in a political setting, how do you decide which Presidential candidate to support, whether in a wide-open primary or a two-major-party general election? If you’re behaving rationally, you’d ask yourself two questions: Do I like the things this candidate is proposing? How much do I trust this candidate to deliver? The former question is about values; the latter, about probabilities.
Suppose, for example, that a candidate promises to raise taxes on the wealthy to ensure that they pay their fair share. If you agree that the wealthy are taking unfair advantage of the rest of society, you’ve found a candidate who shares your values. The next question is whether that candidate can deliver. A promise, for example, to pass a new wealth tax that nearly everyone concedes would be unconstitutional seems like a hollow promise. The probability you place on your candidate delivering is low. Perhaps even more importantly, if (like me) you don’t share that candidate’s values, you probably need to look elsewhere.
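The beliefs-and-values logic above amounts to a simple expected-utility calculation: weight how much you like what a candidate promises by how likely you think delivery is. The sketch below is purely illustrative; the candidate labels and all the numbers are invented for the example, not drawn from the essay.

```python
def expected_utility(value_alignment: float, delivery_probability: float) -> float:
    """How much you value the promised outcome (utility/values),
    discounted by how likely you think delivery is (probability/beliefs)."""
    return value_alignment * delivery_probability

# Hypothetical scores:
#   value_alignment    -- how much you like the agenda, from -1.0 to 1.0
#   delivery_probability -- your estimate that the candidate can deliver, 0.0 to 1.0
candidates = {
    # Promises you love, but (e.g., an unconstitutional wealth tax) almost no chance of delivery
    "Candidate A": expected_utility(value_alignment=0.8, delivery_probability=0.1),
    # Promises you only somewhat like, but a strong record of delivering
    "Candidate B": expected_utility(value_alignment=0.5, delivery_probability=0.9),
}

# The rational pick maximizes expected utility, not either factor alone
best = max(candidates, key=candidates.get)
```

Note that the hollow-promise case in the paragraph above falls out naturally: a high-value promise with a near-zero delivery probability scores low, and a negative `value_alignment` makes any delivery probability a point against the candidate.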