If you say something wrong on the Internet, you will usually find out about it very quickly.
I’ve been making some pretty bold statements on the Internet lately: claims about human nature, about politics, about philosophy, about economics, about religion, about art, about sports. And not one person has told me I’m wrong. Very few people have said I’m right, either, but that’s not as weird as nobody telling me I’m wrong.
I’m getting crickets. Why? It’s not that nobody is reading, because I am getting some hits. It’s not that I’m *obviously* wrong, because I’d be ratioed in an instant if I were obviously wrong. It’s probably some combination of:
- I’m right
- I’m wrong, but people have their own problems right now, so they don’t have the bandwidth to bother to engage
- I’m wrong, but in an obscure way that is hard to argue against, so most people can’t tell one way or the other
- I’m wrong, but in a harmless crackpot way, so people don’t want to hurt my feelings
* * *
It’s certainly true that people have many more important things on their mind right now. And rightly so. I’m not writing to complain. I’m writing this because yesterday, I learned something, and I wanted to make a note of it.
I learned something about the third of the above points. That phenomenon, of explaining something in an obscure way, has a name: Inferential Distances.
As the link explains,
> In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else. When you discover a new oasis, you don’t have to explain to your fellow tribe members what an oasis is, or why it’s a good idea to drink water, or how to walk. Only you know where the oasis lies; this is private knowledge. But everyone has the background to understand your description of the oasis, the concepts needed to think about water; this is universal knowledge. When you explain things in an ancestral environment, you almost never have to explain your concepts. At most you have to explain one new concept, not two or more simultaneously.
So if I wanted to explain to someone in the 21st century how Google Maps works, I could just knit together a bunch of steps that you already understand. But if I wanted to explain it to someone from the 13th century, there’s a whole other set of steps I’d have to explain: that the earth is round, what outer space is, how things in space move in orbits, what an artificial satellite is, what an electromagnetic wave is, how you can communicate with a satellite using electromagnetic waves, how a network of satellites can triangulate your position using those electromagnetic waves, what a computer is (wait, what electricity is), what a screen is, how a computer can store data like maps, how a computer can calculate a route from one place to another on a map, how you can give a computer instructions, and how a computer can use electronics to sound like it’s talking and give you instructions in return. In other words, to a 13th century person, there is a whole host of inferential steps that have to be understood first before you can even begin to explain WTF Google Maps is. A 21st century person already understands those steps.
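The triangulation step above, at least, can be made concrete. Here’s a minimal sketch of the idea in two dimensions, with made-up positions and distances (real GPS works in 3D with timing signals, so this is an illustration of the principle, not of the actual system): given three “satellites” at known positions and your measured distance to each, you can solve for where you are.

```python
# A hypothetical 2D trilateration sketch: three anchors at known positions,
# three measured distances, one unknown location.
import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Locate a point from three known anchors and measured distances.

    Each measurement defines a circle around an anchor. Subtracting the
    circle equations pairwise cancels the squared unknowns, leaving a
    2x2 linear system A @ [x, y] = b.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero when anchors aren't collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Three made-up "satellites"; the receiver is secretly at (2, 3).
sats = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
receiver = (2.0, 3.0)
dists = [math.dist(s, receiver) for s in sats]
print(trilaterate(*sats, *dists))  # recovers approximately (2.0, 3.0)
```

Every step in that little sketch presupposes concepts (coordinates, equations of circles, a computer to run it on) that a 13th century person would lack, which is exactly the point.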
My explanation of Google Maps probably contains numerous errors. But a 13th century person isn’t really going to be able to poke holes in my explanation of how Google Maps works. We’re going to get stuck on some silly minor step like “is the earth really round” and not actually get to the point where we address the real, actual flaws in my explanation.
* * *
Everything I’ve been trying to blog about lately hinges on one particular inferential step: the difference in the brain between nondeclarative memories (System 1) and declarative memories (System 2). If you don’t have a good grasp on that difference, everything that follows from it is going to be vague and unclear.
To me, this difference is the single most important fact about human nature.
And therefore, to me, if you are thinking about any aspect of human endeavor, from politics to philosophy to economics to religion to art to sports, and you aren’t thinking through the implications of the difference between nondeclarative/System 1 and declarative/System 2 for those human endeavors, you aren’t thinking things through as clearly as you could be.
* * *
Now this doesn’t mean you can’t reach the right conclusion without understanding this particular inferential step. You can build a fire without understanding the chemical attributes of oxygen. But understanding how oxygen works opens up a whole host of other creative possibilities to (a) build a better fire, and/or (b) do other things with oxygen beyond just building a fire.
This is what I’m trying to get across. The difference in the brain between nondeclarative memories (System 1) and declarative memories (System 2) is a key that unlocks many doors. There are all sorts of creative ideas that probably won’t come to you if you don’t have that key.
I’m trying, with my recent writing, to apply that key to various areas of contemporary relevance, and see what kind of new rooms open up. I may be applying that key incorrectly, and reaching the wrong conclusions. If so, so be it. I’ll continue to do so, in the hopes that at some point, I’ll reduce the inferential distances enough for people to be able to tell me why I’m wrong.