Rationality as a Coping Mechanism

I think a lot about “complexity”, in an abstract sense, and the mechanisms we have for coping with complex situations. In the physical sciences, there is a concept called “coarse-graining” which essentially means simplifying a system in a way that captures its most relevant properties. So, for instance, if we are using CGI to simulate a flag waving in the wind, we can model the fabric as a grid of idealized masses and springs, and provided our purpose is only to simulate the movement of the fabric, this model will work just fine. We can ignore all the irrelevant properties of the material: ignore the minute vibrations of the molecules, their charges, and their actual position on the Earth. If we needed to know about the material’s waterproofing abilities, or its electrical conductivity, then we would need different models, but we could still achieve a lot without needing to recreate a high-fidelity model that mimics every feature of the real-world system.
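To make this concrete, here is a minimal sketch of such a coarse-grained cloth model in Python. Every parameter (grid size, stiffness, the constant WIND vector) is an illustrative assumption, and a real cloth simulator would add shear and bend springs, damping, and collisions; the point is only that a handful of idealized masses and springs is enough to capture “fabric moving in wind”.

```python
import numpy as np

# A coarse-grained flag: an N x N grid of point masses joined by springs.
# All parameters are illustrative, not tuned against any real fabric.
N = 10            # grid resolution (N x N nodes)
MASS = 0.1        # mass per node, kg
K = 50.0          # spring stiffness, N/m
REST = 0.1        # rest length between neighbouring nodes, m
DT = 0.001        # integration time step, s
GRAVITY = np.array([0.0, -9.81, 0.0])
WIND = np.array([1.0, 0.0, 0.2])   # a crude constant wind force per node, N

# Nodes start in a flat sheet in the x-y plane, at rest.
pos = np.array([[[i * REST, j * REST, 0.0] for j in range(N)]
                for i in range(N)])
vel = np.zeros_like(pos)

def step(pos, vel):
    """Advance the cloth by one time step of semi-implicit Euler."""
    # External forces: gravity plus wind on every node.
    force = np.tile(MASS * GRAVITY + WIND, (N, N, 1))
    # Structural springs: each node pulls on its right and upper neighbour.
    for i in range(N):
        for j in range(N):
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < N and nj < N:
                    d = pos[ni, nj] - pos[i, j]
                    length = max(np.linalg.norm(d), 1e-9)
                    f = K * (length - REST) * d / length   # Hooke's law
                    force[i, j] += f
                    force[ni, nj] -= f
    # Update velocity first, then position (keeps the springs stable).
    vel = vel + DT * force / MASS
    vel[0, :] = 0.0            # pin one edge of the flag to the pole
    pos = pos + DT * vel
    return pos, vel

for _ in range(1000):          # simulate one second of flapping
    pos, vel = step(pos, vel)
```

The semi-implicit Euler step (velocity first, then position) is chosen because plain explicit Euler makes undamped springs blow up; everything about the molecules has been coarse-grained away into three constants.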

In physics, these idealized systems are often very useful, and the first-order approximations they provide can usually be enhanced with corrective terms to improve the precision as necessary. Once the idealized model exists, it is possible to take logical and mathematical steps to arrive at a set of predictions, which can then be tested against experiment.
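The simple pendulum is the textbook case: the idealized small-angle model gives the familiar first-order period, and corrections can be added as a series in the swing amplitude:

```latex
T = 2\pi\sqrt{\frac{L}{g}}
    \left(1 + \frac{\theta_0^{2}}{16} + \frac{11\,\theta_0^{4}}{3072} + \cdots\right)
```

The leading term alone is accurate to better than one percent for swings up to roughly 20 degrees; each corrective term extends the range of amplitudes over which the idealized model can be trusted.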

However, I think that sometimes the scientific mode of thinking can overstep its boundaries. When the rational mode of thinking (observation → model → logic → prediction) is applied outside its domain, problems arise.

I remember, for instance, talking to a physics professor about the odd-even car rule in Delhi. In January 2016, the Delhi government trialed an odd-even car scheme: private vehicles with number plates ending in an odd number could only drive on odd-numbered days, and likewise for even-plated cars. It’s a tactic used to reduce congestion and pollution on the roads. I mentioned that the scheme had received mixed reviews; some people complained that wealthier residents had simply bought second cars, and worried that ultimately the scheme would have a negative net effect. My professor countered, rationally, that the scheme works by definition: deductively speaking, there would be fewer cars on the road, because the percentage of people who can afford second cars is small relative to the whole population. He had arrived at his conclusion through a simple deductive process.

Consider the following three concepts / propositions.
(A) The Delhi odd-even car rule [in all its detail and complexity]
(B) The odd-even car rule, by definition, reduced the number of cars on the road
(C) The odd-even car rule was successful in reducing pollution and congestion

It seemed to me that my professor made the logical leap from (A) to (B), and two things follow from that. First, from (B) it is much easier to get to (C), which is not a logically valid step: ‘success’ is much more nuanced than simply reducing the number of cars. Second, once we get to (B) it is much easier to simply throw away (A), because it is now just a bulky idea wasting mental space. This process of making a logical step and then throwing away the original is a trap that many analytically minded people fall into.

His conclusion was correct: the scheme did reduce the number of cars on the road. But the danger is that by reaching this conclusion so quickly, my professor had unwittingly simplified the discussion. Instead of talking about the nuanced, complex odd-even scheme that exists in reality, we were talking about a simplified scheme whose only salient feature was that it reduced the number of cars on the road.

The issue is that in reality, the effect of extra car purchases was significant enough that when the trial ended, congestion on the roads became much worse; and when the scheme was trialed again in April 2016, the Delhi transport minister reported ~400,000 more cars on the road than during the first trial.[1] So it would have been a mistake for my professor to conclude from his deduction that the scheme “worked” outright, and by making his logical step so quickly he put himself much closer to saying exactly that.
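A toy model makes both halves of this visible at once (the fraction f below is an invented parameter, purely for illustration, and the model ignores exemptions, taxis, and everything else about the real scheme). Suppose the city has N private cars and a fraction f of owners respond by buying a second car with the opposite plate parity. On any given day under the scheme, the number of cars on the road is roughly

```latex
\underbrace{\tfrac{1}{2}(1 - f)\,N}_{\text{one-car owners with the right parity}}
  \;+\; \underbrace{f N}_{\text{two-car owners}}
  \;=\; \tfrac{1+f}{2}\,N \;<\; N ,
```

so the professor’s deduction holds: for any f < 1 the scheme reduces daily traffic. But the total fleet has grown to (1 + f)N, so the moment the scheme is lifted, congestion is strictly worse than before it started. Statement (B) and the failure of statement (C) coexist in the same back-of-the-envelope model.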

The trouble is that in mathematical and scientific thinking, it is possible to move freely between logically equivalent statements without any trouble. “P implies Q” is quite literally the exact same statement as “not Q implies not P”. However, it is easy to make mistakes when the statements are less logically precise, and the rational path from premise to conclusion is rarely so smooth when talking about real-life matters.
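In symbols, the equivalence is exact; the two statements have identical truth tables:

```latex
(P \Rightarrow Q) \;\equiv\; (\lnot Q \Rightarrow \lnot P)
```

No information is gained or lost in the move, which is precisely what makes it safe.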

Another way to say this is that there are some systems which cannot be adequately “coarse-grained”. Coarse-graining, by definition, produces a system which is relatively closed, insofar as the factors ignored in the coarse-graining process are deemed to have negligible effect. In many real-life cases, however, it is not clear what those factors are, and this is where the difficulties begin.

The humanities take a very different approach to complexity. When performing a literary analysis or critiquing a painting, there is definitely some ‘coarse-graining’ that happens – isolating a single feature of the piece and considering it a simplified form. But the larger challenge in the humanities is to somehow comprehend the piece as a whole, and entertain it in its entirety. The best debaters and political pundits are those who seem to have an infinite memory; those who can recall an obscure fact on demand and explain why it is a salient rebuttal to what is being talked about here and now.

In the modern world we are bombarded with so much information that we cannot possibly keep it all in our heads at the same time. And so, a simple coping mechanism is to rationally process the information as it comes in, and then store only the conclusions in mind. I can thus create a structure of relatively neat and clean beliefs which are then tested against new incoming information.

When I listen to hyper-rational people speak, I sometimes get the sense that the gears of their minds are whirring all the time. The logical ramifications of each statement are immediately computed and fed into some internal machinery. This is why Elon Musk can confidently make the logical leap from “We won’t be able to live on Earth forever” to “We ought to colonize Mars”, whereas most of us would be crippled by other concerns which are not strictly relevant to those two statements. It is why Effective Altruists can go from “Artificial Intelligence is a potential existential threat to humanity” to “We ought to dedicate significant resources to making sure AI doesn’t kill us”, while sidelining the more pressing and complex concerns of politics, concentration of power, distribution of resources, and automation of jobs that will arrive in the field of AI long before a general AI capable of wiping out the human race does.[2] It is also why ‘productivity’ is a thing – the belief that I ought to be able to treat my own self as a predictable, idealized system and optimize it. It is why some people think they can ‘optimize’ social interactions to get the most happiness from them: they are operating on the belief that social interactions are logical, closed, idealized systems, rather than on the belief that other people are people and ought to be treated as such.

[1] http://auto.ndtv.com/news/why-phase-ii-of-the-odd-even-rule-in-delhi-was-choked-1258978

[2] http://idlewords.com/talks/superintelligence.htm
