Bad Analogies
Analogies are bad until they are accompanied by an independent explanation of what is analogous to what, and why. And evolution is unlike engineering.
Most analogies and mental models are bad. David Deutsch wrote:
Arguments by analogy are fallacies. Almost any analogy between any two things contains some grain of truth, but one cannot tell what that is until one has an independent explanation for what is analogous to what, and why.
Most phenomena are not the same, and history doesn't repeat itself. Claiming that two phenomena are analogous requires a good explanation of why they are.
Mental Models
This is why many “mental models” are error-prone. One claimed benefit of mental models is that they are easy to remember and “simplify the complex into understandable and organizable chunks”.
Because they are not based on hard-to-vary explanations, they often achieve the opposite. Some theories are better than others. Better in what ways, you might ask? They can be superseding theories, simplifications, or unifications of existing theories. Faraday’s and Maxwell’s unified theory of electromagnetism is easier to remember than the separate, inferior theories of electricity and magnetism it superseded. Arabic numerals are easier to learn than Roman numerals.
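To make the notation point concrete, here is a minimal sketch (mine, not from any of the cited authors) contrasting the two systems: parsing positional Arabic numerals needs one uniform rule per digit, while parsing Roman numerals needs a symbol table plus a subtractive special case.

```python
# Illustrative sketch: positional vs. additive/subtractive notation.
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def parse_arabic(s: str) -> int:
    # Positional notation: every digit obeys the same rule (shift left, then add).
    value = 0
    for ch in s:
        value = value * 10 + "0123456789".index(ch)
    return value

def parse_roman(s: str) -> int:
    # Additive notation with a subtractive exception (IV, IX, XL, ...).
    value = 0
    for i, ch in enumerate(s):
        current = ROMAN_VALUES[ch]
        nxt = ROMAN_VALUES[s[i + 1]] if i + 1 < len(s) else 0
        value += -current if current < nxt else current
    return value

assert parse_arabic("1888") == 1888
assert parse_roman("MDCCCLXXXVIII") == 1888  # same number, more symbols and rules
```

The simpler, more uniform rule set is what makes the positional system easier to learn; the better theory carries less to remember.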
Let’s look at an example from Shane Parrish, who helped popularize the term “mental models” and has written several books on the topic1:
Inertia: An object in motion with a certain vector wants to continue moving in that direction unless acted upon. This is a fundamental physical principle of motion; however, individuals, systems, and organizations display the same effect. It allows them to minimize the use of energy, but can cause them to be destroyed or eroded.
This draws an analogy between Newton’s First Law of inertia (“An object at rest remains at rest, and an object in motion remains in motion at constant speed and in a straight line unless acted on by an unbalanced force.”) and “status quo bias” (a concept that encapsulates various biases such as the familiarity heuristic, loss aversion, the sunk cost fallacy, and the behavioral tendency to reduce cognitive load). Both cause something to “remain how it is”. But is human psychology analogous to objects flying through space? No. At least not without an explanation of why they are analogous. The separate concepts are hard-to-vary explanations that cover more territory, are more universal, and are thus superior to this muddled mental model.
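For contrast, the physical law has a precise, hard-to-vary mathematical form (the standard textbook statement, added here for illustration):

$$\sum \vec{F} = 0 \;\Rightarrow\; \frac{d\vec{v}}{dt} = 0$$

No comparably precise, testable statement exists for “status quo bias”, which is why borrowing the physics vocabulary adds nothing to the psychology.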
Technology Evolution (AI Risk)
Flawed AI doomer/safetyist arguments rest on the assumption that the development of artificial intelligence is analogous to evolution. As I wrote in my piece on AI augmentation vs automation:
The Scaling Laws hypothesize that LLMs [Large Language Models] will continue to improve with increasing model size, training data, and compute.
Some claim that at the end of these scaling laws lies AGI. This is wrong. It is like saying that if we make cars faster, we’ll get supersonic jets. The error is to assume that the deep learning transformer architecture will somehow magically evolve into AGI (gain disobedience, agency, and creativity). Professor Noam Chomsky also called this thinking emergentist2. Evolution and engineering are not the same.
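For reference, these scaling laws are a quantitative claim about loss. A commonly cited parametric form (the Chinchilla-style fit, shown here purely as an illustration, with the fitted constants left unspecified) is:

$$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

where $N$ is the number of parameters, $D$ the number of training tokens, $E$ the irreducible loss, and $A$, $B$, $\alpha$, $\beta$ fitted constants. The law predicts lower next-token loss as $N$ and $D$ grow; nothing in it implies the system gains agency or creativity.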
Until they articulate a hard-to-vary explanation of how current AI technology development is analogous to biological evolution, they are basing their argument on the supernatural and unexplained.
Coda
Mental models can be useful for getting a basic understanding of a phenomenon. But that’s where their benefits end. They don’t help you achieve a deep understanding of reality.
When arming yourself with concepts that help you solve problems in the future, side with the deepest, most universal theories and discard mediocre self-help books that repackage existing content under a new name.