Fallacy of composition

From TheAlmightyGuru

The fallacy of composition is a logical fallacy where what is true for the parts is assumed to be true for the whole. Sometimes this assumption is correct (atoms have mass; therefore, anything composed of atoms also has mass), but it can also be incorrect (atoms are not alive; therefore, anything composed of atoms must also not be alive). This fallacy is often found when referring to people, assuming that what is true for one person is true for all people (this diet worked for me, so it will work for you).

The fallacy of composition is the converse of the fallacy of division, the belief that what is true for the whole is true for its parts. The fallacy of composition is also commonly confused with a faulty generalization, where an inference is made before sufficient information has been gathered.

When you look for examples of this fallacy, authors tend to write obviously false scenarios like: each brick in a building weighs five pounds; therefore, the building weighs five pounds. However, where I find people actually succumbing to the fallacy is in a failure to consider emergence. For example: individual brain cells aren't conscious; therefore, the brain isn't conscious; therefore, there must be something other than the cells of the brain to account for consciousness, a so-called ghost in the machine.

Being a computer programmer, my favorite example of the fallacy of composition refers to artificial intelligence employing neural networks. For example, an electronic switch is quite dumb. It can't think and it has no memory, so it can't strategize, which means it couldn't play a strategy-heavy game like chess or go, and it certainly couldn't beat an expert-level player. No matter how many switches you have, no matter what configuration you wire them in, a bunch of switches couldn't possibly beat a go champion. Such a statement would have seemed perfectly obvious only a few decades ago, but, after the invention of the electronic computer, which is really just an assortment of tiny electronic switches in a very specific configuration, we have seen computers beat even the best chess and go players. In fact, the best computer AIs, like AlphaZero, have become so good that no human could ever hope to beat them; their only real threat of loss now comes from other computer AIs.
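The jump from dumb switches to computation can be sketched in a few lines of code. This is just my own toy model, not anyone's real circuit design: a "switch" is modeled as a boolean that either conducts or doesn't, NAND is the simplest universal gate built from switches, and from NAND alone we can compose a half-adder that performs arithmetic no individual switch "knows" how to do.

```python
# A toy model: a "switch" is a boolean (conducting or not).

def nand(a: bool, b: bool) -> bool:
    """Two switches in series with an inverted output: a universal gate."""
    return not (a and b)

# Every other logic gate can be composed from NAND alone.
def not_(a): return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b): return nand(not_(a), not_(b))

def xor(a, b):
    """The standard four-NAND construction of exclusive-or."""
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def half_adder(a: bool, b: bool):
    """Adds two one-bit numbers. No single switch here does arithmetic,
    yet the composition does: (sum bit, carry bit)."""
    return xor(a, b), and_(a, b)

# 1 + 1 = binary 10: sum bit is False, carry bit is True
print(half_adder(True, True))  # (False, True)
```

Chain enough of these half-adders together and you get an adder for numbers of any size; chain enough adders, memory, and control logic together and you get the computer this page is being read on.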

I think the most interesting thing about the AI example is that, unlike, say, human consciousness arising from brain cells, we know there is no ghost in the machine involved with AI. It's trivial to explain how the most basic aspects of a computer work: a switch is either open or shut, current is either flowing or not. A step up to describe the function of logic gates isn't very difficult to understand either, and even a child can pick up the basics of computer programming. It's not until you scale up to the point of neural networks, and have an artificial intelligence "learn" from examples, that the data involved becomes so complex that even those with a doctorate in computer science fail to grasp how the AI solves problems. This means that, even though we can't point to any specific part of a computer AI and say, "this is why it beats the best human players," we still know that its ability to be so good is a result of non-magical emergence.
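To make the "learning from examples" step concrete, here is a minimal sketch of a tiny neural network in plain Python, trained by gradient descent to approximate XOR. The network size, learning rate, and epoch count are arbitrary choices for illustration. Every individual operation is trivial arithmetic a child could follow, yet the solution lives nowhere in particular: it's smeared across all the trained weights at once.

```python
# A tiny neural network learning XOR from examples, in plain Python.
# No single weight "knows" XOR; the behavior emerges from the whole.
import math
import random

random.seed(1)  # deterministic weights for illustration

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training examples: each input pair and its XOR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

H = 4  # hidden neurons (arbitrary small choice)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_loss = loss()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: the chain rule applied to the squared error.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # uses w2[j] before updating it
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

final_loss = loss()
print(initial_loss, final_loss)  # the error should have dropped
```

Even in this toy, you can already see the problem the doctorates face: ask *why* a particular weight ended up at 0.73 and there's no satisfying answer, only the history of thousands of tiny corrections. Scale this up by billions of weights and you have a system we built entirely ourselves yet can't fully explain.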