3.5. Evolution – learning by the species

In Section 2.4, I said that information as such is not a sign and has no meaning. Information turns into a sign and gets a meaning only by the act of understanding. Understanding is to interpret information in a context of knowledge. It is the act by which information turns into signs that have a meaning – a meaning for the being capable of understanding [1]. Smoke is a sign of fire only if you already know that smoke is, in almost all cases, the result of fire. The context of knowledge needed for understanding must be provided.

This section is about the question of how living things – including those without a brain or nervous system – acquire knowledge. In Section 3.1, I already mentioned the kind of knowledge that manifests itself in ‘right’ behavior, behavior that allows a goal or purpose to be accomplished, and which is represented in the forms and structures enabling or controlling that behavior. If a living thing, on the basis of information from outside about a given situation, behaves ‘right’ in this situation, so that its chance of survival and procreation is greater or at least not smaller – then we must concede that the living thing understands the situation.

How should a being’s ability to understand a situation be perceived if not by perceiving its appropriate behavior? However, John Searle, with his famous Chinese Room argument, has shown that appropriate behavior of machines is possible without semantic understanding – solely on the basis of symbol processing, as it takes place in computers. The existence of symbols or signs, as mentioned above, is the result of understanding. In computers as well as in the Chinese Room, it is human understanding that has written their programs or the rules for manipulating Chinese symbols, respectively. All appropriate behavior of machines depends on the human ability of semantic understanding [2].

How does the knowledge come about that is represented in the structure of a living thing – the knowledge that enables it to understand information autonomously and, thereby, to behave appropriately to its own purposes and goals? Knowledge, naturally, is acquired by learning – ‘learning’ is the term we use for the acquisition of knowledge that enables us to understand information right, to behave right, and to do things right.

A kind of learning very common in nature as well as in human culture is learning by reward and penalty. Animals that at least experience pain can learn in this way, even if they have no further phenomenal experience: they can learn to avoid things and situations that are associated with the experience of pain. I call this ‘individual learning’, i.e., the acquisition of knowledge that was not inherited. Only beings with phenomenal experience are capable of individual learning. The advantage of individual learning is that it allows quick adaptation to new environmental conditions.

Individual learning is what we usually mean when we speak of learning. Since it depends on phenomenal experience, I will deal with it more extensively in a later chapter about subjectivity and consciousness. But it is important to realize that individual learning results in the formation or configuration of material structures that represent the acquired knowledge – mainly structures in the brain, as animals with phenomenal experience always have a brain [3]. Other parts of the body are capable of individual learning at least to some degree; we usually call this adaptation or a training effect – think of athletes or of the hands of a professional pianist.

Let us now look at how living beings learn and acquire knowledge without having a brain, even without having the simplest kind of nervous system. They are not capable of individual learning. The kind of learning we find there (but not only there) is usually referred to as biological evolution. I refer to it here as ‘learning by the species’, and its mechanism is simple; to put it crudely: the stupid go extinct. The ‘stupid’ are individuals whose structure is less optimal, making them behave ‘wrong’ more frequently than the average of the species, so that their chance of survival and procreation is, on average, smaller than that of their ‘clever’ conspecifics.

‘Structure’ here means the entire form and structure of a living thing: its shape and internal structure as well as the specific structures that control behavior, i.e., the structures of nervous system and brain, if present. Furthermore, ‘structure’ includes not only the organization of matter in space, but also, e.g., that of energy in time; form/structure is information. All forms and structures, from the coarsest down to the finest, contribute to a living thing’s appropriate behavior, and they all represent the knowledge of how this behavior can be enabled and controlled.

The evolution of forms and structures is therefore at the same time an accumulation of knowledge: knowledge of how these forms and structures – not only spatial but also temporal ones – must be designed so that they accomplish their purposes in the maintenance of life. And there is no difference in this regard between structures in the cell, the DNA sequence, the forms and structures of bones, muscles, alimentary tract, or nervous system, and structures in the brain. A special feature of brain structures is that they are more easily changeable – not only by genetic mutation, but also by sensory input and individual learning.

Indeed, we usually say only of brain structures and perhaps of the DNA sequence that they represent knowledge [4]. But that is not correct, because all forms and all structures that enable us to accomplish a goal or purpose represent knowledge, namely the knowledge of how they must be formed and structured to accomplish their purposes. Therefore, the ever more perfect adaptation of biological forms and structures to the purpose of survival in the course of evolution is an accumulation of knowledge and, with that, learning.

We find a similar evolution in the realm of artifacts. Compare, for example, early automobiles that resembled motorized carriages with modern cars: the latter, in their outer form as well as in their internal structure (engine, gearbox, electronics, etc.), contain much more knowledge of how the respective purposes can best be fulfilled. Here, again, forms and structures represent knowledge, but it is human knowledge, not the cars’, and it is obvious who learned here: engineers and designers.

However, if we regard biological evolution as a kind of learning – who learns there? We said that living things without consciousness are not capable of individual learning, and we do not assume that plants, bacteria, or earthworms, for instance, have consciousness. I think the most appropriate solution here is to speak of ‘learning by the species’. The knowledge acquired in this kind of learning is knowledge of the species. It is genetically transmitted to the individuals, in whose forms and structures this knowledge is (more or less completely) represented. The knowledge of the species is thus the individuals’ knowledge as well. It enables them to understand their world and to behave ‘right’ in it, such that they survive and preserve their species.

Therefore, learning by living things is not only individual learning but also learning by the species, that is, the evolution of behavior-determining structures by mutation and selection. And even if some sorts of structure, e.g., brains, play a central role in the control of behavior – all forms and structures of a living thing more or less determine its behavior; thus the evolution of the entirety of form and structure of living things is learning, an accumulation of knowledge.
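To see the bare mechanism at work, here is a minimal sketch of ‘learning by the species’ in Python. Everything in it – the one-dimensional ‘structure’, the fitness function, the numbers – is an illustrative assumption of mine, not part of the argument: no individual learns anything, only the less fit fail to procreate, and yet after some generations the average structure of the population encodes knowledge of the environment.

    # A toy model of learning by the species: selection and mutation only.
    # All names and parameters are illustrative assumptions, not taken from the text.
    import random

    OPTIMAL_STRUCTURE = 0.73   # a fixed property of the environment
    POPULATION_SIZE = 200
    MUTATION_RANGE = 0.05
    GENERATIONS = 50

    def fitness(structure: float) -> float:
        # Chance of survival and procreation: the better the structure fits
        # the environment, the higher the fitness.
        return 1.0 - abs(structure - OPTIMAL_STRUCTURE)

    def next_generation(population: list) -> list:
        # 'The stupid go extinct': the less fit half leaves no offspring.
        survivors = sorted(population, key=fitness, reverse=True)[:len(population) // 2]
        # Offspring inherit the parental structure with a small random mutation.
        return [parent + random.uniform(-MUTATION_RANGE, MUTATION_RANGE)
                for parent in survivors for _ in range(2)]

    population = [random.random() for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        population = next_generation(population)

    mean = sum(population) / len(population)
    print(f"mean structure after {GENERATIONS} generations: {mean:.3f} "
          f"(environmental optimum: {OPTIMAL_STRUCTURE})")

The knowledge of how to fit this environment is written down nowhere; it is represented only in the structures of the surviving individuals – which is the point of the analogy.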

 




3.5.1. Excursion: machine learning

One may object that there are machines capable of autonomous learning, e.g., by means of artificial neural networks. Such networks are interesting in our context because they are characterized by non-symbolic processing (different from conventional computers, but similar to brain structures). The person who trains such a network has no insight into the internal changes, the self-configuration, or the solution paths of the system; thus the machine seems to generate and to contain knowledge that is not human knowledge.

Here, our first question should be: what are machines? What is their nature? In Section 1.3, I said that artifacts are products (externalizations) of the human mind, manifestations of human purposes and needs. Tools and machines, we can further say, are extensions of the human body: the woodpecker makes a hole with its beak – we use chisel and hammer. The spider produces a thread with its spinneret – humans first used the drop spindle (already a simple machine), later the spinning wheel (a more complicated machine), and today we have automated spinning machines. Birds fly with their wings – we use airplanes, etc. The list could be extended ad libitum.

Machines, we can say in general, are extensions of our physical (bodily) capabilities. This is true for computers and other ‘intelligent’ machines as well: they are extensions of the human brain, and they enable us to do some kinds of intellectual work more quickly, more precisely, more cheaply, or in places that are dangerous or difficult for humans to reach, e.g., outer space. The same is true for machines in general with regard to physical work.

If our assumption that machines are artificial extensions of the human body is true, then the training of a machine capable of learning can be compared with the training of the human body, including the nervous system and brain: in both cases the aim is determined by a human need or desire – to become able to do a piece of work or an activity as precisely and swiftly as required. When I, for example, learn to play the piano, structures in my hands as well as in my brain gradually change: muscles become stronger and joints become more flexible, some neuronal networks are strengthened or refined, new interconnections are formed in my brain.

I have no insight into those structural changes in my body and brain when I learn to play the piano. I don’t know what exactly changes in my brain or in which neuronal structures and configurations my new knowledge is represented. I only experience my ability to play becoming better and better (some musicality assumed). The same would hold for practicing chess, a purely intellectual activity: my ability to play chess would improve (sufficient intelligence assumed), but I wouldn’t know anything about the structural changes in my brain associated with my learning achievement.

The same is the case when an artificial neural network is trained: we observe the machine doing its work better and better, but we have no insight into the internal changes enabling this progress. If I learn to play the piano or chess, the acquired knowledge (including implicit, procedural, bodily knowledge) is mine, i.e., it’s human knowledge. If, however, a machine is nothing but an extension of the human body, then the knowledge represented in an adaptive machine trained by a human is human knowledge as well.
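To make this opacity concrete, here is a minimal sketch (in Python, assuming the widely used numpy library; the task and all names are my own illustrative choices, not anyone’s actual system). A tiny network is trained toward a goal that we, the humans, have defined; we watch only the error falling, while the resulting ‘knowledge’ consists of matrices of numbers that give us no insight into how the solution is represented.

    # A minimal sketch of training a small artificial neural network (XOR task).
    # The goal (the target outputs) is human-defined; the trainer observes only
    # the falling error, not the meaning of the learned weights.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # human-defined goal (XOR)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)               # hidden layer
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)               # output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(5001):
        h = sigmoid(X @ W1 + b1)                 # forward pass
        out = sigmoid(h @ W2 + b2)
        error = out - y
        d_out = error * out * (1 - out)          # backward pass (gradient descent)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)
        if step % 1000 == 0:
            print(f"step {step}: mean squared error {np.mean(error**2):.4f}")

    print(W1)  # the acquired 'knowledge': numbers that tell the trainer nothing

What remains at the end are the network’s weights – structures that represent the acquired ability, just as structural changes in my brain represent my ability to play the piano, and in neither case does the trainer have direct insight into them.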

In all cases of machine learning, the end goal is defined by human needs or desires – the machine may then find the best way to this goal, the most appropriate internal structure or configuration (remember that ‘structure’ in our context means not only the organization of matter in space, but also, e.g., that of energy in time). The end goal of machine learning can even be purely scientific, for example, to test the learning ability of an artificial neural network – this, too, is a human purpose. Who is proud when a chess computer defeats the world champion? Not the computer, but its creators: they, in truth, triumphed by means of the machine. Conclusion: machines do not really learn autonomously, not ‘on their own account’, and the knowledge generated in machine learning is human knowledge.

 



Footnotes

  1. The fact that some signs seem to have an objective meaning – e.g., a character means a certain speech sound, a traffic sign means a certain prohibition – is a result of communication leading to the development of sign systems (languages). That is, (virtually) objective meaning ultimately depends on convention within a language or communication community. The fact that one and the same sign can have different meanings in different languages, or even in the same language in different contexts (homonyms), shows that meaning is only virtually objective.  [⇑]
     
  2. Searle, J. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences 3, 417–457  [⇑]
     
  3. It is not ‘only the software’ that changes in the brain through individual learning. There is no difference between hardware and software; everything in the brain is, as it were, hardware, but it is changeable – some parts, e.g., of the cortex, more easily than others. Learning is always associated with structural change, and new knowledge is represented in the structure.  [⇑]
     
  4. Creationists like to point to the amazing purposiveness in the design of plants and animals, claiming that only an omniscient spirit could have created all that. They are right (at least) insofar as a lot of knowledge is indeed represented already in a tiny bacterium, and the question arises: whose knowledge is it?  [⇑]
     
