Machine Mind

Preprint of an article submitted for consideration in the International Journal of Machine Consciousness © 2011 World Scientific Publishing Company, http://www.worldscinet.com/ijmc/ijmc.shtml

Polish version: https://mindconsciousness.wordpress.com/2011/01/09/samoswiadomosc-maszyn/

Eutherm Ltd, Poznanska 129/133,
Ozarow Mazowiecki, 05-850 Poland
w.galus@eutherm.eu

Received Day Month Year
Revised Day Month Year

Abstract:

This paper argues for the purposefulness and feasibility of constructing thinking machines that possess self-consciousness. It indicates that such machines will exhibit curiosity, the ability to understand the actions they perform, the ability to communicate in a natural language, a desire to acquire knowledge of the world, and a capacity for self-identification and a sense of distinctiveness. The material basis of these processes is indicated, i.e. the neural mechanisms that ensure these properties are present in the brains of some animals. A desired architecture of artificial cognitive neural networks for achieving these properties is presented.

Keywords: self-consciousness; cognitive architectures; curiosity; machine reasoning; understanding; self-awareness.


1.   The meaning of self-consciousness

The aim of these considerations is to show that self-consciousness is an important and desirable property of any intelligent cognitive neural network. This concerns both living minds and artificial cognitive neural networks. Self-consciousness, defined as the ability to separate and identify the self, confers an evolutionary advantage on intelligent animals by enhancing their adaptability, their plasticity and their potential to further increase their intelligence. Besides bearing on the development of species, it also affects ontogenetic development.

The idea of equipping an artificial intelligence with self-consciousness raises controversy; the dangers associated with it and the ethical problems that may arise are often pointed out. However, acknowledging the effectiveness of the solutions offered by evolution, it should be noted that developing an artificial intelligence that surpasses human intelligence will be possible only by endowing an artificial brain with the motivation to enhance its intelligence through the need for cognition, which can be either unconscious (in the case of automatic, instinctive processes) or conscious. This, in turn, implies equipping it with self-consciousness.

It should be noted that "the need for cognitive activity and exploration of the environment" fully captures the phenomenon of "curiosity". This implies that equipping a cognitive system with "curiosity" is a prerequisite for it to gain self-consciousness, which means constructing a model of reality and understanding the position of the self in this reality.

2.   The meaning of “curiosity”

In the animal world there exist several degrees of consciousness, or rather a continuum of consciousness states and intelligent behaviours, from the most primitive ones up to human psychology. They are all accompanied by a certain degree of curiosity, and we can observe a strong correlation between the degree of "curiosity" and the degree of consciousness.

In the living world, curiosity is the seeking of an alternative way, a different, better solution to a problem; an overpowering desire to check what is behind a turn, a barrier, an obstacle. Generally, this is the need for cognition. Its essence is the intentional work of a mind that searches for new sensory perceptions, or explores the internal models of reality recalled from permanent memory, in order to constantly expand the database for subsequent analyses, new categorisations and new comparisons of these models. This is exactly what our computers are not capable of. Their "curiosity" is castrated by programmers. We do not demand that our machines constantly ask "what the hell is that?", as we expect from them the most effective numerical performance, prompt solution finding, or in general a narrowly understood economy of operation. We create machines that are slaves. However, lack of freedom does not promote thinking.

An animal mind follows any new information that is associated with previously acquired models of reality, especially with models that are currently being analysed. This concerns the penetration of areas neighbouring in space and in time, in which an intelligent entity moves, in the broadest sense of these words. The curiosity associated with observed objects, images, impressions and their mental representations recalled from memory corresponds to perceptual curiosity [Berlyne, 1954] or physical curiosity [Dewey, 2007]. It is mostly not consciously controlled; it operates automatically and somewhat chaotically.

In the case of the human mind there is also a linguistic consciousness, which makes it possible to formulate verbalised questions about the phenomena perceived in the environment and their models. This is associated with a conscious curiosity, i.e. a verbally formulated intellectual need for cognition. This notion of curiosity corresponds to "epistemic curiosity" in Berlyne's classification, or to "intellectual curiosity" according to Dewey. As mentioned earlier, consciousness is gradable, and its increase is correlated with the development of a communication language and of cognitive abilities, which are in turn driven by "curiosity".

Research on the behaviour of animals and humans shows clearly how fundamental the role of "curiosity" is in ontogenetic and phylogenetic development. Curiosity stimulates individual activity and determines expansion into the environment and the ability to explore it, which increases or decreases the survival chances of the whole species. But how is it achieved? What is the material basis of curiosity? These are the questions that must be answered in order to understand how curiosity arises.

3.   Visual consciousness, recognition of images

In the light of recent studies, the astonishing hypothesis of Crick [1994], stating that mental processes are products of the brain, can be further elaborated, bearing in mind that the animal brain is built from neurons and astrocytes, the latter connected with other astrocytes and with receptors and effectors through neurons. One can make the simplified assumption that astrocytes and synapses record chemical traces of brain activity, while electrical stimuli are conducted between astrocytes mainly by neurons. This division of astrocyte and neuron function suggests that long-term memory is linked to the chemical states of astrocytes and of the neural synapses they possibly control, whereas information processing and short-term memory are linked to their electrical states and to the states of electrochemical excitation of neurons.

The brain achieves its ability of logical processing not through complex operating programmes of astrocytes or neurons, but mainly through an appropriate structure of numerous connections between relatively simple elements. Because they affect the operation of neurons and the enhancement or inhibition of synaptic connections, astrocytes may adjust the structure of connections to current tasks and previously gathered experience. As a result, the structure of connections in the system is modified by the chemical states of astrocytes, and hence it is in the network of the system as a whole that memory resides.
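
To make this division of labour concrete, here is a minimal sketch in Python (all names and dynamics are hypothetical illustrations, not claims about real glial physiology): neurons carry a fast, decaying electrical state standing for short-term memory, while astrocytes hold a slowly drifting chemical trace that gates the effective connection weights, standing for long-term memory.

```python
import numpy as np

class Neuron:
    """Fast electrical state: a candidate substrate of short-term memory."""
    def __init__(self):
        self.activation = 0.0                # decays between time steps

    def step(self, input_current, decay=0.5):
        self.activation = decay * self.activation + input_current

class Astrocyte:
    """Slow chemical state: a candidate substrate of long-term memory.
    Its trace modulates (enhances or inhibits) the synapses it controls."""
    def __init__(self, n_synapses):
        self.chemical_trace = np.zeros(n_synapses)

    def consolidate(self, synaptic_activity, rate=0.01):
        # The long-term trace drifts slowly toward recent activity.
        self.chemical_trace += rate * (synaptic_activity - self.chemical_trace)

    def gate(self, weights):
        # Effective connectivity = base weights modulated by chemical state.
        return weights * (1.0 + self.chemical_trace)
```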


The structure of neural connections and of the brain allows us to identify specialised fields and areas in which the signals reaching them are processed. The hierarchical functional structure is reflected in the stratified structure of the cerebral cortex and in the order of information transmission to successive brain areas responsible for its different functions, termed processing areas. At their lower levels, located close to the receptors, they form maps which to some extent reflect the spatial structure of the stimuli, and for this reason are called retinotopic maps. It can also be assumed that there is a gradual progression from areas processing simple image elements to percepts that represent complex patterns of the environment, and from general concepts and events to those that focus on unique objects and events. Integration of the percepts delivered by receptor fields is believed to take place in fields or maps ordered in the processing hierarchy, which represent the excitation states of lower layers of neural cells. This mechanism is in accordance with the hypothesis of Damasio, later taken up and presented in detail by Crick, and today accepted by the majority of researchers and used in its simplified form in neural cognitive networks for object recognition.

It is easy to imagine that this process of visual perception and image recognition through the electrical excitation of neural cells, neurons and astrocytes, will leave memory traces: both chemical traces in astrocytes and changes in the structure of neural connections, through an effect on synapses. While the states of electrical excitation of cells can be identified with short-term memory, the memory traces referred to above are responsible for long-term episodic and semantic memory.

It should be noted that at the higher processing levels nothing is created that could literally reflect the image observed by the eye and projected onto the retina. Recognition is achieved, simply, through the excitation of neural cells representing generalised symbols of the perceived object's features. In this manner the mind records the fact of recognising an image's components. They are identified by specific configurations of excited cells which recognise more basic features. Neural cells at all hierarchy levels of the visual fields attach little importance to where in the visual field the feature that excites them is present. Nevertheless, the symbolic representation of the observed reality that arises in the brain has perfect spatial organisation, which allows the creation of a complete, three-dimensional image of this reality.

In order to ensure the presence of mental correlates of observed images that are useful for survival, percepts of gradually higher generalisation order are directed to the higher maps and processing fields. A "tree" of ascending excitations is thus formed. An impulse is transmitted towards the stem of this symbolic tree, to successively higher hierarchical levels. As a consequence of comparison and generalisation, information is substantially compressed.
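
The "tree of ascending excitations" can be illustrated with a toy sketch (assumptions: excitation maps are plain arrays, and 2x2 max-pooling stands in for comparison and generalisation), showing how information is compressed on the way up:

```python
import numpy as np

def ascend(retinotopic_map, n_levels=3):
    """Propagate excitations up a hierarchy of processing fields.

    Each level generalises (here: 2x2 max-pooling) the level below, so
    the representation is progressively compressed, forming the 'tree
    of ascending excitations'."""
    levels = [retinotopic_map]
    for _ in range(n_levels):
        m = levels[-1]
        h, w = (m.shape[0] // 2) * 2, (m.shape[1] // 2) * 2
        pooled = m[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
        levels.append(pooled)
    return levels                 # levels[-1] is the most generalised percept

stimuli = np.random.rand(16, 16)  # toy retinotopic excitation map
for i, lvl in enumerate(ascend(stimuli)):
    print(f"level {i}: {lvl.shape}, {lvl.size} cells")
```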


To obtain a complete picture of the environment in which an animal moves, images of the currently observed scene must be associated with images of places that are close in space or time. Recalling them requires that backward signals be sent down the hierarchy to maps or fields (perhaps of topographic structure), exciting the cells that were responsible for forming these images at the moment of their original observation. Their excitation recreates the same visual image from the past, which is then processed at the same time as the currently observed image, making it possible to compare these images (association), search for common elements (selection, correlation), generalise, categorise and perform other advanced processing operations. Transmission of these impulses down the hierarchy, to the levels close to the receptor fields, forms a corresponding tree of descending impulses. Through reciprocal excitation of the cells that store the configurations of neuron excitations which are the mental correlates of an image of the environment, and of the images of places close in space or time, the mind is able to compare the patterns of images transmitted from this level, create a model of the environment and determine its own position in it.
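
A minimal sketch of this recall-and-compare loop, under the simplifying assumptions that memory traces and percepts are plain arrays, that association strength is an inner product, and that agreement is measured by correlation (all names hypothetical):

```python
import numpy as np

def recall(memory_traces, cue):
    """Descending signals re-excite the stored configuration most strongly
    associated with the current cue (content-addressable recall)."""
    scores = [float(np.dot(t.ravel(), cue.ravel())) for t in memory_traces]
    return memory_traces[int(np.argmax(scores))]

def associate(current, recalled):
    """Compare the current percept with a recalled one: the correlation
    measures their agreement, and the element-wise minimum marks the
    excitations the two images share."""
    correlation = np.corrcoef(current.ravel(), recalled.ravel())[0, 1]
    common = np.minimum(current, recalled)
    return correlation, common
```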


But what is the mechanism that forces the mind to penetrate memory, to explore all cognitive resources, to constantly stimulate other cells of lower fields in order to generate impressions associated with the currently analysed information coming from the sensory fields? As will be shown later, this mechanism is precisely "curiosity".

4.     Consciousness – recognition of mental states


If we accept this hypothetical pattern of image formation and recognition, why not extend it to the formation of ideas and models that are even more generalised? Applying the mechanisms of mental representation of sensory impressions to the recognition of the mind's own mental states enables the formation of complex groups of excitations, generated in the lower layers and cortex areas, which produce patterns of reality when transmitted upward. Similar patterns, created in other brain centres responsible for specialist linguistic, logical, geometrical and other functions, may interfere in any configuration. They interact through comparison and the formation of correlations, whose strength determines the further propagation of the wave of excitation of neural cells in successive, higher fields. This corresponds to the relational model developed by Taylor and the global workspace proposed by Baars [Baars et al., 1998].

According to this model:

"At any one time, different sensory inputs, including such 'inner' inputs as visual imagery and inner speech, compete for access to consciousness. Other possible inputs include abstract conscious content such as beliefs and ideas. The material selected to be conscious involves both top-down influences and bottom-up inputs. [...] Thus conscious content reflects an endless interplay of competition and cooperation between possible inputs and top-down influences."

A drawback of this model was that it did not specify the source and cause of these "inner" inputs, nor of the impulses generating the above-mentioned abstract inputs. Extrapolation of the image recognition method described in the previous section indicates why the mind is flooded with a stream of information on alternative patterns and models of the analysed reality.

In a similar manner, by pattern competition and selection, more complex mental states can be recognised, corresponding to the concepts that form a holistic model of the environment. Given the broad knowledge stored in memory, new complex models of the world will be created in the course of the learning process. Mental correlates will be generated of abstract notions, such as symmetry, beauty or the sense of familiarity with an object, and also of emotional states, such as fear, good, evil or love. Conscious models of the world that employ these notions make it possible to form an outlook on the world.
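
The competition and selection described above can be caricatured in a few lines: bottom-up salience and top-down influence are simply added, and the strongest candidate wins access to the workspace (a toy model, not the formalism of Taylor or Baars):

```python
def workspace_select(candidates, top_down_bias):
    """Competition for access to consciousness: each candidate pattern's
    bottom-up salience is combined with top-down influence, and the
    winner enters the global workspace."""
    scores = {name: salience + top_down_bias.get(name, 0.0)
              for name, salience in candidates.items()}
    return max(scores, key=scores.get)

inputs = {"visual_imagery": 0.4, "inner_speech": 0.6, "belief": 0.3}
bias = {"visual_imagery": 0.3}          # the current goal favours imagery
print(workspace_select(inputs, bias))   # -> visual_imagery (0.7 beats 0.6)
```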


In the brains of living organisms, new types of recognised impressions are accompanied not only by the creation of new structures able to remember them, but also by the formation of new connections between the cells that are in any way related to the structures in which these impressions are stored. These connections may be created through spatial proximity, as generalised impressions are placed in the vicinity of structures that specialise in similar "issues". In this way fields containing "problem maps" are formed, which are processed by the brain and correspond to those created for the processing of visual signals and the recognition of images (the localness hypothesis). Animal brains evolved in a direction that enables them to find the most valid interpretation of the received stimuli and of the data stored in their memory with the simplest means available. Using spatial proximity seems a very simple but at the same time effective means. The proximity of fields responsible for the recognition of complex notions is crucial for their association, as they are easily excited by the calcium-ion waves generated by astrocytes, which are the basic means of an astrocyte's communication with practically all neighbouring astrocytes.
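
A toy rendering of the localness hypothesis (the grid, the similarity measure and the placement policy are all assumptions): a new memory trace is written into a free cell adjacent to the stored trace it most resembles, so that related content clusters spatially:

```python
import numpy as np

def store_locally(memory_grid, new_trace):
    """Write a new impression next to the stored trace it resembles most,
    so that 'problem maps' of related content form by spatial proximity."""
    h, w = len(memory_grid), len(memory_grid[0])
    best, best_sim = (0, 0), -np.inf
    for i in range(h):
        for j in range(w):
            trace = memory_grid[i][j]
            if trace is not None:
                sim = float(np.dot(trace, new_trace))
                if sim > best_sim:
                    best, best_sim = (i, j), sim
    bi, bj = best
    for di in (-1, 0, 1):               # nearest free cell around the match
        for dj in (-1, 0, 1):
            i, j = bi + di, bj + dj
            if 0 <= i < h and 0 <= j < w and memory_grid[i][j] is None:
                memory_grid[i][j] = new_trace
                return (i, j)
    return None                         # neighbourhood full: grid must grow
```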


The mind constantly seeks patterns in which the excitation states of lower-layer cells show similarity to the configurations of stimuli currently generated by sensory inputs or by the imagination, stimulated by the curiosity mechanism. This means a constant search for similarities. A group of stimuli has to be assigned to something; this, too, is a manifestation of curiosity. An object that is not assigned to anything draws the attention of the mind, and the senses try to obtain more information about it. For example, shadows looming in a dark forest must be recognised as trees, bushes or animals. Until this recognition takes place, we try to examine them and acquire information about them with all available senses, and to understand the meaning of this information.
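
This pull on attention can be expressed as a simple rule (threshold and similarity measure are assumptions): if no stored pattern matches the stimulus well enough, a curiosity signal proportional to the recognition deficit is raised:

```python
import numpy as np

def curiosity_drive(stimulus, known_patterns, threshold=0.8):
    """If no stored pattern matches the stimulus well enough, the
    unassigned object draws the attention of the mind: return how much
    further sensing and analysis it should attract
    (0 when recognised, >0 when novel)."""
    sims = [np.corrcoef(stimulus.ravel(), p.ravel())[0, 1]
            for p in known_patterns]
    best = max(sims) if sims else -1.0
    return max(0.0, threshold - best)
```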


If the stimuli are internal, generated by lower brain fields, the mind likewise tries to assign a familiar form to them. For this reason we recognise elements of the world seen in dreams, although they are far from reality. On a higher level of abstraction, when our thinking touches on general ideas, assigning these ideas to some model explaining a given phenomenon allows complex theoretical and logical constructions to be collated, which combine to form whole areas of knowledge containing descriptions of complex natural phenomena. The drive to constantly update the database of impressions, ideas and models necessary for a complete description of reality, in accordance with observed facts, is the driving force of the development of science.

Finally, the transmission of signals indicating the compatibility of descriptions touching on different aspects of a phenomenon provides a sense of understanding. Until we reach this sense of understanding, our mind concentrates on the task. It can be assumed that the sense of understanding is also recorded in a relevant centre of understanding, which gathers the neural cells of "understanding", just like the "grandmother cells" known from descriptions of our vision and of the ability to recognise human faces. Having models of reality that provide the feeling of understanding our own position in the environment and our own autonomy, we create a new set of excitations of the highest-level cells, which corresponds to the impression of consciousness. Obviously this impression can also be stored, in a "consciousness cell", whose excitation state at a given time will remind us that we are conscious. The freedom to choose such a moment, and the memory of having already received this signal once, will be recorded as "the permanence of existence", and of course stored in a "permanence of existence" cell. Is this chain endless? For us humans, the chain of abstract generalisations is probably close to its end. However, for beings brighter than us, equipped with greater intellectual potential, higher states of contemplation of the world are possible: the sense of "unity with the world", "nirvana", etc.

The existence of the "self-consciousness sense" cells has nothing to do with the state of consciousness itself, which is an emergent feature of the whole brain and does not have any specific localisation.

Curiosity constantly provides the mind with new patterns, and the mind has a need to compare and associate them. This need to associate new patterns with patterns stored in memory is an immanent property of the mind. The mind cannot function if it cannot reach the state of compatibility, meaning the state of understanding the patterns of reality that it creates. In order to reach this state, the mind is capable of making every possible cognitive effort. If, however, this effort is not or cannot be effective, the mind feels discomfort, concern, fear, or in extreme cases panic and other negative emotions. If agreement of patterns, and thus understanding, is achieved, the mind feels positive emotions.

If the mind is able to comprehend that a full understanding of reality is not possible, it tries to create an explanation in the form of a "complementary model" of reality, which surfaces as an escape into mysticism, into fictitious knowledge. This leads to beliefs and superstitions.

A prolonged state of lack of understanding of the received patterns, and the accompanying chronic fear, leads to a range of psychological disorders and pathological states known in psychology and psychiatry.

Thus, the need for understanding is the most fundamental instinct of the conscious mind. For a conscious mind, the necessity to understand is the highest law: the conscious mind is simply defined by it.

5.    Artificial consciousness


What processes of information transmission between brain structures accompany these phenomena? Are we able to construct an electronic model of the brain that shows curiosity, consciousness and other mental states typical of the brains of living creatures?

Research on artificial intelligence has enjoyed staggering success. Machines have been constructed that perform tasks for which a human uses intelligence. Significant progress has been made in the development of artificial vision and image recognition systems. Devices for reading, generating human speech and translating between various languages have been constructed (although their quality is so far unsatisfactory). Expert systems diagnose more accurately than teams of human specialists. Machines controlled by artificial intelligence perform complex missions on their own, without requiring constant, direct assistance from a human.

In recent years, significant progress in imitating the processes taking place in the minds of living creatures has been made in the area of cognitive neural networks. The architecture of these networks largely resembles, or rather intentionally imitates, the structure of the brain. The networks have layers, the lower ones being linked through receptors to the higher ones. They are capable of transmitting matrices of stimuli representing mental states up the hierarchy, from the fields linked to receptors to the higher processing levels. These states are determined by matrices, stored in permanent memory, which determine the inhibition or transmission states of synapses. Information on the received impressions, transmitted in this way to the higher levels, forms a tree structure, which corresponds to the hierarchical organisation of mental patterns. The greatest hopes for future success are offered by networks with emergent or hybrid architectures.

These networks are taught to recognise patterns of objects, images and symbols by means of "supervised learning" or "reinforcement learning". However, up to now we have failed to equip these complex machines with self-consciousness.

6.    The hypothesis on the indispensability of the curiosity instinct for achieving self-consciousness


In the section on the meaning of curiosity, I pointed to curiosity as a property that artificial neural networks and algorithmic computers usually lack.

Now I present the hypothesis that acquiring consciousness is a natural process if we acknowledge the role of the "curiosity" instinct, which creates an internal motivation to employ cognitive functions. An artificial brain should be equipped with a mechanism for penetrating memory resources, called the mechanism of "curiosity". It should be able to follow all associations in its own memory, to go beyond the area in which it was trained and beyond the problem field that is currently being analysed, and to reach for new tasks which it formulates for itself.

In the brain of an animal, three mechanisms underlie the "curiosity" function (a toy sketch of all three follows the list below).

  • The first is a mechanism of exploration of memory resources through the transmission of backward excitation waves to lower structures that are closer to the sensory fields. One source of these backward signals may be groups of cells representing consciously and unconsciously excited patterns.

  • The second is an automatic mechanism of excitation of neighbouring cells, especially of astrocytes containing permanently stored information. It leads to a kind of diffusion of stimuli reaching a given group of cells in which memory of an associated group of objects, ideas, events, etc. is stored. The hypothesis on the spatial correlation of impressions and ideas of a common nature should be recalled here.

  • The third is long-term excitation of cells storing information that is in some way associated with the problem in question. This association implies the existence of previously formed pathways of information transmission between these structures. These routes may be created in the process of learning and network development, and also as a result of behavioural experiences, mental experiments, impressions, etc. Distant cells excited in this manner, including astrocytes, may become sources of new, spontaneous waves of impulses sent to neighbouring astrocytes, thereby triggering the creation of new, even most surprising, associations.
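
A toy sketch of all three mechanisms (purely illustrative: patterns are vectors, the astrocyte field is a toroidal grid, and all rates are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def backward_exploration(active_pattern, memory, k=3):
    """Mechanism 1: consciously controllable backward waves. The currently
    excited pattern recalls the k memory traces most strongly associated
    with it."""
    scores = memory @ active_pattern          # association strengths
    return memory[np.argsort(scores)[-k:]]

def neighbour_diffusion(grid_state, rate=0.2):
    """Mechanism 2: automatic spread of excitation to neighbouring
    'astrocytes' (here a 4-neighbour average on a toroidal 2-D grid)."""
    g = grid_state
    spread = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
              np.roll(g, 1, 1) + np.roll(g, -1, 1)) / 4.0
    return (1 - rate) * g + rate * spread

def remote_reverberation(excitation, pathways, noise=0.05):
    """Mechanism 3: excitation travelling along previously learned pathways
    lets distant cells fire spontaneously, seeding new, sometimes
    surprising associations."""
    return pathways @ excitation + noise * rng.standard_normal(len(excitation))
```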


These mechanisms stem from the morphology of the brain and the properties of neural cells: astrocytes and neurons. They are permanent, innate processes, and can therefore be called "an instinct of curiosity". The operation of the first mechanism may be effectively controlled by conscious processes; the remaining mechanisms may operate beyond consciousness. The effects of these mechanisms, combined with the mechanisms of competition and neural inhibition, are equivalent to the mechanism of "attention" previously proposed by other researchers.

The creators of the cognitive neural networks constructed up to now realised long ago that their systems should perceive a "sense of existence" or "purpose of life". For this reason, they tried to apply various types of motivation to force the networks into action. Most often these were tasks imposed "by a superior". Attempts were made to formulate so-called "values" which the systems would use as guidance. Attempts were also made to optimise a selected parameter, which could be a so-called "artificial pain".

It was further attempted to create "curiosity" by enforcing cognitive behaviours involving the exploration of an area or the examination of new objects. "Curiosity" incited in this way is artificial in nature, not related to the neural mechanisms of memory penetration and of obtaining patterns for comparison. The work of P.-Y. Oudeyer [Oudeyer et al., 2007] discusses whether exploration of the environment can be triggered by self-learning motivated by maximisation of the possessed knowledge, which the authors term "curiosity". It is therefore an artificial curiosity. A more complex system of internal motivation has been presented by the team of J.A. Starzyk [Starzyk et al., 2010], which postulated a multi-level instinct of pain avoidance, leading to more sophisticated and abstract motivations that force the system into action. Complementary to this mechanism is the introduction of a certain non-removable level of pain, which forces activity and exploration of the environment once the main sources of "existential pain" are neutralised. This mechanism, by forcing exploratory and cognitive behaviours, plays the role of curiosity. Selection of the dominant motivation is done by the Central Executive [Starzyk & Prasad, 2011], which controls the mechanism of competition and the inhibition of processes that are not given priority. However, an artificial curiosity created in this manner remains a curiosity about the external world. It cannot be a mechanism supplying reality patterns from the set of stored experiences.
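
In the spirit of this literature (a generic sketch, not a reproduction of the cited systems), such an artificial curiosity can be written as a reward for the improvement of the agent's world model:

```python
def knowledge_gain_reward(error_before, error_after):
    """Reward an action in proportion to how much it improved the agent's
    model of the world (its drop in prediction error). An agent probing an
    already-mastered region earns roughly zero and is therefore pushed
    toward regions where its model can still improve."""
    return error_before - error_after
```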


Pain is a strong stimulus, and fear of pain, related to the self-survival instinct, has been discussed several times as a strong motivational factor, much more important than curiosity. It seems that the two mechanisms, curiosity and the instinct to "avoid pain", are competitive. However, it can be noted that both instincts exhibit enormous similarities:

  1. Both of them are natural for living organisms and sufficiently strong to determine species and individual survival.

  2. Both of them are products of evolution.

  3. For both of them neural mechanisms of their development can be indicated.

  4. Both of them can be generated by external and internal neural excitation.

It should be concluded that they are de facto at opposite poles of the same phenomenon. Living creatures commonly show curiosity, which, however, is restrained by fear: the fear of pain (i.e. of any discomfort). I agree with researchers that pain refers to intelligence, i.e. to more primitive behaviours; there may exist intelligent behaviours without self-consciousness. Curiosity, in turn, refers to the shaping of self-consciousness, i.e. the sense distinctive of the most developed creatures. However, is curiosity, or the need to explore the environment, not evolutionarily younger? It results from the need to satisfy hunger and sexual desire, or even from fundamental evolutionary principles, such as the expansion of species. It should be assumed that both of these motivations act simultaneously: the instinct of pain avoidance, to force activity and maintain safety, and curiosity, in order to achieve higher states of consciousness.

Implementation of the "curiosity" mechanisms in an artificial brain makes it possible to transform it into a conscious mind. However, it is not curiosity directly that can be an effective factor motivating action. This is achieved by formulating a purpose function, i.e. by making the motivation to act a function of the agreement of the reality patterns reaching the comparison centre. Agreement between the patterns of the environment and the models of the world and its phenomena means a sense of understanding. This is how an artificial cognitive network is made subject to the "Principle of the Necessity to Understand".

In the proposed solution, motivation is a natural parameter: a function of the correlation obtained during pattern recognition and comparison, which measures the agreement of the compared patterns of reality, which in turn means, in the language of subjective mental notions, understanding. Examining the operation of the Principle of the Necessity to Understand on the example of the human brain, it can be postulated that the human brain possesses, and an artificial cognitive neural network should possess, an integrated mechanism that seeks the maximum of the function of correlation of the excitation states of the cells that form these patterns. This "value" is sufficient to generate the whole complexity of conscious behaviours.
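
The postulated purpose function can be sketched as follows, taking the mean pairwise correlation of the patterns delivered to the comparison centre as the measure of "understanding" (the aggregation rule is an assumption):

```python
import numpy as np

def understanding(patterns):
    """Agreement of the compared patterns of reality: here, the mean
    pairwise correlation of the cell-excitation patterns delivered to
    the comparison centre."""
    if len(patterns) < 2:
        return 0.0
    cors = [np.corrcoef(patterns[i].ravel(), patterns[j].ravel())[0, 1]
            for i in range(len(patterns))
            for j in range(i + 1, len(patterns))]
    return float(np.mean(cors))

def motivation(patterns):
    # Residual drive: how far the mind still is from full agreement (1.0).
    return 1.0 - understanding(patterns)
```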


In order to use the potential present in this motivation, the brain should not experience memory limitations that prevent it from memorising generalised ideas and patterns of reality. Furthermore, the system should be equipped with the curiosity mechanisms described above. Thus a real understanding, resulting from a real curiosity, will arise.

One can therefore attempt to define the desired architecture of a cognitive neural network (a skeleton sketch follows the list):

  1. It should be a connectionist architecture with a hierarchical structure, of the emergent or hybrid type, divided into fields, maps and specialist function centres.

  2. It should have memory of sufficient capacity and an extensive hierarchy of processing fields or, possibly, allow dynamic creation of "astrocytes"*.

  3. It should be a network with separated functions of operational, episodic, volatile (i.e. neural) and long-term memory, the last assigned to separate cells, called astrocytes.

  4. The network should enable transmission of impulses over long distances through neural connections in ascending and descending tree structures. It should possess a mechanism for propagating backward inhibitory signals, used to select privileged configurations of mental states. This mechanism will determine the rules of grouping impulses, necessary for the logical operations performed by the network.

  5. The mechanism discussed in point 4 should lead to the creation of reality patterns that are compared in a specialised centre.

  6. The network should have a specific motivation, provided by maximisation of the function of correlation of the compared patterns, which corresponds to the principle of the necessity to understand.

  7. The network should enable spontaneous diffusion of the state of excitation between neighbouring "astrocytes".

  8. It should have a mechanism for memorising impressions of similar character in "astrocytes" that are close in space (the localness mechanism).

  9. It should also possess a mechanism for enhancing and remembering the transmission routes of frequently used signals, correlated with a mechanism of grouping the transmitted configurations of excitations.

  10. The excited cells should generate excitation signals back to lower levels of the hierarchy and to other specialised processing centres.

* "Astrocytes" here means long-term memory cells.
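
The skeleton below maps these ten requirements onto a single, purely hypothetical class; the method bodies are deliberately left open, as each corresponds to a mechanism sketched earlier:

```python
class ConsciousCognitiveNetwork:
    """Skeleton of the postulated architecture; each method corresponds
    to one of the numbered requirements above."""

    def __init__(self, levels, astrocyte_grid):
        self.levels = levels               # 1. hierarchical fields and maps
        self.astrocytes = astrocyte_grid   # 2.-3. separate long-term store

    def ascend(self, stimulus): ...        # 4. ascending tree of impulses
    def descend(self, pattern): ...        # 4. backward, also inhibitory, waves
    def compare(self, patterns): ...       # 5. specialised comparison centre
    def motivation(self, patterns): ...    # 6. maximise pattern correlation
    def diffuse(self): ...                 # 7. neighbour-state diffusion
    def store_locally(self, trace): ...    # 8. localness of similar traces
    def reinforce_route(self, path): ...   # 9. strengthen frequent routes
    def feed_back(self, cells): ...        # 10. back-excitation to lower levels
```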


These requirements are far from the architectures of the cognitive networks used today. However, their technical implementation should not present a fundamental difficulty. A network designed in this way will be able to create temporary models of the environment by projecting them onto configurations of mental states. Reaching agreement between the configurations of neural impulses that represent these models generates a state of "understanding" and a sense of self-consciousness, similarly to the minds of living creatures.

Therefore, the functioning of the curiosity instinct, operating through the principle of the necessity to understand, defines the self-consciousness of the mind. The proposed motivational mechanism, in the form of the "Principle of the Necessity to Understand", is natural, general and possible to implement. A prerequisite of its use is the implementation of the "Curiosity Instinct", and its unavoidable result is Self-Consciousness.

7.     Conclusions


The present considerations allow us to understand the meaning of self-consciousness for the evolutionary development of complex structures. These structures, taking the form of information-processing systems, are created and persist if they are localised in a thermodynamic niche as a function of their degree of complexity. The degree of complexity of these systems is a reaction to the diversity of the environment. High diversity of the environment provides an enormous stream of information, whose proper use guarantees an evolutionary success.

However, a system under evolutionary pressure cannot operate directly on the large number of parameters defining the state of the environment. It has to select a function of motivation to action that generalises the stream of data received by the senses from the environment. The competition is won by those systems (organisms) which hold the most effective systems of data compression and are able to compare the compressed data fast, on the highest level of generalisation [Edelman, 1992], [Schmidhuber, 2002].

This possibility is provided by animal minds, and the essence of their special properties is a mechanism that creates reality patterns and compares them with patterns generated, thanks to curiosity, from memory. These patterns, transmitted up the processing hierarchy in the form of excitations of groups of neural cells, are subject to generalisation and hence compression, forming a general model of the environment, described in categories of a high level of abstraction. This leads to the formulation of the Principle of the Necessity to Understand as the principle constituting the most efficient data compression system, ensuring an evolutionary success.

What do machines need this self-consciousness for? If we want machines to perform only the tasks that we define, it is better not to equip them with self-consciousness. If, however, we would like them to be interested in self-learning, in knowledge acquisition, and in learning a logic that is beyond our capabilities, there is no other way.

A distinctive feature of the presented approach is the emergence of self-consciousness without a clear contribution of speech. It has been thought that language capability is an essential prerequisite of self-consciousness. A language is of course present in cognitive processes; however, it is the language understandable to the mind, i.e. the language of configurations of excitation of neural cells, which are symbols that in a way encode mental states. Obviously, the existence of speech centres will allow the ideas and categories created previously to be associated, in yet higher fields, with words, phonemes, letters, syntax and grammatical principles, allowing verbal expression of these states. Self-consciousness, therefore, is not a by-product of speech. It is speech that is a result of the further development of consciousness, which is in agreement with our observations of evolutionary and individual development.

Interest and understanding cannot do without the first-person perspective; and it is not self-consciousness that produces curiosity and understanding, but understanding and interest that create self-consciousness. I have indicated neural mechanisms of curiosity and understanding, whereas there are no known mechanisms or constructions which could form the basis of self-consciousness itself.

Thousands of philosophers could not understand self-consciousness because they sought its centre, its process, its physical phenomenon. There is no such thing. Self-consciousness is an emergent phenomenon. It emerges from a neural network that forms patterns of the environment and has the motivation to construct a coherent model of the world. This motivation employs maximisation of the correlation of the patterns generated by the curiosity mechanisms, i.e. of the excitations of the cell networks which can create such patterns.

This definition applies to animal and artificial brains.


References

  1. Berlyne, D. E. (1954). A theory of human curiosity. British Journal of Psychology, 45, p. 180.
  2. Dewey, R. A. (2007). Psychology: An Introduction. http://www.intropsych.com/index.html
  3. Crick, F. (1994). The Astonishing Hypothesis: The Scientific Search for the Soul. London: Simon & Schuster.
  4. Baars, B. J., Newman, J., Taylor, J. G. (1998). Neuronal Mechanisms of Consciousness: A Relational Global-Workspace Framework. In: Hameroff, S. R., Kaszniak, A. W., Scott, A. (eds.), Toward a Science of Consciousness II: The Second Tucson Discussions and Debates, vol. 2.
  5. Oudeyer, P.-Y., Kaplan, F., Hafner, V. V. (2007). Intrinsic Motivation Systems for Autonomous Mental Development. IEEE Transactions on Evolutionary Computation, 11(2), pp. 265-286.
  6. Starzyk, J. A., Graham, J. T., Raif, P., Tan, A.-H. (2010). Motivated Learning for the Development of Autonomous Systems. Cognitive Systems Research (in press).
  7. Starzyk, J. A., Prasad, D. K. (2011). A Computational Model of Machine Consciousness. International Journal of Machine Consciousness (in press).
  8. Edelman, G. M. (1992). Bright Air, Brilliant Fire: On the Matter of the Mind. [Polish edition: Przenikliwe powietrze, jasny ogień. O materii umysłu. Warszawa: PIW, 1999.]
  9. Schmidhuber, J. (2002). Exploring the Predictable. In: Advances in Evolutionary Computing, Springer, pp. 579-612.



