Cognition: Comparison
Please note this is a comparison between Version 3 by Robert Friedman and Version 2 by Rita Xu.

Cognition is the acquisition of knowledge by the mechanical process of information flow in a system. In animal cognition, input is received by the various sensory modalities, and the output may be a motor or other action. The sensory information is internally transformed into a set of representations that forms the basis for cognitive processing. This is in contrast to the traditional definition, which is based on mental processes and a metaphysical description.

  • animal cognition
  • cognitive processes
  • physical processes
  • mental processes

1. Introduction

1.1. A Scientific Definition of Cognition

Common definitions of cognition often include the phrase mental process or acquisition of knowledge. Reference to mental processing descends from an assignment of non-material substances to the act of thinking. Philosophers, such as the Cartesians and Platonists, have written on this topic, including the relationship between mind and matter. This perspective further involves concepts such as consciousness and intentionality. However, these ideas are based on metaphysical explanations and not on a modern scientific interpretation [1].

The metaphysical approach is exemplified by the philosopher Plato and his Theory of Forms, a hypothesis of how knowledge is acquired. The idea is that a person is aware of an object, such as a kitchen table, by comparison with an internal representation of that object’s true form. The modern equivalent of this hypothesis is that our recognition of an object occurs through the similarity of its measurable properties with its true form. According to this theory, these true and perfect forms originate in the non-material world.

However, face recognition in primates shows that an object’s measured attributes are not compared against a true form, but instead that recognition results from a comparison between stored memory and a set of linear metrics of the object [2]. These findings agree with studies of artificial neural networks, an analog of cerebral brain structure, where objects are recognized as belonging to a category without prior knowledge of the true categories [3].

The theory of true forms originates from a conception of a perfectly designed world with deterministic processes, while a theory absent of true forms may instead depend on probabilistic processes. The rise of probabilistic thinking in natural science has coincided with modern statistical methods and explanations of natural phenomena at the atomic level [4]. This perspective also excludes the concept of an innate knowledge of objects, so cognition instead forms a representation of an object from its constituent parts.

A modern experimental biologist would approach a study of the mind from a material perspective, such as by the study of the cells and tissue of brain matter. This approach depends on reducing the complexity of a problem. An example is from economics, where an individual is generalized as a single type and consequently the broader theories of population behavior are based on this assumption [5]. There is a similar approach in Newtonian physics, where an object’s spatial extent is simplified as a single point in space.

Therefore, the physical processes of cognition are probabilistic in nature, since a specific object may vary in its parts.

Since some natural phenomena are not tractable to mechanistic study, concepts exist that are not solely based on material and physical causes. However, it is necessary to base a scientific theory of brain function on natural mechanisms while disallowing mental causation. There are exceptions where the physical world is visually indescribable and solely dependent on mathematical description, but these occurrences are typically not applicable to the investigation of life at the cellular level.

1.2. Mechanical Perspective of Cognition

Even though a mechanical perspective of neural systems is not controversial, there remains a non-mechanical and metaphysical perspective concerning our sensory perception of the world. An example is the philosophical conjecture about the relationship between the human mind and any simulation of it [6]. This conjecture is based on assumptions about intentionality and the act of thinking. However, others have presented scientific evidence where these assumptions do not hold true [7]. One example is the mechanism for the intent to move a body limb, such as in the act of walking or reaching for a cup. Whereas the traditional perspective expects a mental process of thinking that leads to the generation of these body movements, the mechanistic perspective is that neuronal cells generate the intent of a body movement, and the relevant neuronal activity occurs before a perceptual awareness of the motor action [8].

While a metaphysical explanation for phenomena is applicable to some areas of knowledge, such as in the study of ethics, these explanations are not informative of nature where the physical processes are expected. In the case of neural systems, the neurons, their connections, and the neural processes are measurable by their properties, so their phenomena are assignable to material causes instead of mental causes. Further, there is a hierarchy of cellular organization that describes the brain where each level of this hierarchy is associated with a particular scientific approach [9]. An example is at the cellular level where the neurons are studied by the methods of cellular anatomy. This area of study also includes the mechanisms for neuron formation and communication between neurons.

Neural systems may also be studied from a higher-level perspective, such as at the level of brain tissue or of how information is communicated throughout the neural system [10]. The information processing of the brain is particularly relevant since it has a close analog in the artificial neural network architectures of computer science [11][12]. However, the lower levels of biological organization are not as comparable, since an artificial neural system is based on an abstract and simplified concept of a neuronal cell and its synaptic functions.

1.3. Scope of this Definition

For this description of cognition, the definition is restricted to a set of mechanical processes. The process of cognition is approached from a broad perspective, along with a few examples from the visual system, since vision is the best studied of the sensory systems in primates.

2. Cognitive Processes in the Visual System

2.1. Probabilistic Processes in Nature

The visual cortical system occupies about one-half of the cerebral cortex. Along with language processing in humans, vision is a major source of sensory information from the outside world. The complexity of these systems reveals a powerful evolutionary process. This process is observed across all cellular life and has led to numerous novelties. Biological evolution depends on physical processes, such as mutation and population exponentiality, and a geological time scale to build the complex systems in organisms.

An example of this complexity is studied in the formation of the camera eye. This type of eye evolved from a simpler organ, such as an eye spot, and this occurrence required a large number of adaptations over time [12][13]. Also, the camera eye evolved independently in vertebrate animals and in cephalopods. This shows that animal evolution is a strong force for change, but it may be restricted by the genetic code and the phenotypes of an organism, particularly its cellular organization and structure.

The evolution of cognition is a similar process to the origin of the camera eye. The probabilistic processes that led to complexity in the camera eye will also drive the evolution of cognition and the organization and structure of an animal brain.

2.2. Abstract Encoding of Sensory Information

“The biologically plausible proximate mechanism of cognition originates from the receipt of high dimensional information from the outside world. In the case of vision, the sensory data consist of reflected light rays that are absorbed across a two-dimensional surface, the retinal cells of the eye. These light rays range across the electromagnetic spectra, but the retinal cells are specific to a small subset of all possible light rays” [1].

Figure 1 shows an abstract view of a sheet of neuronal cells that receives information from the outside world. This information is specifically processed by cell surface receptors and communicated downstream to a neural system. The sensory neurons and their receptors may be abstractly considered as a set of activation values that changes over time, a dynamical process.


Figure 1. An abstract representation of information that is received by a sensory organ, such as the light rays that are absorbed by neuronal cells along the surface of the retina of a camera eye.
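The sheet-of-cells abstraction above can be put in code. The following is a minimal sketch, assuming an invented grid size, input range, and decay constant (none of these values come from the text): a two-dimensional grid of sensor cells holds one activation value per cell, and repeated input frames update those values over time, giving a dynamical state.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(activations, frame, decay=0.5):
    """Update the sheet: the old activation decays and the new input
    is mixed in, so the state changes as input arrives over time."""
    return decay * activations + (1.0 - decay) * frame

# Hypothetical 8x8 sheet of sensory cells, initially silent.
sheet = np.zeros((8, 8))
for _ in range(10):                # ten input frames, e.g. light intensities
    frame = rng.random((8, 8))     # values in [0, 1)
    sheet = step(sheet, frame)

# The state stays bounded by the input range and carries a trace of history.
assert sheet.shape == (8, 8)
assert np.all((sheet >= 0.0) & (sheet <= 1.0))
```

The update is a convex combination of past state and current input, which is one simple way to express both the spatial pattern and the temporal dimension described in the caption.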

The question is how the downstream processes of cognition work. This includes how knowledge is generalized from the sensory input data, also called transfer learning [14]. It is particularly relevant since the visual processes occupy one-half of the cerebral cortex [15]. There is theory from the cognitive sciences that both vision and language are the major drivers for acquiring knowledge and perception of the world. It may seem daunting to imagine that our vivid awareness of a scene is built upon levels of basic physical processes. However, cellular life has generated a high degree of complexity by layering physical processes, such as mutation and population exponentiality, over an evolutionary time scale.

A part of the problem is solved by segmenting the world and identifying objects with resistance to viewpoint (Figure 2) [16]. There is a model from computer science [4] that is designed to overcome much of this problem. This approach includes the sampling of visual data, including the parts of objects, and then encoding the information in an abstract form. This encoding scheme includes a set of discrete representational levels of an unlabeled object, and then employs a consensus-based approach to match these representations to a known object.
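The consensus step of such a model can be illustrated with a toy version. The part vectors, object names, and nearest-neighbor voting rule below are invented for illustration; the actual model is far richer. Each sampled part is encoded as a vector, each vector is matched to its nearest stored part, and the object label is taken by majority vote across parts.

```python
import numpy as np

# Hypothetical stored prototypes: each known object is a set of part vectors.
prototypes = {
    "cup":   np.array([[1.0, 0.0], [0.9, 0.1]]),
    "table": np.array([[0.0, 1.0], [0.1, 0.9]]),
}

def vote(parts):
    """Assign each part vector to its nearest prototype part (Euclidean
    distance), then take the majority label across parts: a consensus."""
    labels = []
    for p in parts:
        best = min(
            ((name, np.linalg.norm(p - proto_part))
             for name, protos in prototypes.items()
             for proto_part in protos),
            key=lambda t: t[1],
        )[0]
        labels.append(best)
    return max(set(labels), key=labels.count)

# Three sampled parts of an unlabeled object; two agree with "cup",
# so the consensus is robust to one mismatched part.
parts = [np.array([0.95, 0.05]), np.array([0.8, 0.2]), np.array([0.2, 0.8])]
assert vote(parts) == "cup"
```

The point of the sketch is the robustness: a single badly sampled or occluded part does not overturn the identification, because the label comes from agreement across parts rather than from any one measurement.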

 


Figure 2. The first panel is a visual drawing of the digit nine (9), while the next panel is the same digit but transformed by rotation of the image.

3. Models of General Cognition

3.1. Mathematical Description of Cognition

Experts in the sciences have investigated the question of whether there is an algorithm that describes brain computation [17]. It was concluded that this is an unsolved problem of mathematics, even though every natural process is potentially representable by a model. Further, they identified the brain as a nonlinear dynamical system. The information flow is a complex phenomenon and is analogous to that of the physics of fluid flow. Another expectation is that this system is high dimensional and not represented by a simple set of math equations. They further suggested that a more empirical approach to explaining the system is a viable path forward.

The artificial system, as in the deep learning architecture, has strong potential for an empirical understanding of cognition. This is expected since artificial systems are built from parts and interrelationships that are known, whereas in nature the history of the neural system is obscured, and the understanding of its parts requires experimentation that is often imprecise and confounded with error.

3.2. Encoding of Knowledge in Cognitive Systems

It is possible to hypothesize about a model of object representation in the brain and its artificial analog, a deep learning system. First, these cognitive systems are expected to encode objects by their parts, or elements, a topic that is covered above [3][4]. Second, it is expected that this is a stochastic process, and in the artificial system the encoding scheme is in the weight values that are assigned to the interconnections among nodes of a network. It is further expected that the brain functions similarly at this level, given that the systems are based on nonlinear dynamics and distributed representations of objects [17][19].
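The idea that the encoding lives in weight values distributed across many connections can be shown with a toy network. The dimensions, random seed, and damage pattern below are illustrative assumptions: an object's representation is a pattern across all the weights, so zeroing a few connections shifts the output gradually instead of deleting the object outright.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy one-layer network: the "knowledge" is held in this 16x4 weight matrix.
W = rng.normal(size=(16, 4))
x = rng.normal(size=16)            # an input pattern for one object

y_full = W.T @ x                   # the object's distributed representation

# Damage two of the 64 connections (set them to zero).
W_damaged = W.copy()
W_damaged[0, 0] = 0.0
W_damaged[5, 2] = 0.0
y_damaged = W_damaged.T @ x

# Output units fed only by intact weights are untouched, while the
# affected units shift: graceful degradation, not catastrophic loss.
assert np.allclose(y_full[[1, 3]], y_damaged[[1, 3]])
assert not np.allclose(y_full, y_damaged)
```

This is the behavior expected of a distributed code: no single connection carries the object, so the representation degrades in proportion to the damage.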

These encoding schemes are expected to be abstract and not of a deterministic design based on a top-down only process. Since cognition is also considered a nonlinear dynamical system, the encoding of the representations is expected to be highly distributed among the parts of the neural network [17][20]. This is testable in an artificial system and in the brain.

Further, a physical interpretation of cognition requires the matching of patterns to generalize knowledge of the outside world. This is consistent with a view of the cognitive systems as statistical machines with a reliance on sampling for achieving robustness in their output. With the advances in deep learning methods, such as the invention of the transformer architecture [21], it is possible to sample and search for exceedingly complex sequence patterns. Also, the sampling of the world occurs within a sensory modality, such as from visual or language data, and this is complemented by a sampling among the modalities, which potentially leads to increased robustness in the output [22].
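The pattern matching by sampling that the transformer performs can be reduced to its core operation, scaled dot-product attention. The matrices below are tiny and random, chosen only for illustration; a real transformer adds learned projections, multiple heads, and many layers.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each query samples from all values,
    weighted by how well it matches each key (a soft pattern match)."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores)          # each row sums to 1: a sampling distribution
    return weights @ V, weights

rng = np.random.default_rng(2)
Q = rng.normal(size=(3, 4))   # 3 queries against a sequence of 5 positions
K = rng.normal(size=(5, 4))
V = rng.normal(size=(5, 4))

out, w = attention(Q, K, V)
assert out.shape == (3, 4)
assert np.allclose(w.sum(axis=1), 1.0)
```

Each attention row is literally a probability distribution over the sequence, which is why the text's description of the system as a statistical machine that samples for robustness is apt.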


3.3. Future Directions of Study

The major question is the interpretability of animal cognition as compared to the artificial system. However, it is an assumption that they are currently designed in the same way. It is known that the mammalian brain is highly dynamic, including in the variability of the rates of sensory input processing and in the specific pattern of downstream activation of the internal representations [17]. This variability is not feasibly modeled in the artificial systems since there are constraints on hardware efficiency [17]. This is an impediment to the development of a system analogous to animal cognition, given its continuity of sensory feedback, such as by the mechanisms of attention. An artificial system that includes an overlay architecture with “fast weights” is expected to provide this form of true recursion in learning.
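The “fast weights” overlay can be sketched as two weight matrices that learn at different speeds. The Hebbian decay rule below is a simple stand-in chosen for illustration; published fast-weight architectures differ in detail. Slow weights hold long-term structure, while fast weights are rewritten on every input and carry a short-term memory.

```python
import numpy as np

rng = np.random.default_rng(3)

W_slow = rng.normal(size=(4, 4)) * 0.1   # changes rarely: long-term store
W_fast = np.zeros((4, 4))                # rewritten continually: short-term store

def observe(x, decay=0.9, rate=0.5):
    """Overlay update: fast weights decay, then absorb a Hebbian trace of
    the current input; the effective network is the sum of both matrices."""
    global W_fast
    W_fast = decay * W_fast + rate * np.outer(x, x)
    return (W_slow + W_fast) @ x

x = rng.normal(size=4)
y1 = observe(x)
y2 = observe(x)                          # same input, but the state has changed

# The fast weights make the response history-dependent: a form of recursion
# in learning, since each observation reshapes the next computation.
assert not np.allclose(y1, y2)
assert np.allclose(W_fast, W_fast.T)     # Hebbian outer products are symmetric
```

Presenting the same input twice yields different outputs because the first presentation left a trace in the fast weights, which is the continuity of feedback that the paragraph above identifies as missing from most artificial systems.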

Previous approaches to artificial systems have often overfit the network model to a training data set [23]. In this case, the model is a network of nodes interconnected with weight values, and overfitting hinders the generalizability of the final model and its transferability to other applications. Nature solves this problem by a set of processes. One is the visual processing for spatial and temporal invariance of an object in a scene [25], which leads to a more generalized form of the object than otherwise. A second and complementary method is to neurally code the object by metrics that are abstract and generalizable. This reflects the example where a photograph of a cat is encoded so that it matches both another photograph and a pencil sketch of the cat. This generalizability in identifying objects is now possible in artificial systems [26]. Additionally, it leads to corrections for the variability in an object’s form, such as a change in its orientation, deobfuscation against the background, or detection based on a partial view (Figure 3).

Figure 3. (a) The first panel shows a photograph of a visual scene that contains a table along with other objects. The second panel in (a) is the same scene but transformed so that it appears as a pencil sketch drawing; (b) The first panel is a visual drawing of the digit nine (9), while the next panel is the same digit but transformed by rotation of the image.

For both types of systems, the capability to generalize knowledge is founded on the concept that the information in the world around us is compressible, along with repeatability in its structure and patterns.

Since the artificial systems are continuing to scale in capability and power, it is a reasonable path to continue to empirically explore the sources of error and bias in the neural network systems. This requires establishing how a system works at a low level of operation. The bias problem is also addressed by studies that compare past results with data that include additional sensory modalities, such as visual or language based data. Another approach is to establish unbiased measures for the reliability of results. In the case of animal cognition, there is no immunity to a bias problem, as shown by examples of bias in human perception, such as in language processing.

Another area of interest is the segmenting of the representational levels of sensory input, a process that is successful in animals and presumably underlies their ability to generalize knowledge [4][24]. Animal cognition appears to rely on higher levels of representation of the outside world. The findings in the artificial systems also inform and provide hypotheses for experiments in animal cognition.

Yet another approach in artificial systems is reinforcement learning [27], a method that has been used to approximate the sensorimotor capability of animals. This method differs from an animal, which is capable of frequent learning and updates to its neural system. In this case, the dependence of cognition on an interface with an outside world is characterized as the phenomenon of embodiment, so it is an embodied cognition, even if the outside world is created in a simulation [17][28]. This may be interpreted as a problem of robot design, given its dependence on the outside world for development.

References

  1. Vlastos, G. Parmenides’ theory of knowledge. In Transactions and Proceedings of the American Philological Association; The Johns Hopkins University Press: Baltimore, MD, USA, 1946; pp. 66–77. Friedman, R. Cognition as a Mechanical Process. NeuroSci 2021, 2, 141–150.
  2. Chang, L.; Tsao, D.Y. The code for facial identity in the primate brain. Cell 2017, 169, 1013–1028. Vlastos, G. Parmenides’ theory of knowledge. In Transactions and Proceedings of the American Philological Association; The Johns Hopkins University Press: Baltimore, MD, USA, 1946; pp. 66–77.
  3. Hinton, G. How to represent part-whole hierarchies in a neural network. arXiv 2021, arXiv:2102.12627. Chang, L.; Tsao, D.Y. The code for facial identity in the primate brain. Cell 2017, 169, 1013–1028.
  4. Jeans, J.H. Physics and Philosophy; Cambridge University Press: Cambridge, UK, 1942. Hinton, G. How to represent part-whole hierarchies in a neural network. arXiv 2021, arXiv:2102.12627.
  5. Smith, A. An Inquiry into the Nature and Causes of the Wealth of Nations, 1st ed.; A. Strahan and T. Cadell: London, UK, 1776. Searle, J.R.; Willis, S. Intentionality: An Essay in the Philosophy of Mind; Cambridge University Press: Cambridge, UK, 1983.
  6. Searle, J.R.; Willis, S. Intentionality: An Essay in the Philosophy of Mind; Cambridge University Press: Cambridge, UK, 1983. Huxley, T.H. Evidence as to Man’s Place in Nature; Williams and Norgate: London, UK, 1863.
  7. Haggard, P. Sense of agency in the human brain. Nat. Rev. Neurosci. 2017, 18, 196–207.
  8. Huxley, T.H. Evidence as to Man’s Place in Nature; Williams and Norgate: London, UK, 1863. Ramón y Cajal, S. Textura del Sistema Nervioso del Hombre y de los Vertebrados; Nicolás Moya: Madrid, Spain, 1899.
  9. Ramón y Cajal, S. Textura del Sistema Nervioso del Hombre y de los Vertebrados; Nicolás Moya: Madrid, Spain, 1904. Kriegeskorte, N.; Kievit, R.A. Representational geometry: Integrating cognition, computation, and the brain. Trends Cognit. Sci. 2013, 17, 401–412.
  10. Kriegeskorte, N.; Kievit, R.A. Representational geometry: Integrating cognition, computation, and the brain. Trends Cognit. Sci. 2013, 17, 401–412. Hinton, G.E. Connectionist learning procedures. Artif. Intell. 1989, 40, 185–234.
  11. Hinton, G.E. Connectionist learning procedures. Artif. Intell. 1989, 40, 185–234. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
  12. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. Paley, W. Natural Theology: Or, Evidences of the Existence and Attributes of the Deity, 12th ed.; London, UK, 1809.
  13. Yang, Z.; Purves, D. The statistical structure of natural light patterns determines perceived light intensity. Proc. Natl. Acad. Sci. USA 2004, 101, 8745–8750. Darwin, C. On the Origin of Species; John Murray: London, UK, 1859.
  14. Cichy, R.M.; Pantazis, D.; Oliva, A. Resolving human object recognition in space and time. Nat. Neurosci. 2014, 17, 455–462. Goyal, A.; Didolkar, A.; Ke, N.R.; Blundell, C.; Beaudoin, P.; Heess, N.; Mozer, M.; Bengio, Y. Neural Production Systems. arXiv 2021, arXiv:2103.01937.
  15. Prasad, S.; Galetta, S.L. Anatomy and physiology of the afferent visual system. In Handbook of Clinical Neurology; Kennard, C., Leigh, R.J., Eds.; Elsevier: Amsterdam, The Netherlands, 2011; pp. 3–19. Scholkopf, B.; Locatello, F.; Bauer, S.; Ke, N.R.; Kalchbrenner, N.; Goyal, A.; Bengio, Y. Toward Causal Representation Learning. Proc. IEEE 2021.
  16. Paley, W. Natural Theology: Or, Evidences of the Existence and Attributes of the Deity, 1st ed.; R. Faulder: London, UK, 1802. Wallis, G.; Rolls, E.T. Invariant face and object recognition in the visual system. Prog. Neurobiol. 1997, 51, 167–194.
  17. Darwin, C. On the Origin of Species; John Murray: London, UK, 1859. Panigrahy, R. (Chair). Conceptual Understanding of Deep Learning Workshop; Conference and Panel Discussion at Google Research, 17 May 2021; Panelists: Blum, L.; Gallant, J.; Hinton, G.; Liang, P.; Yu, B.
  18. Tardieu, A.; Delaye, M. Eye lens proteins and transparency: From light transmission theory to solution X-ray structural analysis. Annu. Rev. Biophys. Biophys. Chem. 1988, 17, 47–70. Gibbs, J.W. Elementary Principles in Statistical Mechanics; Charles Scribner’s Sons: New York, NY, USA, 1902.
  19. Borst, A.; Helmstaedter, M. Common circuit design in fly and mammalian motion vision. Nat. Neurosci. 2015, 18, 1067–1076. Griffiths, T.L.; Chater, N.; Kemp, C.; Perfors, A.; Tenenbaum, J.B. Probabilistic models of cognition: Exploring representations and inductive biases. Trends Cognit. Sci. 2010, 14, 357–364.
  20. DiCarlo, J.J.; Zoccolan, D.; Rust, N.C. How does the brain solve visual object recognition? Neuron 2012, 73, 415–434. Hinton, G.E.; McClelland, J.L.; Rumelhart, D.E. Distributed representations. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition; Rumelhart, D.E., McClelland, J.L., PDP Research Group, Eds.; Bradford Books: Cambridge, MA, USA, 1986.
  21. Goyal, A.; Didolkar, A.; Ke, N.R.; Blundell, C.; Beaudoin, P.; Heess, N.; Mozer, M.; Bengio, Y. Neural Production Systems. arXiv 2021, arXiv:2103.01937. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. arXiv 2017, arXiv:1706.03762.
  22. Scholkopf, B.; Locatello, F.; Bauer, S.; Ke, N.R.; Kalchbrenner, N.; Goyal, A.; Bengio, Y. Toward Causal Representation Learning. Proc. IEEE 2021, 1–22. Hu, R.; Singh, A. UniT: Multimodal Multitask Learning with a Unified Transformer. arXiv 2021, arXiv:2102.10772.
  23. Hawkins, D.M. The problem of overfitting. J. Chem. Inf. Comput. Sci. 2004, 44, 1–12. Petty, R.E.; Cacioppo, J.T. The elaboration likelihood model of persuasion. In Communication and Persuasion; Springer: New York, NY, USA, 1986; pp. 1–24.
  24. Yates, A.J. Delayed auditory feedback. Psychol. Bull. 1963, 60, 213–232. Chase, W.G.; Simon, H.A. Perception in chess. Cognit. Psychol. 1973, 4, 55–81.
  25. Wallis, G.; Rolls, E.T. Invariant face and object recognition in the visual system. Prog. Neurobiol. 1997, 51, 167–194. Pang, R.; Lansdell, B.J.; Fairhall, A.L. Dimensionality reduction in neuroscience. Curr. Biol. 2016, 26, R656–R660.
  26. Goh, G.; Cammarata, N.; Voss, C.; Carter, S.; Petrov, M.; Schubert, L.; Radford, A.; Olah, C. Multimodal Neurons in Artificial Neural Networks. Distill 2021. Chaabouni, R.; Kharitonov, E.; Dupoux, E.; Baroni, M. Communicating artificial neural networks develop efficient color-naming systems. Proc. Natl. Acad. Sci. USA 2021, 118.
  27. Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; Lillicrap, T.; Simonyan, K.; Hassabis, D. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 2018, 362, 1140–1144.
  28. Deng, E.; Mutlu, B.; Mataric, M. Embodiment in socially interactive robots. arXiv 2019, arXiv:1912.00312.