Cognition

Higher Cognition: A Mechanical Perspective

Cognition is the acquisition of knowledge by the mechanical process of information flow in a system. In cognition, input is received by the sensory modalities and the output may occur as a motor or other response. The sensory information is internally transformed to a set of representations, which is the basis for downstream cognitive processing. This is in contrast to the traditional definition based on mental processes, a phenomenon of the mind that originates in past ideas of philosophy.

  • animal cognition
  • cognitive processes
  • physical processes
  • mental processes

1. Introduction

1.1. The Many Definitions of Cognition

Common definitions of cognition often include the phrase mental process or acquisition of knowledge. Reference to mental processing descends from an assignment of non-material substances to the act of thinking. Philosophers, such as the Cartesians and Platonists, have written on this topic, including the relationship between mind and matter. This perspective further involves concepts such as consciousness and intentionality. However, these ideas are based on metaphysical explanations and not on a modern scientific interpretation [1].

The metaphysical approach is exemplified by the philosopher Plato and his Theory of Forms, a hypothesis of how knowledge is acquired. The idea is that a person is aware of an object, such as a kitchen table, by comparison with an internal representation of that object’s true form. The modern equivalent of this hypothesis is that our recognition of an object is by the similarity of its measurable properties with its true form. According to this theory, these true and perfect forms originate in the non-material world.

However, face recognition in primates shows that an object’s measured attributes are not compared against a true form, but instead that recognition is from a comparison between stored memory and a set of linear metrics of the object [2]. These findings agree with studies of artificial neural networks, an analog of cerebral brain structure, where objects are recognized as belonging to a category without prior knowledge of the true categories [3].

The theory of true forms originates in the idea of a perfectly designed world with deterministic processes, while a theory absent of true forms may instead depend on probabilistic processes. The rise of probabilistic thinking in natural science has coincided with modern statistical methods and explanations of natural phenomena at the atomic level [4].

A modern experimental biologist would approach a study of the mind from a material perspective, such as by the study of the cells and tissue of brain matter. This approach is dependent on reduction of the complexity of a problem. An example is from economics, where an individual is generalized as a single type and consequently the broader theories of population behavior are based on this assumption [5]. There is a similar approach in Newtonian physics where an object’s spatial extent is simplified as a single point in space.

Since some natural phenomena are not tractable to mechanistic study, concepts exist that are not solely based on material and physical causes. However, it is necessary to base scientific theory of brain function on natural mechanisms while disallowing mental causation. There are exceptions where the physical world is visually indescribable and solely dependent on mathematical description, but these occurrences are typically not applicable to the investigation of life at the cellular level.

1.2. Mechanical Perspective of Cognition

Even though a mechanical perspective of neural systems is not controversial, there remains a non-mechanical and metaphysical perspective concerning our sensory perception of the world. An example is the philosophical conjecture about the relationship between the human mind and any simulation of it [6]. This conjecture is based on assumptions about intentionality and the act of thinking. However, others have presented scientific evidence where these assumptions do not hold true [7]. One example is the mechanism for an intent to move a body limb, such as in the act of walking. Whereas the traditional perspective expects a mental process of thinking to generate these body movements, the mechanistic perspective is that a neuronal cell is the generator of the intent of a body movement [8].

While a metaphysical explanation for phenomena is applicable to some areas of knowledge, such as in the study of ethics, these explanations are not informative about nature, where physical processes are expected. In the case of neural systems, the neurons, their connections, and the neural processes are measurable by their properties, so their phenomena are assignable to material causes instead of mental causes. Further, there is a hierarchy of cellular organization that describes the brain, in which each level of this hierarchy is associated with a particular scientific approach [9]. An example is at the cellular level, where the neurons are studied by the methods of cellular anatomy. This area of study also includes the mechanisms for neuron formation and communication between neurons.

Neural systems may be studied at a higher-level perspective, such as at the level of brain tissue or how information is communicated throughout the neural system [10]. The information processing of the brain is particularly relevant since it has a close analog with the artificial neural network architectures of computer science [11,12]. However, the lower levels of biological organization are not as comparable, such as where an artificial neural system is firmly based on an abstract and simplified concept of a neuronal cell and its synaptic functions.

1.3. A Scientific Definition of Cognition

Dictionaries commonly refer to cognition as a set of mental processes for acquiring knowledge [1][2]. However, this view originates from the assignment of mental processes to the act of thinking and is anchored in philosophical descriptions of the mind, including the concepts of consciousness and intentionality [1][3][4]. This also presumes that objects of nature are reflections of true and determined forms, and creates a division between the substances of matter and that of the mind.
Instead, a material description of cognition is restricted to the physical processes available to nature. An example is the study of primate face recognition, where the measurements of facial features serve as the basis of object recognition [5]. This perspective also excludes the concept that there is an innate and prior knowledge of objects, so cognition would form a representation of objects from their constituent parts [6][7]. Likewise, there is not an expectation that the physical processes of cognition are functionally deterministic.
The following sections on cognition focus on an informational perspective. For example, information flow as a physical process is a fundamental cause of cognition, so this scale of interest is insightful in forming expectations about cognitive processes. These expectations do not exclude the other levels of the biological hierarchy which yield an insight into brain function, such as in the action of individual neurons of regions in the primate brain and their effect on motor function [8][9].


Scientific work generally acknowledges a mechanical description of information and the physical processes as drivers of cognition. However, a perspective based on the duality of physical and mental processing is retained to a small degree in the academic world. For example, there is a conjecture about the relationship between the human mind and a simulation of it [10]. This idea is based on assumptions about intentionality and the act of thinking. In contrast with this view, a physical process of cognition is defined by the generation of an action in neuronal cells without dependence on non-material processes [11].
Another result of physical limits on cognition is observed in the intention of moving a body limb, such as a person reaching for an object across a table. Studies have replaced the assignment of intentionality with a material interpretation of this action, and have shown that the relevant neural activity occurs before awareness of the associated motor action [12].
Across the natural sciences, the neural system has been studied at various biological scales, including at the molecular level and at the higher level in the case of information processing [13][14]. At this higher-level perspective, the neural systems are functionally analogous to the deep learning models of computer science, as both are based on information and the flow of information [15][16]. This allows for a comparative approach for understanding cognitive processes. However, at the lower scale, the artificial neural system is dependent on an abstract model of neurons and the network, so here, the animal neural system is not likely comparable.
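
As an illustration of that abstraction, the sketch below implements the simplified concept of a neuronal cell that artificial neural systems build on: a unit that computes a weighted sum of its inputs and passes the result through a nonlinearity. It is a generic textbook reduction, not a model of any biological neuron, and the input values and weights are arbitrary.

import numpy as np

def artificial_neuron(inputs, weights, bias):
    """A minimal abstract neuron: a weighted sum passed through a nonlinearity.

    This is the simplified concept referred to in the text; it omits the
    electrochemical detail of a biological neuron entirely.
    """
    activation = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-activation))  # sigmoid squashing function

# Example with three synaptic inputs and arbitrary illustrative weights.
x = np.array([0.5, 0.1, 0.9])
w = np.array([0.4, -0.2, 0.7])
print(artificial_neuron(x, w, bias=-0.1))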

1.4. Scope of this Definition

The definition of cognition as used here is restricted to a set of mechanical processes [1]. Moreover, cognitive processing is described from a broad perspective, with some examples from the visual system along with insights from the deep learning approaches in computer science.
This is an informational perspective of cognition, since this scale has power in explaining the causes of knowledge. The other scales, both the large and the small, seem less tractable for constructing explanations of cognition. At the larger scale, if we consider the mental processes as occurrences of the mind, then the phenomena of cognition are subject to mere interpretation, guided by perception and impression, and are not restricted to the true designs and processes of nature. At the lower scale, modeling cognition is less tractable as an activity at the level of individual neuronal cells, given the complexity of the corresponding experiments. These notions on scale and perspective are pivotal in finding explanations for the phenomena of nature [17].
There are also insights from other perspectives, but the definition that follows is not a systematic review of studies of the mind or a broad survey of empirical knowledge across the cognitive sciences; instead, it is a narrow survey of the physical processes and the phenomena of information of higher cognition. This definition is also for a general audience and academic workers outside the science of cognition. However, within the practice of cognitive science, the technical terms may be defined in a different context, consistent with the stricter definitions as recommended by this entry [8][18].

1.5. Organization of Cognition as a Science

The following sections represent the categories of cognition and its processes. However, they do not reflect the true divisions in cognition, since the cognitive processes are not fully understood at a mechanistic level. Instead, the divisions are based on commonly used boundaries in thinking about cognition, such as in the division between sensory perception and higher reasoning regarding concepts.

The last section on conceptual knowledge is a synthesis of ideas from the previous sections, and serves the purpose of yielding an insight into the general properties of higher cognition. The overarching theme of the sections is that the neural network and its information flow are the foundation for a deeper understanding of the cognitive processes.

1.6. Definition of the Terminology

This entry uses terminology from science and engineering that requires further clarification. An example is a (mental) representation. In this case, a representation is commonly defined as information that corresponds to an idea or image. This is a particular case where there is reference to the mind, but this term is also a reminder that the origin of these phenomena is in the brain itself. They are encoded in the neural network of the brain.

Another term is “probabilistic”, as a description of a process. This refers to a process that is expected to vary and potentially lead to different outcomes. Representations are expected to occur by this process, so their properties will vary among individuals.

The reference to deep learning originates in the field of engineering. This is a many-layered neural network that is particularly suited for learning about the abovementioned representations. These artificial neural networks share a network-like organization with that of the brain in animals. The other terms and their use are expected to follow their commonly accepted meanings as found in a dictionary of scientific or common words. For example, the biosphere of the Earth refers to that portion of the planet shaped by biological and geological processes. These processes are dynamic, so they have changed over time and across the surface of the Earth.

Lastly, the informational processes have been described as physical processes because information flow is a phenomenon of the physical world. Therefore, matter and energy are required for this phenomenon to occur. The proximate mechanism of the flow of information in the brain is in the electrochemical dynamics that occur among neuronal cells, involving the movement of the ions of chemical elements that generate an electromotive force (voltage), and the diffusion of molecular-level neurotransmitters. The neural system is also influenced by humoral factors, such as the chemical messengers known as hormones.
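
These electrochemical dynamics are commonly abstracted in computational neuroscience as a leaky integrate-and-fire unit: the membrane voltage integrates an input current, leaks toward a resting value, and emits a spike when a threshold is crossed. The sketch below is that standard textbook abstraction, with illustrative parameter values, and is not a model proposed by this entry.

import numpy as np

def leaky_integrate_and_fire(current, dt=1.0, tau=20.0, v_rest=-65.0,
                             v_thresh=-50.0, v_reset=-70.0):
    """Simulate membrane voltage (mV) over a trace of input current.

    The voltage leaks toward rest, integrates the input, and resets after
    each threshold crossing (a spike).
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(current):
        v += (-(v - v_rest) + i_in) * (dt / tau)  # leak term plus input drive
        if v >= v_thresh:
            spikes.append(t * dt)                 # record the spike time
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

trace, spikes = leaky_integrate_and_fire(np.full(200, 20.0))
print(len(spikes), "spikes over 200 ms of constant input")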

2. Visual Cognition

2.1. Evolution and Probabilistic Processes

The processes of vision occupy about one-half of the cerebral cortex of the human brain [19]. Similar to the many sensory forms of language processing in humans, vision is a major source of input and recognition of the outside world. The complexity of the sensory systems reveals an important aspect of the evolutionary process, as observed across cellular life, along with their countless forms and novelties. Evolution depends on physical processes, such as mutation and population exponentiality, along with a dependence on geological time scales for building biological complexity, as observed at all scales of life. These effects have also formed and shaped the biosphere of the Earth.
This vast complexity across living organisms is revealed by deconstruction of the camera eye in animals. This novel form emerged over time from a simpler one, such as an eye spot, and depended on a sequence of adaptations over time [20][21]. These rare and unique events did not hinder the independent formation of the camera eye, as it occurs in both the lineage of vertebrates and the unrelated lineage of cephalopods. This is an example of evolution as a powerful generator of change in physical traits, although counterforces restrict evolution from searching across an infinite number of possible novelties, including constraints that are found in the genetic code and those of the physical processes that shape these traits.
The evolution of cognition and neural systems is expected to occur by a similar probabilistic process to that theorized in the origin and design of the camera eye. An alternative to this bottom-up design in nature is to suggest a set of non-probabilistic processes and a top-down design consistent with determinism. For the hypothesis of determinism in nature, there is an expectation of true and perfect forms, as Plato theorized, but this hypothesis is not favorable for descriptions of activity in the brain.
Therefore, with the probabilistic view of evolution and the force of natural selection, the neural systems are expected to show a large degree of optimality in their design, as observed across the other biological systems [22]—especially since neural systems co-adapt with the sensory systems. However, this optimality is also constrained by the limits of molecular, cellular, and population processes [23]. This is not an assertion that biological systems are perfectly optimal, but that they are reasonably efficient in their structure and function. This view is particularly supported by observations of anatomical features across vertebrate species, and their adaptations for specific environments, such as those observed in the skeletal design of whales versus horses.


2.2. Stochastic Processes in Biology

Vision is the better studied of the sensory systems in primates [13,14]. It is particularly relevant since the visual processes occupy one-half of the cerebral cortex [15]. There is theory from the cognitive sciences that both vision and language are the major drivers for acquiring knowledge and perception of the world. It may seem daunting to imagine that our vivid awareness of a scene is built upon levels of basic physical processes. However, cellular life has generated a high degree of complexity by layering physical processes, such as mutation and population exponentiality, over an evolutionary time scale.

This problem of causation of complex phenomena has occurred in explanations for the origin of the camera eye. The formation of a camera eye that has transformed from a simpler organ, such as an eye spot, requires a model with a very large number of advantageous modifications over time [16,17]. A casual observer of the different forms of eyes, such as in this case, would find it difficult to imagine a material process that could design a functional camera eye from a simpler form. The experienced observer would instead invoke biological processes, such as random morphological change [17] and selection for those changes that favor an increase in the rate of offspring production. The result is the potential for a complex adaptation.

Further evidence that the formation of a camera eye is within the reach of natural processes is provided by the analogous camera eye in a lineage of invertebrate cephalopods. This resulted from an adaptation that occurred independently of the origin of the vertebrate camera eye. Yet another case of Darwinian evolution is in the optimized refractive index of the camera eye lens. This adaptation occurred by modifications that led to recruitment of protein molecules from other uses to the lens of the eye [18].

There is another case of independent evolution as observed in the neural circuitry of animals. The circuit for motion detection in the visual field has converged on a similar design in two different eye forms, both the invertebrate compound eye and the mammalian camera eye [19]. These examples show evolutionary convergence on a similar physical design and evolution's potential for forming complex biological systems. In addition, the process of evolutionary convergence is dependent on developmental constraint on the kinds of modifications; otherwise, the chance of convergence on a single design is expectedly low.

These are all examples of natural engineering of life forms by stochastic processes. They are not deterministic processes since they are not directed toward a final goal, but instead the adaptations are continually undergoing change by genetic and phenotypic causes.

The neural system of the brain is a direct product of the above processes. The organ is considered highly complex, and our perceptions are not easily translated to cellular-level mechanisms. However, by the same probabilistic processes, the neurons and their interconnections have evolved into a cognitive system that is capable of complex computation with large amounts of sensory data. These cognitive processes include the identification of visual objects, encoding of sensory data to an efficient format, and pattern matching of visual objects to memory.

2.3. Abstract Encoding of Sensory Input

The biologically plausible proximate mechanism of cognition originates from the receipt of high-dimensional information from the outside world. In the case of vision, the sensory data consist of reflected light rays that are absorbed across a two-dimensional surface, the retinal cells of the eye. These light rays range across the electromagnetic spectrum, but the retinal cells are specific to a small subset of all possible light rays.

From an abstract perspective, the surface that receives the visual input is a two-dimensional sheet of cells where each cell has an activation value at a point in time (Figure 1). Over a length of time, the distribution of these activations is undergoing change, so the neural system is reporting from a dynamic state of activations. This view at the visual surface is representative of both the spatial and temporal components of the proximate cause of vision.

Figure 1 shows the above view, in abstract form, as a sheet of neuronal cells that receive sensory input from the outside world. The input is processed by cell surface receptors and communicated downstream for neural system processing. The sensory neurons and their receptors can be imagined as a set of activation values that are undergoing change over time, and abstractly described as a dynamic system, in which change occurs in the dimensions of space and time.

Figure 1. An abstract representation of information that is received by a sensory organ, such as the light rays absorbed by neuronal cells across the retinal surface of the camera eye. The drawing shows the spatial pattern, but there is also a temporal dimension, since the sensory input is changing over time [3].
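
In computational terms, the abstraction in Figure 1 is a two-dimensional array of activation values that is updated at each time step. The following sketch is a toy version of that idea; the update rule is arbitrary and serves only to make the state change over time.

import numpy as np

rng = np.random.default_rng(0)

# A 32 x 32 "retinal sheet": each cell holds an activation value in [0, 1].
sheet = rng.random((32, 32))

def step(sheet, drive):
    """One time step: activations decay and are driven by new sensory input."""
    return np.clip(0.9 * sheet + 0.1 * drive, 0.0, 1.0)

for t in range(100):
    drive = rng.random((32, 32))  # stand-in for the incoming light pattern
    sheet = step(sheet, drive)

print(sheet.shape, float(sheet.mean()))  # the dynamic state after 100 steps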

The information processing of the sensory organs is tractable for scientific study, but the downstream cognitive processes are less understood at a mechanistic level. The cognitive processes include the generalizing of knowledge, also referred to as transfer learning, which is a higher level of organization than that constructed from the sensory input [7][24][25]. Transfer learning is dependent on segmentation (division) of the sensory world and identification of sensory objects (such as visual or auditory) with resistance to variation in viewpoint or perspective (Figure 2) [26].


This representation of sensory data is similar to that received by artificial neural network systems. These artificial systems are capable of identifying objects in a visual scene and labeling them by their membership in a category of related objects. This also shows analogous function between the artificial process and natural cognition [20].

The open problem has been generalizing this knowledge (transfer learning) that is acquired from processing sensory input data. This is the essential problem for artificial systems in emulating cognition in animals. However, there is recent work that employs artificial models of transfer learning [21,22].

A related problem is in identifying an object where the viewpoint is variable. It is addressed by a model [3] that is designed for biological realism, along with a robust architecture for sampling the parts of an object. This approach includes the sampling of visual data which are then encoded in an abstract format, a vector of number values. Specifically, this sampling occurs across blocks of columns in a visual scene. Further, each column consists of a set of vectors where each vector is assigned to a discrete category by its level of representation of the input data (Figure 2). These processed data are then utilized for finding columns of similarity that correspond to the parts of an object, a consensus-based approach toward establishing a robust identification of an object.



Figure 2. A model for processing of visual objects. The first panel shows a visual scene. The next panel shows an open circle which represents a region with a potential object. The third panel is an enlargement of this region. The final panel contains three open diagonal shapes that are abstract representations of the information in the image. They are ordered from bottom to top by low to high level of abstraction.

Previous approaches to artificial systems have often overfit the network model to a training data set. Overfitting hinders the generalizability of the final model [23]—in this case, the model is a network of nodes interconnected with weight values. The overfitting problem leads to loss of transferability of the model to other applications. Nature solves this problem by a set of processes. One is the visual processing for spatial and temporal invariance of an object in a scene [24,25]. This leads to a more generalized form of the object than otherwise.

A second and complementary method is to neurally code the object by metrics that are abstract and generalizable. This reflects the example where a photograph of a cat is encoded so that it matches to both another photograph and a pencil sketch of the cat. This generalizability in identifying objects is now possible in the case of artificial systems [26]. Additionally, this generalizability leads to corrections for the variability in an object’s form, such as change in its orientation, deobfuscation against the background, or detection based on a partial view (Figure 3).

Figure 3. (a) The first panel shows a photograph of a visual scene that contains a table along with other objects; the second panel is the same scene but transformed so that it appears as a pencil sketch drawing. (b) The first panel is a drawing of the digit nine (9), while the next panel is the same digit as transformed by rotation [3].
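
A common engineering device for the invariance described above is training-time augmentation: the same image is presented under rotations or other transformations so that the learned representation generalizes across viewpoints, as in the rotated digit of Figure 3b. The sketch below shows the idea in a minimal form; it is a generic illustration, not the method of the cited model.

import numpy as np

def augment_with_rotations(image):
    """Return the image plus its three 90-degree rotations.

    Training on all variants pushes a model toward orientation invariance.
    """
    return [np.rot90(image, k) for k in range(4)]

digit = np.zeros((5, 5))
digit[0, 1:4] = 1.0   # strokes of a crude digit "9" (illustrative only)
digit[0:3, 3] = 1.0
digit[2, 1:4] = 1.0
digit[2:5, 3] = 1.0
variants = augment_with_rotations(digit)
print(len(variants), "training variants from one example")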

In computer science, there is a model [6] designed for the segmentation and robust recognition of objects. This approach includes sampling of the sensory input, the identification of the parts of sensory objects, and encoding of the information in an abstract form for presentation to the downstream neural processes. The encoding scheme is expected to include a set of discrete representational levels of unlabeled (unidentified) objects and then uses a probabilistic approach for matching these representations to known objects in the memory. Without the potential for a labeled memory that describes an object, there is no opportunity for knowledge of the object or a basis for knowledge in general.
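
A minimal sketch of the matching scheme described above, under invented data: each "column" of the model holds an abstract feature vector for part of the object, each column votes for its best-matching prototype in memory, and the consensus across columns identifies the object. This is a schematic reading of such a model, not the cited implementation.

import numpy as np

rng = np.random.default_rng(1)
n_columns, dim, n_objects = 8, 16, 5

memory = rng.normal(size=(n_objects, dim))  # stored object prototypes
# Hypothetical input: noisy views of object 2, one vector per column.
columns = memory[2] + 0.3 * rng.normal(size=(n_columns, dim))

def consensus_identify(columns, memory):
    """Each column votes for its best-matching prototype; the majority wins."""
    votes = [int(np.argmax(memory @ c)) for c in columns]
    return max(set(votes), key=votes.count), votes

obj, votes = consensus_identify(columns, memory)
print("votes per column:", votes, "-> consensus object:", obj)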
Information is the proximate cause of cognition, and the laws of thermodynamics determine how information flows in any physical system, whether in a biological context or an artificial analog [27]. At other spatial scales, the physical processes in the brain are not homologous with an artificial neural network, such as at the level of neurons, where the intricacies of cellular processes are not shared with an artificial one. However, our history is filled with examples of engineers replicating the large-scale designs of nature, including lakes, bridges, and the construction of underwater vessels. The designs are similar at a physical scale because both natural and artificial forms are constrained by physical processes.

3. General Cognition

3.1. Algorithmic Description

Experts have investigated the question of whether an algorithm can explain brain computation [28]. They concluded that this is an unsolved problem, even though natural processes are inherently representable by a quantitative model. However, information flow in the brain is a product of a non-linear dynamical system, a complex phenomenon that is analogous to the physics of fluid flow, a complexity that may exceed the limits of computational work. Similarly, these systems are highly complex and not easily mirrored by simple mathematical descriptions [28][29]. Experts recommend an empirical approach for disentangling these kinds of complex systems, since they are not considered very tractable at a theoretical level.
An artificial neural system, such as in the deep learning architectures, has strong potential for testing hypotheses on higher cognition. The reason is that engineered systems are built from parts and relationships that are known, whereas in nature, the origin and history of the system is obscured by time and a large number of events; in this case, acquiring scientific knowledge likely requires extensive experimentation that is often confounded with error, including from sources that are known and unknown.

3.2. Encoding of Knowledge

It is possible to hypothesize about a model of object representation in the brain and its artificial analog in the deep learning systems. First, these cognitive systems are expected to encode objects by their parts, the basic elements of an object [5][6][7]. Second, it is expected that the process is stochastic, a probabilistic process, as in all other natural processes.
The neural network system is, in its essence, a programmable system [30], encoded with weight values along the connections in the network and activation values at the nodes. It is expected that the brain functions analogously at the level of information processing, since these systems are both based on non-linear dynamic principles of an interconnected network of nodes and a distribution of the representations of objects [7][28][31][32]. Furthermore, the encoding schemes in the network are likely to be abstract and generated by probabilistic processes.
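
The sense in which such a network is "programmable" can be made concrete: the program is the set of weight values on the connections, and running it produces activation values at the nodes. The sketch below is a generic two-layer network with arbitrary weights, not a model of any particular circuit.

import numpy as np

rng = np.random.default_rng(42)

# The "program": weight values on the connections between node layers.
W1 = rng.normal(scale=0.5, size=(4, 8))  # input nodes -> hidden nodes
W2 = rng.normal(scale=0.5, size=(8, 3))  # hidden nodes -> output nodes

def forward(x):
    """Running the program: weighted connections produce node activations."""
    hidden = np.tanh(x @ W1)             # non-linear activations at the nodes
    return np.tanh(hidden @ W2)          # a distributed output representation

x = rng.normal(size=4)                   # an arbitrary input pattern
print(forward(x))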
Moreover, a physical interpretation of cognition requires the matching of patterns for the generalization of knowledge. This is consistent with a view of cognition as a statistical machine with a reliance on sampling for robust information processing [33]. With advancement in the deep learning methods, such as the invention of the transformer architecture [7][34][35], it is possible to sample and search for exceedingly complex patterns in a sequence of information, including in the case of object detection across a visual scene [36]. This sampling of the world occurs across the sensory modalities, such as those in vision and hearing, which are the sources of information for processing and constructing the internal representations [37].
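
The pattern searching attributed to the transformer above rests on one core operation, scaled dot-product attention, in which every position of a sequence is scored against every other and information is mixed according to those scores. The sketch below implements that operation with arbitrary shapes and random data.

import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, the core transformer operation."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])         # pairwise pattern matching
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                              # mix values by relevance

rng = np.random.default_rng(0)
seq_len, d = 6, 8                                   # 6 tokens, 8-dim features
Q, K, V = (rng.normal(size=(seq_len, d)) for _ in range(3))
print(attention(Q, K, V).shape)                     # -> (6, 8)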

3.3. Representation of Common-Sense Concepts

Microsoft Research released a deep learning method based on the transformer architecture, along with the inclusion of curated and structured data, to achieve some degree of parity with people in common-sense reasoning [38]. Their example of this kind of reasoning is a question about what people do while playing a guitar. The common-sense answer is for people to sing. This association is not a naive one, since the concept of singing is not a property of a guitar. Their achievement of parity with people was possible by the addition of the curated and structured data.
Their finding showed that an online corpus by itself is insufficient for a full knowledge of concepts. The conventional transformer architecture is dependent on and limited by the information inherent in a sequence of data for downstream representation of conceptual knowledge. In their case, the missing component was the curation and structure in the data, and the results showed a competitive capability for building concepts from representations as derived from input data.
The use of a large sample of representations that correspond to an abstract or non-abstract object or an event is expected to further increase robustness in a model of higher cognition [39]. Our knowledge of concepts is expected to form in the same manner. If there are incomplete or missing parts of a concept, then a person will have difficulty in forming the whole concept and applying it during problem solving.

3.4. Future Directions in Cognitive Science

3.4.1. Dynamics of Cognition

Is higher cognition as interpretable as a deep learning system? This question arises from the difficulty of disentangling the mechanisms of an animal neural system, whereas it is possible to record the changing states of an artificial system, since its underlying design is known. If the artificial system is analogous, then it is possible to gain insight into the natural forms of cognition [7][40]. However, the assumption for this analogy may not hold. For example, it is known that the mammalian brain is highly dynamic, such as in the rates of sensory input and the downstream activation of internal representations [28]. These dynamic properties are not easily modeled in deep learning systems, a constraint of hardware design and efficiency [28]. This has been an impediment to the design of an artificial system that is approximate to higher cognition, although there are concepts for modeling these dynamics, such as an architecture that includes “fast weights” and provides a form of true recursion across a neural network [7][28]. This allows for a self-referential system that can continue to adapt to new experience. Recently, there have been studies on this architecture to address the performance problem [41][42].
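
The "fast weights" concept named above can be sketched as a second weight matrix that is rewritten on the fly by recent activity, giving the network a short-term, self-referential memory alongside its slowly learned weights. The constants and the Hebbian-style update below are illustrative simplifications, not the cited architecture.

import numpy as np

rng = np.random.default_rng(3)
dim = 16
W_slow = rng.normal(scale=0.1, size=(dim, dim))  # learned over long timescales
W_fast = np.zeros((dim, dim))                    # rewritten by recent activity
decay, write = 0.9, 0.1

h = np.zeros(dim)
for t in range(20):
    x = rng.normal(size=dim)                     # new experience at each step
    h = np.tanh(W_slow @ x + W_fast @ h)         # state uses both weight sets
    W_fast = decay * W_fast + write * np.outer(h, h)  # fast Hebbian-style update

print(float(np.linalg.norm(W_fast)))  # the fast weights now encode recent history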
The artificial neural networks continue to scale in size and efficiency. This work has been accompanied by empirical approaches for exploring the sources of error in these systems, and this effort is dependent on a thorough understanding about the construction of the models. One avenue for increasing the robustness in the output is by combining many sources of sensory data, such as from the visual domain and senses that are associated with the natural language domains, where the communication of language is not restricted to the written form. Another approach is to establish unbiased measures of the reliability of model output [28][36]. Likewise, information processing in animals is not resistant to bias, such as in human cognition, where there are well-documented biases in speech perception [43].
These approaches are a foundation for emulating the modularity and breadth of function in higher cognition. For achieving this aim, meta-learning methods can create a formal, modular [44], and structured framework for combining disparate sources of data. This scalable approach would lead to building complex information systems and reflect the higher cognitive processes [45][46].

3.4.2. Generalization of Knowledge

Another area of interest is the property of generalization in a model of higher cognition. This property may be better understood by a study of the processes that form the internal representations from sensory input [6][47][48]. Further, in an abstract context, generalizability is based on the premise that information on the outside world is compressible, such as in its repeatability of the patterns of sensory information, so that it is possible for any system to classify objects and therefore obtain knowledge of the world.
There is also the question of how to reuse knowledge outside the environment where it is learned, “being able to factorize knowledge into pieces which can easily be recombined in a sequence of computational steps, and being able to manipulate abstract variables, types, and instances” [7]. Therefore, it is relevant to have a model of cognition that includes the higher-level representations based on the parts of objects, whether derived from sensory input or internal to the neural network. However, the dynamic and various states of the internal representations are also contributors to the processes of higher reasoning.

3.4.3. Embodiment in Cognition

Lastly, there is uncertainty on the dependence of cognition on the outside world. This dependence has been characterized as the phenomenon of embodiment, i.e., that the occurrence of cognition is dependent on an animal or similar form, so the natural form of cognition is also an embodied cognition, even in the case where the world is a machine simulation [28][49][50]. In essence, this is a property of a robotic and mechanical system, where its functions are fully dependent on specific input and output from the world. Although a natural system receives input, produces output, and learns at a time scale constrained by the physical world, an artificial system is not as constrained, such as in the case of reinforcement learning [50][51][52], a method that can also reconstruct sensorimotor function in animals. Moreover, an artificial system is not restricted to a single bodily form in its functions.
Deepmind [50] developed artificial agents in a three-dimensional space that learn in a continually changing world. The method uses a deep reinforcement learning method in conjunction with dynamic generation of environments that lead to the unique arrangement of each world. Each of the worlds contains artificial agents that learn to handle tasks and receive rewards for completing specific objectives. An agent observes a pixel image of an environment along with receiving a “text description of their goal” [50]. Task experience is sufficiently generalizable that the agents are capable of adapting to tasks that are not yet known from prior experience. This reflects an animal that is embodied in a world and is learning interactively by the performance of physical tasks. It is known that animals navigate and learn from the world around them, so the above approach is a meaningful experiment within a virtual world. However, the approach is fragile for tasks outside of its distribution of prior learned experiences.
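
In its simplest form, the observe-act-reward loop described above reduces to tabular reinforcement learning. The sketch below runs Q-learning on an invented corridor world, which is far smaller than the cited work but exercises the same cycle of acting in an environment and learning from reward.

import numpy as np

rng = np.random.default_rng(7)
n_states, n_actions = 5, 2          # a corridor of 5 cells; actions: left, right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

for episode in range(200):
    s = 0                           # the agent starts at the left end
    while s != n_states - 1:        # the reward waits at the right end
        greedy = int(np.argmax(Q[s]))
        a = rng.integers(n_actions) if rng.random() < eps else greedy
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])  # TD update
        s = s_next

print(np.argmax(Q, axis=1))  # learned actions per state (1 = move right)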

4. Abstract Reasoning

4.1. Abstract Reasoning as a Cognitive Process

Abstract reasoning is often associated with a process of thought, but the elements of the process are ideally represented as physical processes. This restriction constrains explanations of the emergence of abstract reasoning, as in the formation of new concepts in an abstract world. Moreover, a process of abstract reasoning may be compared against the more intuitive forms of cognition as found in vision and speech perception. Without sensory input, the layers of the neural system are not expected to encode new information by a pathway, as is expected in the recognition of visual objects. Therefore, it is expected that any information system is dependent on an external input for learning, an essential process for the formation of experiential knowledge.
It follows that abstract reasoning is formed from an input source as received by the neural system. If there is no input that is relevant to a pathway of abstract reasoning, then the system is not expected to encode that pathway. This also leads to the question of whether abstract reasoning is composed of one or more pathways, and of the contribution of other unrelated pathways in cognition. It is probable that there is no sharp division between abstract reasoning and the other types of reasoning, and it is likely that there is more than one pathway of abstract reasoning, as exemplified in the case of solving puzzles that require the manipulation of objects in the visual world.
Another hypothesis is on whether the main source of abstract objects is the internal representations. If true, then a model of abstract reasoning would involve the true forms of abstract objects, in contrast to the recognition of an object by reconstruction from sensory input in the neural network system.
Since abstract reasoning is dependent on an input source, there is an expectation that deep learning methods modeling the non-linear dynamics are sufficient to model one or more pathways involved in abstract reasoning. This reasoning involves the recognition of objects that are not necessarily sensory objects with definable properties and relationships. As with the training process to learn sensory objects, it is expected that there is a training process to learn about the forms and properties of abstract objects. This class of problem is of interest, since the universe of abstract objects is boundless, and their properties and interrelationships are not constrained by the essential limits of the physical world.

4.2. Models of Abstract Reasoning

A model of higher cognition includes abstract reasoning [7]. This is a pathway or pathways that are expected to learn the higher-level representations of sensory objects, such as from vision or hearing, and for which the input is processed and generative of a generalizable rule set. These may include a single rule or a sequence of rules. One model is for the deep learning system to learn the rule set, such as in the case of puzzles solvable by a logical operation [53]. This is likely the basis for a person playing a chess game by memorizing prior patterns of information and events on the game board, which lead to general knowledge of the game system as a kind of world model.
Similarly, another kind of visual puzzle is the Rubik’s Cube. However, in this case, the final state of the puzzle is known, where each face of the cube will share a single and unique color. Likewise, if there is a detectable rule set, then there must be patterns of information that allow the construction of a generalized rule set.
The pathway to a solution can include the repeated testing of potential rule sets against an intermediate or final state of the puzzle. This iterative process may be approached by a heuristic search algorithm [7]. However, these puzzles are typically low-dimensional as compared with abstract verbal problems, as in inductive reasoning. The acquisition of rule sets for verbal reasoning requires a search for patterns in a higher-dimensional space. In either of these cases of pattern searching, whether complex or simple, the search is dependent on the detection of patterns that represent a set of rules.
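
The repeated testing of candidate rule sequences against a goal state is, at bottom, a search problem. The sketch below applies a breadth-first search, a simple stand-in for the heuristic search named above, to an invented toy puzzle: transform one number into another with two allowed operations.

from collections import deque

def solve(start, goal, rules):
    """Breadth-first search for a sequence of rules that maps start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path                      # the discovered rule sequence
        for name, rule in rules.items():
            nxt = rule(state)
            if 0 <= nxt <= 100 and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None                              # no rule sequence reaches the goal

rules = {"double": lambda n: 2 * n, "add_one": lambda n: n + 1}
print(solve(3, 35, rules))                   # a shortest sequence of operations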
It is simpler to imagine a logical operation as the pattern that offers a solution, but it is expected that inductive reasoning involves higher-dimensional representations than a simple operator that combines Boolean values. It is also probable that these representations are dynamic, so there is potential to sample from a set of many valid representations.

4.3. Future Directions in Abstract Reasoning

4.3.1. Embodiment in a Virtual and Abstract World

While the phenomenon of embodiment refers to an occupant of the three-dimensional world, this is not necessarily a complete model for reasoning on abstract concepts. However, it is plausible that at least some abstract concepts are solvable in a virtual three-dimensional world. Similarly, Deepmind showed a solution to visual problems across a generated set of three-dimensional worlds [50].
A population and distribution of tasks are also elements in Deepmind’s approach. They show that learning a task distribution leads to knowledge for solving tasks outside the prior task distribution [50][51]. This leads to the potential for generalizability in solving tasks, along with the promise that increased complexity across the worlds would lead to further expansion in the knowledge of tasks.
However, the problem of abstract concepts extends beyond the conventional sensory representations as formed by higher cognition. Examples include visual puzzles with solutions that are abstract and require the association of patterns that extend beyond the visual realm, along with the symbolic representations from the areas of mathematics [54][55].
By combining these two approaches, it is possible to construct a world that is not a reflection of the three-dimensional space as inhabited by animals, but to construct a virtual world of abstract objects and sets of tasks instead [51]. The visual and symbolic puzzles, such as in the case of chess and related boardgames [52], are solvable by deep learning approaches, but the machine reasoning is not generalized across a space of abstract environments and objects.
The question is whether the abstract patterns used to solve chess are also useful in solving other kinds of puzzles. It seems a valid hypothesis that there is at least some overlap in the use of abstract reasoning between these visual puzzles and the synthesis of knowledge from other abstract objects and their interactions [50], such as in solving problems by the use of mathematical symbols and their operators [55][56]. Since humans are capable of abstract thought, it is plausible that the generation of a distribution of general abstract tasks would lead to a working system for solving a wider set of abstract problems [57].
If, instead of a dynamic generation of three-dimensional worlds and objects, there is a vast and dynamic generation of abstract puzzles, for example, then the deep reinforcement learning approach could be trained on solving these problems and acquiring knowledge of these tasks [50]. The question is whether the distribution of these applicable tasks is generalizable to an unknown set of problems (those unrelated to the original task distribution), and the compressibility of the space of tasks. This hypothesis is further supported by a recent study [57].

4.3.2. Reinforcement Learning and Generalizability

Google Research showed that an unmodified reinforcement learning approach is not necessarily robust for acquiring knowledge of tasks outside the trained task distribution [51]. Therefore, they introduced an approach that incorporates a measurement of similarity among worlds that are generated by a reinforcement learning procedure. This similarity measure is estimated by behavioral similarity, corresponding to the salient features by which an agent finds success in any given world. Given that these salient features are shared among the worlds, the agents have a path for generalizing knowledge for success in worlds outside their experience. Procedurally, the salient features are acquired by a contrastive learning procedure, i.e., a method for unlabeled clustering of samples, and the resulting values of behavioral similarity are embedded in the neural network itself [58].
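
A minimal sketch of a contrastive objective of the kind referred to above, using synthetic data: embeddings of behaviorally similar pairs are pulled together while all other pairings are pushed apart. The form below follows the common InfoNCE loss and is not the cited paper's exact procedure.

import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Contrastive loss: each anchor should match its own positive (a
    behaviorally similar state) better than every other candidate."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature               # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))     # matched pairs on the diagonal

rng = np.random.default_rng(5)
states = rng.normal(size=(8, 32))                  # hypothetical state embeddings
similar = states + 0.05 * rng.normal(size=(8, 32)) # behaviorally similar views
print(info_nce_loss(states, similar))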
This reinforcement learning approach is dependent on both a deep learning framework and an input data source. The source of input is typically in a two- or three-dimensional artificial environment where an artificial agent learns to accomplish tasks within the confines of the worlds and their rules [50][51]. One approach is to represent the salient features of tasks and the worlds in a neural network. As Google Research showed [51], the process requires an additional step in extracting the salient information for creating better models of the tasks and worlds. They found that this method was more robust in the generalization of tasks. Similarly, in higher cognition, it is expected that the salient features used to generalize tasks are stored in the neuronal network.
Therefore, a naive input of visual data from a two-dimensional environment is not an efficient means of coding tasks that consistently generalize across environments. To capture the high-dimensional information in a set of related tasks, Google Research extended the reinforcement learning approach to better capture the task distribution [51], and it may be possible to mimic this approach by similar methods. These task distributions provide structured data for representing the dynamics of tasks among worlds, and therefore generalize and encode the high-dimensional and dynamic features in a low-dimensional form.
It is difficult to imagine the relationship between two different environments. The game of checkers and that of chess appear as different game systems. Encoding the dynamics of each of these in a deep learning framework may show that they relate in an unintuitive and abstract way [50]. This concept is expressed in the article cited above [51], indicating that short paths of a larger pathway may provide the salient and generalizable features. In the case of boardgames, the salient features may not correspond to a naive perception of visual relatedness. Likewise, our natural form of abstract reasoning shows that patterns are captured in these boardgames, and these patterns are not entirely recognized by a single rule set at the level of our awareness, but, instead, are likely represented at a high-dimensional level in the neural network itself.
For emulation of a process of reasoning, extracting the salient features from a pixel image is a complex problem, and the pathway may involve many sources of error. Converting images to a low-dimensional form, particularly for the salient subtasks, allows for a greater expectation of the generalization and repeatability in the patterns of objects and events. Where it is difficult to extract the salient features of a system, it is possible to translate and reduce the objects and events in the system to text-based descriptors, a process that has been studied and lends itself to interpretation [57][58][59][60].
Lastly, since the higher cognitive processes involve the widespread use of dynamic representations, it is plausible that the tasks are not merely generalizable but may originate in the varied sensory and memory systems. Therefore, the tasks would be expressed by the different sensory forms, although the low-dimensional representations are more generalizable, providing a better substrate for the recognition of patterns, and are essential for a process of abstract reasoning.

5. Conceptual Knowledge

5.1. Knowledge by Pattern Combination and Recognition

In the 18th century, the philosopher Immanuel Kant suggested that the synthesis of prior knowledge leads to new knowledge [61]. This theory of knowledge extended the concept of objects from a set of perfect forms to a recombination of forms, leading to a boundless number of mental representations. This was the missing concept to explain the act of knowing. Therefore, the forces of knowledge were no longer dependent on description outside the realm of matter, or on hypotheses based on an unbounded complexity of material interactions.
It is possible to divide these objects and forms of knowledge into two categories: sensory and abstract. The sensory objects are ideally constructed from sensory input, even though this assumption is not universal. Instead, perception may refer to the construction of these sensory objects, along with any error occurring in their associated pathways. In comparison, the abstract object is ideally a true form. An ideal example is a mathematical symbol, such as an operator for the addition of numbers [55]. However, an abstract object may coincide with sensory objects, such as an animal and its taxonomic relationship to other forms of animals.
Therefore, one hypothesis is that the objects of knowledge are instead a single category, but that the input used to form the object is from at least two sources, including sensations from the outside world and the representation of objects as stored in the memory.
A hypothetical example is from chess. A person is not able to calculate each game piece and position given all events on the board. Instead, the decision-making is largely dependent on boardgame patterns with respect to the pieces and positions. However, the set of observable patterns, as compared with all possible patterns, is strongly bounded. One solution is in the hypothesis that the patterns also exist as internal representations that are synthesized and formed into new patterns not yet observed. Evidence for this hypothesis is in the predictive coding of sensory input, namely that this compensatory action allows a person to perceive elements of a visual scene or speech a short time prior to its occurrence [33]. This same predictive coding pathway may apply to internal representations, such as in chess gameboard patterns, and the ability to recombine prior objects of knowledge. The process of creating new forms and patterns would allow a person to greatly expand upon the number of observable patterns in a world.
To summarize, the process of predictive coding of sensory information should also apply to the reformation of internal representations. This is a force of recombination that is expected to lead to a very large number of forms in the memory, and is used for the detection of objects and forms that have not yet been observed. Knowledge by synthesis of priors has the potential to generate a multitude of forms that are consistent with the extent of human thought. In this case, the cognitive ether of immeasurability or incomputability is not necessary for explaining higher cognition and its processes.
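
Predictive coding, as invoked above, can be sketched as a loop in which an internal estimate predicts the next sensory sample and is corrected in proportion to the prediction error. The signal and constants below are arbitrary; this is the textbook scheme rather than any specific neural model.

import numpy as np

rng = np.random.default_rng(2)
# A slowly drifting sensory signal (arbitrary, for illustration).
signal = 1.0 + np.cumsum(0.02 * rng.normal(size=200))

estimate, rate = 0.0, 0.2
errors = []
for sample in signal:
    error = sample - estimate     # prediction error against the senses
    estimate += rate * error      # correct the internal representation
    errors.append(abs(error))

# Errors are large while the representation forms and small once it predicts.
print(round(float(np.mean(errors[:20])), 3), round(float(np.mean(errors[-20:])), 3))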

5.2. Models of Generalized Knowledge

Evidence is mounting in support of a deep learning model, namely the transformer, for sampling data and constructing high-dimensional representations [35][46][55][57][58][60][62][63][64]. A study by Google Research employed a decision transformer architecture to show transfer learning in tasks that occurred in a fixed and controlled setting (Atari) [58]. This work supports the concept that generalized patterns occur in an environment with the potential for resampling those patterns in other environments. The experimental control of the environmental properties is somewhat analogous to the cognitive processes that originate in a single embodied source [49][50]. Altogether, the sampling of patterns is from the population of all possible patterns that occur in the system. A sufficiently large sample of tasks is expected to lead to knowledge of the system. The system may be thought of as a physical system; in this case, it is a visual space of two dimensions.
In another study, Deepmind questioned whether task-based learning can occur across multiple embodied sources, such as the patterns derived from torque-based tasks (a robot arm) and those from a set of captioned images [57]. Their results showed evidence of transfer learning across heterogeneous sources, and indicated that their model is expected to scale in power with an increase in data and model size.
These studies are complemented by Chan and others [62]. This insightful work showed convincing evidence of the superior performance of the transformer architecture in handling a sequence data model. They further revealed the importance of distributional qualities and dynamics in the training dataset, and its relationship to the properties of natural language data [62].
These computational studies provide evidence that model performance continues to scale with model size [57][58]. These models for generalized task learning occur in a particular setting. It is possible to consider the setting as a physical system, such as in a particular simulation or in our physical world [50][64][65][66]. With a robust sampling of tasks in a controlled physical system, it is possible to learn the system and transfer the knowledge of tasks from the known to those unknown [50][64][65][66]. This is a form of pattern sampling that is robust in its representation of the population of all patterns that occur in a system. Deepmind has searched for these patterns in a system by deep reinforcement learning while optimizing the approach by simultaneously searching for the shortest path toward learning the system [65]. This method is, in essence, learning a world model and forming a base set of cognitive processes for downstream use.
Since images with text descriptors lead to generalized task learning [64], video with text descriptors [63] is expected to enhance the model with a temporal dimension, and reflect tasks that are dynamic in time [66]. OpenAI developed a deep learning method that receives input as video data, but with a minimal number of associated text labels, and is as capable as a person in learning tasks and modeling a world (Minecraft) [66]. There is also a question on the difference between simple and complex tasks. However, the tasks may be decomposed into their parts and patterns, although OpenAI's reinforcement learning system achieves this aim without prior identification of these patterns [66].

5.3. Knowledge as a Physical Process

The informational processes in machine systems are analogous to those in the brain. They are both systems constrained by the physical world and its rules. While the machine systems are tractable for empirical studies of information flow and the acquisition of knowledge, the biological systems are not nearly as tractable. There is limited understanding of the specifics of how neurons work and how they organize to encode information, such as in memory formation in mammals [18].
For example, biology is dependent on the instruments of neurobiology for capturing the dynamics of a single neuronal cell’s activity, while a quantifiable behavior or action is observed over time [8][9][67]. The biological studies also require extensive experimentation for verification and insight; otherwise, a single experiment or a few experiments will result in unsupported interpretations and conclusions on the dynamic pathway or pathways of the neural system [68][69]. As in the history of studies in ecology, phenomenological approaches are better replaced by those built on theory and quantitative models, along with prudence in forming robust hypotheses, not mere questions, in the natural sciences. This problem is not restricted to natural science, but also applies to the science of engineering [35].
