
Higher Cognition: A Mechanical Perspective

Cognition is the acquisition of knowledge by the mechanical process of information flow in a system. In animal cognition, input is received by the various sensory modalities, and the output may occur as a motor or other action. The sensory information is internally transformed to a set of representations, which is the basis for downstream cognitive processing. This is in contrast to the traditional definition based on mental processes, a phenomenon of the mind that originates in past ideas of philosophy and a metaphysical description.

  • animal cognition
  • cognitive processes
  • physical processes
  • mental processes

1. Introduction

1.1. A Scientific Definition of Cognition

Dictionaries commonly refer to cognition as a set of mental processes for acquiring knowledge [1][2]. However, this view originates from the assignment of mental processes to the act of thinking and is anchored in philosophical descriptions of the mind, including the concepts of consciousness and intentionality [1][3]. This also presumes that objects of nature are reflections of true and determined forms, and creates a division between the substances of matter and that of the mind [4].

Instead, a material description of cognition is restricted to the physical processes available to nature. An example is the study of primate face recognition, where the measurements of facial features serve as the basis of object recognition [5]. This perspective also excludes the concept that there is an innate and prior knowledge of objects; cognition would instead form a representation of objects from their constituent parts. Therefore, the physical processes of cognition are probabilistic in nature, since a specific object may vary in its parts.

1.2. Mechanical Perspective of Cognition

Scientific work generally acknowledges a mechanical description of information and the physical processes as drivers of cognition. However, a perspective based on the duality of physical and mental processing is retained to a small degree in the academic world. For example, there is a conjecture about the relationship between the human mind and a simulation of it [10]. This idea is based on assumptions about intentionality and the act of thinking. In contrast with this view, a physical process of cognition is defined by the generation of an action in neuronal cells without dependence on non-material processes [11]. Likewise, there is not an expectation that the physical processes of cognition are functionally deterministic.

Another result of physical limits on cognition is observed in the intention of moving a body limb, such as a person reaching for an object across a table. Studies have replaced the assignment of intentionality with a material interpretation of this action, and have shown that the relevant neural activity occurs before awareness of the associated motor action [12].

Across the natural sciences, the neural system has been studied at various biological scales, including at the molecular level and at the higher level in the case of information processing [13][14]. At this higher-level perspective, the neural systems are functionally analogous to the deep learning models of computer science, as both are based on information and the flow of information [15][16]. This allows for a comparative approach for understanding cognitive processes. However, at the lower scale, the artificial neural system is dependent on an abstract model of neurons and the network, so at this level the animal neural system is not likely comparable.

1.3. Scope of this Definition

The definition of cognition as used here is restricted to a set of mechanical processes [1]. Moreover, cognitive processing is described from a broad perspective, with some examples from the visual system along with insights from the deep learning approaches in computer science.
This is an informational perspective of cognition, since this scale has explanatory power in explaining the causes of knowledge. The other scales, both the large and the small, seem less tractable for constructing explanations of cognition. At the larger scale, if we consider the mental processes as occurrences of the mind, then the phenomena of cognition are subject to mere interpretation, guided by perception and impression, and are not restricted to the true designs and processes of nature. At the lower scale, modeling cognition is less tractable as an activity at the level of individual neuronal cells, given the complexity of the corresponding experiments. These notions on scale and perspective are pivotal in finding explanations for the phenomena of nature [17].
There are also insights from other perspectives, but the definition that follows is not a systematic review of studies of the mind or a broad survey of empirical knowledge across the cognitive sciences; instead, it is a narrow survey of the physical processes and the phenomena of information of higher cognition. This definition is also for a general audience and academic workers outside the science of cognition. However, within the practice of cognitive science, the technical terms may be defined in a different context, consistent with the stricter definitions as recommended by this entry [8][18].

1.4. Organization of Cognition as a Science

The following sections represent the categories of cognition and its processes. However, they do not reflect the true divisions in cognition, since the cognitive processes are not fully understood at a mechanistic level. Instead, the divisions are based on commonly used boundaries in cognition, such as in the division between sensory perception and higher cognition. The last section on conceptual knowledge is a synthesis of ideas from the previous sections, and serves the purpose of yielding an insight into the general properties of higher cognition. The theme of the sections is that the neural network and its information flow are the foundation for a deeper understanding of the cognitive processes.

1.5. Definition of the Terminology

This entry uses terminology from science and engineering that requires further clarification. One example is a (mental) representation. In this case, a representation is commonly defined as information that corresponds to an idea or image. This is a particular case where there is reference to the mind, but this term is also a reminder that the origin of these phenomena is in the brain itself. They are encoded in the neural network of the brain.

Another term is "probabilistic", as a description of a process. This refers to a process that is expected to vary and potentially lead to different outcomes. Representations are expected to occur by this process, so their properties will vary among individuals.

The reference to deep learning originates in the field of engineering. This is a many-layered neural network that is particularly suited for learning about the abovementioned representations. These artificial neural networks share a network-like organization with that of the brain in animals. The other terms and their use are expected to follow their commonly accepted meanings as found in a dictionary of scientific or common words. For example, the biosphere of the Earth refers to that portion of the planet shaped by biological and geological processes. These processes are dynamic, so they have changed over time and across the surface of the Earth.

Lastly, the informational processes have been described as physical processes because information flow is a phenomenon of the physical world. Therefore, matter and energy are required for this phenomenon to occur. The proximate mechanism of the flow of information in the brain is in the electrochemical dynamics that occur among neuronal cells, involving the movement of the ions of chemical elements that generate an electromotive force (voltage), and the diffusion of molecular-level neurotransmitters. The neural system is also influenced by humoral factors, such as the chemical messengers known as hormones.

2. Visual Cognition

2.1. Evolution and Probabilistic Processes

The processes of vision occupy about one-half of the cerebral cortex of the human brain [19]. Similar to the many sensory forms of language processing in humans, vision is a major source of input and recognition of the outside world. The complexity of the sensory systems reveals an important aspect of the evolutionary process, as observed across cellular life, along with their countless forms and novelties. Evolution depends on physical processes, such as mutation and population exponentiality, along with a dependence on geological time scales for building biological complexity, as observed at all scales of life. These effects have also formed and shaped the biosphere of the Earth.
This vast complexity across living organisms is revealed by deconstruction of the camera eye in animals. This novel form emerged over time from a simpler one, such as an eye spot, and depended on a sequence of adaptations over time [20][21]. These rare and unique events did not hinder the independent formation of the camera eye, as it occurs in both the lineage of vertebrates and the unrelated lineage of cephalopods. This is an example of evolution as a powerful generator of change in physical traits, although counterforces restrict evolution from searching across an infinite number of possible novelties, including constraints that are found in the genetic code and those of the physical processes that shape these traits.
The evolution of cognition and neural systems are expected to occur by a similar probabilistic process to that theorized in the origin and design of the camera eye. An alternative to this bottom-up design in nature is to suggest a set of non-probabilistic processes and a top-down design consistent with determinism. For the hypothesis of determinism in nature, there is an expectation of true and perfect forms, as Plato theorized, but this hypothesis is not favorable for descriptions of activity in the brain.
Therefore, with the probabilistic view of evolution and the force of natural selection, the neural systems are expected to show a large degree of optimality in their design, as observed across the other biological systems [22]—especially since neural systems co-adapt with the sensory systems. However, this optimality is also constrained by the limits of molecular, cellular, and population processes [23]. This is not an assertion that biological systems are perfectly optimal, but that they are reasonably efficient in their structure and function. This view is particularly supported by observations of anatomical features across vertebrate species, and their adaptations for specific environments, such as those observed in the skeletal design of whales versus horses.

2.2. Abstract Encoding of Sensory Input

“The biologically plausible mechanism of cognition originates from the high-dimensional information in the outside world. In the case of vision, the sensory data consist of reflected light rays that are absorbed across a two-dimensional surface, the retinal cells of the eye. These light rays may range across the electromagnetic spectra, but the retinal cells are specific to a small subset of these light rays” [3].
Figure 1 shows the above view, in abstract form, as a sheet of neuronal cells that receive sensory input from the outside world. The input is processed by cell surface receptors and communicated downstream for neural system processing. The sensory neurons and their receptors can be imagined as a set of activation values that are undergoing change over time, and abstractly described as a dynamic system, in which change occurs in the dimensions of space and time.
Figure 1. An abstract set of dynamic representations of information that is received by a sensory organ, such as the light rays absorbed by neuronal cells across the retinal surface of the camera eye [3].
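As an illustrative sketch only (not part of the entry's sources), this abstraction can be written as a small dynamic system: a grid of activation values that decays and integrates new sensory input at each time step. The grid size, decay constant, and random stimulus are arbitrary assumptions.

```python
import numpy as np

# A minimal sketch of Figure 1's abstraction: a 2-D sheet of sensory
# neurons holding activation values that change over time. The decay
# rate and the stimulus are invented for illustration.

rng = np.random.default_rng(0)
sheet = np.zeros((16, 16))           # activation values of a 16x16 sensory sheet

def step(sheet, stimulus, decay=0.9):
    """One time step: activations decay and integrate the new input."""
    return decay * sheet + (1.0 - decay) * stimulus

for t in range(50):
    stimulus = rng.random((16, 16))  # stand-in for absorbed light intensities
    sheet = step(sheet, stimulus)

print(round(float(sheet.mean()), 3))  # summary of the sheet's state
```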

The information processing of the sensory organs is tractable for scientific study, but the downstream cognitive processes are less understood at a mechanistic level. The cognitive processes include the generalizing of knowledge, also referred to as transfer learning, which is a higher level of organization than that constructed from the sensory input [7][24][25]. Transfer learning is dependent on segmentation (division) of the sensory world and identification of sensory objects (such as visual or auditory) with resistance to variation in viewpoint or perspective (Figure 2) [26].

Figure 2. The first panel is a drawing of the digit nine (9), while the next panel is the same digit as transformed by rotation [3].

In computer science, there is a model [6] designed for the segmentation and robust recognition of objects. This approach includes sampling of the sensory input, the identification of the parts of sensory objects, and encoding of the information in an abstract form for presentation to the downstream neural processes. The encoding scheme is expected to include a set of discrete representational levels of unlabeled (unidentified) objects and then uses a probabilistic approach for matching these representations to known objects in the memory. Without the potential for a labeled memory that describes an object, there is no opportunity for knowledge of the object, nor a basis for knowledge in general.
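A minimal sketch of this parts-based scheme, under the assumption that parts are feature vectors and that matching is a normalized similarity score; the objects, parts, and feature values below are invented for illustration.

```python
import numpy as np

# Toy parts-based recognition: each known object is stored in memory as a
# set of part feature vectors, and an unlabeled object is matched to memory
# by a probabilistic (softmax) score. All features are invented.

memory = {
    "face":  np.array([[1.0, 0.2], [0.8, 0.4], [0.9, 0.1]]),   # part features
    "house": np.array([[0.1, 0.9], [0.2, 1.0], [0.0, 0.8]]),
}

def match(parts, memory):
    """Score each known object by mean part similarity, then normalize."""
    scores = np.array([np.mean(parts @ v.T) for v in memory.values()])
    probs = np.exp(scores) / np.exp(scores).sum()               # softmax
    return dict(zip(memory, probs))

unlabeled = np.array([[0.9, 0.3], [0.85, 0.2]])   # parts of an unseen object
print(match(unlabeled, memory))                    # higher probability: "face"
```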
Information is the proximate cause of cognition, and the laws of thermodynamics determine how information flows in any physical system, whether in a biological context or an artificial analog [27]. At other spatial scales, the physical processes in the brain are not homologous with an artificial neural network, such as at the level of neurons, where the intricacies of cellular processes are not shared with an artificial one. However, our history is filled with examples of engineers replicating the large-scale designs of nature, including lakes, bridges, and the construction of underwater vessels. The designs are similar at a physical scale because both natural and artificial forms are constrained by physical processes.

3. General Cognition

3.1. Algorithmic Description

Experts have investigated the question of whether an algorithm can explain brain computation [28]. They concluded that this is an unsolved problem, even though natural processes are inherently representable by a quantitative model. However, information flow in the brain is a product of a non-linear dynamical system, a complex phenomenon that is analogous to the physics of fluid flow, a complexity that may exceed the limits of computational work. Similarly, these systems are highly complex and not easily mirrored by simple mathematical descriptions [28][29]. Experts recommend an empirical approach for disentangling these kinds of complex systems, since they are not considered very tractable at a theoretical level.
An artificial neural system, such as in the deep learning architectures, has strong potential for testing hypotheses on higher cognition. The reason is that engineered systems are built from parts and relationships that are known, whereas in nature, the origin and history of the system are obscured by time and a large number of events; in this case, acquiring scientific knowledge likely requires extensive experimentation that is often confounded with error, including from sources that are known and unknown.

3.2. Encoding of Knowledge

It is possible to hypothesize about a model of object representation in the brain and its artificial analog in the deep learning systems. First, these cognitive systems are expected to encode objects by their parts, the basic elements of an object [5][6][7]. Second, it is expected that the process is stochastic, a probabilistic process, as in all other natural processes.
The neural network system is, in its essence, a programmable system [30], encoded with weight values along the connections in the network and activation values at the nodes. It is expected that the brain functions analogously at the level of information processing, since these systems are both based on non-linear dynamic principles of an interconnected network of nodes and a distribution of the representations of objects [7][28][31][32]. Furthermore, the encoding schemes in the network are likely to be abstract and generated by probabilistic processes.
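This programmable-system view can be illustrated with a toy forward pass, in which the encoding resides in the weight values along the connections and the nodes produce non-linear activation values; the layer sizes and random values are arbitrary assumptions.

```python
import numpy as np

# A toy "programmable system": the encoding is carried by weight values
# along the connections (W1, W2), and the nodes hold non-linear activation
# values. Layer sizes and random values are arbitrary.

rng = np.random.default_rng(5)
W1 = rng.standard_normal((8, 4))   # connection weights, input -> hidden
W2 = rng.standard_normal((2, 8))   # connection weights, hidden -> output

def forward(x):
    hidden = np.tanh(W1 @ x)       # activation values at the hidden nodes
    return np.tanh(W2 @ hidden)    # distributed representation of the input

print(np.round(forward(rng.standard_normal(4)), 3))
```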
Moreover, a physical interpretation of cognition requires the matching of patterns for the generalization of knowledge. This is consistent with a view of cognition as a statistical machine with a reliance on sampling for robust information processing [33]. With advancement in the deep learning methods, such as the invention of the transformer architecture [7][34][35], it is possible to sample and search for exceedingly complex patterns in a sequence of information, including in the case of object detection across a visual scene [36]. This sampling of the world occurs across the sensory modalities, such as those in vision and hearing, which are the sources of information for processing and constructing the internal representations [37].
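As a sketch of this pattern-matching step, the core of the transformer architecture reduces to scaled dot-product attention, where each query is compared against all keys and the values are mixed by the resulting weights; the dimensions below are arbitrary choices.

```python
import numpy as np

# Scaled dot-product attention, the pattern-matching core of the
# transformer: each query is scored against all keys, and the values
# are combined by the softmax-normalized scores. Sizes are arbitrary.

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])                   # pairwise matches
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax rows
    return weights @ V                                        # mixed values

rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)   # (4, 8): one mixed vector per query
```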

3.3. Representation of Common-Sense Concepts

Microsoft Research released a deep learning method based on the transformer architecture, along with the inclusion of curated and structured data, to achieve some degree of parity with people in common-sense reasoning [38]. Their example of this kind of reasoning is described by a question on what people do while playing a guitar. The common-sense answer is for people to sing. This association is not a naive one, since the concept of singing is not a property of a guitar. Their achievement of parity with people is possible by the addition of the curated and structured data.
Their finding showed that an online corpus by itself is insufficient for a full knowledge of concepts. The conventional transformer architecture is dependent on and limited by the information inherent in a sequence of data for downstream representation of conceptual knowledge. In their case, the missing component was the curation and structure in the data, and the results showed a competitive capability for building concepts from representations as derived from input data.
The use of a large sample of representations that correspond to an abstract or non-abstract object or an event is expected to further increase robustness in a model of higher cognition [39]. Our knowledge of concepts is expected to form in the same manner. If there are incomplete or missing parts of a concept, then a person will have difficulty in forming the whole concept and applying it during problem solving.

3.4. Future Directions in Cognitive Science

3.4.1. Dynamics of Cognition

Is higher cognition as interpretable as a deep learning system? This question arises from the difficulty of disentangling the mechanisms of an animal neural system, whereas it is possible to record the changing states of an artificial system, since its underlying design is known. If the artificial system is analogous, then it is possible to gain insight into the natural forms of cognition [7][40]. However, the assumption for this analogy may not hold. For example, it is known that the mammalian brain is highly dynamic, such as in the rates of sensory input and the downstream activation of internal representations [28]. These dynamic properties are not easily modeled in deep learning systems, a constraint of hardware design and efficiency [28]. This has been an impediment to the design of an artificial system that is approximate to higher cognition, although there are concepts for modeling these dynamics, such as an architecture that includes “fast weights” and provides a form of true recursion across a neural network [7][28]. This allows for a self-referential system that can continue to adapt to new experience. Recently, there have been studies on this architecture to address the performance problem [41][42].
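One possible reading of the "fast weights" concept, sketched here with an invented update rule: a slow weight matrix is paired with a rapidly decaying, activity-dependent matrix that gives the network a short-term, self-referential memory. This is not the published architecture, only an illustration.

```python
import numpy as np

# Sketch of a "fast weights" overlay: W_slow changes only with learning,
# while A_fast is updated at every step from recent activity (a Hebbian-
# style outer product) and decays quickly. Constants are assumptions.

rng = np.random.default_rng(2)
W_slow = 0.1 * rng.standard_normal((8, 8))   # slow, learned weights
A_fast = np.zeros((8, 8))                    # fast, activity-dependent weights
h = np.zeros(8)

for t in range(20):
    x = rng.standard_normal(8)                     # input at time t
    h = np.tanh(W_slow @ h + x + A_fast @ h)       # fast weights modulate recursion
    A_fast = 0.95 * A_fast + 0.1 * np.outer(h, h)  # rapid, decaying update

print(np.round(h, 2))
```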
The artificial neural networks continue to scale in size and efficiency. This work has been accompanied by empirical approaches for exploring the sources of error in these systems, and this effort is dependent on a thorough understanding of the construction of the models. One avenue for increasing the robustness of the output is to combine many sources of sensory data, such as from the visual domain and the senses associated with the natural language domains, where the communication of language is not restricted to the written form. Another approach is to establish unbiased measures of the reliability of model output [28][36]. Likewise, information processing in animals is not immune to error and bias, as in human cognition, where there are well-documented biases in speech perception [43].
These approaches are a foundation for emulating the modularity and breadth of function in higher cognition. For achieving this aim, meta-learning methods can create a formal, modular [44], and structured framework for combining disparate sources of data. This scalable approach would lead to building complex information systems and reflect the higher cognitive processes [45][46].

3.4.2. Generalization of Knowledge

Another area of interest is the property of generalization in a model of higher cognition. This property may be better understood by a study of the processes that form the internal representations from sensory input [6][47][48]. Further, in an abstract context, generalizability is based on the premise that information on the outside world is compressible, such as in its repeatability of the patterns of sensory information, so that it is possible for any system to classify objects and therefore obtain knowledge of the world.
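The compressibility premise can be demonstrated with a toy measurement that uses a general-purpose compressor as a crude stand-in for a learner's ability to exploit repeated patterns.

```python
import os
import zlib

# Toy demonstration of compressibility: repeated patterns compress well,
# patternless bytes do not. Compressed size is a crude proxy for the
# structure that any learning system could exploit.

patterned = b"abcabcabc" * 100     # 900 bytes of repeatable pattern
noise = os.urandom(900)            # 900 bytes with no structure

print(len(zlib.compress(patterned)))  # small: the pattern was found
print(len(zlib.compress(noise)))      # near 900: nothing to exploit
```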
There is also the question of how to reuse knowledge outside the environment where it is learned, “being able to factorize knowledge into pieces which can easily be recombined in a sequence of computational steps, and being able to manipulate abstract variables, types, and instances” [7]. Therefore, it is relevant to have a model of cognition that includes the higher-level representations based on the parts of objects, whether derived from sensory input or internal to the neural network. However, the dynamic and various states of the internal representations are also contributors to the processes of higher reasoning.

3.4.3. Embodiment in Cognition

Lastly, there is uncertainty on the dependence of cognition on the outside world. This dependence has been characterized as the phenomenon of embodiment, i.e., that the occurrence of cognition is dependent on an animal or similar form, so the natural form of cognition is also an embodied cognition, even in the case where the world is a machine simulation [28][49][50]. In essence, this is a property of a robotic and mechanical system, where its functions are fully dependent on specific input and output from the world. Although a natural system receives input, produces output, and learns at a time scale constrained by the physical world, an artificial system is not as constrained, such as in the case of reinforcement learning [50][51][52], a method that can also reconstruct sensorimotor function in animals. Moreover, an artificial system is not restricted to a single bodily form in its functions.
Deepmind [50] developed artificial agents in a three-dimensional space that learn in a continually changing world. The method uses a deep reinforcement learning method in conjunction with dynamic generation of environments that lead to the unique arrangement of each world. Each of the worlds contains artificial agents that learn to handle tasks and receive rewards for completing specific objectives. An agent observes a pixel image of an environment along with receiving a “text description of their goal” [50]. Task experience is sufficiently generalizable that the agents are capable of adapting to tasks that are not yet known from prior experience. This reflects an animal that is embodied in a world and is learning interactively by the performance of physical tasks. It is known that animals navigate and learn from the world around them, so the above approach is a meaningful experiment within a virtual world. However, this approach is fragile for tasks outside of its distribution of prior learned experiences.

4. Abstract Reasoning

4.1. Abstract Reasoning as a Cognitive Process

Abstract reasoning is often associated with a process of thought, but the elements of the process are ideally represented as physical processes. This restriction constrains explanations of the emergence of abstract reasoning, as in the formation of new concepts in an abstract world. Moreover, a process of abstract reasoning may be compared against the more intuitive forms of cognition as found in vision and speech perception. Without sensory input, the layers of the neural system are not expected to encode new information by a pathway, as is expected in the recognition of visual objects. Therefore, it is expected that any information system is dependent on an external input for learning, an essential process for the formation of experiential knowledge.
It follows that abstract reasoning is formed from an input source as received by the neural system. If there is no input that is relevant to a pathway of abstract reasoning, then the system is not expected to encode that pathway. This also leads to the hypothesis of whether abstract reasoning is composed of one or more pathways, and the contribution of other unrelated pathways in cognition. It is probable that there is no sharp division between abstract reasoning and the other types of reasoning, and the likelihood that there is more than one pathway of abstract reasoning, as exemplified in the case of solving puzzles that require the manipulation of objects in the visual world.
Another hypothesis is on whether the main source of abstract objects is the internal representations. If true, then a model of abstract reasoning would involve the true forms of abstract objects, in contrast to the recognition of an object by reconstruction from sensory input in the neural network system.
Since abstract reasoning is dependent on an input source, there is an expectation that deep learning methods modeling the non-linear dynamics are sufficient to model one or more pathways involved in abstract reasoning. This reasoning involves the recognition of objects that are not necessarily sensory objects with definable properties and relationships. As with the training process to learn sensory objects, it is expected that there is a training process to learn about the forms and properties of abstract objects. This class of problem is of interest, since the universe of abstract objects is boundless, and their properties and interrelationships are not constrained by the essential limits of the physical world.

4.2. Models of Abstract Reasoning

A model of higher cognition includes abstract reasoning [7]. This is a pathway or pathways that are expected to learn the higher-level representations of sensory objects, such as from vision or hearing, and for which the input is processed to generate a generalizable rule set. These may include a single rule or a sequence of rules. One model is for the deep learning system to learn the rule set, such as in the case of puzzles solvable by a logical operation [53]. This is likely the basis for a person playing a chess game by memorizing prior patterns of information and events on the game board, which lead to general knowledge of the game system as a kind of world model.
Similarly, another kind of visual puzzle is the Rubik’s Cube. However, in this case, the final state of the puzzle is known, where each face of the cube will share a single and unique color. Likewise, if there is a detectable rule set, then there must be patterns of information that allow the construction of a generalized rule set.
The pathway to a solution can include the repeated testing of potential rule sets against an intermediate or final state of the puzzle. This iterative process may be approached by a heuristic search algorithm [7]. However, these puzzles are typically low-dimensional as compared with abstract verbal problems, as in inductive reasoning. The acquisition of rule sets for verbal reasoning requires a search for patterns in a higher-dimensional space. In either of these cases of pattern searching, whether complex or simple, they are dependent on the detection of patterns that represent a set of rules.
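A minimal sketch of this iterative testing of rule sets, with toy integer operations standing in for puzzle rules; the search enumerates short rule sequences, checks each candidate against the goal state, and concludes that no rule set exists if none matches.

```python
from itertools import product

# Toy search over candidate rule sets: enumerate short sequences of rules
# and test each against the puzzle's goal state. The integer "rules" are
# invented stand-ins for the logical operations discussed above.

rules = {"add3": lambda s: s + 3, "double": lambda s: s * 2, "negate": lambda s: -s}

def solve(start, goal, max_len=3):
    for length in range(1, max_len + 1):
        for seq in product(rules, repeat=length):   # candidate rule set
            state = start
            for name in seq:
                state = rules[name](state)          # apply rule to the state
            if state == goal:
                return seq                          # a rule set that solves it
    return None                                     # conclude no rule set exists

print(solve(2, 10))   # ('add3', 'double'): (2 + 3) * 2 == 10
```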
It is simpler to imagine a logical operation as the pattern that offers a solution, but it is expected that inductive reasoning involves higher-dimensional representations than a simple operator that combines Boolean values. It is also probable that these representations are dynamic, so there is potential to sample from a set of many valid representations.

4.3. Future Directions in Abstract Reasoning

4.3.1. Embodiment in a Virtual and Abstract World

While the phenomenon of embodiment refers to an occupant of the three-dimensional world, this is not necessarily a complete model for reasoning on abstract concepts. However, it is plausible that at least some abstract concepts are solvable in a virtual three-dimensional world. Similarly, Deepmind showed a solution to visual problems across a generated set of three-dimensional worlds [50].
A population and distribution of tasks are also elements in Deepmind’s approach. They show that learning a task distribution leads to knowledge for solving tasks outside the prior task distribution [50][51]. This leads to the potential for generalizability in solving tasks, along with the promise that increased complexity across the worlds would lead to further expansion in the knowledge of tasks.
However, the problem of abstract concepts extends beyond the conventional sensory representations as formed by higher cognition. Examples include visual puzzles with solutions that are abstract and require the association of patterns that extend beyond the visual realm, along with the symbolic representations from the areas of mathematics [54][55].
By combining these two approaches, it is possible to construct a world that is not a reflection of the three-dimensional space as inhabited by animals, but to construct a virtual world of abstract objects and sets of tasks instead [51]. The visual and symbolic puzzles, such as in the case of chess and related boardgames [52], are solvable by deep learning approaches, but the machine reasoning is not generalized across a space of abstract environments and objects.
The question is whether the abstract patterns used to solve chess are also useful in solving other kinds of puzzles. It seems a valid hypothesis that there is at least some overlap in the use of abstract reasoning between these visual puzzles and the synthesis of knowledge from other abstract objects and their interactions [50], such as in solving problems by the use of mathematical symbols and their operators [55][56]. Since humans are capable of abstract thought, it is plausible that the generation of a distribution of general abstract tasks would lead to a working system for solving a wider set of abstract problems [57].
If, instead of a dynamic generation of three-dimensional worlds and objects, there is a vast and dynamic generation of abstract puzzles, for example, then the deep reinforcement learning approach could be trained on solving these problems and acquiring knowledge of these tasks [50]. The question is whether the distribution of these applicable tasks is generalizable to an unknown set of problems (those unrelated to the original task distribution), and the compressibility of the space of tasks. This hypothesis is further supported by a recent study [57].

4.3.2. Reinforcement Learning and Generalizability

Google Research showed that an unmodified reinforcement learning approach is not necessarily robust for acquiring knowledge of tasks outside the trained task distribution [51]. Therefore, they introduced an approach that incorporates a measurement of similarity among worlds that are generated by a reinforcement learning procedure. This similarity measure is estimated by behavioral similarity, corresponding to the salient features by which an agent finds success in any given world. Given that these salient features are shared among the worlds, the agents have a path for generalizing knowledge for success in worlds outside their experience. Procedurally, the salient features are acquired by a contrastive learning procedure, i.e., a method for unlabeled clustering of samples, which embeds these values of behavioral similarity in the neural network itself [58].
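A sketch of the contrastive step, assuming an InfoNCE-style objective over cosine similarities; the embeddings are invented, and this is an illustration rather than Google Research's implementation.

```python
import numpy as np

# Illustrative contrastive (InfoNCE-style) loss: an anchor state is pulled
# toward a behaviorally similar state and pushed away from dissimilar ones.
# All embeddings here are invented stand-ins for learned state features.

rng = np.random.default_rng(3)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(anchor, positive, negatives, temperature=0.5):
    logits = np.array([cosine(anchor, positive)] +
                      [cosine(anchor, n) for n in negatives]) / temperature
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

anchor = rng.standard_normal(16)                    # state embedding, world A
positive = anchor + 0.1 * rng.standard_normal(16)   # behaviorally similar state
negatives = [rng.standard_normal(16) for _ in range(5)]
print(round(float(contrastive_loss(anchor, positive, negatives)), 3))
```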
This reinforcement learning approach is dependent on both a deep learning framework and an input data source. The source of input is typically in a two- or three-dimensional artificial environment where an artificial agent learns to accomplish tasks within the confines of the worlds and their rules [50][51]. One approach is to represent the salient features of tasks and the worlds in a neural network. As Google Research showed [51], the process requires an additional step in extracting the salient information for creating better models of the tasks and worlds. They found that this method was more robust in the generalization of tasks. Similarly, in higher cognition, it is expected that the salient features used to generalize tasks are stored in the neuronal network.
Therefore, a naive input of visual data from a two-dimensional environment is not an efficient means of coding tasks that consistently generalize across environments. To capture the high-dimensional information in a set of related tasks, Google Research extended the reinforcement learning approach to better capture the task distribution [51], and it may be possible to mimic this approach by similar methods. These task distributions provide structured data for representing the dynamics of tasks among worlds, and therefore generalize and encode the high-dimensional and dynamic features in a low-dimensional form.
It is difficult to imagine the relationship between two different environments. The game of checkers and that of chess appear as different game systems. Encoding the dynamics of each of these in a deep learning framework may show that they relate in an unintuitive and abstract way [50]. This concept is expressed in the article cited above [51], indicating that short paths of a larger pathway may provide the salient and generalizable features. In the case of boardgames, the salient features may not correspond to a naive perception of visual relatedness. Likewise, our natural form of abstract reasoning shows that patterns are captured in these boardgames, and these patterns are not entirely recognized by a single rule set at the level of our awareness, but, instead, are likely represented at a high-dimensional level in the neural network itself.
For emulation of a process of reasoning, extracting the salient features from a pixel image is a complex problem, and the pathway may involve many sources of error. Converting images to a low-dimensional form, particularly for the salient subtasks, allows for a greater expectation of generalization and repeatability in the patterns of objects and events. Where it is difficult to extract the salient features of a system, it is possible to translate and reduce the objects and events of the system to text-based descriptors, a process that has been studied and lends itself to interpretation [57][58][59][60].
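The translation of objects and events to text-based descriptors can be pictured with a trivial sketch; the scene and detector output below are hypothetical stubs written for this text, not part of any cited system.

```python
# A hypothetical detector has already reduced a visual scene to structured
# records; the records are then flattened to short text descriptors, a
# low-dimensional form that downstream reasoning can operate on.
scene = [{"object": "knight", "square": "f3"},
         {"object": "pawn", "square": "e4"}]
descriptors = [f"{item['object']} at {item['square']}" for item in scene]
print("; ".join(descriptors))  # "knight at f3; pawn at e4"
```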
Lastly, since the higher cognitive processes involve the widespread use of dynamic representations, it is plausible that tasks are not merely generalizable but may originate in the varied sensory and memory systems. The tasks would therefore be expressed in different sensory forms, although the low-dimensional representations are more generalizable, provide a better substrate for the recognition of patterns, and are essential for a process of abstract reasoning.

5. Conceptual Knowledge

5.1. Knowledge by Pattern Combination and Recognition

In the 18th century, the philosopher Immanuel Kant suggested that the synthesis of prior knowledge leads to new knowledge [61]. This theory of knowledge extended the concept of objects from a set of perfect forms to a recombination of forms, leading to a boundless number of mental representations. This was the missing concept for explaining the act of knowing. The forces of knowledge were therefore no longer dependent on descriptions outside the realm of matter, or on hypotheses based on an unbounded complexity of material interactions.
It is possible to divide these objects and forms of knowledge into two categories: sensory and abstract. The sensory objects are ideally constructed from sensory input, even though this assumption is not universal. Instead, perception may refer to the construction of these sensory objects, along with any error occurring in their associated pathways. In comparison, the abstract object is ideally a true form; an ideal example is a mathematical symbol, such as the operator for the addition of numbers [55]. However, an abstract object may coincide with sensory objects, such as an animal and its taxonomic relationship to other forms of animals.
Therefore, one hypothesis is that the objects of knowledge form a single category, but that the input used to form an object comes from at least two sources: sensations from the outside world and the representations of objects stored in memory.
A hypothetical example is from chess. A person is not able to calculate every game piece and position given all events on the board. Instead, decision-making is largely dependent on boardgame patterns with respect to the pieces and positions. However, the set of observable patterns, as compared with all possible patterns, is strongly bounded. One solution is the hypothesis that patterns also exist as internal representations that are synthesized and formed into new patterns not yet observed. Evidence for this hypothesis is in the predictive coding of sensory input, namely that this compensatory action allows a person to perceive elements of a visual scene or speech a short time prior to their occurrence [33]. This same predictive coding pathway may apply to internal representations, such as chess gameboard patterns, and to the ability to recombine prior objects of knowledge. The process of creating new forms and patterns would allow a person to greatly expand upon the number of observable patterns in a world.
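A minimal sketch of a textbook predictive coding update, written for this text rather than drawn from the cited studies, shows the core mechanism: an internal state predicts the next input, and only the prediction error drives the correction, so the error shrinks as the internal model comes to anticipate the input.

```python
import numpy as np

rng = np.random.default_rng(2)
signal = np.ones(100) + 0.05 * rng.normal(size=100)  # a steady, noisy input

prediction, rate, errors = 0.0, 0.2, []
for x in signal:
    error = x - prediction        # prediction error: surprise in the input
    prediction += rate * error    # internal state corrected by the error only
    errors.append(abs(error))

# errors shrink as the internal model converges on the input statistics
print(f"mean |error|, first vs last ten steps: "
      f"{np.mean(errors[:10]):.3f} vs {np.mean(errors[-10:]):.3f}")
```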
To summarize, the process of predictive coding of sensory information should also apply to the re-formation of internal representations. This is a force of recombination that is expected to lead to a very large number of forms in memory, and it is used for the detection of objects and forms that have not yet been observed. Knowledge by synthesis of priors has the potential to generate a multitude of forms consistent with the extent of human thought. In this case, a cognitive ether of immeasurability or incomputability is not necessary to explain higher cognition and its processes.

5.2. Models of Generalized Knowledge

Evidence is mounting in support of a deep learning model, namely the transformer, for sampling data and constructing high-dimensional representations [35][46][55][57][58][60][62][63][64]. A study by Google Research employed a decision transformer architecture to show transfer learning in tasks that occurred in a fixed and controlled setting (Atari) [58]. This work supports the concept that generalized patterns occur in an environment, with the potential for resampling those patterns in other environments. The experimental control of the environmental properties is somewhat analogous to cognitive processes that originate in a single embodied source [49][50]. Altogether, the sampling of patterns is from the population of all possible patterns that occur in the system. A sufficiently large sample of tasks is expected to lead to knowledge of the system. The system may be thought of as a physical system; in this case, it is a visual space of two dimensions.
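The sequence layout used by this family of models can be shown in a few lines. The sketch below follows the interleaved (return-to-go, state, action) scheme described in the decision transformer literature [58][59]; the trajectory values are toy placeholders.

```python
# A toy trajectory flattened into the token layout a decision transformer
# consumes, so a sequence model can be conditioned on a desired return.
rewards = [0.0, 1.0, 0.0, 2.0]
states  = ["s0", "s1", "s2", "s3"]
actions = ["a0", "a1", "a2", "a3"]

# return-to-go at step t is the sum of rewards from t onward
returns_to_go = [sum(rewards[t:]) for t in range(len(rewards))]

sequence = []
for rtg, s, a in zip(returns_to_go, states, actions):
    sequence.extend([("rtg", rtg), ("state", s), ("action", a)])
print(sequence[:6])  # first two timesteps of interleaved tokens
```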
In another study, Deepmind asked whether task-based learning can occur across multiple embodied sources, such as patterns derived from torque-based tasks (a robot arm) and those from a set of captioned images [57]. Their results showed evidence of transfer learning across heterogeneous sources and indicated that the model is expected to scale in power with increases in data and model size.
These studies are complemented by the work of Chan and others [62]. This insightful work showed convincing evidence of the superior performance of the transformer architecture in handling a sequential data model. It further revealed the importance of the distributional qualities and dynamics of the training dataset, and their relationship to the properties of natural language data [62].
These computational studies provide evidence that model performance continues to scale with model size [57][58]. These models of generalized task learning operate in a particular setting, and it is possible to consider the setting as a physical system, such as a particular simulation or our physical world [50][64][65][66]. With a robust sampling of tasks in a controlled physical system, it is possible to learn the system and transfer the knowledge of tasks from the known to the unknown [50][64][65][66]. This is a form of pattern sampling that is robust in its representation of the population of all patterns that occur in a system. Deepmind has searched for these patterns by deep reinforcement learning while optimizing the approach by simultaneously searching for the shortest path toward learning the system [65]. This method is, in essence, learning a world model and forming a base set of cognitive processes for downstream use.
Since images with text descriptors lead to generalized task learning [64], video with text descriptors [63] is expected to enhance the model with a temporal dimension and to reflect tasks that are dynamic in time [66]. OpenAI developed a deep learning method that receives video data as input, with a minimal number of associated text labels, and is as capable as a person in learning tasks and modeling a world (Minecraft) [66]. There is also the question of the difference between simple and complex tasks; however, tasks may be decomposed into their parts and patterns, and OpenAI's reinforcement learning system achieves this aim without prior identification of these patterns [66].

5.3. Knowledge as a Physical Process

The informational processes in machine systems are analogous to those in the brain: both are systems constrained by the physical world and its rules. While machine systems are tractable for empirical studies of information flow and the acquisition of knowledge, biological systems are far less tractable. There is limited understanding of the specifics of how neurons work and how they organize to encode information, such as in memory formation in mammals [18].
For example, biology is dependent on the instruments of neurobiology for capturing the dynamics of a single neuronal cell's activity while a quantifiable behavior or action is observed over time [8][9][67]. Biological studies also require extensive experimentation for verification and insight; otherwise, a single experiment or a few experiments will result in unsupported interpretations and conclusions about the dynamic pathway or pathways of the neural system [68][69]. As in the history of studies in ecology, phenomenological approaches are better replaced by those built on theory and quantitative models, along with prudence in forming robust hypotheses, not mere questions, in the natural sciences. This problem is not restricted to natural science but also applies to the science of engineering [35].

References

  1. Merriam-Webster Dictionary (an Encyclopedia Britannica Company: Chicago, IL, USA). Available online: https://www.merriam-webster.com/dictionary/cognition (accessed on 27 July 2022).
  2. Cambridge Dictionary (Cambridge University Press: Cambridge, UK). Available online: https://dictionary.cambridge.org/us/dictionary/english/cognition (accessed on 27 July 2022).
  3. Friedman, R. Cognition as a Mechanical Process. NeuroSci 2021, 2, 141–150.
  4. Vlastos, G. Parmenides’ Theory of Knowledge. In Transactions and Proceedings of the American Philological Association; The Johns Hopkins University Press: Baltimore, MD, USA, 1946; pp. 66–77.
  5. Chang, L.; Tsao, D.Y. The code for facial identity in the primate brain. Cell 2017, 169, 1013–1028.
  6. Hinton, G. How to represent part-whole hierarchies in a neural network. arXiv 2021, arXiv:2102.12627.
  7. Bengio, Y.; LeCun, Y.; Hinton, G. Deep Learning for AI. Commun. ACM 2021, 64, 58–65.
  8. Streng, M.L.; Popa, L.S.; Ebner, T.J. Modulation of sensory prediction error in Purkinje cells during visual feedback manipulations. Nat. Commun. 2018, 9, 1099.
  9. Popa, L.S.; Ebner, T.J. Cerebellum, Predictions and Errors. Front. Cell. Neurosci. 2019, 12, 524.
  10. Searle, J.R.; Willis, S. Intentionality: An Essay in the Philosophy of Mind; Cambridge University Press: Cambridge, UK, 1983.
  11. Huxley, T.H. Evidence as to Man’s Place in Nature; Williams and Norgate: London, UK, 1863.
  12. Haggard, P. Sense of agency in the human brain. Nat. Rev. Neurosci. 2017, 18, 196–207.
  13. Ramón y Cajal, S. Textura del Sistema Nervioso del Hombre y de los Vertebrados; Nicolás Moya: Madrid, Spain, 1899.
  14. Kriegeskorte, N.; Kievit, R.A. Representational geometry: Integrating cognition, computation, and the brain. Trends Cogn. Sci. 2013, 17, 401–412.
  15. Hinton, G.E. Connectionist learning procedures. Artif. Intell. 1989, 40, 185–234.
  16. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
  17. Descartes, R. Meditations on First Philosophy; Moriarty, M., Translator; Oxford University Press: Oxford, UK, 2008.
  18. Friedman, R. Themes of advanced information processing in the primate brain. AIMS Neurosci. 2020, 7, 373.
  19. Prasad, S.; Galetta, S.L. Anatomy and physiology of the afferent visual system. In Handbook of Clinical Neurology; Kennard, C., Leigh, R.J., Eds.; Elsevier: Amsterdam, The Netherlands, 2011; pp. 3–19.
  20. Paley, W. Natural Theology: Or, Evidences of the Existence and Attributes of the Deity, 12th ed.; R. Faulder: London, UK, 1809.
  21. Darwin, C. On the Origin of Species; John Murray: London, UK, 1859.
  22. De Sousa, A.A.; Proulx, M.J. What can volumes reveal about human brain evolution? A framework for bridging behavioral, histometric, and volumetric perspectives. Front. Neuroanat. 2014, 8, 51.
  23. Slobodkin, L.B.; Rapoport, A. An optimal strategy of evolution. Q. Rev. Biol. 1974, 49, 181–200.
  24. Goyal, A.; Didolkar, A.; Ke, N.R.; Blundell, C.; Beaudoin, P.; Heess, N.; Mozer, M.; Bengio, Y. Neural Production Systems. arXiv 2021, arXiv:2103.01937.
  25. Scholkopf, B.; Locatello, F.; Bauer, S.; Ke, N.R.; Kalchbrenner, N.; Goyal, A.; Bengio, Y. Toward Causal Representation Learning. Proc. IEEE 2021, 109, 612–634.
  26. Wallis, G.; Rolls, E.T. Invariant face and object recognition in the visual system. Prog. Neurobiol. 1997, 51, 167–194.
  27. Friedman, R. A Perspective on Information Optimality in a Neural Circuit and Other Biological Systems. Signals 2022, 3, 25.
  28. Rina Panigrahy (Chair), Conceptual Understanding of Deep Learning Workshop. Conference and Panel Discussion at Google Research, 17 May 2021. Panelists: Blum, L., Gallant, J., Hinton, G., Liang, P., Yu, B. Available online: https://sites.google.com/view/conceptualdlworkshop/home (accessed on 17 May 2021).
  29. Gibbs, J.W. Elementary Principles in Statistical Mechanics; Charles Scribner’s Sons: New York, NY, USA, 1902.
  30. Schmidhuber, J. Making the World Differentiable: On Using Self-Supervised Fully Recurrent Neural Networks for Dynamic Reinforcement Learning and Planning in Non-Stationary Environments; Technical Report FKI-126-90; Technical University of Munich: Munich, Germany, 1990.
  31. Griffiths, T.L.; Chater, N.; Kemp, C.; Perfors, A.; Tenenbaum, J.B. Probabilistic models of cognition: Exploring representations and inductive biases. Trends Cogn. Sci. 2010, 14, 357–364.
  32. Hinton, G.E.; McClelland, J.L.; Rumelhart, D.E. Distributed representations. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition; Rumelhart, D.E., McClelland, J.L., PDP Research Group, Eds.; Bradford Books: Cambridge, MA, USA, 1986.
  33. Friston, K. The history of the future of the Bayesian brain. NeuroImage 2012, 62, 1230–1233.
  34. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762.
  35. Phuong, M.; Hutter, M. Formal Algorithms for Transformers. arXiv 2022, arXiv:2207.09238.
  36. Chen, T.; Saxena, S.; Li, L.; Fleet, D.J.; Hinton, G. Pix2seq: A language modeling framework for object detection. arXiv 2021, arXiv:2109.10852.
  37. Hu, R.; Singh, A. UniT: Multimodal Multitask Learning with a Unified Transformer. arXiv 2021, arXiv:2102.10772.
  38. Xu, Y.; Zhu, C.; Wang, S.; Sun, S.; Cheng, H.; Liu, X.; Gao, J.; He, P.; Zeng, M.; Huang, X. Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention. arXiv 2021, arXiv:2112.03254.
  39. Zeng, A.; Wong, A.; Welker, S.; Choromanski, K.; Tombari, F.; Purohit, A.; Ryoo, M.; Sindhwani, V.; Lee, J.; Vanhoucke, V.; et al. Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language. arXiv 2022, arXiv:2204.00598.
  40. Chaabouni, R.; Kharitonov, E.; Dupoux, E.; Baroni, M. Communicating artificial neural networks develop efficient color-naming systems. Proc. Natl. Acad. Sci. USA 2021, 118, e2016569118.
  41. Irie, K.; Schlag, I.; Csordás, R.; Schmidhuber, J. A Modern Self-Referential Weight Matrix That Learns to Modify Itself. arXiv 2022, arXiv:2202.05780.
  42. Schlag, I.; Irie, K.; Schmidhuber, J. Linear transformers are secretly fast weight programmers. In Proceedings of the International Conference on Machine Learning, PMLR 139, Virtual, 24 July 2021; pp. 9355–9366.
  43. Petty, R.E.; Cacioppo, J.T. The elaboration likelihood model of persuasion. In Communication and Persuasion; Springer: New York, NY, USA, 1986; pp. 1–24.
  44. Mittal, S.; Bengio, Y.; Lajoie, G. Is a Modular Architecture Enough? arXiv 2022, arXiv:2206.02713.
  45. Ha, D.; Tang, Y. Collective Intelligence for Deep Learning: A Survey of Recent Developments. arXiv 2021, arXiv:2111.14377.
  46. Mustafa, B.; Riquelme, C.; Puigcerver, J.; Jenatton, R.; Houlsby, N. Multimodal Contrastive Learning with LIMoE: The Language-Image Mixture of Experts. arXiv 2022, arXiv:2206.02770.
  47. Chase, W.G.; Simon, H.A. Perception in chess. Cogn. Psychol. 1973, 4, 55–81.
  48. Pang, R.; Lansdell, B.J.; Fairhall, A.L. Dimensionality reduction in neuroscience. Curr. Biol. 2016, 26, R656–R660.
  49. Deng, E.; Mutlu, B.; Mataric, M. Embodiment in socially interactive robots. arXiv 2019, arXiv:1912.00312.
  50. Open-Ended Learning Team; Stooke, A.; Mahajan, A.; Barros, C.; Deck, C.; Bauer, J.; Sygnowski, J.; Trebacz, M.; Jaderberg, M.; Mathieu, M.; et al. Open-ended learning leads to generally capable agents. arXiv 2021, arXiv:2107.12808.
  51. Agarwal, R.; Machado, M.C.; Castro, P.S.; Bellemare, M.G. Contrastive behavioral similarity embeddings for generalization in reinforcement learning. arXiv 2021, arXiv:2101.05265.
  52. Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 2018, 362, 1140–1144.
  53. Barrett, D.; Hill, F.; Santoro, A.; Morcos, A.; Lillicrap, T. Measuring abstract reasoning in neural networks. In Proceedings of the International Conference on Machine Learning, PMLR 80, Stockholm, Sweden, 15 July 2018.
  54. Schuster, T.; Kalyan, A.; Polozov, O.; Kalai, A.T. Programming Puzzles. arXiv 2021, arXiv:2106.05784.
  55. Lewkowycz, A.; Andreassen, A.; Dohan, D.; Dyer, E.; Michalewski, H.; Ramasesh, V.; Slone, A.; Anil, C.; Schlag, I.; Gutman-Solo, T.; et al. Solving Quantitative Reasoning Problems with Language Models. arXiv 2022, arXiv:2206.14858.
  56. Drori, I.; Zhang, S.; Shuttleworth, R.; Tang, L.; Lu, A.; Ke, E.; Liu, K.; Chen, L.; Tran, S.; Cheng, N.; et al. A Neural Network Solves, Explains, and Generates University Math Problems by Program Synthesis and Few-Shot Learning at Human Level. arXiv 2021, arXiv:2112.15594.
  57. Reed, S.; Zolna, K.; Parisotto, E.; Colmenarejo, S.G.; Novikov, A.; Barth-Maron, G.; Gimenez, M.; Sulsky, Y.; Kay, J.; Springenberg, J.T.; et al. A Generalist Agent. arXiv 2022, arXiv:2205.06175.
  58. Lee, K.H.; Nachum, O.; Yang, M.; Lee, L.; Freeman, D.; Xu, W.; Guadarrama, S.; Fischer, I.; Jang, E.; Michalewski, H.; et al. Multi-Game Decision Transformers. arXiv 2022, arXiv:2205.15241.
  59. Chen, L.; Lu, K.; Rajeswaran, A.; Lee, K.; Grover, A.; Laskin, M.; Abbeel, P.; Srinivas, A.; Mordatch, I. Decision Transformer: Reinforcement Learning via Sequence Modeling. Adv. Neural Inf. Process. Syst. 2021, 34, 15084–15097.
  60. Fei, N.; Lu, Z.; Gao, Y.; Yang, G.; Huo, Y.; Wen, J.; Lu, H.; Song, R.; Gao, X.; Xiang, T.; et al. Towards artificial general intelligence via a multimodal foundation model. Nat. Commun. 2022, 13, 1–13.
  61. Kant, I.; Smith, N.K. Immanuel Kant’s Critique of Pure Reason; Translated by Norman Kemp Smith; Macmillan & Co: London, UK, 1929.
  62. Chan, S.C.; Santoro, A.; Lampinen, A.K.; Wang, J.X.; Singh, A.; Richemond, P.H.; McClelland, J.; Hill, F. Data Distributional Properties Drive Emergent In-Context Learning in Transformers. arXiv 2022, arXiv:2205.05055.
  63. Seo, P.H.; Nagrani, A.; Arnab, A.; Schmid, C. End-to-end Generative Pretraining for Multimodal Video Captioning. arXiv 2022, arXiv:2201.08264.
  64. Yan, C.; Carnevale, F.; Georgiev, P.; Santoro, A.; Guy, A.; Muldal, A.; Hung, C.; Abramson, J.; Lillicrap, T.; Wayne, G. Intra-agent speech permits zero-shot task acquisition. arXiv 2022, arXiv:2206.03139.
  65. Guo, Z.D.; Thakoor, S.; Pîslar, M.; Pires, B.A.; Altche, F.; Tallec, C.; Saade, A.; Calandriello, D.; Grill, J.; Tang, Y.; et al. BYOL-Explore: Exploration by Bootstrapped Prediction. arXiv 2022, arXiv:2206.08332.
  66. Baker, B.; Akkaya, I.; Zhokhov, P.; Huizinga, J.; Tang, J.; Ecoffet, A.; Houghton, B.; Sampedro, R.; Clune, J. Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos. arXiv 2022, arXiv:2206.11795.
  67. Traniello, I.M.; Chen, Z.; Bagchi, V.A.; Robinson, G.E. Valence of social information is encoded in different subpopulations of mushroom body Kenyon cells in the honeybee brain. Proc. R. Soc. B 2019, 286, 20190901.
  68. Bickle, J. The first two decades of CREB-memory research: Data for philosophy of neuroscience. AIMS Neurosci. 2021, 8, 322.
  69. Piller, C. Blots on a field? Science 2022, 377, 358–363.