Cognition: History

Cognition is the acquisition of knowledge by the mechanical process of information flow in a system. In animal cognition, input is received by the sensory modalities, and the output may occur as a motor or other response. The sensory information is internally transformed into a set of representations that serves as the basis of cognitive processing. This contrasts with the traditional definition, which is based on mental processing and originated in metaphysical philosophy.

  • animal cognition
  • cognitive processes
  • physical processes
  • mental processes

1. Definition of Cognition

1.1. A Scientific Definition of Cognition

Dictionaries commonly define cognition as a mental process for acquiring knowledge. However, this view originated from the assignment of mental processes to the act of thought. These mental processes derive from a metaphysical description of cognition that includes the concepts of consciousness and intentionality.[1][2] This view also assumes that objects in nature are reflections of a true and determined form.

Instead, a material description of cognition is restricted to the physical processes available in nature. An example comes from a study of primate face recognition in which measurements of facial features serve as the basis for object recognition.[3] This perspective also excludes the concept of an innate and prior knowledge of objects; instead, cognition forms a representation of objects from their constituent parts,[4][5] and the physical processes of cognition are not inherently or functionally deterministic.

1.2. Mechanical Perspective of Cognition

Scientific work generally accedes to a mechanical description of information processing in the brain. However, a perspective based on the duality of physical and mental processing is retained in various academic disciplines. For example, there is a conjecture about the relationship between the human mind and a simulation of it.[6] This idea is based on assumptions about intentionality and the act of thought. Instead, a physical process of cognition is defined by the generation of action by neuronal cells without dependence on non-material processes.[7]

Another consequence of the physical limits of cognition is observed in the intention to move a limb, such as a person reaching for an object on a table. Studies have replaced the assignment of intention with a material interpretation of this action, showing that the relevant neural activity occurs before the perceptual awareness of the motor action.[8]

Across the natural sciences, the neural system is studied at various biological scales, from the molecular level to the higher level of information processing and synthesis.[9][10] At this higher-order perspective, neural systems are functionally analogous to the deep learning models of computer science,[11][12] which allows for a comparative approach to understanding cognitive processes. However, at the lower scale, the artificial neural system depends on an abstract model of neurons and their network, so at this scale the animal neural system is not likely comparable.

1.3. Scope of this Definition

The definition of cognition as used here is restricted to a set of mechanical processes. Cognitive processing is described from a broad perspective, with examples from the visual system and from the deep learning approaches of computer science.

2. Cognitive Processes in the Visual System

2.1. Probabilistic Processes in Nature

The visual cortical system occupies about one-half of the cerebral cortex. Along with language processing in humans, vision is a major source of sensory input and of recognition of the outside world. The complexity of the sensory systems reveals an important aspect of the evolutionary process, observable across cellular organisms in their formation of countless novelties. Evolution depends on physical processes, such as mutation and exponential population growth, along with a geological time scale for building complexity at all scales of life. These effects have further shaped the biosphere of the Earth.

Life's complexity is observable in the construction of the camera eye. This novel form emerged over time from a simpler organ, such as an eye spot, and its occurrence depended on a sequence of adaptations.[13][14] These presumably rare and unique events were no hindrance to the independent formation of a camera eye in both vertebrates and cephalopods. This shows that evolution is a powerful generator of change in physical traits, although counterforces limit the occurrence of an infinite number of varieties; these include restrictions from the finiteness of the genetic code and the constraints of physical laws on the expression of physical traits. These traits range from the microscopic to those observable by eye.

The evolution of cognition is a series of events analogous to the origination of the camera eye. The probabilistic processes that led to complexity in the camera eye are likewise a driver of the origin of cognition and the design of animal neural systems. These neural systems are expected to show the same efficiency in design as the other systems of an organism, especially since the neural system has co-evolved with the sensory organs. However, this efficiency is not boundless; it is constrained by the limits of molecular and cellular processes.

2.2. Abstract Encoding of Sensory Input

“The biologically plausible proximate mechanism of cognition originates from the receipt of high dimensional information from the outside world. In the case of vision, the sensory data consist of reflected light rays that are absorbed across a two-dimensional surface, the retinal cells of the eye. These light rays range across the electromagnetic spectra, but the retinal cells are specific to a small subset of all possible light rays”.[1]

Figure 1 shows the above view, in abstract form, as a sheet of neuronal cells that receive sensory input from the outside world. This input is processed at cell surface receptors and then communicated downstream to the neural system for processing. The sensory neurons and their receptors may be imagined as a set of activation values that change over time, succinctly described as a dynamical system (see the sketch after Figure 1).

Figure 1. An abstract representation of information that is received by a sensory organ, such as the light rays absorbed by neuronal cells across the retinal surface of a camera eye.
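
As a minimal illustration of this dynamical-system view, the sketch below treats the sheet of sensory neurons as an activation vector updated in discrete time. It is illustrative only; the leak rate, recurrent weights, and input statistics are assumptions, not measured quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                      # number of sensory neurons
W = rng.normal(0, 1 / np.sqrt(n), (n, n))   # recurrent coupling (assumed)
x = np.zeros(n)                             # activation values at time 0

def step(x, u, leak=0.1):
    """One discrete-time update: leaky integration of sensory input u
    plus recurrent drive from the other neurons."""
    return (1 - leak) * x + leak * np.tanh(W @ x + u)

for t in range(100):
    u = rng.normal(0, 1, n)                 # stand-in for absorbed light
    x = step(x, u)                          # activations change over time
```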

The information processing at the sensory organs is tractable to study, but the downstream cognitive processes are less understood at a proximate level. These processes include the generalization of knowledge, also referred to as transfer learning, which is a higher-order organizer of sensory input.[5][15][16] Transfer learning depends on segmentation of the sensory world and identification of sensory objects with robustness to changes in viewpoint or perspective (Figure 2).[17] In computer science, there is a model[4] designed for segmentation and robust recognition of objects. This approach includes sampling the sensory input, identifying the parts of sensory objects, and encoding the information in an abstract form for submission to downstream processes. The encoding scheme is expected to include a set of discrete representational levels of unlabeled objects and then employ a probabilistic approach for matching these representations to a known object in memory; a sketch of such a matching step follows Figure 2. Without the potential for a labeled memory that describes an object, there is no opportunity for knowledge of the object, the basis of knowledge in general.

Figure 2. The first panel is a drawing of the digit nine (9) while the next panel is the same digit as transformed by one rotation.
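
The probabilistic matching step described above can be sketched in a few lines. The code below is a hypothetical example, not the model of ref. [4]: a noisy, unlabeled representation is compared against stored object codes, and a softmax turns the similarities into match probabilities.

```python
import numpy as np

def match_probabilities(query, memory, temperature=0.5):
    """Probabilistic match of an unlabeled representation against
    object codes held in memory, via cosine similarity and softmax."""
    sims = memory @ query / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(query) + 1e-9)
    logits = sims / temperature
    p = np.exp(logits - logits.max())
    return p / p.sum()

rng = np.random.default_rng(1)
memory = rng.normal(size=(10, 32))             # 10 known objects, 32-d codes
query = memory[3] + 0.3 * rng.normal(size=32)  # noisy view of object 3
print(match_probabilities(query, memory).argmax())  # most probable: 3
```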

3. Models of General Cognition

3.1. Algorithmic Description of Cognition

Experts have investigated the question of whether there is an algorithm that explains brain computation.[18] They concluded that this is an unsolved problem, even though every natural process is inherently representable by a quantitative model. Information flow in the brain is the product of a non-linear dynamical system, a complex phenomenon analogous to the physics of fluid flow, whose complexity may exceed the limits of computational work. These systems are also high-dimensional and not representable by a simple mathematical description.[18][19] They recommend an empirical approach for disentangling systems that are intractable to theoretical science.

An artificial neural system, such as a deep learning architecture, has great potential for explaining elements of animal cognition. This is because artificial systems are built from parts and relationships that are known, whereas in nature the origin and history of the neural system are obscured by time and events, and an understanding of its structures and processes requires experimentation that is often confounded by error from many sources.

3.2. Encoding of Knowledge in Cognitive Systems

It is possible to hypothesize about a model of object representation in the brain and its artificial analog, such as the deep learning architectures. First, these cognitive systems are expected to encode objects by their parts, their elements.[3][4][5] Second, the process is expected to be stochastic, as occurs in natural processes.

A neural network system encodes information as weight values along the connections between nodes of the network and activation values at the nodes; a minimal sketch follows below. The brain is expected to function analogously at this level of information processing, since both systems are based on non-linear dynamical principles in a network and on distributed representations of objects.[5][18][20][21] The encoding schemes are expected to be abstract and non-deterministic, and it therefore follows that the coding is not generated by a top-down process.
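
The sketch below shows where the weight and activation values live in such a network; the layer sizes and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Weights live on the connections between layers; activations live at
# the nodes. An object is represented by the distributed pattern of
# activations across the network, not by any single node.
W1 = rng.normal(0, 0.1, (32, 16))  # input layer -> hidden layer
W2 = rng.normal(0, 0.1, (16, 4))   # hidden layer -> output layer

def forward(x):
    h = np.tanh(x @ W1)            # hidden activation values
    y = np.tanh(h @ W2)            # output activation values
    return h, y

hidden, output = forward(rng.normal(size=32))
```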

Moreover, a physical interpretation of cognition requires the matching of patterns for the generalization of knowledge. This is consistent with a view of cognition as a statistical machine that relies on sampling for robust information processing and output. With advances in deep learning methods, such as the invention of the transformer architecture,[5][22] it is possible to sample and search for exceedingly complex patterns in a sequence of information; the core attention operation is sketched below.
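
The core of the transformer is the scaled dot-product attention of ref. [22]; a minimal numpy version is sketched below, with illustrative sequence and embedding sizes.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention (Vaswani et al.[22]): each position
    queries the whole sequence and takes a weighted sample of it."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)   # attention weights per position
    return w @ V

rng = np.random.default_rng(3)
x = rng.normal(size=(8, 16))    # a sequence of 8 tokens, 16-d each
out = attention(x, x, x)        # self-attention over the sequence
```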

This sampling of the world depends on a sensory modality, such as vision or speech, and is complemented by sampling among modalities, which has the potential to increase robustness in information processing and in the internal representations.[23]

3.3. Future Directions in Cognitive Science

3.3.1. Dynamic Properties of Cognition

One question is whether animal cognition is as interpretable as a deep learning system. The question arises because of the difficulty of disentangling the mechanisms of the animal brain, whereas it is possible to record the changing states of an artificial system since its design is known. If the artificial system is analogous, then it is possible to gain insight into the mechanisms of animal cognition.[5][24] However, an assumption behind the analogy may not hold. For example, the mammalian brain is known to be highly dynamic, such as in the rates of sensory input and the downstream activation of internal representations.[18] These dynamic properties are not feasibly modeled in current deep learning systems, a constraint of hardware design and efficiency.[18] This is an impediment to the design of an artificial system that approximates animal cognition, although there are ideas for modeling these properties, such as an overlay architecture with “fast weights” that provides a form of true recursion in a neural network.[5][18]
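
A schematic of the fast-weights idea might look like the following: a slowly learned weight matrix is overlaid with a rapidly updated one that feeds back into the state. The decay and write rates are assumed values, and the sketch makes no claim to match any published architecture in detail.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 16
W = rng.normal(0, 0.1, (n, n))   # slow weights, shaped by learning
A = np.zeros((n, n))             # fast weights, rewritten every step
h = np.zeros(n)                  # current activation state

lam, eta = 0.95, 0.5             # decay and write rates (assumed values)

for t in range(50):
    x = rng.normal(0, 1, n)              # input at time t
    h = np.tanh(W @ h + x + A @ h)       # fast weights act on the state
    A = lam * A + eta * np.outer(h, h)   # Hebbian-style rapid overlay
```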

Artificial neural networks continue to scale in size and efficiency. This growth is accompanied by empirical approaches for exploring sources of error in these systems, which depend on a thorough understanding of the construction of the models. One avenue for increasing the robustness of the output is to combine many sources of sensory data, such as from the visual and language domains. Another approach is to establish unbiased measures of the reliability of output from the models. Likewise, information processing in animals is not free of bias; in human cognition, there is a well-documented bias in speech perception.[25]

3.3.2. Generalization of Knowledge

Another area of importance is the property of generalization in a model of cognition. This goal is approachable by processing the levels of representation of sensory input, the process that presumably underlies the ability of animals to generalize knowledge.[4][26][27] In an abstract context, this generalizability rests on the premise that information about the outside world is compressible, such as through repeatability in the patterns of sensory information, so that it is possible to classify objects and gain knowledge; the small example below illustrates this premise.
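
The compressibility premise can be illustrated directly: repeated patterns compress well, while patternless data do not. The example below uses general-purpose compression only as an analogy for the redundancy in sensory information.

```python
import os
import zlib

patterned = b"the quick brown fox " * 50  # repeated, compressible pattern
random_ish = os.urandom(len(patterned))   # patternless bytes, equal length

print(len(zlib.compress(patterned)))      # far smaller than 1000 bytes
print(len(zlib.compress(random_ish)))     # roughly the original length
```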

There is also the question of how to reuse knowledge outside the environment where it is learned, "being able to factorize knowledge into pieces which can easily be recombined in a sequence of computational steps, and being able to manipulate abstract variables, types, and instances".[5] Therefore, it is relevant to have a model of cognition that includes the high-level representations of the parts of a whole object, even for abstract objects. However, the dynamic and varied states of the internal representations are also likely contributors to the processes of reasoning.

3.3.3. Embodiment in Cognition

Lastly, there is uncertainty about the dependence of animal cognition on the outside world. This dependence has been characterized as the phenomenon of embodiment, so animal cognition is also an embodied cognition, even in the case where the outside world is a machine simulation.[18][28][29] This is in essence a property of a robotic and mechanical system, whose functions are fully dependent on input and output from the world. Although a natural system receives input, produces output, and learns at a time scale constrained by the physical world, an artificial system is not as constrained, as in reinforcement learning,[29][30][31] a method that can reconstruct sensorimotor function in animals.

DeepMind[29] developed artificial agents in a 3-dimensional space that learn in a continually changing world. The method employs deep reinforcement learning in conjunction with dynamic generation of environments, which leads to the uniqueness of each world. Each world has artificial agents that learn to handle tasks and receive rewards for the completion of objectives. An agent observes a pixel image of an environment along with a "text description of their goal".[29] Task experience is sufficiently generalizable that the agents are capable of adapting to tasks not yet known from prior knowledge. This reflects an animal that is embodied in a world and learns interactively by the performance of physical tasks; a schematic of this interaction loop follows. It is known that animals navigate and learn from the world around them, so the above approach is a thoughtful experiment in the virtual realm.
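
The loop below is schematic only: the `env` and `agent` interfaces are hypothetical stand-ins, not the actual DeepMind code. It shows the embodied cycle of observing pixels and a text goal, acting, receiving reward, and updating by reinforcement learning.

```python
def run_episode(env, agent, max_steps=1000):
    """Schematic agent-environment loop; env and agent are hypothetical."""
    obs = env.reset()                     # {"pixels": ..., "goal_text": ...}
    total_reward = 0.0
    for _ in range(max_steps):
        action = agent.act(obs["pixels"], obs["goal_text"])
        obs, reward, done = env.step(action)
        agent.learn(reward)               # update from the reward signal
        total_reward += reward
        if done:
            break
    return total_reward
```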

However, this approach remains fragile when faced with tasks outside its learned distribution.

4. Abstract Reasoning

4.1. Abstract Reasoning as a Cognitive Process

Abstract reasoning is often associated with a process of thought, but the elements of the process are ideally represented by physical processes only. This restriction constrains the explanations for the emergence of abstract reasoning, such as in the formation of new ideas. Moreover, a process of abstract reasoning may be compared against the more intuitive forms of cognition in vision and speech. Both of these sensory forms occur by sensory input to the neural system. Without the sensory input, the layers of the neural system are not expected to encode new information by some pathway, as expected in the recognition of visual objects. Therefore, it is expected that any information system depends on an external input for learning, an essential process for knowledge.

It follows that abstract reasoning is formed from an input source received by the neural systems. If there is no input relevant to a pathway of abstract reasoning, then the system is not expected to encode that pathway. This also raises the question of whether abstract reasoning comprises one or many pathways, and of the contribution of unrelated pathways, such as those from visual cognition. It is probable that there is no sharp division between abstract reasoning and the other types of reasoning, and there is likely more than one form of abstract reasoning, such as for solving puzzles reliant on the manipulation of visual objects.

Another hypothesis concerns whether these input sources derive mainly from internal representations. If true, then a model of abstract reasoning would involve the true forms of abstract objects, in contrast to the recognition of an object by reconstruction from sensory input.

Since abstract reasoning depends on an input source, there is an expectation that deep learning methods, which model the non-linear dynamics, are sufficient to model one or more of the pathways involved in abstract reasoning. This reasoning involves recognition of objects that are not necessarily sensory objects with definable properties and relationships. As with the training process for learning sensory objects, it is expected that there is a training process for learning the forms and properties of abstract objects. This class of problem is of interest since the universe of abstract objects is boundless, and their properties and interrelationships are not constrained by the physical world.

4.2. Models for Abstract Reasoning

A model of higher-level cognition includes abstract reasoning.[5] This is a pathway, or pathways, expected to learn the higher-level representations of sensory objects, such as from vision or speech, where the input is processed and generates a generalizable rule set. The set may include a single rule or a sequence of rules. One model is for the deep learning system to learn the rule set, as in the case of puzzles solvable by a logical operation.[32] This is likely the basis for a person playing a chess game: memorizing prior patterns of information and events on the game board, and thereby generalizing over the game system.

Similarly, another kind of visual puzzle is a Rubik's Cube, although in this case there is a known final state in which each face of the cube is a single, unique color. Likewise, if there is a detectable rule set, then there must be patterns of information, including missing information, that allow construction of a generalized rule set.

The pathway to a solution can include the repeated testing of potential rule sets against an intermediate or final state of the puzzle. This iterative process is approachable by a heuristic search algorithm,[5] as sketched below. However, these puzzles are typically low-dimensional compared to abstract verbal problems, such as in inductive reasoning. The acquisition of rule sets for verbal reasoning requires a search for patterns in a high-dimensional space. In either case of pattern searching, whether complex or simple, the process depends on the detection of patterns that represent sets of rules.
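
A minimal best-first search is sketched below, with a toy numeric puzzle standing in for the rule-testing process; the goal, moves, and heuristic are illustrative assumptions, not a model of any particular puzzle.

```python
import heapq

def heuristic_search(start, is_goal, neighbors, h):
    """Best-first search: repeatedly test candidate states, preferring
    those a heuristic h scores as closer to the goal."""
    frontier = [(h(start), start)]
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

# Toy usage: reach 20 from 3 using +1 / *2 moves (illustrative puzzle).
print(heuristic_search(
    3, lambda s: s == 20,
    lambda s: [s + 1, s * 2] if s <= 20 else [],
    lambda s: abs(20 - s)))
```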

It is simpler to imagine a logical operation as the pattern that offers a solution, but it is expected that inductive reasoning involves high-dimensional representations rather than an operator that combines boolean values. It is also probable that these representations are dynamic, so that it is possible to sample many valid representations.

4.3. Future Directions for Exploring Abstract Reasoning

4.3.1. Embodiment in a Virtual and Abstract World

While the phenomenon of embodiment refers to an occupant of our 3-dimensional world, this is not necessarily a complete model for reasoning about abstract concepts. However, it is plausible that at least some abstract concepts are approachable in a virtual 3-dimensional world; similarly, DeepMind[29] showed a solution to visual problems across a set of generated 3-dimensional worlds.

A population and a distribution of tasks are also elements of DeepMind's approach. They show that learning a task distribution leads to knowledge for solving tasks outside the prior task distribution.[29][30] This leads to potential generalizability in solving tasks, along with the promise that increasing the complexity of the worlds would further expand task knowledge.

However, the problem of abstract concepts extends beyond the conventional sensory representations as formed by cognition. Examples include visual puzzles with solutions that are abstract and require the association of patterns that extend beyond the visual realm, along with the symbolic representations from areas of mathematics.[33]

Combining these two approaches, it is possible to construct a world that is not a reflection of the 3-dimensional space inhabited by animals, but instead a virtual world of abstract objects and sets of tasks.[30] The visual and symbolic puzzles, such as chess and related boardgames,[31] are solvable by deep learning approaches, but the machine reasoning is not generalized across a space of abstract environments and objects. The question is whether the abstract patterns used to solve chess are also useful in solving other kinds of puzzles. It seems a valid hypothesis that there is at least overlap in the use of abstract reasoning between these visual puzzles and the synthesis of knowledge from other abstract objects and their interactions,[29] such as in solving problems by the use of mathematical symbols and their operators. Since we are capable of abstract thought, it is plausible that generating a distribution of general abstract tasks would lead to a working system for solving abstract problems in general.

If, instead of a dynamic generation of 3-dimensional worlds and objects, there is a vast and dynamic generation of abstract puzzles, then the deep reinforcement learning approach could train on solving these problems and acquire knowledge of these tasks.[29] The questions are whether the distribution of these tasks is generalizable to an unknown set of problems, those unrelated to the original task distribution, and whether the space of tasks is compressible.

4.3.2. Reinforcement Learning and Generalizability

Google Research showed that an unmodified reinforcement learning approach is not necessarily robust for acquiring knowledge of tasks outside the trained task distribution.[30] Therefore, they introduced an approach that incorporates a measurement of similarity among the worlds generated by the reinforcement learning procedure. This similarity measure is estimated by behavioral similarity, corresponding to the salient features by which an agent finds success in any given world. Given that these salient features are shared among worlds, the agents have a path for generalizing knowledge for success in worlds outside their learned experience. Procedurally, the salient features are acquired by a contrastive learning procedure, which embeds these values of behavioral similarity in the neural network itself;[34] a simplified form of such an objective is sketched below.
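
The sketch below is a simplified contrastive objective in the spirit of ref. [34], not their exact formulation: the embedding of a state is pulled toward a behaviorally similar state and pushed away from dissimilar ones. The embedding dimensions and temperature are illustrative assumptions.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """Pull the anchor toward a behaviorally similar state (positive)
    and away from dissimilar states (negatives)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    pos = np.exp(cos(anchor, positive) / temperature)
    neg = sum(np.exp(cos(anchor, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(5)
z = rng.normal(size=(8, 32))      # embeddings of 8 states (illustrative)
print(contrastive_loss(z[0], z[1], list(z[2:])))
```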

The above reinforcement learning approach depends on both a deep learning framework and an input data source. The source of input is typically a 2- or 3-dimensional environment in which an agent learns to accomplish tasks within the confines of the worlds and their rules.[29][30] One approach is to represent the salient features of the tasks and the worlds in a neural network. As Google Research showed,[30] the process required an additional step to extract the salient information for creating better models of the tasks and worlds. They found this method to be more robust for the generalization of tasks. Similarly, in animal cognition, it is expected that the salient features for generalizing a task are also stored in a neuronal network.

Therefore, a naive input of visual data from a 2-dimensional environment is not an efficient means of coding tasks that consistently generalize across environments. To capture the high-dimensional information in a set of related tasks, Google Research extended the reinforcement learning approach to better capture the task distribution,[30] and it may be possible to mimic this approach by similar methods. These task distributions provide structured data for representing the dynamics of tasks among worlds, and therefore generalize and encode the high-dimensional and dynamic features in a low-dimensional form.

It is difficult to imagine the relationship between two different environments. A game of checkers and a game of chess appear to be different game systems. Encoding the dynamics of each in a deep learning framework may show that they are related in an unintuitive, abstract way.[29] This concept was expressed in the above article:[30] short paths within a larger pathway may provide salient and generalizable features. In the case of boardgames, the salient features may not correspond to a naive perception of visual relatedness. Likewise, our natural form of abstract reasoning shows that we capture patterns in these boardgames, and these patterns are not entirely recognized as a single rule set at the level of our awareness, but are instead likely represented at a high-dimensional level in the neural network.

For emulating a reasoning process, extracting the salient features from a pixel image is a complex problem, and the pathway may involve many sources of error. Converting images to a low-dimensional form, particularly for the salient subtasks, allows for a greater expectation of generalization and repeatability in the patterns of objects and events. Where it is difficult to extract the salient features of a system, it may be possible to translate the objects and events in the system into a text-based description, a process that has been studied and introduced as a measure for interpretability. The error in translation to text descriptions is an observable phenomenon.

Lastly, since the advanced cognitive processes in animals involve a widespread use of dynamic representations, it is plausible that tasks are not merely generalizable but may originate in the varied sensory systems and in memory. The tasks would therefore be expressed in different sensory forms, although the lower-dimensional representations are the more generalizable, providing a better substrate for the recognition of patterns and an essential ingredient of the general process of abstract reasoning.

References

  1. Friedman, R. Cognition as a Mechanical Process. NeuroSci 2021, 2, 141-150.
  2. Vlastos, G. Parmenides’ theory of knowledge. In Transactions and Proceedings of the American Philological Association; The Johns Hopkins University Press: Baltimore, MD, USA, 1946; pp. 66–77.
  3. Chang, L.; Tsao, D.Y. The code for facial identity in the primate brain. Cell 2017, 169, 1013-1028.
  4. Hinton, G. How to represent part-whole hierarchies in a neural network. 2021, arXiv:2102.12627.
  5. Bengio, Y.; LeCun, Y.; Hinton G. Deep Learning for AI. Commun ACM 2021, 64, 58-65.
  6. Searle, J.R.; Willis, S. Intentionality: An essay in the philosophy of mind. Cambridge University Press, Cambridge, UK, 1983.
  7. Huxley, T.H. Evidence as to Man's Place in Nature. Williams and Norgate, London, UK, 1863.
  8. Haggard, P. Sense of agency in the human brain. Nat Rev Neurosci 2017, 18, 196-207.
  9. Ramón y Cajal, S. Textura del Sistema Nervioso del Hombre y de los Vertebrados. Nicolás Moya: Madrid, Spain, 1899.
  10. Kriegeskorte, N.; Kievit, R.A. Representational geometry: integrating cognition, computation, and the brain. Trends Cognit Sci 2013, 17, 401-412.
  11. Hinton, G.E. Connectionist learning procedures. Artif Intell 1989, 40, 185-234.
  12. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw 2015, 61, 85-117.
  13. Paley, W. Natural Theology: or, Evidences of the Existence and Attributes of the Deity, 12th ed., London, UK, 1809.
  14. Darwin, C. On the origin of species. John Murray, London, UK, 1859.
  15. Goyal, A.; Didolkar, A.; Ke, N.R.; Blundell, C.; Beaudoin, P.; Heess, N.; Mozer, M.; Bengio, Y. Neural Production Systems. 2021, arXiv:2103.01937.
  16. Scholkopf, B.; Locatello, F.; Bauer, S.; Ke, N.R.; Kalchbrenner, N.; Goyal, A.; Bengio, Y. Toward Causal Representation Learning. In Proceedings of the IEEE, 2021.
  17. Wallis, G.; Rolls, E.T. Invariant face and object recognition in the visual system. Prog Neurobiol 1997, 51, 167-194.
  18. Rina Panigrahy (Chair), Conceptual Understanding of Deep Learning Workshop. Conference and Panel Discussion at Google Research, May 17, 2021. Panelists: Blum, L; Gallant, J; Hinton, G; Liang, P; Yu, B.
  19. Gibbs, J.W. Elementary Principles in Statistical Mechanics. Charles Scribner's Sons, New York, NY, 1902.
  20. Griffiths, T.L.; Chater, N.; Kemp, C.; Perfors, A; Tenenbaum, J.B. Probabilistic models of cognition: Exploring representations and inductive biases. Trends in Cognitive Sciences 2010, 14, 357-364.
  21. Hinton, G.E.; McClelland, J.L.; Rumelhart, D.E. Distributed representations. In Parallel distributed processing: explorations in the microstructure of cognition; Rumelhart, D.E., McClelland, J.L., PDP research group, Eds., Bradford Books: Cambridge, Mass, 1986.
  22. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. 2017, arXiv:1706.03762.
  23. Hu, R.; Singh, A. UniT: Multimodal Multitask Learning with a Unified Transformer. 2021, arXiv:2102.10772.
  24. Chaabouni, R.; Kharitonov, E.; Dupoux, E.; Baroni, M. Communicating artificial neural networks develop efficient color-naming systems. Proceedings of the National Academy of Sciences 2021, 118.
  25. Petty, R.E.; Cacioppo, J.T. The elaboration likelihood model of persuasion. In Communication and Persuasion; Springer: New York, NY, 1986, pp. 1-24.
  26. Chase, W.G.; Simon, H.A. Perception in chess. Cognitive psychology 1973, 4, 55-81.
  27. Pang, R.; Lansdell, B.J.; Fairhall, A.L. Dimensionality reduction in neuroscience. Curr Biol 2016, 26, R656-R660.
  28. Deng, E.; Mutlu, B.; Mataric, M. Embodiment in socially interactive robots. 2019, arXiv:1912.00312.
  29. Open Ended Learning Team; Stooke, A.; Mahajan, A.; Barros, C.; Deck, C.; Bauer, J.; Sygnowski, J.; Trebacz, M.; Jaderberg, M.; Mathieu, M.; McAleese, N. Open-ended learning leads to generally capable agents. 2021, arXiv:2107.12808.
  30. Agarwal, R.; Machado, M.C.; Castro, P.S.; Bellemare, M.G. Contrastive behavioral similarity embeddings for generalization in reinforcement learning. 2021, arXiv:2101.05265.
  31. Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; Lillicrap, T.; Simonyan, K.; Hassabis, D. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 2018, 362, 1140-1144.
  32. Barrett, D.; Hill, F.; Santoro, A.; Morcos, A.; Lillicrap, T. Measuring abstract reasoning in neural networks. In International Conference on Machine Learning, PMLR, 2018.
  33. Schuster, T.; Kalyan, A.; Polozov, O.; Kalai, A.T. Programming Puzzles. 2021, arXiv:2106.05784.
  34. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, PMLR, 2020.