Please note this is a comparison between Version 42 by Robert Friedman and Version 78 by Robert Friedman.
Higher Cognition: A Mechanical Perspective
Cognition is the acquisition of knowledge by the mechanical process of information flow in a system. In animal cognition, input is received by the sensory modalities and output may occur as a motor or other response. The sensory information is internally transformed into a set of representations, which is the basis for downstream cognitive processing. This contrasts with the traditional definition, which is based on mental processes, a phenomenon of the mind originating in past ideas from metaphysical philosophy.
animal cognition
cognitive processes
physical processes
mental processes
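The information-flow view above (sensory input transformed into an internal representation, which then drives a response) can be illustrated with a minimal sketch. The functions, feature weights, and threshold rule here are illustrative assumptions for exposition, not a model taken from the entry itself.

```python
# A toy pipeline for cognition as information flow:
# stimulus -> sensory encoding -> internal representation -> response.

def sense(stimulus):
    """Encode a raw stimulus (hypothetical intensity readings) as features clamped to [0, 1]."""
    return [min(1.0, max(0.0, x)) for x in stimulus]

def represent(features, weights):
    """Transform sensory features into a single internal representation (weighted sum)."""
    return sum(f * w for f, w in zip(features, weights))

def respond(representation, threshold=0.5):
    """Produce a motor-like output from the internal representation."""
    return "approach" if representation > threshold else "withdraw"

stimulus = [0.9, 0.2, 0.7]
weights = [0.5, 0.1, 0.4]   # assumed salience of each feature
state = represent(sense(stimulus), weights)
print(respond(state))       # -> approach (0.45 + 0.02 + 0.28 = 0.75 > 0.5)
```

The point of the sketch is only that each stage is a mechanical transformation of the previous one; no appeal to an unobservable mental process is needed to get from input to output.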
References
Merriam-Webster Dictionary (an Encyclopedia Britannica Company: Chicago, IL, USA). Available online: https://www.merriam-webster.com/dictionary/cognition (accessed on 27 July 2022).
Cambridge Dictionary (Cambridge University Press: Cambridge, UK). Available online: https://dictionary.cambridge.org/us/dictionary/english/cognition (accessed on 27 July 2022).
Friedman, R. Cognition as a Mechanical Process. NeuroSci 2021, 2, 141–150.
Vlastos, G. Parmenides’ Theory of Knowledge. In Transactions and Proceedings of the American Philological Association; The Johns Hopkins University Press: Baltimore, MD, USA, 1946; pp. 66–77.
Chang, L.; Tsao, D.Y. The code for facial identity in the primate brain. Cell 2017, 169, 1013–1028.
Hinton, G. How to represent part-whole hierarchies in a neural network. arXiv 2021, arXiv:2102.12627.
Bengio, Y.; LeCun, Y.; Hinton, G. Deep Learning for AI. Commun. ACM 2021, 64, 58–65.
Streng, M.L.; Popa, L.S.; Ebner, T.J. Modulation of sensory prediction error in Purkinje cells during visual feedback manipulations. Nat. Commun. 2018, 9, 1099.
Popa, L.S.; Ebner, T.J. Cerebellum, Predictions and Errors. Front. Cell. Neurosci. 2019, 12, 524.
Searle, J.R.; Willis, S. Intentionality: An Essay in the Philosophy of Mind; Cambridge University Press: Cambridge, UK, 1983.
Huxley, T.H. Evidence as to Man’s Place in Nature; Williams and Norgate: London, UK, 1863.
Haggard, P. Sense of agency in the human brain. Nat. Rev. Neurosci. 2017, 18, 196–207.
Ramón y Cajal, S. Textura del Sistema Nervioso del Hombre y de los Vertebrados; Nicolás Moya: Madrid, Spain, 1899.
Kriegeskorte, N.; Kievit, R.A. Representational geometry: Integrating cognition, computation, and the brain. Trends Cogn. Sci. 2013, 17, 401–412.
Hinton, G.E. Connectionist learning procedures. Artif. Intell. 1989, 40, 185–234.
Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
Descartes, R. Meditations on First Philosophy; Moriarty, M., Translator; Oxford University Press: Oxford, UK, 2008.
Friedman, R. Themes of advanced information processing in the primate brain. AIMS Neurosci. 2020, 7, 373.
Prasad, S.; Galetta, S.L. Anatomy and physiology of the afferent visual system. In Handbook of Clinical Neurology; Kennard, C., Leigh, R.J., Eds.; Elsevier: Amsterdam, The Netherlands, 2011; pp. 3–19.
Paley, W. Natural Theology: Or, Evidences of the Existence and Attributes of the Deity, 12th ed.; R. Faulder: London, UK, 1809.
Darwin, C. On the Origin of Species; John Murray: London, UK, 1859.
De Sousa, A.A.; Proulx, M.J. What can volumes reveal about human brain evolution? A framework for bridging behavioral, histometric, and volumetric perspectives. Front. Neuroanat. 2014, 8, 51.
Slobodkin, L.B.; Rapoport, A. An optimal strategy of evolution. Q. Rev. Biol. 1974, 49, 181–200.
Goyal, A.; Didolkar, A.; Ke, N.R.; Blundell, C.; Beaudoin, P.; Heess, N.; Mozer, M.; Bengio, Y. Neural Production Systems. arXiv 2021, arXiv:2103.01937.
Scholkopf, B.; Locatello, F.; Bauer, S.; Ke, N.R.; Kalchbrenner, N.; Goyal, A.; Bengio, Y. Toward Causal Representation Learning. Proc. IEEE 2021, 109, 612–634.
Wallis, G.; Rolls, E.T. Invariant face and object recognition in the visual system. Prog. Neurobiol. 1997, 51, 167–194.
Friedman, R. A Perspective on Information Optimality in a Neural Circuit and Other Biological Systems. Signals 2022, 3, 25.
Rina Panigrahy (Chair), Conceptual Understanding of Deep Learning Workshop. Conference and Panel Discussion at Google Research, 17 May 2021. Panelists: Blum, L., Gallant, J., Hinton, G., Liang, P., Yu, B. Available online: https://sites.google.com/view/conceptualdlworkshop/home (accessed on 17 May 2021).
Gibbs, J.W. Elementary Principles in Statistical Mechanics; Charles Scribner’s Sons: New York, NY, USA, 1902.
Schmidhuber, J. Making the World Differentiable: On Using Self-Supervised Fully Recurrent Neural Networks for Dynamic Reinforcement Learning and Planning in Non-Stationary Environments; Technical Report FKI-126-90; Technical University of Munich: Munich, Germany, 1990.
Griffiths, T.L.; Chater, N.; Kemp, C.; Perfors, A.; Tenenbaum, J.B. Probabilistic models of cognition: Exploring representations and inductive biases. Trends Cogn. Sci. 2010, 14, 357–364.
Hinton, G.E.; McClelland, J.L.; Rumelhart, D.E. Distributed representations. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition; Rumelhart, D.E., McClelland, J.L., PDP Research Group, Eds.; Bradford Books: Cambridge, MA, USA, 1986.
Friston, K. The history of the future of the Bayesian brain. NeuroImage 2012, 62, 1230–1233.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762.
Phuong, M.; Hutter, M. Formal Algorithms for Transformers. arXiv 2022, arXiv:2207.09238.
Chen, T.; Saxena, S.; Li, L.; Fleet, D.J.; Hinton, G. Pix2seq: A language modeling framework for object detection. arXiv 2021, arXiv:2109.10852.
Hu, R.; Singh, A. UniT: Multimodal Multitask Learning with a Unified Transformer. arXiv 2021, arXiv:2102.10772.
Xu, Y.; Zhu, C.; Wang, S.; Sun, S.; Cheng, H.; Liu, X.; Gao, J.; He, P.; Zeng, M.; Huang, X. Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention. arXiv 2021, arXiv:2112.03254.
Chaabouni, R.; Kharitonov, E.; Dupoux, E.; Baroni, M. Communicating artificial neural networks develop efficient color-naming systems. Proc. Natl. Acad. Sci. USA 2021, 118, e2016569118.
Irie, K.; Schlag, I.; Csordás, R.; Schmidhuber, J. A Modern Self-Referential Weight Matrix That Learns to Modify Itself. arXiv 2022, arXiv:2202.05780.
Schlag, I.; Irie, K.; Schmidhuber, J. Linear transformers are secretly fast weight programmers. In Proceedings of the International Conference on Machine Learning, PMLR 139, Virtual, 24 July 2021; pp. 9355–9366.
Petty, R.E.; Cacioppo, J.T. The elaboration likelihood model of persuasion. In Communication and Persuasion; Springer: New York, NY, USA, 1986; pp. 1–24.
Mittal, S.; Bengio, Y.; Lajoie, G. Is a Modular Architecture Enough? arXiv 2022, arXiv:2206.02713.
Ha, D.; Tang, Y. Collective Intelligence for Deep Learning: A Survey of Recent Developments. arXiv 2021, arXiv:2111.14377.
Mustafa, B.; Riquelme, C.; Puigcerver, J.; Jenatton, R.; Houlsby, N. Multimodal Contrastive Learning with LIMoE: The Language-Image Mixture of Experts. arXiv 2022, arXiv:2206.02770.
Deng, E.; Mutlu, B.; Mataric, M. Embodiment in socially interactive robots. arXiv 2019, arXiv:1912.00312.
Open-Ended Learning Team; Stooke, A.; Mahajan, A.; Barros, C.; Deck, C.; Bauer, J.; Sygnowski, J.; Trebacz, M.; Jaderberg, M.; Mathieu, M.; et al. Open-ended learning leads to generally capable agents. arXiv 2021, arXiv:2107.12808.
Agarwal, R.; Machado, M.C.; Castro, P.S.; Bellemare, M.G. Contrastive behavioral similarity embeddings for generalization in reinforcement learning. arXiv 2021, arXiv:2101.05265.
Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 2018, 362, 1140–1144.
Barrett, D.; Hill, F.; Santoro, A.; Morcos, A.; Lillicrap, T. Measuring abstract reasoning in neural networks. In Proceedings of the International Conference on Machine Learning, PMLR 80, Stockholm, Sweden, 15 July 2018.
Drori, I.; Zhang, S.; Shuttleworth, R.; Tang, L.; Lu, A.; Ke, E.; Liu, K.; Chen, L.; Tran, S.; Cheng, N.; et al. A Neural Network Solves, Explains, and Generates University Math Problems by Program Synthesis and Few-Shot Learning at Human Level. arXiv 2021, arXiv:2112.15594.
Reed, S.; Zolna, K.; Parisotto, E.; Colmenarejo, S.G.; Novikov, A.; Barth-Maron, G.; Gimenez, M.; Sulsky, Y.; Kay, J.; Springenberg, J.T.; et al. A Generalist Agent. arXiv 2022, arXiv:2205.06175.
Guo, Z.D.; Thakoor, S.; Pîslar, M.; Pires, B.A.; Altche, F.; Tallec, C.; Saade, A.; Calandriello, D.; Grill, J.; Tang, Y.; et al. BYOL-Explore: Exploration by Bootstrapped Prediction. arXiv 2022, arXiv:2206.08332.
Baker, B.; Akkaya, I.; Zhokhov, P.; Huizinga, J.; Tang, J.; Ecoffet, A.; Houghton, B.; Sampedro, R.; Clune, J. Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos. arXiv 2022, arXiv:2206.11795.
Traniello, I.M.; Chen, Z.; Bagchi, V.A.; Robinson, G.E. Valence of social information is encoded in different subpopulations of mushroom body Kenyon cells in the honeybee brain. Proc. R. Soc. B 2019, 286, 20190901.
Bickle, J. The first two decades of CREB-memory research: Data for philosophy of neuroscience. AIMS Neurosci. 2021, 8, 322.
Piller, C. Blots on a field? Science 2022, 377, 358–363.