Cognition

Higher Cognition: A Mechanical Perspective

Cognition is the acquisition of knowledge by the mechanical process of information flow in a system. In animal cognition, input is received through the sensory modalities, and output may occur as a motor response or another kind of response. The sensory information is internally transformed into a set of representations, which forms the basis for downstream cognitive processing. This is in contrast to the traditional definition based on mental processes, a phenomenon of the mind originating from the ideas of metaphysical philosophy.

  • animal cognition
  • cognitive processes
  • physical processes
  • mental processes
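As an illustration of this information-flow view, the minimal Python sketch below traces a sensory input through an internal representation to an output response, mirroring the two stages described above: an encoding step that transforms raw sensory input into a representation, and a downstream step that maps that representation to a response. All names, dimensions, and weights are hypothetical stand-ins chosen for illustration, not a model drawn from the literature.

    import numpy as np

    # A minimal sketch of cognition as information flow (illustrative only):
    # sensory input -> internal representation -> motor/output response.
    rng = np.random.default_rng(0)

    # Hypothetical dimensions: a 16-value sensory signal, an 8-value
    # internal representation, and a 4-value motor command.
    W_encode = rng.normal(size=(8, 16))   # sensory input -> representation
    W_decode = rng.normal(size=(4, 8))    # representation -> motor output

    def perceive(stimulus):
        """Transform raw sensory input into an internal representation."""
        return np.tanh(W_encode @ stimulus)

    def respond(representation):
        """Map the internal representation to an output (motor) response."""
        return W_decode @ representation

    stimulus = rng.normal(size=16)        # stand-in for a sensory signal
    representation = perceive(stimulus)   # the internal transformation
    output = respond(representation)      # downstream processing yields a response
    print(output.shape)                   # (4,)

In this sketch, learning would amount to adjusting the two weight matrices from experience; the point here is only the directional flow from input, through representation, to output.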
