Concept learning, also known as category learning, concept attainment, and concept formation, is defined by Bruner, Goodnow, & Austin (1967) as "the search for and listing of attributes that can be used to distinguish exemplars from non-exemplars of various categories". More simply put, concepts are the mental categories that help us classify objects, events, or ideas, building on the understanding that each object, event, or idea has a set of common, relevant features. Concept learning is thus a strategy that requires a learner to compare and contrast groups or categories that contain concept-relevant features with groups or categories that do not. In a concept learning task, a human or machine learner is trained to classify objects by being shown a set of example objects along with their class labels. The learner simplifies what has been observed by condensing it into an example, and this simplified version of what has been learned is then applied to future examples. Concept learning may be simple or complex because learning takes place over many domains. When a concept is difficult, the learner is less able to simplify it and is therefore less likely to learn it. Colloquially, the task is known as learning from examples. Most theories of concept learning, however, are based on the storage of exemplars and avoid summarization or overt abstraction of any kind.
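Viewed computationally, learning from labeled examples can be sketched in a few lines. The attribute sets, helper names, and training data below are hypothetical illustrations, in the spirit of simple conjunctive-rule learners such as Find-S:

```python
# Hypothetical sketch: each object is an attribute set, and the learner
# condenses the positive exemplars into the attributes they all share
# (a conjunctive rule; negative exemplars are ignored, as in Find-S).

def learn_concept(examples):
    """examples: list of (attribute_set, is_member) pairs."""
    hypothesis = None
    for attributes, is_member in examples:
        if is_member:
            hypothesis = set(attributes) if hypothesis is None else hypothesis & attributes
    return hypothesis

def matches(hypothesis, attributes):
    """A new object fits the concept if it has every learned attribute."""
    return hypothesis is not None and hypothesis <= attributes

training = [
    ({"has_fur", "barks", "four_legs"}, True),
    ({"has_fur", "meows", "four_legs"}, False),
    ({"has_fur", "barks", "three_legs"}, True),
]
concept = learn_concept(training)  # {"has_fur", "barks"}
print(matches(concept, {"has_fur", "barks", "four_legs"}))  # True
```

The learned hypothesis is then applied to future examples, mirroring the simplification-and-transfer idea described above.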
Concept learning must be distinguished from learning by reciting something from memory (recall) or discriminating between two things that differ (discrimination). However, these issues are closely related, since memory recall of facts could be considered a "trivial" conceptual process where prior exemplars representing the concept are invariant. Similarly, while discrimination is not the same as initial concept learning, discrimination processes are involved in refining concepts by means of the repeated presentation of exemplars.
Concrete or Perceptual Concepts vs Abstract Concepts
Concrete concepts are those whose referents can be perceived directly through the senses. These are objects like chairs and dogs, with which personal interactions occur and from which a concept is formed.[1] A concept becomes more concrete as the word we associate with it refers to a perceivable entity.[2] According to Paivio's dual-coding theory, concrete concepts are remembered more easily because of their perceptual memory codes.[3] Evidence has shown that when words associated with concrete concepts are heard, the sensorimotor system re-enacts previous interactions with their referents.[4] Examples of concrete concepts in learning are early educational math concepts like adding and subtracting.
Abstract concepts are words and ideas that deal with emotions, personality traits, and events.[5] Terms like "fantasy" or "cold" are more abstract. Every person's definition of an abstract concept is personal and continually changing: "cold", for example, could mean the physical temperature of the surrounding area, or it could describe the actions and personality of another person. Because even concrete concepts retain some level of abstractness, concrete and abstract concepts can be placed on a continuum. Ideas like "chair" and "dog" have relatively clear-cut perceptual referents, whereas concepts like "cold" and "fantasy" are perceived in a more obscure way. Examples of abstract concept learning are topics like religion and ethics. Abstract-concept learning means judging novel stimuli according to a rule (e.g., identity, difference, oddity, greater than, addition, subtraction).[6] Abstract-concept learning has three criteria that rule out alternative explanations and establish the novelty of the stimuli. First, the transfer stimuli must be novel to the individual. Second, the transfer stimuli must not be repeated. Third, for a full abstract-learning experience, transfer performance must be measured against an equal amount of baseline performance.[6]
Binder, Westbury, McKiernan, Possing, and Medler (2005)[7] used fMRI to scan individuals' brains as they made lexical decisions on abstract and concrete concepts. Abstract concepts elicited greater activation in the left precentral gyrus, left inferior frontal gyrus and sulcus, and left superior temporal gyrus, whereas concrete concepts elicited greater activation in bilateral angular gyri, the right middle temporal gyrus, the left middle frontal gyrus, bilateral posterior cingulate gyri, and bilateral precunei.
In 1986 Allan Paivio[8] hypothesized the dual-coding theory, which states that both verbal and visual information are used to represent information. When thinking of the concept "dog", thoughts of both the word "dog" and an image of a dog occur. Dual-coding theory assumes that abstract concepts involve the verbal semantic system, while concrete concepts additionally engage the visual imagery system.
Defined (or Relational) and Associated Concepts
Relational and associated concepts are words, ideas, and thoughts that are connected in some form. Relational concepts are connected by a universal definition; common relational terms are up-down, left-right, and food-dinner. These ideas are learned in early childhood and are important for children to understand.[9] They are integral to understanding and reasoning in conservation tasks.[10] Relational terms that are verbs and prepositions strongly influence how objects are understood. These terms are more likely to create a broader understanding of the object, and they carry over to other languages.[11]
Associated concepts are connected by an individual's own past experience and perception. Associative concept learning (also called functional concept learning) involves categorizing stimuli into appropriate categories based on a common response or outcome, regardless of perceptual similarity.[12] Here thoughts and ideas are associated with other thoughts and ideas that are understood by only a few people, or by the individual alone. An example from elementary school is learning the compass directions North, East, South, and West. Teachers have used mnemonics such as "Never Eat Soggy Waffles" and "Never Eat Sour Worms", and students were able to create their own versions to help them learn the directions.[13]
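The grouping-by-outcome idea behind associative concept learning can be sketched as follows; the stimuli and outcomes are invented stand-ins, not data from the cited studies:

```python
# Hypothetical sketch of associative (functional) concept learning:
# stimuli that share an outcome are grouped together, regardless of
# any perceptual similarity between them.
from collections import defaultdict

def form_functional_categories(observations):
    """observations: list of (stimulus, outcome) pairs."""
    categories = defaultdict(list)
    for stimulus, outcome in observations:
        categories[outcome].append(stimulus)
    return dict(categories)

obs = [("bell", "food"), ("light", "food"), ("buzzer", "shock"), ("tone", "shock")]
print(form_functional_categories(obs))
# {'food': ['bell', 'light'], 'shock': ['buzzer', 'tone']}
```

A bell and a light look nothing alike, yet they end up in the same functional category because they predict the same outcome.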
Complex Concepts
Constructs such as a schema and a script are examples of complex concepts. A schema is an organization of smaller concepts (or features) that is revised by situational information to assist comprehension. A script, on the other hand, is a list of actions that a person follows to reach a desired goal. An example of a script is the process of buying a CD: several actions must occur before the actual act of purchase, and the script supplies the necessary actions in their proper order.
Discovery – Every baby discovers concepts for itself, such as discovering that each of its fingers can be individually controlled or that caregivers are individuals. Although this is perception-driven, formation of the concept is more than memorizing perceptions.
Examples – Supervised or unsupervised generalizing from examples may lead to learning a new concept, but concept formation is more than generalizing from examples.
Words – Hearing or reading new words leads to learning new concepts, but forming a new concept is more than learning a dictionary definition. A person may have previously formed a new concept before encountering the word or phrase for it.
Exemplar comparison and contrast – An efficient way to learn new categories and to induce new categorization rules is to compare a few example objects while being informed of their categorical relation. Comparing two exemplars known to belong to the same category allows the attributes shared by category members to be identified, and it exemplifies the variability within the category. Contrasting two exemplars known to belong to different categories, on the other hand, may allow attributes with diagnostic value to be identified. Within-category comparison and between-category contrast are not equally useful for category learning (Hammer et al., 2008), and the capacity to use these two forms of comparison-based learning changes during childhood (Hammer et al., 2009).
Invention – When prehistoric people who lacked tools used their fingernails to scrape food from killed animals or smashed melons, they noticed that a broken stone sometimes had a sharp edge like a fingernail and was therefore suitable for scraping food. Inventing a stone tool to avoid broken fingernails was a new concept.
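The comparison-and-contrast strategy described above has a direct set-theoretic reading. The feature sets below are hypothetical:

```python
# Hypothetical sketch: shared attributes of same-category exemplars are
# candidate category features; attributes that differ between exemplars
# of different categories are candidate diagnostic features.

def compare_within(exemplar_a, exemplar_b):
    """Both exemplars are known to belong to the same category."""
    return exemplar_a & exemplar_b

def contrast_between(exemplar_a, exemplar_b):
    """The exemplars are known to belong to different categories."""
    return exemplar_a ^ exemplar_b

robin = {"feathers", "flies", "small"}
sparrow = {"feathers", "flies", "brown"}
bat = {"flies", "fur", "small"}

print(sorted(compare_within(robin, sparrow)))  # ['feathers', 'flies']
print(sorted(contrast_between(robin, bat)))    # ['feathers', 'fur']
```

Note that the shared attribute "flies" drops out of the between-category contrast: it is true of both categories and so has no diagnostic value.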
In general, the theoretical issues underlying concept learning are those underlying induction. These issues are addressed in many diverse publications, including literature on subjects like Version Spaces, Statistical Learning Theory, PAC Learning, Information Theory, and Algorithmic Information Theory. Some of the broad theoretical ideas are also discussed by Watanabe (1969,1985), Solomonoff (1964a,1964b), and Rendell (1986); see the reference list below.
It is difficult to make general statements about human (or animal) concept learning without already assuming a particular psychological theory of concept learning. Although the classical views of concepts and concept learning in philosophy speak of a process of abstraction, data compression, simplification, and summarization, currently popular psychological theories of concept learning diverge on all these basic points. The history of psychology has seen the rise and fall of many theories of concept learning. Classical conditioning (as defined by Pavlov) created the earliest experimental technique. Reinforcement learning, as described by Watson and elaborated by Clark Hull, created a lasting paradigm in behavioral psychology. Cognitive psychology emphasized a computer and information-flow metaphor for concept formation. Neural network models of concept formation and of the structure of knowledge have given rise to powerful hierarchical models of knowledge organization such as George Miller's WordNet. Neural networks are based on computational models of learning using factor analysis or convolution, and they also connect to neuroscience and psychophysiological models of learning following Karl Lashley and Donald Hebb.
Rule-based theories of concept learning began with cognitive psychology and with early computer models of learning that could be implemented in a high-level computer language using computational statements such as if-then production rules. Rule-based models take classification data and a rule-based theory as input and aim to produce a more accurate model of the data (Hekenaho 1997). The majority of rule-based models developed so far are heuristic: no rational analysis has been provided, and the models are not related to statistical approaches to induction. A rational analysis of rule-based models could presume that concepts are represented as rules, and would then ask what degree of belief a rational agent should assign to each rule, given some observed examples (Goodman, Griffiths, Feldman, and Tenenbaum). Rule-based theories of concept learning focus more on perceptual learning and less on definition learning. Rules can be used in learning when the stimuli are confusable, as opposed to simple. When rules are used in learning, decisions are made on the basis of properties alone and rely on simple criteria that do not demand much memory (Rouder and Ratcliff, 2006).
Example of rule-based theory:
"A radiologist using rule-based categorization would observe whether specific properties of an X-ray image meet certain criteria; for example, is there an extreme difference in brightness in a suspicious region relative to other regions? A decision is then based on this property alone." (see Rouder and Ratcliff 2006)
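Rouder and Ratcliff's radiologist example reduces to a single-property criterion. A minimal sketch, with an arbitrary threshold chosen purely for illustration:

```python
# Hypothetical sketch of rule-based categorization: the decision rests on
# one property meeting a criterion, with no similarity computation.
SUSPICIOUS_DIFFERENCE = 0.4  # assumed threshold, arbitrary brightness units

def classify_region(region_brightness, surround_brightness):
    if abs(region_brightness - surround_brightness) > SUSPICIOUS_DIFFERENCE:
        return "suspicious"
    return "normal"

print(classify_region(0.9, 0.3))  # suspicious
print(classify_region(0.5, 0.4))  # normal
```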
The prototype view of concept learning holds that people abstract the central tendency (or prototype) of the examples they experience and use it as the basis for their categorization decisions. Categories are organized around one or more central examples, surrounded by a penumbra of decreasingly typical examples. This implies that people do not categorize by checking a list of features against a definition, but by graded semantic similarity to the central example(s).
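A minimal prototype model can be sketched as follows; the two-feature bird/fish vectors are invented for illustration:

```python
# Hypothetical sketch of a prototype model: the prototype is the mean of a
# category's exemplars on each feature dimension, and a new item goes to
# the category whose prototype is nearest.
import math

def prototype(exemplars):
    dims = len(exemplars[0])
    return [sum(e[d] for e in exemplars) / len(exemplars) for d in range(dims)]

def categorize(item, prototypes):
    return min(prototypes, key=lambda name: math.dist(item, prototypes[name]))

categories = {
    "bird": [(0.9, 0.8), (0.8, 0.9)],  # features: (has_feathers, flies), graded
    "fish": [(0.1, 0.0), (0.0, 0.1)],
}
protos = {name: prototype(ex) for name, ex in categories.items()}
print(categorize((0.7, 0.9), protos))  # bird
```

Once the prototypes are formed, the individual exemplars can be discarded, which is exactly where this view parts company with exemplar theory.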
Exemplar theory holds that categories are represented by stored specific instances (exemplars), with new objects evaluated only with respect to how closely they resemble specific known members (and nonmembers) of the category. The theory hypothesizes that learners store examples verbatim, and it views concept learning as correspondingly simple: only individual properties are represented, and these properties are not abstracted into rules. Under exemplar theory, "water is wet" simply means that some (or one, or all) stored examples of water have the property wet. Exemplar-based theories have become more empirically popular over the years, with some evidence suggesting that human learners use exemplar-based strategies only in early learning, forming prototypes and generalizations later in life. An important result of exemplar models in the psychology literature has been a de-emphasis of complexity in concept learning. One of the best-known exemplar theories of concept learning is the Generalized Context Model (GCM).
A problem with exemplar theory is that exemplar models depend critically on two measures: the similarity between exemplars and a rule for determining group membership. These measures are sometimes difficult to obtain or to distinguish.
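The GCM mentioned above can be sketched in a few lines. The exemplar coordinates and the sensitivity parameter c are illustrative assumptions:

```python
# Hypothetical sketch of the Generalized Context Model: similarity to each
# stored exemplar decays exponentially with distance, and a category's
# probability is its summed similarity over the total similarity.
import math

def gcm_probability(item, exemplars_by_category, c=2.0):
    similarity = {
        cat: sum(math.exp(-c * math.dist(item, ex)) for ex in exemplars)
        for cat, exemplars in exemplars_by_category.items()
    }
    total = sum(similarity.values())
    return {cat: s / total for cat, s in similarity.items()}

exemplars = {
    "A": [(0.1, 0.2), (0.2, 0.1)],
    "B": [(0.8, 0.9), (0.9, 0.8)],
}
probs = gcm_probability((0.15, 0.15), exemplars)
print(max(probs, key=probs.get))  # A
```

Note that every stored exemplar contributes to the decision; nothing is abstracted away, which is the defining commitment of exemplar models.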
More recently, cognitive psychologists have begun to explore the idea that the prototype and exemplar models form two extremes. It has been suggested that people are able to form a multiple prototype representation, besides the two extreme representations. For example, consider the category 'spoon'. There are two distinct subgroups or conceptual clusters: spoons tend to be either large and wooden, or small and made of metal. The prototypical spoon would then be a medium-size object made of a mixture of metal and wood, which is clearly an unrealistic proposal. A more natural representation of the category 'spoon' would instead consist of multiple (at least two) prototypes, one for each cluster. A number of different proposals have been made in this regard (Anderson, 1991; Griffiths, Canini, Sanborn & Navarro, 2007; Love, Medin & Gureckis, 2004; Vanpaemel & Storms, 2008). These models can be regarded as providing a compromise between exemplar and prototype models.
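The spoon example can be made concrete with a multiple-prototype sketch; the (size, wooden-ness) feature values below are invented:

```python
# Hypothetical sketch of a multiple-prototype representation: one prototype
# per conceptual cluster, so the category never needs the unrealistic
# averaged "medium-size, half-wood, half-metal" prototype.
import math

spoon_prototypes = {
    "large wooden spoon": (0.9, 0.9),  # features: (size, wooden-ness)
    "small metal spoon": (0.2, 0.1),
}

def atypicality(item, prototypes):
    """Distance to the NEAREST cluster prototype (smaller = more typical)."""
    return min(math.dist(item, p) for p in prototypes.values())

wooden_ladle = (0.85, 0.95)    # close to the wooden cluster
averaged_object = (0.55, 0.5)  # the single-prototype 'compromise' object
print(atypicality(wooden_ladle, spoon_prototypes) <
      atypicality(averaged_object, spoon_prototypes))  # True
```

The large wooden item comes out as more typical than the averaged "compromise" object, which is the intuition the multiple-prototype proposals aim to capture.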
The basic idea of explanation-based learning is that a new concept is acquired by experiencing examples of it and forming a basic outline. Put simply, by observing or receiving the qualities of a thing, the mind forms a concept which possesses, and is identified by, those qualities.
The original theory, proposed by Mitchell, Keller, and Kedar-Cabelli in 1986 and called explanation-based generalization, holds that learning occurs through progressive generalization. The theory was first developed to program machines to learn. Applied to human cognition, it translates as follows: the mind actively separates information that applies to more than one thing and enters it into a broader description of a category of things. This is done by identifying sufficient conditions for something to fit in a category, similar to schematizing.
The revised model revolves around the integration of four mental processes: generalization, chunking, operationalization, and analogy.
This particular theory of concept learning is relatively new and more research is being conducted to test it.
Taking a mathematical approach to concept learning, Bayesian theories propose that the human mind assigns probabilities to candidate concept definitions, based on the examples it has seen of the concept.[14] The Bayesian notion of prior probability prevents learners' hypotheses from being overly specific, while the likelihood of a hypothesis ensures that the definition is not too broad.
For example, say a child is shown three horses by a parent and told they are called "horses"; she needs to work out exactly what the adult means by the word. She is much more likely to define "horses" as referring either to this type of animal or to all animals, rather than to an oddly specific set such as "all horses except Clydesdales", which would be an unnatural concept. Meanwhile, the likelihood of "horses" meaning "all animals" when the three animals shown are all very similar is low. The hypothesis that the word refers to all animals of this species is the most probable of the three definitions, since it has both a reasonable prior probability and a high likelihood given the examples.
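The horse example follows Bayes' rule directly. The hypothesis sizes and priors below are invented for illustration, and the likelihood of n consistent examples under a hypothesis covering |h| objects is taken as (1/|h|)**n, the "size principle" from Bayesian models of generalization:

```python
# Hypothetical sketch of Bayesian concept learning for the "horses" example.
hypotheses = {
    # name: (prior probability, number of objects the concept covers)
    "all horses": (0.45, 100),
    "all animals": (0.45, 10_000),
    "all horses except Clydesdales": (0.10, 95),  # unnatural: low prior
}
n_examples = 3  # three horses shown, all consistent with every hypothesis

posterior = {
    name: prior * (1 / size) ** n_examples
    for name, (prior, size) in hypotheses.items()
}
total = sum(posterior.values())
posterior = {name: p / total for name, p in posterior.items()}
print(max(posterior, key=posterior.get))  # all horses
```

The broad hypothesis is penalized by its low likelihood, the gerrymandered one by its low prior, so "all horses" wins, matching the reasoning above.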
Bayes' theorem is important because it provides a powerful tool for understanding, manipulating, and controlling data, and it takes a larger view that is not limited to data analysis alone. The approach is subjective, requiring the assessment of prior probabilities, which also makes it very complex. However, if Bayesians can show that the accumulated evidence and the application of Bayes' law are sufficient, the work overcomes the subjectivity of the inputs involved. Bayesian inference can be used with any honestly collected data and has a major advantage because of its scientific focus.
One model that incorporates the Bayesian theory of concept learning is the ACT-R model, developed by John R. Anderson. ACT-R is a programming language that defines basic cognitive and perceptual operations of the human mind and produces step-by-step simulations of human behavior. The theory exploits the idea that every task humans perform consists of a series of discrete operations. The model has been applied to learning and memory, higher-level cognition, natural language, perception and attention, human-computer interaction, education, and computer-generated forces.
In addition to John R. Anderson, Joshua Tenenbaum has contributed to the field of concept learning; he has studied the computational basis of human learning and inference using behavioral testing of adults, children, and machines, drawing on Bayesian statistics and probability theory as well as geometry, graph theory, and linear algebra. Tenenbaum is working toward a better understanding of human learning in computational terms and toward building computational systems that come closer to the capacities of human learners.
M. D. Merrill's component display theory (CDT) is a cognitive matrix that focuses on the interaction between two dimensions: the level of performance expected from the learner and the type of content of the material to be learned. Merrill classifies the learner's level of performance as find, use, or remember, and the material content as facts, concepts, procedures, or principles. The theory also calls upon four primary presentation forms and several secondary presentation forms. The primary presentation forms are rules, examples, recall, and practice; the secondary presentation forms include prerequisites, objectives, helps, mnemonics, and feedback. A complete lesson combines primary and secondary presentation forms, but the most effective combination varies from learner to learner and from concept to concept. Another significant aspect of the CDT model is that it allows the learner to control the instructional strategies used and adapt them to his or her own learning style and preference. A major goal of this model was to reduce three common errors in concept formation: over-generalization, under-generalization, and misconception.