In semantics, semiotics, philosophy of language, metaphysics, and metasemantics, meaning "is a relationship between two sorts of things: signs and the kinds of things they intend, express, or signify". The types of meaning vary according to the type of thing being represented. The major contemporary positions come under several partial definitions of meaning, surveyed below.
The five most prevalent substantive theories of meaning and truth are presented below. Each addresses the question of what constitutes a proper basis for deciding how words, symbols, ideas and beliefs may properly be considered to denote meaning, whether for a single person or an entire society. Each theory of meaning, as evaluated by its respective theory of truth, has been further developed by the scholars who support it.
Both hybrid theories of meaning and alternative theories of meaning and truth have also been researched, and are subject to further assessment according to their respective and relative merits.
Correspondence theories emphasise that true beliefs and true statements of meaning correspond to the actual state of affairs and that associated meanings must be in agreement with these beliefs and statements. This type of theory stresses a relationship between thoughts or statements on one hand, and things or objects on the other. It is a traditional model tracing its origins to Ancient Greek philosophers such as Socrates, Plato, and Aristotle. This class of theories holds that the truth or the falsity of a representation is determined in principle entirely by how it relates to "things", by whether it accurately describes those "things". An example of correspondence theory is the statement by the thirteenth-century philosopher and theologian Thomas Aquinas: Veritas est adaequatio rei et intellectus ("Truth is the equation [or adequation] of things and intellect"), a statement which Aquinas attributed to the ninth-century neoplatonist Isaac Israeli. Aquinas also restated the theory as: "A judgment is said to be true when it conforms to the external reality".
Correspondence theory centres on the assumption that truth and meaning are a matter of accurately copying what is known as "objective reality" and then representing it in thoughts, words and other symbols. Many modern theorists have stated that this ideal cannot be achieved without analysing additional factors. For example, language plays a role in that all languages have words to represent concepts that are virtually undefined in other languages. The German word Zeitgeist is one such example: one who speaks or understands the language may "know" what it means, but any translation of the word apparently fails to accurately capture its full meaning (this is a problem with many abstract words, especially those derived in agglutinative languages). Thus, some words add an additional parameter to the construction of an accurate truth predicate. Among the philosophers who grappled with this problem is Alfred Tarski, whose semantic theory is summarized further below in this article.
For coherence theories in general, the assessment of meaning and truth requires a proper fit of elements within a whole system. Very often, though, coherence is taken to imply something more than simple logical consistency; often there is a demand that the propositions in a coherent system lend mutual inferential support to each other. So, for example, the completeness and comprehensiveness of the underlying set of concepts is a critical factor in judging the validity and usefulness of a coherent system. A pervasive tenet of coherence theories is the idea that truth is primarily a property of whole systems of propositions, and can be ascribed to individual propositions only according to their coherence with the whole. Among the assortment of perspectives commonly regarded as coherence theory, theorists differ on the question of whether coherence entails many possible true systems of thought or only a single absolute system.
Some variants of coherence theory are claimed to describe the essential and intrinsic properties of formal systems in logic and mathematics. However, formal reasoners are content to contemplate axiomatically independent and sometimes mutually contradictory systems side by side, for example, the various alternative geometries. On the whole, coherence theories have been rejected for lacking justification in their application to other areas of truth, especially with respect to assertions about the natural world, empirical data in general, and assertions about practical matters of psychology and society, particularly when used without support from the other major theories of truth.
Coherence theories distinguish the thought of rationalist philosophers, particularly of Spinoza, Leibniz, and G.W.F. Hegel, along with the British philosopher F.H. Bradley. Other alternatives may be found among several proponents of logical positivism, notably Otto Neurath and Carl Hempel.
Social constructivism holds that meaning and truth are constructed by social processes, are historically and culturally specific, and are in part shaped through power struggles within a community. Constructivism views all of our knowledge as "constructed", because it does not reflect any external "transcendent" realities (as a pure correspondence theory might hold). Rather, perceptions of truth are viewed as contingent on convention, human perception, and social experience. It is believed by constructivists that representations of physical and biological reality, including race, sexuality, and gender, are socially constructed.
Giambattista Vico was among the first to claim that history and culture, along with their meaning, are human products. Vico's epistemological orientation gathers the most diverse rays and unfolds in one axiom – verum ipsum factum – "truth itself is constructed". Hegel and Marx were among the other early proponents of the premise that truth is, or can be, socially constructed. Marx, like many critical theorists who followed, did not reject the existence of objective truth but rather distinguished between true knowledge and knowledge that has been distorted through power or ideology. For Marx, scientific and true knowledge is "in accordance with the dialectical understanding of history" and ideological knowledge is "an epiphenomenal expression of the relation of material forces in a given economic arrangement".
Consensus theory holds that meaning and truth are whatever is agreed upon—or, in some versions, might come to be agreed upon—by some specified group. Such a group might include all human beings, or a subset thereof consisting of more than one person.
Among the current advocates of consensus theory as a useful accounting of the concept of "truth" is the philosopher Jürgen Habermas. Habermas maintains that truth is what would be agreed upon in an ideal speech situation. Among the current strong critics of consensus theory is the philosopher Nicholas Rescher.
The three most influential forms of the pragmatic theory of truth and meaning were introduced around the turn of the 20th century by Charles Sanders Peirce, William James, and John Dewey. Although there are wide differences in viewpoint among these and other proponents of pragmatic theory, they hold in common that meaning and truth are verified and confirmed by the results of putting one's concepts into practice.
Peirce defines truth as follows: "Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth." This statement stresses Peirce's view that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as fallibilism and "reference to the future", are essential to a proper conception of meaning and truth. Although Peirce uses words like concordance and correspondence to describe one aspect of the pragmatic sign relation, he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than nominal definitions, which he accords a lower status than real definitions.
William James's version of pragmatic theory, while complex, is often summarized by his statement that "the 'true' is only the expedient in our way of thinking, just as the 'right' is only the expedient in our way of behaving". By this, James meant that truth is a quality, the value of which is confirmed by its effectiveness when applying concepts to practice (thus, "pragmatic").
John Dewey, less broadly than James but more broadly than Peirce, held that inquiry, whether scientific, technical, sociological, philosophical or cultural, is self-corrective over time if openly submitted for testing by a community of inquirers in order to clarify, justify, refine and/or refute proposed meanings and truths.
A later variation of the pragmatic theory was William Ernest Hocking's "negative pragmatism": what works may or may not be true, but what fails cannot be true, because the truth and its meaning always work. James's and Dewey's ideas likewise ascribe meaning and truth to repeated testing, which is "self-corrective" over time.
Pragmatism and negative pragmatism are also closely aligned with the coherence theory of truth in that any testing should not be isolated but rather incorporate knowledge from all human endeavors and experience. The universe is a whole and integrated system, and testing should acknowledge and account for its diversity. As physicist Richard Feynman said: "if it disagrees with experiment, it is wrong".
Some have asserted that meaning is nothing substantially more or less than the truth conditions it involves. For such theories, an emphasis is placed upon reference to actual things in the world to account for meaning, with the caveat that reference more or less explains the greater part (or all) of meaning itself.
The logical positivists argued that the meaning of a statement arises from how it is verified.
In his paper "Über Sinn und Bedeutung" (now usually translated as "On Sense and Reference"), Gottlob Frege argued that proper names present at least two problems in explaining meaning.
Frege can be interpreted as arguing that it was therefore a mistake to think that the meaning of a name is the thing it refers to. Instead, the meaning must be something else—the "sense" of the word. Two names for the same person, then, can have different senses (or meanings): one referent might be picked out by more than one sense. This sort of theory is called a mediated reference theory. Frege argued that, ultimately, the same bifurcation of meaning must apply to most or all linguistic categories, such as quantificational expressions like "All boats float".
Logical analysis was further advanced by Bertrand Russell and Alfred North Whitehead in their groundbreaking Principia Mathematica, which attempted to produce a formal language with which the truth of all mathematical statements could be demonstrated from first principles.
Russell differed from Frege greatly on many points, however. He rejected Frege's sense-reference distinction. He also disagreed that language was of fundamental significance to philosophy, and saw the project of developing formal logic as a way of eliminating all of the confusions caused by ordinary language, thereby creating a perfectly transparent medium in which to conduct traditional philosophical argument. He hoped, ultimately, to extend the proofs of the Principia to all possible true statements, a scheme he called logical atomism. For a while it appeared that his pupil Wittgenstein had succeeded in this plan with his Tractatus Logico-Philosophicus.
Russell's work, and that of his colleague G. E. Moore, developed in response to what they perceived as the nonsense dominating British philosophy departments at the turn of the 20th century, which was a kind of British Idealism most of which was derived (albeit very distantly) from the work of Hegel. In response Moore developed an approach ("Common Sense Philosophy") which sought to examine philosophical difficulties by a close analysis of the language used in order to determine its meaning. In this way Moore sought to expunge philosophical absurdities such as "time is unreal". Moore's work would have significant, if oblique, influence (largely mediated by Wittgenstein) on Ordinary language philosophy.
The Vienna Circle, a famous group of logical positivists from the early 20th century (closely allied with Russell and Frege), adopted the verificationist theory of meaning, a type of truth theory of meaning. The verificationist theory of meaning (in at least one of its forms) states that to say that an expression is meaningful is to say that there are some conditions of experience that could exist to show that the expression is true. As noted, Frege and Russell were two proponents of this way of thinking.
A semantic theory of truth was produced by Alfred Tarski for formal semantics. According to Tarski's account, meaning consists of a recursive set of rules that end up yielding an infinite set of sentences, "'p' is true if and only if p", covering the whole language. His innovation produced the notion of propositional functions discussed in the section on universals (which he called "sentential functions"), and a model-theoretic approach to semantics (as opposed to a proof-theoretic one). Finally, some links were forged to the correspondence theory of truth (Tarski, 1944).
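Tarski's recursive strategy can be illustrated informally with a toy evaluator: finitely many clauses, one per connective, determine the truth of infinitely many sentences. This is only a sketch of the idea, not Tarski's formal apparatus (which defines truth via satisfaction by sequences); the sentence representation and valuation are illustrative assumptions.

```python
# Toy recursive truth definition for a propositional language.
# Atomic sentences get truth values from a valuation; the truth of
# complex sentences is determined recursively from that of their parts.

def is_true(sentence, valuation):
    """Evaluate a sentence represented as a string (atomic) or nested tuple."""
    if isinstance(sentence, str):        # atomic, e.g. "snow is white"
        return valuation[sentence]
    op = sentence[0]
    if op == "not":
        return not is_true(sentence[1], valuation)
    if op == "and":
        return is_true(sentence[1], valuation) and is_true(sentence[2], valuation)
    if op == "or":
        return is_true(sentence[1], valuation) or is_true(sentence[2], valuation)
    raise ValueError(f"unknown connective: {op}")

# "'Snow is white and grass is green' is true iff snow is white and grass is green."
valuation = {"snow is white": True, "grass is green": True}
print(is_true(("and", "snow is white", "grass is green"), valuation))  # True
```

The point of the sketch is that a finite set of clauses covers an unbounded language, mirroring how Tarski's recursive rules yield a T-sentence for every sentence of the object language.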
Perhaps the most influential current approach in the contemporary theory of meaning is that sketched by Donald Davidson in his introduction to the collection of essays Truth and Meaning in 1967. There he argued for two theses: that any learnable language must be statable in a finite form, even though it is capable of a theoretically infinite number of expressions; and that the meaning of a sentence is given by its truth conditions.
The result is a theory of meaning that rather resembles, by no accident, Tarski's account.
Davidson's account, though brief, constitutes the first systematic presentation of truth-conditional semantics. He proposed simply translating natural languages into first-order predicate calculus in order to reduce meaning to a function of truth.
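The idea of reducing meaning to truth in a model can be sketched with a small example. The domain and predicate extensions below are invented for illustration (they are not Davidson's own examples): the sentence "All boats float" is rendered as the first-order formula ∀x(Boat(x) → Float(x)) and evaluated against a tiny model.

```python
# A minimal model-theoretic sketch of truth-conditional semantics.
# A model supplies a domain and extensions for the predicates;
# a universally quantified conditional is true iff every element
# of the domain satisfies it.

domain = {"a", "b", "c"}
boats = {"a", "b"}        # extension of Boat(x)
floaters = {"a", "b", "c"}  # extension of Float(x)

def all_boats_float():
    """Evaluate forall x (Boat(x) -> Float(x)) over the domain."""
    return all((x not in boats) or (x in floaters) for x in domain)

print(all_boats_float())  # True: every member of boats is in floaters
```

Knowing the meaning of the sentence, on this picture, amounts to knowing under which models (states of affairs) it comes out true.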
Saul Kripke examined the relation between sense and reference in dealing with possible and actual situations. He showed that one consequence of his interpretation of certain systems of modal logic was that a proper name is necessarily linked to its referent, but that its sense is not. So for instance "Hesperus" necessarily refers to Hesperus, even in those imaginary cases and worlds in which perhaps Hesperus is not the evening star. That is, Hesperus is necessarily Hesperus, but only contingently the morning star.
This results in the curious situation that part of the meaning of a name — that it refers to some particular thing — is a necessary fact about that name, but another part — that it is used in some particular way or situation — is not.
Kripke also drew the distinction between speaker's meaning and semantic meaning, elaborating on the work of ordinary language philosophers Paul Grice and Keith Donnellan. The speaker's meaning is what the speaker intends to refer to by saying something; the semantic meaning is what the words uttered by the speaker mean according to the language.
In some cases, people do not say what they mean; in other cases, they say something that is in error. In both these cases, the speaker's meaning and the semantic meaning seem to be different. Sometimes words do not actually express what the speaker wants them to express; so words will mean one thing, and what people intend to convey by them might mean another. The meaning of the expression, in such cases, is ambiguous.
W. V. O. Quine attacked both verificationism and the very notion of meaning in his famous essay, "Two Dogmas of Empiricism". In it, he suggested that meaning was nothing more than a vague and dispensable notion. Instead, he asserted, what was more interesting to study was the synonymy between signs. He also pointed out that verificationism was tied to the distinction between analytic and synthetic statements, and argued that no sharp divide between the two could be drawn. He also suggested that the unit of analysis for any potential investigation into the world (and, perhaps, meaning) would be the entire body of statements taken as a collective, not just individual statements on their own.
Other criticisms can be raised on the basis of the limitations that truth-conditional theorists themselves admit to. Tarski, for instance, recognized that truth-conditional theories of meaning only make sense of statements, but fail to explain the meanings of the lexical parts that make up statements. Rather, the meaning of the parts of statements is presupposed by an understanding of the truth-conditions of a whole statement, and explained in terms of what he called "satisfaction conditions".
Still another objection (noted by Frege and others) was that some kinds of statements don't seem to have any truth-conditions at all. For instance, "Hello!" has no truth-conditions, because it doesn't even attempt to tell the listener anything about the state of affairs in the world. In other words, different sentences have different grammatical moods, and not all of them aim at description.
Deflationist accounts of truth, sometimes called 'irrealist' accounts, are the staunchest source of criticism of truth-conditional theories of meaning. According to them, "truth" is a word with no serious meaning or function in discourse. For instance, for the deflationist, the sentences "It's true that Tiny Tim is trouble" and "Tiny Tim is trouble" are equivalent. In consequence, for the deflationist, any appeal to truth as an account of meaning has little explanatory power.
The sort of truth theories presented here can also be attacked for their formalism both in practice and principle. The principle of formalism is challenged by the informalists, who suggest that language is largely a construction of the speaker, and so, not compatible with formalization. The practice of formalism is challenged by those who observe that formal languages (such as present-day quantificational logic) fail to capture the expressive power of natural languages (as is arguably demonstrated in the awkward character of the quantificational explanation of definite description statements, as laid out by Bertrand Russell).
Finally, over the past century, forms of logic have been developed that are not dependent exclusively on the notions of truth and falsity. Some of these types of logic have been called modal logics. They explain how certain logical connectives such as "if-then" work in terms of necessity and possibility. Indeed, modal logic was the basis of one of the most popular and rigorous formulations in modern semantics called the Montague grammar. The successes of such systems naturally give rise to the argument that these systems have captured the natural meaning of connectives like if-then far better than an ordinary, truth-functional logic ever could.
Throughout the 20th century, English philosophy focused closely on analysis of language. This style of analytic philosophy became very influential and led to the development of a wide range of philosophical tools.
The philosopher Ludwig Wittgenstein was originally an ideal language philosopher, following the influence of Russell and Frege. In his Tractatus Logico-Philosophicus he had supported the idea of an ideal language built up from atomic statements using logical connectives (see picture theory of meaning and logical atomism). However, as he matured, he came to appreciate more and more the phenomenon of natural language. Philosophical Investigations, published after his death, signalled a sharp departure from his earlier work with its focus upon ordinary language use (see use theory of meaning and ordinary language philosophy). His approach is often summarised by the aphorism "the meaning of a word is its use in a language". However, following in Frege's footsteps, in the Tractatus, Wittgenstein declares: "... Only in the context of a proposition has a name meaning."
His work would come to inspire future generations and spur forward a whole new discipline, which explained meaning in a new way. Meaning in a natural language was seen as primarily a question of how the speaker uses words within the language to express intention.
This close examination of natural language proved to be a powerful philosophical technique. Practitioners who were influenced by Wittgenstein's approach have included an entire tradition of thinkers, featuring P. F. Strawson, Paul Grice, R. M. Hare, R. S. Peters, and Jürgen Habermas.
At around the same time Ludwig Wittgenstein was re-thinking his approach to language, reflections on the complexity of language led to a more expansive approach to meaning. Following the lead of George Edward Moore, J. L. Austin examined the use of words in great detail. He argued against fixating on the meaning of words. He showed that dictionary definitions are of limited philosophical use, since there is no simple "appendage" to a word that can be called its meaning. Instead, he showed how to focus on the way in which words are used in order to do things. He analysed the structure of utterances into three distinct parts: locutions, illocutions and perlocutions. His pupil John Searle developed the idea under the label "speech acts". Their work greatly influenced pragmatics.
Past philosophers had understood reference to be tied to words themselves. However, Peter Strawson disagreed in his seminal essay, "On Referring", where he argued that there is nothing true about statements on their own; rather, only the uses of statements could be considered to be true or false.
Indeed, one of the hallmarks of the ordinary use perspective is its insistence upon the distinctions between meaning and use. "Meanings", for ordinary language philosophers, are the instructions for usage of words — the common and conventional definitions of words. Usage, on the other hand, is the actual meanings that individual speakers have — the things that an individual speaker in a particular context wants to refer to. The word "dog" is an example of a meaning, but pointing at a nearby dog and shouting "This dog smells foul!" is an example of usage. From this distinction between usage and meaning arose the divide between the fields of pragmatics and semantics.
Yet another distinction is of some utility in discussing language: "mentioning". Mention is when an expression refers to itself as a linguistic item, usually surrounded by quotation marks. For instance, in the expression "'Opopanax' is hard to spell", what is referred to is the word itself ("opopanax") and not what it means (an obscure gum resin). Frege had referred to instances of mentioning as "opaque contexts".
In his essay, "Reference and Definite Descriptions", Keith Donnellan sought to improve upon Strawson's distinction. He pointed out that there are two uses of definite descriptions: attributive and referential. Attributive uses provide a description of whoever is being referred to, while referential uses point out the actual referent. Attributive uses are like mediated references, while referential uses are more directly referential.
The philosopher Paul Grice, working within the ordinary language tradition, understood "meaning" — in his 1957 article — to have two kinds: natural and non-natural. Natural meaning had to do with cause and effect, for example with the expression "these spots mean measles". Non-natural meaning, on the other hand, had to do with the intentions of the speaker in communicating something to the listener.
In his essay "Logic and Conversation", Grice went on to explain and defend an account of how conversations work. His guiding principle was called the cooperative principle, which claims that the speaker and the listener will have mutual expectations of the kind of information that will be shared. The principle is broken down into four maxims: Quality (which demands truthfulness and honesty), Quantity (which demands just as much information as is required), Relation (the relevance of things brought up), and Manner (lucidity). This principle, if and when followed, lets the speaker and listener figure out the meaning of certain implications by way of inference.
The works of Grice led to an avalanche of research and interest in the field, both supportive and critical. One spinoff was called Relevance theory, developed by Dan Sperber and Deirdre Wilson during the mid-1980s, whose goal was to make the notion of relevance more clear. Similarly, in his work, "Universal pragmatics", Jürgen Habermas began a program that sought to improve upon the work of the ordinary language tradition. In it, he laid out the goal of a valid conversation as a pursuit of mutual understanding.
Although he has focused on the structure and functioning of human syntax, in many works Noam Chomsky has also discussed philosophical problems, including the problem of meaning and reference in human language. Chomsky has formulated a strong criticism of both the externalist notion of reference (reference consists in a direct or causal relation between words and objects) and the internalist one (reference is a mind-mediated relation holding between words and reality). According to Chomsky, both these notions (and many others widely used in philosophy, such as that of truth) are basically inadequate for naturalistic (that is, scientific) inquiry into the human mind: they are common-sense notions, not scientific notions, and as such cannot enter into scientific discussion. Chomsky argues that the notion of reference can be used only when we deal with scientific languages, whose symbols refer to specific things or entities; but when we consider human language expressions, we immediately see that their reference is vague, in the sense that they can be used to denote many things. For example, the word "book" can be used to denote an abstract object (e.g., "he is reading the book") or a concrete one (e.g., "the book is on the chair"); the name "London" can denote at the same time a set of buildings, the air of a place, and the character of a population (consider the sentence "London is so gray, polluted and sad"). These and other cases lead Chomsky to argue that the only plausible (although not scientific) notion of reference is that of an act of reference, a complex phenomenon of language use (performance) which includes many factors, linguistic and non-linguistic: beliefs, desires, assumptions about the world, premises, and so on. As Chomsky himself has pointed out, this conception of meaning is very close to that adopted by John Austin, Peter Strawson and the late Wittgenstein.
Michael Dummett argued against the kind of truth-conditional semantics presented by Davidson. Instead, he argued that basing semantics on assertion conditions avoids a number of difficulties with truth-conditional semantics, such as the transcendental nature of certain kinds of truth condition. He leverages work done in proof-theoretic semantics to provide a kind of inferential role semantics, where:
A semantics based upon assertion conditions is called a verificationist semantics: cf. the verificationism of the Vienna Circle.
This work is closely related, though not identical, to one-factor theories of conceptual role semantics.
Writing between the 1950s and the 1990s, cognitive scientist Jerry Fodor argued that use theories of meaning (of the Wittgensteinian kind) seem to assume that language is solely a public phenomenon, that there is no such thing as a "private language". Fodor holds that it is necessary to create or describe the language of thought, which would seemingly require the existence of a "private language".
In the 1960s, David Kellogg Lewis described meaning as use, a feature of social convention, and conventions as regularities of a specific sort. Lewis's work was an application of game theory to philosophical topics. Conventions, he argued, are a species of coordination equilibria.
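Lewis's notion of a coordination equilibrium can be illustrated with a standard toy game (the payoffs here are an invented example, not Lewis's own): two drivers each choose a side of the road, both "left/left" and "right/right" are equilibria, and convention is what selects one of them.

```python
# A 2x2 coordination game. An outcome is an equilibrium when neither
# player can gain by unilaterally deviating from it.

# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
payoffs = {
    ("left", "left"): (1, 1),
    ("left", "right"): (0, 0),
    ("right", "left"): (0, 0),
    ("right", "right"): (1, 1),
}
choices = ["left", "right"]

def is_equilibrium(r, c):
    """True if neither player gains by unilaterally switching."""
    r_pay, c_pay = payoffs[(r, c)]
    row_ok = all(payoffs[(alt, c)][0] <= r_pay for alt in choices)
    col_ok = all(payoffs[(r, alt)][1] <= c_pay for alt in choices)
    return row_ok and col_ok

equilibria = [(r, c) for r in choices for c in choices if is_equilibrium(r, c)]
print(equilibria)  # [('left', 'left'), ('right', 'right')]
```

On Lewis's account, a linguistic convention is like the shared choice of one such equilibrium: any of several regularities would do, and what sustains the actual one is mutual expectation.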
The idea theory of meaning (also ideational theory of meaning), most commonly associated with the British empiricist John Locke, claims that meanings are mental representations provoked by signs.
The term "ideas" is used to refer to either mental representations, or to mental activity in general. Those who seek an explanation for meaning in the former sort of account endorse a stronger sort of idea theory of mind than the latter.
Each idea is understood to be necessarily about something external and/or internal, real or imaginary. For example, in contrast to the abstract meaning of the universal "dog", the referent "this dog" may mean a particular real life chihuahua. In both cases, the word is about something, but in the former it is about the class of dogs as generally understood, while in the latter it is about a very real and particular dog in the real world.
John Locke considered all ideas to be both imaginable objects of sensation and the very unimaginable objects of reflection. He said in his Essay Concerning Human Understanding that words are used both as signs for ideas and also to signify a lack of certain ideas. David Hume held that thoughts were kinds of imaginable entities (see his Enquiry Concerning Human Understanding, section 2). He argued that any words that could not call upon any past experience were without meaning.
In contrast to Locke and Hume, George Berkeley and Ludwig Wittgenstein held that ideas alone are unable to account for the different variations within a general meaning. For example, any hypothetical image of the meaning of "dog" has to include such varied images as a chihuahua, a pug, and a black Labrador; and this seems impossible to imagine, since all of those particular breeds look very different from one another. Another way to see this point is to question why it is that, if we have an image of a specific type of dog (say of a chihuahua), it should be entitled to represent the entire concept.
Another criticism is that some meaningful words, known as non-lexical items, don't have any meaningfully associated image. For example, the word "the" has a meaning, but one would be hard-pressed to find a mental representation that fits it. Still another objection lies in the observation that certain linguistic items name something in the real world, and are meaningful, yet which we have no mental representations to deal with. For instance, it is not known what Newton's father looked like, yet the phrase "Newton's father" still has meaning.
Another problem is that of composition—that it is difficult to explain how words and phrases combine into sentences if only ideas are involved in meaning.
Eleanor Rosch and George Lakoff have advanced a theory of "prototypes" which suggests that many lexical categories, at least on the face of things, have "radial structures". That is to say, there are certain ideal members of the category that seem to represent it better than other members. For example, the category "bird" may feature the robin as the prototype, or the ideal kind of bird. With experience, subjects might come to evaluate membership in the category "bird" by comparing candidate members to the prototype and evaluating their similarity. So, for example, a penguin or an ostrich would sit at the fringe of the meaning of "bird", because they are unlike the prototypical robin.
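The comparison-to-prototype idea can be sketched as graded membership over shared features. The features and scores below are purely illustrative assumptions, not drawn from Rosch's or Lakoff's experimental data:

```python
# Hypothetical prototype-based categorization: a candidate's typicality
# is the fraction of prototype features it shares, so membership is
# graded rather than all-or-nothing.

PROTOTYPE_BIRD = {"flies": True, "sings": True, "small": True, "has_feathers": True}

def typicality(candidate):
    """Fraction of prototype features the candidate matches."""
    shared = sum(candidate.get(f) == v for f, v in PROTOTYPE_BIRD.items())
    return shared / len(PROTOTYPE_BIRD)

robin = {"flies": True, "sings": True, "small": True, "has_feathers": True}
penguin = {"flies": False, "sings": False, "small": False, "has_feathers": True}

print(typicality(robin))    # 1.0  (at the centre of the category)
print(typicality(penguin))  # 0.25 (at the fringe)
```

The graded scores mirror the claim in the text: the robin sits at the centre of "bird" while the penguin sits at the fringe, without either being excluded from the category.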
Intimately related to these researches is the notion of a psychologically basic level, which is both the first level named and understood by children, and "the highest level at which a single mental image can reflect the entire category" (Lakoff 1987:46). The "basic level" of cognition is understood by Lakoff as crucially drawing upon "image-schemas" along with various other cognitive processes.
Philosophers Ned Block, Gilbert Harman and Hartry Field, and cognitive scientists G. Miller and P. Johnson-Laird say that the meaning of a term can be found by investigating its role in relation to other concepts and mental states. They endorse a "conceptual role semantics". Those proponents of this view who understand meanings to be exhausted by the content of mental states can be said to endorse "one-factor" accounts of conceptual role semantics and thus to fit within the tradition of idea theories.