Fuzzy VS Bivalent Logic: History
Subjects: Logic

In this work a comparison is attempted between Aristotle's traditional bivalent logic, which dominated human thinking for centuries, and Zadeh's relatively recent fuzzy logic, which has already found many important applications in almost all sectors of human activity. It is concluded that, although the enormous progress of science and technology owes a lot to bivalent logic, fuzzy logic, which completes and extends it, fits much better our everyday life situations and the scientific way of thinking.

  • Fuzzy logic
  • bivalent logic
  • scientific thinking

1. Introduction

Logic is understood to be the study of correct reasoning, involving the drawing of inferences. There is no doubt that the enormous progress of science and technology owes a lot to the bivalent logic of Aristotle (384-322 BC, Figure 1), which dominated human thinking for centuries.

Figure 1. Plato (left) and Aristotle in a fresco by Raphael (1509).

The bivalent logic is based on the famous “Laws of Thought” of Aristotle [1], which are the following:

  • The principle of identity: Every thing is identical to itself; i.e., for all x, x = x.
  • The law of the excluded middle: For all propositions p, either p or not p must be true, and there is no middle (third) true proposition between them.
  • The law of contradiction: For all propositions p, it is impossible for both p and not p to be true.

Those three laws constitute a sufficient foundation for the whole of bivalent logic, all its other principles being mere elaborations of them.

However, even from the time of Buddha Siddhartha Gautama, who lived in India around 500 BC, and later of Heraclitus (535-475 BC) and Plato (427-377 BC), views appeared discussing the existence of a third area between “true” and “false”, where those two opposites can coexist. More recent philosophers like Hegel, Marx, Engels, Russell and others supported and further cultivated those ideas, but the first integrated propositions of multi-valued logics appeared only during the 20th century, by Jan Lukasiewicz (1878-1956) and Alfred Tarski (1901-1983) (for more details see [2], Section 2). Max Black [3] introduced in 1937 the concept of the vague set, a premonition of Zadeh’s fuzzy set (FS) [4], which led to the development of the infinite-valued fuzzy logic (FL) [5].

The target of the present paper is the study of the development of FL and its applications, and its comparison with traditional logic.

2. Fuzzy Sets, Fuzzy Systems and Fuzzy Logic

The notion of the FS was introduced in 1965 by Lotfi Aliasker Zadeh (Figure 2), an electrical engineer of Iranian origin, born in Azerbaijan, USSR, and a Professor at the University of California, Berkeley.

Figure 2. L.A. Zadeh (1921–2017)

A FS A on the universal set of the discourse U is defined as a set of ordered pairs of the form

A = {(x, mA(x)): x ∈ U}          (1)

In the FS (1), mA: U → [0, 1] is its membership function and the real number mA(x) is called the membership degree of x in A. The greater mA(x) is, the more x satisfies the characteristic property of A. Many authors, for reasons of simplicity, identify the FS A with its membership function mA.

A crisp subset B of U can be considered as a FS on U whose membership function is the characteristic function of B; that is, mB(x) = 1 if x ∈ B and mB(x) = 0 otherwise.

In other words, the concept of the FS is an extension/generalization of the concept of the crisp set. Most notions and operations concerning crisp sets, e.g. subset, complement, union, intersection, Cartesian product, binary and other relations, etc., can be extended to FSs. For general facts about FSs and the uncertainty connected to them, we refer to chapters 4-7 of the book [6].
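A minimal sketch, in Python, of how these extensions look in practice is the following; the universe U and the membership values are illustrative assumptions, while the max/min definitions of union and intersection are the standard ones from Zadeh's original paper [4].

```python
# Fuzzy sets represented by their membership functions: dicts x -> m(x) in [0, 1].
# The universe and all membership degrees below are made-up illustrative values.
tall = {"Alice": 0.9, "Bob": 0.4, "Carol": 0.0}
strong = {"Alice": 0.5, "Bob": 0.8, "Carol": 0.3}

def complement(A):
    # m_{A'}(x) = 1 - m_A(x)
    return {x: 1 - A[x] for x in A}

def union(A, B):
    # m_{A∪B}(x) = max(m_A(x), m_B(x))
    return {x: max(A[x], B[x]) for x in A}

def intersection(A, B):
    # m_{A∩B}(x) = min(m_A(x), m_B(x))
    return {x: min(A[x], B[x]) for x in A}

print(union(tall, strong))        # {'Alice': 0.9, 'Bob': 0.8, 'Carol': 0.3}
print(intersection(tall, strong)) # {'Alice': 0.5, 'Bob': 0.4, 'Carol': 0.0}
```

Note that a crisp subset is recovered as the special case where every membership value is exactly 0 or 1.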

The infinite-valued FL on the interval [0, 1] is defined with the help of the concept of the FS. Through FL, fuzzy terminology is translated by algorithmic procedures into numerical values, operations are performed upon those values, and the outcomes are returned as natural language statements in a reliable manner. FL is useful for handling real-world situations that are inherently fuzzy, for processing the fuzzy data existing in such situations, and for describing the operation of the corresponding fuzzy systems. An important advantage of FL is that its rules are set in natural language with the help of linguistic, and therefore fuzzy, variables.

The process of reasoning with fuzzy rules involves:

  • Fuzzification of the problem’s data by utilizing suitable membership functions (MFs) to define the required FSs.
  • Application of FL operators on the defined FSs and combination of them to obtain the final result in the form of a unique FS.
  • Defuzzification of the final FS to return a crisp output value, in order to apply it to the real-world situation for resolving the corresponding problem.

Among the more than 30 defuzzification methods in use, the most popular is probably the Centre of Gravity (COG) technique. According to it, a problem’s fuzzy solution is represented by the coordinates of the COG of the plane region contained between the graph of the MF involved and the OX axis [7].
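The three steps above, ending with a discrete COG defuzzification, can be sketched as follows; the triangular MFs, the temperature rules and all numerical values are illustrative assumptions, not taken from the sources cited here.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 for x = b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Universe of output values (e.g. fan speed in %), discretized.
xs = list(range(0, 101))

# Step 1 -- fuzzification: a temperature of 27 degrees belongs partly to
# the linguistic terms "warm" and "hot".
temp = 27.0
warm = tri(temp, 15, 25, 35)   # 0.8
hot = tri(temp, 25, 40, 55)    # about 0.13

# Step 2 -- rule application (Mamdani-style min/max): each rule clips its
# output FS at the rule's firing strength; the clipped sets are then
# combined with max into one final FS.
slow = [min(warm, tri(x, 0, 30, 60)) for x in xs]
fast = [min(hot, tri(x, 40, 70, 100)) for x in xs]
final = [max(s, f) for s, f in zip(slow, fast)]

# Step 3 -- defuzzification by the Centre of Gravity (discrete form):
cog = sum(x * m for x, m in zip(xs, final)) / sum(final)
print(round(cog, 1))  # a single crisp output value in [0, 100]
```

The discrete COG formula used here is the usual weighted average Σx·m(x)/Σm(x); reference [7] treats the continuous case over trapezoidal partitions.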

But, while Zadeh was trying to spread the message of fuzziness, he received many tough critiques of his radical ideas from three different directions [8]. The first came from scientists who asked for practical applications. In fact, such applications started to appear in industry during the 1970s, the first one being in the area of cement kiln control [9]. This is an operation demanding the control of a highly complex set of chemical interactions by dynamically managing 40-50 “rules of thumb”. It was followed by the work of E. H. Mamdani [10] at Queen Mary College, London, who designed the first fuzzy system for controlling a steam engine and, later, the operation of traffic lights. Another type of fuzzy inference system was developed later in Japan by Takagi, Sugeno and Kang [11]. It is well known that nowadays FSs and FL have found many important applications in almost all sectors of human activity. It must also be mentioned that fuzzy mathematics has been significantly developed at the theoretical level, providing important contributions even in branches of classical mathematics, such as algebra, analysis, geometry, etc.

The second direction of critique came from probability theorists, who claimed that FL cannot do anything more than probability does: membership degrees, taking values in the same interval [0, 1] as probabilities, are actually hidden probabilities; fuzziness is a kind of disguised randomness; and multi-valued logic is not a new idea. It took a long time to become universally understood, and only recently, that fuzziness does not oppose probability, but actually supports and completes it by successfully treating the cases of real-world uncertainty caused by reasons other than randomness [12].

The expressions “John’s membership degree in the FS of clever people is 0.7” and “The probability of John being clever is 0.7”, although they look similar, actually have essentially different meanings. The former means that John is a rather clever person, whereas the latter means that John, according to the principle of the excluded middle, is either clever or not, but his profile (heredity, academic studies, etc.) suggests that the probability of his being clever is high (70%). There are also other differences between the two theories, mainly arising from the way the corresponding notions and operations are defined. For instance, whereas the sum of the probabilities of all the single events (singleton subsets) of the universal set of the discourse is always equal to 1 (the probability of the certain event), this is not necessarily true for membership degrees. Consequently, a probability distribution could be used to define membership degrees, but the converse does not hold in general.
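The last difference can be made concrete with a small sketch; the membership degrees and the probability distribution below are assumed values for illustration.

```python
# Membership degrees in the fuzzy set of "clever people" (assumed values):
# they need not sum to 1, since they are degrees, not probabilities.
clever = {"John": 0.7, "Mary": 0.9, "Nick": 0.6}

# A probability distribution over the same universe (assumed values):
# the probabilities of the singleton events must sum to 1.
prob = {"John": 0.2, "Mary": 0.5, "Nick": 0.3}

assert abs(sum(prob.values()) - 1.0) < 1e-9   # always holds for probabilities
print(sum(clever.values()))                   # greater than 1 -- no such constraint

# Since probabilities lie in [0, 1], prob could itself serve as a membership
# function; clever, summing to more than 1, cannot be a probability distribution.
```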

Note that Edwin T. Jaynes (1922-1998), Professor of Physics at Washington University in St. Louis, argued that probability theory can be considered as a generalization of bivalent logic, reducing to it in the special case where our hypothesis is either absolutely true or absolutely false [13]. Many eminent scientists have been inspired by the ideas of Jaynes, like the expert in algebraic geometry David Mumford, who believes that Probability and Statistics are emerging as a better way of building scientific models [14]. Probability and Statistics are related mathematical topics having, however, fundamental differences. In fact, Probability is a branch of theoretical mathematics dealing with the estimation of the likelihood of future events, whereas Statistics is an applied branch, which tries to make sense of data by analyzing the frequencies of past events. Nevertheless, both Probability and Statistics have been developed on the basis of the principles of bivalent logic. As a result, they effectively tackle only the cases of real-world uncertainty due to randomness, and not those due to imprecision [12].

One could argue, however, that Bayesian Reasoning constitutes an interface between bivalent logic and FL [15]. In fact, Bayes’ rule, expressed by equation (2) below, calculates the conditional probability P(A/B) with the help of the inverse in time conditional probability P(B/A), the prior probability P(A) and the probability P(B) of the observed evidence.

P(A/B) = P(B/A)·P(A) / P(B)          (2)

In other words, Bayes’ rule calculates the probability of an event based on prior knowledge of conditions related to that event. The value of the prior probability P(A) is fixed before the experiment, whereas the value of P(B) is derived from the experiment’s data. Usually, however, there exists an uncertainty about the exact value of P(A). In such cases, considering all the possible values of P(A), we obtain different values for the conditional probability P(A/B). Therefore, Bayes’ rule introduces a kind of multi-valued logic, tackling the uncertainty due to the imprecision of the value of the prior probability in a way analogous to FL.
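This point can be illustrated with a short sketch that evaluates equation (2) for several candidate values of the prior P(A), expanding P(B) by the law of total probability; all numerical values are assumptions chosen for illustration.

```python
def bayes(p_b_given_a, p_a, p_b_given_not_a):
    """P(A/B) by Bayes' rule, with P(B) expanded by total probability:
    P(B) = P(B/A)P(A) + P(B/not A)(1 - P(A))."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Uncertainty about the prior: each plausible value of P(A) yields a
# different posterior P(A/B) -- the multi-valued flavor noted in the text.
for p_a in [0.01, 0.05, 0.10, 0.20]:
    posterior = bayes(p_b_given_a=0.95, p_a=p_a, p_b_given_not_a=0.10)
    print(f"P(A) = {p_a:.2f}  ->  P(A/B) = {posterior:.3f}")
```

The spread of outputs over the assumed prior range is exactly the imprecision that a single crisp probability statement would hide.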

Bayes’ rule first appeared in the work “An Essay towards solving a Problem in the Doctrine of Chances” of the 18th-century British mathematician and theologian Thomas Bayes (1701-1761). The essay was published by Richard Price in 1763, after Bayes’ death, in the “Philosophical Transactions of the Royal Society of London”. The famous French mathematician Laplace (1749-1827), independently of Bayes, pioneered and popularized Bayesian probabilities.

The third direction of critique against FL comes from bivalent logic itself. Many of its traditional supporters, relying on a culture of centuries, argue that, since this logic works effectively in science, makes computers function and explains satisfactorily the phenomena of the real world, except perhaps those that happen at the boundaries, there is no reason to make things more complicated by introducing the unstable principles of a multi-valued logic.

FL, however, aims exactly at smoothing the situation at the boundaries! Look, for example, at the graph of Figure 3, corresponding to the FS T of “tall people”. People with heights less than 1.50 m are considered to have membership degree 0 in T. The membership degree increases continuously for heights greater than 1.50 m, taking its maximal value 1 for heights equal to or greater than 1.80 m. Therefore, the “fuzzy part” of the graph - which is conventionally represented in Figure 3 by the straight line segment AC, but whose exact form depends upon the way in which the membership function has been defined - lies in the area of the rectangle ABCD defined by the OX axis, its parallel through the point E and the two lines perpendicular to it at the points A and B.

Figure 3: The fuzzy set of “tall people”

In fact, the way of perceiving a concept (e.g. “tall”) differs from person to person, depending on the “signals” that each one receives from the real world about it. Mathematically speaking, this means that the definition of the membership function of a FS is not unique, depending on the observer’s personal criteria. The only restriction is that this definition must be compatible with common sense, because otherwise the corresponding FS does not give a reliable description of the real situation. On the contrary, bivalent logic defines a bound, e.g. 1.80 m, above which people are considered tall and below which they are considered short. Consequently, a person with height 1.79 m is considered short, whereas another with height 1.81 m is tall!
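The contrast between the two approaches can be sketched with the membership function of Figure 3, taking the fuzzy part to be the linear segment AC (one possible choice among many, as just noted).

```python
def m_tall(height):
    """Membership degree in the FS T of tall people (Figure 3):
    0 below 1.50 m, rising linearly to 1 at 1.80 m."""
    if height <= 1.50:
        return 0.0
    if height >= 1.80:
        return 1.0
    return (height - 1.50) / 0.30   # the linear segment AC

def crisp_tall(height):
    """The bivalent version: tall iff height is at least the 1.80 m bound."""
    return height >= 1.80

for h in [1.79, 1.81]:
    print(h, crisp_tall(h), round(m_tall(h), 2))

# Bivalent logic flips from "short" to "tall" between 1.79 m and 1.81 m,
# whereas the fuzzy membership degrees (about 0.97 and 1.0) barely differ.
```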

Strong voices of anger against FL have also appeared, without bothering to present any logical arguments. Those voices characterize FL as a tool for making science unstable, or, more emphatically, as the “cocaine of science”! Such voices, however, appear frequently in analogous cases in the history of science and must simply be ignored.

Zadeh realized that FSs are connected to words (adjectives and adverbs) of natural language; e.g. the adjective “tall” indicates the FS of tall people, since how tall someone is, is a matter of degree. A grammatical sentence may contain many adjectives and/or adverbs, and therefore it involves a number of FSs. A synthesis of grammatical sentences, i.e. a group of FSs related to each other, forms what we call a fuzzy system. A fuzzy system encodes empirical advice, mnemonic rules and common sense in general. It is not only able to use its own knowledge to represent and explain the real world, but can also increase that knowledge with the help of new data; in other words, it learns from experience. This is actually the way in which humans think. Nowadays, for example, a fuzzy system can control the function of an electric washing machine or send signals for purchasing shares on the stock exchange, etc. [16].

FL constitutes a part of the Soft Computing area [17], which is a composition of FL, neural networks and probabilistic reasoning (Figure 4). It is recalled that neural networks, within the artificial intelligence approach [18], mimic the behavior of the biological neural networks of the human brain, playing the role of the “hardware”, whereas FL plays the role of the “software” in those artificial neural networks.

Figure 4: A graphical approach of the area of Soft Computing.

Intersections in Figure 4 include neuro-fuzzy systems and techniques, probabilistic approaches to neural networks and Bayesian Reasoning. A neuro-fuzzy system is a fuzzy system that uses a learning algorithm derived from or inspired by neural network theory to determine its parameters (FSs and fuzzy rules) by processing data samples. Characteristic examples of this kind of system are the Adaptive Neuro-Fuzzy Inference Systems (ANFIS).

In an attempt to manage real-world uncertainty in a better way, several efforts have been made to improve/generalize FS theory. Atanassov, as a complement to Zadeh’s membership degree (M), introduced in 1986 the degree of non-membership (N) and defined the intuitionistic FS. Smarandache introduced in 1995 the degree of indeterminacy/neutrality (I) and defined the neutrosophic set in three components (M, N, I), where M, N and I are subsets of the interval [0, 1]. In this way one can better study the neutral cases that frequently appear in real situations, e.g. white votes in an election process. Alternatives to FS theory were proposed by Deng in 1982 (Grey Systems, where the arithmetic of real intervals is used), Pawlak in 1991 (Rough Sets, with important applications to computing), Molodtsov in 1999 (Soft Sets, managing the existing uncertainty in defining the proper membership function of a FS) and others. The corresponding list of extensions/generalizations of FSs, however, does not end here; for more details see [19].
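For instance, in an Atanassov intuitionistic FS each element carries a pair (M, N) with M + N ≤ 1, the remainder 1 - M - N measuring hesitation (indeterminacy); a minimal sketch with assumed election data follows.

```python
# Intuitionistic fuzzy description of an election (all numbers are assumed):
# for each candidate, M = degree of support, N = degree of rejection,
# and the leftover 1 - M - N corresponds to the "white vote" margin.
votes = {
    "candidate_A": (0.55, 0.35),
    "candidate_B": (0.30, 0.60),
}

for name, (m, n) in votes.items():
    # The defining intuitionistic condition: M + N must not exceed 1.
    assert 0.0 <= m + n <= 1.0, "intuitionistic condition violated"
    hesitation = 1.0 - m - n
    print(f"{name}: membership={m}, non-membership={n}, hesitation={hesitation:.2f}")
```

An ordinary FS is the special case N = 1 - M, where the hesitation degree vanishes.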

3. The Necessity of Fuzzy Logic for Scientific Thinking

The process of scientific thinking, being a synthesis of inductive and deductive reasoning, is graphically represented in Figure 5 [20]. The effort of explaining a phenomenon starts with the humans’ observations a1, a2,…, an of the real world connected to this phenomenon, which lead by induction (intuitively) to the development of a theory T1 explaining it. Theory T1 is verified by deductive reasoning and additional deductive inferences K1, K2,…, Ks are obtained. Next, a new series of observations b1, b2,…, bm follows. If some of them are not compatible with the laws of theory T1, a new theory T2 is developed to replace/extend T1, and so on. In each case the new theory extends or rejects the previous one, approaching closer and closer to the objective truth about the corresponding phenomenon.

That process is known as the scientific method. The term was introduced during the 19th century, when clear boundaries began to be drawn between science and non-science. Aristotle is recognized as the instructor of the scientific method due to his refined analysis of the logical implications contained in demonstrative discourse. The first book in the history of human civilization written on the basis of the principles of the scientific method is, according to the existing evidence, the “Elements” of Euclid (365-300 BC), addressing the axiomatic foundation of geometry.

Figure 5:  A graphical representation of the scientific method.

As an example, the geocentric theory (Almagest) of Ptolemy of Alexandria (100-170), being able to predict satisfactorily the movements of the planets and the moon, was considered to be true for centuries. However, it was finally proved to be wrong and was replaced by the heliocentric theory of Copernicus (1473-1543). Another characteristic example is Einstein’s general relativity theory, developed at the beginning of the 20th century. This theory replaced, in the case of the long distances and big masses of the Universe outside our planet, Newton’s classical gravitational theory, which had been believed to be true for more than two centuries. The scientific method in general is highly based on the trial-and-error procedure, a term introduced by C. Lloyd Morgan (1852-1936). This procedure is characterized by repeated attempts, which continue until success or until the subject stops trying [21].

The premises of all scientific theories (with the possible exception of pure mathematics only), expressed by axioms, basic principles, etc., are based on human intuition and inductive reasoning. Therefore, a deductive inference developed on the basis of a scientific theory is true if, and only if, the premises of the corresponding theory are true. In other words, the error of inductive reasoning is transferred to the deductive inference through its premises. This means that none of the existing scientific theories could be considered absolutely true; each can only be considered as approaching the truth in a better way than the previous theories which it has replaced.

Bivalent logic, however, is able to verify only the validity/consistency of an argument, and not its truth. A deductive argument can be valid even if its inference is false! A characteristic analogue can be found in the function of computers. A computer is unable to judge whether the input data inserted to it is correct, and therefore whether the result obtained by elaborating this data is correct and useful for the user. The only thing that it guarantees is that, if the input is correct, then the output will be correct too. On the contrary, always under the bivalent logic approach, an inductive argument is never valid, even if its inference is true! To put it differently, if a property p is true for a sufficiently large number of cases, the expression “the property p is possibly true in general” is not acceptable, since it does not satisfy the principle of the excluded middle.

People, however, always want to know the truth in order to better organize, or even to protect, their lives. Consequently, from this perspective, the significance of an argument has greater importance than its validity/precision. In Figure 6 [22], for example, the extra precision on the left makes things worse for the poor man in danger, who has to spend too much time trying to understand the data and misses the opportunity to take the much-needed action of getting out of the way. On the contrary, the rough/fuzzy warning on the right could save his life.

Figure 6. Validity/precision vs significance.

Figure 6 illustrates very successfully the importance of FL for real-life situations. Real-world knowledge generally has a different structure and requires a different formalization than the existing formal systems. FL, which according to Zadeh is “a precise logic of imprecision and approximate reasoning” [5], serves as a link between classical logic and human reasoning/experience, which are two incommensurable approaches. Having a much higher generality than bivalent logic, FL is capable of generalizing any bivalent logic-based theory. In conclusion, it seems that bivalent logic, although the enormous progress of science owes a lot to it, is not the most suitable one to support scientific thinking! On the contrary, FL fits that purpose much better.

4. Conclusion

The discussion performed in this paper leads to the conclusion that the infinite-valued fuzzy logic on the interval [0, 1] generalizes and completes the traditional bivalent logic, fitting better not only our everyday life situations, but also the scientific way of thinking.

References

  1. Korner, S., Laws of Thought. In Encyclopedia of Philosophy; Mac Millan: New York, NY, USA, 1967; Volume 4, pp. 414–417.
  2. Voskoglou, M.Gr., Methods for Assessing Human-Machine Performance under Fuzzy Conditions, Mathematics, 7, 2019, 230.
  3. Black, M., Vagueness, Phil. of Science, 4, 1937, 427-455. Reprinted in Language and Philosophy: Studies in Method, Cornell University Press, Ithaca and London, 1949, 23-58. Also in Int. J. of General Systems, 17, 1990, 107-128.
  4. Zadeh, L.A., Fuzzy Sets, Information and Control, 1965, 8(3), 338–353.
  5. Zadeh, L.A. Outline of a new approach to the analysis of complex systems and decision processes. IEEE Trans. Syst. Man Cybern. 1973, 3, 28–44.
  6. Voskoglou, M.Gr., Finite Markov Chain and Fuzzy Logic Assessment Models: Emerging Research and Opportunities; Create Space Independent Publishing Platform, Amazon, Columbia, SC, USA, 2017.
  7. van Broekhoven, E. and De Baets, B., Fast and accurate centre of gravity defuzzification of fuzzy systems outputs defined on trapezoidal fuzzy partitions. Fuzzy Sets Syst. 2006, 157, 904–918.
  8. Kosko, B., Fuzzy Thinking: The New Science of Fuzzy Logic, Hyperion: New York, 1993.
  9. Umbers, I.G. and King, P.J., An analysis of human decision-making in cement kiln control and the implications for automation, Int. J. of Man-Mach. Stud., 12, 1980, 11-23.
  10. Mamdani, E.H. and Assilian, S., An experiment in linguistic synthesis with a fuzzy logic controller, Int. J. of Man-Machine Studies, 7(1), 1975, 1-13.
  11. Sugeno, M., Industrial applications of fuzzy control, Elsevier Science Pub. Co., 1985.
  12. Kosko, B., Fuzziness Vs Probability, Int. J. of General Systems, 17(2-3), 1990, 211-240.
  13. Jaynes, E.T., Probability Theory: The Logic of Science, Cambridge University Press, UK, 8th Printing, 2011 (first published, 2003).
  14. Mumford, D., The Dawning of the Age of Stochasticity, in V. Arnold, M. Atiyah, P. Lax and B. Mazur (Eds.), Mathematics: Frontiers and Perspectives, AMS, 2000, 197-218.
  15. Voskoglou, M.Gr. and Athanassopoulos, E., The Importance of Bayesian Reasoning in Everyday Life and Science, Int. J. of Education, Development, Society and Technology, 8(2), 2020, 24-33.
  16. Voskoglou, M.Gr., Fuzzy Control Systems, WSEAS, Trans. On Systems, 19, 2020, 295-300.
  17. Paplinski, E.P., Neuro-fuzzy Computing, Ch. 1, 2005. Retrieved on April 20, 2019 from https://silo.tips/download/neuro-fuzzy-comp-ch-1-may-25-2005.
  18. Voskoglou, M.Gr. and Salem, A.-B. M., Benefits and Limitations of the Artificial with respect to the Traditional Learning of Mathematics, Mathematics, 8, 2020, 611.
  19. Voskoglou, M.Gr., Generalizations of Fuzzy Sets and Related Theories, in M. Voskoglou (Ed.), An Essential Guide to Fuzzy Systems, Nova Science Publishers, NY, 2019, pp. 345-352.
  20. Athanassopoulos, E. and Voskoglou, M.Gr., A Philosophical Treatise on the Connection of Scientific Reasoning with Fuzzy Logic, Mathematics, 8, 2020, 875.
  21. Thorpe, W.H., The origins and rise of ethology: The science of the natural behavior of animals, Praeger, London-NY, 1979.
  22. Dernoncourt, F., Fuzzy logic: Between human reasoning and Artificial Intelligence, Report, Ecole Normale Supérieure, Paris, 2011. Retrieved on March 15, 2019 from w.researchgate.net/publication/235333084_Fuzzy_logic_between_human_reasoning_and_artificial_intelligence.