Hmm. “Resolution” and “reworking” are very different things to my mind.
The point is that accepting that some binary classifications humans use are actually judgement calls based on some continuous property of the object in question basically questions the whole premise of the paradox (that a discrete line actually exists).
It’s not that it reworks the problem; it essentially suggests there is no problem.
That’s akin to my take. IANA logician or philosopher, but the discussion of “solutions” to the sorites problem on wiki skipped what to me is the most obvious angle of attack.
We accept as one of the fundamentals of Boolean logic that “P or not-P” must be true always and forever regardless of P’s value and P’s specifics.
But IMO that’s only half right.
Like any function, it has a range of inputs over which it’s valid, and it’s GIGO or nonsense when applied to values outside that range. Once we get into these non-bright-line situations, the truth value of P itself is inherently neither 0 nor 1. When faced with such inputs, the result of the calculation (P or not-P) is undefined.
IMO it’s very roughly analogous to NaN in IEEE floating point. Once a NaN gets into your data flow, the result of any subsequent legit operation is still NaN. Once you’re out of bounds, there’s no returning to in-bounds and continuing to play.
In that sense the sorites paradox is no more a paradox than the classic algebra-manipulation riddle that hides a disguised divide-by-zero to allegedly “prove” 1=2. Along the way, by unwittingly applying an operation outside its valid range, you actually “proved” NaN = NaN.
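To make that concrete, here’s a quick Python illustration of how a NaN poisons everything downstream (the IEEE 754 behaviour is real; mapping it onto the sorites problem is just my analogy):

```python
import math

nan = float("nan")        # an undefined / out-of-range value

# Any legitimate arithmetic on a NaN just yields another NaN.
print(nan + 1)            # nan
print(nan * 0)            # nan
print(math.sqrt(nan))     # nan

# Even the "obvious" identity fails: IEEE 754 NaN compares unequal
# to everything, including itself.
print(nan == nan)         # False

# So once an undefined value sneaks into a chain of reasoning, piling
# more (individually valid) operations on top can't rescue it.
```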
Fuzzy logic is not so dead that it does not still feature prominently in the IEC standards for programmable controllers, including a Fuzzy Control Language.
I’m often a defender of philosophy – when people claim that philosophy is useless navel-gazing, I beg to differ and I explain why. Indeed, IME, the people who are most anti-philosophy are often heavily into theology / divinity. And that’s quite telling about the actual effectiveness of philosophical inquiry – certain people want us to stop doing it because it hurts
–but–
The one thing that annoys me about philosophy is a general reluctance to just accept that a particular problem or paradox was based on a flawed premise. Once we’ve invested a certain amount of time into thinking about something, it must deliver some important new truth; it can’t have been a waste of time.
(ETA: IMO, the fact humans can make binary classifications based on continuous / complex input is an important truth, but not one within the premises of the paradox)
Two-valued (aka Aristotelian) logic is obviously insufficient for many purposes. For example, mathematical propositions can be provably true, provably false, or undecidable. In fact it is possible for a proposition to be undecidable in one axiom system but decidable in another. I know a beautiful proposition about ordinary whole-number arithmetic that is undecidable in ordinary (that is, Peano) arithmetic but is decidable using what is called epsilon_0 induction.
Fine, that is three-valued logic and perfectly comprehensible. Now consider the concept of “tall”. People have varying degrees of tallness that are perfectly expressed by the concept of “height”. There is no need to bring fuzzy logic into the discussion; it only muddies the waters. What are you going to say, that the tallest person ever (who was, I think, 8’4") has tallness 1 and the shortest has tallness 0 and we will assign linearly in between? What have you added to the discussion that is not captured by height? Now take a concept like beautiful. Yes, there are degrees of beauty, but the assignment of numbers between 0 and 1 is purely arbitrary. I just don’t see where this is going.
Although I don’t think this has appeared in this thread, let me mention that Zadeh created–or tried to create–a fuzzy set theory. And someone–I met him, but don’t recall his name–was studying fuzzy topology. When I pointed out to him that one gap in fuzzy set theory was the lack of a fuzzy set of fuzzy sets, he admitted that but explained that whenever he came upon something that seemed to require it, he dropped it and turned to something else.
This gap can be filled quite readily, but it leads immediately to something called the topos of sheaves on the interval [0,1], a well-studied concept. Just as three-valued logic leads to the topos of sheaves on the three-element total order. No, I will not attempt to describe these gadgets.
I was a Computer Science student at Berkeley in the early 1970’s at about the time Prof. Zadeh was developing this stuff. Everybody knew him, or at least knew who he was.
One of my roommates (the same one who, as I just mentioned in a nearby thread, thought he had invented the word “humongous”) told me a story of a device controller that had long been programmed using conventional algorithms, but a new model had been programmed using a fuzzy logic algorithm.
It was some sort of electric oven, meant to heat a scientific experiment to the right temperature, that was sent up to Skylab or some such orbiting research platform. To get the oven to the right temperature, you have to turn down the power before it gets there, otherwise the temperature will overshoot the desired temp and cook the experiment. The conventional algorithm measured the temp at intervals, computed the derivative of that, and steadily lowered the power until the right temp was reached.
The new model, with the fuzzy logic, was somehow supposed to figure all of that out, but didn’t, and fried the experiment.
On the other hand, both P(I)D control and fuzzy logic control are suitable for controlling oven temperature and other (roughly) linear systems, so it sounds like a programming error.
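For anyone curious, here is a minimal sketch of the conventional approach described in that anecdote: a proportional-derivative style loop that backs off the power as the oven approaches the setpoint. The gains, the sample interval, and the toy oven model are all invented for illustration:

```python
# Toy proportional-derivative oven controller. All constants and the
# "oven physics" below are made up purely to illustrate the idea.

SETPOINT = 200.0     # target temperature in deg C (hypothetical)
KP, KD = 0.1, 0.1    # invented proportional and derivative gains
DT = 1.0             # sample interval in seconds

def control_step(temp, prev_temp):
    """Return a heater power level in [0, 1]."""
    error = SETPOINT - temp
    rate = (temp - prev_temp) / DT        # how fast we are heating
    power = KP * error - KD * rate        # back off as we close in or heat fast
    return max(0.0, min(1.0, power))

def simulate(steps=600):
    temp = prev = 20.0                    # start at room temperature
    for _ in range(steps):
        power = control_step(temp, prev)
        prev = temp
        # Toy thermal model: heating proportional to power, loss to ambient.
        temp += DT * (5.0 * power - 0.01 * (temp - 20.0))
    return temp

print(f"settles near {simulate():.1f} C")
# It settles just under the setpoint without overshooting; a real controller
# would add an integral term (the I in PID) to remove that remaining offset.
```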
I think this is why Fuzzy Logic has mostly faded into the background. In the 90s, the big trend was AI systems that were “interpretable”, like expert systems. The thinking was that the way we’d get smart machines was to have people explain their smartness to machines, and the machines would model their intelligence on human logic.
The shift into “big data” caused AI researchers to discover that quantity often significantly outperformed quality, and that abandoning interpretability in favor of effectiveness was usually the right tradeoff. Now, with modern deep learning / GAN-style machine learning, you don’t have things like “happiness” and “sadness” any more; you have a 100-dimensional vector that’s empirically derived from the training data, and one of the dimensions kinda maybe resembles what a human might label “happiness”, but only by coincidence, and the other 99 dimensions don’t have any human words that can describe them at all.
In a world like this, Fuzzy Logic loses a lot of its applicability.
And, of course, machine intelligence techniques (I don’t necessarily mean “deep learning”) are eminently suitable for designing controllers: by having the computer program itself, an obvious source of error is removed.
That’s not how fuzzy logic works, or at least isn’t the way it was taught to me in comp sci class.
Firstly, it is not a matter of normalizing all the data into the scale 0…1.
Typically you’d scale it more like: anyone above 6’5" is fully tall, and then as you go to shorter heights the confidence in calling the person tall gradually falls off.
There is a whole range of heights where we have full confidence in calling someone “tall”, and at the other end a whole range we’d trivially map to “short”.
Secondly, you don’t typically output a real number between 0…1.
If the output is to a classical system, then you just output “tall” or “not tall”. Or, if it’s to a control system, you output the actual desired value for the thermostat or whatever.
If the output is to another fuzzy system, then you might output reals, but those numbers have no significance in themselves; you are just passing on the degree of tallness or whatever to the next fuzzy system.
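A minimal sketch of that kind of fuzzification in Python. The 178 cm to 196 cm ramp (roughly up to 6’5") is just an invented example, not anyone’s official definition of “tall”:

```python
def tall_membership(height_cm: float) -> float:
    """Degree of membership in 'tall', using an invented linear ramp:
    definitely not tall below 178 cm, fully tall above 196 cm (about 6'5)."""
    low, high = 178.0, 196.0          # made-up ramp boundaries
    if height_cm <= low:
        return 0.0
    if height_cm >= high:
        return 1.0
    return (height_cm - low) / (high - low)

def classify(height_cm: float, threshold: float = 0.5) -> str:
    """Collapse the fuzzy value to a crisp label for a classical system."""
    return "tall" if tall_membership(height_cm) >= threshold else "not tall"

for h in (165, 180, 188, 200):
    print(h, round(tall_membership(h), 2), classify(h))
```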
Wow! Different from what I was taught. We would never have assigned “1.0” to any height, just a value approaching that. This, again, is very similar to probability: in the messy “real world” we almost never assign a probability of 1.0 to any event. (In controlled event spaces – such as a roulette wheel – yes, some probabilities are 1.0.)
These real numbers are not probabilities of events, though. At least not directly.
The devices are programmed via “rules” like:
RULE 1 : IF service IS poor OR food IS rancid THEN tip IS cheap;
RULE 2 : IF service IS good THEN tip IS average;
RULE 3 : IF service IS excellent AND food IS delicious THEN tip IS generous;
The effect of such statements is quite deterministic. However, it is as easy or even easier to fuck up the programming as with normal logic, as the aforementioned anecdote confirms.
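Here is a rough Python sketch of how an engine might evaluate rules like those, using the usual max for OR, min for AND, and a weighted-average defuzzification. The triangular membership shapes and the representative tip percentages are invented for the example; a real FCL program would define its own:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(service, food):
    """Map crisp 0-10 ratings to degrees of membership (invented shapes)."""
    return {
        "service_poor":      tri(service, -1, 0, 5),
        "service_good":      tri(service, 2, 5, 8),
        "service_excellent": tri(service, 5, 10, 11),
        "food_rancid":       tri(food, -1, 0, 5),
        "food_delicious":    tri(food, 5, 10, 11),
    }

def tip(service, food):
    m = fuzzify(service, food)
    # Fuzzy OR is max, fuzzy AND is min (the usual Zadeh operators).
    cheap    = max(m["service_poor"], m["food_rancid"])          # RULE 1
    average  = m["service_good"]                                 # RULE 2
    generous = min(m["service_excellent"], m["food_delicious"])  # RULE 3
    # Defuzzify with a weighted average of representative tip percentages
    # (a simplification of full centroid defuzzification).
    tips = {"cheap": 5.0, "average": 15.0, "generous": 25.0}     # invented
    weights = {"cheap": cheap, "average": average, "generous": generous}
    total = sum(weights.values()) or 1.0   # guard against no rule firing
    return sum(weights[k] * tips[k] for k in tips) / total

print(tip(service=3, food=8))   # mediocre service, decent food -> modest tip
print(tip(service=9, food=9))   # great all round -> generous (25%)
```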
Fuzzy logic is alive and kicking in our washing machine. Its purpose is to show the time left until the washing program is done, and it feeds on loose socks.
Library catalogs (the electronic kind, not card catalogs) use fuzzy logic to deal with typos in searches.
Back when libraries started implementing online catalogs in the 80s and 90s, they were notoriously unforgiving of errors. People would type an author’s name incorrectly and the computer would say “No results.” Users wouldn’t think, “Wait, did I type that name correctly?” Instead, they’d think “Wow, what a shitty library—they don’t have any books by Mark Twain!” and leave. Fuzzy logic solved that problem.
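As an aside, the forgiving matching described here can be as simple as scoring how close the query is to each catalog entry. A tiny sketch using Python’s standard-library difflib (the mini-catalog is obviously made up):

```python
import difflib

# Made-up mini-catalog of author names.
authors = ["Mark Twain", "Jane Austen", "Charles Dickens", "Toni Morrison"]

def search_author(query, cutoff=0.6):
    """Return the closest-matching authors instead of a flat 'No results'."""
    return difflib.get_close_matches(query, authors, n=3, cutoff=cutoff)

print(search_author("Mark Twian"))   # typo still finds ['Mark Twain']
print(search_author("Jane Austin"))  # common misspelling -> ['Jane Austen']
```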
I believe you, but I don’t think that would be typical fuzzification. I don’t want my system to occasionally say a 6’9 guy is not tall. That guy should be well clear of any gray area.
Here is a page giving an example of fuzzification of height. This author has chosen to say the property “tall” becomes non-zero at around 185cm and is fully true (1.0) at 195cm. This would be the more typical way we’d fuzzify a property like “tall” (personally I’d give it a bigger range, but the point is, it is not only the tallest man ever that is unambiguously tall).
Except the pendulum is swinging the other way now. Interpretable AI is becoming a big topic again, as the models produced by neural networks are not surviving the transition from the lab to the wild, and no one knows why. It is great to have 99.3% accuracy at identifying a tumour (It’s NOT a tumour!) in photos in some data set, only to find that accuracy drops to 58% when you use it clinically.
P.s. - I recently used fuzzy logic in my research into identifying learners in difficulty (formerly known as “at-risk learners”). The idea is that all learners are in some degree of difficulty, and so the question is to what degree do they belong to the label “in difficulty”. It was a radical departure from most of the literature, which preferred a definitive label. For this reason (and others) I’m hoping it gets published. So this was my way of saying fuzzy logic definitely ain’t dead. It’s just pining for the fjords. Perhaps just stunned.
Care to expand, with cites, please? We use neural networks all of the time and test them constantly against real world data, and we’re most certainly not a lab.
Interpretability is indeed important, as many executives/other end users are not willing to accept a model, no matter how well tested, if they:
Disagree with the outcome
Don’t understand how it reached the outcome (although this matters less if they do agree with the results).
Believe that the outcome will cause their division’s resources to decline during future budget allocation exercises.