My graphics card has anti-aliasing to make rough edges look smooth. Does the brain have a similar thing?
And seeing as maths is still using straight lines when the microscope has shown that there aren’t any, isn’t maths lagging behind scientific discovery somewhat?
The straight lines in math are not particularly meant to describe the positions of matter as viewed under a microscope. One might also have difficulty imaging a pun with a microscope, yet no one takes this to invalidate wordplay.
Our brains have edge-detection circuitry as one of the processing layers in the visual cortex. This is why we can make sense of line art. Let’s hope that the extraterrestrial recipients of the Voyager disk have human-like visual systems. If they don’t, the line art human figures won’t make any sense.
The straight lines in math are not exclusively meant to describe positions of matter as viewed under a microscope. That having been said,
A) if you ask scientists about their scientific discoveries, they will note that they use math to describe them,
B) they will in fact note that they use math to describe scientific discoveries even in the specific context of microscopic positions,
C) the math they use in the context of microscopic positions includes and builds upon the basic math of “straight lines” [note that to even say “These particles do not lie on a straight line, but rather, form an angle of so-and-so” presupposes some non-nonsensical conceptual account of straight line, and much more besides],
D) there are also contexts in which they use other math without having need for the math of “straight lines”, and this is fine and in no conflict, because,
E) there is not just one thing studied in math, nor has math stayed still. As science has added new observations, so has math.
Also, F) why should a concept only be legitimized by happening to be literally drawn by positions of microscopic particles in some particular way? In describing the physics of the Mario universe, it is useful to refer to parabolas [e.g., in describing the effects of gravity, in accordance with the quadratic law known not only from experiment but from source code], yet no parabolas ever appear drawn on screen.
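To make that concrete, here’s a toy sketch of the jump arc (the constants below are invented for illustration, not taken from any actual game’s source):

```python
# Toy "Mario-style" jump: constant downward acceleration applied each frame.
# JUMP_VELOCITY and GRAVITY are made-up illustrative constants.
JUMP_VELOCITY = 10.0  # upward speed on the frame the jump starts
GRAVITY = 1.0         # constant downward acceleration per frame

y, vy = 0.0, JUMP_VELOCITY
heights = []
while y >= 0.0:
    heights.append(y)
    y += vy        # move by the current velocity...
    vy -= GRAVITY  # ...then apply constant gravity

print(heights)  # 0, 10, 19, 27, ..., 54, 55, 55, 54, ..., 19, 10, 0
```

The sampled heights are a quadratic function of the frame number, i.e. points on a parabola, even though nothing parabola-shaped is ever drawn.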
Why would it? If you had better eyesight you would be able to see the roughness. Edges that are in fact very slightly rough look smooth merely because our eyes do not have sufficient resolving power to see the roughness. (And if you still wanted the letters on your computer screen to look smooth, your graphics card, and monitor, would have to be a lot better. The graphics card has a very different job to do from the one that your eyes and visual brain are doing.)
The Hamster King is correct to point out that the visual system has edge detecting mechanisms, but they are not really there to remove roughness. They are there to enable us to see objects as separate, and not blending into one another. After all, something can be, and look, rough, and still be an edge.
It does not take microscopes. Mathematicians have been very well aware that there are no straight lines (or perfect circles, triangles, or whatever) in the physical world at least since the time of Plato, in the 5th-4th century B.C. They don’t regard this as ‘lagging behind’ at all, but rather as evidence that math is superior to, more eternally true and more fundamental than, mere physics. Even if there were no physical universe (or if the laws of physics were quite different), the truths of mathematics would still hold.
Indeed. I find all that a bit disturbing. How do you insert macroscopic mental constructs into the microscopic world? I guess that when all you have is a hammer everything looks like a nail - to coin a phrase.
I’ve been looking for a good account of that online.
Hmmm, the world of separate objects.
OK, back to Plato - it always comes back to Plato in this area.
So where did the idea of the straight line come from?
I suppose that it must be from some natural line - and the early mathematicians took their inspiration from that.
Did the ancients really have the wherewithal to invent the straight line if there was no natural precedent?
Seems to me that this perception of a straight line depends upon the particular way our senses depict reality - perhaps animals with keener eyesight never see a straight anything.
Therefore, given different laws of physics (which in our understanding are constructs of our particular brains), and hence given different perceptions and nervous systems, the truths of mathematics would not still hold, and may never arise as such.
For all we know maths is just an Earth ape thing; inhabitants of other galaxies don’t suffer from it like we do.
If I stand at a particular location and you stand at a particular location, we may wish to speak of the location halfway between me and you. Is it necessary for there to actually be some particle at that particular location for us to merely talk about the location?
If not, then, it doesn’t matter that you don’t find infinitely many particles lining up in unbending continua upon microscopic investigation; the structure of the locations comprising the space in which particles live will still be one about which it makes sense to speak of such things as lines [as we can make sense of such talk, if we are forced to contrive to, in terms of such things as halfway-between-locations].
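In symbols, for whatever it’s worth (here p and q are just our two locations):

```latex
% The halfway location, and the line generated by repeatedly halving
% and extending:
\[
  m = \tfrac{1}{2}(p + q), \qquad
  L = \{\, (1 - t)\,p + t\,q \;:\; t \in \mathbb{R} \,\}
\]
```

Nothing in this requires a particle to actually sit at m; it only requires that locations can be averaged.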
Perhaps animals with keener eyesight would still seek to describe the way in which the figures whose outlines they perceive deviate from straightness.
As I said, there are never any parabolas drawn in the Mario games, yet, nonetheless, it is indisputable that parabolas are a useful concept in analyzing the behavior of the Mario universe. Indisputable insofar as we can see their presence explicitly in the source code, though it could be revealed by empirical experimentation as well.
For that matter, you don’t need a visual system to talk about “straight lines”, though it’s natural that our visual system led us there.
Even without any physical space as we know it, the mathematical concept of “straight line” still makes sense in other contexts; mathematically, a straight line amounts to something like “A nontrivial datatype on which you can perform subtraction to get differences, and divide differences to get difference ratios, these algebraic operations behaving in the familiar ways”. That’s all. It models many more things than just locations in physical space.
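A minimal sketch of that “datatype” view (Python here; the timestamp example is just my stand-in for “something that isn’t a spatial point”):

```python
from datetime import datetime

# Linear interpolation needs only "subtract to get a difference" and
# "scale a difference". Anything supporting those operations is line-like.
def lerp(a, b, t):
    return a + (b - a) * t

print(lerp(0.0, 10.0, 0.5))  # 5.0 -- halfway between two numbers

# The same "straight line" structure applies to timestamps: differences
# are timedeltas, and timedeltas can be scaled by a fraction.
print(lerp(datetime(2024, 1, 1), datetime(2025, 1, 1), 0.5))
# 2024-07-02 00:00:00 -- the instant halfway through (leap year) 2024
```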
Yes and no. It’s more that anti-aliasing works because of how our brain does edge detection. It’s much the same reason that when we see green and red really close together we see yellow: it’s not because they add up to yellow light, but because together they stimulate the cones in our eyes in the same way that yellow does; they’re still just individual frequencies of green and red.
It’s believed that the way our brain detects edges is by calculating the second derivative over a localized area; if the value is large enough, we see an edge. Computer vision simulates this by approximating the second derivative and using that value to determine edge strength. Thus, when we look at highly pixelated images we see stronger edges: putting a white pixel next to a black pixel gives a very large value, and thus a strong edge. But if we approximate what the value “should” be when the line doesn’t fall exactly on a pixel boundary, putting some grey pixels between the white and black ones smooths out the gradient and gives us a softer edge. So, really, anti-aliasing isn’t making the edge smoother; it’s taking advantage of how our brains detect edges and working with that to trick us into seeing a smoother edge than is actually there.
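To put some numbers on that, here’s a minimal one-dimensional sketch; the three-point stencil is a standard discrete approximation of the second derivative, and the pixel values are made up:

```python
# Discrete second derivative (3-point stencil) as a crude 1-D edge detector:
# the larger the magnitude, the stronger the perceived edge.
def second_derivative(row):
    return [row[i - 1] - 2 * row[i] + row[i + 1]
            for i in range(1, len(row) - 1)]

hard_edge = [0, 0, 0, 255, 255, 255]           # black straight into white
anti_aliased = [0, 0, 64, 128, 192, 255, 255]  # grey ramp across the edge

print(second_derivative(hard_edge))     # [0, 255, -255, 0]  -- huge spike
print(second_derivative(anti_aliased))  # [64, 0, 0, -1, -63] -- much smaller
```

The grey in-between pixels don’t remove the edge; they just shrink the second-derivative response, which is exactly the trick described above.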
Just because something doesn’t exist in nature doesn’t mean that the concept itself isn’t useful. The human brain works by abstracting concepts. The brain is terrible at doing highly complex tasks like multiplying large numbers, working with large data sets, etc. But what it can do well is abstract these concepts and then work with that abstraction to get useful results.
In this way, the fact that a perfectly straight line doesn’t actually exist is irrelevant: if treating something as a straight line works within the tolerances we need for the task we’re applying it to, why not use the simpler concept? If the human brain actually tried to keep track of all of these specifics, we’d be so overwhelmed we couldn’t actually work with the data.
And it makes sense why the brain evolved this way too. Our brains apply abstract rules when filtering data, working with simpler representations when we can to save time and energy - time being particularly important when hunting or protecting ourselves from predators and other dangers.
That’s an interesting answer, BlasterMaster, with the second derivative - I will have to look that up.
OK
I’m sure you would agree that savants are an interesting exception. I understand that training can push normal people towards those high levels of ability too.
It seems to have a utility for good or ill, certainly.
However, don’t you think there is a certain amount of closed-mindedness on the subject? For a lot of people, the fact that it is useful - in certain ways - seems to be the end of the story.
Isn’t the upshot of this that we cannot know the truth about the universe we live in, despite many grandiose claims, because in the evolutionary scheme of things truth is not useful and has no place in the process of survival and reproduction?
Ever read about Kent Cullers, the blind SETI guy?
He seems to deal with the same physics that sighted people deal with - but I wonder if that is because physics and maths are a matter of manipulating symbolic relationships, rather than really grasping how nature looks - or feels.
Could we say that someone could potentially do maths and physics even without ever having seen or felt anything resembling lines, circles etc.?
I studied computer vision as part of my PhD program, but it’s been a few years. I may be able to find some resources on it if you want, but I can’t promise anything. You should be able to find some reasonable resources by googling computer vision and edge detection, corner detection, and such.
This is just a pet theory on my part, and not something I’ve really put any effort into developing, so take it with a huge grain of salt, but it seems to me that part of what differentiates savants from normal people is precisely their inability to abstract data. As in, looking at a famous savant like Kim Peek, he has a whole bunch of highly specific information in his mind, but he doesn’t really seem able to connect any of it in a particularly useful way, other than answering random trivia questions. It’s not like he could just memorize sports statistics and then become an amazing sports analyst, or memorize weather data and become a premier meteorologist. What I think makes him unique is that he has a highly powerful brain - the ability to compute and store large amounts of information - but none of that effort is used to abstract and create relationships between abstract concepts, where other people will have varying degrees of ability to compute, store, abstract, and create relationships.
Or to relate it back to a vision context, it’s trivially easy for a computer to tell you exactly what color any given pixel is; that’s your raw data. But pulling out of what is basically just a bunch of numbers in a grid pattern the fact that it represents a particular concept like a chair or a person - or, even harder, a specific person - requires abstraction into meaningful concepts and relationships. And by that same token, this is why seeing the zig-zag nature of lines as they are in nature isn’t useful and is ignored by our brains; once I realize I’m looking at a car, I don’t care whether a particular rod or cone cell in my retina is stimulated to see one color or another.
Our brains construct models that make predictions about the universe. When these models make a lot of correct predictions, we say that they’re “true”. Unfortunately, this casual usage of true gets muddled together with the rigorous definition of “true” in formal logic. But, they’re actually two very different things.
In other words, yes, it’s impossible to make a true statement about the universe. The best we can hope for is to make statements that are well-supported and useful.
Yes I will look around first before asking for any of that.
As in, looking at a famous savant like Kim Peek, he has a whole bunch of highly specific information in his mind, but he doesn’t really seem able to connect any of it in a particularly useful way, other than answering random trivia questions. It’s not like he could just memorize sports statistics and then become an amazing sports analyst, or memorize weather data and become a premier meteorologist.
So what’s his motivation? There’s no mansion/Ferrari/babes reward for what he’s doing - yet he and others like him will spend hours compulsively absorbing vast quantities of information. Maybe there is a reward but it isn’t so obvious, or maybe it’s just pure compulsion, or safety-seeking. I guess the commonly held view is that autistic people seek calm and emotional safety in repetitive, highly patterned activity.
How far is this true for what may be called the normal end of the spectrum - your average science student who spends all day with tables and equations - seeing as chasing a career is in large part about seeking calm and emotional safety?
So this is a world of untruth and illusion, but it is right, and indeed inevitable, to utilise untruth if it is profitable. At least, that seems to be where people are pointing.
Forecast, gentle Platonic breezes, gusting to gale force Milton.
Words like “truth” and “untruth” are not helpful concepts when we’re discussing knowledge of the world.
People know things about the world. Sometimes that knowledge is well-supported. Sometimes that knowledge is useful. And often “well-supported” and “useful” go hand in hand.
But some types of knowledge are more well-supported and useful than others. It doesn’t do any good to lump everything together as “untruth and illusion”. That only serves to highlight the inadequacy of “truth” as a means for discussing real-world knowledge.
A lot of edge detection happens in the retina, using a very simple mechanism: neighbor cell inhibition. This happens in our ears too, for frequencies.
Imagine a line of receptors. Each inhibits its neighbors by the same amount (some fraction of the signal it sends to the brain). Take a line of inputs such as …4 - 4 - 4 - 4 - 8 - 8 - 8 - 8… and work out the outputs, assuming the inhibition is 25%: within a uniform stretch every receptor is inhibited equally, but things get interesting right at the step.
This works just as well in 2 dimensions. This is just what’s happening in the retina, and it only does edge enhancement, which is only one small part of line detection. A lot goes on in the brain to do line detection, and we don’t know a lot about it, but we do know that there are neurons that fire based on the presence of lines of particular slopes. Unfortunately the articles I’ve read on the subject don’t go into enough detail for me to understand quite what that means. But it’s very much not simple.
Nitpicking, but with 25% inhibition from each of the two neighbors, shouldn’t input …4 - 4 - 4 - 4 - 8 - 8 - 8 - 8… produce an output of …2 - 2 - 2 - 1 - 5 - 4 - 4 - 4…?
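For what it’s worth, a quick script with symmetric 25% inhibition from each neighbor (ends padded by repeating the boundary values, standing in for the “…” continuation) gives exactly those numbers:

```python
# Each receptor's output = its own input minus 25% of each neighbor's input.
def lateral_inhibition(signal, inhibition=0.25):
    padded = [signal[0]] + signal + [signal[-1]]  # extend the "..." ends
    return [padded[i] - inhibition * (padded[i - 1] + padded[i + 1])
            for i in range(1, len(padded) - 1)]

print(lateral_inhibition([4, 4, 4, 4, 8, 8, 8, 8]))
# [2.0, 2.0, 2.0, 1.0, 5.0, 4.0, 4.0, 4.0]
```

The dip-then-spike at the boundary is the edge enhancement.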