# Explain fractal geometry like I'm an idiot

Well, as tax dollar amounts go, this is pretty small. All you need to investigate fractals is one math professor, maybe a grad student, and a couple of PCs. Hardly a Star Wars missile defense budget.

But, as mentioned, the real point is that basic research is worth it, even if you don’t know what, if anything, useful you’ll get out of it. Nobody knew that investing in quantum physics would give us transistors (and everything from cheap radios up to the iPhone), but it did. Will the investment in fractal research pay off? Maybe, maybe not, but it’s a very good bet that, across a broad range of topics, investment in mathematics research will pay off in some way.

And stock charts.

I wonder what Mandelbrot’s “real world” investment portfolio looks like.

From The (mis)behavior of markets: A fractal view of risk, ruin and reward by Benoit Mandelbrot and Richard L. Hudson, 2004:

This is what I came here to add. The fact that the border can be made longer, by adding extra points or vertices or ‘jagged-ness’, does not mean that it ever becomes infinitely long.

It is easy to show that this is the case. Clearly, each iteration represents a measurable increase over the length of the previous iteration. For example, it may be the case that after the second iteration, the perimeter is four thirds as long as it was before. Now, if you tell me that the perimeter eventually reaches infinite length, I will simply ask you what the length was at the previous iteration (i.e. at the iteration that came before infinite length was achieved). When you can tell me the answer, I will agree that the perimeter becomes infinitely long.

While ftg is right in saying that infinite jaggedness doesn’t necessitate infinite length, the Koch Snowflake’s boundary is, in fact, infinite and encloses a finite area: as you correctly said, with each iteration, the perimeter length increases by a factor of 4/3; thus, after n iterations, the total length is (4/3)[sup]n[/sup] times the original, while the area added at each step forms a convergent geometric series, so the total area converges to a finite limit, 8/5 the area of the original triangle.
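For anyone who wants to watch both quantities at once, here is a quick Python sketch (my own illustration, not anything from the thread or from Mandelbrot’s book) that tracks the exact perimeter and area of the successive approximations, with the starting triangle normalised to perimeter 1 and area 1:

```python
from fractions import Fraction

def koch_stats(iterations):
    """Exact perimeter and enclosed area of the n-th Koch snowflake
    approximation (starting triangle has perimeter 1 and area 1)."""
    perimeter = Fraction(1)
    area = Fraction(1)
    new_triangles = 3          # triangles added at the next step
    tri_area = Fraction(1, 9)  # each bump is 1/9 the area of its parent
    for _ in range(iterations):
        perimeter *= Fraction(4, 3)       # each edge becomes 4 edges of 1/3 the length
        area += new_triangles * tri_area  # bumps added this step
        new_triangles *= 4
        tri_area /= 9
    return perimeter, area

for n in (0, 1, 2, 5, 20):
    p, a = koch_stats(n)
    print(n, float(p), float(a))
```

The perimeter column grows like (4/3)[sup]n[/sup] without bound, while the area column creeps up toward 8/5 = 1.6 and never exceeds it.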

Zeno, is that you?

You’re misunderstanding the nature of limits. The perimeter becomes infinite in the sense that it can be made arbitrarily large by choosing the number of iterations appropriately, not in the sense that it becomes infinite after a finite number of iterations. The Koch Snowflake is the curve that’s left after an infinite number of iterations, and its perimeter is clearly greater than any finite real number.

And if you tell me that the perimeter is finite, I’ll ask you what it is, and tell you the iteration at which it exceeds that.

I’m surprised nobody in this thread has yet addressed the question of how a fractal can be fractional in dimension. The best description I’ve seen for that is how one pixellizes an image.

Let’s say that I have a black-and-white image in a square frame. I want to pixellize that image, using a very simple method: I divide the frame up into an n by n grid of squares, and for each one of those squares, I color it black if it contains any of the image, and leave the square white if it doesn’t. The number of squares I need to cover my image completely will depend on how many little squares I have, and how it scales with n depends on the dimensionality of my image.

For instance, suppose that my image is zero-dimensional, such as a single point somewhere in the frame. No matter how fine I make my grid of little squares, I’ll only ever need a single square to cover that point. The number of squares I need is proportional to n[sup]0[/sup], and my image is 0-dimensional.

Now suppose that instead of a single point, my image is a line drawn across the frame. If I want to completely cover that line with little squares, the number of squares I need is proportional to n[sup]1[/sup], and my image is 1-dimensional. I’d get the same result (with a different constant of proportionality, of course) for any smooth curve drawn in my frame, at least once I got to a high enough resolution to accurately reproduce the curve.

Or my image could be some sort of two-dimensional figure, such as a black filled-in circle or square. Now, I’ll need a number of squares proportional to n[sup]2[/sup] to completely cover the image, which is 2-dimensional. So in general, if I have a d-dimensional image, I need a number of pixels proportional to n[sup]d[/sup] to be black, to pixellize it into an n by n image.

But there are some images for which the number d isn’t an integer. For instance, if I have a Koch snowflake in my frame, and I want to pixellize it, the number of pixels I need is proportional to n[sup]1.26[/sup]. If we’re calling that exponent the dimensionality of the image, then we would say that the snowflake curve has a dimensionality of 1.26, hence the name “fractional dimension” or “fractal”.
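Chronos’s pixel-counting recipe can actually be carried out numerically. The sketch below (my own illustration; the function names are made up for this post) generates the vertices of a deep Koch-curve approximation, counts how many cells of an n by n grid it touches at several resolutions, and looks at the slope of log(count) against log(n), which should come out near log 4 / log 3 ≈ 1.26:

```python
import math

def koch_points(p0, p1, depth):
    """Recursively subdivide segment p0-p1 into the Koch motif,
    returning the list of vertices (excluding the final endpoint)."""
    if depth == 0:
        return [p0]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
    a = (x0 + dx, y0 + dy)
    b = (x0 + 2 * dx, y0 + 2 * dy)
    # apex of the equilateral bump: (dx, dy) rotated by +60 degrees
    c = (a[0] + dx * 0.5 - dy * math.sqrt(3) / 2,
         a[1] + dy * 0.5 + dx * math.sqrt(3) / 2)
    pts = []
    for s, e in ((p0, a), (a, c), (c, b), (b, p1)):
        pts.extend(koch_points(s, e, depth - 1))
    return pts

def box_count(points, n):
    """Number of cells in an n-by-n grid over the points' bounding
    square that contain at least one point of the curve."""
    xs, ys = zip(*points)
    x0, y0 = min(xs), min(ys)
    scale = max(max(xs) - x0, max(ys) - y0) or 1.0
    cells = {(min(int((x - x0) / scale * n), n - 1),
              min(int((y - y0) / scale * n), n - 1)) for x, y in points}
    return len(cells)

pts = koch_points((0.0, 0.25), (1.0, 0.25), 7)
counts = [(n, box_count(pts, n)) for n in (8, 16, 32, 64, 128)]
slopes = [math.log(c2 / c1) / math.log(n2 / n1)
          for (n1, c1), (n2, c2) in zip(counts, counts[1:])]
print(counts)
print(slopes)
```

The slope is only approximate at finite depth and finite n, but it hovers around 1.26 rather than at 1 or 2, which is exactly the point.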

Nice explanation Chronos.

Thanks, all. I suspected that there wouldn’t be a simple explanation, but you’ve pointed me in some good directions, which is what I wanted. I must have taken a bathroom break when they were talking about heart arrhythmia, as I don’t recall that part.

Not so, and your own reasoning shows that it is not so. Or else you have to argue that there is a value for which (4/3)[sup]n[/sup] is equal to infinity.

You can repeat the iteration as many times as you like - all you will ever get is a boundary or perimeter that is 4/3 as large as the one before. There is no value of which you can say ‘four thirds of it will equal infinity’, hence the value can never equal infinity. The length of the boundary will always equal (4/3) raised to the power of the number of applied iterations. This is always going to be a finite number, which you can prove simply by running the same thought experiment I gave you before - go back one step.

ianzin, the thing people are claiming is infinite is the limit as n goes to infinity. No one is claiming (4/3)[sup]n[/sup] is infinite for some finite n.

In terms of non-rigorous pseudo-math, surely you can agree that (4/3)[sup]infinity[/sup] is infinite. As of course is (4/3)[sup]infinity - 1[/sup], (4/3)[sup]infinity - 2[/sup], etc.

In terms of actual math, what we really mean when we say “(4/3)[sup]n[/sup] goes to infinity as n goes to infinity” is: “For any real number you can name, I can make (4/3)[sup]n[/sup] larger than that number by choosing a large enough n.” When mathematicians say “The limit goes to infinity”, they always mean “I can make it as big as I want.”
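That “name any number, I’ll beat it” game is easy to mechanise. Here is a tiny Python sketch (purely illustrative) that, for any bound you name, finds the iteration count at which (4/3)[sup]n[/sup] exceeds it:

```python
def iterations_to_exceed(bound):
    """Smallest n with (4/3)**n > bound -- the 'challenge' form of
    the statement that (4/3)**n goes to infinity with n."""
    n = 0
    length = 1.0
    while length <= bound:
        length *= 4 / 3  # one more Koch iteration
        n += 1
    return n

for bound in (10, 1e6, 1e100):
    print(bound, iterations_to_exceed(bound))
```

No single iteration produces an infinite length; the point is just that this loop terminates for every finite bound, however large.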

After n iterations, you don’t have the Snowflake Curve, you just have an approximation to it. You can make an arbitrarily good approximation to it by taking an arbitrarily high number of iterations, but the true fractal itself doesn’t have any finite number of iterations.

The two concepts that we are dealing with here take students a while to really understand, beyond just being able to quote them. But they get absolutely high as they start to glimpse infinity in a finite space, and as they see fractional dimensions. Both challenge their intuitive grasp of the world, which all seemed so obvious before we threw pre-calc and fractals at them.

I do fractional dimensions with the Koch Curve but also with an experiment with crumpled paper balls. Like this one (links to pdf file):

Fractal balls experiment.
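The analysis in that experiment boils down to fitting the slope of log(mass) against log(diameter) for crumpled balls of different sizes; the slope plays the role of the dimension and is typically reported around 2.5. Here is a minimal sketch of the fit in Python, run on synthetic data generated from an exponent of exactly 2.5 (not real measurements) just to show the recovery:

```python
import math

def fit_dimension(diameters, masses):
    """Least-squares slope of log(mass) vs log(diameter); the slope
    plays the role of the dimension in the paper-ball experiment."""
    xs = [math.log(d) for d in diameters]
    ys = [math.log(m) for m in masses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# synthetic data: mass ~ diameter^2.5 (illustrative, not measured)
diameters = [2.0, 3.1, 4.4, 6.0, 8.5]
masses = [d ** 2.5 for d in diameters]
print(round(fit_dimension(diameters, masses), 3))  # → 2.5
```

With real measurements the slope won’t come out exactly, but it should land strictly between 2 and 3, which is the mind-bending part.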

When you come at the fact that the space they are living in isn’t all nice neat integer dimensions, and approach that idea from a number of different angles, it is a bit mind-blowing. One thing is to do the calculations; the next - which is the part that takes time, as I was saying - is to actually conceive that the three spatial dimensions we know and love have an infinite number of fractional ones between them. Koch curve, Mandelbrot Set and so on between 1 and 2, then crumpled paper balls between 2 and 3.

I love this stuff! It is constantly challenging intuition. I love the fact that we can prove that the world is unpredictable - the lovely butterfly effect and chaotic systems. There is nothing more depressing than the thought of a predictable world.

That example seems a bit off, to me: A crumpled up piece of paper is still two-dimensional, and in fact is still mathematically flat. I can see how it could be considered an approximation to a fractal, but there are better approximations you could use.

See, I’d say I’d merely have to argue that there is no finite value x for which (4/3)[sup]n[/sup] < x for all n, which is obviously true; thus, there are only finitely many values of n for which the curve has a length bounded by a given value.
But, as Chronos pointed out, finitely many iterations don’t give you the Koch Curve, they merely give you an approximation.

Thank you, and thank you HMHW as well. I understand you perfectly, and I still contend that I am correct. I submit that to suggest ‘infinity’ is ‘the limit’ or ‘a limit’ or any kind of limit does violence either to the meaning of ‘infinite’ or the meaning of ‘limit’.

I have had the pleasure of discussing this at length with Lynne-42 in person. She was unable to convince me that I was wrong, and vice-versa. But it was an enjoyable discussion all the same. I am perfectly well aware of what the mathematicians and the text books say, and I’ve been reading about fractals and snowflake curves and so on since my early teens, more than 25 years ago. I nonetheless submit that what I have said stands, and from my point of view no-one - during these past 25 years - has been able to adequately answer my questions, or show me that my thinking is flawed, or that I am simply failing to appreciate how the terminology is used.

However, I have no problem at all with everyone, or everyone who feels they have the advantage of knowing more about this than me, telling me that I’m wrong. I accept that here on the SDMB, given the prevailing intention to ‘fight ignorance’ and so on, it is appropriate to simply say that I’m wrong, and that I don’t understand what I’m talking about, and that readers coming to this subject afresh should ignore me and go with the opposite and prevailing expressions of wisdom. I am entirely comfortable with this.

It wouldn’t be the first time that, finding myself the only one expressing a contrary way of looking at things, I have turned out to be the one who is wrong. Fair enough. However, it would also not be the first time that I have turned out to be the one who has noticed something others have not. And for now, that’s all there is to it.

Are you, a non-specialist, seriously suggesting that you may have noticed an inconsistency in a basic definition taught to every single undergraduate math major?

Words do not have fixed, concrete meanings, but rather have webs of them, related through family resemblances. I do not know what you are taking “limit” or “infinite” to mean, and you may have notions in mind which are reasonable for some contexts, but undoubtedly you are not picking the only meanings, or even the ones most appropriate for this context. At any rate, I see no problem in making the statement, uncontroversial within the mathematical community, that “The Koch curve is the limit of these successive approximations and its length is infinite”. There is nothing particularly fantastic about that usage. But, let us hear what you take the meanings of “limit” and/or “infinite” to be and how this usage does violence to them.

What questions would those be which you would want answered?

These are your words, not mine. You only have to look back over this thread to see that I have not used these words or made this suggestion.

However, since you raise an interesting point, let me address it.

Taking the general case, I would say there is nothing inherently absurd about the notion that a non-specialist, an outsider or someone lacking formal education in a given subject might spot either (a) an interesting variant on the conventional wisdom or (b) a question that is perhaps glossed over within the conventional teaching of that subject, and that perhaps deserves fuller exploration. I would submit that while the person offering the unconventional view is almost always incorrect or simply misinformed, on rare occasions he or she turns out to have something to offer. I believe there are many documented instances of this phenomenon in almost every field of learning, and if you seriously disagree with this (I don’t think you will) then I will try to find cites and examples for you.

This being the case, I would suggest that it is very unlikely that I have anything new to contribute to this particular field of learning, but it is neither impossible nor inherently risible.

For what it’s worth, Ultrafilter, I am neither naturally iconoclastic nor do I seek to play the renegade. However, there have been many times in my so-called education (which was actually nothing deserving of the name), and my professional life, when I was expected to go along with a given idea simply because it was (or was represented as being) ‘the conventional wisdom’, with conformity backed up by the usual carrot/stick scenarios, and I persisted in asking my questions anyway, and it turned out that I was right to do so. I could give you examples both trivial and non-trivial.

I don’t think there’s any harm in asking questions and expressing doubts, and I don’t think ‘majority consensus’ is ever as satisfying as, or a good substitute for, question, discussion, analysis, reasoning and (maybe) proof. I’m trying not to be a jerk about this, and I apologise to anyone who thinks that I am being one. Sorry, I’ll try to learn from those better informed than myself (such as Lynne42, with whom I have discussed some of these issues already). But where I have honest doubts, I try to express them honestly. I apologise to you and to others if I have sounded rather too snippy earlier in this thread. I’m sometimes in a rush, and I can’t always spend as long composing my posts as I would like. Mea culpa.

ianzin, the questions in my post above remain, but I also want to add the following attempt to explain the Koch snowflake phenomenon, and to hear what your response would be.

We have perhaps been glib in explaining what the Koch snowflake actually is. We have told you how to iteratively construct a series of polygons which “approximate” it, but not how to use that series to determine the Koch snowflake itself. So here is one definition of the Koch snowflake, and I want you to forget any consideration of the words “approximate” or “limit” as you interpret this definition; I am not using such concepts: Some points in the plane are contained within the borders of at least one polygon in that series, and some points are not. Consider the locus of those points which are. This gives some region in the plane, and this region has a border. That border is the Koch curve. I do not claim this border is actually one of those polygons in that series, because it is not. I repeat, I am not claiming that at any stage of iteration, that series becomes the Koch curve. I am not even here using the terminology of “limit”. I am simply using that series to aid me in presenting a definition of a particular curve I would like to pay attention to. And is this not a perfectly well defined curve I have described?

Now, as it happens, this border (the Koch curve) can be shown to contain every vertex of every polygon anywhere in that series, and from that we can conclude that its length must be greater than that of every polygon in that series (since the straight-line edges of a polygon are the shortest possible paths between its vertices). And we also know that the lengths of the polygons in that series are (proportional to) 1, (4/3), (4/3)[sup]2[/sup], (4/3)[sup]3[/sup], (4/3)[sup]4[/sup], …; thus, every natural number is less than the length of some polygon in that series and thus less than the length of the Koch curve. Do you have any problem with this?