Every piece of it is well defined.
Deterministic system, correlation, objective function, optimization algorithm. The only potentially fuzzy word is “map”, but that’s why I spent extra time trying to lay out what I meant by it. Now, it’s possible that some people aren’t familiar with these terms. In that case the definition will certainly seem “not very clear”, because they simply don’t have the background familiarity with each individual piece to handle the combined definition. But anyone who knows the different pieces knows this definition.
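To make that concrete, here’s roughly what the pieces look like when you bolt them together, sketched in Python. The names (decide, agent_map, objective) are just mine for this post, nothing canonical; the point is only that every piece in the definition is an ordinary, inspectable object.

```python
def decide(state, actions, agent_map, objective):
    """A "decision" under the definition above: run an optimization over the agent's own map.

    state, actions -- the situation the deterministic system presents to the agent
    agent_map      -- the agent's learned correlations: (state, action) -> predicted outcome
    objective      -- the function the agent is trying to maximize

    Nothing here is random: given the same map and the same situation,
    the same action comes out every single time.
    """
    return max(actions, key=lambda a: objective(agent_map(state, a)))

# Toy usage: the map predicts distance-to-food, the objective is to minimize that distance.
def toy_map(state, action):
    """A made-up map: the agent's predicted distance to food for each action."""
    return {"left": 4, "right": 2, "wait": 7}[action]

print(decide("wherever", ["left", "right", "wait"], toy_map, objective=lambda d: -d))  # -> right
```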
It would hardly be a sin to be unfamiliar with the pieces. I’m not personally familiar with general relativity, but that doesn’t mean general relativity is unclearly defined. It just means I don’t understand the pieces that make it up. Our days are busy. These aren’t things most people need to be familiar with.
I’d agree with WordMan that our objectives are different. My goal was to define “decision” in the context of a deterministic system.
I’m not going to try to defend determinism any longer, nor argue about whether this world might be deterministic, nor deal with definitions that wouldn’t work in a deterministic system. That’s a dead end, as I’m finally beginning to understand. What I want is for people who might not otherwise have had the opportunity to consider what one particular kind of deterministic world might look like, and how we might choose to describe the events in that world using everyday language.
The benefits to this approach I’m leaving up in the air for the moment.
If you’re not interested in the exercise as I have narrowed it, that is of course completely natural and understandable. Family arrived a little earlier on Christmas Eve than expected, so although I felt nearly finished with this post, I couldn’t complete it. Looking again now, I see from the new posts that you are apparently uninterested in the question I’m putting forth here. Which is, of course, fine. It just happens to capture my personal interest, but I’m weird. No one else has to be captivated by this sort of inquiry.
WordMan has it right here. Absolutely, totally right.
Complexity arises from simplicity.
The human mind tends to believe that complex outcomes must necessarily stem from complex inputs. That does not have to be the case. The Mandelbrot set is seemingly complex in form, but that surface-level complexity masks a fundamental simplicity that is almost absurd to look at (the sketch just below shows the entire rule). I want to investigate a bottom-up conception of a world in order to explore exactly those sorts of features. Not necessarily our world, but a world. But since the criticism has come up, I do want to take a moment to discuss the “rigor” of definitions. Because of this comment.
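That Mandelbrot sketch first, so the point isn’t just hand-waving. The entire engine behind all of that visual complexity is one iterated rule, z → z² + c, checked for escape. This is just the standard escape-time test; the particular example numbers are arbitrary.

```python
def in_mandelbrot(c, max_iter=100):
    """Standard escape-time test: iterate z -> z*z + c and see whether it stays bounded."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # escaped, so c is outside the set
            return False
    return True              # still bounded after max_iter steps, so treat c as inside

# Sweep that one rule across the complex plane and you get the whole famous picture.
print(in_mandelbrot(complex(-0.5, 0.0)))  # True: inside the set
print(in_mandelbrot(complex(1.0, 1.0)))   # False: escapes almost immediately
```

That’s the whole rule. Okay, back to rigor and that comment.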
There are basically two approaches for a “proof” of the Intermediate Value Theorem. One way is by saying, look, if I take a pen on graph paper from PointA located at 0 to PointB located at 5, then at some point, I’m going to have to hit all the values between 0 and 5 if I never pick up the pen from the paper.
Or I can point to the “rigorous” proof using the formal notion of continuity. When I was younger, this formal proof used to piss me off. The result was obvious. We have the idea of the thing just from the pen on the paper, so what’s the point of all this extra nonsense? But as I eventually came to see, there are real advantages to putting definitions on a formal basis. It leaves no room for the ambiguity that can sometimes derail a discussion. Without that formality, people might put a second piece of paper down and say the pen never left “the paper” even though it never traveled through the points between 0 and 5. Or they might fold the paper and say that counts. This stuff is very clever. It takes an active imagination to work through the puzzle in different ways.
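For reference, the formal statement behind that “rigorous” proof looks roughly like this, with continuity carrying the epsilon-delta load that the pen-on-paper picture handles by intuition:

```latex
% Intermediate Value Theorem, standard formal statement
\[
  f \in C\big([a,b]\big), \quad
  \min\{f(a), f(b)\} \le y \le \max\{f(a), f(b)\}
  \;\Longrightarrow\;
  \exists\, c \in [a,b] \ \text{with}\ f(c) = y.
\]
% where "continuous at a point" is itself pinned down by epsilon-delta:
\[
  \forall \varepsilon > 0 \ \exists \delta > 0 : \quad
  |x - x_0| < \delta \;\Longrightarrow\; |f(x) - f(x_0)| < \varepsilon .
\]
```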
But it can also comprehensively miss the point.
One way of defining a word is: I know it when I see it.
“What’s green?”
“It’s that color right over there.”
You can point at the aggregate state of the world, the “emergent” properties as we perceive them. But there are potential problems with this. What if you’re talking with a blind man? Then he can’t perceive what you’re pointing at. What if you’re talking with someone with much finer sensory discrimination? Then your definition clumps together features that the person you’re talking with can readily distinguish. It can take a lot of conversation, and potentially a lot of confusion, for the other person to finally figure out that your word “green” fails to distinguish superGreen-A from superGreen-B not because you’re uninterested in separating the two, but because you are physically incapable of perceiving the difference with the current set of eyeballs that you have.
This isn’t some random complaint here.
When people in the past have described their own sensation of eff-doubleyuu, it sounds absolutely nothing like my own personal sensation of eff-doubleyuu. I’m not going to deny that people are experiencing… whatever it is that they claim to be experiencing. I’m not going to say people are feeling an “illusion”. Maybe they have the ability to see more shades of green than I do on a fundamental level, and I can’t recognize what they’re talking about because I don’t have the apparatus for it. Or maybe I personally have the ability to see more shades of green, and so when they point to their experience of eff-doubleyuu, I can’t recognize what they’re talking about because they’re conflating internal experiences that I have the depth of awareness, or some shit, to recognize as several different experiences, each deserving its own word if I were to try to discuss the topic at all.
But what I do know is nobody uses the sensation of eff-doubleyuu to inform them of how close they’re standing to the walls when their eyes are closed.
WordMan is exactly right that I favor a bottom-up approach. Why am I doing this?
For the same reason I wrote the first post in this thread. One or two people were having trouble understanding the definition and implications of deterministic systems. Determinism is highly related to bottom-up thinking. Two sides of the same coin. In the definition I’ve outlined, and in a possible simulation that uses such a definition, we have agents who wander through the world, learn about it, and use their updated map plus some optimization process before they take action, and who can therefore take different actions from one day to the next precisely because their view of how the world they inhabit works has actually changed from one day to the next. One day they run from the bunny, and the next day they attack the bunny. And what’s important (to me personally): we can point to particular parts of the code to understand all of these pieces, as the sketch below tries to show.
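Here is a minimal sketch of what that loop could look like, so there’s something concrete to point at. Everything in it is invented for illustration (the bunny encounter, the starting map, the learning rate); the point is just that the map, the update, and the optimization are each a visible piece of code, and that the agent flees on day 0 and attacks from day 1 onward purely because its map changed in between, with nothing random anywhere in the run.

```python
ACTIONS = ["flee", "attack"]

def bunny_encounter(action):
    """The world's rule for one encounter: same input, same outcome, every time."""
    return 1.0 if action == "attack" else 0.0   # in this toy world, attacking the bunny always pays off

def choose_action(value_map, day):
    """Optimization step: after trying each action once, take whichever the current map rates best."""
    if day < len(ACTIONS):
        return ACTIONS[day]                      # a built-in "try everything once" rule
    return max(ACTIONS, key=lambda a: value_map[a])

def run_days(num_days=5):
    value_map = {"flee": 0.0, "attack": -0.5}    # the agent's map: it starts out believing the bunny is dangerous
    for day in range(num_days):
        action = choose_action(value_map, day)
        reward = bunny_encounter(action)
        # Map update: nudge the estimate for that action toward what was actually observed.
        value_map[action] += 0.5 * (reward - value_map[action])
        snapshot = {a: round(v, 3) for a, v in value_map.items()}
        print(f"day {day}: {action}  map={snapshot}")

run_days()
```

Run it twice and you get the identical printout, which is the whole point: the behavior changes from day to day, but the run itself never does.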
The underlying point here is that the system is extremely complex, not predictable except by literally computing it out, and the “decisions” that the agents can make can be extremely powerful within the context of the system.
Given enough sophistication, the internal computation of an agent might eventually become so powerful that we would not be able to tell the difference between a “decision” in the deterministic system and a “decision” in our everyday world as we believe we understand the word.
And that is the point that I have been trying to make. A deterministic system can potentially give us the pieces we need. The question is where we apply the word “decision”. Do we apply labels to our ignorance, or do we apply labels to our knowledge? Do we say the proof of the theorem is the line drawn by a pen on the paper, or do we say the proof is something deeper than that, and that the line on the paper is a visual analogy of that deeper principle? (And definitely not a perfect analogy. Most of the paper and ink is actually empty space between atoms, not a “true” continuous line.)
We have “decisions” as we think we see them in the real world, but we’re not entirely sure how they work. And now we have “decisions” as potentially defined inside a program that will run the same way every time it is started. One system we understand very well, because it is deterministic and therefore well defined. The other, we understand less well. Our brains are a very small part of this universe, and even if this universe happens to be deterministic, we have no way of actually “proving” that. Instead of using the same word for the two scenarios, we might consider using different words. If anyone wishes to use “pseudo-decision” for the deterministic definition above, then that is completely understandable. It draws a distinction between the well-defined toy system we have imagined, and the poorly-defined everyday experiences as we live them.
But for me?
The lack of any way to distinguish one from the other is the dividing point. I put my label on the system that is well defined. I do this because I will be able to see more clearly whether this definition falls short. This is actually the same sort of procedure human beings normally rely on. When people try to define their own “decisions” based on their personal experience of eff-doubleyuu, they’re pointing at their own head, similar to pointing at the color green. But when they point at other people’s decision-making, they don’t have access to that. They’re just assuming that other people feel the same sorts of things inside that they’re feeling, and then they claim that both kinds of experiences are “decision-making” even though we have no direct access to what’s actually going on inside other beings’ heads. And this is exactly the right way to do things! People can watch their housecat, which seems to go through some sort of deliberative process before it decides to jump on the desk and lie down on the keyboard while they’re working. The people don’t “know” that the housecat is experiencing volition. They’re taking their internal experience and extrapolating it to what they believe other beings must also be experiencing. They are extrapolating their own mental process to the cat in a limited manner. This is the correct approach.
I’m doing an exactly analogous thing here.
We have this deterministic system that we can imagine, and we have agents doing things inside this system that I am personally choosing to call “decisions”. I’m using the word here because this is the most clearly defined place to use it. I am not saying it is certain that all of our real-world decisions must necessarily follow this path. I don’t know that. But I’m attaching the label to a piece of knowledge rather than a piece of ignorance, and I’m saying that although I’m ignorant of the world, I cannot see any fundamental philosophical distinction between the “decisions” I see in the real world and the “decisions” I see playing out on the computer screen. I don’t need two words to describe events that have every appearance of being identical. I don’t need more words for “green” than my current perceptual toolkit is able to perceive and process.
The agents inside that deterministic system make decisions.
I’ve got a fundamental handle on what they’re doing. I don’t necessarily have a fundamental handle on what real-world people are doing. The question at this point is where we give priority to labels. Do we label our understanding with our clear definitions, and then try to apply those clear definitions to situations that seem to match them in all aspects? Or do we give the label to our ignorance? There are, I believe, certain benefits that accrue from putting important labels on our positions of deepest understanding, and working out from there. Because the definition is clear, it’s easier to see the places where it doesn’t seem to fit.