No. It’s not really about intelligence per se: the weather is a chaotic system, so the limiting factor is how precisely you can know the state of the system as an input to your model. Engineering limitations strongly constrain this, but even beyond that, you would ultimately need perfect information about quantum states, which is impossible in principle (as we currently understand quantum mechanics).
It’s an interesting side question to wonder how far in the future we could hypothetically predict the weather just based on having perfect classical information about the system. But it’s tangential to the question at hand.
No, for the same reason, but also more:
Let’s say I offer you coffee or tea, and I am going to truthfully tell you my specific prediction of which one you will choose. However, my knowledge of your personality also tells me that whatever I say, you will choose to defy me and pick the other thing.
What happens next?
This paradox illustrates that the notion of being “locked in”, in a fatalistic way, to some future state is a misapprehension. Determinism is not fatalism. And perfect predictive power, even if quantum physics allowed it, would also depend on the context in which that prediction is used and how we interact with the system.
Weather is chaotic for people with our intelligence, but if intelligence weren’t a factor it could possibly be easy to predict the exact weather. But we are talking about a theoretical intelligence that we would never encounter. Most things at the atomic level are predictable, but currently we say that there is an inherent level of randomness, which is very low. Maybe 95% of everything is predictable, and whatever 5% or less remains is actually random, and that you could say is free will. But I’m just talking off the top of my head, not really a thesis or theory.
Perhaps you are thinking of “chaotic” as just being an adjective meaning random or something. But actually, when we are talking about chaotic systems, we are talking about the mathematical concept of Chaos Theory which means something specific and is not the same thing as randomness or complexity.
There are many chaotic systems that are defined by very simple, 100% known algorithms. The issue with predicting the behaviour of these systems is not to do with our understanding of the algorithms but with how accurately we can measure the initial state of the system. And there seem to be physical laws that prevent anyone from having perfect information about the initial state of a system – so let me repeat this and underline it: it’s nothing to do with intelligence.
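To make this concrete, here is a minimal sketch of that point using the logistic map, a textbook example of a chaotic system (the specific parameter choice and function names here are mine, for illustration). The update rule is trivially simple and fully known, yet two starting points that differ by one part in ten billion quickly produce trajectories with no resemblance to each other:

```python
def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), a simple but chaotic rule."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two initial states differing by 1e-10 -- far tighter than any real
# measurement of a physical system could ever be.
a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-10, 50)

# Early on the trajectories track each other closely; within a few dozen
# steps the tiny initial error has grown to the size of the whole system.
early_gap = abs(a[1] - b[1])
late_gap = max(abs(x - y) for x, y in zip(a, b))
```

The algorithm is perfectly understood; the unpredictability comes entirely from the impossibility of knowing the initial state exactly, which is the point being made above.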
I would not say that because I would not define free will as being random.
My position is that choices are necessarily deterministic; what we even think of as a choice is a weighing of alternatives based on knowledge and personal preferences. Even if the universe is not deterministic, “choice” necessarily exists within the deterministic part.
Criminal punishment as a deterrent for the benefit of society survives determinism, but so far as I can tell, the concept of personal responsibility must fall away. I’m not comfortable with the idea of criminal punishment in the absence of personal responsibility, although I acknowledge it still benefits society. For some, that is justification enough; not for me, IMO that line of thinking leads to dark places. That is why jcklpe attempted to argue that restorative justice is compatible with personal responsibility.
Yes, more or less. I mean, I would say we need to be careful about what we mean by “responsible”, but broadly speaking I agree. And indeed upthread I argued that the justice system should be about rehabilitation, public safety, restoration and deterrence. Only.
Justice systems based on a notion of punishment for evil actions taken out of “free will” are going to struggle as we understand more of how the brain works. We are inevitably going to find neurological features strongly correlated with particular behaviours. At the least we’d end up in a situation where there is “diminished responsibility” for almost every crime.
I would say that’s a bait and switch. I can defend restorative justice without believing in personal responsibility. I can believe that if an agent’s brain is wired to do X, and X causes material harm, then they need to pay up for X.
Perhaps this might seem unfair, and we can talk about that but there is no inconsistency in such a policy.
I wouldn’t say that you alone are responsible for making that family whole. I would say society at large has a responsibility. It’s not solely yours merely because you had capacity to save someone and didn’t for whatever reason.
I think this is a pretty good response to the drowning person idea above.
Sociopaths exist. People with anxiety disorders exist. People with distorted perceptions of reality, to the point of being classified as “psychotic”, exist.
Do I really care if someone is punished for misdeeds, or do I want to ensure that mistakes are not repeated in the future?
Justice as retribution is deeply seated in our psychology. It’s probably an evolutionary adaptation. But it’s not very useful in the grand scheme of things when we have better alternatives.
Indeterminacy isn’t “free”.
Maybe you’re using a particular definition of “personal responsibility”. If so, I don’t really know what you mean by it.
Let’s look at some definitions from a quick google search to get a reference point:
Personal responsibility is the willingness to both accept the importance of standards that society establishes for individual behavior and to make strenuous personal efforts to live by those standards.