Just an FYI, I love this discussion but was afraid this was turning into a major hijack and didn’t want to be responsible for that, so I continued the discussion in Cafe Society:
Sure, you can come to that conclusion. But the OP doesn’t say that explicitly, so I wouldn’t assume that’s what the thread was about. And it really seemed to me like all of the initial responses understood that.
But even if not, it seemed by the same logic that griffin only meant current AI, not some hypothetical future AI that is in fact a person. His entire argument is about objects that are not persons. (And it is one I agree with, including finding it offensive to genuinely argue that someone who is anti-modern-AI is racist.)
Other than HMS, does anyone think that it’s possible to be racist towards a non-person? And does anyone here think it’s impossible to be racist towards some AI people?
The only argument I can see against it is if we say that racism only applies towards someone of the same species. But we’ve long used the term “racist” for prejudice against other species in fiction, so it seems likely we would do so even for a species in another whole kingdom, as long as they are people.
Oh, and I agree that, given the OP often jokes about clankers, it’s likely that their sister was also making the typical “that’s racist” joke. The joke isn’t offensive, though I guess you could argue it muddles things a bit.
No, just regular, everyday non-causal matter, governed by the uncertainty principle like the rest of the universe.
Even if you don’t accept all the consequences of the uncertainty principle, in practice the brain is entirely non-deterministic: there is absolutely no way to even define what it means to have “the same inputs” that would lead to “the same outputs” (beyond saying the inputs are every single particle in the universe outside your brain and the outputs are every single particle in your brain). Unlike a computer program, which is entirely deterministic in both practice and theory (or almost so, excluding things like stray cosmic rays hitting a logic gate).
I’m not sure precisely what you mean by that. Which consequences am I not accepting? It’s certainly true that the atoms that make up your neurons are impacted by the uncertainty principle, but by the time you scale that up to the level your neurons function at, I don’t buy that it would have any real impact.
I don’t see how what we can do “in practice” today in 2025 affects whether or not a brain is deterministic. In principle (and likely in practice within the lifetime of a child born today), it should be possible to simulate a human brain neuron by neuron, at which point you certainly could feed it the same inputs and see whether you get the same outputs.
You don’t need to simulate “all the particles in the universe” (unless your senses are very different from mine and you can currently experience all the atoms of the universe! If so, DAYUM, where’d you get those shrooms?). Your senses only process so much information.
Why would you exclude those things? Your neurons being impacted by quantum noise is not that dissimilar to a cosmic ray hitting a logic gate; and in fact, modern computer chips are so small that they have to use various techniques to correct for the possibility that quantum tunneling will impact a stray electron.
Actually, I guess there’s one way you can escape determinism on a macro scale: measure something non-deterministic and make an arbitrary decision based on it.
So if you want to make a non-deterministic decision, number your options and use this link to choose.
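The idea above can be sketched in a few lines of Python. This is just an illustration, not the linked service: it draws from the OS entropy pool via the standard `secrets` module (which mixes hardware noise sources), rather than from a true quantum RNG, but the shape of the decision procedure is the same. The function name `nondeterministic_choice` is mine.

```python
import secrets

def nondeterministic_choice(options):
    """Pick one option using OS-supplied entropy rather than program logic."""
    return options[secrets.randbelow(len(options))]

# Number your options, let physics (well, the entropy pool) decide:
lunch = nondeterministic_choice(["pizza", "salad", "ramen"])
print(lunch)  # different runs can print different options
```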
…
Damn. That’s actually super philosophically interesting.
Chaos theory says that it can: even very tiny causes can have large effects. And our brains and bodies have a great deal of that sort of chaos built into them.
That said, I don’t think that means it’s impossible to simulate or duplicate the functions of the brain. If anything, it indicates the opposite: that it’s easier to emulate brain functions, because our brain’s architecture is clearly highly tolerant of small variations or we’d promptly drop dead.
In fact, I’ve come to the conclusion that you could run a human consciousness on a machine simpler than the brain, and without most of it being much like the brain at all. Partly because much of the brain is devoted to running the body, and partly because our conscious mind, what we think of as “us”, appears to be very detached from the unconscious functions of the mind. We’re basically mostly a user interface for the unconscious mind.
I suspect that if the small part of our brain that is conscious were emulated accurately, the much larger underlying unconscious mind could be replaced by wildly different software and the conscious mind would be unable to tell the difference, because that’s what it’s for: to mask the underlying functions and present a simplified representation to the outside world.
Even if that is the case, that is the exact opposite of deterministic. We use simulations to produce numerical estimates when no deterministic analytical solution is possible. The Earth’s atmosphere is effectively non-deterministic: no human will ever be able to record the position and velocity of every particle in the atmosphere and plug them into a big equation that can be solved to give their positions a month from now. So instead we use a simulation to estimate the most likely future states.
Because including them makes computers completely unusable. It’s actually possible to set up logic gates so that they are effectively nondeterministic, but doing that makes them completely useless. A huge part of designing hardware is ensuring computers behave deterministically. Neural networks (and databases, sorting algorithms, JRPGs, and every other computer program) only work because computers are 99.9999…% deterministic. If f(1) == true on one run and f(1) == false on the next, everything breaks.
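To make the f(1) point concrete, here’s a toy Python sketch (the names `f` and `deterministic_f` are just illustrative). A deterministic function can be cached, compared, and reasoned about; a function whose output wobbles on identical input breaks any code that assumes f(1) is stable.

```python
import random

def deterministic_f(x):
    # Ordinary program logic: same input, same output, every run.
    return x % 2

def f(x):
    # Simulates a flaky logic gate: same input, output sometimes flips.
    return (x + random.getrandbits(1)) % 2

# A deterministic function can safely be cached:
cache = {1: deterministic_f(1)}
assert cache[1] == deterministic_f(1)  # always holds

# The flaky version offers no such guarantee. Code that assumes f(1)
# is stable (caches, sort comparators, branch conditions) will
# eventually observe f(1) != f(1) and misbehave.
```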