Is man merely a machine?

What does it mean to make a decision not determined by input and state? Forget whether the machine the decision is made on is built out of silicon, neurons, or ectoplasm - what does it mean to make a decision with no knowledge of what you’re deciding about? Because that’s all in the input and state, you know. How can you decide which pie you’d like to eat if you don’t know the pies are there, or which kind of pie you prefer?

The addition of a soul doesn’t actually help in the determinism/nondeterminism debate - you can deduce that the mind is ‘mostly deterministic’ based purely on the fact that we don’t spend all our time lying on the ground spastically twitching, insensible to the world. It doesn’t matter where we hide our thoughts; they have to operate using logical causality, either always or so close to always that it’s indistinguishable for practical purposes.

All that souls add to the argument is to allow a person who prefers to think they’re totally unpredictable to continue entertaining that illusion even when they have a materialist model thrust upon them. Their minds operate the same either way; they just can continue not to admit it.

My position is that the concept of free will predated the philosophers, and that it meant “my choices are mine, not made for me by the gods or by fate.” These people believed that there really were things outside of themselves that could be reaching into their mind and controlling them like puppets, and “free will” was the notion that that wasn’t happening.

If you don’t want to blame the philosophers, fine, but at some point somebody got mixed up about the difference between “outside control” and “unpredictability”. Maybe the blame lies with the theologians, with all their talk about God being ‘inside you’, muddling the difference between external puppetry and internal control because the phone call is coming from inside the house. Maybe it was a conflation between externally imposed destiny and the inevitability/reliability of a physical mechanistic system. Who knows. But at some point the ball was dropped, and this harebrained idea that “free will” means “unpredictability” took hold.

But I still maintain that prior to all the confusion added by somebody, “free will” was a straightforward concept. It doubtless was debated whether we had it (what with God meddling with pharaohs’ minds and such), but I don’t believe that it was a concept that laymen weren’t expected to be able to understand, as it seems to be now.

Even worse, the concept of a soul can’t genuinely explain how some people’s mental states change so radically. Then again, souls are supernatural, so they can ‘explain’ anything. But I hope you don’t think that I think souls exist.

A deterministic view is that, say, the aroma of two pies forces you into choosing one. But you can know what pies are available without this information determining your choice. One might toss their soulish dice, or consult their soulish notes. Don’t expect me to make a model of how nonexistent souls work, but they might well not be machines.

Yeah, the mind works in a way that looks a lot like a machine (which does not mean deterministic), which is a good argument against souls. But you’re assuming a lot about these nonexistent things.

The deterministic view is that the aroma of the two pies knocks over a series of molecular and electrical dominoes that encodes information which the brain processes (through more molecular and electrical dominoes), combines with other received input and stored internal state (received and stored via more molecular and electrical dominoes), and then makes calculations about (through yet more molecular and electrical dominoes (there was a sale on them)) which lead it to come to a determination/decision. (Which it acts on by knocking over yet more dominoes).
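That chain of dominoes can be caricatured in a few lines of code - a toy sketch, with every name and number invented here, just to make ‘decision as a function of input and state’ concrete:

```python
def decide(smell_input, stored_state):
    """Toy caricature of the domino pipeline: sensory input is encoded,
    combined with stored internal state, and a determination falls out."""
    # Encode the sensory input (the first run of dominoes).
    encoded = dict(smell_input)
    # Combine with stored state, e.g. learned pie preferences (more dominoes).
    scores = {pie: intensity * stored_state.get(pie, 0)
              for pie, intensity in encoded.items()}
    # The 'decision' is just the outcome of the whole chain.
    return max(scores, key=scores.get)

state = {"apple": 0.9, "rhubarb": 0.2}   # stored preferences
print(decide({"apple": 1.0, "rhubarb": 1.0}, state))  # -> apple
```

The point of the sketch is that the output depends on the stored state just as much as on the aroma: change the preferences and the same input yields a different decision.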

The notion that the smell dominoes are the controlling factor here, the ones worthy of note, is like saying that one drives from Los Angeles to New York by turning on their headlights. It overtly misstates the importance of the various steps in the process.

I’m not pointing this out to criticize you, but rather because it’s notable that this sort of overt and obvious error is the core premise underlying the “we don’t have free will because determinism” position. Only by blandly tossing our brains into the “it’s just dominoes” pile, and then claiming that the dominoes are in control and we aren’t, can they make their argument - which is a little like saying that people playing dominoes don’t have hands of them to play, because if you ignore the dominoes in their possession, their domino hands don’t exist.

I’m not assuming, I’m deducing! I’m doing science! I’m observing the alleged behavior of the alleged souls and concluding things based on my observations.

And the notion that the mind works mechanistically doesn’t disprove souls; it merely indicates that souls themselves are built of moving parts which operate mechanistically. Which we already know - souls (allegedly) do things, and things that do things have moving parts. The more things they do, the more underlying internal structure they must have to handle the tasks they (allegedly) carry out. If souls remember things, they must have some sort of persistent memory storage. If souls do logic, they must have some sort of calculation mechanism. If souls respond to input, they must have internal state that changes when the input is received.

If there’s one thing that we know about all real things, it’s that they must have sufficient complexity to carry out their behaviors. If souls existed (which they don’t), they would not be exempt from this - magic or not.

Machines aren’t self-aware; we are. I’d have to say that difference is as important as important can be.

I’ve yet to figure out a way to definitively prove that something is not self-aware. I mean, sure, the lack of apparent internal complexity of a rock does seem to suggest that there’s not much introspection going on in there, but I’d have to chop it open to tell that the parts aren’t there, and even then there could be something I’m overlooking. Were I to crack open my computer or my microwave oven I think I would find parts that do I know not what, and beyond that based on its behavior I’m pretty sure that my microwave oven hates me and wants me to die.

Not yet they aren’t, but please demonstrate that they can’t be. But you need to define self-aware first. Computers do look at their internal states and do things based on them (like slowing the clock down if they’re too hot), but I don’t think that’s self-awareness. So, what is?

Have you ever debated with someone who actually believes this stuff? I have. If you tell them that drugs clearly affect mental states, and that changes to the brain can change personality, they’ll say that there is something like a radio receiver in parts of the brain receiving the soul’s commands, and that physical changes to the brain only make it look like personality has changed, by affecting the receiver.
Yeah, I know. Flat-earthers make more sense. But I’ll finish this hijack by saying you can use logic to analyze a supernatural entity.
If we ever see one, we can see if this is true.

Voyager,

This condition in microprocessors is usually the result of testing a location in memory that has not been cleared or initialized - the error being in the clear-memory routine.
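A toy illustration of that failure mode (all names and sizes invented here): a clear routine with an off-by-one bug leaves the last cell uninitialized, so a later test of that location reads leftover power-on garbage, and the fault is in the clear routine rather than the memory itself.

```python
MEM_SIZE = 8

def power_on_memory(size):
    # Simulate uninitialized RAM with an arbitrary power-on pattern.
    return [0xAA] * size

def buggy_clear(mem):
    # Off-by-one bug: the last location is never cleared.
    for i in range(len(mem) - 1):
        mem[i] = 0

mem = power_on_memory(MEM_SIZE)
buggy_clear(mem)
# Testing the uncleared location reports spurious 'data'.
print(mem[-1])  # -> 170 (0xAA), garbage from the uncleared cell
```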

Perhaps we could divide the “free will” issue into two parts: necessary condition and proximal event. The necessary condition is the brain - its intrinsic qualities, pre-wiring, education/experience, and operational errors (race conditions, etc.). The proximal event is the immediate thoughts and actions. When acting on proximal events/thoughts we do exercise a degree of discretion (generalization), no matter how small. The results of that discretion accumulate as part of the necessary condition that will influence responses to future proximal events. The combination of deterministic and stochastic processes allows us to adapt to our perceived environment.
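One way to caricature that split in code - a sketch under invented assumptions, not a claim about how brains actually work: deterministic wiring scores each option at a proximal event, a small stochastic term supplies the ‘discretion’, and each outcome accumulates back into the wiring that shapes future choices.

```python
import random

def choose(options, wiring, rng, noise=0.1):
    """Deterministic scoring plus a small stochastic 'discretion' term."""
    scored = {o: wiring.get(o, 0) + rng.uniform(-noise, noise) for o in options}
    return max(scored, key=scored.get)

def update_wiring(wiring, choice, reward=0.05):
    # Each choice accumulates into the 'necessary condition'.
    wiring[choice] = wiring.get(choice, 0) + reward

rng = random.Random(0)             # seeded for reproducibility
wiring = {"tea": 0.5, "coffee": 0.5}
for _ in range(20):                # each proximal event nudges future ones
    c = choose(["tea", "coffee"], wiring, rng)
    update_wiring(wiring, c)
print(wiring)
```

Run it and early, partly random choices snowball: whichever option gets chosen more often ends up with stronger wiring, i.e. the system adapts to its own history.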

That’s what it looks like to me, but I’m an engineer, not a philosopher.

The OP’s question - is the human brain just a machine? - is the inverse of the more common question: can a machine emulate the human brain?

This latter question was discussed in a thread just 7 weeks ago, as well as in a thread 4 years ago, and I’m sure in other threads as well.

This OP is more specific.

There is a semantic problem here. Brains and computers can perform some of the same operations but the terms are not interchangeable. A computer is not a brain and the brain is not an electronic computer. The computer is assembled from gathered components by skilled labor. The brain is self organizing and is produced by unskilled labor.

Both are machines, in the sense that they are both assemblages of physical components with no mystical content. And, the components for both are dug out of the earth, so are sourced equally. However they are processed differently. A CMOS computer element is composed of silicon and aluminum, but that has no implications relating to the glass windshield and aluminum covering of an airplane. At the level of the assembled machine they are dramatically different.

The premise that all machines are ultimately capable of all functions, just because they are machines, is not valid.

If you are talking about our bug, not even close. The most effective way of testing for the bug was having the machine sit at the Solaris prompt. When we got really lucky we could insert probes into the silicon and see the signal on the suspect line rise with no external cause we could see.
I suspect you have no experience in debugging real hardware issues.

What specific functions are they not capable of doing?

Current non-biological machines are not capable of self organization. The brain creates and wires itself.

Given an arbitrary problem with unknown terminology, no computer can create a solution.

An example: Many years ago (1957) I was working on analog computers with analog servo systems. I wanted to make a digital servo system. I built a few circuits using 3A5 triodes and came up with pulse width modulation. It was a technique I had never seen. So:

Of my own volition I defined a problem

I designed and constructed test circuits

I designed, assembled and tested a servo loop configuration that was entirely new to me

Computers do not do that. It would be as if Watson, of its own volition, began asking the Jeopardy questions.

Not strictly true. FPGAs can have their hardware rewired by reloading their configuration memory. And if you are implementing something through simulation or emulation you can rewire as much as you want.

Computers, using genetic algorithms, have designed totally new hardware. And that computers cannot generate Jeopardy questions today doesn’t mean that it is fundamentally impossible. I have read an article about computers that generate English-style cryptic crosswords, including the clues, and I believe that many cheapo crosswords are computer generated today. Not the ones in the Times, of course.
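A minimal genetic-algorithm sketch in the spirit of that claim - a toy, not real hardware design, with every name and number invented here: bit-strings stand in for candidate circuits, and selection plus mutation converges on a target configuration that nobody wrote down explicitly.

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]        # stand-in for a desired behavior

def fitness(genome):
    # Count how many bits match the desired behavior.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rng, rate=0.1):
    # Flip each bit with small probability.
    return [1 - g if rng.random() < rate else g for g in genome]

rng = random.Random(1)
population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]              # select the fittest
    # Elitism: keep the parents, fill the rest with mutated offspring.
    population = parents + [mutate(rng.choice(parents), rng) for _ in range(15)]

best = max(population, key=fitness)
print(best, fitness(best))
```

The design never appears in the program’s source; it emerges from variation and selection, which is the sense in which such systems are said to have ‘designed’ new hardware.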

Ouspensky was an interesting man, an esotericist, theosophist, and mystic. If we were having this discussion 100 years ago he would probably be regarded as representing the cutting edge of metaphysical thought. These days it seems likely that everything he believed was either wrong or ‘not even wrong’. But his ideas might teach us something, although I’m not sure what.

Actually the Watson scenario is a good test. It is far simpler than the servo problem.

Watson has all of the resources necessary to create and ask Jeopardy questions. If Watson were conscious, it could, and likely would, of its own volition, begin asking the Jeopardy questions.

Watson cannot do that.