Is man merely a machine?

Voyager,

Well yes, sort of. The FPGA must be designed into the computer architecture and the patterns are stored.

I wonder - are there any computers that fondle their FPGAs just to see what they do?

Field programmable means exactly that. While usually the patterns are stored, there is no reason they cannot be generated in response to conditions. In fact, there was research at Stanford around reprogramming the FPGAs - dynamically - if some parts of them were defective, in order to work around the problem.
Kind of like what our brains do.

As for your other comment, no one is arguing that computers today are conscious. They are clearly not. The question is, what can’t they do in principle? Not being able to do something now doesn’t mean that it can’t be done tomorrow. I suspect that generating Jeopardy questions is a lot easier than answering them.

The fallacy of the OP is the assumption that all machine technologies are equivalent and can achieve the same properties. That is simply not true. Organic and inorganic building blocks have different properties. Systems built from them will exhibit only extensions of those properties. Computers are adding machines that the human brain can configure to do elaborate calculating and management functions. The opposite is not true. The brain is not an adding machine.

Do you have a reference for your genetic algorithm example? What did they use for a fitness function to generate “new” hardware?

Not the adding machine fallacy again. Adding machines can’t learn, computers can and do.

If you look at the brain as just a bunch of threshold logic neurons with lots of connections, you might be able to convince yourself that the brain can never think either, just like computers. You’d be wrong in this case also.
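For anyone who hasn’t seen it, the “threshold logic neuron” picture mentioned here can be sketched in a few lines. This is just the textbook McCulloch-Pitts-style model (the function names are mine), not anyone’s actual brain simulation:

```python
# Minimal threshold-logic neuron: fires (outputs 1) when the weighted
# sum of its inputs meets a threshold.
def threshold_neuron(inputs, weights, threshold):
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# An AND gate as a neuron: both inputs are needed to reach threshold.
and_gate = lambda a, b: threshold_neuron([a, b], [1, 1], 2)
# An OR gate: either input alone suffices.
or_gate = lambda a, b: threshold_neuron([a, b], [1, 1], 1)
```

The point being that a single unit like this is obviously not “thinking” - and yet wiring billions of them together apparently does the trick.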

ETA: One of the big problems in AI today is that we don’t really know how a neural network configured itself to solve a problem, which is a far cry from them being directly configured by humans.

Why would Watson “likely” start asking Jeopardy questions of its own volition? What would its motivation be to do that?

Observing humans, it seems evident to me that human action is driven by internal motivations, based in biology, emotion, or cognition. We do things because we want to do them, for one reason or another.

Computer programs generally have a very limited, very controlled set of things they ‘want’ to accomplish, due to the fact that we use them as tools. The choices they make are thus relatively predictable, because we generally know the ends to which they are striving at any given time. If we knew that about people, they’d look pretty predictable too.

Sorry for the late response, I hadn’t read the latter parts of this thread until today.

The usual deterministic view of the brain is that the decision of which pie to eat is determined by the physics/chemistry of your brain, getting inputs from your senses. People who claim there is a free will reject that it’s just physics.

But the problem with that is that even if there is a soul making this decision, then what is it using other than determinism based on inputs, plus possibly some randomness? What else is there?

I definitely do not believe this, but my understanding of the soul is that it is supposed to be non-physical, existing independent of the body, and therefore would not be controlled by physics. The soul’s decision would be influenced by inputs, of course, but not controlled by them.
In other words, magic.
BTW just physics is not deterministic, as we all know. Whether that means free will is another matter.

I propose the following experiment to determine whether P.D. Ouspensky actually believes this: Someone walks up to P.D. Ouspensky and slaps him really hard across the face, then says, “Gosh, P.D., it’s too bad that external influences and external impressions lead to me slapping you, but I did not produce the action of slapping you and thus I am not at fault. Now I feel that in a few seconds, external influences and external impressions are going to cause me to slap you again.”

If Ouspensky responds by agreeing that the slapper took no wrong action and that any slapping which occurs is the result of things external to any person, we can take his philosophical claims seriously. But if he wants the slapper to choose to stop slapping him, then he’s just admitted that he doesn’t really believe what he says.

(And yes, I’m aware that he’s been dead for 70 years. It’s more like a thought experiment.)

Right, and bear in mind that the soul is only brought into this context in the first place to be an “X factor” for our decisions.
So admitting that we don’t know what souls use other than input data and neural reasoning, and there must be some other thing we don’t know about, really means admitting the soul concept is completely redundant.
(I am not suggesting that you believe in souls, just arguing against that position).

Again though, I would urge anyone thinking about this topic to not forget the third option: that the concept of “free will” is at best incoherent, and so it’s meaningless to talk about whether it does or does not exist.
The problem with saying it doesn’t exist is that we are tacitly agreeing “free will” has been clearly-defined, and in the free will debate the definition is usually vague and/or nonsensical.
Not to mention that people think that saying “There is no free will” actually has implications for their actions, and usually jump to some Fatalistic interpretation.

Since the soul would be a non-physical source of our personality, if there was a soul our personality would be immutable. However since we have mapped areas of the brain affecting our personality, and have modified personality with drugs and surgery, we can be pretty sure that the soul concept is falsified. So the concept is worse than redundant, it’s wrong.

I always start by saying that free will is not falsifiable, since to prove determinism we’d have to be able to predict behavior based on internal states and instantaneous inputs, which is pretty much impossible. And for determinism to have any practical impact we’d have to do it in real time, which is even more impossible.
So we know our actions are not free - as anyone with a phobia could tell you - nor are they deterministic. So I think I agree with you in saying that the term free will is one with no objective meaning, like four-sided triangle or tri-omni God.

I’m willing to accept (aka pretend) that magic has unknown mechanisms - that it follows its own rules that we don’t know about, like how lightning followed rules we didn’t understand back before we, well, understood it. However I’m not willing to accept, even theoretically, that magic can’t be observed externally and deductions about its function made. (Unless it doesn’t exist, I mean - things that don’t exist are difficult to observe.)

Thus if I observe a magic spell bringing a dolly to life and making it go all Toy Story, I can still observe that it’s reacting to things around it, and deduce with certainty that it’s either actually reacting to the things, or it’s not. How it’s reacting isn’t my problem but I can be certain there’s a mechanism that makes it happen, simply because it’s happening. (And yes, that mechanism could be “the puppeteer observes, then makes the puppet react” - that’s still a mechanism.)

Just because it’s imaginary doesn’t mean we can’t do science on it. :smiley:

I prefer option 4: we define the word in a coherent way that makes sense and is compatible with external influences and internal preferences, because of course it is, if we’re going to even pretend that it’s something humans have. Any philosopher who attempts to define the term in a way that is not compatible with humans reacting to external influences or having internal preferences is simply setting up a straw man to knock down.

I agree. I have lots of Unknowns from the early 1940s - which were all about rational magic. However, the internals of a soul might be a black box, which you could maybe model somewhat but not see inside.

The thing you quoted was from Mijin, not me. Just to give credit where credit is due.

Crap! Sorry. I try to keep my copy-pastes in order, I swear…

Voyager,

A Turing machine is a single-bit serial system with a single-bit accumulator and a single-bit adder. It can only add. The significance of the Turing machine is that all numerical processors are just more of the same. Given time and clever programming, a Turing machine can produce the same ultimate results as the largest supercomputer. They are both numerical calculators, hence adding machines.

A hammer is a device for realizing force from acceleration. It is a tool that, properly managed, can build a house or form metal or kill someone. But the hammer does not gain any mystical properties from these uses. It is just a device that produces force from acceleration.

The same is true of computers. They can be used as tools to multiply, divide, and subtract, but the tool never executes anything but add or the logic operations - AND, OR, XOR.

The brain is a pattern-matching system. Its components operate in parallel. The brain is not an infallible logic device. It is computer-like, but it is not a numerical computer. It has some microprogramming, but it learns (is programmed) by mimicry. The brain will believe anything.

So, we obviously are machines that have some commonality with computers. The brain is not a numerical computer, so it is not axiomatic that it can be replaced by a Turing machine. The ability and state of any individual brain is the product of a huge number of random events.

The brain constantly evaluates, and reacts to, its environment. These are independent, willful acts. They are not random. They are acts of ‘free will’ because: they are instantaneous products of the brain; they are not predictable; and they may not be in the best interest of the individual making them.

You continue to neglect the decision-making ability of Turing machines and thus computers. From the Wiki entry on Turing machines:

Adding machines cannot make decisions.
How things are implemented is not important. Basically you can design anything with just a NAND gate. We don’t do that for the sake of efficiency. (Not counting analog stuff and I/O, of course.)
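To illustrate the NAND point: NAND is functionally complete, so every other Boolean gate falls out of it. A quick sketch (function names are mine; inputs and outputs are 0/1):

```python
# NAND is functionally complete: every other gate can be built from it.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)                 # NOT from one NAND

def and_(a, b):
    return not_(nand(a, b))           # AND = NOT(NAND)

def or_(a, b):
    return nand(not_(a), not_(b))     # OR via De Morgan

def xor_(a, b):
    n = nand(a, b)                    # the classic four-NAND XOR
    return nand(nand(a, n), nand(b, n))
```

Nobody builds chips this way, exactly as said - it just shows that “what the basic block is” tells you almost nothing about what the system can do.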

A hammer, like an adding machine, does not have a conditional branch instruction.

How does this cover jamming a value into the program counter based on the condition of a register?
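That “jam a value into the program counter based on a register” trick is exactly a conditional branch, and it’s the whole difference between an adding machine and a computer. A toy illustration (the two-instruction machine here is made up for the example):

```python
# A toy machine whose only control-flow trick is "jam a value into the
# program counter when a register is nonzero" - i.e. a conditional branch.
def run(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "dec":                 # decrement a register
            registers[args[0]] -= 1
            pc += 1
        elif op == "jnz":               # jump if register is nonzero
            reg, target = args
            pc = target if registers[reg] != 0 else pc + 1
        else:
            raise ValueError(op)
    return registers

# Count register "a" down to zero with a two-instruction loop.
regs = run([("dec", "a"), ("jnz", "a", 0)], {"a": 5})
```

A hammer has no equivalent of that `jnz`.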

Content Addressable Memories are pattern matching systems also, and are designed that way. Not to mention that computers can be programmed to do pattern matching also. It can be simple like a hash in Perl or complex like picture recognition.
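The hash kind of pattern matching looks like this in Python (a dict standing in for the Perl hash; the example keys and values are invented):

```python
# Exact-match pattern lookup with a hash table: the "pattern" is the key,
# and matching is a constant-time lookup rather than a search.
patterns = {
    "GATTACA": "movie title",
    "TATA": "promoter box",
}

def match(sequence):
    return patterns.get(sequence, "no match")
```

A CAM does the same job in hardware, presenting the pattern to all entries in parallel.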
As for microprogramming, that is what my PhD is in, and I was also a world’s leading authority™ in the field. When I graduated I recognized that the lifespan of this area was limited, and moved to another field.
The microarchitecture of a machine has ways of making decisions also, and you can certainly emulate a Turing machine in microcode if you wished to.

We’re clearly not the equivalent of a Turing machine since we don’t have an infinite tape. I agree that we are the product of random events, but you can set up a computer which changes its programming based on random inputs also. The hardware of the brain is based on random events also, but so is the hardware of a microprocessor. For instance, large internal memories are redundant, and have spare rows or columns which get swapped in when failures are detected. From the programming point of view the memories may look the same, but physically they can be very different.

I agree that they are not predictable, but whether that means the brain has free will is not at all obvious, as has already been discussed. And why does something not being in the best interest of the person have anything to do with anything? Sexual compulsions have often led to people doing things not in their best interest. Cite: Antony and Cleopatra. Extreme thirst can lead to drinking seawater, not a good idea.
I don’t know what free will is, but you are hardly convincing that we have it.

Voyager,

Thanks for the response.

I did not make an argument for free will. We can observe that brains direct independent, willful acts. Actions that are not pre-programmed by an external source. Computers cannot do that. I gave the example of problem selection and solution.

The point made by Turing is that the basic structure of the computer is the adder. With a single bit serial adder you can perform all numerical computer operations. The currently popular Harvard and Princeton architectures are just expansions of the Turing machine.
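For what it’s worth, bit-serial addition really does reduce everything to one full adder reused once per bit position, with a single carry flip-flop. A sketch, assuming least-significant-bit-first bit lists (function name is mine):

```python
# A bit-serial adder: one full adder plus a carry bit, clocked once per
# bit position (least-significant bit first), as in early serial machines.
def serial_add(a_bits, b_bits):
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                    # sum bit
        carry = (a & b) | (carry & (a ^ b))  # carry out
        out.append(s)
    out.append(carry)                        # final carry becomes the top bit
    return out

# 6 + 7 = 13, LSB first: [0,1,1] + [1,1,1] -> [1,0,1,1]
```

Whether that makes the adder *the* basic structure of a computer is, of course, the point in dispute.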

Your post makes my point with the NAND block example.

Computers can emulate brain operations but it is not axiomatic that they can function as brains. The neuron is not a Turing machine because a neuron is not constructed from numerical adders. The brain is an electrochemical, parallel hybrid (analog/digital) system.

Pattern matching by computers is a function of software logistics. It is not an intrinsic operation of the basic adder. The opposite is true of the brain. It is a pattern-matching system - something we all have experienced.

If I were a determinist (which I’m not) I’d respond that the acts of the brain can be considered a function of its structure which is determined by genetics (external) and the fetal environment (external) and memories and learning from experiences (external) and the inputs present when the brain’s decision is being made. Determinists say the result is a function of all these things. Do you have something that affects a decision which does not originally come from outside?

Actually, if you read the Wiki article, it is not clear that the functions implemented by the core of a Turing machine are well defined. The decision-making ability is more fundamental.
Though Turing’s paper predates the first digital computers, do you have a cite that it affected their architectures? I’ve never seen this mentioned. I don’t think the machine Turing built at Bletchley Park had a “von Neumann” architecture. (Scare quotes because he didn’t invent the architecture, just was the first to write about it.)

My point is that adders aren’t even the basic building block, and that saying that computers are just adders is just as silly as saying they are just NAND gates.
A very small percentage of the silicon of a microprocessor has anything to do with adding or adders.

It is definitely not axiomatic, but that does not mean the computer emulation of the brain can’t be demonstrated by example.
I don’t know if anyone has constructed an emulated neuron from logic, but I don’t think it would be all that difficult. And I doubt it would even include an adder.
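To make that concrete: a fixed-threshold “neuron” can be written from bare AND/OR logic with no adder anywhere. A toy sketch (a three-input majority gate, which fires when at least two inputs are active):

```python
# A three-input majority gate - a fixed-threshold neuron that fires when
# at least two of its three inputs are active - built from plain AND/OR
# logic, with no adder in sight.
def majority3(a, b, c):
    return (a & b) | (a & c) | (b & c)
```

Real neuron models are of course weighted and analog-ish, but the point stands that threshold behavior doesn’t require an arithmetic adder.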
BTW, there is a trend to implement traditionally analog functions, like radio, purely digitally, since the circuit becomes easier to shrink that way.

I know what software is, and what logistics is, but I have no idea of what you mean by software logistics. The CAM I mentioned above is not built using an adder. It is a basic block in most cell libraries. But in any case you are contradicting yourself, since if you admit that computers can pattern match, and if you claim that computers are just adders, then clearly adders can indeed do pattern matching. So you might want to think this through some more.