So where IS there a will?

It always upset me in school when somebody would explain some scientific concept in terms of what this or that “wanted”. You know, “The electron WANTS to be more stable so it moves down to a lower-energy orbital”. This explanation is great and all, but since an electron has no real will or thought, it doesn’t really explain what’s going on. Obviously there are other explanations that do these phenomena justice.

So then here I am, pondering optimization of resources and saying to myself “Ok, so party A WANTS the most of blahblah”, when I had a flashback to the oh-so-unsatisfactory explanations of yore. Then I say to myself, “Self, why is it that we are satisfied with saying party A wants something, but not an electron?” Obviously there is a difference, but I can’t see what kind of difference allows this sort of property to emerge.

The only scenario I can picture:

The electron scenario is like rolling a heavy ball bearing down a board at an angle. It follows a simple law that allows you to easily predict where it ends up. There is no “will” making it end up there, just a very obvious connection.

With “party A”, the flat board is replaced with one of those boards with the pegs in it, except that somehow the pegs get rearranged based on where the ball ends up. By some mechanical wizardry, the place where the ball lands rearranges the pegs in such a way that each time the ball lands closer and closer to some desired point (desired somehow by some other process?), and so we can say that the peg-board “wants” the ball to end up at that spot.
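To make that concrete, here is a minimal sketch in Python of the kind of feedback loop I have in mind. Everything in it (the target spot, the size of each nudge, boiling the whole peg arrangement down to a single number) is made up purely for illustration; it is not any particular real algorithm.

```python
import random

# Minimal sketch of the peg-board feedback loop described above.
# All values are illustrative assumptions, not from a real algorithm:
# the whole peg arrangement is summarized as one number, and each
# landing nudges that number toward a fixed "desired" spot.

TARGET = 0.0   # the spot the board "wants" the ball to reach
NUDGE = 0.3    # how strongly each landing rearranges the pegs

pegs = 10.0    # assumed starting peg arrangement

for trial in range(20):
    # Where the ball lands: the current peg setting plus a bit of noise.
    landing = pegs + random.uniform(-0.5, 0.5)
    # Feedback step: shift the pegs so the next landing tends to be
    # closer to the target than this one was.
    pegs -= NUDGE * (landing - TARGET)
    print(f"trial {trial:2d}: ball landed at {landing:6.3f}")
```

Run it and the landings drift toward 0.0. Nothing in the loop “wants” anything; the apparent wanting is just the correction step applied over and over.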

Ok, so this is a weird type of post in which I fleshed out a few things as I typed, so I guess at this point my question would be more along the lines of the following: Is the situation I just described somewhat accurate? I know there are certain computer algorithms that take their own outcomes and refine themselves (I saw a cool Java applet once of an algorithm learning to balance a stick or something like that), and the eventual end result is what that process “wants”? If so, it would seem that the electron scenario is the same thing except it’s already at its desired result, so then why does it seem so hollow and unsatisfactory to say the electron “wants” something?

Does this even belong in Great Debates? When I thought of the question it sounded a bit less black and white; maybe now it’s more of a GQ thing, but I really can’t tell.

Well, embodied in the abstract concept of “to want” is a host of assorted actions. While its usage in normal conversation tends to imply a conscious will-to-{whatever is wanted}, it is easy shorthand for describing strict tendencies.

Electrons, being (so far) fundamental subatomic particles, have very precise behavior. With the proper caveats we may use semantic shorthand to describe the behavior. The proper caveats, of course, are often ignored. :(

Of course, if, like me, you often hold that consciousness isn’t as special as we think (an illusion?), then you would have no semantic quandary with saying what an electron wants or what a person wants.

Yeah, this was the problem I’d reached by the end of my post. And yet, there is certainly some property that emerges when complexity reaches a certain level, one that allows you to tell the electron’s “will” apart from a human’s (or that of other “sentient” animals, whatever that may mean or apply to).