I’m not a great fan of excess definitions—they lead to the Socratic fallacy too easily. We’ve often got a perfectly good grasp of a concept even without giving a proper definition—indeed, the fact that we can see whether a definition is apt or not entails that we have an understanding of the concept apart from the definition. A lot of people hold that since Gettier, nobody’s really come up with a good definition of ‘knowledge’, but that doesn’t mean we don’t know what knowledge is—indeed, the fact that we can see that Gettier cases don’t constitute knowledge despite fitting its ostensible definition as ‘justified true belief’ means we understand what knowledge is apart from that definition.
Moreover, many things have a claim to being indefinable as such, for instance, subjective appearances—I can’t define my experience of greenness, for instance; I can at best define the wavelength of light that produces this particular experience. So, to the extent that will is something that is experienced—which to me is its most salient aspect—the definitionist game doesn’t really apply.
For present purposes, and with the above caveats, however, I think I’d be alright with something like ‘intention to bring about a certain state of affairs’.
This is dodging my point, though: free will might not explain anything, but it’s not intended to, and to apply the concept of falsifiability is just to commit a category error.
Then your experience must differ from mine—I do experience my actions as free. For instance, if I now decide whether to type A or B—B—my experience is such as to include the possibility of choosing A instead. I experience the world as open to my choice, and my will as filling that opening. I experience my will, my self, as the source of the action—I don’t experience, say, the boundary conditions of the universe as its source. The experience might have been different: I could have an experience, for instance, of consulting an inner randomness generator and producing an action on that basis, or of looking up a value in some database and acting accordingly.
One could easily imagine a robot that, upon being asked why it typed B instead of A, answers: well, I looked up the color of the central cell of the cellular automaton Rule 30 at the generation corresponding to the current timestamp, which was black, so I typed ‘B’. Such a being would not experience its choices as free, and they wouldn’t be; but that experience is not my experience.
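For concreteness, the robot’s lookup procedure could be sketched like this in Python—a minimal illustration, assuming the standard initial condition of a single black cell; the function names and the black→‘B’, white→‘A’ mapping are just the ones suggested by the example above:

```python
def rule30_center(generation: int) -> int:
    """State (1 = black, 0 = white) of the central cell of Rule 30
    after `generation` steps, starting from a single black cell."""
    width = 2 * generation + 1      # wide enough to contain the light cone
    cells = [0] * width
    cells[generation] = 1           # single black cell in the centre
    for _ in range(generation):
        new = [0] * width
        for i in range(width):
            left = cells[i - 1] if i > 0 else 0
            right = cells[i + 1] if i < width - 1 else 0
            # Rule 30: new cell = left XOR (centre OR right)
            new[i] = left ^ (cells[i] | right)
        cells = new
    return cells[generation]

def choose_letter(timestamp: int) -> str:
    """The robot's 'choice': black cell -> 'B', white cell -> 'A'."""
    return 'B' if rule30_center(timestamp) == 1 else 'A'
```

The point of the example survives the details: the procedure is fully deterministic, so a being acting on it could cite the lookup, not itself, as the source of the action.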
Not everything, just every other story about how things happen—that is, mainly, causality and chance. Because the typical argument is that the latter two are well defined, while free will isn’t; but that’s just false. Each is ultimately just a black box, because our scientific investigations tell us nothing whatever about how one thing makes another happen. They tell us the regularities of how things happen, but nothing more.
So if one is committed to the view that the arguments against free will are cogent, then one should equally well hold that there is no such thing as causality, or chance, and that how things happen is just a great mystery. If you’re holding that sort of stance, then I’m not arguing. All I’m saying is that it’s equally reasonable—or unreasonable, as the case may be—to believe in free will as it is to believe in causality, or randomness. You can validly say that, in each of these cases, we just don’t know. But you can’t claim to be able to establish for certain that there’s no such thing as free will, in particular.