Lack of Freewill doesn't mean lack of choice

Ah, I got whooshed :slight_smile:

Can exist, or can be conceived of existing? I would say there is a crucial difference between these two. The fact that something can be conceived of bears little weight as to whether it is actually possible, and even less with regard to whether it actually manifests in reality, without something that can be pointed to as supporting the hypothesis in question.

Acting as though causality exists seems to work well for a multitude of tasks. Indeed, even for a philosopher writing down their thoughts, wherever they may have come from. From this starting point, I’m having a very hard time understanding why the non-existence of causality can be something that doesn’t need anything more than to be internally logically consistent for it to be deemed valid. How does it then differ from any internally logically consistent model that can be conceived of, and what is it that determines its utility or explanatory power over some other model?

It’s hard for me to see the use for purely hypothetical models for which there is no proposed way to observe any evidence to support them. There is too much of an aroma of a ‘hidden god’ around something like that, and I struggle to get it out of my nostrils.

This seems like either a misunderstanding or misuse of that phrase. I’d say we certainly need more than that to make reliable predictions. The relationship of correlation and causation implied here differs critically from what I understand that classic warning to be saying. Instead, its meaning is that correlation alone is not enough to conclude that two events have a causal relationship.

If correlation implied causation, we might assume that Nicolas Cage movies are deadly around water. But this would be at best a hasty conclusion.

[…] correlation alone is not proof of causation. If we truly wanted to say that one of these variables caused the other one, we would need to explain how Nicolas Cage movies are related to pool deaths. And we’d need evidence that the two things were connected.

Without this, we’re left with a spurious correlation (i.e., two things that coincidentally overlap in some way). And we cannot draw any useful conclusions from this kind of relationship between variables.

Those thoughts are simply another step along the way. They are still the result of many different interacting components, which are all part of our universe. In a purely physical universe, the mind is in no way separate from the brain. Likewise, the action that follows a decision is the next step in the chain of events on one level. We first consider different options as to which action to take, then decide and act accordingly. We are “free” to decide – subject to any and all relevant factors – but we are not free in some transcendental manner, uncoupled from the rest of the universe. We have options on one level, but we cannot tear ourselves free from our biology. There are, of course, numerous ways to influence our own decisions – things like medication and “bottled courage”.

Whether decisions are actually made consciously, or whether they happen on an unconscious level and our conscious mind just serves to reinforce the decision by rationalizing etc., is a different matter. But it doesn’t change the fact that if the mind is produced by the brain, it is also affected by whatever happens in the brain. The decisions made and actions taken by a person are affected by all the internal processes of the brain, which in turn are affected by inputs from the body, including things like alcohol, which is quite a well-known tool for changing the decision-making process in significant ways.

Yes, after all, that’s how we know the Earth is flat—because maps are! :wink:

More seriously, how we think about things obviously places no constraints whatsoever on how things are. If I model the solar system based on gears, clockwork, and springs, even if that allows me to make accurate predictions, that doesn’t mean that the planets run on celestial gears. If feng shui tells me to arrange my living space such that a dragon could pass through unhindered to increase well-being, and that arrangement does happen to increase my well-being, that doesn’t mean dragons exist.

Moreover, I’ve given explicit examples of models that don’t include causation and which are just as successful in reproducing the regularities of the world, and making predictions about their continuation. So this just sort of falls entirely flat.

It’s just baffling to me how you still can’t (or won’t) see that the same things can be said about causation. Causation never made any sort of prediction at all; the prediction comes from the existence of regularity. What upholds that regularity—causal relations, the structure of the block universe, random chance, or free choice—doesn’t matter one whit for the prediction.

But at any rate, that misses the point I was making. Will, intention, is something that has a certain aspect of teleology—it’s directed at a certain aim or object. If we will something, we strive to realize a certain state of affairs; hence, it’s a perfectly ordinary sort of explanation for a certain state of affairs being brought about to say that some agent intended it to obtain. Something otherwise mysterious can become clarified by reference to an agent’s intention—it’s perfectly ordinary, when trying to explain somebody’s behavior, to think about what they’re trying to achieve. It’s typically much less informative to think about the same situation in terms of causes—if you slap me, and I ask you why, a story about how your neuronal activity triggered certain motor neurons that led to contractions in certain muscles, causing your arm to move in such-and-such a way isn’t the sort of explanation I’m looking for.

And even in the realm of predictivity, the will can be usefully appealed to. If there is such a thing as free will, then it’s what brings about one state of the universe rather than another, in the same way that a random coin toss does. So then, knowing the entire history of the universe up to that point won’t suffice to predict a situation’s outcome; but knowing the will, in the same way as knowing the outcome of the coin toss, does.

Not to my way of thinking. What I’m saying is that there’s precious little that empirical observation and scientific theorizing tell us about the way stuff happens, and that thus, other modes of enquiry must be appealed to (an example might be Kant’s transcendental philosophy in response to Hume’s challenge to the knowability of causation). Most criticism of free will comes from a naive scientistic point of view, coupled with an unquestioned reliance on the notion of causality. But appealing to (natural) science to settle the matter is just using the wrong tool for the job, and upon closer inspection, causality isn’t exactly an innocuous notion itself.

So what I’m saying is that before we can really go and talk about free will, we have to reorient our initial stance and question some ingrained assumptions. That doesn’t mean there’s anything to the notion—I think I lean more to skepticism about it than to belief in it—but even if we reject it, we should work to reject it in its strongest possible form, and for the right reasons.

I don’t follow how this part hangs together. It seems that the second sentence is meant to support the assertion in the first, that it’s ‘indisputable’ that we know a lot about causes and effects. But I’m not clear on how it’s intended to do so—we can calculate necessary conditions and construct objects even if we’re living in a simulation where there obtain no causal relationships between successive events, but the CPU (shorthand for ‘the computer’s hardware and functioning’) manifests as the efficient cause for every phenomenon, or in a block universe, or in any universe that shows some manner of constant regularity.

Let’s consider an example. It’s often observed that exceptional performance is followed by mediocre achievements, and vice versa, that sub-par performance leads to an excellent follow-up. One might come up with a causal story about this: somebody who’s performed well, and perhaps received accolades to that effect, will feel much less pressure, and be, perhaps, less motivated to excel afterwards; while somebody who’s failed to meet expectations might be motivated to really apply themselves and do better next time. This model will, to some degree of accuracy, capture the phenomena, and allow making predictions. (It’s also not something I’ve just made up out of whole cloth: Daniel Kahneman, in Thinking, Fast and Slow, tells the story of how he was brought in as a consultant by the military, who were puzzling over why their pilots always seemed to underperform after having received accolades for an exceptional performance.)

But it’s dead wrong. What we’re really observing is a simple statistical effect, namely, regression to the mean. Even in cases where there’s no causal relationship between the quality of repeat performances, we’re going to observe the above behavior. We can make predictions on this basis, and have them come true, with some accuracy. So the mere fact of predictive success does not imply a causal relationship.
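
To make the statistics concrete, here is a minimal sketch (Python, with made-up numbers for skill and luck): each ‘performance’ is just a stable skill level plus independent noise, so one attempt has no causal influence whatsoever on the next, and yet the top scorers on the first attempt reliably come back down on the second, while the bottom scorers improve.

```python
import random

random.seed(1)

N = 100_000
skills = [random.gauss(100, 5) for _ in range(N)]    # stable underlying skill
first  = [s + random.gauss(0, 10) for s in skills]   # attempt 1 = skill + luck
second = [s + random.gauss(0, 10) for s in skills]   # attempt 2 = skill + fresh, independent luck

# take the top and bottom 1% of first attempts and see how the *same people* do next time
ranked = sorted(range(N), key=lambda i: first[i])
bottom, top = ranked[:N // 100], ranked[-(N // 100):]

avg = lambda idx, scores: sum(scores[i] for i in idx) / len(idx)

print("bottom 1%: attempt 1 avg", round(avg(bottom, first), 1), "-> attempt 2 avg", round(avg(bottom, second), 1))
print("top 1%:    attempt 1 avg", round(avg(top, first), 1), "-> attempt 2 avg", round(avg(top, second), 1))
# both groups drift back toward ~100 on the second attempt: pure regression to the
# mean, with no causal story about pressure or motivation doing any work
```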

Moreover, it’s also clear that we never ‘observe causality manifesting itself’. Take Skinner’s pigeons: seeing their (random) actions correlated with reward (being fed), they increased those behaviors that occurred in proximity to the reward, hoping to thereby facilitate its coming (they developed ‘superstitions’). Most of these correlations will, of course, be spurious, and repeat performance will not cause repeat reception of the reward; but some might not be. Skinner might have resolved to always feed a pigeon that takes three steps back and turns in a circle, for instance.

But the crucial point is that even if the pigeons, or we, hit upon a real correlation, a real regularity, that mere fact doesn’t tell us why that correlation exists. We might stipulate that it’s because there’s a causal relationship between the behavior and the reward. And there well might be. But we could also be at the mercy of some nefarious experimenter. Or we could just have gotten lucky. Or that’s just how the block universe is set up. Those might be constructed examples, but this is an in-principle sort of question. The simple fact of behavioral success doesn’t ensure that the ‘superstitions’ behind our behavior track some real aspect of the world.

In a sense, that’s just right, yes. But one should take care to point out that nobody’s presenting this necessarily as a realistic way for things to be—it’s an objection to the logic of the inference that goes from observed regularities to causal connections. If it’s claimed that we can know causal relations, then that inference must be sound; a counter-example, even a constructed one, shows that that’s not the case. To eliminate the counter-example, it’s then not sufficient to argue for its implausibility, but it would have to be shown impossible.

That’s a lot of questions, and answering all of them would take us too far into a side-thread, I fear (if you’re interested in occasionalism, perhaps you might want to open up a thread about it). Personally, I have no interest in occasionalism as a metaphysical option; I don’t believe in god, and even for the simulation-based occasionalism I raised, I don’t think it’s possible to simulate consciousness, so the question is rather moot.

I do think it’s wrong to look for reasons for accepting occasionalism (or rejecting it) in the natural sciences, or in our observation of the world. Just as there’s nothing in science that drives us towards accepting causality (or rejecting it), there’s nothing that drives us towards accepting occasionalism. I find it mainly intriguing as a foil for certain entrenched ideas about how the world works. If you’re interested in that sort of thing, I’ve elaborated on it here.

No, quite to the contrary—what we know about the structure of the universe, in an empirical sense, doesn’t actually take us closer to knowing something about causality, hence, we might just as well talk on the level of billiard balls as on the level of strings.

Again, these are different sorts of things. We know a lot about how matter and forces interact. We don’t know—by means of empirical observation—if the regularities of that interaction are due to causal mediation. They could be due to the whim of god setting things up just like that, and in particular, god might change his mind at any time. Or, we could be living in a simulation that’s set up to run one way for the first 10^{61} time steps, after which gravity suddenly turns repulsive.

Well, that’s not at all what I’m doing, apologies if I made it seem that way. We have every reason to have faith in the natural laws, and every reason to base our endeavors on our understanding of them. Indeed, it’s the only rational option. But that doesn’t give us license to infer the mechanism by which those natural laws are upheld—this is something we can talk and reason about, but not something we could claim to know definitively via empirical means. We can’t, from the inside, determine whether we live in a universe in which an event A has some inherent sort of power to bring about event B, where all occurrences of A are set up such that they are followed by B, where a third factor—god or the CPU (how’s that for an updating of Spinoza!)—ensures the ‘constant conjunction’ of both, or where we’ve just been getting lucky.

But that will always be the case, to one degree or another. We’ve all got a perfectly good intuitive understanding of free will—the ability to do whatever we intend to do, in a way such that doing so is due to nothing but our choice—but if I were to put that as a capital-D Definition, then obviously there’d be tons of holes that could be poked into that, and we’d be talking semantics and ‘but in post #325, you said that a fleublr is a thing which gleurbs, now it seems you’re saying it blurghs, but the dictionary definition of blurgh means that it’s something that blarghs, which you can’t do while you gleurb!’, and I’m simply not interested in that kind of a discussion.

Perhaps as a more concrete example, ‘knowledge’ was for a long time taken to be defined as ‘justified true belief’ (JTB), after Plato’s Theaetetus. But in 1963, Edmund Gettier wrote a three-page article outlining a couple of cases of JTB that nevertheless don’t seem to amount to knowledge. Philosophers have been trying to pin down a better definition ever since, some proposing monstrosities of up to fourteen conjoined propositions.

But what does that actually help? Do we understand knowledge better, after those fourteen-clause beasts have been proposed? Did we lose our understanding of knowledge after Gettier wrote his paper? No: in fact, Gettier couldn’t have proposed his counterexamples if our knowledge of, uh, knowledge ever was exhausted by JTB—because proposing such a counterexample amounts to showcasing something that is JTB, but not knowledge; but if we had been using JTB as a definition of knowledge, then obviously, whatever isn’t JTB wouldn’t have been knowledge, to us. So we evidently have a kind of access to concepts that isn’t mediated by definition, and in fact, trying to pin down those concepts using definitions may be either misleading or unhelpful.

It’s true, of course, that this carries a risk of people talking past each other. Indeed, I think we do so far more than we typically believe. But definitions don’t help, since we just as well might be talking past each other regarding that definition (what exactly counts as a ‘justification’, for example?). All we can do is exhibit our concepts to each other—after all, that’s how they’re learned in the first place. You don’t sit down your toddler with a dictionary to teach them words; that would be wholly unproductive. You use words around them, show how they work, what they do: and that’s what discussions such as this one do.

No, again, sorry if that’s how it came across. I was making a general point regarding how learning, including the learning of concepts, works in humans. We don’t read a program, an algorithm, and then know how to do something; we learn by imitation and experimentation/training. The great mathematician and polymath John von Neumann once said: “In mathematics you don’t understand things. You just get used to them.”

I think there’s a truth in that pithy quote. If you listen to the professor explaining some proof or technique, that’s by far not enough to master it. You have to apply it, do the calculations, get stuck, get frustrated, experience sudden bolts of inspiration and muddle through doggedly. You have to have a few soufflés collapse on you before you get it right; reading the recipe alone doesn’t suffice.

So if I just give you an algorithm of bike-riding—tense those muscles, push the pedal, move the handle, press the brake, that sort of thing—then even if you perfectly understood that instruction, that definition, that doesn’t mean you’re suddenly able to ride a bike if you weren’t before. For that, you’ll need a bit of training and some bloody knees.

It’s the same with concepts. You have to see them in action, see what they do, what reactions they evoke, try them out yourself, observe the reactions you get, read what’s written about them, and so on; then you’ll get an understanding of the concept that a few lines of definition can never convey.

I don’t see how—can you elaborate? To me, ‘unknowable’ is stuff where there’s a fact of the matter we just don’t have access to, like what happens behind a black hole horizon (as long as you’re outside of it, at any rate). But something clearly happens there. ‘Indeterminate’, on the other hand, is when there’s no fact of the matter at all—on the usual interpretation of quantum mechanics, if the spin of an electron is definite in one direction, there’s simply no fact of the matter—not even a hidden one—regarding its spin in an orthogonal one.
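
To make the distinction concrete (this is just standard textbook quantum mechanics, nothing specific to this thread): an electron prepared with definite spin along the x-direction is, written in the z-basis,

|+_x\rangle = \tfrac{1}{\sqrt{2}}\left(|\uparrow_z\rangle + |\downarrow_z\rangle\right),

so a z-measurement comes up ‘up’ or ‘down’ with probability 1/2 each, and on the usual interpretation there is no pre-existing z-value that we merely fail to know. That is what makes the z-spin indeterminate rather than merely unknowable.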

It’s a bit of a side track, but there’s a large body of philosophical literature that would disagree with you there. Conceivability arguments, which hinge on the premise that conceivability does entail possibility, are a small cottage industry in the philosophy of mind, and support for this thesis (as well as objections to it, replies to these objections, and so on) has received lots of discussion recently. Reviewing this would take us too far afield, but the basic thread is that only consistent things can be conceived of (and if you think you’re conceiving of something inconsistent, like the round square cupola of Berkeley college, you’re not really conceiving of it), consistent things are logically possible, and logically possible things are metaphysically possible. If you want a more sophisticated (and orders of magnitude more complex) model, you could have a look at 2-dimensional semantics.

But for everyday usage, it should suffice to note that conceivability is, of course, our only guide for possibility. Whenever you plan something, you conceive of doing it, and by that conception, judge it to be possible. Whenever you make a prediction, you conceive of how things would work out, and judge it possible—or even likely—that they will. That we’ve got better than chance success at planning and predicting then means that conceivability must give us at least some guide as regards possibility.

Sure. But so does acting as if we live in a block universe containing certain regularities—it’s the regularities that do all the work, not the idea of causality.

Again, the point of the models I propose is to act as a foil for a claimed inference—the possibility of occasionalism shows that we can’t infer causality from observed regularities. This is just a logical point, not one towards arguing for occasionalism.

I don’t see how. If we know that ‘if A, then B’, or even, ‘if A, then the probability of B is increased by x%’, then, when we observe A, we can predict B (in the latter case, with some given certainty). But that’s a statement of correlation. It doesn’t tell us anything about whether A is a cause of B. Both could be, indeed, due to a hidden third cause; or, as in the example with the air force pilots and the relationship between exceptional and mediocre performance, both could just be due to statistical effects. Nevertheless, the predictions are perfectly reliable.
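
Here is a toy sketch of that point (Python, with invented probabilities): a hidden factor Z drives both A and B, A has no influence on B at all, and yet observing A raises the probability of B, so the prediction ‘A, therefore probably B’ works perfectly well.

```python
import random

random.seed(0)

def trial():
    z = random.random() < 0.5                   # hidden common cause
    a = random.random() < (0.9 if z else 0.1)   # A depends only on Z
    b = random.random() < (0.8 if z else 0.2)   # B depends only on Z, never on A
    return a, b

trials = [trial() for _ in range(200_000)]
p_b         = sum(b for _, b in trials) / len(trials)
p_b_given_a = sum(b for a, b in trials if a) / sum(a for a, _ in trials)

print(f"P(B)   = {p_b:.2f}")           # ~0.50
print(f"P(B|A) = {p_b_given_a:.2f}")   # ~0.74: seeing A predicts B reliably,
                                       # even though A does nothing to bring B about
```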

You’re saying things here that pull in opposite directions. If the thoughts produced by a brain are just steps along the way (as I agree they are in a deterministic universe), then they don’t underwrite a choice of which action to take—that action was perfectly well determined before the brain thinking those thoughts ever came into existence.

Consider the famous Rietdijk-Putnam argument. You’re standing stationary on the street, and I walk past you. In your frame of reference, the war council of the Andromeda galaxy is just debating whether to attack the Milky Way; in my frame of reference, their fleet is already on the way. Their debate, then, doesn’t have any power to actually come out any other way—they can’t decide not to invade the Milky Way, because in my frame of reference, they’re already doing it.

The same can be done for your deliberations on whether or not to buy that delicious Milky Way at the supermarket counter. From the point of view of one Andromedan, you might still be deliberating; from the point of view of another, you’ve already eaten it. So there’s no actual alternative from which to choose—that you’d buy the chocolate bar was a fixed fact, long before you deliberated, and even long before you and your deliberating brain came into existence. There’s nothing those deliberations did: they’re just as ineffectual for the outcome as the waste heat is for a car’s propulsion.
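
Just to put a rough number on how little it takes (a back-of-the-envelope estimate, taking Andromeda at about 2.5 million light-years and a walking pace of roughly 1.5 m/s): the standard relativistic shift in simultaneity between the two of us is

\Delta t \approx \frac{v\,d}{c^2} = \frac{v}{c}\cdot\frac{d}{c} \approx \left(5\times 10^{-9}\right)\times\left(2.5\times 10^{6}\ \text{yr}\right) \approx 0.01\ \text{yr},

i.e., a few days of Andromedan history swing between ‘still being debated’ and ‘already underway’ just because one of us strolls past the other.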

This is the part of your argument that puzzles me. If we lived in a ‘simulation’ made up of temporal slices which follow in a regular sequence, or in a universe where some ‘God’ ordained all causal relationships, or a universe governed by Schmidhuber’s ‘algorithms’, then this would not be a universe without causality where events occur at random. Instead we would be appealing to a hypothetical mechanism that facilitates and indeed dictates such relationships.

Thanks to deep sky astronomy, molecular biology, geology and palaeontology (amongst other fields) we can gather evidence from many billions of years into the past, and it appears that causality is conserved as far back as we can detect. If the current state of the universe were not the result of a chain of causality, then that state (the current, observed state of the universe) would be vanishingly improbable.

But the exact mechanism by which that causality occurs is not evident, and I suggest that it does not need to be. All we need to know is that it works. We need to behave as if causality is a reliable process and will remain so in the future, so we should discuss free will as if causality exists.

The point is that ‘without causality’—which I should probably have specified, without events in succession causing each other—doesn’t mean ‘random’ (in the sense of ‘lawless’ or ‘unstructured’). In Schmidhuber’s universe, this is perhaps most evident: there, every new event is sampled from a probability distribution and could, in principle, be anything. But due to the describability constraints, the sequence of events will, with high probability, have a short description—i.e. follow a law, a regularity. So even though there’s nothing about A that causes B, both just being ‘drawn out of a hat’, it might be that B regularly follows A.
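
If it helps, the describability point can be illustrated with a toy (a sketch only; I’m using zlib’s compressed length as a crude, admittedly imperfect stand-in for the length of a history’s shortest description): under a prior that weights a history by 2^{-\text{description length}}, law-like histories carry almost all of the probability mass, even though nothing about one event brings about the next.

```python
import os
import zlib

# crude stand-in for description length: zlib's compressed size; a
# Schmidhuber/Solomonoff-style prior then weights a history by 2^(-that length)
def description_length(history: bytes) -> int:
    return len(zlib.compress(history, 9))

constant    = b"\x00" * 1024          # "B always follows A"-style regularity
alternating = b"\x00\x01" * 512       # another simple law
lawless     = os.urandom(1024)        # a typical unstructured history

for name, h in [("constant", constant), ("alternating", alternating), ("lawless", lawless)]:
    d = description_length(h)
    print(f"{name:11s} description ~{d:4d} bytes, prior weight ~2^-{8 * d}")
# the regular histories compress to a couple dozen bytes, the random one to ~1000+,
# so the prior overwhelmingly favors histories that look like they obey laws
```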

In an occasionalist universe, if B follows A, then that’s not due to B being caused by A—i.e. there’s nothing about A that makes it so that B occurs—but rather, a deity of whatever sort ‘occasions’ first A, and then B (but could, in principle, do things differently—just chooses not to). Similarly for a simulation—it could be set up such that for the first 13.8 billion years, the law works one way, but then, it suddenly changes, and A is followed by C instead. The point is just that no amount of observing B following A allows us to infer that A causes B—that there’s something about A that necessitates the occurrence of B.

Indeed, that’s not even an unscientific speculation. The laws of thermodynamics, for instance, were once believed to be causal laws—that heat differentials, entropy, and the like, cause certain effects. But they’ve since been discovered to be mere statistical regularities. Entropy increases simply because there are more microphysical high-entropy configurations than low-entropy configurations, and thus, any change in the system will be astronomically more likely to drive it to higher entropy. All our laws could be like that; indeed, there’s a flourishing research program trying to derive Einstein’s equations for general relativity from thermodynamic principles. In this case, gravity would just be due to the statistical behavior of spacetime, not due to mass having some causal power to warp spacetime.
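
For what it’s worth, that statistical character is easy to exhibit in a toy model (a sketch of the classic Ehrenfest two-urn picture, with made-up numbers, and not tied to the research program just mentioned): molecules hop between two boxes entirely at random, with no force pushing them toward equilibrium, and the population still drifts reliably toward the even split, simply because there are vastly more near-even configurations than lopsided ones.

```python
import random

random.seed(0)

N = 1000          # molecules, all starting in the left box (a low-entropy state)
left = N

for step in range(20_001):
    # pick a random molecule and move it to the other box; each individual move
    # is exactly as likely as its reverse, nothing "drives" the gas to spread out
    if random.randrange(N) < left:
        left -= 1
    else:
        left += 1
    if step % 5_000 == 0:
        print(f"step {step:6d}: {left:4d} left / {N - left:4d} right")
# the split heads toward ~500/500 and stays there, purely because the overwhelming
# majority of microstates sit near the even split
```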

In quantum mechanics, there’s also the idea of superdeterminism—in brief, the notion that observed quantum correlations only appear to be ‘stronger’ than classically possible because the experiment’s setup can never be independent of the quantities under observation. If that’s true, then there’s no ‘law’ governing these correlations, they’re not governed by causal mediation to take the value they do, the outcomes of the experiments we’re making are just, effectively, encoded into the universe’s initial conditions. (See also this relevant SMBC.)

Yes, but in order to create an ordered universe, such a deity would need to ‘know’ the difference between a universe with causal relationships and one without. That implies the existence of causality inside the ‘mind’ of such a deity.

This is equivalent to saying ‘a wizard did it’. So long as the ‘wizard’, deity or demon concerned chooses to maintain a causal relationship between events, we can act as if causality exists, and attempt to explain free will within that framework (rather than always referring back to the wizard).

I don’t see that at all. If you want to weave a rug with a nice pattern, sure, you have to conceive of the pattern first, but you don’t have to have any ideas of how the previous rows ‘cause’ the next ones.

And as pointed out, order can even exist when the relationship between successive states is random.

‘Acting as if’ causality exists is a far cry from having sufficient grounds for concluding that causality exists, though.

Curiously I think that you do. I’m quite a big fan of Schmidhuber’s algorithmic universe, and if it were true we wouldn’t actually need the ‘physical’ universe to exist in ‘reality’ - you’d just need the maths. But an algorithmic universe follows very distinct rules, and is consistent with observed causality.

All these questions are pretty much a side discussion, anyway; to me the big question is the one that Max_S has posed: is free will the product of an external phenomenon in a dualistic universe, or is that an unnecessary multiplication of entities?

Curious, indeed. Seems to me it’d be just enough to like the pattern.

Just a point of clarification, Schmidhuber’s proposal isn’t for an ‘algorithmic universe’, as such, but for a universe sampled from a probability distribution that leads to a universe with a small algorithmic complexity—that is, that has a short effective description, and hence, will be very regular, rather than unstructured. This doesn’t entail that things happen according to an algorithmic recipe, just that it’s very likely that one can find one describing things. So the algorithmic rules aren’t what make things happen—the probability is never zero for things to happen according to no rules at all—but it’s likely that there will be algorithmic rules describing whatever actually does happen. You can fit the rules to the universe afterwards, but the rules don’t dictate the universe’s unfolding.

Firstly, the point that was being put to you was that post-hoc rationalization is not the same thing as explanatory power. Hence, free will has not been shown to have any explanatory power.
Since you’ve just deflected on to another point, I’m unclear on whether you understand and agree with that point.

Secondly, in terms of this new point that you are making, yes it is true that in science we never claim that our models *are* reality.
However, the point was that science is not about passively observing “regularities”. It is about testing our models by making predictions and inferences. There is no reason why this should work without causality.

Of course, finally, one might argue that while the scientific method assumes causality, it does not prove it. Because, as you have rightly pointed out, there are other philosophical explanations for why science works. Perhaps our predictions have all been right by random chance. Perhaps the universe just sprang into existence 5 milliseconds ago and our memories are all faulty.
But all this is just arguing with the fundamentals of how we can know anything. It’s the defense of last resort for a hypothesis with no supporting argument or data.

The fact that you are neither trying to clarify the definition of free will, nor give reasons to suppose it exists, but instead repeatedly retreating to, essentially, “What the bleep do we know?!” is the best possible illustration of how nonsensical a concept “free will” is.

You still seem to be confused about what my point is in this thread. I’m not out to make an argument for free will; I’m merely pointing out that the arguments against it hold no water, or at least, that free will—an explanatory hypothesis for why certain events occur, namely, those consequent to an agent’s choice, and therefore, an alternative hypothesis to the notion of strict physical causality—isn’t in any worse shape than other such hypotheses: they all contain a black box, a point where we can’t give any further account of their workings.

The explanatory power of free will, just as with causality, is thus in giving the reason why a certain event occurred.

The point you made, to which I reacted, was that you claimed that since our models contain more than just correlations, but also ‘internal models’, we can therefore conclude that there’s more than correlations of events out there in the world. That’s obviously fallacious.

The particular part you’ve quoted was about how we don’t have any empirical evidence for causation, in response to your assertion that ‘our ability to do any kind of empirical reasoning at all relies on us assuming cause and effect’—which is simply false.

So did you perhaps quote the wrong passage? Because I can see nothing in there about the explanatory power of free will.

As for your remarks regarding ‘post-hoc rationalization’, I’ve in the very post you’re quoting (in the part you elected not to quote) described how free will can be used to make predictions a priori:

I’ve given explicit examples where this sort of thing works without there being causality—it works in a block universe, it works in Schmidhuber’s random universe, it works in every universe that’s not just a totally unstructured collection of random data, because then, there will always be a shorter set of input data and an algorithm to recover the complete data of the universe—i.e. an initial state together with a law.

Seriously, this isn’t some obscure point I’m raising. As Bertrand Russell memorably put it in his 1912 article ‘On the Notion of Cause’:

in advanced sciences such as gravitational astronomy, the word “cause” never occurs. […] the reason why physics has ceased to look for causes is that, in fact, there are no such things. The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm.

Of course, Russell is not himself here saying anything totally new. Hume made much the same points 150 years earlier:

Both Hume and the traditional philosopher would agree that certain events invariably follow certain other events, and both Hume and the traditional philosopher would agree that our behavior is largely dictated by our knowledge of this sequence. The difference lies in the fact that the traditional philosopher would then argue that there is some principle of cause and effect that we know and can see in operation between two connected events. Hume denies that we know any such principle, suggesting instead that habit simply implants an expectation in us that events will fall out in a certain pattern. He uses the term “constant conjunction” to suggest that we cannot say that two events are causally related, but only that we constantly find one followed by the other.

So, at the very least, it isn’t just some dude’s rambling on the internet. There’s a very robust thread in analytic philosophy that denies the empirical knowability of causation—indeed, I don’t really know of any arguments that claim we do have some way of empirically accessing causal relationships after all (but I also can’t claim to be intimately familiar with the relevant literature). So it’s really going to take more than handwaving about how empirical reasoning presupposes cause and effect to dispel that (after all, Hume was himself a staunch empiricist).

Again—that’s simply not my aim in this thread. I don’t know if free will exists; I don’t think the matter is of great importance, ultimately. I’m concerned with dispelling bad arguments against free will; not with finding arguments for it. You claim to show free will—whatever it may be—incoherent; I’m merely pointing out that the same ‘incoherence’ is present in any alternative explanation for how stuff happens. I’m simply trusting that you have some idea of what free will means—otherwise, what is it you’re claiming is incoherent?

At the moment I’m quite convinced that there are lots of the sort of communication problems I pointed to earlier going on here. That is, a lot of objections and disagreements on both sides seem to stem from an insufficient shared understanding of central terms and concepts. To put it simply, I think what is causing the most disagreement is that I have not grasped well enough what you actually mean by the term ‘causation’, and so we keep talking past each other.

When you use that expression, it gives me the impression that it is meant as a colloquial form of what is meant by causation. But obviously there is some much more involved description that I just am not aware of, and am therefore unable to discuss meaningfully.

I keep going back to the idea of understanding the question well enough to be able to even ponder it, in the same way I was talking about whether free will exists. If the question was just “do we know how stuff happens”, I wouldn’t hesitate to answer: yes, we know a lot about how a lot of stuff happens. I’ve reached a dead end in trying to figure out what gives any weight to ideas like “maybe it just seems like there’s gravity, but it’s all just an animation instead”.

My thoughts on this flow pretty straightforwardly from what I said above. I don’t understand a definition of causality that wouldn’t apply to a simulation. In fact, the only understanding of causality I’m working under is exactly how a simulation works: “event A only occurs if and when certain conditions are fulfilled”. So I have to surmise that since I’m not familiar with this area of philosophy, you are talking about causation in a completely different sense than what I understand it to mean.

I thought this was exactly the point of what I referred to with my link. And I wasn’t saying that mere predictive success implied anything, quite the opposite. What I meant was that we can make reliable predictions because we know enough about what effects certain causes are followed by, and for what reasons. Of course it’s not enough to observe correlation. Although unfortunately that happens too, for example in the context of healthcare. But it’s not like things in those situations are just accepted as the be-all end-all solution. A doctor might say, “we can try this medication and it might help you, but to be honest, it’s not known conclusively how it works when it does”. That doesn’t change the fact that other things in other contexts are known to a much higher degree of reliability.

I thought this to be a very basic and trivial principle, and the same thing my link was about.

That’s what I was pointing out. That I might be working under a different understanding of ‘unknowable’, and it seems I was. What I understood it to mean had nothing to do with access as such (based on my understanding of ‘access’… Yes, ultimately everything needs a definition). But I’m not saying my understanding was the correct one. I understood ‘unknowable’ to describe things that by their very nature could not be known even given unlimited access. And that’s why I kept referring to an omniscient perspective. It might be that what I should have used instead was indeed ‘indeterminate’. At least in this context. I can’t tell for sure.

There’s a large body of literature about religion that I disagree with. The same can be said for politics, or homeopathy. I don’t mean to say that should give my opinion any special weight or anything, but I’m sure you know what I mean. Or at least hold a lot of hope for it!

Has this conversation resulted in a consensus view or an undisputed conclusion? I just mean from what you said there I thought I just simply fall on one particular side of that debate.

I’ll just say I disagree, and leave the rest for a potential separate thread, lest my contributions here can be said to slip into apparent trolling. :innocent:

Sure, to a certain extent, and in a limited context. And that’s what I would say about causality as well.
But going deeper, the fact that something can be conceived of doesn’t automatically and unambiguously mean that it will be consistent with all required conditions that need to be fulfilled for it to be true. I can’t conceive of a round square, and the reasons for that should be blatantly obvious. I can however conceive of a unicorn flying in space.

Yep. But it’s not enough by itself to conclude that A as such is the cause for B. More investigation is needed there. As a result of doing that investigation, we may discover that there was no causal link, or that there was one. But I suppose I should qualify that with “for a certain definition of causal”. To wit, without A happening, B would not have happened, and could not ever happen, all things being otherwise equal.

Yup. But the thoughts remain a link in the chain. Hypothetically speaking, changing something that would lead to those thoughts not occurring would also change the course of events thereafter.

To choose what? Different observers at different locations receive information at different points in time. What does this say about the events being observed?

Yup, agreed as above. In a deterministic universe, it was all determined, but temporally, everything is still following from something. That’s the only thing that makes determinism plausible. Doesn’t matter what order the observers at different locations receive the information in.

I get the same feeling I have about numerous things – that there just must be something I’m missing because I don’t know enough. Actually, this feeling is strengthened by the fact that after reading the scenario you quoted from in full, I couldn’t figure out what was supposed to be especially strange or problematic or controversial etc. about it.

From the link:
(https://en.wikipedia.org/wiki/Rietdijk%E2%80%93Putnam_argument)

The “paradox” consists of two observers who are, from their conscious perspective, in the same place and at the same instant having different sets of events in their “present moment”. Notice that neither observer can actually “see” what is happening in Andromeda, because light from Andromeda (and the hypothetical alien fleet) will take 2.5 million years to reach Earth. The argument is not about what can be “seen”; it is purely about what events different observers consider to occur in the present moment.

Criticisms

The interpretations of relativity used in the Rietdijk–Putnam argument and the Andromeda paradox are not universally accepted. Howard Stein[5] and Steven F. Savitt[6] note that in relativity the present is a local concept that cannot be extended to global hyperplanes. Furthermore, N. David Mermin[7] states:

That no inherent meaning can be assigned to the simultaneity of distant events is the single most important lesson to be learned from relativity.

— David Mermin, It’s About Time

To my limited understanding, a scenario involving a plane flying faster than sound would have communicated essentially the same information (in this conversation). Which mainly leaves me feeling I totally misunderstood something.

To start the engine, there needs to be an electrical connection made for ignition to happen. There also needs to be fuel available. Many complicated things need to happen before force is communicated all the way to the axles and they start to turn. Long before they start to turn, heat will be generated. If the engine gets too hot, it will cease to function. There is a system in it that is responsible for keeping the temperature low enough. Heat being taken away from the engine is an essential function that is needed for it to operate.

There are many other conditions besides the temperature that influence whether the engine will function as intended and manage to do work. People who build engines are aware of those conditions. However, they cannot build an engine that is impervious to physical forces. So even when the engine is in sufficient condition to do work, its output may be significantly diminished. When this happens, once a mechanic examines the engine, they can determine what factors are causing the loss of output. But as long as those factors aren’t changed, and a magical elf makes it so the engine doesn’t deteriorate any further – that is, every single thing that is relevant to the engine’s functions remains the same – then every single time the key is turned, the chain of events that follows will remain the same.

When I contemplate buying a chocolate bar, there is something in my biology that has caused me to do so. Maybe my brain gets a signal from my intestines that tells the relevant parts of the brain that fuel is getting low. Or perhaps I happened to see the chocolate bar first, and that caused the brain to bring up positive memories of enjoying the taste of chocolate, and so an impulse is formed – it felt really nice to eat chocolate, so why not do it again? But another process in the brain might bring up contradicting information. Too much sugar isn’t good for you, and besides, buying the bar would transfer money to a system that I might see as unethical and harmful. But do I think me buying that bar would be enough of a moral failing for me to refuse myself the enjoyment of eating it? I decide it’s not, buy the bar and eat it. Then I suddenly feel guilty. But why do I feel guilty about my decision, since I already rationalised it to myself? Why would I feel bad about the decision if it was the result of my free will, i.e. exactly what I wanted?

According to what I know about findings in the relevant fields of science, it may be that the decision was already made by me unconsciously, after which my rationalising it was just a task carried out in order to try and mitigate the negative feelings for my own good. Unfortunately, when the moment passed, the same factors weren’t in force in the same way anymore, and this led to me feeling guilt. This is what we humans constantly struggle with – how to control our impulses.

The choice of whether to eat chocolate is part of a temporal chain of events that leads either to one event or another. We eat or we don’t. Different people in the same situation may decide and act differently. But, if it is accepted that what controls our actions is all in one way or another to do with our neurology, and if all the physical processes that are part of that can hypothetically be known in a way where all causal chains can be unambiguously followed, then every process leading to the decision and then the action could be followed, as if following lines on a map.

I can contemplate my choice, but I am not in total control of what I ultimately decide. That will greatly depend on everything that affects my brain at the moment. I am free to have the chocolate or not, as I decide. But I’m not necessarily able to resist my impulse to have it. Sometimes I might be, sometimes not.

In this way my biological system is comparable to the engine. I can do my best to practice or “tune my engine” so that I can go through life never eating chocolate, if that is what I have deemed a worthy goal. But there is a part of my biology that pushes me towards chocolate and another that pushes me away from it. What will end up happening may be as simple to determine as tracing the chain of events in an engine. But only if all the parts of that proverbial engine interact in ways that can be determined given unlimited knowledge.

(Emphasis mine)
Precisely so. This is why I tried to be careful and use qualifiers such as “sufficient”.

I’ve simply been trying to communicate my disagreement with this. It might be that the “we” you refer to is the key, and it only refers to a certain group that indeed uncontroversially understands it the same way. If so, my concern was not warranted. But I was just pointing out that from what I’ve observed, to me it has seemed that problems often do arise from insufficient shared understanding. Much in the same way that seems to have happened regarding the concept of causality in this thread.

Hmmm. Is there no way to ever detect if that is happening? Aren’t we using English as our language right now to communicate, but it can still contain a lot of words and expressions we understand differently, and that causes misunderstandings? What you said translated to me something like “misunderstandings are inevitable, so we shouldn’t care about them”. Somehow this is exactly in line with what you’ve said about other things, but feels like it sort of goes the opposite way. It makes me think: “We can’t say we know something causes something. But we don’t need to know or think about what ‘causing’ means.”

I have to concede at this point I guess, that the only thing I could really do is to try and learn what causality actually means in the sense you are talking about it. And that to me is a satisfactory conclusion – that’s actually what I was suspecting all along, and was basically trying to investigate.

All of these things for my part may basically boil down to: “I don’t understand, please explain or I can’t take part”. And maybe I could already be justifiably accused of derailing the whole thread or worse, of trolling. So I’ll take some steps back and gather my thoughts. At least I’ve had my chance to poke at things and see what moves, even if I didn’t achieve much.

For what it’s worth, none of my comments were meant to imply that something you said had offended me, so no worries on that front, and all acknowledgements greatly appreciated. And I hope none of my statements come off as too antagonistic or snarky, as they certainly aren’t meant to.

I may have to go lie down now!

I have to agree with @Mijin here; explanatory power doesn’t mean simply that a given hypothesis would be sufficient to explain something. I’m reminded here of discussion of the existence of god: If we concede that a god exists, it’s no problem to state that as the underlying reason for anything and everything. That does nothing to bolster the validity of the claim that god actually does exist. You could just as simply state as the reason for anything and everything: it’s magic. Or the infamous universe creating pixies. This also applies to something like occasionalism. You say it’s equal in its validity to any other model, including causality. But the fact that not every single thing about a model is known just doesn’t seem to give equal footing to another one, of which absolutely nothing is known. No suggested mechanism even, just that it could be conceived of.

Of course all this also crucially hinges on one’s view on whether physical events can cause other physical events, but in that I’d sadly appeal to certain facts observed in the universe, and round we go. At least causality has a proposed mechanism and observations that conform with it, and contains many aspects that let us make reliable predictions. So I’d say the proverbial black box in each model is nowhere near the same size.
(Sorry, I know all this is basically rehashing many things once again. But it seems to me that this is just the unfortunate pit we find ourselves currently in, until we can find some way to climb out together.)

(Emphasis mine)
I’m sorry, I’m having a hard time parsing what you mean to demonstrate with this. With the risk of sliding ever further into the area of “potentially a troll”, could I ask you to clarify or maybe refer to where you already clarified this?

In that particular form, the statement seems completely circular to me. Simplified, as I read it: “If free will exists, knowing the will lets us predict what follows from it.” Is this a correct interpretation?

If so, that’s trivial, of course. But then it’s still unclear how this model has any explanatory power. Again, it ends up being equal to “god just makes things happen”. Q: How did X happen? A: It was willed. Doesn’t really matter if by a god or a human. What is it that you can predict using that model? How do you find out the will beforehand so you can predict something?

Maybe this is a question I ought not to post at this point, but I have to get it off my chest:

How can causality and free will be competing models? If will is something that needs a human in order to operate, in what sense can free will do or achieve anything, if causality doesn’t exist?

And I know I already said I don’t seem to grasp what is meant by ‘causality’, so the whole question may be moot. But it’s at least some more indication of where my confusion lies.

You’re already over-interpreting it—it’s just intended to mean what it says. Suppose A happens, then B. Then that’s stuff that happened. That’s the sum of our empirical knowledge. How that stuff happens—what makes it so that A happens, and then B—is then metaphysical theorizing. It might be that A caused B—that A’s occurrence necessitates B’s, for whatever reason. It might be that both were drawn randomly. It might be that Malebranche’s god occasioned first A, then B. It might be that A and B are on successive slices of a block universe. And so on.

The important thing is that in those instances where there is no causality between A and B, A could be conceived of as happening without B following. Malebranche’s god could’ve chosen differently; the block universe could be set up differently; the random draw could’ve come up differently. But in a causal universe, there is a necessary relationship between A and B: whenever A occurs, B must follow (all else being equal).

But even on this definition, in a block universe, there wouldn’t really be a causal reason for A occurring, would there? Neither would there be in a random one, it seems to me.

Regardless, I think what’s usually meant by causality is a relationship between events in the universe: that B is caused by A means that A’s occurrence necessitates B’s occurrence. If A occurs, you know that B must occur. This isn’t the case for, say, a simulated universe: even if A’s occurrence is followed by that of B for the first 13.8 billion simulated years, it could easily be set up such that after that threshold, this relationship no longer holds.

You might want to say that then, there’s a causal relationship between whatever’s doing the processing and the events in the simulation. But I wouldn’t be so sure. Not all reasons are causes: the reason why a picture of a flying unicorn appears on my screen is a particular arrangement of colored pixels, but it would be odd to say that that arrangement ‘causes’ the appearance of the unicorn. (Rather, the relation is one of supervenience.)

Or think of the examples of statistical correlations I’ve given. Do the laws of probability theory ‘cause’ regression to the mean? Again, I’d think that would be an odd way to use the notion of ‘cause’. The fact that \sqrt{2} is irrational is the reason that you’ll never write down its decimal expansion in finite time, but does it ‘cause’ this?

Again, though: what we know are regularities of occurrence, which we codify into laws. This doesn’t translate to knowledge of causal relations, because these regularities could exist without the causal relations.

But then, how come you say things like ‘we know enough about what effects certain causes are followed by, and for what reasons’? Because if it’s true that our knowledge of regularities doesn’t tell us why the correlation exists, then we precisely don’t know the reason why certain events reliably precede certain others. We also don’t have to, of course: as long as we know the regularity, whether it’s due to causal mediation, to some deity’s will, or statistical effects doesn’t matter one whit.

Well, to me, the difference is in what’s true and what’s known. ‘Unknowable’ means that there’s some truth out there, just that nobody can possibly know this truth. So it would be true, for instance, that ‘the spin of the electron is ‘up’ along the z-direction’, but nobody can know that truth. As opposed to an ‘indeterminate’ situation, where there simply is no well-defined spin of the electron along the z-direction (if it isn’t in a z-eigenstate).

Well, the idea is that conceiving of something—really conceiving of it—is a bit like a simulation; and just like you can’t write an inconsistent computer program, you can’t ultimately conceive of an inconsistent situation. You can perhaps falsely believe to have conceived of something that’s ultimately inconsistent, but that’s really just a matter of not having fully thought it through—otherwise, the inconsistency would present itself, and it’d be as impossible to imagine as a round square.

So if you can conceive of a unicorn flying through space—can conceive of how it propels itself, how it doesn’t freeze to death, how it keeps a working metabolism—then sure, what’s standing in the way of such a thing being possible?

(A minor point: even if A causes B, that doesn’t mean it must be the sole cause of B; so it’s not generally true that B can’t occur without A happening, even if A causes B. It’s the other way around: all things being equal, A can never happen without making B happen, too.)

More to the point, how do you ever find that out, though? Suppose you’re in the cellular automaton block universe I outlined above. Suppose its first 10^{100} rows are set up such that they follow a certain update rule. Doing observations ‘within’ those first 10^{100} rows, you’ll find grounds for positing stable regularities. Do you find grounds for positing causal links? For, say, positing that the arrangement \blacksquare\square\blacksquare leads to the state \blacksquare for the central cell in the next row?

That proposition is true for the first 10^{100} rows; but afterwards, let’s suppose, I painted in the cells such that the rule changes to producing instead the state \square, or that I’ve just colored in cells randomly.

No matter how often you’ve observed the transition \blacksquare\square\blacksquare \leadsto \blacksquare, you can’t ever claim to be certain that the regularity will continue forever. And thus, you can’t claim to know that \blacksquare\square\blacksquare necessitates \blacksquare (for the central cell). Hence, you don’t know that there’s a causal relation between both.

That doesn’t mean that you can’t bet on the regularity holding. Indeed, it’s the only rational option. But knowing the causal relation would give you a certainty you simply can never achieve in the real world.
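
To spell the toy out in full (a sketch, with the rule table, width, and switch-over point all invented for illustration; the particular table happens to be Wolfram’s Rule 110, but any rule with a \blacksquare\square\blacksquare \leadsto \blacksquare entry would do): the first stretch of rows is generated by the rule, after which the cells are simply ‘painted in’ at random. An observer confined to the early rows sees the regularity confirmed over and over without a single exception, and still has no way of knowing they’re not in this kind of universe.

```python
import random

random.seed(42)

# update rule used for the early rows (Wolfram's Rule 110); the in-universe "law"
# under test: the neighborhood (1, 0, 1) is always followed by 1 in the central cell
RULE = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

WIDTH, SWITCH, ROWS = 64, 500, 1000
rows = [[random.randint(0, 1) for _ in range(WIDTH)]]

for t in range(1, ROWS):
    if t < SWITCH:
        prev = rows[-1]
        rows.append([RULE[(prev[i - 1], prev[i], prev[(i + 1) % WIDTH])] for i in range(WIDTH)])
    else:
        rows.append([random.randint(0, 1) for _ in range(WIDTH)])   # cells "painted in" at random

def check(start, stop):
    confirmed = broken = 0
    for t in range(start, stop - 1):
        prev, nxt = rows[t], rows[t + 1]
        for i in range(WIDTH):
            if (prev[i - 1], prev[i], prev[(i + 1) % WIDTH]) == (1, 0, 1):
                confirmed += nxt[i] == 1
                broken += nxt[i] != 1
    return confirmed, broken

print("first %d rows: law confirmed %d times, broken %d times" % ((SWITCH,) + check(0, SWITCH)))
print("later rows:    law confirmed %d times, broken %d times" % check(SWITCH, ROWS))
# before the switch the regularity is exceptionless; afterwards it simply stops
# holding, and nothing in the early rows could have told the observer which
# kind of universe they were living in
```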

Depending on how you mean this, it’s either trivial or inconsistent. If you intend for this to mean that you could leave all else equal, and change one particular thing (the occurrence of your thoughts, say), then that’s simply logically impossible, as the occurrence of those thoughts is entailed by any other state along the chain—so, you can’t, for example, leave the state of affairs ten billion years to the past the same, and change the way your thoughts occur.

If on the other hand you intend for it to mean that in a possible alternative chain where your thoughts would be different, everything else would also be different, then that’s trivial—if things are different, they’re different. That’s just equivalent to saying that if your actions had been different, your actions would have been different.

It’s not about the reception of information, it’s about the relativity of simultaneity. For you, stationary in the street, your present moment includes the Andromedans deliberating; for a relatively moving observer just passing you by, the present moment includes the Andromedans mounting their attack. So if both are true, then the outcome of the Andromedans’ deliberations must already be fixed—the deliberations themselves have no power to influence anything.
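
(For a sense of scale, here’s a quick back-of-the-envelope calculation with rough numbers of my own choosing, not taken from the post above: the simultaneity shift for an event at distance D between two observers in relative motion at speed v is \Delta t = Dv/c^2. At a walking pace, for Andromeda, that already amounts to several days.)

```python
# Rough simultaneity shift for the Andromeda scenario (approximate values).
D = 2.5e6 * 9.46e15   # distance to Andromeda in metres (~2.5 million light years)
v = 1.4               # walking speed, m/s
c = 3.0e8             # speed of light, m/s

delta_t = D * v / c**2          # shift implied by the Lorentz transformation
print(f"{delta_t:.2e} s, i.e. roughly {delta_t / 86400:.1f} days")
```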

Sure. But the production of heat isn’t; nevertheless, it unavoidably occurs when the engine operates. That’s all I wanted to point out: just because something is unavoidable as part of a process doesn’t mean it’s relevant to the process, or to it achieving its intended outcome.

And that’s exactly what you can’t do. In a deterministic universe, all of the instances where you break down and eat a chocolate bar were set ten billion years (or whatever) before you ever labored under the false impression that you are in control of when you eat a chocolate bar. That you buy and eat that Milky Way on March 17, 2037 was already true in 10,000 BCE. No way around it; your deliberations, while an unavoidable side effect of the path to get there, didn’t matter any more than the heat of your car’s engine matters to it going forward. The outcome is already fixed.

You might say, but if you hadn’t made those deliberations, you wouldn’t have bought the chocolate bar. But there’s no possibility for you not to make those deliberations, and not to have them come out the way they will—not in an ‘all else being equal’-sense, at least. It’s like driving on a rollercoaster: there’s only one path, and nothing you’re doing can change that, and from the moment the car gets moving, the rest of its journey is fixed.

But clearly, the fact that there’s a discussion at all means that we have some understanding of the concept, no? If I were to start a thread, ‘Lack of gleurb doesn’t mean lack of flurgh’, do you think it’d gain much traction? So there’s some reason you chose to post here, and that reason is your understanding of the concept of free will. That understanding won’t exactly match mine, but that’s not the case for any concept—not even for seemingly innocuous ones, like table or cat or human. But trying to nail everything down in definitions just makes Diogenes come in with a plucked chicken.

It’s the opposite. Misunderstandings are inevitable, so we need to find ways to work around them either way—whether we try to cleanly and neatly define everything, or not. It’s just that the discussions in which the former is attempted typically degenerate into discussions about the definitions, which then end up in lame attempted gotchas and one-upmanship.

The thing is, the only way anybody ever understands is by taking part. Making understanding a pre-requisite to taking part just preempts true understanding. ‘Poking things and seeing what moves’ is how we learn, whether it’s riding a bicycle or using words like ‘cause’ and ‘free will’.

There’s also no suggested mechanism for causality. If A’s occurrence necessitates B’s, then what is it about A that makes it B-causing? By what power, what properties does A actually do the work of bringing about B? Ultimately, this—and that’s my whole point—is just as mysterious as the workings of Malebranche’s god; we’re just used to it. Hume had this entirely right: the attribution of causality when observing constant conjunction is a mental habit—nothing else.

What’s the mechanism of causality? What does it let us predict that we couldn’t have predicted otherwise?

But doesn’t prediction always work like this? Knowing the predictive factors allows you to determine possible outcomes. If those outcomes fail to manifest, you didn’t accurately understand the connection between the predictive factors and outcomes, or you didn’t know all the relevant factors.

What’s the relevant difference to the following?: Q: How did X happen? A: It was caused. That also doesn’t really tell us a whole lot. What you need to do in making predictions is collect a series of factors and work out how they influence outcomes. So, one could say that if X, Y, and Z, then W. If it rains, and you’re outside, and you don’t have an umbrella, you’ll get wet.

The claim that’s being made then is that this is only possible if there’s some causal relationship between X, Y, Z, and W. But that’s simply not true. I’ve given the example where one can make perfectly good predictions based on statistics alone: when a group of pilots has performed excellently, their next performance will likely be worse, simply because the observed average regresses back towards its true value. There’s no causal relationship, but it’s a perfectly sensible prediction.
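
(As a toy illustration of that, with made-up numbers: if each performance is just true skill plus independent noise, then the pilots selected for an excellent first run will on average score closer to the overall mean on the second run, even though nothing about the first run causes the second.)

```python
import random

random.seed(0)
N = 10_000
skill = [random.gauss(0, 1) for _ in range(N)]
run1 = [s + random.gauss(0, 1) for s in skill]   # performance = skill + luck
run2 = [s + random.gauss(0, 1) for s in skill]   # independent luck on the second run

cutoff = sorted(run1)[int(0.9 * N)]              # select the top 10% of the first run
top = [i for i in range(N) if run1[i] >= cutoff]

mean1 = sum(run1[i] for i in top) / len(top)
mean2 = sum(run2[i] for i in top) / len(top)
print(f"top group, first run:  {mean1:.2f}")
print(f"top group, second run: {mean2:.2f}  (regresses towards the overall mean of 0)")
```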

I’m just saying one can do the same thing appealing to volitional factors. The chicken is on this side of the road. Its feed is on the other. The chicken wants to eat. I predict it will cross the road.

Sure, maybe one can analyze ‘wants to eat’ in terms of neurophysiological causality. But to presume that this must be the case is simply to beg the question.

One possibility is the ‘Bohmian micro-occasionalism’ I outline in the essay I linked to earlier. The laws of physics (or causality) don’t completely determine the course of the universe; like the rules of chess, they constrain the possible moves at any given point, but don’t uniquely select one. You can think of possible futures like branches of a tree unfolding from any given point in time. One possible way of singling out a unique branch is to just appeal to chance—take a random walk through the options. But one could just as well appeal to choice, with an agent’s free decision choosing one branch over the other.
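
(A toy way of picturing that structure, purely my own illustration rather than the formalism of the essay: let the ‘laws’ return a set of admissible successor states, and let something else, whether chance or choice, do the selecting.)

```python
import random

def allowed_moves(state):
    """The 'laws': they rule out most successors but leave more than one open."""
    return [state + 1, state - 1]      # two admissible continuations, like legal chess moves

def by_chance(options):
    return random.choice(options)      # a random walk through the branches

def by_choice(options):
    return max(options)                # a mere stand-in for an agent's selection among the allowed

def unfold(state, selector, steps=5):
    history = [state]
    for _ in range(steps):
        state = selector(allowed_moves(state))
        history.append(state)
    return history

print("by chance:", unfold(0, by_chance))
print("by choice:", unfold(0, by_choice))
```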

I’m going to summarize my position on free will. (I’ll return to the back-and-forth later).

I don’t have any dog in this fight. I would be very happy to discover a clear definition of something called “free will” and data to imply it exists. I studied neuroscience and understanding more about how our minds work is exciting to me.

I believed in free will (or at least, believed “Do we have free will?” to be a meaningful question) up to the point of reading a summary of the debate by the philosopher Stephen Law.
He explained how, in a “clockwork” universe, there could be no free will because our actions are predictable. Then, he went on to explain how adding quantum indeterminacy doesn’t seem to help because “how could a coin flip constitute free will?”

I’m not picking on Stephen Law; it’s an accurate summary of how most people would see it. But I found it fishy that non-predictability is treated as the critical requirement for free will one minute, and then handwaved away the next.
It got me thinking about what exactly we mean by “free will”. How does a free will choice get made? How would a universe with free will operate?

Which brings us to “could have chosen differently”, the most popular definition of free will. But again this is something that doesn’t stand up to scrutiny.

The worst mistake of my life was probably choosing to do my A levels at my local sixth-form college, despite it being the fourth lowest performing school in the whole UK at the time, and unable to offer two of the three subjects I wanted to study. But I had my reasons for making that decision. And, based on what I knew at the time, and my personality, it seemed the best course of action.
In this context, what does “could have chosen differently” even mean? If the situation is the same, and knowing only what I knew at the time, I would make the same (dumb) decision for the same reasons.
It doesn’t even matter if the wider universe is Deterministic or not; my decision to go to that school could be considered an oasis of determinism, just as all of my considered choices have been – a function of my understanding and personality.

So, ultimately, the conclusion I came to is that “free will” doesn’t mean anything coherent. It’s “not even wrong”.
The debate still persists because, even though we know our thoughts are correlated with electro-chemical activity in the brain, we still feel like they are somehow separate from the physical universe. Thus the common framing of “Determinism vs. Free Will” makes intuitive sense to many people, when really it is a red herring IMO.

This is exactly what I’ve been trying to say! :grinning_face_with_smiling_eyes:
We think and make decisions – in our “human”, “local” scope. But if everything works predictably due to “laws of nature”, we’re not “absolutely” free, in the sense that the billiard balls in our brains will always bump into each other in predictable ways.

Yes indeed. All thoughts and actions roll out with total predictability while we go about our business, unaware of all the minute-level interactions required for our brains even to work, and totally unable to break free of influences we can’t even be aware of.

Some, yes. As long as there is a sufficient amount of it, all is well. I was just pointing out that there’s always a danger that the difference in definitions will go undetected and lead to the conversation getting stuck. And this is what I personally thought I saw happening often.

Yeah, I don’t doubt it, and I can very well understand where you are coming from when you say this.

I concur wholeheartedly with this, and it’s really all I’m basically interested in. But I’m really new here, and unsure about all the intricacies of etiquette. In particular, a recent thread about trolling alarmed me: apparently, trying to take part without a high enough level of understanding of the topic can very easily be seen as a form of trolling – one I didn’t even know existed until a day or two ago. What’s more, I’ve already angered people by either asking questions or making what I thought were harmless jokes, and I’m as yet unsure which it was, or why. So I gotta try and ‘tread lightly’.