What is the name of this informal fallacy - not accounting for second order consequences?

I have this vague feeling I’ve heard this fallacy named before but my admittedly relatively perfunctory search of various lists of common fallacies hasn’t turned it up. Maybe I’m wrong and it has no name.

The fallacy is considering only the first-order effects of a proposed change, without considering the further changes that the change itself will set in motion. It can occur in any context, but I see it most commonly concerning human behaviour: the proponent or detractor of a change argues for (or against) it without considering the changes in behaviour it will provoke, which may negate the benefits (or avoid the detriments) that might otherwise result from the change.

So for instance:

“If the government is entitled to a backdoor into that encrypted messaging system they can listen in on the plans of terrorists” - except that if the government is entitled to that backdoor, terrorists won’t use that messaging system.

“If this street is reduced to one lane there will be a huge traffic jam from all the cars that now use it” - except that if that street is reduced to one lane, people will (if they can) drive a different way.

And so on.

Unintended consequences?

There’s the punchline (made in Big Bang Theory among others) about the scientist who is asked to help a farmer improve his chickens’ egg-laying, and has a solution “…but this only applies to spherical chickens in a vacuum.”

The Law of Unintended Consequences? I’ve also heard something similar as the adage “If things can’t go on this way… they won’t”.

I’m not sure your example would be “second order”. More likely would be the example that the surrounding streets now deteriorate much faster from excess traffic. Perhaps this would be considered a “side effect”. Or - a “knock-on effect”. It also brings to mind the observer effect (often conflated with the Heisenberg uncertainty principle): the observer affects the results. (Or more precisely, you can’t measure everything about a system at the same time without changing it.)

Reminds me of the story about the psychologists studying memory persistence in young animals - they’d show them a treat, hide it behind a screen, distract the baby dog or chimp or cat, and see how long the animal kept reaching for the treat. Someone suggested doing the same experiment with children. They showed a child some candy, then put it in a bag and kept asking the child questions. After a few minutes, the child said “you’re just trying to make me forget about the candy, aren’t you?” How many psychology experiments are affected by the subject’s knowledge that this is an experiment?

The examples also bring to mind the discussion of economists about “what behaviour are you really driving?” Does raising the price of gasoline mean people drive cars less? Or switch to less expensive more polluting diesel and drive the same amount? Does banning plastic bags encourage reusable bags, or switching to paper (with its own issues)? Etc.

I don’t think it necessarily is a fallacy. I think it’s a premise, perhaps implied, that you do not grant.

The “Opponents have no Agency” fallacy (just now made it up) - assuming that no one besides you can react to changes in conditions.

Unquestionably that is what it boils down to as a formal fallacy.

But particular common instances of formal fallacies are often given names, as informal fallacies.

This is getting to the heart of it. Although it’s not necessarily “opponents”. Simply “others”.

Maybe the “Others won’t exercise Agency” fallacy.

Except it doesn’t have to be about people. It could be a physical system in which one fallaciously assumes one can achieve a particular effect by changing certain things, without taking into account confounding changes caused by your changes.

I had a vague recollection it was called the “steady state” fallacy, in the sense that it was the fallacy of assuming that everything would remain steady state, while you changed something. But looking it up, that term seems to be used for a particular fallacy in economics which isn’t related to what I’m thinking of.

Seems like there may be two different mechanisms that are meant.

  • a feedback loop, whereby a first-order change leads to further second- and higher-order changes.
  • reflexivity, whereby humans can react knowingly to changes in their environment, and thereby counteract the intentions of the persons driving the change.

These are not the same.
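The traffic example earlier in the thread can make the difference between first- and second-order reasoning concrete. Here is a toy sketch with entirely made-up numbers: the “first-order” estimate holds driver behaviour fixed while capacity changes, and the “second-order” estimate lets drivers exercise agency and reroute. The function, the numbers, and the 40% rerouting share are all hypothetical, purely for illustration.

```python
def congestion(demand, capacity):
    """Crude congestion index: cars per unit of road capacity."""
    return demand / capacity

# Before the change: 1000 cars/hour on a street with capacity 1000.
before = congestion(1000, 1000)

# First-order reasoning: capacity halves, but everyone keeps driving here.
first_order = congestion(1000, 500)

# Second-order reasoning: drivers adapt; suppose 40% reroute elsewhere.
second_order = congestion(1000 * 0.6, 500)

print(before, first_order, second_order)  # 1.0 2.0 1.2
```

The first-order prediction (“a huge traffic jam”) doubles the congestion index; once the model allows the people inside it to react, the predicted effect is much milder - which is exactly the reflexivity mechanism in the second bullet above.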

Don’t count your chickens before they hatch. That’s not precisely it, but it’s just that type of thing.

What if they are spherical chickens in a vacuum?

I agree these are different mechanisms but I thought there was a name for the basic fallacy common to both - either way one is allowing only for the effect of the change you make in the circumstances now prevailing. One is not accounting for the change to the circumstances resulting from your change that will affect the effect of your change.

This reminds me of the faulty assumption often seen in sports, where a single mistake or decision is often blamed (or credited) for a game’s outcome, as if everything that ensued in the game was already somehow predetermined and only the key event was changeable.

Example: Kicker misses a field goal in the 3rd quarter and the team loses by one point. “If only he had made that field goal, we’d have won by two instead of losing by one.”

In reality, of course, every decision that came after the missed field goal was influenced by that moment, so it’s impossible to draw that conclusion.

They’re an oblate sphere before hatching. If they’re in a vacuum just open up the bag and take them out.

The “extremely high friction slope” fallacy?

“Fixed background” fallacy?

Whelp, I think we’ve established I was dreamin’ when I thought there was a common term for this.

Thanks.

In college I was involved in an activist group that was trying to prevent some logging near my campus (in retrospect I really wish I’d not been involved, but hindsight and maturity are a pain in the ass). The group lost badly, and everyone kept getting outraged, because the foolproof plans we came up with to prevent the logging always failed when the developer did things like “go to court” and “seek an injunction.”

Afterwards I bitterly told everyone I knew that activists needed to learn to play chess, so they could realize your opponent gets to make moves, too. So many folks–in this struggle and others–seem to be completely unaware of this basic fact, seem to think that if you come up with a plan for an action, that’s the end of the story, and if the people you’re acting against figure out how to disrupt your plan, that’s cheating or something.

Somewhat related to “no plan survives contact with the enemy.”

The old “forgot-about-the-bailiffs” fallacy.

MacAdder:
Why don’t I pretend to be the Duke of Wellington and kill the Prince of Wales in a duel? Then I could kill the King and be crowned with the ancient stone bonnet of MacAdder!

Blackadder:
For God’s sake, MacAdder, you are not Rob Roy! You’re a top kipper salesman with a reputable firm of Aberdeen fishmongers; don’t throw it all away! If you kill the Prince, they’ll just send the bailiffs round and arrest you!

MacAdder:
Oh blast! I forgot the bailiffs.

I think you mean prolate sphere.

Quite right. Been a long time since I thought of either of those words. An oblate egg might be tough on the chicken.

Yeah, I don’t know if there is a widely-used term for the general type of fallacy you are describing (maybe you could invent a catchy term and become Internet-famous!), but for the specific traffic example you mentioned, it’s the principle of induced/reduced demand.