You can’t bring yourself to say it, can you? That someone self-harms to “seek happiness”.
It’s up to me to demonstrate the opposite. Well, OK, perhaps people self-harm out of a deep sense of self-loathing. How does seeking happiness enter the picture?
I don’t need to justify those assertions. The whole hypothetical was designed to point out those obvious truths.
You haven’t bothered to respond to the hypothetical, so I don’t know what your complaint is exactly.
The obvious truth is that a thinking machine need not be motivated solely by improving its own state. I could design my machine so that, say, it has an aggression subroutine that occasionally makes it hit out at things.
This subroutine need not actually improve my machine’s mental state, or circumstances.
Humans are like that to an extent. As I've said, we have the carrot and stick of positive and negative emotions. But we also have instincts and learned behaviours that must have been positive for someone at some time, but are not necessarily so for a particular individual now.
What I’m saying is this: we have multiple motivations and multiple competing factors in how we behave. But how do we ultimately decide what to do? I don’t know, and it’s one of the hot topics of cognitive neuroscience, with several competing theories.
Does my machine need to “like” acting aggressively?
Or are "like" and "seeking happiness" just misleading ways of saying "we act as we will"?
(Or at least, to use my full position, some people self-harm seeking to increase happiness/reduce misery. It's kind of a continuum, you see.)
And you're making this easy. A person who loathes themselves may feel that justice and good is served by punishing or hurting that which is loathsome. They are distressed (made more miserable) by the sense of the loathsome going unpunished. To alleviate that distress, to reduce their misery, to increase their (meager) happiness, they self-harm with the goal of satisfying their inner need to attack that which they loathe, thereby hoping to reduce their misery/improve their happiness.
Obviously false truths, you mean.
Argument by bald assertion of your own correctness. So easy. So cheap. So meaningless.
The fun part of this hypothetical is that it allows you to just state that your machine is a “thinking machine”, yet it need not have any cognitive similarity to humans. And then it’s supposed to prove something about them!
Here’s another analogy. I have a rock. It has no moving parts and no mind. No feelings, no likes, no dislikes. Yet, it rolls down a hill! Did emotions drive it to roll down a hill? Does it “like” rolling down hills? No! Therefore humans don’t make decisions based on emotions. Also, they don’t have limbs.
In case you don’t get the point, your analogy is bad. It’s a trick, a bait-and-switch for the human mind, because you can’t make a case based on the human mind. And the way it’s a bait-and-switch is that you’re making a hardwired physical action the act in question. It’s quite literally not something the robot decides to do - it’s “an aggression subroutine that occasionally makes it hit out at things”. The robot has no more choice in the matter than humans do about whether they will shed skin cells. But you’re analogizing it to conscious decisions humans make!
Now, if the robot had conscious control over whether to carry out the action or not, and had to weigh that action against other alternative actions, then it might be a decent analogy. Of course, in that case, what happens in the robot's brain? Why, it assesses whether to carry out the act based on some sort of calculation that compares the merits of all available options. To do this with anything like intelligence, it has to calculate a 'value' for every action, presumably based on the 'value' of the expected outcome. These values would, necessarily, have to be based on internal valuation calculations, used to determine what the robot would 'like' to happen according to its hard-coded or variable preferences. Then the robot would, necessarily, have to take the values it assigned to each action and compare them - choosing to carry out the action it decided it 'valued' highest based on its internal valuation criteria.
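Just to make it concrete, here's a toy sketch (in Python, purely illustrative - the action names and the valuation numbers are made up, not a claim about any real robot) of the kind of valuation-and-comparison loop I'm describing:

```python
# Illustrative only: a toy version of the decision loop described above.
# The robot scores every candidate action by the 'value' of its expected
# outcome, then carries out the highest-scoring one.

def choose_action(candidate_actions, valuation):
    """Pick the action whose expected outcome the robot 'values' most."""
    return max(candidate_actions, key=valuation)

# Hypothetical preferences, hard-coded here, but they could just as
# easily be learned or variable.
toy_valuation = {
    "hit the wall": 2.0,   # the aggression urge says yes...
    "recharge": 5.0,       # ...but the battery is low
    "do nothing": 1.0,
}.get

print(choose_action(["hit the wall", "recharge", "do nothing"], toy_valuation))
# -> recharge: the option the robot 'valued' highest wins
```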
Sound familiar? (Though of course humans keep a ‘running status’ that your robot need not; we probably developed this to make it possible to self-assess the results of actions and learn more accurate valuations thereby.)
Keep in mind that if your argument is just "the robot's valuation numbers are called 'Buffy Numbers', not 'Emotional Appeal Numbers'!", then I can just say "In humans, we call this valuation 'emotional appeal', and the 'running status' affected by it that decisions seek to maximize is the emotional state, which for decision-making purposes can be thought of as simplified to an emotional continuum from 'Happiness' to 'Misery'".
Oh really. What are these theories, pray tell? Summaries will do. Let’s focus on the ones that don’t say we do things because we think they’re a good idea at the time.
Misleading, eh? Well, then for the sake of clarity, how about you tell us what it means to say “we act as we will”? What does it mean to “will” something? Does it mean to want it? Does it mean to like that course of action? Does it mean that course of action makes you happy?
I’m just taking things to a deeper level of explanatory detail than you’re willing to. And accusing me of being misleading by doing that is like saying it’s misleading to call 1+1=2 “addition” instead of just “mathematics”. Or rather, “addition instead of mathematics-but-definitely-not-addition-oh-no-not-addition-anything-but-that”.
I was going to bow out of this thread. I’d had enough.
But after your last post I can’t resist letting this play out just a little longer.
Why are you saying "some" now? Your whole argument is surely that we are all motivated in this way all of the time.
LOL
Utterly absurd. I like the irony of you saying “you’re making this easy” and then following it with one of the messiest, most contrived explanations I’ve read on SD.
I explained at the end of the analogy why I thought that it can apply to the human mind. So all that BS in your post about bait-and-switch (BS about BS) is nonsense.
As for the rest of your argument, it’s just a long description of how you think we make decisions. And in large part, I agree. So let me try and be clear about where we differ:
When you initially joined the thread and were talking about “seeking happiness”, I objected because it implied we do things to bring about a future mental state. But this is not how the mind works; we (generally) do what we think is a good idea at the time. We rarely try to predict how we’ll feel as a result.
For example, I am currently changing career. I find the new career more interesting, but I may well earn less money. Now: I’ve made the change because, yes, I think it’s a good idea but I genuinely don’t know whether I’ll be happier after the change. I don’t know if I’m maximising my happiness.
But you’ve said that you didn’t mean mental states. So you’re basically just saying we do whatever we think is a good idea.
I agree, with a couple of caveats.
Firstly, our base instincts play an important role in our decision-making. We wouldn't think of these instincts as needing a reason; they evolved for a reason, but they don't need a reason within a given organism.
And secondly, the “good” in “good idea at the time” is ambiguous. Good in what sense?
It’s good for whatever objective we have at the time. Thus “seeking happiness” becomes “seeking whatever it is we’re seeking”.
Someone who’s dedicated their life to furthering the cause of stamp collecting will probably enjoy what they do (though not necessarily; they may be doing it out of habit, for example). But as I’ve said, the person is unlikely to be motivated by a prediction of a future mental state.
So is the "good" in this case, good for the person's life overall? Again, no, not always. The stamp collector, if they were to sit down and consider whether their hobby has an overall positive effect on their life, might realise it doesn't.
Because, though all people are motivated by their desire for happiness, only some people self-harm to seek happiness. It would have been moronic to say all people do.
Though, all the people who do self-harm, self-harm to seek happiness.
I note that your only refutation is to point and laugh. Probably because you can’t find a real flaw in it. (Hell, it’s not even complicated.)
You mean the part where you said "humans are like that to an extent"? The main problem with that was that it was BS. Human decisions are not hardcoded beyond the ability for us to influence them by assessing the merit of obeying - if they were, they wouldn't be decisions! Even the "instinct" to piss when you have a full bladder is first weighed against what pissing your pants in public will do for your image.
This didn't fix your analogy. Your analogy only applies to things which are beyond human decision (like shedding dead skin). That you analogize it to things that are decidable, and thus claim it's about human decision, is what makes it a bad analogy.
I’ve said repeatedly that we do what seems like a good idea at the time.
And your example with the job change demonstrates why I used the term "seeking". Humans always work on incomplete information - even if they're not simply ignorant they're still limited by their inability to perfectly predict the results of actions they may carry out. So because of this a human cannot just maximize their happiness - they can only do things that they hope will maximize their happiness. That is, things that seek to do so.
Of course, sometimes this search is pretty short and pretty certain of its outcome - I just sought to slake my thirst by picking up the cup next to me and drinking from it, and look! It worked! And sometimes the end goal is so far away that the only happiness you really gain is that of having taken a step towards your goal. (In that case you seek both the short-term happiness and the long-term happiness, but the short-term happiness is found a lot faster.)
“Good” is defined in terms of the person’s own preferences - which would include the instincts, which are just inherited preferences after all. (Sometimes inherited preferences towards carrying out quite complicated sequences of actions - but if you have any choice at all in the matter, they’re still just preferences.)
The good is based on assessment of all known information based on all relevant personal values and moods. Or rather, all the information that the brain has managed to scrape together and correlate to that point - demonstrably it sometimes takes us a while to connect all the dots and figure things out. (And demonstrably our thoughts are influenced by our decision-making processes - people can decide to avoid thinking about certain things or to think harder about certain things.) For different people, different things are important - for a trivial example, some foolish people like the taste of peas, even to the degree of deliberately consuming the horrid little things. These people will think that eating peas is a better idea than I would, and so would be more likely to act on an opportunity to eat them than I would. This scales up to people who care a lot about the security of a gun and care little about the threat of having a gun around - or who have assessed the threat as one that is controlled and thus no problem.
And so, different people seek different things based on what they prefer - which is to say, based on what they personally define as “good”. Characterizing this as “seeking whatever it is we’re seeking” and claiming that it has nothing to do with happiness is, again, failing to delve further down to the actual reasons for things (perhaps deliberately). Seeking the things we prefer can be defined as seeking happiness - because the lack of the things we prefer causes unhappiness. For, what is wanting something other than being less happy because we don’t have it?
Well, since we were already talking about those who self-harm, to phrase it repeatedly as “Some people self-harm to seek happiness.”, as you did in your post, was strange. Was it in doubt that people self-harm?
But anyway, whatever.
I’m sorry but it is so daft I don’t know where to begin. I could parody it, but there’s no need. From now on every post I make in this thread will include the quote:
My point, in my previous post, is that "good" here is ill-defined.
You agree that we don’t choose actions in an attempt to bring about a mental state.
And I don’t think it’s about improving life circumstances; I gave that example of a person who does an action, but can still say that objectively, it’s not beneficial to their life. You didn’t respond to that point, I’m not sure if you agree.
But if you do, then what are we left with? If “Good” is not “good feeling” and it’s not “good life status” then how can it be right to describe it as “seeking happiness”?
I admit I can be pedantic in restating premises, when the sentences in question are false without the premise implicitly taken as part of the statement. It’s a personal failing.
You know, I still can't figure out what your problem with that statement is. It's not that complicated - it's a known and obvious fact that some people will attempt to hurt or destroy things that they believe deserve to be punished. Is your problem that you can't accept that people do this because they would feel worse if this wasn't done? Or do you just not accept that sometimes people will have conflicting motivations, and end up forced to pursue one at the expense of the other? (In this case "the other" that's being neglected is physical comfort, of course.)
Regardless, you can quote that six times a post for all I care. You may think it makes me look dumb, but to me you’re just waving the flag of your own incomprehension of it. Which isn’t my problem.
Really? Here I thought that people choose actions because they believe they would feel worse if they did something else. That seems pretty mental-statey to me.
I’m not sure which example you’re referring to, or what kind of “not beneficial” activities you’re referring to (short term? long term? inconsequential?) so I’ll try to cover the bases.
People certainly assess a lot of factors when they make decisions, but in my opinion and according to my observations, the thing that comparisons always boil down to is current good feelings or bad feelings. That is, will I feel bad/guilty/stupid/evil, right now, if I do that? Or will I feel pleased/relieved/smart/good, right now, if I do that?
The way long-term goals work into this is through the fact that humans can feel a sense of obligation about various things. A person feels obligated to make sure they have a place to sleep when evening comes, so if they don’t have one, they will often worry about it, and perhaps seek to alleviate that worry (ie improve their momentary happiness) by seeking shelter. If they choose not to seek shelter instead, they might reassess that decision as a bad idea, and feel that strange impulsion known as ‘guilt’ impelling them to go back and rethink their current course of action. At some point the unhappiness of the guilt might overcome the happiness they get by lazing around, impelling them to the action of looking for shelter, despite the search itself not being fun or happy on its own. It’s only happy-making in comparison to the guilt-ridden state of not searching.
The same thing is happening when a person seeks a wage or seeks retirement - you do it when it would make you feel worse not to. It should be clear how that relates to 'seeking happiness'.
If you were speaking of cases where people made a decision like giving ten bucks to a guy with a cardboard sign, I'd say it's pretty clear that such actions are made directly to improve momentary happiness - it makes you feel good to do it. Similarly, people do stupid things in pursuit of momentary happiness all the time. It's certainly why people buy rolls of lottery tickets. Sure, objectively it's not beneficial, but who's being objective? It's fun and thrilling right now - so they do it. Good sense be hanged.
Inconsequential activities, like “Take the can of peas on the left, or the identical can of peas on the right”, would boil down to very small differences in happiness. So small they might boil down to things like “is it a hair closer?” or “do I habitually prefer to take the leftmost thing?” But there would still be a small difference and the ‘choice’ algorithm in your head would choose the best option, even if it only wins by a hair.
If none of these cover your example, clarify what it is and I’ll take another shot.
I’m going to have to go with “good feeling” - or to be more accurate, “pursuing good feeling and avoiding/reducing bad feeling”.
OK, and now a little explanation of why I think it’s silly.
First of all, “justice and good is served by punishing or hurting that which is loathsome”.
So that which is ugly, stupid, weak should be punished? That would be a very unusual moral / political opinion to have, and your argument implausibly requires all self-harmers to think this way.
But it goes even further. It must actually make self-harmers unhappy that the loathsome go unpunished, to the extent that punishing a single, loathsome individual should more than outweigh the pain, distress and embarrassment of harming themselves.
It’s just about the most absurd explanation I’ve heard on the SD.
Playing devil’s advocate, if I were trying to support your proposition, I’d say self-harmers are doing it as a cry for help, consciously or otherwise. I don’t believe that this is always the case, but arguments can be constructed more naturally than that mess.
OK, you’re making it clear you’re going on the mental state side of this.
That’s good, it’s clear where the disagreement is.
Let me try to state my position again, for clarity.
I’m saying that people generally do whatever seems to be a good idea. However, I’m saying that there is a difference between doing what seems like a good idea and doing what makes us feel good. Admittedly, it’s a distinction that many people find hard to make, hence why opinions like “there’s no such thing as a selfless act” are so popular, despite not standing up well to scrutiny.
What are the differences between doing what is a good idea and doing what we feel good about?
Well, let me give some examples of where the two would differ.
Firstly, many times we must make choices that we're completely indifferent about. You alluded to some of these situations. The reasons why A is better than B might be very minor (e.g. being a hair's breadth closer) and certainly not something that would make any change to our level of happiness.
The reward “drug” in the brain is dopamine. We can measure the levels of dopamine being released at any time, and we can see that a person can make relatively far-reaching decisions with no measurable change to the levels of dopamine released.
Secondly, many times we need to react too quickly for “How would I feel if…?” to kick in, even if it were how we think.
A mother who sees her child about to put an unknown object in its mouth doesn’t act to end her own bad feeling. The mother is simply responsible for the child and is in a permanent state of wariness. Now, you and I would probably disagree about why she feels responsible. But it’s irrelevant: the simple point is that in the moment of acting she is not self-referencing her feelings, consciously or otherwise.
I actually think the majority of decisions are not motivated by pleasure, but I’m starting with the thin side of the wedge, to see how you respond to these examples first.
How did “loathsome” become “ugly, stupid, weak”? What happened to “evil”, “criminal”, “destructive”, and “corrupting”? Did you deliberately switch out the word for more innocent failings?
People attack the things they loathe all the time. Republicans and Democrats go at each other tooth and nail, often without regard for the position under debate. People sneer at criminals and toy with the idea of extreme punishments. Racism happens. And for perhaps the most blatant example ever - no, I don't think I'll godwinize the thread.
My argument requires that all self-harmers believe they deserve harm, unless they have some other reason to harm themselves. (We must keep in mind that we’re merely discussing one possible reason for it, there may be others.) I find it incredible that you find it hard to believe that this could occur. Have you never owned up to something bad you did and willingly accepted punishment for it? Arguably, the desire for self-punishment is what the feeling of guilt is.
To argue that people never desire to hurt what they hate isn't the most absurd thing I've seen on the SD. (I've been around here longer than ten minutes - we get way crazy here, often.) But it's still not exactly a strong argumentative position.
That’s a different proposition. It could easily be true for some people, I wouldn’t argue against that, and it’s certainly less of a mess than the claim that people never believe themselves worthy of punishment. But it’s still not a related argument to mine.
If you're going to draw a distinction between "what seems like a good idea" and "what makes us feel good", that raises an obvious question: how do we decide what's a good idea, if not by our reaction to it?
The nifty thing about not rejecting that we decide this based on personal preference for the outcomes is that it makes it possible to draw a line between your cognition and the 'good' valuation. "That which makes you happy is deemed to be good" may not appeal to you, but it at least makes sense. I have yet to see where your "good" valuation comes from. Did you state it somewhere I missed? Because it's not something you can just gloss over. If you don't have an explanation for this we have no choice but to accept the only explanation we currently have, that goodness is assessed by personal emotional reaction.
And before you just throw something together, I should warn you that the first thing I will hit it with is “Why do we care?” If you propose “we call things good that assist humanity”, then I will ask “Why do we care what happens to humanity?” If you propose “we call things good that increase order” then I will ask “Why do we care about increasing order?” I will ask this of any explanation that does not answer the question of “why does that motivation motivate us, personally?”
I suspect that the only two possible types of answers that can answer this question are "because we personally have a positive/negative emotional reaction to it", or "because we personally have a positive/negative physical reaction to it." When faced with the latter, I will ask whether the decision can be made against physical satisfaction, or whether it can be forestalled. For example, if you say, "We care about breathing because if we don't breathe, we'll feel pain in our lungs", then I will ask, "what motivations can lead us to hold our breath, and on what basis are they compared with and weighed against the physical imperative to breathe so that they may temporarily overcome it?" I strongly suspect that the only basis you will be able to find to make this comparison with will be a personal emotional basis. It is simply the only common ground that all these differing motivations have.
I recognize that you don’t like the statement “there’s no such thing as a selfless act” - and that if the position is not properly understood (or it’s being formulated badly by the one presenting it), that it can appear to fail a close scrutiny. But that doesn’t change the fact that when understood correctly, it is simply a fact that “all decisions are made by choosing amongst options with a preference for ones that inspire a positive emotional response” is the only cognitive model that doesn’t leave huge unexplained gaps in the decision-making process.
Unless you can propose another one? That makes sense and explains all cases? Because if you can’t, there’s no point in trying to pick imaginary nits in this one.
Um, what part of this is an example that shows a difference between what is a good idea and what we feel good about? The dopamine bit? Where did you show that dopamine was an active part of the decision-making process for all decisions - or even that it’s the only measure of positive preference that the brain uses?
Are you kidding? You’re arguing that emotional responses are too slow?
In real life, people don’t mull over to themselves, “Hm, I think I’d be unhappy if I whacked my toe into a table leg, like I just did. Perhaps I should consider feeling a little anger and irritation, when I get around to it.” That’s just nuts. These reactions are fast - happening below and separate from the slower types of conscious thought. You don’t have to think about being happy to be happy. And you don’t have to deliberate at length to feel fear on behalf of your imperilled child.
This notion of yours actually might be the most absurd thing I've seen on the SD - since the last religion/spiritualism/praise the free market thread, anyway. (I don't intend to quote it back at you repeatedly, though - I consider that sort of behavior juvenile.)
As has been stated repeatedly, pleasure is merely one sliver of the personal positive/negative emotional reaction spectrum - and that’s been pointed out enough that to argue from pleasure alone now would be a strawman.
And I get that you don’t like the personal emotional reaction model. Care to point out what alternate model you do like? It’s one thing to prefer a different, more sensible model, and quite another to pound away at the lone explanation because you’re bitter about the way reality is.
I didn’t “switch”. I simply gave 3 examples of that which, by your reasoning, is loathsome. Remember, you’re saying that people self-harm because they consider themselves loathsome. Well, many self-harmers scratch things like “ugly” on their arm, for example.
So, again by your reasoning, such people must think the ugly deserve punishment. And punishing them makes them happy (and/or less miserable).
Those self-harmers are wicked people! Imagine switching on your TV and thinking “Oh look at that ugly person…it’s making me miserable that they aren’t being punished!”.
Simple: by whatever objectives we have at the time!
As I’ve pointed out, the “good” in your whole seeking-the-good proposition is ill-defined.
It seems to basically mean the context-sensitive “good” of the situation and the individual concerned. Well, of course we do the “good” under such a definition – it’s virtually tautological – it’s saying we do what we do.
Exactly why we do what we do is more complicated than is suggested by such a philosophy. And the fact that your explanation is the only one, or that it indeed "makes sense", is not alone enough reason to accept it.
For a start, the fact that it’s either tautological or ill-defined is a good reason for rejecting it.
Let me go back to an AI hypothetical.
Say I make a killbot. A Terminator. Now this machine is programmed to kill humans. But it’s sentient. It can decide exactly the best way to kill humans. I don’t, however, bother to program emotions into this machine.
Now, the first question is, is this hypothetical machine feasible? If not, why not?
And secondly, if a person were to ask “why does this machine kill?” it would be wholly inappropriate to say that it “likes” the option of killing, or that it is “seeking happiness”. It simply, instinctively, seeks death.
Now, how does this relate to humans? Well, we have similar instinctive motivations. I get instinctively curious, for example. It’s not about good or bad feeling, for any concrete physiological definition of “good feeling”.
It does fail. All that’s required is a recognition that there is a difference between the reason(s) someone does something and the eventual benefits of that course of action, and that the English word “want” has multiple meanings, some of which imply selfishness and some which basically just mean “our will”.
I'm trying to show that your proposition falls down as soon as you try to give a concrete definition for "good". If good means "good feeling", then that has a clear physiological correlate in the brain and it's demonstrably the case that our decisions are not based on bringing about "good feeling".
You’re right, it was a bad way of putting it. But let me ask you: Are you saying that the mother protects the child because somewhere in her mind, consciously or otherwise, she’s actually thinking: “I must end my bad feeling, therefore I will protect the child”?
Yeah, I ramble on. Part of my brain thinks that when in doubt, explain more!
Your only obligation here is to yourself.
No, your logic is bad. Just because people call something ugly doesn’t mean that their motivation to do so was ugliness; the slur is the punishment, not necessarily the crime. The loathing could be based on other things that are less easily articulated (and harder to spell) than “ugly”.
That said, people have been known to pick on people just because of their looks. Cite: middle school.
Funny how the heroes are usually handsome or rugged or beautiful, huh? Not always, of course - ugly looks are only one of many ways to work towards earning the loathing to justify that beatdown and dramatic death at the end of the movie.
Why do we care about those objectives?
Ill-defined my ass - it’s explicitly defined as that which elicits a positive emotional reaction or avoids a negative one. Can you think of no better way of arguing against me than deliberately and completely mischaracterizing my position?
If you want to see tautological, watch yourself swap “objectives” with “good ideas” back and forth for a while.
It's neither tautological nor ill-defined. You want to see ill-defined, look at "Exactly why we do what we do is more complicated than is suggested by such a philosophy." That's not defined at ALL.
Just like before, you've declared your robot to be "sentient", but have utterly failed to define how it chooses amongst its many options for killing people, probably because you know full well that any method you invent will analogize to selecting based on emotional reaction in humans. And you have utterly equivocated an action that is not chosen and is irresistibly obeyed with impulses and inclinations that humans choose whether or not to succumb to.
Once was clumsy argument. Twice is dishonest argument.
Why do we care about those reason(s) you mention?
Well, keep trying. There's no reason to believe that brain chemicals alone drive all our decisions. We have all those neurons and their electrical signals too, and those may (and in fact very likely do) codify our current brain state, knowledge, preferences, and reactions.
Yes, but it's probably not articulated in words - this is likely a value-sum of electrical pulses in the brain. I'm a bit shaky on my brain anatomy, but from the little I know I'd suspect that the concept that triggers the largest or smallest electrical reaction in the neurons prompts the brain to carry out the actions attached to the concept. Being electrical, this sort of thing probably happens pretty fast - you know how electricity is.
Spitefulness is still some way from being miserable that the ugly are not punished, which is broadly how all self-harmers are, according to your theory.
We just do.
If that is insufficient, consider: why, according to your theory, do we seek happiness / avoid misery? Just because there exists positive and negative feedback in the brain doesn't, in itself, mean that such things would entirely dictate our behaviour. Your theory has the same explanatory gap as mine.
What I’m saying though is that “positive emotion” has a clear neurobiological meaning. And it’s demonstrably the case that decisions are not based solely on maximising positive emotions.
The only way to defend your conclusion is to define “positive emotion” to be whatever thing motivates any person to do anything. Which is tautological.
Phrasing the same thing two different ways is understandable in a thread that’s been going on this long. I didn’t criticise your change from “seeking pleasure” to “seeking happiness”, and why would I?
What I'm criticising as tautological, is an argument that provides no new information, such as we're motivated by whatever motivates us.
No idea what you mean by this. I simply said that our motivations are more complicated than is suggested by your theory that good feelings motivate us. What’s the problem with that statement?
Dishonesty is not responding to the argument. As best as I can tell from your response you're answering "no" to the question of whether a sentient killbot could be constructed. And also, it seems you're asserting that a sentient intelligence that does not have pleasurable emotions is impossible.
I don’t see how you could defend these assertions.
Absolutely, and this is my position!
The difference is, I see no reason why all these different states, knowledge, preferences and reactions can be described as pleasurable, or seeking pleasure. Pleasure, as usually defined, is demonstrably an incorrect description of how instincts like curiosity or aggression work.
Hmm, I’m not sure how to respond to this, but very simply I would answer “no” to the same question.
If we put an implant into a person's brain such that hurting a baby would give a good feeling, and protecting a baby would give a bad feeling, how would the parent then behave?
Yet again you refuse to ask why the person acts spiteful. Your entire position is to persistently refuse to look at the man behind the curtain - even when the curtain is pulled back.
It's pretty insufficient coming from somebody accusing me of being ill-defined and tautological.
And you are just living in a state of denial about what my theory is. My theory is EXPLICITLY that ALL behavior is dictated, at the point of the actual choice, by the positive or negative emotional reaction to the options. ALL behavior. Thus, no gap.
There is no basis for believing that that single chemical is the be-all and end-all of positive and negative (AND negative, note) emotional reactions in the brain. In fact, it's almost certainly not the case. So nothing is demonstrably the case except that you're perfectly willing to argue from data that doesn't actually support your position.
Or instead you could actually do what my argument says to do and ask why that thing motivates the person, which inevitably will be traceable back to a personal emotional reaction to the choice, which enables it to be compared based on that emotional reaction with other options that can be chosen.
It’s not as fun as ignoring my argument and crying tautological, but it does have a bit more explanatory power.
Another reason not to criticize it would be that for the entire thread I’ve clearly been arguing from happiness, and explicitly not just pleasure.
And curiously, arguments provide more information when you don’t completely ignore what they say.
The problem is that you are criticizing me for my theory being bad when the alternative you propose is worse in every way. Heck, it's barely coherent, and certainly doesn't provide the functionality to explain human decision-making behavior.
And the other problem is that it’s a bald unsupported assertion based on nothing but your own wishful thinking. There’s no reason to think that something more complicated than my theory is happening - largely because my theory is about what’s happening at the lowest level, beneath all the complexity. There would certainly be a massive web of neural complexity working to calculate how you emotionally react to every little thing, taking into account all your preferences, goals, knowledge, instinctual urges, biological conditions, emotional current state, etc. But when all that is done and you have the emotional assessments in hand, the comparison of options and act of choosing between them would be relatively simple, because most of the work is done.
I already rebutted this deceptive analogy. In posting it again unaltered, you were avoiding responding to my rebuttal.
As I previously stated, I assert that a sentient killbot could be constructed, but to be called sentient it would have to have a decision-prioritization method that would be analogous to the human emotional assessment - so close in fact that one could reasonably say that the robot likes things. (Like we compsci guys already say about programs sometimes.) So while you could choose not to call the robot’s decision-prioritization measure “positive/negative reactions”, it would be dishonest to say that they could not analogize as such.
(Emotions themselves are a retained state that is influenced by emotional reactions. Robots need not retain such a state, as I’ve said before. Humans do, but it’s not necessary to my model of decision-making.)
And, of course, if the robot has no more ability to choose not to kill than we have the ability to choose not to be affected by gravity, then that has nothing to do with its sentience and it's a totally false analogy. Which it is. Which you know, 'cause I told you that before. But you repeat it anyway…
Why do you choose to act on your curiosity? Why do you sometimes refrain from doing so? Why do you choose to act on your aggression? Why do you sometimes refrain from doing so?
Because you like or don’t like the expected consequences of doing so in these cases? Naah, couldn’t be, that would be admitting I’m right, and doing that would make you a little less happy. It must be because…we just do!
Such complexity! Such explanatory power! Brilliant!
Well, now, that depends on just how much they like to kill babies, now doesn't it?
Hmm, well it seems we’re at an impasse at this point. I believe that emotions are but one factor in motivating our behaviour. You believe that they are the only factor. I’m certainly not going to budge.
And every post you come up with a bunch of flippant remarks that fail to address anything I’ve said, but that I must firefight.
It’s going nowhere.
No, again, you’ve failed to respond to the point.
The point is this: the question you keep asking of my position is why we’re motivated to do the things we do. Why the killbot is motivated to kill.
I’m saying, it’s equally valid to ask that question of your position. You’re saying we do X because we get reward Y. But why must we always do what collects the most rewards?
Well, sure, if you want to say negative emotions then you have to include other neurochemicals. But it doesn’t affect the fact that “emotion” has a clear meaning in neurobiology and demonstrably it is not the case that it motivates our every action.
I know of no neurobiologist that thinks that emotions entirely determine our motivation. Cite away.
So why are we motivated by emotion? It’s a “turtles” explanation.
Yep, point taken, it was Voyager that originally used the word “pleasure”.
But anyway, by tautological, I meant the logical definition of the word. Repeating an argument but phrasing it differently is not the kind of tautology I am referring to.
I’ve yet to see what information your argument presents, since you won’t define “happiness” or “misery” other than them being the motivators of our every action.
Yep, so your theory is more complicated.
My theory: complex system of motivations + system that compares motivations and chooses an action
Your theory: complex system of motivations + system that gives each motivation a pleasure rating + system that compares pleasure ratings and chooses an action
Wrong. I have not posted the same analogy. One was about a feeling AI. The other was about a non-feeling killbot AI. Not the same analogy.
I've worked in AI for some years, I'm a "compsci guy". For linguistic simplicity we may sometimes talk of an AI "liking" an option, but we certainly don't mean by that that it is "seeking happiness". It's seeking whatever it's been programmed to seek (or whatever it's learned to seek, as a result of its programming and environment…).
The whole reason for bringing up such examples is to show it's possible to construct a decision-making entity that requires no emotions whatsoever. So the implicit assumption of your theory – that all decision-making requires emotions – is false.
Of course, the machine does not choose its motivations, it just chooses how to accomplish them. The same is true for humans: although our experiences can affect our (many) motivations, we still don’t choose our motivations.
And of course, you’re saying the same thing, since you’re saying that we’re hard-wired to be motivated by happiness.
Why must I always seek happiness according to your theory?
Again, what explanatory power does your theory have?
If I made it feel arbitrarily good to kill babies, then this hypothetical parent would necessarily become a baby-serial killer?
You needn’t firefight anything - particularly things that don’t relate to your point. (I could say the same thing to myself. And probably should.)
Rewards? Who said anything about rewards? I’ve been talking, explicitly, about emotional reactions. Emotions of course being a reaction within the mind and cognition which could directly skew preferences in favor or against the action within the decision-making aspect of our cognition. In fact, we seem to see this sort of thing all the time at the macro level, or something resembling it at least.
I wonder why you’d swap in “rewards”, which seem external to the decision-making process, for “emotions”, which are internal. Hmm…
You first. Prove that the electrical signals transmitted between our neurons have nothing to do with our emotions. At any level. Because that’s what you’re saying by attributing it all to the neurochemicals.
I await your cite.
Only if you mischaracterize my explanation. I’ve been quite explicit in saying that after the emotional valuation is made of all courses of action (well, all the ones that have been thought of), the mind then compares the emotional reactions directly and chooses the best one (which it’s only able to do because a normalized basis for comparison has been created; the emotional reactions themselves).
Step 1: Several options are being considered. Myriad motivations are present.
Step 2: Emotional reactions to each option based on the myriad motivations are calculated.
Step 3: Thus normalized, the emotional reactions to each option are compared.
Step 4: The option with the most favorable emotional reaction is acted on.
Oh, look. No more turtles.
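And to be really explicit about it, here's a rough Python sketch of those four steps - the motivations, the weights, and all the numbers are invented purely for illustration (echoing my earlier shelter-versus-lazing example), not a claim about actual neural quantities:

```python
# Illustrative sketch of steps 1-4: several options, myriad motivations,
# each option reduced to one normalized 'emotional reaction' score,
# then the scores compared directly and the winner acted on.

# Step 1: options under consideration, and the motivations in play.
options = ["keep lazing around", "go look for shelter"]

motivations = {                              # hypothetical weights: how much
    "comfort right now": 1.0,                # each motivation matters to this person
    "guilt about having no shelter": 2.5,
}

# How strongly each option satisfies (+) or offends (-) each motivation.
reactions = {
    "keep lazing around":  {"comfort right now": +1.0, "guilt about having no shelter": -1.0},
    "go look for shelter": {"comfort right now": -0.5, "guilt about having no shelter": +1.0},
}

# Steps 2 & 3: collapse every option onto the same single scale.
def emotional_reaction(option):
    return sum(weight * reactions[option][m] for m, weight in motivations.items())

# Step 4: act on the option with the most favorable reaction.
chosen = max(options, key=emotional_reaction)
print(chosen)   # -> "go look for shelter", once the guilt outweighs the comfort
```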
Huh?
This is almost a relevant point. I have already stated that "happiness" and "misery" may not be perfect terms for what I am trying to explain. More accurate terms might be "positive emotional reactions" and "negative emotional reactions".
That's as close as it gets to being relevant, though. All of us know what I'm talking about, even when I said "happiness" and "misery" - they're internal reactions that we have to things. Most people know what emotions are without a roadmap.
Happiness rating, if you don’t mind. Let’s not leave the door open a crack for the false argument that I have excluded the emotional impact to things that don’t inspire euphoria.
And yes, my theory is more complicated than one that doesn’t work.
Suppose you were comparing, say, an apple and an orange. You have this complicated system of motivations, and based on some of them, you prefer the apple, and based on some of them, you prefer the orange. You have specifically argued against having an internalized normalized rating system that our cognition reacts to directly to make selections from…so what does your “system that compares motivations and chooses an action” do instead, when it’s comparing apples and oranges? How does it choose between motivations when they conflict?
Looks to me like:
Have motivations.
(A miracle occurs)
Profit!
All I'm doing is proposing a viable explanation of the mechanism for the miracle, which has the advantage of seeming to be correct based on available data.
Um, right. Well, as I pointed out, the only difference there is what you call the feelings. In either case, if the thing has a decision-making process, it has some method of normalizing the values for comparison - either by explicitly assigning each a numerical value, or by implicitly enumerating them by going through all the options and always choosing one type of option over the others if it can.
I don’t give a crap what you call reactions to things that occur directly in the cognition at the lowest level in your robot. In humans we call those emotions. So yeah, if your analogy is going to work, without invoking miracles, then it supports my position, not opposes it.
Ah! A fellow geek! I seriously never would have guessed. (Why the hell are you having such a hard time getting this?)
In your AIs, how does it assess the preference of an option, when it is forced to choose between them? I suppose if there are only three possible choices, you can hardcode them and tell yourself that's not implicitly enumerating preference by the order control flows through the if statements. But what if preferences are learned? Do you have some other way of having it compare options and determine preference than calculating a normalized value for each option and comparing those directly?
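To make the contrast concrete, a toy Python sketch (both choosers and all their numbers are hypothetical, not a description of anyone's actual AI):

```python
# Illustrative contrast: the 'hardcoded' chooser enumerates preference
# implicitly by the order control flows through the if statements, while
# the 'learned' chooser has to compute an explicit normalized score per
# option and compare the scores directly.

def choose_hardcoded(options):
    # Preference is baked into the ordering of the checks.
    if "flee" in options:
        return "flee"
    if "hide" in options:
        return "hide"
    return "attack"

def choose_learned(options, learned_scores):
    # learned_scores maps each option to a single comparable number,
    # however it was trained or acquired.
    return max(options, key=lambda o: learned_scores.get(o, 0.0))

print(choose_hardcoded(["attack", "hide"]))                              # -> hide
print(choose_learned(["attack", "hide"], {"attack": 0.2, "hide": 0.7}))  # -> hide
```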
Again, I don’t care if you call this normalized comparison factor a “Chipmunk Factor”. In humans it’s called “emotions”. That’s the thing about analogies, the labels can be different without demonstrating a functional difference.
Well, clearly you're hardwired to be motivated by something - otherwise nothing could ever motivate you. Do you think we're hardwired to react to each specific thing? I have a genetic proclivity towards oranges? Towards computers? And how do these hardwired proclivities account for conflicting imperatives? How do they normalize them? How do they compare them?
In my theory, that’s the metric by which your brain makes comparisons and chooses things. It must always seek happiness because that’s just what the decision-making mechanism in your mind does. That’s how it works. My theory is explanatory, not prescriptive.
It sets forth a functional model of the decision-making mechanism. For comparison, yours completely handwaves how decisions are actually made. See the difference?
Dude. I could “arbitrarily” set the goodness value at “total revulsion at the idea”. Just come out and say whether the new positive emotional reaction to killing babies is enough to overpower the opposing negative emotional reactions to killing babies from all their other motivations or not.
I of course meant internal rewards, that is, “good” emotions.
I’m well aware of your position.
Didn’t find any cites eh? All the sites on cognitive neurology said that emotion influences behaviour but does not dictate it, didn’t they?
Well, sure, I can provide a cite on neurochemistry. However, I obviously don’t agree with the absurd straw man you just invited in – that anything neurochemical has nothing to do with neural signalling. :rolleyes:
In fact, I’m going to add that paragraph to the list of begbert2 classic quotes.
:smack::smack::smack:
You still haven’t actually responded to my argument here, so let me try again.
You won't allow for the fact that we "just are" motivated by things like curiosity and aggression. You keep asking why we're motivated in that way, insisting that there must be some underlying motivation (e.g. sometimes we act out our curiosity because it will make us happy).
But it’s a turtles explanation because it requires as a given that we “just are” always motivated by happiness / reducing misery.
I can turn your argument around and ask why are we motivated by happiness / reducing misery?
It was a slip.
But I don't know why you have such a problem with the word "pleasure"; it doesn't imply euphoria (you can have a "pleasant commute" for example), and actually seems less loaded to me than "happiness". But fine.
Well, it’s complicated. Current thinking is that there are at least 3 decision-making mechanisms in the brain; the Visceral, Behavioural and Reflective. And these are not abstract; they have separate loci in the brain.
The Behavioural layer is the only layer affected at all by emotions, and it is not dictated by them. And this layer is not at the highest level. If any layer can be considered to be “in charge”, it would be the Reflective.
What data?
This is the typical kind of flip-flopping that creeps into the “there’s no such thing as a selfless act” debate.
One minute you’re using words like happiness. The next minute you’re saying that you just mean the “best” option, and that any decision-making machine can be said to be seeking happiness, even, say, a chess computer.
Well, those are two different things. Of course any decision-making system must ultimately make decisions so it must consider some options better than others. I’ve already said that many times.
But as to the hypothesis that that criterion is about inducing or preventing a feeling, a psychological state, whether now or in the future: this is simply not the case.
If I punch some guy, it’s not to feel good: I’ll probably feel “shaken up” for quite some time afterwards. But yeah, for whatever conscious reason at the time, I obviously consider it a good idea.
I know, it's odd isn't it? And I'm not just an AI-geek; I'm now pursuing a career in neuroimaging, and studying the brain and cognition.
So…why can’t I agree with your theory? It’s a mystery…
Well, that’s your basic assertion, and it’s incorrect. Again, where’s that cite?
So again, you’ve refused to answer the question. Probably because the prediction of your own theory seems absurd even to you.
But fine, I can answer it. Even with an arbitrary amount of pleasure for harming babies, a person will not necessarily become a killer. Such a decision would be made at a conscious level and involve conscious reasoning. Such reasoning is (obviously) not entirely motivated by happiness, so even if it were clear that the happiness of killing would far outweigh any feelings of regret, there is no reason to assume a person would kill.