Yes, studies of the brain show atrophy in some areas when those sectors of the brain are not stimulated, but chemical imbalances (such as decreased serotonin output) are organic. Sure, neglect could trigger those, but it appears that the imbalances would have been present even in a “normal” upbringing. It’s sort of a chicken or the egg proposition.
Try to be clearer yet. How do you think nurture effects changes in behavior (any nurture and any behavior)? Do brain receptor and neurotransmitter levels/“balances” change as a result of experience, or not, or what?
The brain responds with changes in its physical structure, its chemical releases, and its sensitivity to those chemicals. To hormones, to its own trophic factors, to endogenously produced activity within it and to exogenously produced activity. Experiences do not change the genes, but they do affect the structures and the “chemical balance” and “imbalance” (a word that implies some standard of “normal” to be measured against).
In other words, you’re defining away any possible effects of “nature”. You reject the very notion of “male” behavior. OK, then…I’m done with that.
I mostly won’t touch this one, as DSeid said most of what I would’ve. Although I have to ask why you assume that “the imbalances would have been present even in a ‘normal’ upbringing”?
Are you saying that you never lose your temper? No, that can’t be. As you said, you “still struggle with it today”. Once again, why is that not “nature” exerting its influence?
DSeid – I actually assumed post #37 was in response to my request, as it’s pretty much what I figured.
It depends on what you mean by “male” behavior. If you mean that males have a tendency to be more aggressive and competitive, then there’s definitely a biological component, but if you mean things like the stereotypical “men’s behavior” like being less emotionally demonstrative, liking football, being “macho”, or being more controlling, those are all socialized behaviors.
Because they’re biological in origin. If you have schizophrenia, it doesn’t matter what your environment is, your brain will still be afflicted. Likewise, if your brain doesn’t release enough serotonin due to an underdeveloped gland (or whatever reason), you’re going to have depression, regardless of what’s going on around you. Can environment trigger these latent problems? Maybe-- I don’t know. It seems possible that stress would affect chemical outputs, but whatever organic flaws caused the problem would be there no matter what stimulus you received.
As I said, I believe those two factors are always in a state of subtle conflict. Our “nature” speaks up in the form of temptation on a daily basis, but our socialization subdues it.
I saw a little kid buy an ice cream cone yesterday from the neighborhood ice cream truck. That cone looked really good, especially since I’d been working in the garden and was hot and tired. My “nature” would probably have been to go over and take it from her, but my “nurture” would not even let me entertain the notion.
I don’t know if the neurotransmitters themselves change except as a result of disease. I do know that areas of the brain seem to grow more of them when that area is stimulated, and conversely can atrophy in cases of severe neglect.
Severe, of course, being the operative term. Even poorly socialized children (being able to run wild, so to speak) don’t show actual atrophy, because they’re still learning social rules and communication skills, even if they’re not the rules and values that society at large encourages.
Sigh. I’ll ask the unanswered question again: how does the brain cause any changes in behavior in response to any sort of nurture, in your understanding? Do all brains respond to the same nurture in the same way, or do different brains respond to different stimuli in different ways from the get-go?
And, no, it doesn’t, as a general rule, grow more neurons as the means to effect behavioral change. In point of fact much of learning is pruning connections.
Do you read the links you post? Go back and read the “feral children” site. You’ll find they state that an “affliction” is not necessary for abnormal development. Similarly, as DSeid points out, even with an affliction, the biological development can be affected. The major point, yet again, is that it works both ways.
Abnormal social development? Yes, that’s what I said. It can be caused by socialization alone (or lack thereof) without any organic disabilities or imbalances.
Of course I lose my temper on occasion. Socialization is a process, not a destination. You never get done adjusting yourself, slowly adapting your behavior based on your accumulated life experience.
Every brain is different. Every socialization is different, with varying areas being stressed. I don’t believe the brain changes all that much except growing more neural connections in areas of the brain more frequently used. Only severe neglect seems to affect the brain as far as atrophy and things like that go.
It was my understanding that people who were more verbal had more neural activity in those regions of the brain. IIRC, autopsies on Einstein’s brain showed many more neural connections than average in the areas of the brain which control math and logic.
So it is your belief that the only changes that occur in a brain to effect behavior change as a result of socialization are “more connections”, and your belief is that “chemical imbalances” are alternatively “biology” …
Well let’s start with your understanding of depression. Few who actually study the neurophysiology of depression, well okay none who actually study the neurophysiology of depression, would seriously describe depression as a “chemical imbalance”, although it makes for a nice catch-phrase. Truth is that we know serotonin has a lot to do with it, but we really only have guesses about how antidepressants work, and truthfully they only work moderately better than placebo. Depending on the study and population studied, they work no better than a nurture approach, cognitive behavioral therapy. Antidepressants also work well for individuals suffering from severe grief reactions, and fMRI studies show little difference between those suffering from grief reactions (a clear nurture etiology) and clear-cut biologically caused depression.
Nature, nurture? A question we are best off being past. Degrees of predisposition and degrees of resiliency. Circumstances, be they situational or hormonal, that are more potent or less potent triggers to particular individuals at varying degrees of risk.
As for how the brain changes: it changes by modifying its structure, especially in early development. The earliest phases are marked by an increase in dendritic branching and synaptic density, and later much of learning is accomplished by pruning those connections. Of course some connections become more efficient by increasing receptor density and by changing the amounts of transmitters released. By doing so, certain areas come to activate more with some activities and others to activate less with others. All very biological, and much of it involving environmental input, also referred to as nurture.
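Since we’re about to go down the AI path anyway, a toy sketch of that in code, if it helps (a loose analogy of my own, with made-up numbers, not a model of real neurons): connections that get used strengthen, the unused ones weaken and eventually get pruned.

[code]
# Toy analogy only: "connections" as weights that strengthen with use
# and get pruned when they stay weak. Not a model of real neurons.

weights = {("A", "B"): 0.5, ("A", "C"): 0.5, ("B", "C"): 0.5}

def experience(active_pair, boost=0.1, decay=0.02, prune_below=0.2):
    """One 'experience': the co-active connection strengthens, unused ones weaken."""
    for pair in list(weights):
        if pair == active_pair:
            weights[pair] = min(1.0, weights[pair] + boost)   # stronger: more receptors, more transmitter released
        else:
            weights[pair] = weights[pair] - decay             # unused connections weaken...
        if weights[pair] < prune_below:
            del weights[pair]                                 # ...and eventually get pruned away

for _ in range(20):
    experience(("A", "B"))    # the same experience, repeated

print(weights)    # only the used connection survives, at full strength
[/code]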
Do you still lose your temper in different circumstances than someone else whose temperament is intrinsically more calm? Is the only explanation for those differences that you were socialized differently?
Back to the AI, Dig.
Situation one: Rules are not set, but the ranges of acceptable values are. Harming a human coworker, for example, would be placed as a very strong negative value and pleasing a human a moderately strong positive one. Letting itself get harmed and failing to repair its subsystems would be of moderately strong negative value, and achieving its goal moderately to strongly positive. Not harming other machine AI would be less of a goal value than not harming itself, and far below not harming human coworkers. Its tactics for maximizing the positive values could be entirely flexible. It would have no rules for behavior other than those it learned from other machines in a modelling fashion (okay, passing along program bits or bytes, if you will) or those it created itself.
Situation two: Rules are not set, and the values themselves are modifiable by how much reinforcement human coworkers give, up to some broad limits so as to assure human safety. It can modify itself as needed to accomplish a particular goal within the limits of material availability and its means to requisition them.
To me both cases have plasticity as part of the design. The latter is more “socialized” insofar as it is adapting to please human society, but both have a large “nurture” component.
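To make situation two concrete, a rough toy sketch (my own made-up names and numbers, not any real system): the values drift with coworker feedback, but each is clamped to a broad limit, so the human-safety value can never be trained away while the tactics for maximizing the rest stay flexible.

[code]
VALUES = {
    "harm_human":   -100.0,   # very strong negative, set by design
    "please_human":    5.0,
    "self_damage":   -10.0,
    "achieve_goal":   10.0,
    "harm_other_ai":  -3.0,
}

# How far human reinforcement is allowed to push each value (the "broad limits").
LIMITS = {
    "harm_human":    (-100.0, -100.0),   # not modifiable at all -- human safety
    "please_human":  (1.0, 20.0),
    "self_damage":   (-20.0, -5.0),
    "achieve_goal":  (2.0, 30.0),
    "harm_other_ai": (-10.0, -1.0),
}

def reinforce(value_name, feedback, rate=0.5):
    """Coworker approval (+1) or disapproval (-1) nudges a value, clamped to its limits."""
    lo, hi = LIMITS[value_name]
    VALUES[value_name] = max(lo, min(hi, VALUES[value_name] + rate * feedback))

def score(predicted_outcome):
    """Tactics stay completely flexible: just pick whatever scores best."""
    return sum(VALUES[name] * amount for name, amount in predicted_outcome.items())

for _ in range(10):
    reinforce("achieve_goal", +1)    # coworkers keep praising goal completion

print(score({"achieve_goal": 1.0, "self_damage": 0.2}))   # an acceptable plan
print(score({"achieve_goal": 1.0, "harm_human": 0.5}))    # still comes out strongly negative
[/code]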
I’m not sure where you’re taking this, but – sure. The only part I’d quarrel with is the use of the word “large”. But I won’t deny that may be my engineering side showing through; it’s beyond me how one might design a functioning AI without defining behaviors (etc.). Y’know…it’s nice to sit back, imagine some sci-fi scenarios, and say “we’ll just use reinforcement learning”. Then reality sets in and one realizes how much structure needs to be there already.
If it wasn’t beyond me, I’d already have made my name in academia, have tenure, and be set for life. Ah, one can hope.
Only taking it so far as an analogy for this discussion, hoping to clarify. The second I think is quite analogous:
We have various drives with dynamically varying states. We have a range of values attached to each of these drives, such that for each of them certain levels of drive satisfaction are of varying levels of importance. To some degree the importance attached to each level of each drive’s satisfaction is modifiable by socialization; that is, the “pleasing relevant others” drive is a bit of a meta-drive that allows some limited degree of modification of how important each point of the range of the other drives is. Behavior has a range that it will vary around as a result. Each of us comes with a tendency to different set points and a different degree of ability to modify each individual drive’s values by social approval.
The all nature side, if anyone actually took up the charge, would say that those drive values are unchangeable as a result of socialization.
The all nurture side would say that all individuals have the same initial set points, and that each individual’s drives are equally and infinitely modifiable in each direction.
Both are ridiculous positions. The values for each drive are modifiable, and we come with predisposed preferred set points that have constraints on how much they can be modified and how difficult they are to modify. Currently the best data is that those predisposed set points and constraints are considerable, but that significant modification of drives is certainly possible.
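To put some cartoon numbers on that middle position (entirely made up, only for illustration): two individuals get the identical socialization, but because they start from different predisposed set points and differ in plasticity and constraints, they end up in different places.

[code]
def socialize(set_point, plasticity, pressure, floor, ceiling):
    """Social pressure shifts a drive's set point, scaled by how modifiable it is
    and constrained to that individual's own range."""
    return max(floor, min(ceiling, set_point + plasticity * pressure))

# (initial set point, plasticity, floor, ceiling) for, say, an "aggression" drive
individual_a = (0.7, 0.6, 0.3, 0.9)   # starts high but fairly modifiable
individual_b = (0.4, 0.1, 0.3, 0.6)   # starts lower and is hard to budge

calming_pressure = -0.5               # the identical socialization applied to both

for name, (sp, pl, lo, hi) in {"A": individual_a, "B": individual_b}.items():
    print(name, round(socialize(sp, pl, calming_pressure, lo, hi), 2))
# A ends up around 0.40, B around 0.35: same nurture, different starting points, different results
[/code]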
(Oh, I realize that the actual doing of AI is mind-bogglingly complex. But theoretically the concept is simple. A program running to keep DriveA within a certain range, a program to keep DriveB in its range, etc. Programs have certain behaviors built in as responses to situations, but also have the ability to modify those responses on the basis of experience as well. Programs include ones that monitor the states of the other subsystems and prioritize what has to be done and which one gets an extra boost of processing power from the meta-level processor that all of these sub-programs are reporting to and getting direction from as to whether or not to execute their plans in real time. All are functioning in parallel. Theoretically a program to monitor the satisfaction of significant others is just another program; it is just hard to even imagine the processing power that goes into recognizing happy vs sad vs angry. The meta-program merely uses that input to learn its meta-rules and vary the importance of different subroutine results accordingly. That’s all. Have it done by next week.)
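In skeleton form, something like this toy sketch (names like DriveA and the step sizes are mine and arbitrary; the happy-vs-sad recognizer and the real-time parallelism are exactly the hand-waved hard parts):

[code]
class Drive:
    def __init__(self, name, low, high, level):
        self.name, self.low, self.high, self.level = name, low, high, level
        self.weight = 1.0                      # importance; the meta-program can adjust this

    def urgency(self):
        """How far out of its acceptable range this drive currently is."""
        if self.low <= self.level <= self.high:
            return 0.0
        gap = min(abs(self.level - self.low), abs(self.level - self.high))
        return gap * self.weight

class MetaProgram:
    def __init__(self, drives):
        self.drives = drives

    def step(self, social_feedback):
        # social_feedback: {drive_name: +1 approval / -1 disapproval}, i.e. the output
        # of the (enormously expensive, hand-waved) happy-vs-sad-vs-angry recognizer
        for d in self.drives:
            if d.name in social_feedback:
                d.weight = max(0.1, d.weight + 0.1 * social_feedback[d.name])
        # the "extra boost of processing power" goes to the most urgent drive
        return max(self.drives, key=lambda d: d.urgency())

drives = [Drive("DriveA", 40, 100, 25), Drive("DriveB", 35, 45, 41)]
meta = MetaProgram(drives)
print(meta.step({"DriveA": -1}).name)   # DriveA still wins: disapproval lowers its weight, but it is way out of range
[/code]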
So long as it’s an analogy, and my discomfort with the assumption concerning how easy it is to implement behavior through socialized instruction is registered, I’ve got no issues with that. It pretty much describes my thinking – I don’t believe I’ve yet wavered from the position that the nature / nurture debate is a big, interdependent mess.
In some sense, I have to disagree. To be clear, it’s not really disagreement so much as…um…I guess, as I said before, discomfort. For example, my advisor chastises me for concentrating too much on “implementation details”. Yet, too many times have I run into the situation where my advisor said, “But it’s easy; you just do X, Y, and Z.” Despite my misgivings, I do X, Y, and Z, and…it doesn’t work, exactly like I thought it wouldn’t. (I should point out that often it does; I’m not claiming that I’m a savant or even better than average relative to others in my field.) My only point in this is that it’s easy to make broad claims like “behavior can just be socially learned; now implement it”. Then you get to the “and then a miracle occurs” moment, and realize that, huh – the apparatus to actually do a lot of this also has to be implemented. There’s a buttload of “nature” required; until one actually sits down and makes the attempt to figure it all out, it seems beyond most people to grasp that.
Interestingly enough, this is what I’m working on. In particular, at this point in time, incorporating a “reasoning system” into a robotic system’s real-time, distributed (and hence parallelized) infrastructure. So, something like reflexes, but reflexes on steroids, as it’s more than just a simple stimulus / response action. I cast it in terms of reflection, as you know; the neat part is going to be integrating the low-level system “intelligence” with the high-level cognitive architecture. Next week is a bit soon, but I should have a preliminary implementation by December, as I should have my dissertation finished by then.
But the reason I bring all this up is simply to point out that recognizing happy vs. sad requires very little processing power (depending on implementation, of course). I’ve been contemplating the use of indirect measures; in my mind, it’d be akin to hormones (and I use the term to convey nothing more than an extremely loose analogy, in no way meant to actually reflect a biological model). Function would be affected by the “happy-related hormones”, while the level of hormones would be affected by function. Computer science is all about levels of indirection. Is the robot happy? Well, what’s the current level of the happy-related parameters? A trivial reading of a value in memory. The processing power would all be involved in sensory processing and homeostasis.
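A toy sketch of the indirection I mean (names and numbers made up, and again nothing biological about it): events nudge a stored level, a homeostasis step drifts it back toward a set point, and behavior just reads the level.

[code]
class HormoneState:
    def __init__(self):
        self.happy = 0.5                       # 0..1; "is the robot happy?" is just this read

    def on_event(self, delta):
        """Sensory processing decides the delta; updating the level itself is trivial."""
        self.happy = max(0.0, min(1.0, self.happy + delta))

    def homeostasis(self, set_point=0.5, rate=0.05):
        """Each control cycle, drift back toward the set point."""
        self.happy += rate * (set_point - self.happy)

def exploration_gain(state):
    # Function reads the level instead of re-deriving "happiness" every time:
    # e.g. a happier robot explores more, a less happy one plays it safe.
    return 0.5 + 0.5 * state.happy

s = HormoneState()
s.on_event(+0.3)                        # something good happened, as judged by the sensory side
print(s.happy, exploration_gain(s))     # reading the value is the cheap part
s.homeostasis()
print(s.happy)
[/code]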
In case you haven’t realized, that’s not gonna be done by December.
In case you haven’t seen it before, Left Hand of Dorkness linked to Wikipedia’s Nature vs. Nurture in another thread, which I’d not read before. :smack: I thought it provided a pretty decent explanation and overview.