#801
Given his favorite topic of blather, may I suggest he be called a rAcIst?
#802
Btw, that's a double ignore of Wolfpup.

Welcome to the club. May our man bring you your smoking jacket? How do you prefer your scotch?
#803
Damnit. What do I have to do to get on the list? I've been refuted with Wikipedia and everything!
#804
I'm going to back up here a few days. I wanted to tie both posts together because I think they are related, and I've been out of pocket. But there are a few unresolved questions I'd like to re-ask for clarification... Again, I'm in a civil tone for ya.
Quote:
Bottom Line: You're implying a mechanical, digital-based utopian society that is currently indefensible as a future prospect. You even admit this is indefensible with your comment that: "I don't claim to know the answer to your question because I don't know the way the future will go." So what are you positing for discussion? I offer that "when" humans are "converted to a computer" is completely dependent on the more pertinent question of "if". If you differ, please make your argument.

Tripler
An open discussion, SamuelA.

Last edited by Tripler; 01-26-2018 at 07:46 PM.
#805
Quote:
While I'm under no illusions that I can achieve a triple ignore myself, hope springs eternal, and there are so many opportunities that I can't help but make another effort, to wit:

Quote:

I would assume that the corresponding theory is that the Negro is genetically predisposed to be stupid and eke out a career dealing drugs and robbing gas stations. Those Negroes who might graduate magna cum laude from Harvard Law and become president of the United States are, of course, freaks of nature and can be ignored.

So I am anxious to hear SamuelA's view of the Negro, cast in the same light of "genetic adaptation" to the white Aryan culture in which he has -- in so incredibly non-racist a manner -- cast the brilliant Asian. The genetic contribution to societal productivity is certainly an important concept to all non-racists and non-Nazi non-eugenicists like SamuelA, so we would like to hear more from this eminent authority.
#806
Lucky you, K9friendfinder.
#807
Quote:
This is the same thing as Receiver = MAC(Sender); Branch(Receiver). We can, right now, today, trivially make computer chips that do this fundamental operation in 1 clock cycle while running at ~2 GHz. Most modern GPUs run at between 1.2 and 2 GHz, and contain thousands of hardware subunits doing this very operation.

You need not thousands, but trillions -- a vast data center crammed full of custom chips that would resemble a GPU in some ways -- but you could actually build a machine, if this were a Manhattan Project style effort, that has the same scale and scope as a brain.

The reason this is up your alley is that the biggest weapon on the planet isn't nukes, it's the human mind that allowed us to bang rocks together until we had nukes. While you have to actually program a computer that has the same physical capability as the brain with the algorithms that make it sentient like the brain -- a far harder task than building the raw hardware, which is why we have not yet done it -- when that problem is solved, this would be roughly the same relative advance as going from conventional to nuclear weapons. A machine mind that runs at 2 GHz would be 2 million times quicker, give or take (a 2 GHz clock against the brain's roughly 1 kHz signaling rates). It would make a nation that had just one, with the same capability as one human but 2 million times quicker, unbeatable given time to take advantage of it.

You know the idea of a Gantt chart, right? The key idea here is that all complex projects, whether it be making a new jet fighter or anti-ballistic missile or some other strategic-level weapon, are limited by a single "critical path" of steps that must be done. You can put the best people in the world on that path, and work them 16 hours a day, but it is still going to take you years to decades to develop a major new weapon to a deployable state. So if you had a super-AI that could do the key process steps and get you new prototypes in hours, where you just have to wait for them to be automatically fabricated, you could compress that timeline down to probably months per generation of weapon. You'd do similar compression for developing factories to build you more computers so you can have more AI nodes, factories to make you more factories, and so on.

The logical thing to do would be to develop enough defense capability against nukes that you then start a world war and beat everyone else. A few nukes getting through your defenses won't knock you out, because the only things that matter are these self-replicating factory nodes and AI nodes, and just 1 of each has to survive and they can copy themselves. All the logistic problems with invading every nation on earth at the same time and controlling every surviving human after you win go away when you can do it all with machine intelligence.

This is one scenario. There are many others. But the lure of it is very, very tempting to a lot of nations for national defense reasons. What are the possible reasons that this won't happen? Because it will unless something incredible happens.

a. A nuclear war ends civilization first
b. It turns out that human beings have mystical 'souls' that provide us our sentience
c. All the major powers agree that AI research is too dangerous and refuse to do it, and nobody cheats, and everyone honors the agreement, and a world police force is formed to inspect all nations
d. It turns out that the problem is too hard and you can't just write an algorithm you can describe in a few pages and kick the ass of any human alive at a well-defined task. Oh, whoops, you can.
e. It's going to take so long that you and I will both be dead of old age first

Most board members who think about this probably just assume (e) is the answer, to be quite frank. And I can't deny the logic; progress on this seems to be accelerating dramatically, but I can't say if it's going to continue accelerating and we hit machine sentience before 2030 or not.

Last edited by SamuelA; 01-26-2018 at 09:00 PM.
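(A rough back-of-the-envelope of the scale claim above, in Python. Every figure -- synapse count, event rate, MAC units per GPU -- is an assumed round number for illustration, not a measurement.)

Code:

# Scale comparison: synaptic events per second in a brain vs. the
# multiply-accumulate (MAC) throughput of one GPU. Assumed figures only.
SYNAPSES = 1e14            # assumed synapse count for a human brain
EVENT_RATE_HZ = 10         # assumed average events per synapse per second
brain_events_per_s = SYNAPSES * EVENT_RATE_HZ   # ~1e15 events/s

MAC_UNITS = 5000           # assumed single-cycle MAC subunits per GPU
CLOCK_HZ = 2e9             # ~2 GHz clock
gpu_macs_per_s = MAC_UNITS * CLOCK_HZ           # ~1e13 MACs/s

# If one synaptic event took exactly one MAC to model:
print(f"GPU-equivalents needed: {brain_events_per_s / gpu_macs_per_s:.0f}")
# ~100 on these assumptions; modeling each event with many MACs, or at
# higher assumed event rates, scales the required hardware up accordingly.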
#808
Alright, SamuelA, this is a hipshot: you've described 'Point "B"' knowing where we're at now. You're talking about when we get there.

I'm point-blank asking you: if we get there, how is it going to happen? We're at Point "A". Your 'Point "B"' is too esoteric and nebulous to argue without the 'how' to get there.

Tripler
Bridge that gap, brother.
#809
[Moderating]
SamuelA, saying "fuck you" to other posters is a violation of the Pit's language rules. Please avoid this in the future. No warning issued.
[/Moderating]
#810
Quote:
You know, if during the Manhattan Project we had decided to go all in on just one of the three main methods (calutrons, gaseous diffusion, plutonium breeding), we'd still have gotten nukes. Slightly sooner, even. And once we had nukes, going back and exploring the other methods would have been a lot easier to justify. In fact, more recently, we found a fourth method.

Right now the approach that to me feels the most valid is that we work on lower-level systems than machine sentience. We use the shit we've already demoed and adapt it to run robots that do just limited-scope tasks. Pick this weed, pick up that can, restock those shelves, pick up that rock, drill that ore vein, install that gear, drive that car. Each task is something in the physical world that humans are currently doing. It's something where there is a correct answer, every time. It's a task you can break into smaller substeps, where you can clearly define rules for doing the task "better" (finishing the task without dropping something, finishing faster, and not hitting the robot arm against something all make your solution better). And it's a significant fraction of all jobs on Earth.

Once we get all that working real smooth, we get robots that blow past human ability at doing these defined tasks (they aren't just more physically capable and tireless; I expect them to be smarter. They'll find ways to do these tasks that use fewer motions, take less time, and make fewer errors than a human would, even without their actuators being better), and we can push it further. Make intelligence systems that use predictive models of physical reality generated from the collective experiences of millions of robots. What I mean is that if you stick any collection of random physical objects that any of the robots in the pool have experience with in front of this new system, it'll be able to predict what will happen if you manipulate them. It'll know from experience that the red rubber ball will bounce and by how much. That the chip bag will crumple and how. That the gear edges are sharp and can do damage to the robot's own wiring and hydraulic lines.

And then if you ask it to accomplish a task that requires building a Rube Goldberg machine, and write some additional task-solver modules, it'll be able to do it. Not all on its own -- humans wrote the extra software to do it -- but humans taking advantage of the existing knowledge and ability the machine pool has.

I think you could iterate that way until you crack things like full machine self-replication, and you could probably crack nanotechnology the same way. Even non-sentient agents could predict how some carbon atoms are likely to move along a surface in a vacuum chamber when dragged around by atomic force microscope probes. Advanced agents could plan a sequence of steps to move the atoms to form some assembly. Really advanced agents could design an assembly that accomplishes a goal. You could eventually bootstrap your way up to agents that design for you whole nanoscale assembly lines and armies of nanoscale robotic waldos, and eventually achieve self-replication. (Note that this is NOT what we think of as sci-fi nanobots. It's these big flat plates that are very fragile and covered with I/O ports. The machinery lives in a vacuum chamber and can never see pressure or even visible light without being destroyed. There's a maze of plumbing supplying various gases to the ports. It sucks a lot of power and there's a huge flow of coolant going in and out. The products are either a fine powder or more flat plates.)

I don't know how to go from this to what we think of as full sentience. I'm not really worried about it; I think what I have described is already way beyond human ability in many areas, and I think you would be able to build various "meta" modules that self-optimize other AIs, analyze human speech, and one day you'd reach a critical mass of complexity and self-improvement loops that gives you the AI we've wanted this entire time.

Last edited by SamuelA; 01-26-2018 at 11:25 PM.
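(A minimal Python sketch of the "collective experience" idea from the post above. All class and variable names are hypothetical, and the least-squares fit is a stand-in for whatever learned model a real system would actually use.)

Code:

# Many robots log transitions into one shared pool; a single predictive
# model is then fit on the pooled experience. Illustrative sketch only.
import numpy as np

class SharedExperiencePool:
    def __init__(self):
        self.transitions = []                  # (state, action, next_state)

    def log(self, state, action, next_state):
        self.transitions.append((state, action, next_state))

class PooledDynamicsModel:
    """Least-squares stand-in for the learned physics predictor."""
    def fit(self, pool):
        # stack every robot's experience into one regression problem
        X = np.array([np.concatenate([s, a]) for s, a, _ in pool.transitions])
        Y = np.array([ns for _, _, ns in pool.transitions])
        self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    def predict(self, state, action):
        return np.concatenate([state, action]) @ self.W

# Toy demo: 1000 "robots" each log one step of the same hidden physics.
rng = np.random.default_rng(0)
pool = SharedExperiencePool()
for _ in range(1000):
    s, a = rng.normal(size=3), rng.normal(size=3)
    pool.log(s, a, s + 0.1 * a)                # hidden "true" dynamics

model = PooledDynamicsModel()
model.fit(pool)                                # recovers next ~ s + 0.1*a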
#811
Well the problem is that if you can't tell us how we're getting from A to B, if you cannot offer proof of your argument, or do some research to know what is being done and how, then you're just postulating. Expressing a guess. An opinion.
Don't be the guy who goes to the machine shop and says "I have an idea that's going to make us a billion dollars! Build me a machine that can move individual molecules to build larger structures." Machinist says "Great, tell me how to build it." Genius says "Oh no, I just gave you the idea. Now you build it."
#812
"I have a great idea for a screenplay. I tell you what it is, you write it and we'll split 50/50."
It's really easy to enthusiastically speculate about creating particular effects on the world. One can paper over any expected difficulty and wave away possible impossibilities, since it's all happening within one's mind, according to what one wishes. I really wish SamuelA spent as much time actually working on his ideas concretely as he does going on about them. Just taking a one-week break from this forum might do him well.

Last edited by MichaelEmouse; 01-27-2018 at 12:03 AM.
#813
I hate myself right now, but he's not saying this is a progression. He's saying that it's multiple choice.
Quote:
#814
You're absolutely right, and I thank you for the actually helpful advice.

#815
Quote:
The programming part of the project was allocated 30 days. Gave my notice, moved on, people got pissed at me for doing it. Another 5-6 months later the project was shitcanned and the entire 110-person division laid off.

#816
Quote:
Sometimes, it's easy to get so focused on something that you tense up, hyperfocus, lose perspective, and small things appear much bigger than they really are because you associate them with something in your past or your sense of self. If you step away for a while, like a two-day vacation you give yourself to enjoy something light and pleasant, you might benefit from a second, fresh look. The worst that will happen is that you'll wind back up right where you are now.

Last edited by MichaelEmouse; 01-27-2018 at 03:47 AM.

#817
Quote:
I really feel slighted here.

#818
Quote:
Tripler
Open ears.

#819
Quote:
What are you talking about by "hacking"? Or "giving information to the machine"? That's not what reinforcement learning is. Humans build the plumbing, but the reason the machine would "know" a bag of chips crumples is because it has subsystems that do that, and those subsystems figured it out from observation.

A simple one would just have a neural network that takes the output from the classifiers. That's the module that looks at the camera feed and labels the different parts of the image, like "chip bag". Other subsystems would reconstruct the geometry from a mixture of stereo cameras and lidar. And those subsystems feed into a simulator. That's a neural network that predicts the new state of the system. It would have weights, and it would predict that the future state of the chip bag, post pressure, is pressed inward more, with the geometry distortions predicted by numbers that were found from the data.

It's a very complex topic, to be honest. I can't really do it justice. I just "know" we can get these pieces to work extremely well, and to build agents that do more complex tasks. And there's hundreds of billions of dollars being poured into it. I also "know" that the problem I have described -- various common objects inside a robotic test cell, with several robotic arms and a defined goal that requires the machine to "invent" a Rube Goldberg machine to accomplish the task -- is the type of problem that is very solvable with the current state of the art.
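(A toy Python version of the "learned simulator" idea above: fit a predictor for how much a chip bag compresses under pressure, purely from observed data, with the relationship never hand-coded. All numbers are made up for illustration.)

Code:

# The "simulator" learns crumpling from observation, not from rules.
import numpy as np

rng = np.random.default_rng(0)
pressure = rng.uniform(0, 10, 500)                  # observed pushes
height_before = rng.uniform(5, 15, 500)             # bag height, cm
# hidden "true" physics the system never sees directly:
height_after = height_before - 0.4 * pressure + rng.normal(0, 0.1, 500)

# Least-squares fit: height_after ~ w0 + w1*pressure + w2*height_before
X = np.column_stack([np.ones_like(pressure), pressure, height_before])
w, *_ = np.linalg.lstsq(X, height_after, rcond=None)

print(w)   # ~[0, -0.4, 1.0]: the crumple response, recovered from data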
#820
How could you possibly "know" this?
#821
I don't know who that is, but he must have left quite the impression on Sam.
#822
Quote:
Tripler
Can we at least agree to hate the Soviets?

#823
Quote:
The simulator/predictor is a neural network such that Predictor_Convolve(S0) = predicted S(t0 + dt). That is, it's making the prediction that after a small amount of time, there will be a new state. You can obviously keep re-running the predictor, and the predicted states are going to become increasingly uncertain for moving objects and stay pretty firm for stationary objects. The key trick is that after dt actually passes, you feed what the environment actually did back to the predictor, and you adjust its matrix of numbers in a way that will cause it to give more accurate predictions next time.

Then the other key component of this system is a planner. This is a system that guesses possible paths that might accomplish your goal. So if it's "shove the red ball to the left touching nothing else", the "goal" is just a matrix of numbers that contains a shift to the red ball position. The planner will come up with possible guesses as to sequences of robotic arm motions that might accomplish what you want. The planner's guesses get optimized by comparing them to what the predictor thinks will happen. And then the system picks the best path and does it. It uses the results from that path to update the planner.

Given enough data, the planner has "machine intuition". This is where this starts to really work. These algorithms need not be even a tiny fraction as good as human brains. But if you can give them the collective experience of a million separate robots working for 1 year, that's a million years of experience. Or maybe 1000 real robots and 999,000 simulated robots. Either way, this vast pool of data will mean that the predictor has truly "seen everything". The planner has tried many, many strategies and knows for a given configuration what type of things are actually going to work.

This is why you get superhuman performance. Your machine has far more experience doing what it does than any human alive. Also, it always does its best. At all times, it's faithfully working out the optimal answer from the data it has. It never gets tired or angry or bored.

You can see how this type of algorithm slowly gains on humans. You could build one that knows how to fight jets in a dogfight. It has millions of years of experience in aircraft simulators and a smaller amount of real flight time. So it's always going to be calculating the path that optimizes its chance of victory, but doing so using expected value calculated from the sum of all the outcomes that typically happen in a given scenario.

Last edited by SamuelA; 01-27-2018 at 01:35 PM.
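(A minimal Python sketch of the predictor/planner loop described above, using "random shooting": guess many action sequences, roll each through the predictor, keep the best. The dynamics function is a hand-written stand-in for the trained network, and all names are hypothetical.)

Code:

import numpy as np

def predictor(state, action):
    # stand-in for Predictor_Convolve: the state a small time step dt later
    return state + 0.1 * action

def plan(state, goal, horizon=10, candidates=256, seed=0):
    rng = np.random.default_rng(seed)
    best_seq, best_cost = None, np.inf
    for _ in range(candidates):
        seq = rng.uniform(-1, 1, size=(horizon, state.shape[0]))
        s = state.copy()
        for a in seq:                       # roll the predictor forward
            s = predictor(s, a)
        cost = np.linalg.norm(s - goal)     # distance from goal at the end
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq                         # best guessed motion sequence

state = np.zeros(3)
goal = np.array([1.0, 0.0, -0.5])           # e.g. "shift the red ball left"
actions = plan(state, goal)                 # sequence the arm would execute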
#824
Quote:
Tripler
To me, they're the folks that say I can't say stuff.

#825
Quote:
So in my example of a "Rube Goldberg-constructing robot", the machine has several cameras and a lidar. There is, say, a red ball on the table, a chip bag, and a gear.

The classifier converts the large digital video frames from the robot's cameras to [objects found] [identities for each object] [positions in 6 axes for each object] [velocities for each object].

Last edited by SamuelA; 01-27-2018 at 01:53 PM.
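(One plausible way to represent that classifier output in Python; the field names and values are illustrative assumptions, not any real system's schema.)

Code:

from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str          # identity, e.g. "red_ball", "chip_bag", "gear"
    pose: tuple         # position in 6 axes: (x, y, z, roll, pitch, yaw)
    velocity: tuple     # (vx, vy, vz, wx, wy, wz)

# One frame's worth of output for the scene described above:
scene = [
    DetectedObject("red_ball", (0.20, 0.10, 0.05, 0, 0, 0.0), (0,) * 6),
    DetectedObject("chip_bag", (0.50, -0.10, 0.03, 0, 0, 0.3), (0,) * 6),
    DetectedObject("gear",     (0.70, 0.20, 0.02, 0, 0, 1.1), (0,) * 6),
]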
#826
Quote:
In fact, modeling how the mind really works is something that cognitive science is only in the most primitive early stages of even beginning to understand. What we do know is that the computational paradigm is only a small -- albeit important -- part of the theory of cognition. Of course it's "simple" in SamuelA's world -- so is everything. Simple, and wrong. The brain is "computational" only in the most trivial, unscientific sense of the word.

Modeling how the mind works is also of only marginal relevance to AI, as the most effective practical AIs today have been built by applying a wide range of different technologies and heuristics that have practically zero relationship to how the mind may or may not work. Their potential has also been consistently overestimated. When the first language translation systems appeared, it was widely believed that language would soon cease to be a barrier to human communication and human translators would all be out of business. And then someone tried translating "The spirit is willing but the flesh is weak" into Russian, and the machine rendered "The liquor is good but the meat has gone bad", which became emblematic of the magnitude of the contextual problem and AI over-optimism in general.

To say that SamuelA's ideas about the brain and AI are gross oversimplifications would be incorrect. They are in the category of "not even wrong", completely missing the fundamental nature of the problem. The best way I can describe it is that if there had been a SamuelA a couple of hundred years ago, he would be positing that the future of aviation will be premised on dipping yourself in glue, covering yourself with feathers, and flapping your arms real hard. It's simple!

To be clear, I believe in the future of computational intelligence, and we've made big strides since the early days of AI, but we have a very long way to go. IBM's DeepQA project, for example, has impressive potential, but like all other AIs it still operates only in very narrow domains of competency, and has to be painstakingly trained in each one. Further, it's almost impossible to predict the trajectory that emergent systems will take, and even less so their societal impact. No, it isn't "simple".
#827
Quote:
Remember, in Mother Russia, AI translates you!

#828
I'm a latecomer; I read the first post, then SamuelA's first post in the thread, and then this page.

Seems like his primary point is that if we had the physical details of a brain, down to the appropriate level, we could use them to build a simulator that functions substantially the same as the original. That seems like a reasonable position if we assume that we don't need to simulate down to the level of quantum interactions (I assume that becomes problematic), and if we assume our behavior is based on physics/energy, not some unknown component like a "soul."

Is the primary disagreement with how many decades or centuries it will take for humans to be able to capture the state of a brain? Or is the disagreement that the level of detail required is so great that capturing the state will also alter the state, so the result would be invalid?
#829
Quote:
#830
Quote:
#831
For me, the further disagreement is with the idea that we should go ahead and kill folks now, instead of letting them die, so that we can save some of the data. We don't know how the brain works. You and he may think it's computational, but that's not a model we've developed yet. We don't know how dementia works. We don't have a way to save/store a brain. We don't know how to re-animate a brain. But, sure, let's go ahead and skip to the "inevitable" and start killing folks now to "save" something, when we know nothing at all about what we're doing. All of this because to Sam brain = computer. Maybe someday it does. Today it does not. Today it's tech we cannot replicate, and when it stops working, it's gone.

What he proposes is not inevitable, in this or any number of other areas. Hand-waving away discussion because something will happen, with no thought to how or why, is ludicrous. Refusing to talk to posters who question him is childish.
#832
Quote:
So it sounds like you feel that cell preservation alone would not be enough -- that the electro-chemical soup they reside in needs to be preserved also? That seems accurate for preserving exact point-in-time state, but for general state (e.g. personality), it's possible that could be re-created/re-balanced from the state of the cells that naturally maintain those things. Kind of like waking up in the morning, but it might take 5 days of waking up for the system to get back in harmony (maybe).

Although an additional complexity is that to simulate on the computer, you would probably need to get down to the level of neuron DNA methylation, because that drives synapse maintenance after learning and probably many other things. Meaning if you ignored it, you might have all the synapses mapped, but the system would not maintain them, because the flag that says to maintain a synapse is set in DNA methylation.
#833
Quote:
I mean, we might both be wrong, Raftpeople; it might not be "just computation". But that's what the evidence says at the present time!
#834
Agreed, that's a tiny bit optimistic.
#835
Quote:
So I think this evidence indicates that the synaptic weights (you measure them by simply counting how many receptors are on the receiving cell and what state they are in) and the wiring topology are probably all you actually need. The "soup", the myelination states, etc. are probably all temporary. Like starting up a computer system again where you've cleared the RAM, but the hard disk is the same -- and this computer system is very robust, where you can scramble a random 30% of the bits on the hard disk and it will still run the same as before.

You would physically do this count by tagging the receptors with a molecule that will be visible on an electron microscope and is specific to the type of tag. So your reconstruction only needs to recognize the rough shapes of the actual axons and the probable destinations (for the topology), and there's a strength estimate from counting the number of tags of a particular type at a synapse. Probably all the rest of the information doesn't even matter.

And even if it isn't, that doesn't matter. Minds are about change. If you can get even sorta close, I think a person could re-learn everything, much like post-stroke someone can re-learn basic tasks. Except if their brain is no longer squishy, inaccurate flesh, but is neural weights in a very large and very fast and accurate computer, it would be like re-learning everything when you have an IQ of 300.

Last edited by SamuelA; 01-27-2018 at 04:33 PM.
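(A sketch of that reconstruction as a Python data structure: the connectome as a sparse weight map, with each weight estimated from a tagged-receptor count. Purely hypothetical -- no scan of this kind exists at whole-brain scale, and the weight formula is an assumption.)

Code:

from collections import defaultdict

connectome = defaultdict(dict)     # connectome[pre][post] = weight estimate

def record_synapse(pre, post, receptor_count, receptor_state):
    # assumed estimate: tag count scaled by a 0..1 state factor read
    # off the electron-microscope imagery
    connectome[pre][post] = receptor_count * receptor_state

record_synapse("n_000001", "n_000042", receptor_count=87, receptor_state=0.6)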
#836
My (non-confirmed, because nobody knows) suspicion is that the mind is a process, and that slicing up a preserved brain can no more revive that process than slicing up the CPU and RAM of a computer can restore the game you were playing when the power went out, no matter how fine your microtome slices.

#837
Is it? What if the 95th percentile outcome for their illness and present stage is 1 week to live? Or 1 month? You're trading off a small amount of remaining lifespan, where the person is probably in a lot of pain and fear, against a non-zero chance of a positive outcome.

I mean, there's no negative outcome. If the cost to freeze them and for 300 years of coolant is less than the cost of 1 week of ICU care and the cost of a funeral, then you're ahead financially*. Even if you can't ever revive them in 300 years, it's still a positive EV: you measure the chance that you can revive them times the utility if you succeed, and that's obviously a large positive number, however you weight it.

*The coolant is about $1000 a decade, $10k a century, and that neglects the fact that you could invest money at 1% interest to pay for coolant next century. There are also site and security costs, but you could easily put a million patients per underground complex and thus get the cost down considerably.

Last edited by SamuelA; 01-27-2018 at 04:40 PM.
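(The post's own cost figures, worked through in Python. The $100/year coolant bill and 1% interest rate are the post's assumptions, not real cryonics prices.)

Code:

coolant_per_year = 1000 / 10        # "$1000 a decade" -> $100/year
interest_rate = 0.01                # the post's assumed safe return
endowment = coolant_per_year / interest_rate
print(endowment)                    # $10,000 funds coolant indefinitely at 1%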
#838
Quote:
1 - follows a set of rules (e.g. physics)
2 - random (e.g. quantum)
3 - something else outside of nature

Would you argue that the brain relies on something other than physics?

#839
No it isn't; it's just not technically true that we have proven our model beyond all doubt by creating an emulation that does exactly the same thing. Yet. We have a model. There are neurons. They fire when they reach a voltage threshold. Signals are all or nothing. They travel at speeds much less than lightspeed, limited by retransmission nodes. Each synapse has a weight and either adds to or subtracts from that voltage level. Some neurons are connected to glands that can emit something that goes brain-wide and affects all the synapses of a specific type.

Everything here is straightforward to emulate; you just need to use a computer with sufficient memory, and bandwidth to that memory, to even remotely approach realtime speeds. And you need a scan of all the synapses, which is very expensive to get.

Last edited by SamuelA; 01-27-2018 at 04:45 PM.
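(A minimal Python sketch of exactly that model: leaky integrate-and-fire units with weighted synapses and all-or-nothing spikes. This is the textbook abstraction, not a validated whole-brain emulation; sizes and constants are arbitrary.)

Code:

import numpy as np

N = 1000
rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (N, N))     # synaptic weights (+ adds, - subtracts)
v = np.zeros(N)                    # membrane voltages
THRESHOLD, LEAK = 1.0, 0.95

spikes = rng.random(N) < 0.05      # seed some initial activity
for step in range(100):
    v = LEAK * v + W @ spikes      # weighted input from last step's spikes
    spikes = v >= THRESHOLD        # all-or-nothing firing
    v[spikes] = 0.0                # reset any neuron that fired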
#840
Quote:
One of the key questions in cognitive science, and specifically in the computational theory of mind (CTM), is the extent to which mental processes are, in fact, computational. There is strong evidence that some are, and also evidence that many are not, or that we just don't know how to characterize them that way. CTM is a strong theory, but no one pretends that it's a complete explanation of how the mind works, much less that it can all be described in terms of classic computation.

Mental imagery is a good example of some of the controversy. Do we process images according to this computational syntactic-representational model, or do we have an internalized mental "movie screen" on which we project remembered images? There's evidence for both theories. Some have shown that the visual cortex is involved in such recollections of mental imagery (supporting the latter), while others provide evidence for the former (for instance, a priori knowledge influences the interpretation of mental images, making them immune to things like the Müller-Lyer illusion). CTM remains an important pillar of cognitive science, but the computational nature of the mind remains controversial and elusive.
#841
Quote:
If I've ripped open the guts of some machine, and I don't really know how it works, but I find the wires come together into these little parts that I do understand -- because all they seem to be doing is adding up and emitting pulses -- how does what you are saying prevent me from making another copy of that machine if I tear one down and slavishly duplicate every connection?

Another really fascinating question: let's say I build a machine-learning classifier real quick, but it's one that doesn't start out with tagging. It just looks at camera images with a LIDAR overlay and starts to group contiguous objects together. Say there are just 2 objects you ever show it, from different angles and distances. At first the classifier might think there are hundreds of different objects, but let's say some really clever algorithm converges it back down to just 2 that are rotated at different angles. So at the end of the process, you have this sequence of stages that goes from <input sensors> to [X X], where the outputs are [0 0] (neither present), [1 1] (both present), [1 0] (object A present), and [0 1] (object B present).

I'm really curious how this machine, which we could actually build today, "counts" in your computational theory. Note that we don't have to build it as a python script; we could program separate computer chips to do each stage of processing and interconnect them physically, thus making it resemble a miniature version of the real visual cortex.
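(A toy Python version of that untagged classifier: many views of two objects, collapsed into two clusters with no labels ever given. Plain k-means stands in for the "really clever algorithm", and the 2-D features stand in for camera/LIDAR input.)

Code:

import numpy as np

rng = np.random.default_rng(2)
obj_a = rng.normal([0, 0], 0.3, (200, 2))   # many "views" of object A
obj_b = rng.normal([3, 3], 0.3, (200, 2))   # many "views" of object B
views = np.vstack([obj_a, obj_b])

# k-means with k=2: converge hundreds of apparent objects down to just 2
centers = views[rng.choice(len(views), 2, replace=False)]
for _ in range(20):
    assign = ((views[:, None] - centers) ** 2).sum(-1).argmin(1)
    centers = np.array([views[assign == k].mean(0) for k in range(2)])

outputs = np.eye(2)[assign]   # per view: [1 0] = A present, [0 1] = B present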
#842
You think you have a model. You might have ideas of where to start. Your core argument is that because a brain uses signals and a computer uses signals, they must in the end be equivalent. But how does the brain use those signals? Different types of signals mean different things. Sometimes the same signals mean different things. Brains re-route to work around damaged sections, sometimes. They self-repair, sometimes. I could go on, but at a high level, the point is we don't yet understand enough about the brain to make a model of brain function to emulate. You are at step one, which seems plausible, but that's not the same thing as a model that will, in the end, be the right model. We don't know how the brain works. We need to know that in order to know what we want the computer/AI to do. Simply saying we want to replace the brain with the AI is not sufficient. It is aspirational, but not in any way a methodology for how to get there.

A scan of all the synapses will achieve little on its own, because we don't understand what they do. It's a step, only. It's like mapping the human genome. Great, we've got it. On its own, without further research, it's just data.

Last edited by Sunny Daze; 01-27-2018 at 05:13 PM.
#843
Quote:
In addition, there are different types of connections, some electrical, some with neurotransmitters; and then there are glia with gliotransmitters, and neuron DNA methylation that triggers protein creation to maintain synapse strength due to learning, etc. etc. etc. There is no current understanding of all of the pieces that either perform computation or maintain/alter physical state that impacts computation.

I agree with you from the perspective that it's physical and could theoretically be simulated in the future, but I do not agree with you that we have enough information today to simulate even one single neuron properly/completely.
#844
Quote:
But from the perspective of physical simulation, it's possible to be successful without understanding how the higher-level computation happens. Determining what level of physical detail to simulate is clearly a non-trivial issue, and getting an accurate state at that level of detail is non-trivial.

#845
Quote:
CMC fnord!
__________________
It has come to my attention that people are stupid. We, the smart ones, should be coming up with plans for how to remedy this, but we're all too busy watching Battlestar Galactica. — wierdaaron

#846
Doctor to patient: "Well, we have two options here. Option one is that you get to live for a few more weeks -- months at best. The quality of your life will go down, but we will try to manage the pain as much as possible, and you will have some more time with your family. Option two is that we cut off your head now, and maybe in a few hundred years total strangers will decide to make a computer program based off a thumbnail sketch of your memories. We'll let you think it over."

I don't know why you keep acting like this is some sort of escape from death. You will still be dead. No matter how good the little computer game based on your brain might be, you'll still be dead. You'll never know anything about it, ever, because you will be rotten, dead meat banished to eternal insensate oblivion. So why should you care whether some computer program in the future thinks that it is you? It won't be you. You won't know about it. You will be nothing. Ever again.
#847
I'm starting to feel I am not doing my part, not having gotten even a single ignore, and he doesn't even know my name.
Part of this is because I am not as versed in some of these subjects as others, so I don't have much to contribute to an argument about computational models or how they relate to simulating and replicating consciousness and stuff like that. I could go to Wikipedia U and get an "I read an article on it" degree, but the nitty-gritty of it doesn't interest me quite enough to devote even that much time to it. I like to think of myself as being a bit above average intelligence, and my interest in science and technology puts my knowledge of such things well above that of the average layman, but far below an expert's. Basically, it qualifies me to, along with an additional 6-10 years' worth of intensive study, actually start to understand what is being explored at the most fundamental levels.

Here's the thing, though. There are experts in these fields. There are some really smart people who have devoted their entire lives to understanding these concepts, and they have much less confidence in how they will develop than Sammy, who is at best a well-informed layman, does.

It's fun to explore our future and the possibilities that may lie ahead of us. But the entire reason for that is because the future is uncertain. None of us know what is around the next bend. I think that at some point in the future (assuming trump doesn't kill us all), we will be unrecognizable as we become more one with machines, achieve functional immortality, and spread across the galaxy and universe. As far as timeframes or precise paths that are taken to reach this state, there are many and varied, and we don't actually know which ones are viable yet. It is hard for me to get mad at optimism. The arrogance is annoying, but it does come from a place of believing that mankind can and will achieve many great things.

I like being ignorant. It means that there are things I get to learn. As long as I acknowledge that I don't know everything and am not the expert on everything, I find that I learn something new in nearly every interaction. It is when it is assumed that one does know everything that one can no longer learn. And that is where Sammy is: he thinks he knows everything there is to know, and so refuses to learn new things. This turns the innocence and wonder of not knowing into the contempt for learning new things that is willful ignorance.

Willful ignorance leads to many irritating and antisocial behaviors, one of which is racism, which our friend has been showing signs of, of late. Not the racism of hate or contempt, but the racism of ignorance. Ignorance can be cured easily, as can racism based on ignorance. Willful ignorance is not so easily treated, and really requires some level of humility on the part of the willfully ignorant in order to change. Humility is also not something that Sammy has demonstrated. That would be the first sign of growth for him as a person.
#848
Quote:
The actual experts in neuroscience have scanned and emulated sections of animal brains and have gotten promising results. They have managed to duplicate at a high level most of the behavior we see.

Fuck, the actual experts in flight think hypersonic aircraft are very possible. It's the engineers trying to deliver who are struggling.

Last edited by SamuelA; 01-28-2018 at 02:50 PM.
#849
Quote:
All the experts agree that the finish line of their field is within sight? Sure, self-replication is easy; living things do it all the time. So, just do what living things do, and we are all good, right? And of course, when you do that, you will keep none of the shortfalls and limitations of living things, but have only the robust perfection of machines?

AI is coming a long way, and does many things quite well, and will probably do other things better in the future. But just putting white-collar managers out of work because a computer can allocate resources better, faster, and cheaper than a person is not the same thing as actually replicating human thought.

The brain scans have been "promising" in that we are learning about things on that scale. They are not "promising" in that we now understand everything about them to the point of being able to make accurate predictions as to how they work, or even to give a timeframe or roadmap to seeing how they actually work.

But that all comes back to my point. Yes, there are experts who are optimistic about their fields. But there are also experts who are not so optimistic. You only listen to the first group, and assume that the second group doesn't know what they are talking about, because they do not confirm your positions. Ignoring the group of experts that are less optimistic about the outcomes is willful ignorance, which leads to the arrogance that many posters have indicated makes you rather off-putting.

ETA: Your second edit about hypersonic aircraft (which is a new topic) actually explains what you are lacking. It is the optimistic theorists that you are listening to, while the engineers, the people who actually make theory and reality meet, are the ones you are ignoring.

Last edited by k9bfriender; 01-28-2018 at 03:01 PM.
#850
Quote:
The Air Force flew the X-15 at speeds over Mach 6 more than 57 years ago. Catch up with technology, dude.