If we knew this or that, what could we invent?

01101110 01101111 means 01101110 01101111.

It took me a moment to identify what part of 01101110 01101111 I didn’t understand. :slight_smile:
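
For anyone who doesn’t feel like decoding those two bytes by hand, they’re just ASCII. A one-line sketch (Python is my choice here, nothing from the thread) spells it out:

bits = "01101110 01101111"
print("".join(chr(int(b, 2)) for b in bits.split()))  # prints "no"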

  1. I think it would be interesting to speculate about what the possibilities are if we discovered a superconductor that could operate in the normal temperature ranges that exist on Earth: a material that functioned as a superconductor without any need for cooling.

  2. In E.E. “Doc” Smith’s 1928 novel The Skylark of Space, he described a metal called arenak, which is 500 times stronger and harder than the strongest and hardest steel and does not soften even if heated to the surface temperature of the sun. Roger M. Wilcox wrote an essay speculating about what it would mean if such a metal actually existed.

http://www.rogermwilcox.com/arenak.html

The problem with arenak/unobtanium/whatever hypothetical super-strong substance you prefer is that we don’t have any tools hard enough to work it. A lump of arenak ore is always just going to be a lump.

Well, the biggest reason our manned space program has been in a virtual state of arrested development for decades is that we do not have a means of propulsion that would make it practical to send manned missions deep into space.

So, let’s say we discover the graviton, which is the hypothetical quantum elementary particle that mediates the force of gravity, and learn to manipulate it so that we can use it as an extremely powerful, weightless, and cost-free form of propulsion. Suddenly, the entire solar system is open to exploration, colonization, and the harvesting of its resources. It would represent the greatest leap in civilization yet. People would be able to move off world and live. The options that would open up are virtually limitless.

No, you have it backwards. Consciousness is not something that is analytically understood and then explicitly built. It’s just an abstraction that describes one of the properties that emerge from certain types of sufficiently advanced intelligence. And there is no clear distinction between “weak” and “strong” AI, just as there’s no clear distinction (actually, no distinction at all) between computation and thinking. There’s no magic going on here, no mystical thresholds, it’s just a very broad continuum. Philosophers like John Searle disagree, but among both AI researchers and cognitive scientists, he’s generally regarded as a short-sighted moron.

The practical implication is that we’ll have intelligent machines that are smarter than we are and that exhibit what we call consciousness, and we still won’t know what it is.

Killjoy. :wink:

That some concepts of science fiction have been reproduced in reality—like videophones, rockets, and lasers—does not mean that all science fiction concepts are physically practicable given sufficient knowledge. The matter transporter of Star Trek, for instance, would require advances in many sciences (quantum mechanics, computation, energy projection) that we have no reason to believe will ever be possible. And of course, it was not conceived based upon some projection of existing or proposed technology, but rather as a way to avoid the expense of building shuttlecraft and flying the crew up and down between the ship and the planet. We can imagine all manner of things, from dragons to making babies with bumpy-headed aliens, but being able to imagine them doesn’t mean they can occur.

Subvocalization is a specific cognitive activity relating to reading. Most people do not subvocalize when reading, and nobody generally subvocalizes their internal thoughts as a matter of course. At any rate, this isn’t mindreading any more than looking at someone’s involuntary facial responses is.

People often espouse the idea of “scanning the brain” to get information, but hardly anyone ever explains what they mean by this. We can currently observe the activity of the brain (“functional neuroimaging”) by a few different methods, but all except for electroencephalography require the head to be essentially stationary in a scanning device, and even the best cannot image at the level of specific networks of neurons which would be necessary to identify specific thought processes at a level where you could literally interpret someone’s thoughts. The best we can do is identify specific areas that certain types of cognitive activities tend to activate. Even if we could somehow “scan” the brain at the level of individual neurons, recognizing those processes across the 86 billion neurons would be far too computationally intensive for any kind of real time interpretation even if we could somehow make a one-to-one correspondence between neural activity and specific thoughts. The likelihood that we would ever be able to make a high resolution scan of a brain and apply it to some kind of a computer model to replicate the subject’s internal cognitive concepts is beyond any extrapolation of neuroscience. At best, we might be able to develop a model of someone’s affective responses and assess their emotional states.
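
To put a very rough number on “far too computationally intensive,” here is a back-of-envelope sketch in Python, using the 86 billion figure from above plus assumed ballpark values for synapses per neuron and average firing rate (those two numbers are my assumptions for illustration, not measurements):

NEURONS = 86e9              # figure cited above
SYNAPSES_PER_NEURON = 1e4   # assumed ballpark
AVG_FIRING_RATE_HZ = 1.0    # assumed ballpark average rate

events_per_second = NEURONS * SYNAPSES_PER_NEURON * AVG_FIRING_RATE_HZ
print(f"~{events_per_second:.0e} synaptic events per second")   # ~9e+14

Even just recording a stream on that order in real time, never mind interpreting what any of it “means,” is far beyond anything we can plausibly build, which is the point being made above.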

Searle isn’t a “short-sighted moron,” but like philosophers in general he is dealing with artificially constructed thought experiments that make certain assumptions and models that are very weak approximations of reality. He does assert that we will probably not be able to produce human-like artificial intelligence in a digital simulation, and I think that is probably true for a number of reasons, not the least of which is that the brain is not just a network of neurons that can be replicated in discrete logic states but is largely driven by the underlying interactions of neurons with neurotransmitters and sensory feedbacks, which produce the layers of different gradations of cognition and self-awareness that give rise to what we perceive as consciousness.

At a minimum, a human-like machine intelligence would have to sufficiently represent human physiology down to the cellular level, and I’m dubious that could ever be done by emulation on top of digital hardware. If we really want to produce a human-like cognition, it probably needs to be done on ‘wetware’ that looks and behaves like the human brain, even if constructed by synthetic biology. This doesn’t mean we cannot make a machine intelligence which is capable of remarkable feats of cognition and calculation—in fact, we can currently make intelligent systems which can outthink the most capable humans in specific narrow tasks such as information recall or chess, and certainly in data processing and visualization—but they don’t fundamentally work like a human brain. When we build a digital artificial general intelligence (AGI) that is capable of functionally performing many human intellectual tasks, it will not look or work like a mammalian brain, and, given its own self-determination to produce original ideas or creative works, it will probably produce something incomprehensible to us.

Stranger

Oh, I don’t know about that.
A laser beam or a plasma torch will cut even the hardest materials.

If we could solve the halting problem, we’d have to throw out all mathematics, or close to it, because we’d have found a major inconsistency in basic logic.

For example, consider this program:

Stinker(code) {
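    /* Halts(code) is the hypothetical decider we are assuming exists: */
    /* it returns true iff the program "code" halts when run.          */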
    if (Halts(code)) {
        while (true) ; /* Loop Forever! */
    }

    return true;
}

That little program, Stinker, sees whether a given program halts with no input. If it does halt, Stinker does not halt. If it doesn’t halt, Stinker does halt.

So, what happens when we feed Stinker to itself? If it halts, it can’t halt, and if it doesn’t halt, it must halt! Contradiction!

I wouldn’t call Searle a moron, but his Chinese Room experiment is transparently bogus in that it misrepresents a key concept, perhaps deliberately: The system of the Chinese Room is the student plus the rule book. The fact the student can’t read Chinese is meaningless, just like the fact my thalamus doesn’t speak English is meaningless: I speak English because I am more than just a thalamus plus some other fatty jiggly bits. The “thalamus plus fatty jiggly bits” system is called the brain, and my brain, as a whole, speaks English just fine.

I refuse to give any credence to the accident of implementation that is bringing a book into this mess. Having every part of the intelligent system be made out of the same stuff is either tautological (we’re all matter) or meaningless.

The Chinese room certainly has problems but this particular objection was one of the first thrown at it and he responded to it immediately (decades ago). He asks that we modify the hypothetical to allow the person to memorize the rules of the book, still without knowing the semantics of what he/she is saying.

You might consider it a poor retort, but you can’t suggest it’s the elephant in the room that he ignores / misrepresents.

If the person memorizes the book, then there’s a different person inside of the person. No ordinary person can memorize a sufficiently-complex rulebook, and that’s not just a quibble, because if you had people with sufficient memory that they could, then you’d need an even more complex rulebook to mimic such a person.

He’s still trying to split something which can’t be split: A system with the knowledge of how to follow the book plus the book yields a system which can speak Chinese. If he’s at all willing to accept that systems can be built from smaller systems, he can’t suddenly draw a line and say that this system can’t have any other element added to it to make a new system. And he can’t deny that systems can be built from smaller systems, because we know for a fact that brains are built from neurons, neurons are built from atoms, and atoms are built from sub-atomic particles, and it’s absurd to ask whether an electron speaks Chinese, even if that electron resides in Xi Jinping’s head.

It depends what attributes you ascribe to “human-like intelligence”. You’re certainly correct that our behaviors are largely driven by sensory feedbacks. Also, I’ve argued in another thread that some attributes of human behavior, like emotions, are intrinsically biological. However, I don’t regard those attributes as being fundamental to intelligence or to some of its emergent properties, among which I count consciousness.

The exact replication of human behavior is fairly pointless and probably futile, but the creation of artificial general intelligence is a different matter and will be profoundly transformational to our existence. It’s the difference between trying to build an artificial bird and building a jet airliner. A jet airliner fails many tests of bird-like capability, such as hopping around on the grass looking for worms, or shitting on my car, but no one considers those features to be essential to the fundamental utility of flight. In that regard, I fully concur with your last sentence.

I’m not sure that this logic follows.
Let’s take a step back.

Is it possible for someone to follow a rulebook and perform a complex action without really understanding that action? Yes, absolutely, humans do that all the time.

Is there any difference between a person with a book and a person with understanding? Yes, because for one thing, if a problem occurs outside of the domain of the book, the person with understanding might be able to apply their understanding. The person following a book point by point is sweet out of luck.
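
A toy way to see that distinction in code (my own illustration, in Python; nothing anyone in the thread actually proposed): a lookup-table “rulebook” and a function that actually computes give identical answers inside the book’s domain, but only the latter has anything to say outside it.

rulebook = {1: 2, 2: 4, 3: 6}      # a finite list of canned responses

def by_the_book(n):
    return rulebook[n]              # raises KeyError the moment n isn't in the book

def with_understanding(n):
    return 2 * n                    # the general rule, applicable to any n

print(by_the_book(3), with_understanding(3))   # 6 6  -- indistinguishable in-domain
print(with_understanding(1000))                # 2000 -- generalizes
print(by_the_book(1000))                       # KeyError -- outside the book's domain

Whether that difference deserves the word “understanding” is, of course, exactly what the next few posts argue about.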

And really that’s as simple as the point is and needs to be. The Chinese room gets a lot of heat, rightly and wrongly, because it seems like an argument that’s easy to attack. The first time I heard the Chinese room I felt like some trick had been played on me and even now, while I think the main counter-arguments don’t work, it’s still not an argument I ever use myself.

I think it’s better to point out the difference between syntactic operations and semantics, between machine and computer, and that it’s not necessarily the case that software which behaves like a human must have the same conscious states as a human.

Remember the cat

No, that’s just arbitrarily redefining “understanding” to mean “able to operate outside the bounds of the problem domain”. All I have to do to address that argument is expand the rule book to satisfy whatever expanded bounds you had in mind, which, in effect, is what generalized human intelligence does. It’s also what some AI systems do to some extent, like robotic spacecraft and avionics software.

Let’s use a chess program as a simple illustrative example. During the opening, it tries to literally follow a book of opening plays as much as it can, but during most of the game it faces many situations that were never specifically anticipated. My favorite current chess program does all kinds of look-ahead strategizing and even uses the idle time when I’m thinking about my own move to plan ahead even further. But I can cheat and go into the menu and arbitrarily rearrange the pieces, or take, say, the computer’s queen right off the board. When it’s the computer’s turn again it’s faced with a completely new situation; all its carefully laid plans are out the window. But it deals with it, and recalibrates its strategies according to the new situation, notwithstanding that I’ve cheated and broken the rules of chess.

There’s basically no limit to how much screwing around I can do, including setting up arbitrary positions to have it solve chess puzzles. So tell me this: does the computer “understand” how to play chess, or not? If no, what does it not understand? If yes, why can’t this strategy be expanded to any arbitrarily large domain of understanding?
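
For what it’s worth, that “screwing around” experiment is easy to reproduce in code. A minimal sketch, assuming the third-party python-chess package and a UCI engine binary such as Stockfish on the path (both are my assumptions for illustration, not details of the particular program described above): the engine simply evaluates whatever position it is handed, however that position came about.

import chess
import chess.engine

board = chess.Board()              # normal starting position
board.remove_piece_at(chess.D8)    # "cheat": lift Black's queen straight off the board

engine = chess.engine.SimpleEngine.popen_uci("stockfish")
result = engine.play(board, chess.engine.Limit(time=0.5))
print("Engine recalibrates and plays:", result.move)
engine.quit()

Nothing in its strategy depends on the game having arrived at that position legally, which is the point.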

The last part is definitely true – “not necessarily the case” – but not particularly relevant. But on the first part, semantics can and does arise from syntactic operations and this is equally true both for AI systems and for many if not all aspects of human cognition.

One of the lesser-known war-cries.

But one of the most inspiring! :smiley: