I agree. We can only hope that having created it gives the AI enough goodwill toward us to create a subroutine or two to solve these trivial problems for us. No guarantee that will happen, by any means.
We don’t know what we don’t know - that’s fine, but you appear to be using that as a lever to try to dislodge what we do know.
I knew that! <shifty eyes>
By which I mean: yes, we could be all wrong about physics, but why only raise this as a concern in the specific context of AI and the brain? If you think we could be fundamentally wrong about physics, why aren’t you shouting “How the hell can transistors/lasers/etc possibly work?” - because these things are equally dependent on our understanding of physics being kinda correct.
In other words, ASICs (Application-Specific Integrated Circuits, for those who don’t live in Silicon Valley where these acronyms are on billboards). I’m an old ASIC hand, though I mostly do processors today, with just a few ASICs, since a lot of the functionality has migrated onto the processor chip.
And you reconfigure by reloading the memory which controls the CLBs. That is the thing that takes the time. You’d need a very, very fast memory, which is going to be expensive. Lots of people are using GPUs for some problems, but I don’t think they’d be useful for AI.
Trust me, even when you have a system which is mostly ASICs, you still have circuit boards. You need the interconnect. Back 25 years ago or so there was a drive for wafer scale integration, where you put the interconnect on a wafer, not a board. Gene Amdahl was a leader in a company called Trilogy. I was peripherally involved with the AT&T effort. None of it worked. We do have multi-chip modules, and there is a ton of work on 3D packaging, where you put another chip, often cache, on a processor and connect it going up and down instead of sideways.
A lot of my PhD research was around reconfiguring computers (minis back then) for specific applications. Burroughs had the D-machine for this purpose. This didn’t catch on, since processor speeds ramped faster than application specific processor speeds, and processors did not lock products in.
I’d guess that we’d start small with networks of processors (we use a ranch of about 10,000 for simulation) then go to FPGAs and then go to ASICs when the design has been thoroughly verified.
It’s been this way for years already. Samuel’s checkers-playing program from the late 1950s wasn’t programmed to play checkers (except the basics) but learned how to play by playing lots of games with itself. I ran a project where a program learned how to diagnose printed circuit board failures by seeing what worked.
These things often start with some base knowledge and learn from there, much as our natural-language capability is inborn rather than learned. The program I mentioned did better than we expected because the experts pre-loaded it with their experience, which let non-experts use it to do almost as well as the experts.
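To make that concrete, here’s a toy sketch of the same idea (mine, not Samuel’s actual method): a program that starts knowing only the rules of tic-tac-toe and improves purely by playing itself, nudging each position’s estimated value toward the value of the position that actually followed it.

```python
# Toy self-play value learner for tic-tac-toe. No strategy is programmed
# in beyond the rules; the program improves by shifting each position's
# estimated value toward the value of its successor during play.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
values = {}                   # board string -> estimated win chance for 'X'
ALPHA, EPSILON = 0.2, 0.1     # learning rate, exploration rate

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def value(board):
    w = winner(board)
    if w:
        return 1.0 if w == 'X' else 0.0
    if ' ' not in board:
        return 0.5                        # draw
    return values.setdefault(board, 0.5)  # unseen positions start neutral

def play_one_game():
    board, player, history = ' ' * 9, 'X', []
    while winner(board) is None and ' ' in board:
        moves = [i for i, c in enumerate(board) if c == ' ']
        scored = [(value(board[:i] + player + board[i+1:]), i) for i in moves]
        if random.random() < EPSILON:
            move = random.choice(moves)    # explore now and then
        else:                              # otherwise trust current estimates
            best = max(scored)[0] if player == 'X' else min(scored)[0]
            move = random.choice([i for v, i in scored if v == best])
        board = board[:move] + player + board[move+1:]
        history.append(board)
        player = 'O' if player == 'X' else 'X'
    for prev, nxt in zip(history, history[1:]):   # back values up the game
        values[prev] = value(prev) + ALPHA * (value(nxt) - value(prev))

for _ in range(20000):
    play_one_game()
print("center-opening value:", round(values[' ' * 4 + 'X' + ' ' * 4], 2))
```

Nothing in the code knows what a fork or a block is; whatever preferences end up in the value table accumulate purely out of wins and losses.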
Indeed - what we haven’t yet built is a suitable environment for a whole artificial mind though, but I don’t see any fundamental reason why it can’t happen.
Not a human mind, but there have been some pretty convincing cellular automata and genetic algorithms that have closely mimicked insect behavior.
That would be one of the easier parts. I suspect that long before we have the AI program licked we’ll have this kind of environment already. In a few generations Google Glasses will recognize where you are, recognize people you are speaking to (great for someone like me with no visual memory to speak of), recognize what you are looking at, and tell you where to buy it more cheaply - and report it all back to the marketing databases. This last part is why people will pay for the development of this capability.
Given that, plugging in an AI once we have one will be simple.
That’s the thing - we won’t be plugging in an AI - an AI will simply emerge, once the conditions are right. That’s my opinion, anyway.
I’ve been reading a fair bit about complexity and how the brain works recently. Simple brains like those in worms really do appear to be nothing more than little state machines governed by rules. More complex insects also appear to behave in a rule-based way, but once the complexity gets to a certain level the behavior starts to have a little more nuance to it, or at least it looks that way to us because the rule system is fairly complex. But it doesn’t look like there’s any ‘thinking’ going on there - no more so than a rules-based cellular automaton programmed into a computer ‘thinks’.
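For anyone who hasn’t played with one, here’s how little machinery such a rule system needs. This is a minimal sketch of an elementary cellular automaton (Wolfram’s Rule 110, picked purely as an illustration): each cell’s next state is a pure table lookup on its three neighbors, with no ‘thinking’ anywhere, yet the output looks oddly organic.

```python
# A rule-based cellular automaton (Wolfram's Rule 110) on a ring of cells.
# Each cell's next state is a pure lookup on its three-cell neighborhood.
RULE = 110
TABLE = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells):
    n = len(cells)
    return [TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

cells = [0] * 79 + [1]          # start from a single live cell
for _ in range(30):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)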
As creatures become larger and more sophisticated, simple rules and states are no longer sufficient. There’s some kind of integration and judgment going on, but it’s not happening at a mechanistic level we can see. The complexity is arising out of the network of neurons. The ‘wetware’ of the brain looks very similar in most animals, but once you get enough cells you get the emergence of higher-order behavior and decision-making.
Finally, at some point animals become ‘conscious’ and truly intelligent in that they have volition, complex memories, and a higher form of reasoning. We don’t know where that point is - is a rat sentient? How about a dog? A monkey? When we do MRI scans of the brains of different creatures we can certainly see patterns of neuron firing, and we can even localize those patterns and correlate them with pain, happiness, vision, etc. But that’s just the wetware. The mind itself is software, represented by the neural network of interconnections that have evolved as a species and configured as an individual grows and learns.
I suspect that if AI is possible at all, it will just happen and won’t have much to do with us intending it. We’ll just build bigger and ever-more complex systems with increasing numbers of networked connections, and one day we’ll hit some threshold and the whole thing will suddenly start thinking for itself. But really, this is just a guess because no one knows how to make an AI, or if it’s even possible.
I’d have said that you’re the one who put the subject into play, but I’m perfectly happy to stipulate that the principles of known physics are sufficient to the day 99.999…. percent of the time.
You seem to be saying (in somewhat tautological fashion) that if we assume, as physics ostensibly does, that everything is calculable, then we can safely relegate the idea that any incalculables exist to the unscientific discipline of philosophy and move on. I’m suggesting that the one place we probably shouldn’t do that is when we approach the nexus between a living organism and a machine, or living and mechanical brains. I wish I could explain the potential interplay I see more effectively, because I’m not the anti-science Luddite you may suspect me of being. It’s hard to tackle such a huge, largely unexplored topic without seeming a little vague.
How does physics express the difference between life and non-life? Surely we can agree that such a categorical distinction exists? Or do we? How does physics describe the fundamental attributes and imperatives of an organism vs a machine? When the heart of physics is predictability, how are we to calculate the differences between my movie goers? We can identify all kinds of typical behaviors, but that doesn’t make them “predictable” in any scientific sense of the word. How many unanticipated results does it take to blow a theory out of the peer reviewed water? Sometimes one is enough. How do we address human behaviors (or thought patterns) which are simultaneously not random and not predictable? That’s a pivotal question, isn’t it?
The belief that physics can ultimately “solve” any riddle seems like something of a leap of faith (and I’m sure I wouldn’t be the first to characterize the ever-elusive Singularity as a scientific version of the Rapture). My expectation is that it will, in fact, be the salience of putatively philosophical issues, now so easily dismissed, which becomes clearer over time, especially when it comes to AI.
In the meantime, I can’t get excited about a task-based machine which can learn to play checkers, for instance. That kind of advancement in more recent forms may be a real, even stunning, achievement, but it’s kind of a computational no-brainer too. It seems to me that the sheer complexity of human intelligence and imperatives may be more than a mere stumbling block along the way to AI, and conceivably an inscrutable “thing” in and of itself. In any case, however, I would wager that rather than opening the door to superior AI “beings” of some sort, experiments in brain simulations are far more likely to lead to the enhancement of both human intelligence and organic (organism-istic?) functions. Instead of isolating intelligence, and trying to obviate the need for supportive biological paraphernalia like, say, oxygenated blood, we’ll be looking at whole mind-body systems with a view to delivering chemical expanders and inhibitors, and attaching mechanical devices from artificial limbs to computational chips, and developing wireless externalizers to move computer cursors with our minds (oh, wait, we’re already doing that) or tap into underutilized brain capacity. Brain transplants are not inconceivable.
I can easily imagine a point at which we might look a lot like integrated mechano-biological organisms. I don’t see a day when the mechanics become the task masters with entirely optional biogenic add-ons, and, in what seems like the sticky wicket here, I obviously don’t feel obliged to assume (or concede) that it’s possible.
Ha! Either way, that would make a great movie, wouldn’t it? Let me just run it through my visualizer…
What does sentience mean? My old border collie was highly intelligent. He could plan (at least over short time horizons). He could abstract. He could be devious. He had an excellent map of the neighborhood, and when he wanted to go to the yard of a school he would try to turn down streets he had never been on before but which led in the right direction. But he was not self-aware.
Clarke had a story where the telephone system became sentient. I don’t buy it. Our conscious mind is based on being able to read our own thought patterns and react to them.
My subconscious can solve very difficult problems. It can write programs. It can plot stories. But I have no visibility into how it does it, or even what it is doing. I see a dog’s mind as my subconscious without the conscious monitor.
Our subconscious can react to stimuli and perform actions. Lord Buckley had a bit about how you go out driving in the country and start thinking about that memory or that girl or that plan. Who’s driving? Your subconscious mind, that’s who. Just like a dog herding.
We need to build a subconscious computer first, and then figure out how to do the monitoring. Until then our computer will never become conscious.
You’re a border collie guy? Cool. I’ve got one lying at my feet right now. Our fourth BC over the years.
I disagree about the sentience of dogs, but I can’t prove it. And I could certainly be wrong, as I’m not sure how to tell the difference between real sentience and just really complex automated responses. Behavioral scientists have a couple of tests for self-awareness: one of them is to see if the animal has a sense of ‘self’ when it looks in a mirror. There was an experiment where they would put something on a monkey - a flower, or a mark of some sort - then have it walk past a mirror. The monkey would stop, look at the mirror, then touch itself on the spot where the mark was, indicating that it understood that it was looking at its own reflection and not another monkey. Very few animals pass this test. My dog doesn’t, but I don’t know if that’s because she’s not self-aware, or because she’s not quite smart enough to put the concepts together.
I don’t think sentience is necessarily all on or all off. There are more likely varying stages of it. Cats have more neurons than dogs do, but dogs seem to use theirs more for social cues and ‘fitting in’ with a pack. This at least makes them appear to be more ‘sentient’. Cats have more complex behaviors in terms of exploring, hunting, solving problems and that sort of thing, but as someone who has a cat and a dog, I’ve always found cats to be more instinctual overall. Make certain motions in front of a cat and it can’t seem to help responding to them. The dog may or may not, as she sees fit.
But I do believe that sentience is an ‘emergent’ property of a complex adaptive system. It’s not something that’s designed or stored in our genetic code. Our genes tell our brains how to form and what various structures to build, but that’s not where sentience is. Sentience happens when the neural network constructs itself as we learn. Newborn babies are probably not sentient, IMO. They form memories and years later may be able to remember things that happened early in life, but I don’t know that there was self-awareness at the time. It takes a while for that property to emerge. But emerge it will, if the underlying structures are present to enable it.
So just having a large network of nodes in a digital ‘brain’ is probably necessary, but not sufficient. But what happens when we build a computer with enough nodes that we can simulate a neural network at high enough resolution to represent the analog potential buildups in the brain, and give it the ability to reconfigure itself along some defined rule-based pathways as the brain does? And just for good measure, let’s add in the digital equivalents of all the different cell types in the brain and the membranes and barriers that coordinate and regulate ion flow and neural plasticity. I guess we’d also need the digital equivalent of an endocrine system, just to be safe. If we got the physical stuff right, then let that system adapt in an environment that rewards it for solving problems, what would happen?
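As a gesture at what the lowest layer of such a simulation might look like, here’s the textbook toy model of analog potential buildup, a leaky integrate-and-fire neuron. A sketch only, with made-up constants, and light-years short of the cell types, membranes, and endocrine machinery listed above:

```python
# A leaky integrate-and-fire neuron, the textbook cartoon of 'analog
# potential buildup': input current charges the membrane, the charge
# leaks away over time, and crossing a threshold fires a spike and
# resets the cell. All constants here are illustrative, not biological.
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Return spike times (ms) for a constant input current over 200 ms."""
    v, t, spikes = v_rest, 0.0, []
    for _ in range(int(200 / dt)):
        v += ((v_rest - v) + r_m * current) / tau * dt   # leak + drive
        t += dt
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(round(t, 1))
            v = v_reset            # membrane resets after firing
    return spikes

print(simulate_lif(1.0))   # sub-threshold drive: no spikes
print(simulate_lif(2.0))   # supra-threshold drive: regular firing
```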
I disagree on both counts. Thanks to the magic of effective field theories, we actually don’t need to know the precise underlying physics in order to provide a complete and consistent predictive framework for everyday-level effects, that is, effects on a scale larger than the Planck scale, as present theories stand. Basically, you can ignore the detailed microphysics, giving you a statistical description on the larger scale that is independent of the details further down. Think about thermodynamics: it gives a description, complete on all everyday scales, of the dynamics of heat and work and of the behaviour of gases and other aggregate phenomena, and it was found long before the atomic picture of matter allowed its derivation from microphenomena. So independently of the detailed microphysics, we can make predictions about larger scales using thermodynamics. Sean Carroll has made this point more forcefully.
Similarly, I do think that we are at least beginning to have a good grasp on the origin of creativity. The work of Schmidhuber (PDF link) I think is especially interesting in this regard. The basic idea is that you can formulate a theory of inference (Solomonoff induction) based on data compression: any regularity in a data set can be used to compress it, and this data compression algorithm can be used to produce a universal prior probability quantifying how the data set will most likely be continued; that is, you find a formal way of arriving at unique predictions for how a given situation is most likely going to develop. Schmidhuber basically postulates that finding novel ways of prediction, i.e. better compression algorithms, is thus a desirable goal for every agent, as it improves its capacity for prediction; this instigates a motivation to seek out novel and surprising patterns and stimuli.
This is essentially done using a trial-and-error process, coming up with random variations of known patterns and evaluating whether or not they simplify matters; this is really the only way to perform such a process, as the underlying quantity, the Kolmogorov- or program-length complexity, is not computable, i.e. there exists no algorithm telling you for a certain object how much it can maximally be compressed. Thus, one needs to exhaustively search the space of solutions to this problem; coming up with novel solutions then is what creativity is all about.
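To give a flavor of the computable stand-in people actually use: since the true complexity can’t be computed, you substitute an ordinary compressor and prefer whichever continuation makes the data compress best. A toy illustration of mine (not Schmidhuber’s code), in Python:

```python
# Toy 'compression = prediction' demo. The true Kolmogorov complexity is
# uncomputable, so substitute an off-the-shelf compressor (zlib) and
# prefer whichever continuation makes history + continuation compress
# best -- a crude, computable stand-in for the compression-based prior.
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def best_continuation(history: bytes, candidates):
    # Smaller compressed size => more regularity => 'more probable'.
    return min(candidates, key=lambda c: compressed_size(history + c))

history = b"abc" * 8
print(best_continuation(history, [b"abc", b"xyz", b"cba"]))  # -> b'abc'
```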
In fact, similar ideas can be used to create an agent that is theoretically (asymptotically) optimal at solving arbitrary novel problems, Marcus Hutter’s AIXI.
It’s strange that you should bring up ’t Hooft’s ideas, because these explicitly start with a model of computation—a cellular automaton—underlying all of physics. In his model, there is literally no way that anything could be noncomputable, because ultimately, the universe at the foundational level is a computer.
The argument here seems to be ‘something unknown may be doing we don’t know what’. And of course, that’s true. But it’s also a stance that puts the burden of proof on you: if one has always only observed white swans, then the one claiming that ‘there are only white swans’ is false has to produce a black swan in order to make their point; it is simply fallacious to say that one couldn’t assert that sentence just because there might be differently-colored swans (because trivially, then one could never assert any contingent or a posteriori proposition).
This is also a very often heard argument: a giraffe can’t understand differential equations, why should we be able to understand the universe?
But I also think this argument doesn’t hold, because there is one very significant distinction between us and giraffes: the fact that we can perform arbitrary symbolic manipulation. This means that we can calculate any effectively calculable function, which, given the validity of some form of the Church-Turing thesis, entails that we can compute anything that can be at all computed. The giraffe can’t do that, so we can’t generalize from its failings to ours; and indeed, if the world is computable, then at least in principle, we can understand the world (though of course, the computation may be too complex to be carried out by any one human in a reasonable amount of time; but thankfully, that’s a limit we know how to work around, by working together, and enhancing our capacities using computers).
This is another problem I have with the noncomputable: say there’s a question I have, and you arrive at the answer with some noncomputable means. How could you ever convince me of the validity of the answer? You couldn’t simply, as we do nowadays, write down a way to derive the answer from known facts, because that would entail that there is a computation capable of reproducing the answer. I would have to take your word on faith, and in the last consequence, this means that all answers one could come to this way would be just of the ‘your guess is as good as mine’-variety. Of course, this doesn’t mean that nature can’t be that way; but I sincerely hope things don’t turn out to be that boring and arbitrary.
Another point is that there are very simple conditions on the way the universe might be that directly imply that everything in it must be computable—generally, everything that forbids you to go to infinity and back, so to speak, such as for instance a finite density of information in space, i.e. a finite amount of information in any bounded volume, as is implied by the Bekenstein-Hawking entropy (or Bousso’s covariant bound if you want to get sophisticated about it), or the impossibility of accomplishing infinitely many tasks in a finite amount of time, etc.
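For concreteness, the bound I have in mind is Bekenstein’s: for a system of mass M confined to radius R (so E = Mc²), the information content is finite,

```latex
% Bekenstein bound: any bounded region holds only finitely many bits.
S \le \frac{2\pi k R E}{\hbar c}
\quad\Longrightarrow\quad
I \le \frac{2\pi R E}{\hbar c \ln 2}
  \approx 2.6 \times 10^{43}
  \left(\frac{M}{1\,\text{kg}}\right)
  \left(\frac{R}{1\,\text{m}}\right)\ \text{bits}.
```

So a brain-sized lump of matter can encode a staggering but strictly finite number of states, which is all computability needs.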
I don’t, and I don’t really think anybody has since the demise of vitalism. If you dial back the complexity of a living organism, there simply is no hard and fast border where what you have suddenly ends up being ‘non-living’. That one arrangement of atoms is a lifeform, while another is a puddle of goo, is simply a functional characteristic of the ordering relationships between the atoms, not an extra quality added to them, the way the property ‘being a house’ is just a characteristic of all the parts of the house standing in the appropriate relationships to one another (i.e. being assembled in such a way that what comes out is a house). For an early, but very intriguing investigation into the matter, I’d recommend Schrödinger’s What is Life? The Physical Aspect of the Living Cell (PDF link).
From the differences in their experience, surroundings, genes, etc. etc.
That they’re not predictable is far from established, but even if that’s the case, this doesn’t blow computationalism out of the water, because there exist undecidable questions about the evolution of computational systems. One is, for instance: will this computation ever reach that specific state? (This is related to the key result in that area, the undecidability of the halting problem.) Or, in cellular-automaton terms: will this particular pattern ever be produced?
But the existence of such questions does not spoil the computability of the system, as given its state and evolution rule, every future state can be exactly calculated.
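If you want the flavor of why such questions are nonetheless undecidable, the classic diagonal argument fits in a few lines. A sketch in Python (the ‘oracle’ is, of course, precisely the thing that cannot exist):

```python
# The classic diagonal argument behind the halting problem, written as
# Python for flavor. IF a perfect halts() decider existed, g() below
# would halt exactly when halts() says it doesn't -- a contradiction,
# so no such decider can exist.
def halts(func) -> bool:
    """Hypothetical oracle: True iff func() eventually halts. Cannot exist."""
    raise NotImplementedError("no algorithm decides this for every func")

def g():
    if halts(g):      # if the oracle says g halts...
        while True:   # ...then loop forever, contradicting it;
            pass
    # ...and if it says g loops forever, halt immediately: contradiction again.
```

Whichever answer halts(g) gave, g would do the opposite; and yet every individual step of g is perfectly mechanical and computable.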
As I subsequently noted, “I’m perfectly happy to stipulate that the principles of known physics are sufficient to the day 99.999…. percent of the time,” and I don’t think that’s at variance with my other observations.
Well, yes, if you start out presuming that the universe is a computer, then everything is necessarily computable. Such tautological thinking seems emblematic of the AI brain discussion. Running into Alice & Bob, “’t Hooft speculated that some new law of physics might harmonize particles’ properties with humans’ measurement choices.” [Emphasis mine] Then again, as I mused, it might not.
I’m certainly not arguing that we shouldn’t try to understand the universe. “Arbitrary symbolic manipulation” is not the only significant difference, of course; there are also significant differences, if of a seemingly lesser order, at every rung of the living ladder. A possum’s brain is more limited than a dog’s, which is more limited than an elephant’s, than a chimp’s, than a man’s. When every other brain on the planet is limited in some way, how is it more reasonable than not to assert that human brain function cannot, itself, be limited in any substantive way? From whence cometh the apparent confidence that we are the singular endpoint of evolution? Now you can relegate potential human limitations to the realm of irrelevant unknowable guessing games, but when we accept the proposition that if we feed an unknown but finite amount of data into a machine of sufficient but as yet hypothetical complexity, we can predict the future ad infinitum, then aren’t we categorically asserting that all unknowns are, in fact, knowable? I suppose you’d need an infinite amount of time for that, because you’d have to figure out how to feed historical data into the machine before it has already happened, or you’d never catch up, would you? But never mind.
At a more practical level, animal/human brain comparisons are not irrelevant to the matter of AI and simulation. Knowing that the expressible differences between gorilla and human brains are minuscule, I suspect that the genetics lab would give computer science a run for its money in the super brain competition, sans bio-ethical constraints. They seem far more likely to genetically engineer a brain that is cognitively superior to the ones we are working with now than those trying to do so with mechanically engineered brain simulations. Shoot, a genetically altered brain might even be so significantly distinct from our own that it could comprehend the universe without the massive data feed that even predicting the funny quotient of a joke would require.
Creativity theory is still all over the map, and there’s a long road to travel between observing/describing the creative process and identifying its origins in any useful way. I can certainly see the appeal of an AI enthusiast’s algorithms to other AI enthusiasts, though. I think Schmidhuber may be well on his way to developing an artificial mathematician, if learning and curiosity are task oriented (better modeling), where the goal and the intrinsic reward are essentially the same (newly reductive simplicity). Once again, there’s a certain convenient circularity at work here, because innovative mathematics can obviously be expressed and tested mathematically. Schmidhuber claims that his theory is comprehensive, but when it comes to art, music etc. I don’t think he even comes close, for a host of reasons I won’t try to enumerate here, but would, if need be, time permitting. In the world of Schmidhuber’s artificially intelligent scientist, the ultimate point of creativity is to reduce the universe to one elegantly irreducible equation. That’s not getting a grasp on creativity, that’s redefining it – which happens a lot in the context of this discussion.
That looks like serious avoidance to me. Physics/mathematics cannot express the difference between life and non-life, ergo no border between them exists. Calling life an additive “extra quality” does make it sound less plausible than calling it an attribute which one ought to be able to define. What suddenly happened to the “for all practical purposes” standard, where there is an obvious difference between a live person and a sculpture, and between a live person and a dead one, between animal evolution and geological processes, etc.? For all practical purposes, we’ve got a better handle on that than we do on creativity. Yes, we are all made of star dust and presumably conform to the same laws of physics that everything else does, but when we can distinguish between the properties of photons and quarks it seems like a pretty spectacular failure to falter at describing the difference between a man and a spark plug. Surely there’s a scientific explanation for why rocks aren’t creative, when physics can supposedly tell us why a feminist is sitting in a theatre composing her blog – not to mention everything else she could possibly be doing instead, but for some reason, which we’ll eventually pinpoint, isn’t. Ironically, the nature of life is one thing I could easily imagine tackling successfully. I’ll take a look at the Schrödinger, though, because I can pretty safely predict that he’ll be interesting on this subject.
Oddly enough, that’s kind of how I feel about AI and mathematical determinism.
Well, Skynet doesn’t kill us, it turns us into living batteries.
It took billions of years of evolution for us to be able to enjoy anything.
That is not dead which can eternal lie.
And with strange aeons even death may die.
Anything can happen.
Let me know when computers can mimic daggits.
Our brains have about 100 billion neurons and something like 125 trillion synapses. Our bodies have tens of trillions of cells.
It took us 3.6 billion years of intense evolutionary pressure to develop intelligence as we know it. I don’t think we will ever get there absent a lot of smart people having a lot of time on their hands (which won’t happen until after we already get there). What incentive is there to invest the resources to create this sort of technology, and how would we ever get there incrementally?
I googled that and am having a fine time on Wikiquote’s Lovecraft page!
If you’re saying that you expect new physics to play a role in human cognition, then it is; it’s like expecting quantum physics to play a role in the calculation of planetary orbits. The relevant scale on which cognitive processes (or their associated biological correlates) take place is about equally far removed from the scale on which we could reasonably expect new physics to become important.
What he’s talking about is the so-called ‘superdeterminism’ loophole regarding Bell experiments: not something that aligns particle spin with experimenters’ mental states but quite the other way around, a pre-fixed choice determining both the experimenter’s decision and the particle’s spin.
But it’s the big one, as it allows universal computation rather than just special-purpose. It’s a qualitative, not merely quantitative difference. And I’m certainly not saying that mankind is the ultimate end of evolution, such a thing basically being a contradiction in terms, but with regards to taking an input and converting it via a sequence of logically connected steps to an output, then yes, there’s nothing that can in principle do more (given the Church-Turing thesis, of course).
No, the end use of creativity is to become a more accurate universal predictor. As for art and music, well, we’re once again in a black swan situation: you don’t want to believe that it is reducible in such a way, and so you postulate that something else comes in somewhere, like you postulate that something else beyond the computable must occur in the universe; but historically, all black swans proposed in this regard have turned out to be just a bit muddy, and the onus is really on those proposing they exist, prior to which assuming they don’t is the rational position. So, what do you think are the features of creativity, and which do you believe isn’t reproduced by his theory?
I don’t think that’s a fair characterization of my position (besides, there exist physical characterizations of life, based on entropy balance—Schrödinger discusses this, too). The thing is just that to assume that there exists a principled difference between living and non-living matter without being able to point to any such difference, with both being composed of the same fundamental building blocks, simply isn’t reasonable. If I were to build a robot that walks like a human, eats like a human, procreates like a human, and does all other things a human can do, then would that creature not be alive? Or if it would, what should prevent me from building such a robot? Certainly, nobody has done so thus far; but claiming that this is evidence against the possibility is like the old creationist chestnut that microevolution is not evidence for macroevolution.
Well, to each their own, though I must admit that I would find it hard to get very enthusiastic about a world in which things just happen without reason, where ultimately you’ll just have to shrug your shoulders and accept that that’s just the way things are - if you don’t like it, get your own universe. I might be a romantic in that regard, but I think that the world is ultimately accessible to reason, and furthermore, I don’t think that that is limitative, as so many of the ‘science can’t explain that!’ bent seem to hold; in fact, I think that quite to the contrary, the ability to search for ever deeper explanations enriches the world far more than some irreducible mystery would. And as I pointed out, there will always be room for novelty, creativity, and surprise: even in a computable world, some questions cannot be computationally decided, but only through exploration of the possibilities; I think that a far larger part of the interesting phenomena in the world happen just along this line (the term ‘edge of chaos’ used to be fashionable) than is generally acknowledged. But that’s a different discussion.
With regard to ’t Hooft, you’re the one who posited the universe as a computer. I was talking about uncertainties.
Perhaps I should have made it clear that I was referencing the artificial scientist which Schmidhuber has talked about elsewhere. In any case, he is the one doing the postulating. Nay, he starts out simply asserting that creativity is generated by “an intrinsic desire to build a better model of the world and of what can be done in it,” and ends up assuming that prescription is universally true. It is telling that you refer to the end use of creativity, because Schmidhuber is describing a single task oriented use of creativity, himself, and asserting that all creativity is intrinsically directed toward that same end. All babies are creative as Schmidhuber defines creativity (as is basically all of science really). What he’s primarily describing, however, is the process of learning, a term he uses almost interchangeably with creativity, but about which he is actually far more specific, and from which he appears to derive his algorithm. Novelty seeking is the ostensibly distinguishing feature of creativity, vis a vis general learning I suppose, but I don’t see him describing any appreciably dramatic differences between the intrinsic incentives and rewards involved. How many among us aren’t bored by monotony? If we don’t assume that creativity is tied exclusively to better predictive modeling, a premise I don’t see Schmidhuber defending with any real intellectual rigor, things suddenly start getting a lot more interesting (call it the non-compliant creativity factor) – and a lot less susceptible to a single irreducible algorithm.
You and Schmidhuber are the ones who are only seeing black swans here. Whenever I suggest that maybe we should look around for some white swans, I hear you saying there’s no practical reason to bother. I’d have thought it might actually be helpful in designing ways to test Schmidhuber’s black swan theory. I’ve said I can see Schmidhuber fashioning an artificial mathematician, but when I tell you that my own creative experience does not conform to Schmidhuber’s model, I hear you saying that the idea that I am not a black swan is just an illusion designed to make me feel better – as if Schmidhuber is not extrapolating from perceived human experience, himself! If I tell you that neither the intrinsic reward nor my artistic output has anything to do with predictive modeling, I hear you saying that the burden is on me to prove that it’s not – even though it’s Schmidhuber’s theory, and he hasn’t even begun to meet his burden of proof for substantiating the claim that his algorithm encompasses creativity wherever it’s found. His formula may prove useful in constructing an artificially intelligent entity tasked with better modeling, but I think he’ll be running into a lot of white swans outside the lab.
Somewhere in the Time Magazine archives, there’s a cover which featured an infant and announced that science had confirmed that babies were born with a complete set of human faculties. I have to admit that some things do seem self-evident to me.
For all practical purposes, however, physics cannot predict the behavior of my movie goers, yet you apparently don’t regard the notion that it could as fanciful. I am not sure why the one thing you explicitly say physics won’t do (distinguish between animate and inanimate) doesn’t give you serious pause, when that’s a fundamental classification upon which so much of our science rests. Life is definable as a distinct, physical attribute in almost every other discipline (not to mention seeming obvious to laymen), save physics and math? Ironically, we could even say that very attribute is actually what generates extraordinary measures of Schmidhuberian creativity, as we devise ways to reach out into the universe searching for signs of…. life. We seek out novel, better predictive models and tools to facilitate that search. We explore the possibility that other forms of life could develop in a sea of methane. How is it that physics will putatively explain every facet of our mental processes and behavior, but it can’t describe what we were looking for in potential Martian water or recognize live beings on other planets? What’s surprising is that you’re willing to believe that physics will eventually be able to explain the first, but not the second. Given the foundational importance of the animate/inanimate concept in so many scientific and human endeavors, would it hit your comfort zone if I were to assert that it’s not unreasonable to postulate a principled difference? I’m tempted to suggest that it might be one of those “questions [which] cannot be computationally decided,” but I expect that’s a bridge too far.
If you can replicate a live human being then, yeah, it would be a live human being, but whether I say yea or nay, that’s a question which I can entertain, but physics apparently cannot. I can envision an animate/inanimate hybrid form of life, but physics would say, meh, nothing new. Where’s the intrinsically rewarding novelty in that?
What, no multi-verse? I haven’t even come close to arguing that everything happens for no reason, or that the end game is an irreducible mystery. I’m not sure why you keep conjuring up denialists of one shape or another, as if I’m representing them here, but perhaps I seem to be doing the same to you, in reverse. In actual fact, I’m fascinated by the astrophysical workings of the universe, even where I don’t understand the math. I think it’s exciting when we bump up against something we don’t understand, and 99.99999… percent of the time I’d just say science can’t explain that yet, while awaiting news, sometimes breathlessly [see: Neutrino] of further developments. One of the apparently novel possibilities I’m willing to explore, however, is the idea that human cognition may have limits which affect our comprehension, and another that animate/inanimate issues are relevant to human brain simulations – neither of which seems the least bit unreasonable to me. Many a scientific advance has had much more farfetched antecedents.