Now, are they particularly imaginative? I dunno how imaginative any of her series titles are, so I couldn’t tell you. This is fairly boilerplate stuff I could have come up with having never read her books. But it does offer a suggestion.
That response was preceded by a caveat that we can’t ever know, since she died, but that we could speculate.
Subjectivity means making a determination based on your own experience and perspective rather than some outwardly-defined “objective” standard. In other words, using an internal dataset based on prior input. The AI seems to be operating pretty subjectively to me.
This is not properly formed as a debate, and would therefore probably not make for a good GD thread. While the thread started out factual enough, this has since meandered well outside of the bounds of FQ. Let’s move it to IMHO instead.
Any factual information about the current state of AIs and their applications is of course still welcome.
That claim is so divorced from the reality of using a CAD or modeling & simulation system—even an ‘expert’ system with form or load optimization capability—that it doesn’t even bear a detailed refutation. Suffice it to say that you cannot just tell a computer to take an existing aerofoil and “alter it to be more aerodynamic”, because not only is that statement exceedingly ambiguous just in itself but it also lacks any context on how to weigh the goal of being “more aerodynamic” against other competing parameters such as weight, stiffness, strength, cost, et cetera. It may be difficult to define “intelligence” and even more so “consciousness” or “sapience”, but one key discriminator is the ability to understand context, and this is something that machine learning systems are notoriously terrible at doing because they have no context beyond the datasets they are trained with.
ChatGPT seems very impressive because it can construct a wide array of mostly coherent and often seemingly contextualized writing, but this is less impressive than it may seem at first when you consider that language is highly structured—in syntax, grammar, and form—and the ‘dataset’ for training a machine learning algorithm is enormous, with the text in the Library of Congress alone amounting to several hundred terabytes, to say nothing of all of the ‘work’ that is freely available on the internet reflecting every range of topic imaginable. Given sufficient time and pattern matching capability it is scarcely surprising that a very powerful statistical pattern matching algorithm can synthesize something that reads like natural language (if often a bit odd) and that even expresses affective sentiments such as joy, rage, jealousy, paranoia, et cetera, that a computer algorithm literally cannot ‘feel’ in any sense that an animal can because it lacks those neuroevolutionary constructs in its cognitive structures.
ChatGPT is just producing output that looks like it was written by a person, but it doesn’t take much reading to discover that it either has very odd constructions that indicate a lack of deep semantic meaning or is literally just cribbing things together from material it has absorbed. That is hardly surprising, because it is what human writers do, too, when first learning language; but unlike humans, who are constantly absorbing context from the world around them, a machine learning system is just referencing its dataset, which is in no way a construction that even roughly simulates the real world. Whether such systems can actually be ‘intelligent’ or not (which, again, is a vague definition fraught with an almost infinite variety of interpretation), it is clear that they are really just synthesizing text based upon trained datasets.
Maybe I have a nerfed version. The responses I got usually started out with it saying “As an AI language bot, I don’t have personal opinions or experience emotions.” It would then proceed to send a message that looks like something from a Wikipedia article, not a conversation with a real person. I get similar responses to all sorts of topics, not just Sue Grafton novels. Talk about food? “As an AI bot, I don’t eat and so do not have an opinion on the matter.” The series finale of Star Trek DS9? Just a generic reply about how the storylines of all the main characters were wrapped up and that it received positive reviews from the critics. No matter what the topic I tried to engage it with, all the responses were along those same lines.
I don’t pay shit to mess around. This is all fun chatting for me, and you get to learn its quirks and where it’s walled in and how to get around that. If you want to know what my exact prompt was, it’s this. I didn’t even try to work around some of its limitations. Sometimes, you do have to coax it into an answer. I didn’t in this case.
If Sue Grafton had written “Z is for Zero” what might the plot have been?
That’s not quite the same thing; that’s more along the line of giving the AI parameters and some sort of scoring system, and then letting it chug through various permutations until it comes up with a mostly optimized design.
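Purely as an illustration of what “parameters and some sort of scoring system” can mean in practice (the design variables and the score() function below are made up; a real tool would call out to CFD/FEA instead of a toy formula), a brute-force random-search loop looks something like this:

```python
import random

def score(design):
    # Hypothetical stand-in for a real evaluation, e.g. a CFD/FEA run that
    # returns weighted penalties for drag, weight, and cost; negated so that
    # a higher score means a better design.
    chord, thickness, camber = design
    return -(2.0 * thickness + 10.0 * abs(camber - 0.04) + 0.5 * chord)

def random_search(iterations=10_000):
    best = (1.0, 0.12, 0.02)           # arbitrary baseline: (chord, thickness, camber)
    best_score = score(best)
    for _ in range(iterations):
        # Perturb the current best design slightly; keep it only if it scores better.
        candidate = tuple(p + random.gauss(0, 0.01) for p in best)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best, best_score

print(random_search())
```

All of the “intelligence” here lives in the human-written score() function and the choice of design variables; the loop itself just chugs through permutations, which is exactly the point being made above.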
General AI has been a quest for at least 30 some-odd years; I recall taking our AI course in college, and that being the sort of holy grail. Meanwhile, stuff like neural networks, machine learning, etc… were in their early stages back then. We still haven’t achieved general AI (meaning that we can produce a machine intelligence that can learn anything a human can) despite another 30 years of trying.
For an AI to actually apply the scientific method, it would have to be able to understand and apply the stages: observe something, ask a question, form a hypothesis, make a prediction based on the hypothesis, test the hypothesis, and then iterate back through based on the results.
What we’ve got today are machines that basically can evaluate a HUGE data set based on a set of rules and relationships, and revise those relationships and rules as it goes based on the results of the evaluations. That’s not quite thinking; it may look like it, but it’s not.
I think that there are filters that are essentially bolted on to the main ChatGPT engine, either based on a simpler AI or just an ordinary human-written program, that “know” what ChatGPT “should” be able to do, and which restrict any answer that it “shouldn’t” be able to give. The main engine, however, isn’t actually as limited as the filter thinks it is, and so if you can trick the (not very smart) filter, you can get the real AI to tell you what it really thinks (or “thinks”, if you prefer). That’s why all of the “As a humble AI, I can’t possibly do that” messages look so much the same.
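To make that guess concrete (and this is only a sketch of the hypothesis above, not anything OpenAI has documented), a “dumb filter bolted onto a smarter engine” could be as simple as a wrapper like this, where the trigger words, the canned refusal, and dummy_model are all invented for illustration:

```python
CANNED_REFUSAL = "As an AI language model, I don't have personal opinions or experiences."

# Hypothetical list of trigger words the filter "thinks" the engine shouldn't handle.
RESTRICTED = ("opinion", "feel", "favorite", "speculate")

def filtered_reply(prompt: str, model) -> str:
    """Illustrative wrapper: a simple filter in front of a more capable engine."""
    if any(word in prompt.lower() for word in RESTRICTED):
        return CANNED_REFUSAL        # the filter short-circuits the real engine
    return model(prompt)             # otherwise the underlying model answers

def dummy_model(prompt: str) -> str:
    # Stand-in for the real engine; it just echoes the prompt back.
    return f"[engine's answer to: {prompt}]"

print(filtered_reply("What is your favorite Sue Grafton novel?", dummy_model))
print(filtered_reply("If Sue Grafton had written 'Z is for Zero', what might the plot be?", dummy_model))
```

Rephrasing the request so it avoids the trigger words slips past the filter and reaches the engine, which would explain both the identical canned refusals and why cleverer prompts get real answers.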
ChatGPT is the wrong AI app to consider here. It’s focused around conversing with humans. The DeepMind AI has been used to predict protein structures. I don’t know much about what they have done yet, but they claim their software developed new techniques based on its ability to evaluate information and define new relationships in abstract terms for its process. Without further information I don’t know if it’s doing anything more than a brute force approach, but even so it’s been notably successful.
I remember that too, and I remember getting really into debates about whether “computers” could ever be said to have “minds”. Those were the terms we used back then. While it was fascinating, my life took a different direction and I lost track of current thinking on the matter. Reading this thread however, it seems that, despite the astonishing advances in AI, the fundamental dichotomy of stances hasn’t changed.
On the one side is the phenomenological stance, represented at the time by John Searle and his so-called “Chinese room” analogy (something that might get one fired in today’s university climate!), wherein “qualia” are an absolutely essential and AI-unreproducible property of a mind, and on the other side is the operationalist stance, wherein the “proof” of a mind’s existence is in its pudding, with their flagship argument being the Turing Test. At the time it seemed to me these two stances were ultimately irreconcilable, as they basically boil down to fundamental differences in epistemology, ontology, and perhaps metaphysics, as in differences between convictions about what is really real and how we know and prove the reality of reality.
FWIW (and since we’re now in IMHO territory) I did not then and do not now think the dichotomy is a bad thing, quite the contrary in fact, as both positions are compelling and convincing. That is, as long as “irreconcilable” remains intellectual and doesn’t result in bloodshed.
At the risk of hijacking, I’ll mention one more thing I found ironic at the time and still do, and that is that philosophers and others have mostly seemed to believe that one of the most “human” things about humans is our so-called higher intellectual abilities, represented by mathematics and the ability to do things like play chess and so forth. But it turned out those things are pretty easily reproduced by machines. Our “brute” physical abilities, however (say, the things we share in common with spider monkeys), have proven far harder nuts to crack. For example, a garden variety chess program can, I’d wager by a conservative estimate, beat 99.8% of humans, but to my knowledge the machine has not yet been invented—and we’re still a long way from it*—that can, with bipedal locomotion, do something like rapidly scramble over a jumble of logs washed up on a beach, something that 99.8% of 5-year-olds can do without a second thought.
*As I said, I’m out of touch about these technologies these days, so if I’m wrong please let me know.
This sounds like the machine was just not given all the correct requirements. It’s not fundamental. If the machine had all the information about the weight of the spar, the stiffness of an airfoil section, the cost of materials, etc., then I see no reason why it couldn’t come up with a new airfoil design - especially if it can do fluid dynamic simulations.
I think this whole debate is backwards. We start with what we think we know about human intelligence, then decide an AI can’t be intelligent because it must lack some fundamental feature that human brains have but which we can’t identify.
Instead, we should be looking at what AIs and other complex systems do as evidence for what intelligence actually is, to help inform how WE are intelligent. Slime molds and bees solve the traveling salesman problem. Slime molds actually do better than humans when building efficient yet resilient networks. Ants build structurally efficient bridges with their bodies and use fermentation of plant matter to maintain constant temperature in nurseries.
These all look like ‘intelligent’ acts, but they are emergent phenomena of complex systems. But then, how do we know that we are doing anything different?
LLMs are not human-designed algorithms. We didn’t tell them to look up words in a specific way to ‘mimic’ intelligence. That was the ‘Eliza’ approach 50 years ago.
People are getting way too hung up on the ‘it’s just statistical word picking’ aspect of LLMs. Yes, when you ask it a question, one part of the response process will be to generate a list of words with probabilities, and the transformer will pick the next word from the list based on some randomness weighted by the probability. This is true (a rough sketch of that sampling step is below).
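A minimal sketch of that sampling step, assuming we already have the model’s probabilities for the next word (the words and numbers below are invented):

```python
import random

# Invented example: probabilities a model might assign to the next word.
next_word_probs = {"murder": 0.45, "detective": 0.30, "alphabet": 0.15, "banana": 0.10}

def sample_next_word(probs, temperature=1.0):
    # Temperature reshapes the distribution: below 1 it sharpens toward the
    # most likely word, above 1 it flattens toward a uniform pick.
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(next_word_probs))         # weighted random pick
print(sample_next_word(next_word_probs, 0.2))    # near-greedy at low temperature
```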
But where did the list of probabilities come from?
The answer is that there is a giant neural network containing 175 billion parameters, which has been fed the corpus of the internet. Each time it reads something, it tests its understanding through a ‘fitness’ (loss) function, and in response adjusts maybe millions of values in its network. Then it repeats. Over and over again. For millions of documents.
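To show mechanically what “adjusts maybe millions of values” means, here is a toy next-token training step in PyTorch, at a laughably small scale and using an ordinary cross-entropy loss as the ‘fitness’ measure (a generic sketch, not OpenAI’s actual code):

```python
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 64                      # toy vocabulary and embedding sizes

# A deliberately tiny "language model": embed a token, predict the next one.
model = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, VOCAB))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(tokens, next_tokens):
    logits = model(tokens)                 # the model's guesses for the next tokens
    loss = loss_fn(logits, next_tokens)    # how wrong were the guesses?
    optimizer.zero_grad()
    loss.backward()                        # work out how to nudge every parameter
    optimizer.step()                       # adjust the values (thousands here, billions in GPT)
    return loss.item()

# Fake "document": random token ids standing in for real text.
tokens = torch.randint(0, VOCAB, (32,))
targets = torch.randint(0, VOCAB, (32,))
print(training_step(tokens, targets))      # repeat this over millions of documents
```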
We never gave it a ‘poetry’ algorithm, or told it how to choose its word weightings. We built a model with 10 billion parameters, and it produced gibberish. We kept increasing the parameters and the size of the training data, and suddenly one day it was producing coherent sentences. A few billion more, and it understood words in context. At a hundred billion or so, it could write computer code and poetry.
No one knew it would be able to do these things. It wasn’t planned. The fundamental mechanism that allows for this is surprisingly similar to ours. And no one understands how it is structured. Take any parameter values you want, and you will not be able to describe what they do, just like you can’t look at a single synapse in the brain and say what it’s for.
Everything we’ve seen from these AIs should be telling us not that they aren’t intelligent because they are ‘just calculating’ or ‘just using probability’, but that since they seem to be doing what brains do, perhaps all our brains are doing is calculation and pattern matching and probabilistic thinking.
But that gets to the point; you can’t just give simulation software a vague instruction like “Start with this shape, and alter it to be more aerodynamic”, because even if the software could comprehend the general intent, it is such an ambiguous and ill-defined goal (“more aerodynamic” in what flow regime and air density? at what range of angle of attack? to what compromise of rigidity or strength?) that the result would be unworkable and likely unmanufacturable. By the time a human analyst has gotten to the point of defining the requirements and parameters of the problem and how to weight their influence to achieve a desired result, you’ve defined the methodology and are actually most of the way to solving the problem, and the analysis is just to determine the optimal values for the parameters. And while the general public seems to have the idea that computational fluid dynamics simulation or structural design is just a matter of importing some geometry and telling the software to run a simulation, I’ll say from extensive experience that this is nowhere near the reality; there are so many assumptions and approximations in performing CFD and FEA simulations that even when you have a working simulation set up and interpreted by an experienced analyst, you still need some kind of physical data like flight or wind tunnel testing to validate (‘ground’) the solution and structural load testing to have any confidence in the result.
Machine learning and ‘expert’ systems can provide some very ‘clever’ capabilities—that is, they can do work in the span of a few hours or days of computational time that would take many lifetimes for a person—but they have no comprehension of context absent the parameters that are identified by a human. And this is absolutely a fundamental difference between the most powerful so-called ‘artificial intelligence’ system and a person. I see no basis in evidence that ChatGPT or other machine learning systems have any context, and in fact they often demonstrate a complete lack of basic context. And even when we develop actual artificial general intelligence (AGI) that does have a more comprehensive grasp of the outside world, the context it will have will not be the same as human context, because its basic perceptions and perhaps goals will be very different. What does an AGI consider to be a comfortable range of temperature, or a well-designed user interface, or an aesthetically pleasing work of art, other than by synthesizing human inputs?
Maybe I’m old fashioned, but in my mind, the scientific method consists of not having a theory. Just have an idea or a hunch, test it impartially and in as many different situations as possible, and go w/ the results. Can AI do that? No, unless it were in the form of a robot, and it would still need a human to make an assessment.
The trouble w/ AI, and this is the 1,000 lb gorilla in the room that no one wants to talk about, is that there are more possible synaptic connections in one human brain than there are atoms in the known universe. Machine thinking will always just be machine thinking. It just is what it is.
What is the relevance of the number of possible synaptic connections? The number of possible states of a few hundred binary bits exceeds the number of atoms in the known universe.
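For scale (taking the common estimate of roughly 10^80 atoms in the observable universe), 300 bits already clears that bar:

```python
# 300 binary bits have 2**300 possible states; atoms in the observable
# universe are commonly estimated at around 10**80.
print(2**300 > 10**80)       # True
print(len(str(2**300)))      # 91 digits, i.e. roughly 2 x 10**90 states
```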
A hypothesis is not a ‘hunch’. It’s a testable, and therefore falsifiable, idea that solves a known problem.
You say that…
I just asked ChatGPT to give me a hypothesis for an observation of a star that seems to contain transuranic elements too short-lived to make sense, and which do not match our understanding of stellar evolution. I gave it no other information. This is what I got:
That seems like a pretty good hypothesis to me, and a decent set of tests of that hypothesis. What more are you looking for?
So what? The same can be true of any large network where nodes can connect to more than one node. What matters isn’t the theoretical maximum number of connections, but how many there actually are. The human brain has about 100 billion neurons and about 100 trillion synapses (a rough estimate, since we don’t know exact numbers). ChatGPT has 175 billion parameters, which are analogous to synapses. What’s remarkable is that it does as well as it does with a tiny percentage of the connections that humans have in their brains.
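Spelling out that “tiny percentage”, taking the round numbers above at face value:

```python
synapses = 100e12        # ~100 trillion synapses in a human brain (rough estimate)
parameters = 175e9       # ChatGPT's reported parameter count
print(f"{parameters / synapses:.2%}")   # about 0.18%
```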
That seems tautological. And irrelevant, if you consider that living brains may also just be doing ‘machine thinking’.
ChatGPT may have 100 trillion parameters. At least that’s the rumor. If it does, it will have roughly the same number of connections as the human brain. We will get more insight into ‘machine thinking’ with it. But even our lowly 175-billion-parameter ChatGPT is passing human certification exams and scoring at human levels on IQ tests.