The Miselucidation of Whack-a-Mole

I get the point you are going on to make, but that framing is rarely useful. We define “intelligence” so poorly and messily, conflating it with self-awareness and human consciousness, that the discussion invariably becomes a mess.

There are and will be various sorts of intelligence, applicable to various sets of problems. Some intelligences may be better than ours at some sets and worse at others. Ours may be, relative to others, a mile wide and an inch deep; who knows. Self-awareness may not be required for intelligence in some of those problem sets, and may even be a limitation, even as it serves us well as social creatures.

There is no clear bar, and solving problems in the same manner we do, having self-awareness, and so on need not be part of the definition.

Wrong thread, meant to put it here

That’s not a yes or no question. There are clearly different opinions on the matter. At best, it’s a Yes, No, or Maybe question. Out of curiosity, I presented it to GPT4:

https://chat.openai.com/share/1dc3241c-3aaf-46c4-bc3d-65f4317e392d

In essence, the answer was “maybe”.

…that’s the same rambling bullshit answer that I got, and that wolfpup got. Looking through the literature, the answer is more definitive.

We have different ideas of what’s rambling BS then. What literature did you look through, and what was the more definitive answer that you got from it?

In conclusion, language and human cognition is a land of contrasts.

I don’t think it’s a settled question. While I lean strongly towards the “yes” side myself, I have to admit that there are a number of counterexamples that give me pause, like the KE Family and other cases of neurodivergence where either cognition or linguistic capability is significantly impaired but not both.

Gun to my head, I’d still say “yes”, but I think it’s not unfair to give the sort of answer the AI gave.

However, I think I’ve found ChatGPT sometimes equivocates on questions I find much less controversial, though I don’t have the time to test this myself at the moment.

The point of posting this example was to show how incredibly far AI has advanced in the past 60 years. In a very broad sense Eliza and GPT are doing what can be regarded as essentially the same thing, yet the differences due to scale and technological sophistication are so vast that they are qualitatively entirely different things, with GPT actually exceeding the capabilities of most humans across a wide range of knowledge and intelligence tests. Your whinging about the alleged “rambling bullshit” of this example response is not only wrong and irrelevant, it so completely misses the point that it’s nothing short of idiotic.

…it was rambling bullshit. I learnt a heck-of-a-lot more reading the top google search results than I did from what chatGPT wrote.

How much time have any of you spent learning various prompting techniques? The way in which you prompt an LLM can have drastic effects on the output.

Here’s a paper illustrating some prompting techniques:

For example, adding ‘think step-by-step’ can improve answers that require multi-stage reasoning. Prompt engineering is a thing, and it’s not simple. And it makes a big difference.
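To make that concrete, here’s a minimal sketch of the zero-shot ‘think step by step’ trick. It assumes the openai Python package (v1 or later) with an OPENAI_API_KEY set in the environment; the model name and the question are placeholders of my own, not anything from this thread:

```python
# Minimal sketch: same question asked plain vs. with a step-by-step nudge.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

question = "A vehicle accelerates at 1g. How long does it take to reach 500 mph?"

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Plain prompt: the model may jump straight to an answer.
plain = ask(question)

# Zero-shot chain-of-thought: appending the instruction nudges the model
# to lay out intermediate reasoning before the final answer.
stepwise = ask(question + " Let's think step by step.")

print(plain)
print(stepwise)
```

The only difference between the two calls is the appended instruction; on multi-step questions like that one, that’s often enough to get the intermediate working laid out rather than a bare (and sometimes wrong) number.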

The more powerful your tools, the more inscrutable their control panel. Everybody wants a big red [Easy] button that does whatever the half-formed, probably contradictory thought in their head might be.

That particular technological advance ain’t gonna happen.

Even using a standard search engine requires knowing the proper question to ask and an ability to properly interpret the results and separate useful results from garbage. Why shouldn’t this be any different?

Although that’s true, it’s not relevant to the present discussion. It can be interpreted as saying, “you’d get a better response with a better prompt”, which is not at all the issue here, since the response in question is factual and informative, at least in my view.

Well, since prompting is not a one-shot deal, there’s lots of opportunity to change and improve your prompts as you go along. In fact, helping one iterate from a half-formed thought to precise information is one of the things that the interactive GPT model is exceptionally good at.

Agreed, except that a standard search engine is much, much less friendly in that regard. If you’re vague about what you’re looking for, you could easily end up at a dead end. A standard search engine’s shallow, literal interpretation of individual words, with no contextual understanding, is precisely the kind of limitation that Bing Chat is trying to solve. It’s also the basis of the IBM Watson system being applied as, effectively, a research assistant in various fields, though aside from the impressive Jeopardy game show win its real-world performance so far has been less than stellar.

But the point isn’t how far it has come, how much better it is than what it was, or even whether it is intelligent, however that gets defined or redefined.

The question is whether or not it is useful to use it in threads the way the subject of the OP did.

Most of us seem to be solidly convinced that it is not ready yet to be used in that way. I’m gathering from some of your posts that, enthusiastic about it as you are compared with many of us, you don’t think it is either, but that in the future it will be. …

I’d like to quickly circle back to this:

There is one fairly major problem with the explanation: it says it “rearranged” s=d/t to t=s/a. You can’t just rearrange the first equation to get rid of distance (“d”) and introduce acceleration (“a”). TBH if it’s going to bring in “s=d/t” as an axiom without further explanation, it should just start with “a=s/t” as an axiom and rearrange that to “t=s/a” and skip the irrelevant “s=d/t” entirely. Either way, it’s hardly a useful explanation for somebody who needs help with this sort of question.
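To spell it out, using the response’s own symbols (s for speed, a for acceleration), the whole derivation it needed was just

a = \frac{s}{t} \quad\Rightarrow\quad t = \frac{s}{a}

with distance never entering into it.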

I think that’s an accurate summary of my position. I think even in its present state it can be a very useful tool for retrieving, summarizing, interpreting, formatting, and otherwise facilitating access to information, but its responses need to be vetted and in the general case posting them directly is bad practice for the various reasons already noted in several threads.

It may be that this didn’t reflect the full response, or it may be a current deficiency in Bing Chat. In any case I had no such issue with the putatively less capable ChatGPT 3.5. I posted part of the response earlier, omitting the equations because they’re a pain to encode, but here are the prompt and the more complete response from GPT 3.5:

A vehicle is capable of an acceleration of 1g. If it can theoretically maintain this acceleration indefinitely, how long would it take it to reach 500 mph?

The acceleration due to gravity on Earth is approximately 9.8 meters per second squared, which is equivalent to 1g. When a vehicle accelerates at 1g, it experiences an acceleration of 9.8 m/s².

To calculate the time it takes for the vehicle to reach a certain speed, you can use the kinematic equation:

v = u + at

where:

v is the final velocity,
u is the initial velocity,
a is the acceleration,
t is the time.

Assuming the initial velocity (u) is 0 (starting from rest), and the acceleration (a) is 9.8 m/s², and the final velocity (v) is 500 mph, we need to convert 500 mph to meters per second:

[I’ve omitted the trivial conversion calculations, but it came up with the correct answer: 500 mph is 223.52 m/s.]

Now we can use the kinematic equation to solve for t:

223.52 m/s = 0 + (9.8 m/s²) × t

Now solve for t:

t = (223.52 m/s) / (9.8 m/s²)

t ≈ 22.8 s

Therefore, it would take approximately 22.8 seconds for the vehicle to reach 500 mph with a constant acceleration of 1g.
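For what it’s worth, the arithmetic holds up. Here’s a quick sanity check I ran, a trivial Python sketch using g = 9.8 m/s² as in the response:

```python
# Quick check of the response's arithmetic: convert 500 mph to m/s,
# then t = v / a for constant acceleration from rest at 1g.
MPH_TO_MS = 0.44704   # 1 mph is defined as exactly 0.44704 m/s
g = 9.8               # m/s^2, the value used in the response

v = 500 * MPH_TO_MS   # target speed in m/s -> 223.52
t = v / g             # time to reach v from rest

print(f"{v:.2f} m/s, reached in about {t:.1f} s")  # 223.52 m/s, about 22.8 s
```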