Are AIs capable of using the scientific method?

True, but take my AI as an example: it might say that some process P must exist for the algorithm to be complete, yet it will not know what that process might be. It only knows that it must be there. Could an AI determine what that process is? Sure, it is possible, but it is certainly not an easy task. My AI certainly couldn’t (not that I tried). :slight_smile:

[quote=“Schnitte, post:5, topic:980092”]
Until recently, things like understanding language and forming coherent yet creative sentences were then considered to be the ultimate characteristic of intelligence. ChatGPT has shown us that we as humanity shouldn’t be so sure about having a monopoly on that, so people will redefine intelligence yet again so they can cling to the notion that AI is not truly intelligent.[/quote]

This is just not true at all. ChatGPT does not understand language; nowhere in its model is a single definition stored. What ChatGPT does is predict the next word or words in a sentence based on the statistical relationships between the words in its very large set of training data. It neither thinks about nor has any way of knowing the definitions of the words it uses.
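To put that in concrete terms, here’s a toy sketch (my own illustration, nothing like GPT’s actual neural-network architecture) of what “predict the next word from statistics” means. It’s a crude bigram counter: it picks likely next words purely by frequency, without storing any definition of what the words mean.

```python
# Toy sketch of "predict the next word from statistics in the training data".
# This is a crude bigram counter, not anything like GPT's actual neural network,
# but it shows the idea: likely next words are chosen by frequency,
# with no definition of any word stored anywhere.

from collections import Counter, defaultdict
import random

corpus = "the green leaf fell from the tree and the green leaf turned brown".split()

# Count which word follows which in the "training data".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev` in the corpus."""
    counts = following[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(next_word("green"))  # prints "leaf" -- by frequency, not by meaning
```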

The amazing progress seen in AI these past 12 years has been in areas that concern human-like fuzziness: deep learning, reinforcement learning, and similar approaches can do something that vaguely resembles a brain’s pattern matching in the pursuit of evaluation or classification of objects, including language recognition, visual pattern recognition, etc., as well as the generation of text, speech, images, and video. I’m not sure there’s a way to make those kinds of systems have very rational thoughts about scientific experiments.

There are different branches of AI that deal with preprogrammed logic processing (PROLOG etc.) or formalised semantics. Those were traditionally used in theorem proving. And there hasn’t been much progress on those systems in the past 25 years or so.

At this point, I’m not even sure that the scientific method can be formalised in a self-consistent, mathematical way.

But we’ve all been wrong before about what AI systems can do. So I can’t answer.

I think our definitions of brute force differ. I’ve never written a chess program, but I’ve been deeply involved with other heuristic search programs, and not only do they limit the depth of the search, they order the search according to an evaluation function. The decision tree isn’t really smaller, but you explore less of it. That’s why heuristics don’t guarantee optimal results.
Chess programs have gotten smarter much faster than the hardware has gotten faster.
I’d call a brute force approach a depth-first or breadth-first search through the entire tree until a satisfactory result is found. Brute is usually a synonym for stupid in this case. Fast, maybe, but still stupid.
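To make the distinction concrete, here’s a toy sketch (my own, not taken from any real chess engine): a plain brute-force minimax that walks the entire tree, next to a depth-limited search that orders moves with a static evaluation function and prunes with alpha-beta, so it explores far less of the same tree but gives up any guarantee of an optimal answer.

```python
# Toy contrast between brute force and heuristic search on a random game tree.
# My own illustration; real chess engines are far more sophisticated.

import random

random.seed(0)

def make_tree(depth, branching=3):
    """Build a random game tree; leaves are payoffs for the maximizing player."""
    if depth == 0:
        return random.randint(-10, 10)
    return [make_tree(depth - 1, branching) for _ in range(branching)]

def brute_force(node, maximizing=True):
    """Full depth-first minimax over the entire tree: thorough, but 'stupid'."""
    if isinstance(node, int):
        return node
    values = [brute_force(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def evaluate(node):
    """Cheap static heuristic: guess a subtree's value as the average of what's below it."""
    if isinstance(node, int):
        return node
    return sum(evaluate(c) for c in node) / len(node)

def heuristic_search(node, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Depth-limited minimax with move ordering and alpha-beta pruning."""
    if isinstance(node, int) or depth == 0:
        return evaluate(node)
    # Order children by the static evaluation so the most promising moves are tried first.
    children = sorted(node, key=evaluate, reverse=maximizing)
    best = float("-inf") if maximizing else float("inf")
    for child in children:
        val = heuristic_search(child, depth - 1, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if beta <= alpha:  # prune: the rest of this subtree can't change the outcome
            break
    return best

tree = make_tree(depth=6)
print("brute force value:  ", brute_force(tree))
print("heuristic estimate: ", heuristic_search(tree, depth=3))
```

The tree isn’t any smaller in the second case; the search just looks at much less of it, which is exactly why the answer isn’t guaranteed to match the brute-force one.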

That’s what I’m trying to get at. Are there any AIs capable of coming up with novel solutions that have not been previously thought of? Humans came up with string theory as one hypothesis to try to unite relativity and quantum physics. I wonder if an AI is capable of using its imagination, for lack of a better term, to come up with such ideas.

I can’t find a reference just now, but there’s a toy example I’ve read from autonomous driving. 4-way stop signs where priority depends upon order of arrival are a confusing system that humans find difficult to deal with in cases of near-simultaneous arrival. An autonomous driving system spontaneously came up with the approach of backing up slightly to remove any ambiguity about ceding right of way to the other vehicle.

I think there’s a famous example of a highly unconventional strategy invented by AlphaGo vs Lee Sedol, I’ll try to find the details.

See my first sentence. :slight_smile:

It depends, so kind of but not really. My AI, for example, discovered a solution to a problem that was more efficient than the existing well-known solution. In fact, I didn’t believe it and spent some time proving it right (I was trying to prove it wrong because I thought it was a bug). However, it isn’t as easy as giving it a problem and saying “Hey figure this out”. There are a lot of details that make it more complex and limited. A lot of it comes down to the quality of the data, and whether a good model can be built to evaluate candidate solutions. This isn’t always easy (in fact, it is often not easy at all). So if we cannot model the interactions between relativity and quantum physics sufficiently, then it would be almost impossible for an AI to find a solution to reconcile them.

Do you think AIs might get to the point where that is possible? What if, rather than telling an AI “hey, figure this out”, the instruction was “what do you think about this?”

This is a good evaluation of what they do. It’s a little more complicated than that sounds, but that’s basically what’s happening. These AI programs are about as intelligent as the average person, and the average person doesn’t use a formal scientific methodology for anything. Why would they?

Someone could program the methodology into an AI, but if it can reach a conclusion, it will be based on prior knowledge. It could search and try variations on existing knowledge much faster than humans could, but in most cases that’s all humans do. The computer can just search and try things faster.

Nyah, I’m not working on it anymore. :rofl:

(100% joking of course)

It is really hard to say. That’s kind of scratching the nose of strong general AI. We simply do not know whether strong or general AI is possible. There isn’t even a really strong candidate for a path to creating such a thing. If you’re asking my opinion, then I do think so.

Yes, I was asking for your opinion. Thank you :grinning:.

There’s a very wide range of predicted timescales for the development of AGI, but I think it’s fair to say that virtually nobody in AI research thinks that it’s impossible for any theoretical reason. That would amount to a claim that the human brain is doing something other than computation.

I’d say that’s probably reasonably true. Most scientists don’t like to say something is impossible until it is proven so. They would be more inclined to say “It might be possible” or “It might be impossible” (depending on how likely they think it is). That’s actually how my PhD work got started. A top researcher in that area told me that my planned work might be impossible.

If it walks like a duck and quacks like a duck, then calling it a duck is not so far off. You can ask ChatGPT a question in natural language, and chances are that ChatGPT will come up with an answer that a reasonably intelligent human would also be able to come up with (but ChatGPT would be faster). In my book, that justifies giving it a label that we would also apply to the reasonably intelligent human being, despite the difference in the cognitive processes that are going on internally.

My challenge to those who are in the “AI is not truly intelligent” camp: Define, in tangible (i.e., objectively verifiable) terms, a cognitive achievement that you think a computer should be able to achieve in order to be called “intelligent”. Chances are that some time in the next few decades, a computer will be able to do exactly that. But of course, by that time people in that camp will have changed their definition yet again.

I don’t care about “true intelligence”

You are misrepresenting what ChatGPT is doing. A human being knows why the words “green” and “leaf” are connected; ChatGPT only knows that those two words are used together with great frequency. The human and ChatGPT are doing two fundamentally different things when they answer your question.

To make this thread somewhat meta, I issue the following challenge. An AI should make a profile for the SDMB and participate as a regular poster on a variety of topics, without it being obvious that it’s an AI that is posting.

Again, I concede that, cognitively, the two are doing very different things. My point is that the deliverables produced by ChatGPT versus a human can be very similar to the point of indistinguishability (which was precisely the point of the original idea behind the Turing test: Define “intelligence” by how it appears to an observer, not by what’s going on inside the machine or brain). I acknowledge that it’s possible to have different views on whether it is appropriate to define intelligence that way, but I don’t think it’s fair to accuse me of misrepresenting what ChatGPT is doing.

Can an AI produce something truly original? Can it design a new plane? A new car? Come up with an original hypothesis?

Can it show curiosity and then explore what is unknown to it to satisfy that curiosity? It would not only access its store of data but also combine and understand that data and formulate a whole new notion.

Current AI does no such thing.

Not yet. But will you admit that AI is “truly intelligent” if an AI does any of this within your lifetime? My prediction is that the first best-selling AI-generated novel is not that far away. We’ve seen first steps in that direction in the visual arts.

Can you do those things?