Are AIs capable of using the scientific method?

I’m not sure if this should go in FQ or Great Debates. Let’s say an AI is tasked with forming a testable hypothesis. Take some natural phenomenon that is not explained, or not adequately explained, by the currently accepted theories. Something like the questions raised in the current thread about Sagittarius A*. Another possibility would be something more ambitious, like asking an AI to come up with a theory uniting general relativity with quantum physics. The AI is programmed to use whatever information it has available and formulates a hypothesis. Yet another scenario would be to cheat by withholding information and trying to get the AI to come up with a hypothesis for a question that already has a generally accepted theory, like evolution by means of natural selection.

Have we tried anything like that, and if so, what sorts of hypotheses have AIs generated?

I do not think AIs today are real AIs.

They are clever databases capable of parsing human language and coming up with an answer based on a huge dataset they can draw from.

But, they are not formulating their own questions in the pursuit of knowledge.

Is designing antennas close enough for you?

There are a lot of different versions of “the Scientific Method”, and what actual scientists do doesn’t usually match up exactly to the “official” version that’s laid out in elementary school science classes. But one thing that every notion of the Scientific Method has in common is that it’s based on observation of the world, and none of the current crop of AIs has the hardware to be able to do that.

That’s because people continuously move the goalposts of what it takes to be a “real” AI. A long time ago, doing complex mathematical calculations was considered a hallmark of human intelligence. Obviously, we automated that long ago by building machines which can do it much better and faster than a human ever could. So people began to consider things like strategic thinking, as exemplified by playing chess, as the definition of intelligence. When computers began to outperform us in that, the definition moved to yet other things. Until recently, things like understanding language and forming coherent yet creative sentences were considered to be the ultimate characteristic of intelligence. ChatGPT has shown us that we as humanity shouldn’t be so sure about having a monopoly on that, so people will redefine intelligence yet again so they can cling to the notion that AI is not truly intelligent. Of course, as things currently stand, all AI is “narrow” in the sense of not being able to do the entire range of things a human brain is capable of. But it is getting wider (in the sense of closing the gaps) by the day, and in many fields it is already outperforming us.

Edsger Dijkstra famously said that the question whether a computer can think is as meaningless as the question whether a submarine can swim. It’s ultimately a matter of semantics whether we want to stick the label “swimming” or “thinking” onto what the respective machines do; but in any case, there is no doubt that what these machines are doing is not that far away from any reasonable definition of “swimming” or “thinking”, and that machines have got very, very good at it.

I’d say that inductive reasoning is just as much a scientific method as deductive reasoning is, and that all of AI (or at least the machine learning part of it) is, essentially, induction in the sense of generalising from known past observations to unknown new ones.
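To make that concrete, here is a minimal sketch of induction in the machine-learning sense: fit a rule to past observations, then apply it to inputs the system has never seen. The data and the straight-line model here are invented purely for illustration.

```python
# Minimal sketch of induction in the machine-learning sense:
# generalise from past observations to predictions about new ones.
# The data and model here are invented for illustration only.
import numpy as np

# "Past observations": inputs x and measured outcomes y (with noise).
rng = np.random.default_rng(0)
x_past = rng.uniform(0, 10, size=50)
y_past = 3.0 * x_past + 2.0 + rng.normal(scale=0.5, size=50)

# Induce a general rule (here, a straight line) from those observations.
A = np.column_stack([x_past, np.ones_like(x_past)])
slope, intercept = np.linalg.lstsq(A, y_past, rcond=None)[0]

# Apply the induced rule to inputs never seen before.
x_new = np.array([11.0, 12.5])
y_predicted = slope * x_new + intercept
print(f"learned rule: y ~ {slope:.2f}*x + {intercept:.2f}")
print("predictions for unseen inputs:", y_predicted)
```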

I would suggest the difference is the AI is only doing math humans have already thought of and not discovering new math.

Nor is the AI designing a game like chess or Go but just playing the game through brute force (and a little cleverness).

Very recently, a Go player beat some of the best Go AIs there are because they found a blind spot in the algorithm running the AI - something that would not fool a reasonably decent human player.

I’m not sure what you mean by this. Why isn’t data from the real world available to AIs?

Because it wasn’t included in the data sets that the AIs were trained on. It could be made available, very easily. But it hasn’t been.

I don’t think this is true. Current go and chess playing algorithms are not simply trying a gazillion possible moves through brute force to identify the best one. They are employing an adaptive method to evaluate the favourability of a move that’s under consideration. I can’t speak for chess, but I am somewhat involved in the go community, and AI has had a huge impact there. Machines have begun to develop a distinctive playing style that’s noticeably different from what humans would have played until recently (e.g. the already famous “Move 37” in Lee Sedol’s second match against AlphaGo in 2016), and books about “playing go the AI way” are being published. Granted, computers haven’t designed the game of go, but neither has any of the humans who have played it for the last few millennia.
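For what it’s worth, here is a very rough toy sketch of that idea (this is not AlphaGo’s actual architecture; the position representation, value function, and move generator are all made up): instead of brute-forcing every continuation, the search asks a learned evaluation which candidate moves look promising and only expands those.

```python
# Toy sketch (not AlphaGo itself) of search guided by a learned evaluation:
# rather than expanding every legal move, rank moves with a value estimate
# (a stand-in here for a trained neural network) and only search the best few.
from typing import List, Tuple

def value_estimate(position: Tuple[int, ...]) -> float:
    """Stand-in for a learned evaluation network: score a position for the
    player to move. Here it's just a made-up heuristic on a toy 'position'."""
    return sum(position) / len(position)

def legal_moves(position: Tuple[int, ...]) -> List[int]:
    """Toy move generator: a 'move' is simply a digit 0-2 appended to the position."""
    return [0, 1, 2]

def apply_move(position: Tuple[int, ...], move: int) -> Tuple[int, ...]:
    return position + (move,)

def guided_search(position: Tuple[int, ...], depth: int, beam: int = 2) -> float:
    """Search only the `beam` most promising moves at each level,
    as judged by the (learned) value estimate."""
    if depth == 0:
        return value_estimate(position)
    scored = [(value_estimate(apply_move(position, m)), m) for m in legal_moves(position)]
    scored.sort(reverse=True)
    best = float("-inf")
    for _, move in scored[:beam]:          # prune: unpromising moves are never explored
        best = max(best, guided_search(apply_move(position, move), depth - 1, beam))
    return best

print(guided_search(position=(1,), depth=3))
```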

And yet, a high-ranked (but not top-ranked) amateur Go player beat an AI considered on par with AlphaGo in 14 of 15 games.

(see link above)

I think the difference I would point to is a lack of creativity from AI and a lack of…curiosity. Do AIs sit around and ponder the meaning of existence and the universe and come up with novel questions that they seek an answer to?

I would say they do not (yet at least).

Yes, true, and the AI will learn from it. Which is precisely the point: The AI is not infallible, it just keeps getting better and better by learning from past experience. Which is exactly what humans are doing. Only that computers are doing it much, much faster.

I think it is precisely the point. A decent human player would have caught on to this strategy very quickly. The AI could not and did not. The AI is not truly creative and is slow to adapt.

Certainly a human will step in and tweak the algorithm, but it was not something the AI was able to do on its own. Maybe after thousands of games it would adapt, but a human would get it in one or two games. That’s an important difference, I think.

I still don’t understand what distinction you were drawing in claiming that current AI has some kind of hardware constraint and does not observe the world. AI that can (for example) caption photographs is obviously trained on photographs of the real world, and captions photos of the real world. AI facial recognition software analyzes vast amounts of data from cameras quite literally observing the world in real time.

Current AI may or may not currently be doing anything that could strictly be described as autonomous “science”, but I don’t see that limitations on access to data about the real world have much to do with it.

What do you mean by “truly” creative? Are you claiming that human creativity is something other than computation?

So, no human player ever lost a game after a new strategy was introduced? The equivalent, for an AI, of what a human player would do - go home, look at the game, and come up with a counter - would be to add the new strategy to the training set.
That you speak of brute force and algorithms makes me think you don’t understand how these things work. Even 40 years ago chess programs didn’t use brute force. The complexity of the game would overwhelm such a strategy. I understand go is even tougher.
And there is no single algorithm, for chess at least. When I studied chess-playing programs, there was a set of heuristics exploring paths through the search space of the game (roughly along the lines of the sketch below), combined with libraries of openings and endgames.
Good go software is pretty new. The first chess programs couldn’t beat grandmasters, now they can. Give it time.
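For anyone curious, here is a rough sketch of the kind of heuristic search those classic programs used: minimax with alpha-beta pruning, cut off at a fixed depth and scored by a hand-written evaluation function, so large parts of the tree are never examined at all. The “game” in this sketch is a trivial stand-in, not chess.

```python
# Rough sketch of the heuristic search used by classic chess programs:
# minimax with alpha-beta pruning, a fixed depth cut-off, and a hand-written
# evaluation of the frontier positions. The game here is a toy stand-in.
from typing import List

def evaluate(state: List[int]) -> float:
    """Heuristic score of a position for the maximising player.
    In a real chess engine this encodes material, mobility, king safety, etc."""
    return float(sum(state))

def moves(state: List[int]) -> List[List[int]]:
    """Toy move generator: each 'move' appends +1 or -1 to the state."""
    return [state + [1], state + [-1]]

def alphabeta(state, depth, alpha, beta, maximising):
    if depth == 0:
        return evaluate(state)           # heuristic cut-off instead of searching to the end
    if maximising:
        best = float("-inf")
        for child in moves(state):
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:            # prune: this branch cannot affect the result
                break
        return best
    else:
        best = float("inf")
        for child in moves(state):
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

print(alphabeta([0], depth=4, alpha=float("-inf"), beta=float("inf"), maximising=True))
```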

As with so many questions dealing with AI, the answer is: it depends, kind of, but not really. Certainly, it is possible to create an AI that can search for different possible explanations for some input data and test those candidate solutions in a model (or, in some sense, tune a model). This was the basis of my PhD research (using AI to find the algorithm that explains a set of input data). However, there are limits. To use your example of uniting general relativity and quantum physics, an AI might conclude that there is some mechanism uniting the two, but it cannot explain it, because we do not understand it sufficiently. An AI can only use as its building blocks those things that already exist within human knowledge (this is the biggest limitation of my PhD algorithm: it can only build algorithms using tasks that are already known).
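Just to illustrate the general idea (this is not the previous poster’s actual system, only a made-up toy along the same lines): search over compositions of a fixed library of known operations for one that reproduces the observed data. The stated limitation shows up directly - if the true mechanism can’t be expressed with those building blocks, nothing in the search space will ever fit.

```python
# Illustrative toy only (not the actual PhD system): search for a program,
# built from a fixed library of known operations, that explains a set of
# observed input/output pairs.
from itertools import product

# The "already known" building blocks the search is restricted to.
PRIMITIVES = {
    "add1":   lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

# Observed data the candidate algorithm must explain
# (hypothetically generated by double(add1(x))).
observations = [(1, 4), (2, 6), (3, 8)]

def explains(pipeline, data):
    """Does applying these primitives in order reproduce every observation?"""
    for x, y in data:
        for name in pipeline:
            x = PRIMITIVES[name](x)
        if x != y:
            return False
    return True

# Enumerate short compositions of known primitives and keep the ones that fit.
for length in range(1, 4):
    for pipeline in product(PRIMITIVES, repeat=length):
        if explains(pipeline, observations):
            print("candidate explanation:", " -> ".join(pipeline))
```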

Sure they do.

Certainly they do not ONLY rely on brute force because the decision tree is just far, far too big to reasonably do it that way. The programmers have employed clever strategies to make the decision tree smaller.

But make no mistake, the computer is still running through far more moves than a human can to come to an answer. Something far, far beyond what any human could do. So, how do humans come close to giving those AIs a challenge?

But this is surely not a qualitative constraint. All expansion of knowledge builds upon current knowledge, and progress will be faster if the intelligence (whether biological or artificial) is more powerful.

What hasn’t? I’m not aware of any systems that people have tried to make carry out the scientific method. Lots of new discoveries in paleontology come from looking once again at fossils locked away in museum collections. If there are pictures of these, an AI could do it. I’d think that finding examples of things that could falsify a hypothesis wouldn’t be too difficult.
The theoretical physicists I knew in college tried never to observe anything directly, and I think they count as doing the scientific method.