That’s not the hard part of trading, though. The hard part of trading is convincing another player of what their goals are. Humans playing Catan will reason like this: “I don’t want Jim to win, because he’s too cocky about it. If you trade me wood for sheep, that will help prevent Jim from winning.” The human on the other end of that proposed trade might consider it worthwhile. Or they might not want the proposer to win either, and so they’ll refuse the trade. That means that, in order to master Catan, you need to master the understanding and manipulation of human motivations, which is the most complicated game known to anyone.
But back up; I think we need to define some terms better. First off, “AI”. Nowadays, when most people talk about AI, they’re referring to large language models (“LLMs”), such as ChatGPT. But folks were using the word “AI” long before ChatGPT existed.
In its simplest meaning, it just means “a program sufficiently sophisticated to accomplish some task”. In this sense, there are AIs to play lots of games. Most computer games intended for player-versus-player play include an AI, so folks who buy the game can play even when nobody else is available. Most of these “AI” programs were written entirely by humans, and follow simple rules that humans laid down. For instance, a Starcraft AI might follow a set script: build certain units in a certain order until either someone attacks it or its army reaches a certain size, then attack, and so on. The AI’s decision tree is probably pretty complicated, but it was all designed by humans. The AI that comes packaged with the game is good enough to entertain a casual player, but not good enough to beat the top humans, though others have written better AIs for it. And in some games, like chess, this sort of human-comprehensible design, with enough computing power behind it, is enough to beat even the top humans: Deep Blue, which beat Kasparov, was programmed that way.
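To make the “simple rules that humans laid down” concrete, here is a minimal sketch of such a scripted game AI. The build order, the unit threshold of 20, and the action names are all invented for illustration; no real Starcraft interface looks like this.

```python
# A toy version of a hand-written, rule-based game AI: every decision comes
# from rules a human programmer wrote down in advance.

def scripted_ai_step(state):
    """Pick one action from simple, human-authored rules."""
    if state["under_attack"]:
        return "defend"
    if state["army_size"] >= 20:  # arbitrary threshold chosen by the programmer
        return "attack"
    # Otherwise, keep following a fixed build script, looping over it.
    build_order = ["worker", "barracks", "soldier", "soldier"]
    return "build " + build_order[state["step"] % len(build_order)]

# Small army, no threats: keep following the script.
print(scripted_ai_step({"under_attack": False, "army_size": 5, "step": 2}))
# Army big enough: the script says attack.
print(scripted_ai_step({"under_attack": False, "army_size": 25, "step": 9}))
```

A full game AI’s decision tree would have hundreds of such branches, but every one of them is still something a human wrote and can read back.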
But not all AI is programmed by humans. Some of the more advanced AIs aren’t so much programmed as grown: humans create some sort of computational framework, but then that framework is allowed to develop on its own, with little or no human guidance. AlphaZero and ChatGPT are both examples of this, but they still work in very different ways. AlphaZero, once it knew the rules of chess, just played bazillions of games against copies of itself, and learned in the process what worked well. The same basic framework (but with different rules) was also used to learn and master other games, such as Go and shogi. And it was fantastically successful: once AlphaZero was fully trained, its creators tested it by having it play 100 games against the previous best computer chess program (one that was programmed by humans), and it lost zero of those 100 games (it drew a lot, but that’s common in high-end chess). How does it do it? Nobody really knows, because nobody programmed how it chooses moves: it figured that out for itself, and we don’t know what it’s “thinking” about when it chooses any given move.
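The self-play idea above can be sketched in miniature. Everything here is invented for illustration: the “game” is just picking a number (only 3 wins), the “policy” is a table of move preferences rather than a neural network, and the +0.1 reinforcement rule stands in for real training. The shape, though, is the same: the agent plays against its own policy and strengthens whatever led to wins.

```python
import random

def play_one_game(policy):
    """One 'self-play' game: the agent samples a move from its own policy."""
    moves = [1, 2, 3]
    move = random.choices(moves, weights=[policy[m] for m in moves])[0]
    return move, (move == 3)  # in this toy game, only move 3 wins

def train_by_self_play(rounds=5000):
    policy = {1: 1.0, 2: 1.0, 3: 1.0}  # start with no preferences at all
    for _ in range(rounds):
        move, won = play_one_game(policy)
        if won:
            policy[move] += 0.1        # reinforce whatever led to a win
    return policy

random.seed(0)                          # just to make the demo reproducible
policy = train_by_self_play()
print("preferred move:", max(policy, key=policy.get))
```

Notice that no line of this code says “3 is the good move”; the preference emerges from play. That is the sense in which nobody programmed how such a system chooses moves.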
ChatGPT was also “grown” without direct human intervention, but in a very different way. Instead of lots of self-play aimed at “winning” (the sorts of things that ChatGPT does don’t have a well-defined win condition), it was given a huge data dump of lots and lots of human writing, and it was set to work finding all of the patterns in that existing writing, so that it could produce writing of its own that fits those patterns.
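Here is a drastically simplified sketch of that “find the patterns, then imitate them” idea: a bigram counter that records which word tends to follow which, then generates text by always picking the likeliest next word. Real LLMs learn enormously subtler patterns with neural networks over billions of documents; the tiny corpus and greedy generation here are just for illustration.

```python
from collections import Counter, defaultdict

def learn_patterns(text):
    """Count, for each word, which words follow it in the training text."""
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def continue_text(follows, word, length=4):
    """Generate text by repeatedly picking the likeliest next word."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = learn_patterns(corpus)
print(continue_text(model, "the"))  # -> the cat sat on the
```

The output is new-ish text that fits the training data’s patterns without copying any full sentence from it, which is the basic trick, scaled down by many orders of magnitude.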
Now, chess can be reduced to text. People have played it that way, and there are probably a good number of such games in ChatGPT’s training database. So it’s at least possible in principle that ChatGPT, which finds patterns in text, might find the patterns that correspond to how to play chess. Certainly it can learn enough to do something that superficially resembles playing chess. But there probably aren’t nearly enough chess games in ChatGPT’s training data to yield good chess play. Probably there couldn’t be: ChatGPT’s non-interactive method of learning would likely require a number of games so ludicrously huge that they couldn’t fit on all of the computer storage in the world.
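To see just how literally chess reduces to text: a complete game in standard algebraic notation is nothing but a short string. Below is Scholar’s Mate, the kind of game text an LLM would encounter in its training data; to a text-pattern learner it is only a stream of tokens, with no board attached.

```python
# A whole chess game as plain text (Scholar's Mate, in algebraic notation).
game_text = "1. e4 e5 2. Qh5 Nc6 3. Bc4 Nf6 4. Qxf7#"

# A text-pattern learner sees only tokens like these, never a board:
print(game_text.split())
print(len(game_text), "characters for an entire game")
```

The learner must reconstruct everything a board would make obvious, which moves are even legal, what a check is, what the “#” means, purely from regularities across many such strings. That is why it would take such an absurd number of example games to learn good play this way.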