I recently posted, in an AI thread, an example from the game of Connect 4 (along with the more well-trodden case of chess) to demonstrate that consumer-facing, paid-tier AI engines do not reason in the ways required to make decisions in games, even if they can recite and seemingly interpret the rules.
Since I like throwing problems at modern general-purpose (or so-advertised) AI tools, I have on occasion taken pictures of board game states and asked, say, ChatGPT or Gemini to evaluate something about them, even something as simple as "what's the current point tally for each player?". They consistently get things wrong despite "knowing" the rules, sometimes hilariously so, in ways that bear out my comments in the thread linked above.
However, "AI"* has served as the brain of computer-run players in games for a very long time now. If an app implements any reasonably complex board game and offers a computer opponent, that opponent may well be based on AI/ML techniques, and if so, it was likely trained in part or in full through self-play (a form of reinforcement learning). Such opponents can be made as strong as you want them to be. They are usually bespoke networks built for that specific game, but the network architecture for a given game could be slapped together in an afternoon by a game developer familiar enough with machine learning.
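To make the contrast concrete, here is a minimal sketch of what a traditional (non-ML) Connect 4 opponent looks like: plain negamax search with a depth limit. An ML-based opponent would essentially swap the hand-written scoring for a trained evaluation network. The board representation, function names, and depth limit below are all illustrative choices, not any particular app's implementation.

```python
# Minimal Connect 4 opponent via depth-limited negamax search.
# Illustrative sketch: board layout and scoring are arbitrary choices.

ROWS, COLS = 6, 7

def new_board():
    # '.' marks an empty cell; the two players are 'X' and 'O'.
    return [['.'] * COLS for _ in range(ROWS)]

def legal_moves(board):
    # A column is playable if its top cell is still empty.
    return [c for c in range(COLS) if board[0][c] == '.']

def drop(board, col, piece):
    # Return a new board with `piece` dropped into `col` (gravity applies).
    b = [row[:] for row in board]
    for r in range(ROWS - 1, -1, -1):
        if b[r][col] == '.':
            b[r][col] = piece
            return b
    raise ValueError("column is full")

def winner(board, piece):
    # Scan every cell for a run of four in any of the four directions.
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                if all(
                    0 <= r + i * dr < ROWS and 0 <= c + i * dc < COLS
                    and board[r + i * dr][c + i * dc] == piece
                    for i in range(4)
                ):
                    return True
    return False

def negamax(board, piece, depth):
    # Return (score, best_column) from `piece`'s point of view.
    opponent = 'O' if piece == 'X' else 'X'
    if winner(board, opponent):
        # The previous move ended the game; the -depth term prefers
        # wins that arrive sooner (more remaining depth = bigger score).
        return -1000 - depth, None
    moves = legal_moves(board)
    if depth == 0 or not moves:
        return 0, None  # search horizon or full board: call it neutral
    best_score, best_col = -10_000, moves[0]
    for col in moves:
        score, _ = negamax(drop(board, col, piece), opponent, depth - 1)
        score = -score  # what's good for the opponent is bad for us
        if score > best_score:
            best_score, best_col = score, col
    return best_score, best_col
```

Even at depth 2 this toy player will complete its own four-in-a-row and block an opponent's immediate threat; the real engineering effort in game AI goes into the evaluation function and search speed, which is exactly where trained networks come in.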
* in quotes since the term is too all-inclusive in this context to be very useful