Chris H. wrote: There is another Java-based game that has a MinMax-type system. I have read the author's messages, and I can somewhat understand the decision he made. The author suggests that his card pool will stay small, as this limits the evaluations that the code has to support.
I find this to be true only in the following cases:
- The AI is allowed to fetch any card from the card pool by means of cards such as Glittering Wish and Ring of Ma'rûf (a.k.a. Ring of Ma'ruf*). If the decisions for these cards are not restricted to exiled cards and the AI's sideboard, then the game tree broadens by the size of the entire card pool. For a complete implementation of MtG to date, that would be more than 11,000 nodes: one for each unique card (by English name) ever printed. (I must admit, evaluating a game tree of that magnitude does sound oddly exciting to me.)
- The AI is considering probabilities involving imperfect information (http://en.wikipedia.org/wiki/Imperfect_information) about the contents of other players' decks and hands. Such an AI would be fearsome indeed!
- Although it is theoretically unnecessary, the authors may desire that some specific cards have code to assist the AI. Some cards may also require custom tuning of the Position Evaluation Function. In those cases, reducing the card-pool reduces the amount of such coding. This could reduce the size of the game tree.
*for the ASCII-inclined (which includes the board's feature of hovering over card-names for card-descriptions)
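To make the second case above concrete, here is a minimal sketch of why imperfect information inflates the evaluation count: the AI cannot score a position once, it must average the evaluation over every card the opponent might be holding. All of the names here (expected_value, the toy scoring table) are hypothetical and not taken from any actual engine.

```python
def expected_value(state, unseen_cards, evaluate):
    """Score a position under imperfect information by averaging the
    evaluation over every card the opponent might be holding, weighted
    uniformly. One hidden card multiplies the work by the number of
    candidates; a whole hidden hand multiplies it again per slot."""
    if not unseen_cards:
        return evaluate(state, opponent_card=None)
    total = sum(evaluate(state, opponent_card=card) for card in unseen_cards)
    return total / len(unseen_cards)

# Toy evaluation: how good is attacking into each card the opponent could hold?
scores = {"Shock": -2.0, "Counterspell": -1.0, "Island": 3.0}
def toy_evaluate(state, opponent_card):
    return scores.get(opponent_card, 0.0)

print(expected_value("attack", ["Shock", "Counterspell", "Island"], toy_evaluate))
# prints 0.0 -- the average of -2.0, -1.0, and 3.0
```

Note that the three-candidate example already triples the evaluations for a single hidden card; a real deck list pushes this far higher, which is why such an AI would be fearsome.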
Other than that, the size of the card pool is nigh irrelevant to the number of evaluations. Rather, this number depends on the size of the game tree (http://en.wikipedia.org/wiki/Game_tree), which is a (monotonically increasing) function of the size of the Game-State*. In other words, to limit the number of evaluations, one must make the Game-State smaller. To do that, one does not use a smaller card pool; instead, one might play massively exiling cards like Apocalypse and Decree of Annihilation.
*The number of evaluations is at least exponentially proportional to the size of the Game-State, on the order of O(n^n), where n is at least the sum of the number of tokens in play, permanents, cards in the AI's hand, and cards in all graveyards. This order is further exponentiated by the depth of the game tree that the AI is permitted to predict, which is probably user-configurable. (I personally prefer having the option of limiting the AI's actual (clock) time for its predictions. This provides a predictably smooth play experience, and it allows the AI to improve automatically as hardware improves.)
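That clock-time preference is usually implemented with iterative deepening: search to depth 1, then depth 2, and so on, keeping the answer from the deepest search that finished before the deadline. This is a generic sketch under that assumption, not code from any of the engines discussed; the moves/apply_move/evaluate callbacks are placeholders.

```python
import time

def best_move(state, moves, apply_move, evaluate, seconds):
    """Iterative-deepening minimax under a wall-clock budget: deepen one
    ply at a time, and keep the best move from the deepest fully
    completed search when the clock runs out."""
    deadline = time.monotonic() + seconds
    best, depth = None, 1
    while time.monotonic() < deadline:
        try:
            best = max(moves(state),
                       key=lambda m: _minimax(apply_move(state, m), moves,
                                              apply_move, evaluate,
                                              depth - 1, False, deadline))
        except TimeoutError:
            break  # keep the result of the last depth that completed
        depth += 1
    return best

def _minimax(state, moves, apply_move, evaluate, depth, maximizing, deadline):
    if time.monotonic() > deadline:
        raise TimeoutError  # abandon the partially searched depth
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    values = [_minimax(apply_move(state, m), moves, apply_move, evaluate,
                       depth - 1, not maximizing, deadline) for m in options]
    return max(values) if maximizing else min(values)

# Toy game: from the start state you may take 1, 2, or 3 points, then it ends.
toy_moves = lambda s: [1, 2, 3] if s == 0 else []
print(best_move(0, toy_moves, lambda s, m: s + m, lambda s: s, 0.05))  # prints 3
```

The nice property is exactly the one described above: on faster hardware, more depths complete within the same budget, so the AI improves without any retuning.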
If the AI could prune the game tree based on the fact that there are two or more identical permanents, cards in hand, or cards in a graveyard, that would reduce the number of evaluations. (The pruning would occur based on what possible decisions the duplicate cards themselves would generate, not on the decisions provided by other cards, such as a single Fireball or Comet Storm in hand.) However, the number of identical cards has little to do with the size of the card pool (except for very, very small card pools*); rather, it has to do with how many duplicated cards each player has in her or his deck.
*A card pool of around 14 cards would nigh require each player to play the same deck in an ordinary, constructed format.
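A minimal sketch of that duplicate-collapsing idea, assuming cards can be compared by English name alone (a real engine would also have to verify the copies are truly interchangeable, e.g. same zone, no attached auras or counters):

```python
from collections import Counter

def unique_plays(hand):
    """Branch once per distinct card name instead of once per physical
    card. Counter also remembers how many copies exist, which matters
    for lines that want to cast the same spell repeatedly."""
    return Counter(hand)

# Four Shocks, two Islands, one Fireball: seven cards in hand.
hand = ["Shock", "Shock", "Shock", "Shock", "Island", "Island", "Fireball"]
branches = unique_plays(hand)
print(len(hand), "naive branches vs", len(branches), "after collapsing")
# prints: 7 naive branches vs 3 after collapsing
```

The saving compounds at every level of the tree, since each pruned branch removes an entire subtree, not just one node.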
Inference 1: One could reduce the size of the game tree by using the above optimization and by allowing more than four copies of each card.
Inference 2: The above optimization is much less useful in singleton (highlander) formats. It would only work on basic lands, tokens, and perhaps some Clone-type cards.
In any case, a smaller card pool means less coding.