
Magarena 1.37

Moderators: ubeefx, beholder, melvin, ShawnieBoy, Lodici, CCGHQ Admins

Magarena 1.37

Postby melvin » 27 Apr 2013, 13:52

Download Magarena 1.37 for Windows/Linux
Download Magarena 1.37 for Mac

This release includes two new AIs: an honest Monte Carlo Tree Search AI and a
cheating Vegas AI. We had been experimenting with an honest Monte Carlo Tree
Search AI for a while, but its performance was rather poor. The breakthrough was
to use a random selection of the opponent's library as the opponent's hand
during the random playout phase of the AI.
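The playout-sampling idea can be sketched as follows. This is an illustrative sketch only, not Magarena's actual code; the class and method names are invented for the example. Before each random playout, the honest AI deals the opponent a hand drawn at random from their hidden cards:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Illustrative sketch of the "random opponent hand" idea described above
// (names are invented, not Magarena's actual API): before a random playout,
// replace the opponent's hidden hand with a sample drawn from their library.
public class Determinize {
    private static final Random RNG = new Random();

    // Draws handSize cards without replacement from the opponent's hidden
    // cards (library plus unknown hand) to stand in as their hand.
    public static List<String> sampleHand(List<String> hiddenCards, int handSize) {
        List<String> copy = new ArrayList<>(hiddenCards);
        Collections.shuffle(copy, RNG);
        return copy.subList(0, Math.min(handSize, copy.size()));
    }
}
```

Because each playout can re-sample a fresh hand, over many playouts the AI effectively averages over the possible hands the opponent might hold, instead of treating the hidden cards as unusable.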

We benchmarked all current AIs. The strongest is the cheating version of the Monte Carlo Tree
Search AI at level 8. Surprisingly, the honest version at level 8 is the second
strongest, outperforming even the cheating Minimax AI at level 8.

Includes contributions from melvin, sponeta, and hong yie Huang.

Changelog:
- added non-cheating Monte Carlo Tree Search AI
- added cheating Vegas AI
- removed random AI
- updated AI comparison on wiki

- improved speed of program startup by loading Groovy/Java code only when a card is
needed in a match
- ignore corrupted card images and allow the player to download new ones instead of crashing
- added Planeswalker to the card type filter in the card explorer
- draw mana icons for all permanents that generate mana, previously limited to lands

- fixed: players always discard down to 7 cards regardless of their maximum hand size
- fixed: Planeswalker uniqueness rule not applied correctly as it was missing some Planeswalker subtypes
- fixed: Esperzoa doesn't have flying
- fixed: evolve triggering when an opponent's creature enters the battlefield
- fixed: deck strength calculation too slow

- added the following premade decks:
Graveyard_Eater.dec, Squirrels.dec, Swords_of_Enduring_Pain.dec,
UBmagarena.dec, UW_Spirit_Control.dec, Eldrazi_40.dec, Eldrazi_60.dec,
Elemental_Rage.dec, Fungus.dec, Human_Sacrifice.dec, Myr.dec, Weaken.dec,
Reanimation.dec

- added the following card:
Garruk's Packleader
melvin
AI Programmer
 
Posts: 1062
Joined: 21 Mar 2010, 12:26
Location: Singapore
Has thanked: 36 times
Been thanked: 459 times

Re: Magarena 1.37

Postby ubeefx » 27 Apr 2013, 17:19

Thanks Melvin. Here are some thoughts.

On the comparison page it is shown that MMAB-H-1 outperforms MMAB-H-8. Is this according to reality?
Or are there not yet enough simulations performed to make this statistically relevant?

Also, do you use the same deck for both players (mirror match) or two random decks?
The deck can have a larger influence than the AI strength on the game outcome.

I like the concept of random simulations, which is why I did the basic Monte Carlo implementation called the Vegas AI.
Why do you think Magarena is currently the only Magic engine using this technique successfully? Or are there already others?
One thing I like about Minimax is that it does not always need the full thinking time, so the games are faster.

The Vegas AI uses multithreading; did you implement this already in the MCTS AI?
This could help improve the number of possible simulations on multi-core processors.

Maybe you can also add level 4 or 6 to the comparison to see the scaling of AI performance.

It seems like the technique of getting a random opponent hand sample could be included in the honest Vegas and MiniMax AI implementations.
ubeefx
DEVELOPER
 
Posts: 748
Joined: 23 Nov 2010, 19:16
Has thanked: 34 times
Been thanked: 249 times

Re: Magarena 1.37

Postby melvin » 28 Apr 2013, 01:25

ubeefx wrote:On the comparison page it is shown that MMAB-H-1 outperforms MMAB-H-8. Is this according to reality?
Or are there not yet enough simulations performed to make this statistically relevant?

Yes, it is statistically significant; both variances are small. I didn't show the variance of the score on the wiki page as I am waiting for confirmation on the definition of variance from the author of the WHR library. My best guess as to why this happens is that looking further ahead with the hidden information actually leads the AI astray, into thinking that the opponent is in a poor position because the opponent's library and hand are all hidden cards that can't be used. Note that levels 1 and 8 here are not the number of main phases but the number of seconds allowed; I've changed the main-phase restriction to a time-based restriction.
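The time-based restriction can be pictured as a simple wall-clock budget around the simulation loop. A minimal sketch, with invented names, not Magarena's actual code:

```java
// Illustrative sketch (names invented, not Magarena's code) of replacing a
// fixed search-depth limit with a wall-clock budget: keep running
// simulations until the allotted time is spent.
public class TimeBudget {
    // Runs one unit of work per loop iteration until the budget expires;
    // returns how many iterations fit into the budget.
    public static long runUntil(long budgetMillis, Runnable simulateOnce) {
        long deadline = System.currentTimeMillis() + budgetMillis;
        long iterations = 0;
        while (System.currentTimeMillis() < deadline) {
            simulateOnce.run();
            iterations++;
        }
        return iterations;
    }
}
```

Under this scheme, a "level 8" AI would simply be given an 8-second budget per decision, so stronger levels translate directly into more simulations rather than deeper fixed-depth search.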

ubeefx wrote:Also do you use the same deck for both players (mirror match) or two random decks?
The deck can have a larger influence than the AI strength in the game outcome.

I used two random two-colored decks. Two new random decks are used in each game, so in the ten games between two AIs, ten pairs of random decks were used.
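As a rough illustration of why the variance question matters (this is a generic binomial standard-error calculation, not the WHR library's definition of variance): the uncertainty of a win rate observed over n independent games shrinks only as 1/sqrt(n), so a single ten-game series is noisy on its own and many matchups have to be aggregated.

```java
// Generic binomial standard error of an observed win rate over n games:
// sqrt(p * (1 - p) / n). For 5 wins out of 10 this is about 0.158, which
// is why results from many matchups are aggregated (here via the WHR
// rating) rather than trusting any single ten-game series.
public class WinRate {
    public static double standardError(int wins, int games) {
        double p = (double) wins / games;
        return Math.sqrt(p * (1 - p) / games);
    }
}
```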

ubeefx wrote:I like the concept of random simulations, which is why I did the basic Monte Carlo implementation called the Vegas AI.
Why do you think Magarena is currently the only Magic engine using this technique successfully? Or are there already others?

Dreamblade uses a hybrid Minimax/MCTS AI that is doing quite well. There is some mention in the forum that Mage has such an AI but there is no report on its performance. My guess is that MCTS with hidden information is hard, so folks who do not want to include a cheating AI would be stuck with a weak honest MCTS AI which they do not release. For Magarena, I felt that even though we couldn't get the honest version working well, we could still release a cheating version and mark it as such for the user to choose.

ubeefx wrote:The Vegas AI uses multi threading, did you implement this already in MCTS AI?
This could help improve the number of possible simulations on multi-core processors.

No, MCTS is single-threaded. I've been looking into making it multithreaded but it's trickier than I imagined.
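For what it's worth, one commonly described way to parallelise MCTS is "root parallelisation": each thread grows its own independent tree from the root, and the per-move visit counts are merged at the end, which avoids any locking on a shared tree. A toy sketch follows; the per-iteration work is a stand-in, not a real MCTS iteration, and nothing here is Magarena's code:

```java
// Toy sketch of root-parallel MCTS: each worker thread runs its own
// independent simulations, and the per-move visit counts are summed after
// join(), which also guarantees the main thread sees the workers' writes.
public class RootParallel {
    public static int[] search(int moves, int threads, int itersPerThread) {
        int[][] visits = new int[threads][moves];
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                java.util.Random rng = new java.util.Random(id);
                // Stand-in for one full MCTS iteration: credit a random move.
                for (int i = 0; i < itersPerThread; i++) {
                    visits[id][rng.nextInt(moves)]++;
                }
            });
            workers[t].start();
        }
        int[] total = new int[moves];
        for (int t = 0; t < threads; t++) {
            try {
                workers[t].join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            for (int m = 0; m < moves; m++) {
                total[m] += visits[t][m];
            }
        }
        return total;
    }
}
```

The trade-off is that the trees do not share statistics during the search, so this scales well on cores but explores somewhat redundantly compared to a shared-tree approach.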

ubeefx wrote:Maybe you can also add level 4 or 6 to the comparison to see the scaling of AI performance.

Yes, I'm looking into this.

ubeefx wrote:It seems like the technique of getting a random opponent hand sample could be included in the honest Vegas and MiniMax AI implementations.

This is already included in the honest Vegas AI. I'm not sure how to apply it to Minimax though. Would it be something like this: each time Minimax is at an opponent move, randomize the hand/library?

Re: Magarena 1.37

Postby Aswan jaguar » 28 Apr 2013, 07:26

Well, besides games, card games and Magarena, I do love statistics and would love to see something closer to reality on who the strongest AI is. We know that winning a game in MTG depends, besides ability, on more basic factors like deck strength (even if the decks are random, there is room for AI A to have a better deck than its opponent AI B in 6 out of 10 games, or even more) and then the hand you begin with (AI A getting a better hand, relative to its deck, more often than AI B).
On the hand factor: I would love to see what an AI can do to determine whether it has a better chance to win by mulliganing a poor hand, and how each AI performs at mulligans (if that is possible, but I guess it is too hard, if not impossible, to achieve something like that).

Since there is little to do about the hand factor, I agree with ubeefx that we can certainly minimise the luck factor and get safer results by giving the same deck to both AIs and testing them with 10 identical decks and 10 games each (the same as you did, if I understood correctly, but now with the same decks). Even better, you could have them use a mix of, let's say, 5 random decks and 5 of the premade decks that make use of more combos; this way we can also see how each AI performs with combos or planning ahead, and with this mix we can get more accurate results imho.

Again, thanks to all of you for providing so many different and efficient AIs to choose from. Another unique feature of Magarena. =D>
---
Trying to squash some bugs and playtesting.
Aswan jaguar
Super Tester Elite
 
Posts: 8078
Joined: 13 May 2010, 12:17
Has thanked: 730 times
Been thanked: 458 times

Re: Magarena 1.37

Postby ubeefx » 06 May 2013, 20:38

I found this page about MCTS, maybe interesting to read: http://mcts.ai/.

These drawbacks are given for the method:

Playing Strength
The MCTS algorithm, in its basic form, can fail to find reasonable moves for even games of medium complexity within a reasonable amount of time.
This is mostly due to the sheer size of the combinatorial move space and the fact that key nodes may not be visited enough times to give reliable estimates.

Speed
MCTS search can take many iterations to converge to a good solution, which can be an issue for more general applications that are difficult to optimise.

They mention that adding domain knowledge can help improve the performance significantly. This is actually the case in Magarena.
The way mana costs are paid, combat is handled with simulation, targets are filtered, etc. uses domain knowledge and vastly reduces the move space.
Without these features the Minimax AI would also be much worse and not able to think as far ahead.
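For readers following the link: the standard MCTS variant described there (UCT) selects children by the UCB1 score, balancing the observed win rate against how rarely a child has been visited. A minimal sketch of that selection rule (generic UCT, not Magarena's code):

```java
// Minimal sketch of UCB1 child selection as used by UCT (the standard MCTS
// variant): exploit children with high win rates, but keep exploring
// rarely visited ones; c controls the exploration/exploitation trade-off.
public class Ucb1 {
    public static int bestChild(double[] wins, int[] visits, int parentVisits, double c) {
        int best = -1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < visits.length; i++) {
            if (visits[i] == 0) {
                return i; // always try an unvisited child first
            }
            double score = wins[i] / visits[i]
                    + c * Math.sqrt(Math.log(parentVisits) / visits[i]);
            if (score > bestScore) {
                bestScore = score;
                best = i;
            }
        }
        return best;
    }
}
```

The "key nodes may not be visited enough times" drawback quoted above is exactly what the exploration term fights against; domain knowledge helps by shrinking the set of children that need visits at all.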

Re: Magarena 1.37

Postby jeffwadsworth » 09 May 2013, 16:43

The MAGE AI is quite broken at the moment. That is the one area where we have serious deficiencies. It is playable, but at a very basic level. We can only dream of the AI implementation of Magarena. Hehe.
jeffwadsworth
Super Tester Elite
 
Posts: 1171
Joined: 20 Oct 2010, 04:47
Location: USA
Has thanked: 287 times
Been thanked: 69 times

