Using Project Firemind to test speculative AI improvements

Re: Using Project Firemind to test speculative AI improvements

Postby PalladiaMors » 09 Mar 2015, 15:58

mike wrote:Does anyone know about cards or card combinations the AI currently handles badly?

I'd like to include some more challenges for future AI implementations that show that it "knows the cards better" than old versions.
It plays quite poorly with mass removal in general. Death Cloud is a good example: the AI typically casts it for 4 mana, clearly just to remove one creature. The problems with how it uses that particular card seem to be twofold: it doesn't try to gain card advantage with it, and it focuses solely on the creature-removal part of the spell - I haven't seen a play that seemed intent on destroying the opponent's land base or emptying his hand. Of course, that's just how it looks to me - I really know nothing about how the evaluation process works. Improving the handling of mass removal in general would be important, and Death Cloud in particular still seems to be a relevant card in some formats.

I also tried an Astral Slide / Lightning Rift deck in Firemind once and it was a disaster. The AI would skip using the effects when cycling, and its timing was completely off... However, those decks seem to have been out of fashion for a long time, so I'd treat that as a secondary priority.

It also seems to handle madness-based decks badly. Again, judging from a few recent glances at mtgsalvation, that's probably a secondary concern right now if we want Firemind to resemble competitive play a bit more closely.

Obviously the biggest hole is combo. Since we've been using some analogies with chess: chess engines are known to play the opening well below their strength in other areas of the game. I remember once removing Chessmaster 7000's opening book (already a GM-strength engine) and leaving it thinking for 24 hours over the starting position. It came up, as its #1-rated move, with 1.g4, known to be a terrible opening move. Even today, without opening books, engines play the opening sub-par, and to compensate for this weakness they're given massive opening books. Perhaps in the future something similar could be used here - the AI could be given "combo books" of sorts, telling it how to use certain card combinations: what board situation it should reach, what order the cards should be played in, or whatever. Dunno if this makes sense, just some wild speculation here.

Edit: Also, Magarena sometimes pays Fireblast's alternative cost early, wrecking its own game. I think the alternative cost should mainly be used as a finisher. This and other "sacrifice a Mountain" alternative-cost cards might need some tweaking.
Last edited by PalladiaMors on 14 Mar 2015, 14:14, edited 6 times in total.

Re: Using Project Firemind to test speculative AI improvements

Postby muppet » 09 Mar 2015, 16:06

Try playing against Stockfish with no opening book - I guarantee it will not be insane in the opening.

Re: Using Project Firemind to test speculative AI improvements

Postby PalladiaMors » 09 Mar 2015, 17:59

^ Sounds interesting. I think the issue should be most significant in difficult positions that have been analysed for decades. I don't know what Stockfish's rating is - probably around 2800 - but can even Stockfish play perfectly, once you remove its book, in something like the Leipzig gambit, the Fegatello attack, or the monster variation of the Vienna game? You get my point: crazy tactical positions that took decades to be thoroughly explored by humans. I realize how strong computers are, but I'd be surprised if Stockfish could find 100% correct moves in one of those lines without its opening book.

@Mike, I just remembered: if you can, when you have some time to spare, could you please "clean up" the requests page? A lot of already-implemented cards are still listed there; I look at that list from time to time to figure out if there's anything doable. Right now it's kinda cluttered.

Re: Using Project Firemind to test speculative AI improvements

Postby mike » 09 Mar 2015, 21:27

@muppet @PalladiaMors Thanks for your feedback. I added decks that represent the named "problem areas" for the AI to the gauntlet.

PalladiaMors wrote:if you can, when you have some time to spare, could you please "clean up" the requests page? A lot of already-implemented cards are still listed there; I look at that list from time to time to figure out if there's anything doable. Right now it's kinda cluttered.
Done. For simplicity's sake I'll just delete them in the future once the card gets added.

PalladiaMors wrote:Perhaps in the future something similar could be used here - the AI could be given "combo books" of sorts, telling it how to use certain card combinations: what board situation it should reach, what order the cards should be played in, or whatever. Dunno if this makes sense, just some wild speculation here.
It does make sense. The issue is just that keeping track of all the possible combos by hand is cumbersome, and they are often more complicated than putting Splinter Twin on an Exarch and just winning - just imagine Amulet Bloom or Ad Nauseam. What would be really cool is adding a memory to the AI so it remembers game-winning combinations and tries to assemble them more often.
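
Roughly what I imagine, as a hypothetical sketch (none of these names exist in Magarena):
Code: Select all
  // Hypothetical: remember sets of cards that appeared together in winning
  // lines, and later boost moves that progress toward a remembered combo.
  import java.util.HashMap;
  import java.util.Map;
  import java.util.Set;

  final class ComboMemory {
      private final Map<Set<String>, Integer> wins = new HashMap<>();

      void recordWin(Set<String> cardsUsed) {
          wins.merge(cardsUsed, 1, Integer::sum);
      }

      // Small score bonus when the available cards contain a known combo.
      double bias(Set<String> cardsInHandAndPlay) {
          double bonus = 0;
          for (Map.Entry<Set<String>, Integer> e : wins.entrySet()) {
              if (cardsInHandAndPlay.containsAll(e.getKey())) {
                  bonus += 0.05 * e.getValue();   // illustrative weighting
              }
          }
          return Math.min(bonus, 0.2);            // cap the bias
      }
  }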

Then again, most combos could probably be figured out by the AI by simply going through the available options and arriving at a winning state. I'm pretty sure right now the only thing preventing this is the inability of the AI to even see some "options".

That actually got me thinking about a "goldfish" option where the AI simply ignores what the opponent does or could do and spends all its resources evaluating the possible lines it can take with its own cards. Some combo decks like Storm, Ad Nauseam, or arguably Amulet Bloom really don't care (or even can't care) about their opponent's game plan. They just need to be as fast as possible, and playing around a counterspell usually isn't the right line anyway.
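
Something like this rough sketch is what I have in mind - purely hypothetical, with made-up types, not anything in the codebase:
Code: Select all
  // Hypothetical "goldfish" search: enumerate only our own plays, ignore the
  // opponent completely, and find the fastest line that reaches a win.
  import java.util.ArrayDeque;
  import java.util.List;
  import java.util.Queue;

  final class Goldfish {
      interface State {
          boolean isWin();
          List<State> ownPlays();   // successors from our own plays only
      }

      // Breadth-first: the first winning state found uses the fewest plays.
      static State fastestWin(State start, int maxStates) {
          Queue<State> frontier = new ArrayDeque<>();
          frontier.add(start);
          int expanded = 0;
          while (!frontier.isEmpty() && expanded++ < maxStates) {
              State s = frontier.poll();
              if (s.isWin()) return s;
              frontier.addAll(s.ownPlays());
          }
          return null;              // no win found within the budget
      }
  }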

Re: Using Project Firemind to test speculative AI improvements

Postby muppet » 20 Mar 2015, 08:47

I had another thought about how the AI works. You may have already realised this and be doing it already - I am sure the chess people have a solution.

Currently, does the AI simply pick the candidate move with the highest win total?
If so, the AI is making the move that is best only if our opponent is playing randomly.

Let me give an example. We find a move which wins 99/100 tries - great, we think, this is better than the other moves, we'll pick this. Unfortunately, in the other 1/100 we lose every time, immediately, to a really obvious play - which is the play the opponent will always make. So the 99/100 move is not the best pick, and we should have a way to detect this.

Having thought of the problem, I came up with a solution which you may already use, and which I am sure the chess people will have thought of: you simply go to the next move in the chain and look at that. From the example above, move a = 99/100 and move b = 90/100. Now go to the next move within these branches, assume 3 options for the opponent, and calculate the win percentage for each.
This gives a1 = 50/100, a2 = 60/100, a3 = 0/100; b1 = 50/100, b2 = 60/100, b3 = 40/100.
So we can see that if we play move a and the opponent uses response 3, we never win a game. Basing our pick on these second-level values instead, b3 is the least bad outcome if the opponent makes his best move, so we pick move b instead of move a.

Now this does assume the opponent plays the best move, which is another assumption, but it is a better one than assuming he plays a random move. I have thoughts about how this can be improved too, but I thought I would start with this to check I understood what is going on and what you are actually doing in the code.


I realise this might be much more computationally expensive, but it could well make a massive improvement if it isn't already used.
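
To make sure I'm describing it clearly, here is a rough sketch of the selection rule I mean (made-up names, nothing from the actual code):
Code: Select all
  // Hypothetical sketch: instead of picking the move with the best average
  // win rate, score each move by the opponent's best reply and maximise that
  // worst case. With the numbers above, move a scores 0/100 (reply a3) and
  // move b scores 40/100 (reply b3), so we pick b.
  import java.util.List;

  final class WorstCasePick {
      interface Move {
          List<Move> opponentReplies();
          double winRate(Move reply);   // our win rate after this reply
      }

      static Move best(List<Move> ourMoves) {
          Move best = null;
          double bestWorst = -1;
          for (Move m : ourMoves) {
              double worst = 1.0;       // opponent picks the reply worst for us
              for (Move reply : m.opponentReplies()) {
                  worst = Math.min(worst, m.winRate(reply));
              }
              if (worst > bestWorst) {  // we pick the move with the best worst case
                  bestWorst = worst;
                  best = m;
              }
          }
          return best;
      }
  }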

Re: Using Project Firemind to test speculative AI improvements

Postby melvin » 20 Mar 2015, 09:26

Thanks for thinking about AI improvements. All the AIs make use of this technique except for VEGAS. It is known as minimax in the literature, see https://en.wikipedia.org/wiki/Minimax

Re: Using Project Firemind to test speculative AI improvements

Postby PalladiaMors » 20 Mar 2015, 22:34

I don't mean to keep bothering you with questions all the time, but I can't resist - I'm too curious about how the AI development is going. When you guys have time, could you please explain a little of what's been changed in the AI recently? I've been keeping an eye on the Firemind ratings and noticed some significant changes lately. Right now the #1-rated deck in Modern is a dedicated control deck, which looked unlikely a few months ago. I was quite happy when I saw that - I took it as a sign that the AI has been reacting quickly to the changes and playing better with deck types that gave it trouble before.

Is it possible to explain in layman's terms what variables/parameters/processes/whatever have been changed? I was also wondering whether everything that's been tried has worked, or whether it's actually been hard to find improvements. Waiting for the new version next month to see if I can sense some differences!

Re: Using Project Firemind to test speculative AI improvements

Postby melvin » 21 Mar 2015, 01:30

In 1.59, there were two main AI changes:

1. Modifying the way MCTS counts the length of the game and how it uses that length to adjust the score of a simulation. The score is a number between 0 and 1: 0 is bad for the AI, 1 is good for the AI.

Previously it counted the total number of "actions" (MagicAction) and scored a simulation as follows:
Code: Select all
  if (game not finished)
    return 0.5                            // treat unfinished simulations as a draw
  else if (ai lost)
    return actions/(2 * MAX_ACTIONS)      // longer losses score closer to 0.5
  else
    return 1 - actions/(2 * MAX_ACTIONS)  // quicker wins score closer to 1
Now it counts the number of events (MagicEvent) and separates the AI's events from the opponent's events.

Code: Select all
  if (game not finished)
    return 0.5                                // treat unfinished simulations as a draw
  else if (ai lost)
    return opponent_events/(2 * MAX_EVENTS)   // losses that drag out the opponent score higher
  else
    return 1 - ai_events/(2 * MAX_EVENTS)     // wins using fewer of our own events score higher
This change has two impacts. First, the number of events is more coarse-grained than the number of actions, but it correlates better with the number of decisions.

Secondly, we wanted to stop the AI from making useless plays just to increase the number of game actions when it is losing. Now, when the AI is losing, it prefers plays that increase the number of the opponent's events, over which it has only indirect influence.
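
To make the two scoring rules concrete, here is a small self-contained illustration (the constants and names are made up for the example, not the actual Magarena code):
Code: Select all
  // Illustration of the old (1.58) and new (1.59) simulation scoring.
  final class SimScore {
      static final double MAX_ACTIONS = 10000;  // arbitrary example values
      static final double MAX_EVENTS = 1000;

      // 1.58: score from the total number of actions
      static double score158(boolean finished, boolean aiLost, int actions) {
          if (!finished) return 0.5;
          return aiLost ? actions / (2 * MAX_ACTIONS)
                        : 1 - actions / (2 * MAX_ACTIONS);
      }

      // 1.59: score from events, split between the AI and the opponent
      static double score159(boolean finished, boolean aiLost,
                             int aiEvents, int oppEvents) {
          if (!finished) return 0.5;
          return aiLost ? oppEvents / (2 * MAX_EVENTS)
                        : 1 - aiEvents / (2 * MAX_EVENTS);
      }

      public static void main(String[] args) {
          System.out.println(score159(true, false, 50, 60));  // quick win: 0.975
          System.out.println(score159(true, true, 400, 500)); // drawn-out loss: 0.25
      }
  }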

2. Adding a new AI (MTDF), not yet tuned/tested against the other AIs. It is planned to be evaluated after the 1.60 release and made available as an opponent in 1.61.
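
For those unfamiliar, MTD(f) is a standard driver from the chess literature: it converges on the minimax value through repeated zero-window alpha-beta searches. A generic sketch with illustrative names, not our actual implementation:
Code: Select all
  // Generic MTD(f) driver (Plaat et al.). Needs a transposition table in the
  // underlying alpha-beta search to be efficient.
  final class Mtdf {
      interface Search {
          int alphaBeta(int alpha, int beta, int depth);
      }

      static int mtdf(Search s, int firstGuess, int depth) {
          int g = firstGuess;
          int lower = Integer.MIN_VALUE;
          int upper = Integer.MAX_VALUE;
          while (lower < upper) {
              int beta = Math.max(g, lower + 1);   // zero-window around g
              g = s.alphaBeta(beta - 1, beta, depth);
              if (g < beta) upper = g;             // failed low: value below beta
              else lower = g;                      // failed high: value at least beta
          }
          return g;                                // converged minimax value
      }
  }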

All the above is available in 1.59. Did you notice any difference in MCTS (1.59) compared to the previous versions?

Re: Using Project Firemind to test speculative AI improvements

Postby PalladiaMors » 21 Mar 2015, 14:53

melvin wrote:All the above is available in 1.59. Did you notice any difference in MCTS (1.59) compared to the previous versions?
I didn't really play many actual games this month, mostly playtested cards (and for that I set the AI to level 1 so it plays faster). In the few games I played I didn't notice any delaying actions like I used to, but I guess that depends a bit on what cards are in play. I've also recently switched from Monte Carlo to MiniMax. I trust the result stats, but to me it looks like MiniMax makes fewer obvious errors like suicide attacks. Of course, since MCTS is rated higher, it must compensate somewhere else - possibly it makes more good plays, which I don't notice as much as the errors. MiniMax also feels more 'agile': sometimes MCTS thinks for the full 8 seconds even on the first turn, when it shouldn't have that many options, while MiniMax picks an option quickly in those situations.

Still interested in hearing something about what's going on on Mike's "front"! I know there was a change to mulligan decisions, but I don't know anything else.

Re: Using Project Firemind to test speculative AI improvements

Postby muppet » 21 Mar 2015, 15:13

Current AI: games won by muppet 76%, duels won by muppet 84%.
Version 1.58: games won by muppet 83%, duels won by muppet 90%.

So it seems a big improvement.

Re: Using Project Firemind to test speculative AI improvements

Postby melvin » 22 Mar 2015, 02:34

muppet wrote:Current AI: games won by muppet 76%, duels won by muppet 84%.
Version 1.58: games won by muppet 83%, duels won by muppet 90%.

So it seems a big improvement.
Which AI is this and at what level?

Re: Using Project Firemind to test speculative AI improvements

Postby muppet » 22 Mar 2015, 09:41

Level 8 MCTS, as per the muppet games.

Would it be easy to add a level 60 and a level 300 on the same time scale? It would be interesting to see if they made much difference. An infinite mode would be good too, but might require extra work - adding a button to make it play now, for example. While I'm asking for features, a swap-sides button might be interesting too. As usual, I've no idea how hard the things I'm asking for are.

To give you an idea of my standard: I was ranked about 50 in the UK when I played in WotC events, and I did beat the number-one-ranked player in the semi-final of a top 8 once. But that was a long time ago, and the standard has probably improved a lot.

Re: Using Project Firemind to test speculative AI improvements

Postby muppet » 09 Apr 2015, 10:49

Is there a way to bias AI decisions? For example, it might be good to have some things require, say, a 75% win rate before the AI does them - or, more usefully, require that using an ability score some amount better in the results than not using it. This would prevent the AI from randomly using some abilities at the wrong time simply because it gets far more chances to use them at the wrong time than at the right one.
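
Something like this sketch is what I have in mind (made-up names, obviously not the real code):
Code: Select all
  // Hypothetical: only use an optional ability when the simulations say it
  // beats not using it by a required margin.
  final class AbilityGate {
      static boolean shouldUse(double winRateWith, double winRateWithout,
                               double requiredMargin) {
          return winRateWith - winRateWithout >= requiredMargin;
      }
  }
  // shouldUse(0.78, 0.70, 0.05) -> true: clearly better, use it
  // shouldUse(0.72, 0.70, 0.05) -> false: too close to call, hold it back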

Re: Using Project Firemind to test speculative AI improvements

Postby muppet » 09 Apr 2015, 10:57

Oh, and there is a sort of Modern combo deck possible now with Tooth and Nail, if you count that as combo. The initial problem I'm having is the AI casting it to fetch two 1/1 creatures instead of Emrakul.

I uploaded one to Firemind. It's an OK deck, but the AI is pretty bad with it.
