
Scoring in JPortal

PostPosted: 28 Nov 2011, 14:42
by Malban
Hello,

I would like to describe the current implemented scoring system in JPortal.

1) What Scoring?
The latest AI implementation in JPortal works like this:

a) there is a current game in progress
b) calculate the current score of the game, then create a virtual game, and on that virtual game:
c) try all possible actions there are
d) for each such try, calculate the current score again
e) if the score is higher than the initial one, it is a good move
f) take the highest found score
g) apply the best moves to the real game

The scoring is a crucial part of the new AI: bad scoring will result in a bad AI.
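The loop above can be sketched as follows. This is only an illustration of the idea, not JPortal's actual API: `simulateAndScore` stands in for "apply the action to a virtual game and rescore it".

```java
import java.util.List;
import java.util.function.ToIntFunction;

public class AiSearchSketch {
    /**
     * Minimal sketch of the search loop: take the BASE score of the current
     * game, score every candidate action on a virtual copy, and keep the
     * action with the highest score -- or null if nothing beats BASE.
     */
    static <A> A bestAction(int baseScore, List<A> actions,
                            ToIntFunction<A> simulateAndScore) {
        int best = baseScore;
        A bestAction = null;
        for (A action : actions) {
            int s = simulateAndScore.applyAsInt(action); // rescore the virtual game
            if (s > best) {                              // better than BASE = good move
                best = s;
                bestAction = action;
            }
        }
        return bestAction;                               // the AI applies this to the real game
    }
}
```

A `null` result means no tried action improved on the current situation.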

2) When does JPortal score?
- As mentioned above, JPortal generates a "BASE" score. This is the initial score for the current turn.
- The AI generates different actions that are applied to the (virtual) game.
- The AI does so for a number of phases. This is configurable, but for best results the simulation should run until after the opponent's next combat phase, so that blocking against the opponent's attack can be taken into account.
(The number of phases corresponds to the DEPTH of the decision tree.)
- The FINAL score is taken after the last simulated phase.
- There are scores calculated in between, but as of now they serve no purpose. I might add an option to only follow the high scores of ## leaves. (The number of options per phase corresponds to the WIDTH of the decision tree.)
- Only the FINAL and BASE scores are compared; the path that was used to get from the BASE to the FINAL score is the sequence of actions the AI plans to take.
(The AI checks each round whether the PATH is still valid; if so, no further calculation is made. If some requirements of the path have changed, the AI runs a new simulation and possibly generates a new path.)
- All of this can be visualized in the AIDebug window.
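The DEPTH and WIDTH idea can be illustrated with a small recursive sketch. The `expand` and `score` functions here are stand-ins for JPortal's phase simulation, not its real API:

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.ToIntFunction;

public class DecisionTreeSketch {
    /**
     * Follows every branch for a fixed number of phases (the DEPTH) and
     * returns the best FINAL score found at the leaves. The number of
     * states returned by expand() per phase is the WIDTH of the tree.
     */
    static <S> int bestFinalScore(S state,
                                  Function<S, List<S>> expand,
                                  ToIntFunction<S> score,
                                  int phasesLeft) {
        if (phasesLeft == 0) {
            return score.applyAsInt(state);  // FINAL score at a leaf
        }
        int best = Integer.MIN_VALUE;
        for (S next : expand.apply(state)) {
            best = Math.max(best, bestFinalScore(next, expand, score, phasesLeft - 1));
        }
        // No options in this phase: score the state itself instead of a deeper leaf.
        return best == Integer.MIN_VALUE ? score.applyAsInt(state) : best;
    }
}
```

The path of expansions that produced the best leaf is what becomes the AI's planned action sequence.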

3) Two-Part Score
The score of a game situation is always generated twofold:
the current situation of the game is scored from two points of view.
a) The player's (that is, the AI's)
b) The opponent's

The score of the game is then:
SCORE = playerScore - opponentScore;

Obviously a positive number is a good score for the player, and a negative number is a bad score.
The calculation of each of these playerScores must be done with the same algorithm to ensure they are comparable.
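As a minimal sketch, with a flat sum standing in for the real per-player scorer: because both sides run through the same function, the total is symmetric, so a position that is +5 for the player is -5 for the opponent.

```java
public class TwoSidedScore {
    // Stand-in for the real per-player scorer; the important point is that
    // BOTH sides are scored with this same function so the results are comparable.
    static int playerScore(int[] subScores) {
        int sum = 0;
        for (int s : subScores) sum += s;
        return sum;
    }

    // SCORE = playerScore - opponentScore
    static int score(int[] player, int[] opponent) {
        return playerScore(player) - playerScore(opponent);
    }
}
```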

4) SubScores
Implementation of the scoring can be found in the package:
"csa.jportal.ai.enhancedAI.weighting"

The class "Weighting" is used to interface all scoring methods.
The scoring itself is divided into configurable subScorings. This might sound complicated but is rather simple.

I implemented different scoring "elements" for different points of view of the game.
There are implementations to score:
- the battlefield
- the hand
- the land
- the life
- the library
- the graveyard

There is a scoring class for each of the above; all implement the interface "Scorable":
Code: Select all
 public interface Scorable
    {
        public String getName();
        public int getScore();
        public int computeScore();
        public void setData(VirtualMatch vMatch, int playerNo);
    }
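As a stand-alone illustration of how the interface is used, here is a toy implementation in the spirit of the libraryScore (score = library size). `VirtualMatch` is stubbed down to just library sizes; the real JPortal class carries the whole virtual game state.

```java
public class ScorableExample {
    // Stub: only holds one library size per player.
    static class VirtualMatch {
        final int[] librarySizes;
        VirtualMatch(int... librarySizes) { this.librarySizes = librarySizes; }
    }

    interface Scorable {
        String getName();
        int getScore();                            // last computed value
        int computeScore();                        // recompute from the data
        void setData(VirtualMatch vMatch, int playerNo);
    }

    // Toy implementation: score = library size of the given player.
    static class LibraryScorer implements Scorable {
        private VirtualMatch match;
        private int playerNo;
        private int score;

        public String getName() { return "SCORE_LIBRARY"; }
        public int getScore() { return score; }
        public int computeScore() {
            score = match.librarySizes[playerNo];
            return score;
        }
        public void setData(VirtualMatch vMatch, int playerNo) {
            this.match = vMatch;
            this.playerNo = playerNo;
        }
    }
}
```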

Each of these subscores can also have a weight, as of now the weighting is:
Code: Select all
public int[] scoreWeighting =
    {
        // SCORE_LIBRARY
        1,
        // SCORE_BATTLEFIELD
        3,
        // SCORE_HAND
        2,
        // SCORE_LAND
        2,
        // SCORE_GRAVE
        1,
        // SCORE_HEALTH
        4,
    };
The sum of all (weighted) subScores is the final score.
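The weighted sum would then look roughly like this (a minimal sketch; the weight order is copied from scoreWeighting above):

```java
public class WeightedScore {
    // Same order as scoreWeighting above:
    // library, battlefield, hand, land, grave, health.
    static final int[] SCORE_WEIGHTING = {1, 3, 2, 2, 1, 4};

    // Final score = sum over all subscores of weight * subscore.
    static int totalScore(int[] subScores) {
        int total = 0;
        for (int i = 0; i < subScores.length; i++) {
            total += SCORE_WEIGHTING[i] * subScores[i];
        }
        return total;
    }
}
```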

The following calculations are currently implemented:

a) battlefield
Code: Select all
score = sum(AllCreatures: Power, Toughness, Number of abilities) + sum(NonCreatures: ManaCost)
b) graveyard
Code: Select all
score = deckSize - (graveSize-roundsplayed) - (sum(AllCreatures: Power, Toughness, Number of abilities)/2)
c) handScore
Code: Select all
landHand = landsOnHand,    if (landsSize-7 < 0) and (round < 8) and ((landsPlayed + landsOnHand - 1) >= round)
landHand = landsOnHand/2,  if (landsSize-7 >= 0)
score = landHand + sum(CardsNotLand) + sum(CardsPlayable: manaCost)
d) healthScore
Code: Select all
  if (health >= 40) score =  80 + (health-40);
  else if(health >= 10) score = health*2;
  else if(health < 10) score = 20 - 4*(10-health);
  if (health < 0) score -= 10000;
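Transcribed into a complete method, with a few worked values: a life total of 40+ scores 80 or more, mid-range life scores double, low life is punished progressively, and negative life (a lost game) is penalized by 10000.

```java
public class HealthScore {
    // Direct transcription of the healthScore rules above.
    static int score(int health) {
        int score;
        if (health >= 40)      score = 80 + (health - 40);
        else if (health >= 10) score = health * 2;
        else                   score = 20 - 4 * (10 - health);
        if (health < 0) score -= 10000; // a lost game outweighs everything else
        return score;
    }
}
```

For example, 20 life scores 40, 5 life scores 0, and -1 life scores -10024.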
e) libraryScore
Code: Select all
   score =  lib.size();
f) landScore
Code: Select all
  score = (colorsHave * 10) /colorsNeeded;
  if (lands.size() < 10) score += lands.size();
5) Configurable
The above is the "hardwired" scoring, which is the default scoring (which might change if I see it needs tweaking).
This is the internalScoring.
You can configure an EnhancedAI to have "external" scoring, which means scoring scripts which can be interpreted.
This is quite a bit slower but it is as configurable as you want.

The scoring can be configured using the following scheme:
(in the Configure AI Window)

1) You can build a scoring formula; the formula must fill the variable "score" with an integer number. (You can access a "VirtualMatch", which gives you all information about the game situation to be scored.)
2) The formula must have a name and a weighting.
3) You can build a "Formula Collection" with any number of such formulas you created.
4) You can set that formula collection on an AI. The AI will use that formula collection as its weighting.
(I implemented the above "internal scoring" also as an external scoring, as an example.)

Any comments - also on better scoring - are welcome.

Regards

Malban