
Re: JPortal current status

PostPosted: 06 May 2011, 07:03
by Malban
OK, the ordered iMac will be delivered in 4-6 weeks.

Till then, folks...


Malban

Re: JPortal current status

PostPosted: 13 Jul 2011, 13:13
by Malban
Hiho!

For about a month now I have had my iMac, which is a really nice little machine.
I'm still getting familiar with it, but having been a Linux user since 0.9 (or something like that - I remember Linux on 15 5 1/4" disks) - and being a Windows user for many years - it is not really all that different an OS to get used to.

I did 2-3 hours of work on JPortal - fixing various Mac issues (like fullscreen support) - but nothing fancy really.

JPortal will probably have to wait a bit longer to be picked up again, since right now I'm on an intermediate project -
building my own arcade machine with my old PC as the MAME foundation - which is also a lot of fun, but leaves no room for
other extensive projects.

I will return :-)

BB

Malban

Re: JPortal current status

PostPosted: 14 Jul 2011, 02:53
by Huggybaby
Thanks for the update.

I'm also a MAME fan, so when you get your cabinet done feel free to post pics. :)

Re: JPortal current status

PostPosted: 07 Oct 2011, 08:14
by Malban
Hello to you all,

I know - nobody (or nearly nobody) is really waiting for JPortal. So there is no need to rush into things.
I would just like to let you know - "It is not dead".
Although my MAME project is not finished yet (but nearly - I just purchased a brand new arcade monitor; the TV was really only the second-best option), I am again working on JPortal on and off.

The AI is getting better - I nearly have a grip on the timings - only a few things are left to do - but it will probably still take a bit of time.
- right now the new AI does not reproduce the "fighting back" scenarios - meaning the simulation of the next round, when the other player might attack back (which is important, otherwise creatures make "stupid" attacks :-))

After that only tweaking and beautifying is needed.

When the AI is doing more or less what I want in an acceptable time frame, I will release a beta candidate - and do more tweaking afterwards.

But still - as always, if anyone is interested in the current version there is always the svn repository. The only player that uses the new AI is MALBAN, though.

To at least show something new - here is a screenshot of the freely programmable scoring mechanism (although a good scoring function will be implemented in code by me - which will be faster than script interpretation):

Scoring.jpg
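Just to illustrate the idea - a minimal sketch of what a hard-coded scoring could look like (the class names and weights are only placeholders I made up for this post, not the actual JPortal code):

```java
import java.util.List;

/**
 * Minimal scoring sketch - the idea only; names and weights are illustrative,
 * not the actual JPortal scoring.
 */
public final class SimpleScoring {

    /** Trimmed-down view of one side of the board. */
    public record Creature(int power, int toughness) {}
    public record Side(int life, int cardsInHand, List<Creature> creatures) {}

    /** Score a situation from "me"'s point of view; higher is better. */
    public static int score(Side me, Side opponent) {
        int score = 0;
        score += me.life() * 10;          // staying alive matters most
        score -= opponent.life() * 10;    // damaging the opponent is good
        score += me.cardsInHand() * 3;    // card advantage
        for (Creature c : me.creatures()) {        // own board presence counts up
            score += c.power() + c.toughness();
        }
        for (Creature c : opponent.creatures()) {  // opponent board presence counts down
            score -= c.power() + c.toughness();
        }
        return score;
    }
}
```

The real scoring looks at much more than life, hand size and creature stats - but doing it in compiled code instead of interpreting a script is where the speed comes from.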



Greetings

Malban

Re: JPortal current status

PostPosted: 07 Oct 2011, 15:56
by Huggybaby
Thanks Malban, looking forward to the beta.

Do you have any plans for the GUI?

Re: JPortal current status

PostPosted: 10 Oct 2011, 13:17
by Malban
Hm.
I didn't really plan any changes.
Did you have anything special in mind?

I guess it's not the most beautiful interface anymore - but I still find it quite functional - and I have seen worse GUIs.

Although the new "artistic" approach of the fellow developers is very nice to look at :-)

Malban

Re: JPortal current status

PostPosted: 14 Nov 2011, 11:17
by Malban
Here, there and back again.

Status update. For folks who watch my svn activities it is obvious that work is still ongoing. For all others, a small update.
I know it has been a long time since the last official release. But doing the new AI is quite a bit more work than I originally anticipated. Nonetheless - work is ongoing, the AI is getting better, and bugs are removed as I make progress.

What I am actually doing is a mixture of "bug fixing" and "optimization".
As before - the skeleton of the new AI is finished - and I am fleshing it out.

Main challenge 1 for the new AI is still the sheer multitude of possible choices that can be made.
----------------
The challenge is:
- the new AI tries all possible variations of cards that can be played, cards that can be activated, attack variations that are possible, etc.
- instants can be played in all rounds -> choices are generated for all rounds
- the AI (by default) will look ahead till the end of the opponent's combat phase, thus all possible actions will be estimated for:
1. PHASE_BEGINNING_UNTAP
2. PHASE_BEGINNING_UPKEEP
3. PHASE_BEGINNING_DRAW
4. PHASE_MAIN1
5. PHASE_COMBAT_BEGIN
6. PHASE_COMBAT_DECLARE_ATTACKERS
7. PHASE_COMBAT_DECLARE_BLOCKERS
8. PHASE_COMBAT_DAMAGE
9. PHASE_COMBAT_END
10. PHASE_MAIN2
11. PHASE_END_END
12. PHASE_END_CLEANUP
13. PHASE_BEGINNING_UNTAP
14. PHASE_BEGINNING_UPKEEP
15. PHASE_BEGINNING_DRAW
16. PHASE_MAIN1
17. PHASE_COMBAT_BEGIN
18. PHASE_COMBAT_DECLARE_ATTACKERS
19. PHASE_COMBAT_DECLARE_BLOCKERS
20. PHASE_COMBAT_DAMAGE
21. PHASE_COMBAT_END
-> depth of 21 phases (but the depth looked at will be configurable)

This can very quickly become VERY challenging.
If there are only TWO choices per phase to evaluate, that means we have 2^21 choices -> 2,097,152! This is the main challenge: reduce the volume of choices that can be made, and reduce them in a way that no "GOOD" choices are eliminated.
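Just to put numbers on it - a tiny stand-alone illustration of how the plans multiply with the look-ahead depth (plain arithmetic, not JPortal code):

```java
/**
 * Tiny illustration of the look-ahead explosion: with b choices per phase
 * and a depth of d phases, b^d complete plans have to be considered.
 */
public final class ChoiceExplosion {
    public static void main(String[] args) {
        int depth = 21; // phases looked at (see the list above)
        for (int choicesPerPhase = 2; choicesPerPhase <= 4; choicesPerPhase++) {
            long plans = 1;
            for (int i = 0; i < depth; i++) {
                plans *= choicesPerPhase;   // one more phase multiplies the plan count
            }
            System.out.printf("%d choices per phase -> %,d possible plans%n",
                    choicesPerPhase, plans);
        }
    }
}
```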
Here I will shortly sketch the means implemented to reduce the choices. Some of them are configurable, some are "hardwired". In no particular order:
- "Reduce library choosing to:" a number. Cards that "open" the library and allow choosing any card... generate sheer multitudes of options.
Choosing an "I try them all out" strategy is virtually impossible with current computers. Thus
I reduce the number of cards that are evaluated to a maximum. The cards that are evaluated are chosen using a scheme like the one used in the old AI, "chose_good_card();" - the default value here is 5.
- "If land can be played, force Main1, played first"
A number of lands in hand which can be played in Main1 and Main2 generates a lot of unneeded choices.
If the AI decides beforehand:
a) Do I want to play a land
b) what color do I want to play
c) I play in Main1
I reduce the amount of choices generated considerably.
(If I have no land, draw a land, and have it available only in Main2 -> it will nonetheless be played out correctly.)
- Try to decide between good/bad
Always from the player's point of view.
Bad cards are damaging cards, moving to the graveyard, etc...
Good cards are healing of any kind, moving from the graveyard, etc...
For "bad" cards -> prefer choices using opponent cards
For "good" cards -> prefer choices using own cards
(Todo: make a choice of "perhaps" cards, like moving a creature back to hand -> if we gain a needed heal from playing it out again,
in that situation it will be a good card...)
- Maximize target selection
Meaning cards which have a varying number of targets (tap 1, 2 or 3 creatures...) always try to maximize targets; do
not try to evaluate all choices (if not enough targets are present -> try the ones available).
- Do forced attack on no blockers
If the opponent has no blockers, attack with all attackers, rather than trying all possible variations
of attackers.
(The AI still might not attack with all, since if low on health it may decide to keep a blocker back for safety reasons.)
- Try simple block
Try an algorithmic approach to blocking, like "block all attackers without losing a blocker"... etc.
If no definite strategy can be found -> try them all out.
(Surprisingly often a "simple strategy" can be found which is "optimal".)
- Try simple attack
If there are more attackers than blockers, one can usually reduce
the attack->block combinations so that a number of attackers do not have any blockers (an algorithmic approach, attacker/blocker
combinations which "make sense").
After this reduction, "Do forced attack on no blockers" can be applied.
- Try extreme testing
When attacking, very often the "extremes" are best, meaning either an "all-out attack" or "do not attack at all".
Scores for the cases below are calculated in advance:
- attack with all
- attack with all minus the most powerful attacker
- attack with none
- attack with only the most powerful attacker
- attack with only attackers that cannot be blocked
If any of the scores (NONE, ALL, NOTBLOCKABLE) (SCORING!) is higher than (or equal to) all other scores
-> that one will be taken as the planned attack combination (a small sketch of this idea follows below, after the TODO list).
There might be very rare cases where other attack strategies would be more successful, but they are
very hard to find...
- Reduced activation evaluation
If a creature / artifact can be activated in ALL phases, only test it in Main1 or Main2 (or in the opponent's turn), not in
any other "own" phases.
I don't see any difference in the outcome. Is there any?
- Reduced instant evaluation
If an instant is either a buff (of own creatures) or a debuff (of opponent creatures)
- only evaluate it in the attack / block phase, not in Main1 or Main2
- Don't evaluate (1-round) buffs / debuffs in Main2
Do not buff / debuff creatures in Main2 -> because it will never do any good
(scoring would prevent playing them anyway)
- Double check attacker instants
Choices generated for attacking which include buffing of creatures which do NOT attack are eliminated.

- Blocking:
- also elimination of combinations, see older posts.

TODO:
- Do quick reblock analysis
Not implemented yet
- Use scoring in block evaluation
Not implemented yet
- Kicker extremes
Kickers can explode the number of choices. I still have to think about that.
- Twins
Quite often players have twin options, like two of the same card in hand.
I still have to eliminate exactly identical combinations with equal cards.
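As announced above, here a small sketch of the "extreme attack" idea (all class and method names are placeholders for this post, not the real EAI code):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.function.ToIntFunction;

/**
 * Extreme-attack shortcut: instead of enumerating every subset of attackers,
 * only a handful of "extreme" combinations is scored, and NONE / ALL /
 * NOT_BLOCKABLE is taken if it is at least as good as every other candidate.
 */
public final class ExtremeAttackHeuristic {

    /** Minimal stand-in for a potential attacker. */
    public record Attacker(String name, int power, boolean blockable) {}

    public static Optional<List<Attacker>> planAttack(List<Attacker> attackers,
                                                      ToIntFunction<List<Attacker>> scoring) {
        List<Attacker> all = List.copyOf(attackers);
        Attacker strongest = all.stream()
                .max(Comparator.comparingInt(Attacker::power)).orElse(null);

        List<Attacker> none = List.of();
        List<Attacker> allButStrongest = new ArrayList<>(all);
        allButStrongest.remove(strongest);
        List<Attacker> onlyStrongest = strongest == null ? none : List.of(strongest);
        List<Attacker> notBlockable = all.stream().filter(a -> !a.blockable()).toList();

        // Score the five extreme candidates up front.
        int sAll       = scoring.applyAsInt(all);
        int sAllMinus  = scoring.applyAsInt(allButStrongest);
        int sNone      = scoring.applyAsInt(none);
        int sStrongest = scoring.applyAsInt(onlyStrongest);
        int sNotBlock  = scoring.applyAsInt(notBlockable);
        int best = Math.max(sAll, Math.max(sAllMinus,
                   Math.max(sNone, Math.max(sStrongest, sNotBlock))));

        // Take NONE, ALL or NOT_BLOCKABLE if it ties or beats everything else...
        if (sNone >= best)     return Optional.of(none);
        if (sAll >= best)      return Optional.of(all);
        if (sNotBlock >= best) return Optional.of(notBlockable);
        // ...otherwise signal the caller to fall back to full enumeration.
        return Optional.empty();
    }
}
```

The point is that only five candidate attacks get scored instead of every possible subset of attackers; the full enumeration remains only the fallback.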


Main challenge 2 is the new hints and the support for the hints in the new match simulation.
----------------
In JPortal, the AI and the actual game are still two different stories. And I intend to keep it this way.
This means "The Game" does not know whether a player is human or a computer player.
From the Match's (that is the name of the main game class) point of view, both human and computer players
are interfaced in exactly the same way.
This also means I do not reuse any of the Match code to implement the AI.
The choices generated are not tested using the "Match" - all choices are tested internally by the AI.
For that reason, "within" the AI package the AI has its own independent implementation of a different match.
This one is called "VirtualMatch".
Thus the real match is called "Match", the AI evaluation match is called "VirtualMatch".

Match
Enforces all MTG rules. Cards are evaluated by the scripting of the cards. Players are questioned when needed.

VirtualMatch
Does not really enforce rules, but rather implements the routines needed by the AI.
Cards are evaluated by using the hints found for the cards (EnhancedHints).

EnhancedAI interfaces with both the Match (indirectly) and the VirtualMatch via EAIActions.
Thus the challenge is to implement VirtualMatch so that it can evaluate all cards in a way that is as similar as possible
to the actual real thing in "Match".
I discovered that this is slow going. Once all is done it will be easy to keep it up to date. But I tied knots in my brain
implementing e.g. triggers of any kind in VirtualMatch. I have already come a good way - but there is still
much left to do.
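To make the separation a bit more concrete - a sketch of the kind of small action interface the AI could talk to, backed once by the rule-enforcing Match and once by the hint-driven VirtualMatch (the interface and method names here are only my illustration for this post; the real EAIActions API looks different):

```java
import java.util.List;
import java.util.Map;

/**
 * Illustrative action interface: the AI issues its moves against this, and the
 * implementation behind it is either the real Match or the VirtualMatch.
 */
public interface GameActions {

    void playCard(int cardId);                              // play a card from hand
    void activateAbility(int cardId);                       // use an activated ability
    void declareAttackers(List<Integer> attackerIds);
    void declareBlockers(Map<Integer, Integer> blockerToAttacker);

    /** Evaluate the resulting game state for the given player (higher is better). */
    int score(int playerId);
}

// Roughly: one implementation would delegate to Match (full rules, card scripts -
// the move that is actually executed), the other to VirtualMatch (approximate
// effects from EnhancedHints - used to try out thousands of candidate plans cheaply).
```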

Main challenge 3 is correctly implementing hints for all supported cards -> test cards
---------------
This is directly related to challenge 2.
All supported cards must have the correct EnhancedHints.
Half a year ago I said "finished" - but as testing continues, again and again I discover cards where not
all the hints are in place. This will require quite a lot of playing and focused attention to detail to discover
cards where the hints are not complete.


To make a long post even longer...
Debug window for the new AI.
It is cool :-) And more than helpful. You can evaluate all choices, look at the damage cards have taken, which are tapped, the health of the players, and so on. Nice!

Conclusion:
----------
I still want a release this year.
Perhaps it will be a beta with known unfinished loose ends.


Regards...

Malban

Re: JPortal current status

PostPosted: 28 Nov 2011, 14:57
by Malban
Hiho,

For a whole week JPortal lay dormant. I had much to do in the real world (work and private life).
Nonetheless, JPortal is advancing.

At the moment there are no known bugs - Hurray!
(only planned features missing)

The AI plays quite well. Main debugging at the moment consists of watching AIs play against each other and, if one does something unexpected -> investigating and correcting it - if it was a bug.

This is quite lazy "programming" - but in one way or another quite satisfying (on my Mac this is so fast I can barely identify the turns the players make).
The AIs have already played each other a couple of hundred times without "exceptions". What I do now is have them play with numerous different decks so that all card types will be played - eventually.
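The self-play "testing" is really nothing more than a loop like the following (a rough sketch - the Game interface and the deck names are placeholders, not the actual JPortal classes):

```java
/**
 * AI-vs-AI soak test: run many self-play games with varying decks and
 * flag any game that ends in an exception.
 */
public final class SelfPlaySoakTest {

    /** Placeholder for "play one full AI-vs-AI game with the given decks". */
    public interface Game {
        void play(String deckA, String deckB) throws Exception;
    }

    public static void main(String[] args) {
        Game game = (a, b) -> { /* hook up the real match here */ };
        String[] decks = {"Deck A", "Deck B", "Deck C"};   // placeholder deck names
        int failures = 0;
        for (int i = 0; i < 200; i++) {                    // a couple of hundred games
            String deckA = decks[i % decks.length];        // rotate the decks so that
            String deckB = decks[(i + 1) % decks.length];  // all card types show up
            try {
                game.play(deckA, deckB);
            } catch (Exception e) {
                failures++;
                System.out.println("Game " + i + " failed: " + e);
            }
        }
        System.out.println(failures + " of 200 games ended with an exception");
    }
}
```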

Left todo:
- small features for the new AI left to do:
* calculate "timetravels" (do another attack turn etc.)
* instants (right now the new AI uses the old code - which is OK, but does not recognize artifacts)
- beautifying the source
- new quests for Starter 2000
- make a nice NamedAI for the new enhanced AI
- manaburn still not implemented
- make achievements a little bit nicer
- Update help
- make a "Hint" button, which suggests to the player a smart move
(this should be VERY easy and fun to implement)
- other cool stuff :-)
(Not with the next release:
- AI automated deck building
- campaign mode)


Regards

Malban

Re: JPortal current status

PostPosted: 28 Nov 2011, 17:52
by Huggybaby
Thanks Malban. The hint button is definitely a feature I'd like to see in some other apps *cough* and having the AI play against itself to iron out bugs is also an excellent approach that some others might consider.

PS I thought mana burn was obsolete?

Thanks for the report and the continuing efforts!

Re: JPortal current status

PostPosted: 06 Dec 2011, 14:26
by Malban
Hiho,

Manaburn: Yup, I know it is not in the rules anymore. But I find it a cool feature and will implement it as optional.

I am not a 100% rules fanatic, I want a nice playable game... therefore...

Last actual EAI coding done (YEAH! =D> ).

Now only BugFixes and "Leftovers" must be done.
(TimeTravel Cards - as I call them - implemented in VirtualMatch and EAI:
right now these are: ).

Regards
Malban

Re: JPortal current status

PostPosted: 07 Dec 2011, 14:33
by Malban
Hiho,

Started updating the docs...

http://www.deamonsouls.de/jportal//index.html

Especially the EAI sections:
EAI Configuration: http://www.deamonsouls.de/jportal//b001af97.html
EAI Debug Window: http://www.deamonsouls.de/jportal//7eefaa79.html

Got to install an English spell checker - there are still quite a lot of mistakes - sorry.

Malban

Re: JPortal current status

PostPosted: 14 Dec 2011, 10:12
by Malban
Hiho,

again a small status update.

Despite what I said last time - I still did some work that I said I would not.
- I implemented the stack reaction of the EAI
(before, stack handling was done by the old code) -> this has to be thoroughly tested

- implemented "HINT"- Button (found in small help section on screen)
How it works:
+ a special "HintAI" is called to evaluate the current game situation.
+ as usual a "best" move is created from all available choices
+ that best move is output to the player as "hint"
+ two outputs: textual description
tree representation (as in EAIDebugWindow)
+ thus the hint depends on the underlying EAI settings (including scoring)
if a bad AI or a bad scoring is chosen -> it will result in bad hints (a rough sketch of this flow follows after the screenshots)

See Screenshots:

Hint1.jpg

Hint2.jpg
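For the curious - a rough sketch of the hint flow (the Choice type and the example moves are placeholders; the real EAI classes differ):

```java
import java.util.Comparator;
import java.util.List;

/**
 * Hint flow sketch: score all available choices, pick the best one,
 * and present it to the player as a textual suggestion.
 */
public final class HintButton {

    /** A candidate move together with its score and a printable description. */
    public record Choice(String description, int score) {}

    /** Pick the best-scored choice and format it as a textual hint. */
    public static String hintFor(List<Choice> choices) {
        return choices.stream()
                .max(Comparator.comparingInt(Choice::score))
                .map(best -> "Suggested move: " + best.description()
                        + " (score " + best.score() + ")")
                .orElse("No legal move found.");
    }

    public static void main(String[] args) {
        List<Choice> choices = List.of(
                new Choice("Attack with all creatures", 42),
                new Choice("Hold back and play a land", 17));
        System.out.println(hintFor(choices));   // -> Suggested move: Attack with all creatures (score 42)
    }
}
```

Since the hint is just the best move of the underlying EAI, it is only as good as the EAI settings and scoring used.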


- implemented an AI-"Test"-Environment
One might think this is a bit "too much", and you are probably right;
on the other hand my real-life job is closely related to software testing, so it probably just comes naturally.
The basic underlying idea:
- games can be saved (stateless - as described somewhere else...)
- these games can be loaded as a "before" and "after" scenario
- one of the opponents is a "fixed" AI, which plays the role of the opponent of the AI to be tested
- this fixed AI must have a "fixed" behaviour, in order for the test to be reproducible
- the other AI will be the one to be tested
- if the "after" scenario that was defined by the test case is exactly the same as the one produced by the AI under test,
then the test is passed / otherwise it is failed

Test cases as such can be defined, commented, and collected.
You can edit gamestates.
You can import gamestates.
You can view gamestates (in a window which behaves like a match).
Test cases can be collected into a "TestRun".
TestRuns can be run on the AI under test in sequential order.
Output can be viewed.
Thus by collecting useful test cases I can automatically verify:
- that an AI still does as expected after changes
- that cards are implemented, and the AI handles them as expected (a small sketch of this pass/fail check follows below)
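As mentioned, a small sketch of the pass/fail idea (the gamestate is just a String here for illustration; the real gamestates and AI drivers are of course richer):

```java
import java.util.List;

/**
 * Pass/fail rule of the AI test environment: a test case passes when the
 * game state produced by the AI under test matches the stored target state.
 */
public final class AiTestRun {

    public record TestCase(String name, String startState, String expectedEndState) {}

    /** Something that plays one test round from a start state and returns the end state. */
    public interface AiUnderTest {
        String playRound(String startState);
    }

    /** Run all test cases sequentially and report pass/fail per case. */
    public static int run(List<TestCase> cases, AiUnderTest ai) {
        int failures = 0;
        for (TestCase tc : cases) {
            String actual = ai.playRound(tc.startState());
            boolean passed = actual.equals(tc.expectedEndState());
            System.out.printf("%-15s %s%n", tc.name(), passed ? "PASSED" : "FAILED");
            if (!passed) failures++;
        }
        return failures;
    }
}
```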

Example:
AI to be tested (EAI): Malban (what else :-))
+ TestCase name is BlockTest1
+ Fixed AI is "Moses"; the fixed AI is player 1 (this automatically means the AI under test will be player 2)
+ original gamestate is named BlockTest1Start
+ target gamestate is named BlockTest1End
+ no player has any life difference
+ the test case has a duration of one game round

TestRun1.jpg

(Note: If Malban was player 1, he would play "Undo" - and the
result would be totally different!)

Here you can see a gamestate in "view" mode:
TestRun1Source.jpg


TestRun definition looks like this:
TestRunDefinition.jpg


And finally - the results of a run:
(I tested Moses, who does not pass the test - there is only one):
TestRunResult.jpg


tbc

...

Regards

Malban

Re: JPortal current status

PostPosted: 18 Dec 2011, 03:42
by Huggybaby
Very cool stuff, I love the hints as I've suggested that this function be available in every game!

Re: JPortal current status

PostPosted: 09 Jan 2012, 15:01
by Malban
Just to say hello,

I didn't do much on JPortal over the last couple of weeks.
I started again today - so obviously the 2011 target for a first beta release was not met.
Anyway - there is still some testing and a couple of minor implementations to be done - so keep on waiting.
In the meantime - play Skyrim - like I did... really nice game!

Regards

Malban

Re: JPortal current status

PostPosted: 17 Jan 2012, 10:40
by jmartus
Well, I hope you can implement tons of cards and have a challenging AI.