
Verse - Approach to rules through Parsing Expression Grammar

General Discussion of the Intricacies

Moderator: CCGHQ Admins

Postby proud » 21 Apr 2010, 12:16

Verse
I’ve started a project I’m calling “Verse” which is a grammar for parsing MTG cards. It is designed to read straight from the Oracle text spoiler.

I’ve decided to use PEG (Parsing Expression Grammar) in JavaScript, as it seems much more powerful to me and easier to write rules for. (I believe this can be ported to other languages, like Python or Ruby, which have PEG support.)

The rules logic itself is still being worked on; in the meantime, though, it works great as a card validator. (Just be sure you have a newline character at the end of the text.)
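For a taste of the approach, here is a toy, PEG-flavored validator for a single simplified ability line. It is sketched in Python rather than the project's JavaScript, and the grammar below is invented for illustration; it is not Verse's actual rule set.

```python
import re

# Toy grammar, in PEG notation:
#   ability <- cost ": " effect "\n"
#   cost    <- symbol+
#   symbol  <- "{" ("T" / [0-9]+ / [WUBRG]) "}"
#   effect  <- "Draw a card." / "Add {G}."
SYMBOL = re.compile(r"\{(?:T|[0-9]+|[WUBRG])\}")
EFFECTS = ("Draw a card.", "Add {G}.")

def validate_ability(line: str) -> bool:
    """Return True if the line matches the toy grammar above."""
    if not line.endswith("\n"):        # spoiler lines end with a newline
        return False
    body = line[:-1]
    if ": " not in body:
        return False
    cost, effect = body.split(": ", 1)
    # the cost must be one or more symbols with nothing left over
    if not cost or SYMBOL.sub("", cost) != "":
        return False
    return effect in EFFECTS

print(validate_ability("{2}{T}: Draw a card.\n"))   # True
print(validate_ability("{Q}: Draw a card.\n"))      # False ({Q} is unknown here)
```

A real PEG library (PEG.js in JavaScript, or a Python equivalent such as Parsimonious) would express the same rules declaratively and build a parse tree instead of just a yes/no answer.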

If you’re interested, see the project page here: https://github%2Ecom/ancestral/Verse
(Because the forums do not like me posting URLs, the %2E is actually a percent-encoded character for a dot. The link should work for everyone, except in Firefox you’ll need to click in the address bar and press return.)

Thanks!

Original Post:
After scouring the ’net for the right community to seek, I think just maybe this is it :)

Being an old-school Magic player who is rather new to today's MTG multiplayer game software, I've noticed two things about these programs: Magic cards are more or less hardcoded or converted in order to be used, and most projects have only a partial rules implementation or none at all.

It's a problem that I think each project has taken a slightly different stance towards, in finding a balance between cards and rules. But no matter how you do it, either you're left with barely any rules (Lackey) or you're left with some or many rules but only with select cards (Firemox, MTGForge, Wagic, etc.). When you think about it, there's really not much alternative, right? Just hope you're fast enough to implement the new cards as they come out (if you can) and minimize the bugs so everyone's happy.

Right?

Here's where I think people have been tackling the problem all wrong in the first place.

Instead of focusing on adding in cards one-by-one, why not "teach" the software how to read and use Magic cards instead? Enter Natural Language Processing and compilers.

Think of the problem like translating a sentence into another language. Using NLP, you can effectively break down sentences, and a scanner can verify and process the input. Fortunately, one of the best things about Magic is that the Oracle database and rulings are kept updated and use consistent language, so the rules text for all cards reads consistently.

The goal is simply to use the Oracle text spoiler of the cards, which is publicly available, as the data itself. No conversion or hard-coded data files needed. This would also allow support for future expansions with little difficulty (minus any new game-changing mechanics), and meanwhile afford users the possibility of writing their own cards using not some pseudo-language, XML or the like, but the same natural language that Magic cards use.
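To make "the spoiler is the data" concrete, here is a minimal reader for a key/value text spoiler, sketched in Python. The field labels mimic the classic Oracle spoiler layout but should be treated as an assumption about the format, not a guarantee of it.

```python
def parse_spoiler(text: str):
    """Split a text spoiler into per-card dicts, keyed by lowercased field name.

    Cards are assumed to be separated by blank lines, with one
    "Field Name: value" pair per line (an assumed layout).
    """
    cards, current = [], {}
    for line in text.splitlines():
        if not line.strip():
            if current:
                cards.append(current)
                current = {}
        elif ":" in line:
            key, _, value = line.partition(":")
            current[key.strip().lower()] = value.strip()
    if current:
        cards.append(current)
    return cards

spoiler = """\
Card Name: Grizzly Bears
Mana Cost: {1}{G}
Type & Class: Creature -- Bear
Pow/Tou: 2/2

Card Name: Steppe Lynx
Mana Cost: {W}
Type & Class: Creature -- Cat
Pow/Tou: 0/1
"""
cards = parse_spoiler(spoiler)
print(cards[0]["card name"], cards[1]["mana cost"])   # Grizzly Bears {W}
```

From here, the card-text field would go to the grammar proper; the point is only that no hand-written data files sit between the spoiler and the engine.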

I continue to think this approach is intriguing, especially given that no one has done this as far as I can tell; not Wizards, not anyone.

I've had some false starts, but I've been working on a grammar in ANTLR. There's still work to do, but at this point I'm nearly at the stage where I need to know how this is going to play with the rest of a game engine.

Do you think this is something that could benefit a project which already has multiplayer capability and a GUI, or am I perhaps best off making a bare-bones program first, then either starting a new project or merging it into something already existing at a later point in time?

Your thoughts?
Last edited by proud on 10 Jun 2012, 22:36, edited 1 time in total.
aka ancestral, mproud
mtg.design
proud
 
Posts: 47
Joined: 21 Apr 2010, 10:50
Has thanked: 6 times
Been thanked: 15 times

Re: Approaching rules through NLP and grammars

Postby Snacko » 21 Apr 2010, 17:53

I agree that it would be best to develop an NLP parser that would convert the card syntax into working code.
Being non-trivial, this hasn't been done yet, but it has been discussed several times on this forum.

If you want to plug it into an existing project, I think either MagicWars or Incantus would be prime targets, as both include rules enforcement and have card scripting, which mixes well with automated card generation.

None of the projects currently available includes a full MtG rules implementation, so depending on how you want to crack the problem, there seem to be two ways: either you make a "code generator" that can plug into other projects, or you build your own MtG framework; however, this doubles your effort.
Snacko
DEVELOPER
 
Posts: 826
Joined: 29 May 2008, 19:35
Has thanked: 4 times
Been thanked: 73 times

Re: Approaching rules through NLP and grammars

Postby silly freak » 21 Apr 2010, 21:06

NLP is cool, but hard. I don't have much experience with it, but the main problem might be the many combinations: intervening "if" clauses, optional parts, understanding what an "if you do" clause references, etc. Additionally, recognition of common parts is important, like "target creature", but the longest possible variant must be found, like "target creature you control".
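The longest-match point can be shown in miniature. With PEG's ordered choice (or a regex alternation, as in this Python sketch), the longer variant wins simply because it is listed first:

```python
import re

# longer alternative first; otherwise "target creature" would always win
TARGET = re.compile(r"target creature you control|target creature")

def leading_target(text: str):
    """Return the target phrase at the start of the text, or None."""
    m = TARGET.match(text)
    return m.group(0) if m else None

print(leading_target("target creature you control gets +1/+1"))
# target creature you control
print(leading_target("target creature gets +1/+1"))
# target creature
```

Swap the two alternatives and the first call would stop at "target creature", which is exactly the trap with greedy common parts.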
Okay, that aside, you sound like you have already solved these ;)


MagicWars and Incantus are really advanced and seem to have pretty correct rules. I really would like to point you to my project, but... I'm aiming at a very correct rules implementation and a modular design for rules, GUI, AI and cards. However, I'm still far away from a finished game - I can barely play lands, and am working on spells and abilities, but that's it.

Working on your own engine from scratch might not be worthwhile. My Google Code project was started half a year ago, and at that time my layering system was already finished.

PS: Hi on the forums! I hope you have a great time here!
___

where's the "trust me, that will work!" switch for the compiler?
Laterna Magica - blog, forum, project, 2010/09/06 release!
silly freak
DEVELOPER
 
Posts: 598
Joined: 26 Mar 2009, 07:18
Location: Vienna, Austria
Has thanked: 93 times
Been thanked: 25 times

Re: Approaching rules through NLP and grammars

Postby juzamjedi » 21 Apr 2010, 21:45

Yes I know this was discussed for Incantus at one point. There was a card generator that would import the keyword abilities and the attributes from Oracle text. The Oracle parser could recognize triggered abilities for example, but knowing the correct timing for the trigger and implementing the resulting effect of the trigger was not done. I.e. "Vanilla" creatures like Grizzly Bears and "French Vanilla" creatures like White Knight were created auto-magically, more complex cards just have stubs indicating triggered abilities or static abilities.

Converting Oracle text to usable code for any of the existing projects is hard enough. Take Landfall as an example here. With Steppe Lynx the effect of the Landfall ability means your creature gets +2 / +2. With Lotus Cobra the Landfall ability means you may add mana to your mana pool. With Searing Blaze the Landfall ability increases the damage dealt by the spell. Also consider that the first 2 Landfall abilities are triggered abilities that can be responded to, but the Searing Blaze should NOT produce a triggered ability that can be responded to. And all of this assumes you have a framework that implements all of these effects so that your parser knows what kind of output you want it to generate that creates the effect.
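The Landfall example can be made concrete: to an engine, the keyword is only a label, and what matters is the event/effect pair behind it. A Python sketch (the class, event names, and effect strings are invented for illustration, not any project's real API):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TriggeredAbility:
    event: str                    # the game event this ability watches for
    effect: Callable[[], str]     # what goes on the stack when it fires

# two different "Landfall" cards: same trigger event, different effects
steppe_lynx = TriggeredAbility("land_enters_battlefield",
                               lambda: "Steppe Lynx gets +2/+2 until end of turn")
lotus_cobra = TriggeredAbility("land_enters_battlefield",
                               lambda: "you may add one mana of any color")

def fire(event: str, abilities: List[TriggeredAbility]) -> List[str]:
    """Collect triggered effects; each of these uses the stack and can be
    responded to. Searing Blaze would NOT appear here: its landfall check
    happens as the spell resolves, so it is not modeled as a trigger at all."""
    return [a.effect() for a in abilities if a.event == event]

print(fire("land_enters_battlefield", [steppe_lynx, lotus_cobra]))
```

The parser's job is then to map each card's text onto such structures, which presumes the framework already implements every effect the text can describe, as the paragraph above says.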

So with that said, personally I would not try to build your own rules framework. Magic is such a complex game, and parsing the text into something usable for an existing project will prove difficult enough. I would vote you try it for Incantus, although I am biased. :lol:
juzamjedi
Tester
 
Posts: 573
Joined: 13 Nov 2008, 08:35
Has thanked: 6 times
Been thanked: 8 times

Re: Approaching rules through NLP and grammars

Postby MageKing17 » 22 Apr 2010, 03:08

juzamjedi wrote:Yes I know this was discussed for Incantus at one point. There was a card generator that would import the keyword abilities and the attributes from Oracle text. The Oracle parser could recognize triggered abilities for example, but knowing the correct timing for the trigger and implementing the resulting effect of the trigger was not done. I.e. "Vanilla" creatures like Grizzly Bears and "French Vanilla" creatures like White Knight were created auto-magically, more complex cards just have stubs indicating triggered abilities or static abilities.
In the current "test" version, it also auto-magically parses activated ability costs (some of them, anyway).

juzamjedi wrote:Converting Oracle text to usable code for any of the existing projects is hard enough. Take Landfall as an example here. With Steppe Lynx the effect of the Landfall ability means your creature gets +2 / +2. With Lotus Cobra the Landfall ability means you may add mana to your mana pool. With Searing Blaze the Landfall ability increases the damage dealt by the spell. Also consider that the first 2 Landfall abilities are triggered abilities that can be responded to, but the Searing Blaze should NOT produce a triggered ability that can be responded to. And all of this assumes you have a framework that implements all of these effects so that your parser knows what kind of output you want it to generate that creates the effect.
Actually, any good parser should ignore the text "landfall" entirely, and treat it just like what it is... a regular triggered ability (or static ability, in the case of Searing Blaze).

juzamjedi wrote:So with that said, personally I would not try to build your own rules framework. Magic is such a complex game, and parsing the text into something usable for an existing project will prove difficult enough. I would vote you try it for Incantus, although I am biased. :lol:
I can hardly say no to another coder helping with Incantus, given that Incantus and I don't have any time for it ourselves. ;P

If you're interested in discussing some of my ideas for making a Magic engine parse oracle text directly (which is on my eventual to-do list for the Incantus project), you can come find me on the #incantus IRC channel on EFnet. If you lack an IRC client, mibbit can help with that.
MageKing17
Programmer
 
Posts: 473
Joined: 12 Jun 2008, 20:40
Has thanked: 5 times
Been thanked: 9 times

Re: Approaching rules through NLP and grammars

Postby frwololo » 03 May 2010, 02:07

Whatever you do, you can't have a parser that will magically code the rules for you. Whenever a new mechanism comes out (i.e. pretty much every new expansion), you'd have to add additional code somehow. The parsing part is not the difficult part. Actually, most people who help on Wagic find it rather fun to find tricky ways to make a card work and bypass the current limitations of the engine.

Imagine a card that says:
{2}{T}: target player has to say "abracadabra" 100 times. If he doesn't want to do it, you draw 3 cards.

Whatever parser you have, I'm pretty sure your engine doesn't handle that...

That's of course a very borderline case but you see my point.

Rules enforcement + new mechanics == code is needed || you have to trick the existing engine
In both cases, tweaking the parser and/or the code used on the cards is the easy part.

Also, NLP fails, as proven by all the programming languages that try to look like natural languages. That is also why WotC keeps coming out with reprints and rewordings for their cards, which wouldn't happen if the text on the cards were actually mathematical formulas or code. It means that the text on a card has several possible interpretations (which is why there are judges, too), which makes parsing difficult from a computer's point of view for "complex" cards... which happen to be exactly the ones that most engines don't handle.

I've said many times that my idea of implementing my own scripting language for Wagic wasn't a good idea, but I want to withdraw that. The bad idea was not using a real grammar and a real parser such as Yacc. Other than that, it rocks.

Additionally, I strongly believe that there's no difference between handling 50% of the cards and 95% of the cards, as you'll always find people to complain about the missing 5%

THAT BEING SAID, your idea of parsing the initial text is extremely cool, and wagic could definitely use that if a translator "English" -> "Wagic card code" came out.

Edit: my point is that Wagic usually handles between 40 and 50% of a new set's cards before the set is even out (this represents roughly 3 hours of work that anybody with a little experience can do, compared to the dozens of hours of work required to add new mechanics to the game), and I'm not sure a different parser would help in any way. The problem is the underlying engine. I'm convinced the issue would be the same for Incantus, even if the figures are different.

Having people write the cards' code is also a very nice way to build a community, because non-developers can help a lot with this, and it helps them understand they are part of the project, which I think is rewarding. I think that's an extremely important point.

Plus, if you want people to be able to create their own cards, good luck teaching them the rules of English so that the parser doesn't make mistakes. It's probably doable to parse the MTG cards, but how do you teach people to "speak the correct English that the parser will understand"? Parentheses are much better than words in that case, I believe. Not to mention the parsing nightmare on low-resource machines (which, granted, is already the case in Wagic, but for different reasons).
frwololo
DEVELOPER
 
Posts: 265
Joined: 21 Jun 2008, 04:33
Has thanked: 0 time
Been thanked: 3 times

Re: Approaching rules through NLP and grammars

Postby MageKing17 » 03 May 2010, 05:53

frwololo wrote:Whatever you do, you can't have a parser that will magically code the rules for you. Whenever a new mechanism comes out (i.e. pretty much every new expansion), you'd have to add additional code somehow. The parsing part is not the difficult part. Actually, most people who help on Wagic find it rather fun to find tricky ways to make a card work and bypass the current limitations of the engine.
You're going to have additional work either way... if your program is set up for parsing oracle text, you have less work to add the new features than to individually add every card with the new features.

frwololo wrote:Imagine a card that says:
{2}{T}: target player has to say "abracadabra" 100 times. If he doesn't want to do it, you draw 3 cards.

Whatever parser you have, I'm pretty sure your engine doesn't handle that...

That's of course a very borderline case but you see my point.
That's not borderline... that's un-set material. There's a reason they don't print cards like Chaos Orb anymore.

frwololo wrote:Also, NLP fails, as proven by all the programming languages that try to look like natural languages. That is also why WotC keeps coming out with reprints and rewordings for their cards, which wouldn't happen if the text on the cards were actually mathematical formulas or code. It means that the text on a card has several possible interpretations (which is why there are judges, too), which makes parsing difficult from a computer's point of view for "complex" cards... which happen to be exactly the ones that most engines don't handle.
NLP doesn't fail, especially if your input isn't actually natural. Magic templating is very artificial, as the game rules have to avoid ambiguity at all costs. Parsing it is far from a pipe dream.

Also, Inform 7 is a natural-language programming language that works quite well. Naturally a few constructs have to use "unnatural" language, and ambiguous wordings are a no-no, but that's to be expected, and it can fully support a wide variety of language constructs (for instance, you can give it a definition of an adjective (like "powered") and then use it in further sentences (like "all unpowered devices") and Inform knows what you're talking about).

frwololo wrote:Additionally, I strongly believe that there's no difference between handling 50% of the cards and 95% of the cards, as you'll always find people to complain about the missing 5%
There is a vast difference, though. Saying there's no difference between two things just because people will complain about both is total nonsense, and the difference between an engine that can handle 5000 cards and one that can handle 10000 cards is massive, especially when you get into the terrain of custom cards (my eventual goal, once I have Incantus parsing card text, is to get it to successfully interpret an entire, 200+ card custom set, without errors).

frwololo wrote:THAT BEING SAID, your idea of parsing the initial text is extremely cool, and wagic could definitely use that if a translator "English" -> "Wagic card code" came out.
That's more-or-less what the current work with the card editor is doing (although it is, of course, translating it into Python code that works with Incantus instead of code that works with Wagic :P). My thought is that this is an unnecessary intermediate step; if we can successfully parse text and turn it into card code, we could just have the engine parse the text to begin with.

frwololo wrote:Edit: my point is that Wagic usually handles between 40 and 50% of a new set's cards before the set is even out (this represents roughly 3 hours of work that anybody with a little experience can do, compared to the dozens of hours of work required to add new mechanics to the game), and I'm not sure a different parser would help in any way. The problem is the underlying engine. I'm convinced the issue would be the same for Incantus, even if the figures are different.
Before we broke the card format, thereby losing our thousands of implemented cards (but opening up the possibility of thousands more), we were generally implementing sets as they came out; and around 90% of each set could be done without major engine changes (generally just adding new keywords). Cascade was one of the few exceptions, and now our engine can handle it, too. :P

frwololo wrote:Having people write the cards' code is also a very nice way to build a community, because non-developers can help a lot with this, and it helps them understand they are part of the project, which I think is rewarding. I think that's an extremely important point.

Plus, if you want people to be able to create their own cards, good luck teaching them the rules of English so that the parser doesn't make mistakes. It's probably doable to parse the MTG cards, but how do you teach people to "speak the correct English that the parser will understand"? Parentheses are much better than words in that case, I believe. Not to mention the parsing nightmare on low-resource machines (which, granted, is already the case in Wagic, but for different reasons).
In that case, isn't helping people learn better English an admirable goal? :P

I honestly don't see this as a problem; you don't see people arguing that we shouldn't use programming languages because people have to learn the grammar and syntax of programming languages for the computer to understand them. It's simply expected. If we decide to do it this way, we shouldn't cater to laziness. I'm not saying all projects should do direct text parsing, but I don't see any reason not to do it in Incantus.

Even supposing we have to change a few cards to have more rigid and unambiguous wordings (which I would not readily suppose), so what? Isn't the effort saved on all the thousands of cards that will require no human effort to run in-game worth it? Just imagine the infinite possibilities for custom cards that will require no actual coding to play with!
MageKing17
Programmer
 
Posts: 473
Joined: 12 Jun 2008, 20:40
Has thanked: 5 times
Been thanked: 9 times

Re: Approaching rules through NLP and grammars

Postby frwololo » 03 May 2010, 07:31

MageKing17 wrote:the difference between an engine that can handle 5000 cards and one that can handle 10000 cards is massive
Say that to the player who wanted card 10001...he will still quit and go back to MWS ;)

MageKing17 wrote:
frwololo wrote:Edit: my point is that Wagic usually handles between 40 and 50% of a new set's cards before the set is even out (this represents roughly 3 hours of work that anybody with a little experience can do, compared to the dozens of hours of work required to add new mechanics to the game), and I'm not sure a different parser would help in any way. The problem is the underlying engine. I'm convinced the issue would be the same for Incantus, even if the figures are different.
Before we broke the card format, thereby losing our thousands of implemented cards (but opening up the possibility of thousands more), we were generally implementing sets as they came out; and around 90% of each set could be done without major engine changes (generally just adding new keywords). Cascade was one of the few exceptions, and now our engine can handle it, too. :P
Everything's a dick contest to you isn't it? :wink:
My point was that a "natural language" parsing system wouldn't have helped you code the missing 10%, AND that it wouldn't have accelerated the coding of the 90% by a significant margin.

MageKing17 wrote:I honestly don't see this as a problem; you don't see people arguing that we shouldn't use programming languages because people have to learn the grammar and syntax of programming languages for the computer to understand them. It's simply expected. If we decide to do it this way, we shouldn't cater to laziness. I'm not saying all projects should do direct text parsing, but I don't see any reason not to do it in Incantus.

Even supposing we have to change a few cards to have more rigid and unambiguous wordings (which I would not readily suppose), so what? Isn't the effort saved on all the thousands of cards that will require no human effort to run in-game worth it? Just imagine the infinite possibilities for custom cards that will require no actual coding to play with!
Again, not my point.
My point is that it is easier to teach and write a tutorial for a (programming) language than for a human language. Having taught both my mother tongue and programming languages in my life, I can tell you that programming languages are (obviously) much stricter, and therefore much easier to parse, harder to make mistakes in, AND much easier to teach.

In the end, you use a subset of English and have to explain to card creators that yes, it looks like English but it's not really, and then you have to try to justify why "and" can be used in this or that case, but not in that other case, etc...

So, yeah, even if it works in one way for parsing, the other way (teaching the rules) is probably way easier with a real programming language.

To me, the reason NLP fails is that it looks like it could work with our natural language, when it actually doesn't. In the end, you need strict rules and grammar the way you have in standard programming languages, but it's additionally confusing because it resembles a real language, AND it's longer to type than actual code.

Even if Magic only uses a subset of English, the confusion for people who have to write cards would remain.
Please keep in mind that I'm not saying it is not useful at all, rather that it is not a silver bullet; far from it.

http://en.wikipedia.org/wiki/Garden_path_sentence
frwololo
DEVELOPER
 
Posts: 265
Joined: 21 Jun 2008, 04:33
Has thanked: 0 time
Been thanked: 3 times

Re: Approaching rules through NLP and grammars

Postby MageKing17 » 04 May 2010, 04:58

frwololo wrote:
MageKing17 wrote:the difference between an engine that can handle 5000 cards and one that can handle 10000 cards is massive
Say that to the player who wanted card 10001...he will still quit and go back to MWS ;)
I'd argue that player isn't our intended audience; people who actually want to play with rules enforcement won't "jump ship" just because a certain card isn't implemented (in fact, a few of our users more-or-less told us to "just ignore the interaction between Equinox and Wild Swing, it's not worth the hassle" :P (and I feel obliged to point out that a direct parser could handle that interaction due to knowing exactly how Wild Swing differs from Armageddon)).

frwololo wrote:
MageKing17 wrote:
frwololo wrote:Edit: my point is that Wagic usually handles between 40 and 50% of a new set's cards before the set is even out (this represents roughly 3 hours of work that anybody with a little experience can do, compared to the dozens of hours of work required to add new mechanics to the game), and I'm not sure a different parser would help in any way. The problem is the underlying engine. I'm convinced the issue would be the same for Incantus, even if the figures are different.
Before we broke the card format, thereby losing our thousands of implemented cards (but opening up the possibility of thousands more), we were generally implementing sets as they came out; and around 90% of each set could be done without major engine changes (generally just adding new keywords). Cascade was one of the few exceptions, and now our engine can handle it, too. :P
Everything's a dick contest to you isn't it? :wink:
Sorry, not my intention. :)

frwololo wrote:My point was that a "natural language" parsing system wouldn't have helped you code the missing 10%, AND that it wouldn't have accelerated the coding of the 90% by a significant margin.
No, it wouldn't have coded the 10% missing by itself (unless all new mechanics are expressed using existing templating), but once you code the new mechanics in, all future cards using that mechanic are supported. And while it may not have accelerated the 90% on the first set, or the second set, by the fifth or sixth set, you have barely any work to do (and for custom cards using existing mechanics, no work at all... which, as a custom card creator myself, is very attractive to me).

frwololo wrote:
MageKing17 wrote:I honestly don't see this as a problem; you don't see people arguing that we shouldn't use programming languages because people have to learn the grammar and syntax of programming languages for the computer to understand them. It's simply expected. If we decide to do it this way, we shouldn't cater to laziness. I'm not saying all projects should do direct text parsing, but I don't see any reason not to do it in Incantus.

Even supposing we have to change a few cards to have more rigid and unambiguous wordings (which I would not readily suppose), so what? Isn't the effort saved on all the thousands of cards that will require no human effort to run in-game worth it? Just imagine the infinite possibilities for custom cards that will require no actual coding to play with!
Again, not my point.
My point is that it is easier to teach and write a tutorial for a (programming) language than for a human language. Having taught both my mother tongue and programming languages in my life, I can tell you that programming languages are (obviously) much stricter, and therefore much easier to parse, harder to make mistakes in, AND much easier to teach.

In the end, you use a subset of English and have to explain to card creators that yes, it looks like English but it's not really, and then you have to try to justify why "and" can be used in this or that case, but not in that other case, etc...

So, yeah, even if it works in one way for parsing, the other way (teaching the rules) is probably way easier with a real programming language.

To me, the reason NLP fails is that it looks like it could work with our natural language, when it actually doesn't. In the end, you need strict rules and grammar the way you have in standard programming languages, but it's additionally confusing because it resembles a real language, AND it's longer to type than actual code.

Even if Magic only uses a subset of English, the confusion for people who have to write cards would remain.
Please keep in mind that I'm not saying it is not useful at all, rather that it is not a silver bullet; far from it.

http://en.wikipedia.org/wiki/Garden_path_sentence
Except Magic templating is already a rigid subset of English, just nobody's gone and written a formal grammar for it yet. People who make custom cards are already going out there and learning the way Magic templating actually works, and if we successfully implement direct oracle parsing, they'll be able to put custom cards in with no further effort on their behalf. The "garden path sentence", while a very interesting problem in linguistics and a helpful reminder of why real natural language processing will require strong AI, doesn't really apply to Magic templating; just like Inform 7 code, it's a domain-specific language that's a subset of English, not English itself, that is possible to parse deterministically, just as the Inform 7 compiler can arrive at a single conclusion about a sentence (including the conclusion that it is nonsense). Even a garden path sentence can sometimes be interpreted correctly, as long as the parser is advanced enough, but I can't think of problematic garden path sentences in Magic templating.

I must admit, your "silver bullet" comment somewhat confuses me. I'm not saying that implementing this natural language parsing will mean that we'll never have to do any more work on the program ever again, or anything; not even that we'll never have to do card-related coding ever again. Merely that it will reduce the total amount of effort needed to implement cards. Hell, even a simplistic pseudo-parser (by which I mean, it defines abilities like "When CARDNAME enters the battlefield, draw a card." and then spots those abilities in card text, rather than having to define the ability every time it shows up on a card... I believe this is similar to Forge's "keyword" system, but I'll admit I haven't used Forge so am not 100% sure) is more efficient. With a simple application of logic, you can see it will always require less coding than Incantus's current system (defining every ability on every card, except keywords), because all the abilities you'd have to define that way will still have to be defined this way, only you'll have to define them fewer times. If you add arguments to this pseudo-parser (okay, that's more like Forge's keywords), so that you can interpret "<COST>: CARDNAME deals <AMOUNT> damage to [target creature|target player|target creature or player].", you have even less work to do... and if you take it all the way and have it parse Magic templating directly, well, the parser will take quite a bit of time, yes, but the amount of time you have to spend on individual cards drops to zero, and there becomes a literally infinite number of possible cards that can be used without any coding whatsoever.
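That intermediate "pseudo-parser with arguments" stage can be sketched as a single parameterized template. The Python regex and its group names below are illustrative only; they are not Forge's or Incantus's actual syntax:

```python
import re

# one templated ability: "<COST>: CARDNAME deals <AMOUNT> damage to <TARGET>."
# the longer target alternative is listed first so it is preferred
DAMAGE_TEMPLATE = re.compile(
    r"^(?P<cost>[^:]+): CARDNAME deals (?P<amount>\d+) damage to "
    r"(?P<target>target creature or player|target creature|target player)\.$"
)

def parse_damage_ability(line: str):
    """Return the template's arguments as a dict, or None if it doesn't fit."""
    m = DAMAGE_TEMPLATE.match(line)
    return m.groupdict() if m else None

print(parse_damage_ability("{T}: CARDNAME deals 1 damage to target creature or player."))
# {'cost': '{T}', 'amount': '1', 'target': 'target creature or player'}
print(parse_damage_ability("{T}: CARDNAME deals 1 damage to the moon."))
# None
```

Each such template still needs an engine effect behind it, but every card whose text fits the template comes for free.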

That's why I want to make the program parse the oracle text directly; because, in the long run, it will save time, and be awesome. And really, awesomeness is what it's all about. :P
MageKing17
Programmer
 
Posts: 473
Joined: 12 Jun 2008, 20:40
Has thanked: 5 times
Been thanked: 9 times

Re: Approaching rules through NLP and grammars

Postby Huggybaby » 04 May 2010, 05:24

MageKing17 wrote:And really, awesomeness is what it's all about. :P
Right on brother!
Huggybaby
Administrator
 
Posts: 3075
Joined: 15 Jan 2006, 19:44
Location: Finally out of Atlanta
Has thanked: 570 times
Been thanked: 570 times

Re: Approaching rules through NLP and grammars

Postby n00854180t » 10 Oct 2010, 04:57

It seems like the way to go for implementing new keywords is to expose a large subset of the rules enforcement itself to script, so that when new keywords are released that aren't currently implemented, you can simply expose whatever functionality is needed to accommodate the new keyword.

It's a matter of granularity, I guess: Currently most rules enforcement engines only expose actual card scripting, rather than card scripting and rules enforcement/keyword scripting. I think with enough of the base functionality exposed to script, new rules enforcement/keywords could fairly easily be added without having to make engine-wide changes, or recompile an exe.
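As a sketch of what "exposing rules enforcement to script" could look like, here is a minimal keyword registry in Python. `KEYWORD_HANDLERS`, the `keyword` decorator, and the lifelink handler are all hypothetical names, not any real engine's API:

```python
# The engine keeps a registry of keyword handlers; new keywords are plain
# functions registered at load time -- no engine changes or recompile needed.
KEYWORD_HANDLERS = {}

def keyword(name):
    """Decorator a 'script' file uses to register a handler for a keyword."""
    def register(fn):
        KEYWORD_HANDLERS[name] = fn
        return fn
    return register

# A script shipped with a new set could add this without touching the engine:
@keyword("lifelink")
def lifelink(source, damage_event, game):
    game["life"][source["controller"]] += damage_event["amount"]

game = {"life": {"Alice": 20}}
creature = {"name": "Loyal Cathar", "controller": "Alice"}
KEYWORD_HANDLERS["lifelink"](creature, {"amount": 3}, game)
print(game["life"]["Alice"])  # 23
```

The granularity question is then just how many engine events (damage dealt, zone changes, step boundaries) get routed through such a registry.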

I'd love to know your guys' thoughts on this.
n00854180t
 
Posts: 19
Joined: 09 Oct 2010, 05:41
Has thanked: 0 time
Been thanked: 0 time

Re: Approaching rules through NLP and grammars

Postby silly freak » 10 Oct 2010, 09:35

Hi! I don't know what exactly you mean by granularity, but I'm trying that with Laterna Magica. Every "keyword" (i.e. a parser that understands certain card text; in fact, the few abilities implemented at the moment use oracle text) is specified individually, and new keywords can be added without messing with existing code, as is necessary with Forge's CardFactory. Of course, the keywords can only use rules and effects that are already there, and that's currently the problem.

A cool thing would be to have a set's mechanics (+cards) bundled, so you can add new sets like plug-ins. Rules changes would still need to be handled inside the engine, but the usual mechanic that simply interprets existing rules would be very simple to add. This would allow for custom sets, without making them a must in the main release.
___

where's the "trust me, that will work!" switch for the compiler?
Laterna Magica - blog, forum, project, 2010/09/06 release!
silly freak
DEVELOPER
 
Posts: 598
Joined: 26 Mar 2009, 07:18
Location: Vienna, Austria
Has thanked: 93 times
Been thanked: 25 times

Re: Approaching rules through NLP and grammars

Postby telengard » 10 Oct 2010, 15:46

silly freak wrote:hi! i don't know what you exactly mean by granularity, but I'm trying that with Laterna Magica. Every "keyword" (i.e. a parser who understands certain card text; in fact, the few abilities implemented at the moment use oracle text) is specified individually, and new keywords can be added without messing with existing code, like necessary with forge's CardFactory. Of course, the keywords can only use rules and effects that are already there, and that's currently the problem.
And that's the hard part. :)

You need hooks or a way to inject new rules (and effects, etc.) behavior somehow. I implemented the Venom ability in such a way that it simply dealt out its damage, and all state about Venom was kept in a generic container that the engine provides. It also added a check for destruction at the appropriate time (via subscription). I could probably extend this to allow new rules behavior, but haven't had a need to for my game. Over time, all of my abilities have gone from very specific to very generic, allowing for many permutations without having to change any C++ code.

silly freak wrote:A cool thing would be to have a set's mechanics (+cards) bundled, so you can add new sets like plug-ins. Rules changes would still need to be handled inside the engine, but the usual mechanic that simply interprets existing rules would be very simple to add. This would allow for custom sets, without making them a must in the main release.
Ideally, as I mentioned above, you would have the means to inject new rules code. Some new abilities I've seen in Scars of Mirrodin could be done without any changes, e.g. Metalcraft. That ability just sounds like a precondition on being able to activate, so if your engine allows any activated ability to have preconditionS (note I said plural), it would probably involve no work. I do this and it works out very well!
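The plural-preconditions idea can be sketched in a few lines of Python; `ActivatedAbility`, `metalcraft`, and the game-state dictionaries are illustrative assumptions, not telengard's actual code:

```python
# Metalcraft as just one more predicate in an ability's precondition list:
# no engine change needed once abilities carry a *list* of preconditions.
def metalcraft(state):
    return sum(1 for p in state["battlefield"] if "artifact" in p["types"]) >= 3

class ActivatedAbility:
    def __init__(self, effect, preconditions=()):
        self.effect = effect
        self.preconditions = list(preconditions)

    def can_activate(self, state):
        # Every precondition must hold; an empty list always activates.
        return all(pre(state) for pre in self.preconditions)

ability = ActivatedAbility(effect="deal 4 damage", preconditions=[metalcraft])
state = {"battlefield": [{"types": {"artifact"}}] * 3}
print(ability.can_activate(state))  # True
```

A new set mechanic then costs one new predicate function, not a new engine release.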

~telengard
Author of Dreamblade:
viewtopic.php?f=51&t=1215
User avatar
telengard
DEVELOPER
 
Posts: 369
Joined: 23 May 2009, 23:04
Has thanked: 0 time
Been thanked: 23 times

Re: Approaching rules through NLP and grammars

Postby silly freak » 10 Oct 2010, 22:03

telengard wrote:You need hooks or a way to inject new rules (and effects etc) behavior somehow. I implemented the Venom abilities in such a way that it just dealt it out and all state about Venom was kept in a generic container that the engine provides. It also added a check for destruction at the appropriate time (via subscription). I could probably extend upon this to allow new rules behavior, but haven't had a need to for my game. Over time, all of my abilities have gone from very specific to very generic allowing for many permutations without having to change any c++ code.

~telengard
Venom's ability has three big parts: it triggers on being blocked, creates a delayed triggered ability, and destroys a creature. At least the first and the last are so general that a good rules engine should support them.
Having all three, you probably still need some code to tie them together, but not in the engine. Delayed triggered abilities are something to be implemented in the engine, not the actual Venom ability.

And that is what I was talking about: if you don't have delayed triggered abilities, you can't implement Venom, but that's not a shortcoming of the engine's architecture but of its completeness. If you have a complete rules engine implementation, you should be able to implement any mechanic. With code, still, but without the need to recompile the core.
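That decomposition can be sketched in Python; the `Engine` class and its methods are hypothetical stand-ins for the three generic pieces (blocked trigger, delayed trigger, destruction), not any real engine's API:

```python
# Venom tied together from generic pieces: a "blocked" trigger registers a
# delayed trigger, which destroys the blocker at end of combat.
class Engine:
    def __init__(self):
        self.delayed = []    # pending delayed triggered abilities
        self.destroyed = []  # names of destroyed permanents

    def on_blocked(self, attacker, blocker):
        if "venom" in attacker["keywords"]:
            # Delayed triggered ability: fires at end of combat.
            self.delayed.append(lambda: self.destroy(blocker))

    def end_of_combat(self):
        for fire in self.delayed:
            fire()
        self.delayed.clear()

    def destroy(self, permanent):
        self.destroyed.append(permanent["name"])

engine = Engine()
engine.on_blocked({"keywords": {"venom"}}, {"name": "Grizzly Bears"})
engine.end_of_combat()
print(engine.destroyed)  # ['Grizzly Bears']
```

Only the glue in `on_blocked` is Venom-specific; triggers, delayed triggers, and destruction live in the engine and serve every other mechanic too.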
___

where's the "trust me, that will work!" switch for the compiler?
Laterna Magica - blog, forum, project, 2010/09/06 release!
silly freak
DEVELOPER
 
Posts: 598
Joined: 26 Mar 2009, 07:18
Location: Vienna, Austria
Has thanked: 93 times
Been thanked: 25 times

Re: Approaching rules through NLP and grammars

Postby n00854180t » 10 Oct 2010, 23:54

silly freak wrote:
telengard wrote:You need hooks or a way to inject new rules (and effects etc) behavior somehow. I implemented the Venom abilities in such a way that it just dealt it out and all state about Venom was kept in a generic container that the engine provides. It also added a check for destruction at the appropriate time (via subscription). I could probably extend upon this to allow new rules behavior, but haven't had a need to for my game. Over time, all of my abilities have gone from very specific to very generic allowing for many permutations without having to change any c++ code.

~telengard
Venom's ability has three big parts: it triggers on being blocked, creates a delayed triggered ability, and destroys a creature. At least the first and the last are so general that a good rules engine should support them.
Having all three, you probably still need some code to tie them together, but not in the engine. Delayed triggered abilities are something to be implemented in the engine, not the actual Venom ability.

And that is what I was talking about: if you don't have delayed triggered abilities, you can't implement Venom, but that's not a shortcoming of the engine's architecture but of its completeness. If you have a complete rules engine implementation, you should be able to implement any mechanic. With code, still, but without the need to recompile the core.
This is pretty much what I'm talking about, though you have to go a step further than that: instead of implementing specific triggers (to support triggered abilities), add hooks that allow you to implement any trigger at any time in script, without a recompile (this is why I mentioned granularity).

For instance, instead of having specific hooks for certain triggers (card enters graveyard, card is exiled, zone change, etc.), you can have a generic priority system which lets you hook whatever you want into pre and post events. So each step might have two or three places where it executes a callback to see if there are any pieces of code that need to happen... perhaps at the beginning of each step (pre and post), during it (pre and post), and at its end (pre and post).

Initially this sort of design is more complex to set up, but eventually it gives you more flexibility in writing triggers and other abilities without having to modify the engine.
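One way such a pre/during/post hook system might look, sketched in Python (the `EventBus` class, event names, and priorities are illustrative assumptions):

```python
from collections import defaultdict

# Generic priority hooks: every step fires "pre:<step>", "during:<step>",
# and "post:<step>" events, and scripts attach arbitrary callbacks to them.
class EventBus:
    def __init__(self):
        self.hooks = defaultdict(list)

    def hook(self, event, callback, priority=0):
        self.hooks[event].append((priority, callback))
        # Lower priority numbers run first within an event.
        self.hooks[event].sort(key=lambda pc: pc[0])

    def run_step(self, step, log):
        for phase in ("pre", "during", "post"):
            for _, cb in self.hooks[f"{phase}:{step}"]:
                cb(log)

bus = EventBus()
bus.hook("pre:upkeep", lambda log: log.append("untap reminder"), priority=1)
bus.hook("post:upkeep", lambda log: log.append("upkeep triggers resolved"))
log = []
bus.run_step("upkeep", log)
print(log)  # ['untap reminder', 'upkeep triggers resolved']
```

New triggers become hook registrations rather than engine changes, which is exactly the no-recompile flexibility described above.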

I don't know if I'm really being clear here or not.
n00854180t
 
Posts: 19
Joined: 09 Oct 2010, 05:41
Has thanked: 0 time
Been thanked: 0 time
