silly freak wrote: Interesting article! I'm trying to reproduce the results myself (whenever I have a little spare time), and would be interested in the answer as well. I could imagine that forge script has a little less boilerplate compared to Java, and that this could be beneficial to the performance, but I don't know.
The article is just one of many on the topic; I picked that one because it features the performance numbers I was citing. If you want to read more about this, the best reference is probably the
Arxiv paper.
I am not sure what to expect in terms of performance, to be honest. Sure, there is less boilerplate, but that should not be a real issue for a neural network, since that type of code can be copied verbatim from one card to the next; the network should be able to get that right. That is why I was asking the question.
If the performance is significantly different, it could mean that there is an "optimal" way of representing cards. Conversely, if performance on Forge is similar to that obtained on XMage, and both are way lower than on a typical translation task, that could indicate either that the problem cannot be reduced to a translation task (meaning a more sophisticated approach is necessary) or that the neural network is too small. I don't work on machine translation, so I don't know what the performance of a state-of-the-art RNN-based system looks like.
I don't think the results featured in that paper are that interesting. It looks more like a case of applying neural networks to yet another problem (and getting crappy results in the process). More interesting experiments could have been run. Just a couple off the top of my head:
- Take card texts in two different languages, train a model, measure performance, and compare that to the results on card implementation. That would shed some light on the question I was discussing above.
- Take the English card text and corresponding Forge implementation, and train a model to produce the English text from the code. First, I think that would be fun. Second, that could potentially help spot cards which are incorrectly implemented (assuming performance is good enough of course).
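That second experiment boils down to building a parallel corpus of (code, text) pairs. A minimal sketch of the data preparation, assuming Forge's key:value card script format with the English rules text on an "Oracle:" line (the exact field names are an assumption on my part and may differ from the real cardsfolder files):

```python
def parse_forge_card(script: str) -> tuple[str, str]:
    """Split a Forge-style card script into (implementation, oracle_text).

    Assumes lines starting with 'Oracle:' carry the English rules text
    and every other line is part of the card's implementation.
    """
    code_lines, text_lines = [], []
    for line in script.strip().splitlines():
        if line.startswith("Oracle:"):
            text_lines.append(line[len("Oracle:"):].strip())
        else:
            code_lines.append(line)
    return "\n".join(code_lines), " ".join(text_lines)

# Illustrative card script (shape based on Forge's format, not verbatim).
example = """\
Name:Shock
ManaCost:R
Types:Instant
A:SP$ DealDamage | ValidTgts$ Any | NumDmg$ 2
Oracle:Shock deals 2 damage to any target.
"""

code, text = parse_forge_card(example)
# (code, text) is one source/target pair: train code -> text for the
# second experiment, or text -> code for the original translation task.
print(text)
```

Run over the whole cardsfolder, this would give the training set for either direction; the fun part is then checking whether the model's generated English text matches the printed Oracle text for cards it has never seen.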
While I was searching for the Arxiv paper above, I ended up finding a brand new paper called
Improving Hearthstone AI by Combining MCTS and Supervised Learning Algorithms, published only 2 days ago. Talk about a coincidence. I am mentioning it here because I think it will be of interest to people visiting this topic.