gamebook.js - an IF-style gamebook engine

Hi all,

I’m new to both this forum and IF, and I already posted a similar message to the Project Aon and Demian Katz’s mailing lists, so sorry for any redundancy.

I have created “gamebook.js”, an experimental crossbreed between IF and gamebooks: instead of navigating an explicit set of choices, you are free to type any command after each section. The engine then uses a parser to match what you typed against the best available choice. The parser is quite simple (it wouldn’t really make sense for it to be otherwise), but it nevertheless uses some techniques like stemming and synonym matching to improve flexibility (and fun!) a bit.
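For anyone curious what “stemming and synonym matching” could look like in practice: the actual gamebook.js matcher isn’t shown in this thread, so here is a rough, invented sketch of the general idea (the `SYNONYMS` table, `stem`, and `bestChoice` names are all mine) — crudely stem each word, normalize it through a small synonym table, then score each choice by word overlap with the input.

```javascript
// Hypothetical sketch only — not the real gamebook.js code.
// Map synonyms to a canonical root word.
const SYNONYMS = { attack: ["fight", "hit", "strike"], flee: ["run", "escape"] };

// Very crude suffix-stripping "stemmer", just for illustration.
function stem(word) {
  return word.toLowerCase().replace(/(ing|ed|s)$/, "");
}

// Normalize a word: stem it, then collapse known synonyms to their root.
function expand(word) {
  const w = stem(word);
  for (const [root, syns] of Object.entries(SYNONYMS)) {
    if (w === root || syns.map(stem).includes(w)) return root;
  }
  return w;
}

// Score each choice by how many normalized words it shares with the input.
function bestChoice(input, choices) {
  const inputWords = new Set(input.split(/\s+/).map(expand));
  let best = null, bestScore = 0;
  for (const choice of choices) {
    const score = choice.split(/\s+/).map(expand)
      .filter(w => inputWords.has(w)).length;
    if (score > bestScore) { bestScore = score; best = choice; }
  }
  return best;
}
```

With a table like this, typing “run away” would match a choice worded as “flee north”, even though no surface word is shared — which is exactly the kind of flexibility Christian describes.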

It currently implements the full version of Fire on the Water, the second book in the Lone Wolf series, whose content is freely available (for exactly this kind of mashup) through Project Aon.

If you’d like to try it:

projectaon.org/staff/christian/gamebook.js

or read more about the idea, or some technical details (on my blog):

cjauvin.blogspot.ca/2013/03/susp … elief.html

I’m quite aware that this idea is far from perfect and that it has some obvious flaws. But it was a fun project (with some interesting web programming challenges), and I’d like to know what people think of it before moving on to something else.

Christian

Hi Christian! A couple of bug reports:

[spoiler]On p. 278 I rolled a 1 and got a TypeError exception. I thought I’d copied the text for you, but it seems I haven’t! Sorry.

On p. 78 I did not actually lose the chainmail waistcoat from my inventory.[/spoiler]

I’m a bit conflicted about this – I like the book, but it really does feel very guess-the-verby. This may be inescapable given the nature of the project.

Hi Matt,

Thanks for the comments! “Guess-the-verby” made me smile (I wasn’t aware of that expression!), but I fear it’s also pretty much true… even though I tried hard to incorporate mechanisms to reduce guesswork, a much more powerful system would be required to answer the seemingly simple question: which choice does this user input refer to, A or B? The problem is that in most cases the space of choice words is far too patchy, while the space of possible user inputs is far too vast, and bridging the two is very hard.

I corrected the chainmail bug in Section 78, but couldn’t reproduce the problem you mentioned in Section 278… are you sure it was there? And which browser are you using?

Christian

I couldn’t get past the very first choice section without resorting to hints. The concept isn’t bad, but taking the text verbatim from an existing work has this problem: the text was written for a format that spells out your options. It needs a rewrite of SOME sort, or the player is left floundering.

One thing that might help: if the player enters an invalid input, the game could ask whether they want a hint.

And like Peter, I had no compunction about resorting to hints. The first choice is particularly non-obvious.

Sounds to me like the problem could be tackled from the statistics end, since what you’re talking about is essentially what Naive Bayesian Classifiers do, and they seem to do a pretty good job of determining whether email is spam or not! (At least, for my email. Your mileage may vary. :wink: )

You’d “just” need to set up a classifier, and then train it with loads of possible inputs that could correspond to “option A”, “option B”, …, “option N”, and “this option does not exist”. A user-reporting feature (hey, this game didn’t do what I wanted) could help fix up options missed after release.

…at least, that’s the idea. I’ve never actually tried to implement something like this myself! In any case, here’s a link to the theory: http://en.wikipedia.org/wiki/Bayesian_spam_filtering
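To make the classifier suggestion concrete, here is a toy sketch of how it might work (again, everything here is invented for illustration — the `NaiveBayes` class and training examples are not from the game): a multinomial Naive Bayes with add-one smoothing, trained on example inputs for each option of a given section, which then picks the most probable option for a new input.

```javascript
// Illustrative toy only — a tiny multinomial Naive Bayes classifier.
function tokenize(text) {
  return text.toLowerCase().split(/\s+/).filter(Boolean);
}

class NaiveBayes {
  constructor() {
    this.counts = {};        // label -> word -> count
    this.totals = {};        // label -> total word count
    this.docs = {};          // label -> number of training examples
    this.vocab = new Set();  // all words seen in training
    this.nDocs = 0;
  }
  train(label, text) {
    this.counts[label] = this.counts[label] || {};
    this.totals[label] = this.totals[label] || 0;
    this.docs[label] = (this.docs[label] || 0) + 1;
    this.nDocs++;
    for (const w of tokenize(text)) {
      this.counts[label][w] = (this.counts[label][w] || 0) + 1;
      this.totals[label]++;
      this.vocab.add(w);
    }
  }
  classify(text) {
    const words = tokenize(text);
    let best = null, bestLogP = -Infinity;
    for (const label of Object.keys(this.docs)) {
      // log prior + sum of log likelihoods, with add-one smoothing
      let logP = Math.log(this.docs[label] / this.nDocs);
      for (const w of words) {
        const c = this.counts[label][w] || 0;
        logP += Math.log((c + 1) / (this.totals[label] + this.vocab.size));
      }
      if (logP > bestLogP) { bestLogP = logP; best = label; }
    }
    return best;
  }
}
```

You’d train one such classifier per section, with labels like “option A”, “option B”, etc. — e.g. `nb.train("A", "cross the bridge")`, then `nb.classify("take the bridge")`. Gathering enough training inputs per section is of course the hard part, which is where the user-reporting idea above would come in.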

[EDIT]: I like the idea of this, but the game was written in such a manner that you are expected to have the choice prompts available. I think this could be mitigated in a system where the story itself hints at the options available, but in the case of this particular story, I don’t think it works so well except as a proof of concept. (Still! I think it’s pretty interesting and would enjoy seeing further experimentation!)

@ironwallaby Tackling this problem with machine learning is a very interesting idea (and more or less what I do for a living), but the way you propose isn’t totally clear to me: the label “is related to option X” only has meaning in the context of a particular section, so how would you train your classifier? The problem seems to me so exceptionally “sparse” as to be almost impossible to solve that way… One other area that might be of some help is “query expansion” techniques, from the world of information retrieval.

I like the idea and how it was implemented, but like others, I found it difficult to guess which verb or word to use.

Also, the green letters on a black background are hard on the eyes.

My assumption was basically that you’d have a lot of little classifiers, each trained separately for each section (or, presumably, each input label?)

I wouldn’t be surprised – I haven’t messed with Bayesian techniques, but in my dealings with neural nets, you really need a sort of “critical mass” of data before the machine learning gets a handle on things! Tricky business for sure.

This will probably be my last update about it, so just for the sake of completeness: I have added an “always-cheat” mode (which still requires you to enter commands, or to type the number of the section you want to go to) and reduced the minimum word length for autocompletion to 2 (by the way, you can expand an autocompleted word by pressing TAB).