Implementing ChatGPT's Parser

People seem to be stuck on this idea that the point of a machine learning-enhanced parser is to understand more complex commands.

It is as if they never got frustrated with a game where they know exactly what to do but just can’t phrase it in a way that the game will accept. Or they never tried to introduce parser games to somebody who dismisses them after a couple of seconds, saying “It doesn’t understand anything I type!”


The only times I’ve personally had issues getting a text parser game to understand me were quite specifically games with “different” expected syntaxes: a bunch of ADRIFT AIF, some old one-off engines, mostly non-Infocom, non-Sierra games. The one outlier, no matter the parser: telling another character something. Do you have to phrase it as “ask Ford about my house” or “Ford, what about my house”, or are there other ways to do it?


As to what problems we’d like to solve with a better parser, I have a personal answer: to better maintain the illusion that I (the player) am in the story.

When I get drawn into a good IF experience, I have a tendency to start writing commands that are almost as rich as the prose the program presents to me. Eventually I step outside Informese and, boom, I’m out of the reverie.

An example: I’m told “There’s a pitchfork leaning against the wall of the barn.” I do some things with the pitchfork and later realize I should have left it right where it was. So I type “lean the pitchfork against the wall of the barn.” It’s not inconceivable that the parser would handle such a command, but it’s unlikely.

This problem (which I’m sure bothers few people other than myself) lies not in the parser but in the gap between the canned prose and the world model. One possible solution is to magically [insert AI here] improve the parser to understand that all those details in the command are irrelevant and distill my overly verbose command down to “drop pitchfork.” Another is to extend the model to better align with the prose and to integrate the parser with the world model more deeply. My experiments (toys) have pursued the latter approach, but I could see how machine learning might enable the former.
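The “distill it down to drop pitchfork” idea can be sketched without any machine learning at all. A minimal toy, where the noise words, scenery list, and verb table are all invented for illustration (a real game would pull them from its world model):

```python
# Hypothetical sketch: distill a verbose command into a canonical action
# by discarding words the world model treats as scenery detail.
# All word lists here are invented for illustration.

NOISE = {"the", "a", "an", "of", "to", "at", "against"}
SCENERY = {"wall", "barn"}          # detail the model can safely ignore
CANONICAL_VERBS = {"lean": "drop", "place": "drop", "set": "drop"}

def distill(command: str) -> str:
    words = [w for w in command.lower().split() if w not in NOISE]
    verb = CANONICAL_VERBS.get(words[0], words[0])   # map synonyms to a stock verb
    nouns = [w for w in words[1:] if w not in SCENERY]
    return " ".join([verb] + nouns)

print(distill("Lean the pitchfork against the wall of the barn"))
# → "drop pitchfork"
```

The point of the sketch is only that the hard part is the word lists, not the string handling; an ML model would effectively be learning those lists from text.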

Given: “The gravel driveway leads north to the barn and south to the edge of the highway.” Most players choose “n” or “s”. But when I’m immersed, I’ll very naturally type something like “follow the driveway to the barn” or “go toward the highway.” Parser failure here breaks me out of the game. It’s interactive fiction; I want to co-author the story.

So one takeaway is that a more flexible parser need not extend the game mechanics in order to be valuable to me. One that better matches the world model (or appears to) would help keep me in the story.

That said, I love seeing new concepts in games that don’t exist in others. A game I can experience entirely with just the verbs and actions provided by the stock Inform library is less interesting to me than one that incorporates a new type of action. (That’s not impossible with current parsers, but I think authors are sometimes discouraged by the difficulties they encounter attempting it.)

Of course, nobody wants to guess the verb, not even me. But if the parser is good enough to understand almost anything the game itself says (“lean the pitchfork…”), then the verbs are right there in the text, begging to be used.


I think of this as the “Lost Pig Model”: the prose, setting, and premise all come together to make the parser interaction seem plausible. You often see it with robot or AI player characters as well.

Sorry 'bout the bump but I just found a very interesting Twitter thread on how ChatGPT works and it immediately reminded me of two things said here.

Earlier in this thread, it was likened to what amounts to a Markov chain model. I’ve worked with those before, for an IRC chatbot, so I’d recognize “this blob is likely to be followed by one of these blobs” anywhere. So that’s the whole “>take bread” example covered.

But ChatGPT doesn’t just finish the sentence like a Markov chain would. To quote the Twitter thread, it doesn’t just consider that “an apple a day” is most likely followed by “keeps the doctor away”; it also rephrases the prompt a bit, turning “what is the most cited economics paper” into “the most cited economics paper is” and then running from there instead. Which is like what Eliza did way back in 1966.

ChatGPT is just what you get when Eliza and Markov have a savant baby with unrestrained Internet access.
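The “blob is likely followed by one of these blobs” idea from the post above is easy to sketch as a first-order Markov chain over words; the toy corpus here is just the proverb from the quoted thread, so the completions are trivially deterministic:

```python
import random
from collections import defaultdict

# Minimal sketch of a first-order Markov chain: record which word
# follows each word, then complete a prompt by repeated sampling.

def train(text):
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)   # duplicates encode frequency
    return model

def complete(model, word, length=5):
    out = [word]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

model = train("an apple a day keeps the doctor away")
print(complete(model, "an"))
# → "an apple a day keeps the"
```

With a real corpus each word has many possible successors, which is where the “likely” in “likely to be followed by” comes from.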

Nah, I think I’d just go with a big ol’ dictionary and a lot of ignored words. >smash that like button… might as well just have the Like button handle being attacked.
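The “big ol’ dictionary and a lot of ignored words” approach, including letting the Like button itself handle being attacked, might look something like this. The ignored-word list, synonym table, and handler names are all invented for illustration, and object resolution is elided:

```python
# Sketch: strip ignored words, map verb synonyms to canonical verbs,
# then route the action to the target object's own handler method.
# All names here are invented for illustration.

IGNORED = {"that", "the", "please", "just"}
SYNONYMS = {"smash": "attack", "hit": "attack", "press": "push"}

class LikeButton:
    def attack(self):
        return "The Like button absorbs the blow and glows approvingly."
    def push(self):
        return "Liked!"

def parse_and_run(command, obj):
    words = [w for w in command.lower().split() if w not in IGNORED]
    verb = SYNONYMS.get(words[0], words[0])
    handler = getattr(obj, verb, None)   # does the object respond to this verb?
    return handler() if handler else "You can't do that."

print(parse_and_run("smash that like button", LikeButton()))
# → "The Like button absorbs the blow and glows approvingly."
```

This is more or less how classic object-oriented IF libraries dispatch actions: the parser only has to reduce input to a verb and an object, and the object decides what the verb means.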


That’s how I see it, too. I do wonder how the algorithm of IBM’s Watson works.

The computer system that played Jeopardy?

That’s actually pretty easy to reason about, since Jeopardy answers are meant to be descriptive. Sure, the trick is to work out what is being described, but given enough keywords (provided by the given answer) you can come up with a handful of possible questions, some with higher confidence scores than others.

The problem is to reach a high enough confidence score before the other players buzz in.

Answer: “Any of the twelve peers of Charlemagne’s court, of whom the Count Palatine was the chief.”
Key words: “twelve”, “Charlemagne”, “Palatine”.
Plug those three into DDG or Google and the top result lets me ask…

… “who were the paladins?”

The difference is, Watson couldn’t google the answers like that (the other players couldn’t, after all), so it had to have its whole knowledge base pre-built.
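The keyword-plus-confidence scheme described above can be sketched in a few lines: score each candidate question by how many clue keywords appear in its pre-built knowledge-base entry, and only “buzz in” above a threshold. The knowledge base and threshold here are invented for illustration:

```python
# Toy sketch of keyword-overlap confidence scoring against a
# pre-built knowledge base. Entries and threshold are invented.

KNOWLEDGE = {
    "who were the paladins?":
        "the twelve peers of charlemagne's court, count palatine chief",
    "who was roland?":
        "nephew of charlemagne, hero of the song of roland",
}

def confidence(clue_keywords, entry):
    # fraction of clue keywords found in the entry text
    hits = sum(1 for kw in clue_keywords if kw in entry)
    return hits / len(clue_keywords)

def answer(clue_keywords, threshold=0.6):
    scored = {q: confidence(clue_keywords, text)
              for q, text in KNOWLEDGE.items()}
    best = max(scored, key=scored.get)
    return best if scored[best] >= threshold else None  # don't buzz in

print(answer(["twelve", "charlemagne", "palatine"]))
# → "who were the paladins?"
```

The real system’s scoring was of course far more involved, but the buzz-or-pass threshold is exactly the “reach a high enough confidence score” problem mentioned above.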


Oh and by the way, that thing with the implied verbs, where I used Day of the Tentacle as an example?

We technically already do something like this with compass directions. We don’t type “go north” even though we can; we just type “north”.


I asked ChatGPT to “make a parser-based text adventure about a duck who locked itself inside a car and it’s trying to get out” and it actually did just that… It’s playable, even. It’s just a one-turn game with three different winning solutions, but still.


Using tech like GPT-3 as a parser risks being overkill, but the example for “text to command” appears to be the most relevant to the proposal, translating free text into simpler commands that could be passed on. Additional training and/or prompting would be needed to constrain what could be sent through, since the large language model does not have direct access to any IF engine object model or state.
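One way to picture the “text to command” translation plus the constraining step is a few-shot prompt followed by a validator that refuses anything outside the engine’s known vocabulary. Everything here is a sketch: the prompt wording and verb list are invented, and `call_llm` is a stand-in for whichever model API you use, not a real function:

```python
# Hypothetical sketch: translate free text into one canonical IF
# command via a few-shot prompt, then validate the output before
# it reaches the engine. `call_llm` is a stand-in, not a real API.

PROMPT = """Translate the player's input into one short IF command.
Input: lean the pitchfork against the wall of the barn
Command: drop pitchfork
Input: follow the driveway to the barn
Command: north
Input: {player_input}
Command:"""

KNOWN_VERBS = {"drop", "take", "north", "south", "look", "examine"}

def constrain(model_output):
    words = model_output.strip().lower().split()
    if words and words[0] in KNOWN_VERBS and len(words) <= 3:
        return " ".join(words)
    return None   # fall back to the classic parser's error message

def translate(player_input, call_llm):
    return constrain(call_llm(PROMPT.format(player_input=player_input)))

# With a stub in place of a real model:
print(translate("put the fork back", lambda p: "drop pitchfork"))
# → "drop pitchfork"
```

The validator is the important half: since the model has no access to the engine’s object model or state, the engine should treat its output as untrusted player input, never as authoritative.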


A preprocessor could be useful to people new to parser games, as it may not be obvious that “verb noun” is the expected form most of the time.

I suddenly have this idea to write a small tutorial, selectable from the start, just to make sure the player knows the basics of expected IF inputs.

Tutorial Hall
As you get off the train, you find yourself in a large room. Several people who came along with you shuffle off through an exit to the north.
An officer addresses you. The moment you turn to him, he knocks an empty can from the rim of a trash can next to him and barks an order: “Pick up that can.”

> what?

The officer glares at you – at least that’s what you infer considering he’s wearing a gas mask of some sort. “Pick up that can,” he repeats.

> pick up can

You might want to remember that “get can” or “take can” would also work.
The officer indicates the trash can with what seems to be a stun baton. Seems he wants you to put the can in there.

> in where?

The officer impatiently glares at you and fingers the activator of his stun baton.

> throw can at officer

The officer catches the can and rushes at you, stun baton sparking in his grip! You black out on contact…

Press any key.

(Actual starting location of your game)
(Regular room description.)


Something like that. I dunno, it’s late and I haven’t actually ever played Half Life 2.


Outside of IF it’s pretty common to open a game with a tutorial. I think there were some references to the presumed superior accessibility of platformers and such. But actually I’ve seen more platformers with mandatory tutorials than IF. The first Halo game wouldn’t even let the player move before teaching them to look around. AAA first-person games, which probably have the best odds of finding players who are familiar with controlling them, begin with more or less disguised tutorials (Far Cry, Skyrim). They’re also more likely to have the budget for it, but that’s not my point.

My point is: IF people seem to underestimate the utility of putting a tutorial in their game, and overestimate the need to make it unobtrusively skippable. The consensus seems to be that a message like “Type HELP if this is your first time playing this game” hits the sweet spot. I believe it would make more sense to put players through a tutorial unless they’re experienced enough to guess the command to skip it. Again, I know a tutorial phase can represent a significant investment of time for the author, and of playing time for comp judges, especially if it’s framed as something placed before The Game begins. But if we’re going to talk about why players can’t figure out how to operate the thing, we need to talk about training them.

There’s also this idea that certain games could be introductory games, and that once trained by one of these games a player is basically competent to play any parser game. There’s some truth to it, since obviously experienced players have a much easier time with new games. But I bet your game has its quirks, and of course you can never be sure your game won’t be somebody’s first.

So when we talk about putting an AI interpreter in the game, it looks to me like a step backwards. Instead of deigning to tell the player how to operate the game, we’re actually going to remove the feedback they would normally get, and train them less than ever. Think about what it would be like to play Skyrim, if instead of telling you what buttons to press and giving you five minutes just to practice looking around, it just threw you in and guessed what you meant the buttons to do. That’s what’s being proposed for parsers. And with all respect to people who like entering very involved commands, players deserve to know that they may enter concise ones.


Are you familiar with the TALP?


Yes, in fact I had the TALP in mind with regard to the “introductory game” concept. I think it points to a desire to treat training as a peripheral concern: something to cover briefly in an optional help text, or to make some other game’s problem entirely. I suppose the analogy to books encourages it: since readers are initially trained with picture books, and at some point graduate from training entirely, IF players can be trained with beginner games and then set loose. But I’ve seen plenty of books prefaced with a “how to use this book” section that explains which chapters to read based on your background and interests. In games, a tutorial section is the equivalent aid.


I didn’t really see it so much as an “introductory game” jam, even though it did produce plenty of those. I saw those as a byproduct of the actual point: a space for IF tutorial innovation and experimentation. Encouraging this open trial of tutorials creates new tools and approaches for other authors to adapt into their own games. If none of these techniques get used anywhere other than where they were trialed, we sort of failed, tbh.

There are many established ways to approach this problem, and certainly even more that haven’t yet been tried. I supported the TALP specifically to help encourage innovation and implementation on this front, as I saw it integral to both solving the issues you laid out above and to outreach and growth of the parser niche as a whole.


These points are all well taken, though I admit I’m personally rather fatalistic about this kind of thing. Most works of parser IF these days are quite short (1–2 hours), such that adding even a 10-minute tutorial section could meaningfully impact the pacing and narrative of a game, as well as representing a nontrivial amount of additional labor. On the flip side, there’s a low likelihood that any particular contemporary parser game will be somebody’s first; it’s a questionable proposition that any individual author will do a better job of explaining things than resources like the “how to play IF” postcard and the instructions built into the Emily Short Inform extension; and, most damningly to me, there was a huge amount of effort expended 10–15 years ago to make parser games more friendly to newcomers, none of which seemed to have much of an impact (those who were around back then, please correct me if I’m wrong!)

I dunno, I’m usually not the Negative Nelly around here. And I think for a game with idiosyncratic mechanics, player aids including a short tutorial can often be helpful. But it’s hard for me to believe that overall, the game is worth the candle.

EDIT: the more optimistic take is that people do seem to come across parser IF all the time. Often they’re people who played Infocom stuff back in the day and never knew folks were still making these games; often they’re coming to it brand new. But in my experience these days they’re usually either coming because they saw a recommendation via a forum or general-interest video game site, usually for something high-profile and newbie-friendly like Hadean Lands or Photopia, or because they’re into playing or making choice-based IF and have decided to take the plunge and check out the parser side of things, in which case my sense is they’re often sufficiently embedded in the community that they either know, or can find out, what might be good places to start.

To make either pathway work, it’s definitely important that there be a critical mass of good, newcomer-friendly parser IF that explains its mechanics well and, in total, resonates with a broad range of potential players in terms of genre, game mechanics, writing style, etc. But my point, I guess, is that to my mind it doesn’t follow that either pathway would be noticeably more effective if that critical mass were much bigger than it currently is because the large majority of new games had tutorials, as compared to other efforts aimed at bringing more new folks into the fold.


Also, this is as good a time as any to make sure this is known here: the third annual TALJ needs a new organizer. The previous organizer has too many prior commitments this year, but would still like it to happen if possible. So, if encouraging innovation on this front seems important to you, perhaps join or form a committee to see TALJ #3 through?

It’d be a shame to drop the jam after being on a roll for a couple of years. These sorts of things live and die on consistency.

The ruleset and framework of the jam have been pretty well established at this point, so this would be more like minding the store than building from scratch like SeedComp.

As a friendly bit of advice, having 3-5 people co-organizing is a blessing, tbh. People are all busy at different times, so with 3 or more organizers, someone’s always on the ball.


I’m in.


Excellent! Thank you, Garry! Any other takers out there? It’s more a matter of who’s willing than anything else.