Parser Theory / Foundations

Hi all. I want to write an IF parser. I’ve read some of the content here, like Patrick Mooney’s great post on writing a parser in Python.

The parsers for Inform, Tads 3, Hugo and even the earliest Infocom are amazing technology - so amazing they must have benefited at least slightly from prior art. We all stand on the shoulders of giants.

To that end, does anyone know how these great engines got started on the parsing front? Is there a 1970’s NLP / linguistics classic out there Graham, Michael, and Kent would have all been familiar with?

Thanks for any direction you can offer me.


The parser I’m personally most familiar with is the one used by Sierra in their SCI adventure games before they went fully point-and-click. I blogged about it, too.


I implemented a command-based parser for QuestJS. I am not sure if that is better or worse when it comes to parsing English, but I do think it was easier to implement, and I have yet to hit any limitation of it.

By command-oriented, I mean there is a list of command objects in the game (to which authors can add). Each command object has a scoring function so it can rate itself against the input string, and all the parser really has to do is pick the one with the highest score. The score is based in part on matching against a regular expression, and if there is a match, the command object can then look at the items named or whatever.
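This isn’t QuestJS’s actual code, but the idea can be sketched in a few lines of Python. The command names, regex patterns, and scoring weights here are all invented for illustration:

```python
import re

# Each command rates itself against the input; the parser just picks the
# highest-scoring one. (A sketch of the idea, not QuestJS's real API.)
class Command:
    def __init__(self, name, pattern, weight):
        self.name = name
        self.regex = re.compile(pattern)
        self.weight = weight  # invented scoring: a flat per-command weight

    def score(self, text):
        m = self.regex.fullmatch(text)
        return (self.weight if m else 0), m

commands = [
    Command("Take", r"(?:take|get) (?P<obj>.+)", 10),
    Command("PutIn", r"put (?P<obj>.+) in (?P<dest>.+)", 20),
]

def choose_command(text):
    scored = [(cmd.score(text), cmd) for cmd in commands]
    (best_score, m), best = max(scored, key=lambda s: s[0][0])
    if best_score == 0:
        return None  # nothing matched
    # The winning command can now look at the items named.
    return best.name, m.groupdict()
```

For example, `choose_command("put lamp in box")` yields `("PutIn", {"obj": "lamp", "dest": "box"})`.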


TADS, Hugo, and Inform very specifically imitated the Infocom parser. The Infocom parser doesn’t use any particular bit of linguistic theory – Lebling et al. just cobbled it together.

The underlying idea is very simple. (Which is why there are so many homebrew IF engines dating from the 80s and 90s.) You match single words or noun phrases against the input:

GET noun → Taking action
GET IN noun → Entering action

Make a list of all possible matching lines; pick the best one.
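That core loop really is simple. A minimal sketch in Python (the vocabulary, object list, and action names are invented; each grammar line is a pattern of literal words and "noun" slots):

```python
# Try each grammar line against the input; a full match wins.
# More specific lines go first so "get in car" isn't read as taking "in car".
GRAMMAR = [
    (["get", "in", "noun"], "Entering"),
    (["get", "noun"], "Taking"),
]
OBJECTS = {"lamp", "box", "car"}

def match_line(pattern, words):
    if len(words) != len(pattern):
        return None
    nouns = []
    for pat, word in zip(pattern, words):
        if pat == "noun":
            if word not in OBJECTS:
                return None
            nouns.append(word)
        elif pat != word:
            return None
    return nouns

def parse(text):
    words = text.lower().split()
    for pattern, action in GRAMMAR:
        nouns = match_line(pattern, words)
        if nouns is not None:
            return action, nouns
    return None
```

Everything the next paragraph lists — synonyms, disambiguation, scope — gets layered onto a loop like this.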

The hard part is the infinite number of details you have to pile on top of that basic idea. Matching objects with various arrangements of synonyms and properties. Disambiguation. Scope (knowing what objects are reachable). Containers, transparent and opaque. And (the actual hard part) allowing the game author to customize each of these steps in a convenient way.


The original Infocom parser evolved over time through the first iterations of Zork (the mainframe one), which is why even in their earliest games it’s remarkably advanced for the era. Many of these early advances, like accepting “the” and requiring adjectives to come before nouns, were not especially appreciated by players, but they also gave Infocom the flexibility needed for things like giving multi-sentence orders to other characters (the solution to a puzzle in Enchanter).

I don’t actually know much about the internal machinery there—I’m not great at ZIL and the parser is the most complex bit of ZIL code there is—but I can tell you the broad strokes of how Inform’s parser works. There’s an internal register called wn (“word number”) which is initially set to 0. Each possible command structure (“grammar line”) is an array of routines. Each of these routines can consume any number of words (starting at wn, and then incrementing wn past them) and return whether it matches or not. For example, a routine that looks for the literal word “with” will look at word wn, see if it’s “with”, then either return failure or increment wn and return success. If the whole line matches, that’s the player’s action.

I’m glossing over a lot of details here (the parser is incredibly complex and has been steadily accumulating complexity without a full rewrite since before I was born) but this is the general idea behind it. This notably doesn’t allow for backtracking: each routine is expected to consume as much of the input as it can, without knowing what’s supposed to come next. This is why Inform doesn’t allow “[text] [something]” in a grammar line: the [text] routine won’t know where to stop!
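A loose Python sketch of that word-pointer scheme (the token routines and vocabulary here are invented; real Inform works on Z-machine dictionary words and a global `wn` register):

```python
# Each token routine consumes words starting at wn, incrementing wn past
# them, and reports success or failure. No backtracking: routines consume
# greedily without knowing what comes next.
class LineMatcher:
    def __init__(self, words, objects):
        self.words = words
        self.objects = objects
        self.wn = 0  # the "word number" register

    def literal(self, word):
        # Match one literal word, e.g. "with".
        if self.wn < len(self.words) and self.words[self.wn] == word:
            self.wn += 1
            return True
        return False

    def noun(self):
        # Greedily consume words that name objects. Because there is no
        # backtracking, a routine like [text] that could consume anything
        # would never know where to stop.
        start = self.wn
        while self.wn < len(self.words) and self.words[self.wn] in self.objects:
            self.wn += 1
        return self.wn > start

def try_line(words, tokens, objects):
    p = LineMatcher(words, objects)
    for kind, arg in tokens:
        ok = p.literal(arg) if kind == "lit" else p.noun()
        if not ok:
            return False
    return p.wn == len(words)  # the whole input must be consumed
```

So a line like `[("lit", "unlock"), ("noun", None), ("lit", "with"), ("noun", None)]` matches "unlock door with key" and fails on anything shorter.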

Of course, now that we generally have vast amounts of RAM to spare, a design using dynamic memory allocation is far more feasible than it was in Infocom’s day. Earley’s algorithm is an elegant way to build a general-purpose parser that doesn’t have these restrictions, at the cost of a lot more memory (while Inform’s parser uses only a fixed number of registers).
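For illustration (this is not anything Inform actually does), here is a compact Earley recognizer in Python over a toy grammar. The grammar and lexicon are invented, and words outside the lexicon aren’t handled:

```python
# Toy grammar: S -> VERB NP ; NP -> NOUN | ADJ NP
GRAMMAR = {
    "S": [["VERB", "NP"]],
    "NP": [["NOUN"], ["ADJ", "NP"]],
}
LEXICON = {"get": "VERB", "rusty": "ADJ", "lamp": "NOUN"}

def recognize(words):
    terms = [LEXICON[w] for w in words]
    # Each state: (lhs, rhs, dot position, origin chart index).
    chart = [set() for _ in range(len(terms) + 1)]
    chart[0].add(("S'", ("S",), 0, 0))  # augmented start rule
    for i in range(len(terms) + 1):
        added = True
        while added:  # run predict/complete to a fixed point
            added = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs) and rhs[dot] in GRAMMAR:
                    # Predict: expand the expected nonterminal.
                    for prod in GRAMMAR[rhs[dot]]:
                        new = (rhs[dot], tuple(prod), 0, i)
                        if new not in chart[i]:
                            chart[i].add(new); added = True
                elif dot == len(rhs):
                    # Complete: advance states waiting on this nonterminal.
                    for lhs2, rhs2, dot2, org2 in list(chart[origin]):
                        if dot2 < len(rhs2) and rhs2[dot2] == lhs:
                            new = (lhs2, rhs2, dot2 + 1, org2)
                            if new not in chart[i]:
                                chart[i].add(new); added = True
        if i < len(terms):
            # Scan: advance states expecting the terminal at position i.
            for lhs, rhs, dot, origin in chart[i]:
                if dot < len(rhs) and rhs[dot] == terms[i]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
    return ("S'", ("S",), 1, 0) in chart[-1]
```

The chart (one state set per input position) is exactly the dynamic-memory cost mentioned above: the grammar can be ambiguous and recursive, and nothing is consumed greedily.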


A classic at the time was “Language as a Cognitive Process” by Terry Winograd, 1983.

This covered most techniques from the 70s and early 80s. It’s a bit dated today, however.


We should note that later on, the Infocom people said “Dammit, let’s rewrite this parser from scratch now that we know what we’re doing!” That parser had a more rigorous technical basis. (See article from 1989.)

Technically speaking, we used an ATN algorithm with an LALR grammar and one-token look-ahead.

However, the “new parser” was very late in Infocom history. It only made it into Zork Zero, Arthur, and Shogun before Infocom shut down. The “modern” parsers (TADS/Inform/Hugo/etc) were not based on it in any significant way.


Why is that?

I wrote a CFG parser for JavaScript with the idea that it would eventually be one of the underpinnings of a complete IF library. Still working on the library off and on, but the parser is a thing that exists. You can find out more about it here:


I guess because those games didn’t provide a compellingly better experience. There was nothing about playing Zork Zero which made you think “Oh, this parser is the one I should imitate in my IF system.” If you were an experienced IF fan – and we all were – then you were typing simple PUT OX IN BOX commands anyway, and Zork 1 handled that perfectly well.


The Balladeer parser takes a different tack.

I did a lot of reading on Speech Acts and other similar perspectives which treat speech as a part of a social transaction.

So when you enter commands, you are not interacting directly with the physical model of the game world.

Instead, imagine the player as a Bertie Wooster and the parser as Jeeves.

In Balladeer there is an extra layer which allows the input to be interpreted as a dialect, and the set of phrases that are understood at any particular point is generated dynamically.


I think 95% of players are in fact basically Bertie Wooster so this seems like a sound approach.

Is there a more detailed description of Balladeer’s parser you could point to? It sounds very interesting!


Thanks for the interest 🙂. The situation right now is that Balladeer is very much a Python library and to use it you really have to consider your IF project fundamentally as a piece of software.

I understand and respect the tradition that other IF frameworks try to protect an author from that and that’s why there are vocabulary conventions and standardised object models and so forth.

If I have a mission in life it’s not to create epic stories, but to try and find a way to bridge that gap between free artistic expression and the hard realities of the computational environment.


My apologies, as I wasn’t sentient at the time that all of this was happening, but were there any noticeable differences from the player’s perspective that made the later system better? Or were players so used to the previous systems that these improvements were never noticed?


If nobody told me it had a new parser, I never would have known.

The graphical stuff was different, but that isn’t parser-specific.


I likewise don’t have first-hand knowledge, but this is from Jimmy Maher’s history of Zork Zero:

More puzzling is the impact — or rather lack thereof — of Stu Galley’s much-vaunted new parser. Despite being a ground-up rewrite using “an ATN algorithm with an LALR grammar and one-token look-ahead,” whatever that means, it doesn’t feel qualitatively different from those found in earlier Infocom games. The only obvious addition is the alleged ability to notice when you’re having trouble getting your commands across, and to start offering sample commands and other suggestions. A nice idea in theory, but the parser mostly seems to decide to become helpful and start pestering you with questions when you’re typing random possible answers to one of the game’s inane riddles. Like your racist uncle who decides to help you clean up after regaling you with his anecdotes over the Thanksgiving dinner table, even when Zork Zero tries to be helpful it’s annoying.


This explains why so few people solved my game. I knew it wasn’t insufficient clueing!


I think it is worth mentioning the KAOS parser in the last few Level 9 games (1987-1989), as it was much better than the parsers in their earlier games and added an original feature:

Though Infocom also allowed giving multiple commands at once to an NPC, e.g.:
(The turtle carries it all out in one turn)

But KAOS took it one step further: it literally allowed commanding NPCs just as if they were the player, so you could type e.g.:
John, e, e, climb ladder, get chicken, wait, d, w, w
[Fictive example]

Then John would execute one sub-command each turn while the player did something else, and in this specific example he would return to the first location after 8 turns.
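The queueing side of that mechanic is easy to sketch in Python (names are invented; this shows only the one-sub-command-per-turn idea, not Level 9’s code):

```python
from collections import deque

# Each NPC stores pending sub-commands and executes one per game turn,
# while the player acts independently.
class NPC:
    def __init__(self, name):
        self.name = name
        self.queue = deque()
        self.log = []  # what the NPC has actually done, in order

    def give_orders(self, command_string):
        # e.g. "e, e, climb ladder, get chicken, wait, d, w, w"
        self.queue.extend(c.strip() for c in command_string.split(","))

    def tick(self):
        # Called once per game turn; idle if there is nothing queued.
        if self.queue:
            self.log.append(self.queue.popleft())

john = NPC("John")
john.give_orders("e, e, climb ladder, get chicken, wait, d, w, w")
for _ in range(8):
    john.tick()
# john.log now holds all 8 sub-commands in order.
```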

I don’t think I have ever seen this mechanic in any other games than some by Level 9.


Suspended had the notion of NPCs engaging in multi-turn travel. I don’t think it let you queue up multiple commands, though.

The Enchanter turtle always seemed to me like… deliberately playing on a weakness of the parser. I know, it was the intended solution. But “TURTLE, SE. GET SCROLL” reads like the second half should be a player action, not a turtle command. If I meant the turtle to do both, I’d have said “TURTLE, SE. TURTLE, GET SCROLL”. But that wouldn’t parse because the turtle is out of scope by the second action. So you have to parse it the first way to make the puzzle work, which feels like cheating.


(feverishly writes notes)