Implementing ChatGPT's Parser

This is not about using ChatGPT to help write games (which I’m also finding rather interesting and useful). It is about using it as the parser for game interaction.
It is truly incredible how well it can interpret a user’s input. Imagine how much that would simplify game creation, eh? Not having to account for every obscure synonym a player might try, or every possible description of an action. Not to mention CGPT’s ability to construct responses. But that’s another topic.
Sadly, I am not a programmer. At all. Except for Inform 7, and so far I’m not all that great at that either LOL.
But perhaps someone(s) in the community might be interested in pursuing such a project.
I have already tried asking CGPT to write an extension for Inform 7 that implements its parser, but it is very resistant to doing so. … So far. … LOL

Side Note: It’s actually not all that great at writing I7 code, which I find odd. But it can lay out a decent enough skeleton, especially when approaching things as pieces rather than as a whole. (But that’s a different post.)
EDIT: But you CAN directly feed it the bug report output from the compiler and it will correct the code based on that. Which is nice. Could be useful for finding and fixing bugs.


This is maybe a facile observation, but I find ChatGPT to be pretty bad at actually understanding even very basic things. It’s really good at pretending intelligence, but in several instances I get a distinctly Eliza-ish response, while in others the system just plain bullshits you.

It’s different when you do creative writing, I get that. Still, I find myself having to triple-check everything produced via ChatGPT, because it tends to invent facts and present them as the truth, confidently and without an ounce of shame.


ChatGPT and its predecessors don’t understand the user’s input.

Usually when people say this they mean “It doesn’t understand the way a human does”. Then you have a philosophical debate about human cognition and it doesn’t go anywhere. But that’s not what I mean. I mean it doesn’t understand input the way Zork does. It doesn’t do the job that the crappy two-word parser I wrote in BASIC on my Apple II does.

An IF parser matches input against its world model. You type “GET LAMP”; it sorts through and figures out that it should run the Take() routine on object 37:LANTERN. Then the Take() routine updates the inventory array and displays the response: “Taken.”

An AI chatbot has no stock of action routines or objects. It doesn’t model the player’s inventory as an array. You type “GET LAMP” and it generates output that it thinks should follow “GET LAMP”, word by word. That may be “Taken”, particularly if it was trained on IF transcripts, but there’s no internal state. There’s no way for the game to pick out that the Taking action or the Lamp object was involved. In particular, it doesn’t know that “LAMP” and “LANTERN” are synonyms for the same object; it just knows that they’re associated with similar sentence patterns in its training data.
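For anyone who hasn’t written one, the loop zarf describes can be sketched in a few lines. Everything below, object numbers included, is invented for illustration; the point is that several surface words map to one internal object, and each command mutates persistent state:

```python
# Toy two-word parser with an explicit world model, in the spirit of the
# "GET LAMP" example above. All names and object numbers are made up.

# Vocabulary: "lamp" and "lantern" are synonyms for the same object.
OBJECTS = {"lamp": 37, "lantern": 37, "sword": 12}
VERBS = {"get": "take", "take": "take", "drop": "drop"}

inventory = set()  # persistent game state, the thing a chatbot lacks

def do_command(line):
    words = line.lower().split()
    if len(words) != 2:
        return "I only understand two-word commands."
    verb, noun = words
    if verb not in VERBS:
        return f"I don't know the verb '{verb}'."
    if noun not in OBJECTS:
        return f"I don't see any '{noun}' here."
    obj = OBJECTS[noun]
    if VERBS[verb] == "take":
        inventory.add(obj)   # update the world model...
        return "Taken."      # ...then report the result
    inventory.discard(obj)
    return "Dropped."

print(do_command("GET LAMP"))      # Taken.
print(do_command("DROP LANTERN"))  # same object 37, so: Dropped.
```

Crude as it is, this little program has something ChatGPT does not: after "GET LAMP", object 37 really is in the inventory.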

This is why people refer to these algorithms as “black boxes”.


In addition to zarf’s very astute observation, it’s important to note that today’s chatbots, including ChatGPT, operate on probabilistic models, which means a hypothetical author could never be sure what the bot would do.

When writing a parser game in Inform, on the other hand, an author who takes enough care can know exactly how their game will act in every situation. Inform and its various interpreters are deterministic.


A parser doesn’t have a stock of action routines either. That’s the job of the rules. A parser is vocabulary and grammar.
Using your “GET LAMP” example: If a game’s parser were programmed only to recognize that, it wouldn’t accept “TAKE LAMP” or “EQUIP LANTERN” or “OBTAIN LIGHT” or any other of numerous possibilities.
ChatGPT can. And that’s my point. Games wouldn’t have to be programmed with every possible way a player might say something, which removes a lot of hassle. HECK!! It can even deal with idiom.
You can also assign “shortcut words” in real time. For instance: “When I say ‘AGLOMMERATE’ I mean ‘TAKE ALL THEN EXAMINE ALL.’” From that point on you can just type ‘aglommerate’ and CGPT does as you want. That’s a rather useful parser shortcut ability in general, IMO, and it wouldn’t really need an AI to implement either.
Also, when it doesn’t understand what you say, you can explain it, and it will remember that from then on (within the same conversation).
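The shortcut-word trick, at least, really is plain macro expansion and needs no AI, as the poster says. A minimal sketch (the alias is the poster’s own example):

```python
# "Shortcut words" as simple macro expansion: a user-defined word expands
# to a longer command string before the parser ever sees it.

aliases = {}

def define_alias(word, expansion):
    aliases[word.lower()] = expansion

def expand(command):
    # Unknown commands pass through to the parser unchanged.
    return aliases.get(command.strip().lower(), command)

define_alias("AGLOMMERATE", "TAKE ALL THEN EXAMINE ALL")
print(expand("aglommerate"))  # TAKE ALL THEN EXAMINE ALL
print(expand("go north"))     # go north
```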

[to others]
And yes, I know it doesn’t “understand” or “interpret” anything. Regardless, the effect is the same. And CGPT is far, far better than Eliza. C’mon, get serious. That’s a very exaggerated comparison.
Not to mention, it could be trained specifically on interactive fiction.



As an author, I think one thing that would be great is if the technology behind chatgpt made it possible to create a fake player for games that would randomly type in weird things.

In fact, I guess it could do that now; telling it to pretend to be a player, and then feeding it the output of my game for its responses.

That would help me see what kind of things I need to code for a typical player, since it would emulate the aggregate of many past players.
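Even without an AI, a crude version of this fake player can be scripted by sampling verbs and nouns at random and throwing the results at the game; an LLM-driven tester would replace the random choice with model output. A sketch, with an invented verb list:

```python
# Dumb "fake player": generate random verb-noun commands to fuzz a parser.
# The verb list is invented; nouns would be scraped from the game's output.
import random

VERBS = ["take", "drop", "open", "push", "eat", "x"]

def random_commands(nouns, n, seed=0):
    rng = random.Random(seed)  # seeded, so a failing run can be replayed
    return [f"{rng.choice(VERBS)} {rng.choice(nouns)}" for _ in range(n)]

for cmd in random_commands(["lamp", "door", "troll"], 5):
    print(cmd)  # feed each line to the game and log any parser failures
```

This catches missing responses, but not the plausible-yet-weird phrasings real players type, which is exactly the gap an AI player might fill.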

I do have concerns for its reliability for authoring games, though. There’s a problem of ‘procedural oatmeal’, where procedural generation is brilliant in short-term but becomes indistinguishable and bland in the long term.


It would be possible (if not necessarily easy) to train an AI to actually map input to internal functions, or to recognise a misunderstood command as a synonym of an understood one. It might possibly even be an improvement over the current hard-coded way. But it would be a lot of work for a very small potential improvement.

What you want doesn’t seem to be that, though, but to make the AI improvise new responses as well, the way a human dungeon master does when a player wants to do something they haven’t planned for. That is much harder to do well and much more likely to conflict with the hard-coded human-programmer made responses and world model, causing lots of confusion.


Hey now!! I think I’ll go try that now. Load up Zork. Tell CGPT we’re going to play an interactive fiction game, Zork. Then feed it the game’s output and have it provide the input.
First, though, I have to make sure it doesn’t just know the solution(s) to the entire game. That would be no good.
(… Checking …)
No. It does not.
Also, it refuses to “help me cheat,” and will only provide hints rather than solutions to help me play. LOL How nice of it.

As for authoring, I’m finding it useful more as a springboard sort of thing. Interacting with it, changing things as we go, I’ve come up with a couple of interesting game ideas worth pursuing.
For instance: the old first-edition Paranoia pen & paper RPG combined with modern SCP Foundation material. It has the entire SCP Foundation catalog through 2021 inside it, as well as the entire ruleset for that RPG. Pretty useful.


No no. “Just” using the parser to read and “interpret” player input.
Note that in the OP I did note that:

It would have some use there, but … yeah … Different subject. :slight_smile:


That’s not really what ChatGPT does, so you would have to make a new program for this. Basically, you would have to train it to extract the verb, nouns, and so on from the player’s input and then pass these over to the hard-coded, human-made functions. Or to translate a string of player input that the game does not understand into a synonymous string that it will understand.
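The “translate to a synonymous string” idea can be sketched without any machine learning at all, using a hand-made synonym table; in a hypothetical trained system, the table lookup is roughly where the learned component would slot in. All words and mappings below are invented:

```python
# Sketch: map free-ish input onto a canonical VERB NOUN command string
# via hand-built synonym tables (a stand-in for a trained model).

VERB_SYNONYMS = {"get": "take", "grab": "take", "obtain": "take",
                 "equip": "take", "pick": "take", "take": "take"}
NOUN_SYNONYMS = {"lamp": "lantern", "light": "lantern", "lantern": "lantern"}
FILLER = {"the", "a", "an", "up"}

def normalize(line):
    """Return a canonical command string, or None if nothing matched."""
    words = [w for w in line.lower().split() if w not in FILLER]
    verb = next((VERB_SYNONYMS[w] for w in words if w in VERB_SYNONYMS), None)
    noun = next((NOUN_SYNONYMS[w] for w in words if w in NOUN_SYNONYMS), None)
    if verb and noun:
        return f"{verb.upper()} {noun.upper()}"
    return None

print(normalize("obtain the light"))  # TAKE LANTERN
print(normalize("pick up the lamp"))  # TAKE LANTERN
```

The canonical string then goes to the ordinary parser, so the game’s deterministic world model stays in charge of what actually happens.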

I’m no expert, but I don’t think this is the kind of thing that machine learning is really suited for. I’d be happy to be proven wrong, though.


A parser is something that transforms syntax. The parser part of an IF system takes human input and emits a syntax tree. To get the meaning, this syntax tree must be resolved against the world model.

Usually, when people refer to an IF parser, they’re actually referring to the syntax part and the semantic part as well.

A parser can be a standalone thing, but semantic resolution cannot. This means you could have an “AI parser” trained to output a syntax tree. Then apply that tree the same as before. However, the syntax part isn’t really that complicated. The hard part is the resolution.
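As a toy illustration of that split (all object names invented): a parser stage that emits a tiny tree, and a resolver stage that matches it against the world model. Note how quickly the difficulty concentrates in the resolver once two objects share a name:

```python
# Two stages: parse() handles syntax only; resolve() handles semantics
# by matching the noun phrase against objects in the world model.

WORLD = [
    {"id": 1, "name": "lantern", "adjectives": {"battered", "brass"}},
    {"id": 2, "name": "lantern", "adjectives": {"broken"}},
]

def parse(line):
    """Syntax only: split input into a verb and a noun phrase."""
    verb, *rest = line.lower().split()
    return {"verb": verb, "noun_phrase": rest}  # assumes verb-noun input

def resolve(tree, world):
    """Semantics: find the unique object the noun phrase refers to."""
    *adjs, head = tree["noun_phrase"]  # last word is the head noun
    matches = [o for o in world
               if o["name"] == head and set(adjs) <= o["adjectives"]]
    if len(matches) == 1:
        return matches[0]
    return None  # unknown or ambiguous; this is where it gets hard

print(resolve(parse("take battered lantern"), WORLD)["id"])  # 1
print(resolve(parse("take lantern"), WORLD))  # None: which lantern?
```

A real resolver also has to handle scope, pronouns, plurals, and disambiguation questions, which is why the resolution side dwarfs the syntax side.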

To make the resolver also an AI component, the AI would have to model the world as well. Unfortunately, what we’re currently referring to as AI (DNNs, etc.) cannot store any state. It’s just a giant input-to-output weighted map.

I’d like to add, the concept of “AI” has recently been hijacked to essentially mean DNN based implementation. Back in the day, IF systems like Zork were considered AI systems themselves.

So when you talk of an “AI parser”, care is needed in what you exactly mean both by the terms “AI” and “parser”.


You do realize that in order for this to happen, you need massive quantities of data, such that you’re practically teaching common sense to a computer?

Considering common sense isn’t that common, well… good luck!


My vision of integrating ChatGPT into IF would be something like this.

Instead of writing out detailed descriptions, the author would identify the salient points of a description and feed them to GPT. GPT would produce varied results, but keep those salient points. The author could even reference styles of prose for the output.

I think this would be very interesting.


I tried this. It does a pretty poor job of it. Sometimes it forgets it’s the player. All of the time it’s terribly loquacious. It refuses to use one/two/four word commands.


I tried having it play Zork I.
First I asked it a bunch of questions to see if it just knew the entire walkthrough of the game. It didn’t appear to.
At first it was reluctant to play, as it only wanted to give me hints to help me play. I explained that I’ve played it, and that this is an experiment. After that it was okay with it.
So I give it the very first output from the game, and …
It gave a command, then proceeded to write what it imagined the game would respond, then gave its next input to that, and the game’s output … etc … I had to stop it. LOL
So then I tried a game from the 2022 IFComp. That was going much better, but then the website cut me off for using it too much in an hour. LOL D’OH
Certainly interesting.
The results, I find, very much depend on how you phrase things to it.


Hot take: the parser is a lie anyway. The complex sentences beginners want to type are meaningless in the world model, so it’s a lie from that end. The actual inputs that experienced players want to use (X ME. W. DROP GREEN) don’t particularly resemble English, so it’s a lie from that end as well. The truth of the parser is that a command line must have syntax, and pseudo-English has a shallow learning curve. But if you want to fix the parser, it would be better to help players construct valid input. Actually, expanding the language accepted would just make it harder to implement quality-of-life features like input completion, or validation as you type.



While there is a lot of truth to that, one of the main attractions of the parser is that it is not a Unix shell, but gives the illusion of total freedom. The momentary feeling that you can do anything. No other computer game interface can offer that. And that feeling is diminished by things like input completion and validation-as-you-type. The lie is the point, if you like.


battered lantern: There is an almost blinding flash of light as the battered lantern begins to glow! It slowly fades to a less painful level, but the battered lantern is now quite usable as a light source.

empty jug: There is an almost blinding flash of light as the empty jug begins to glow! It slowly fades to a less painful level, but the battered lantern is now quite usable as a light source.

spell book: There is an almost blinding flash of light as the spell book begins to glow! It slowly fades to a less painful level, but the battered lantern is now quite usable as a light source.

Who is the illusion for, though? Logically, a Unix shell should give a greater impression of freedom. The Unix shell and the parser may not convey it clearly, but with a certain depth of experience doesn’t the fact sink in that the shell has four thousand commands, and the parser game has forty? Maybe I experience the illusion of being able to type anything in my first few hours playing IF, but after that?

IMO the impression of total freedom comes only somewhat from the illusion that you can type anything. It comes from the real cycle of thinking of an unprompted action, communicating it, and having the game understand what you intended to do. (As opposed to playing a link-based game, which can feel more like following instructions than giving them.)

From this perspective, completion, spell check, and so forth should actually increase the impression of freedom, because they will act to tighten up that cycle. That is, the player will spend less time typing input which turns out to be invalid, and the game will give more frequent indications that it is right there with the player, with respect to understanding. Fighting with the parser does not really enhance the impression of freedom.

In a parser game, the player absolutely has to get inside the world model and think about what actions are possible. They have to explore the space of possible actions with only their understanding as a guide, rather than working from user-interface hints about how many items are presently in play and how many verbs. I don’t see why spell check or completion would need to give up the game in this regard.

The goal would be to implement the details such that before the player knew what they wanted to do, they would get no help, but after they knew, the game would rush to assure them it understood. Some tricks might be necessary, like padding the verb list with red herring verbs. You’d be careful not to make it too trivial to game the user interface into giving you hints.

I think of it more as a question of scale and combinatoriality. A complex roguelike like Nethack has a similar feeling of freedom, despite all the commands being open information. But there are about forty of them, and you might be carrying as many items, some of which have idiosyncratic uses, so overall there are enough possibilities to sift through that it’s not easy to keep them all in mind. Parser games add the possibility of commands you haven’t seen before, rather than just commands you’ve forgotten, but it’s a similar principle.

I should stop rambling, but I believe the freedom of parser games is the freedom to think through the world model and come up with a non-obvious command unassisted. You get the chance to tell the game you understand it, and in return it tells you that it understands you.