ChatGPT (and its predecessors) doesn’t understand the user’s input.
Usually when people say this, they mean “It doesn’t understand the way a human does”. Then you have a philosophical debate about human cognition, and it doesn’t go anywhere. But that’s not what I mean. I mean it doesn’t understand input the way Zork does. It doesn’t do the job that the crappy two-word parser I wrote in BASIC on my Apple II does.
An IF parser matches input against its world model. You type “GET LAMP”; it sorts through and figures out that it should run the Take() routine on object 37:LANTERN. Then the Take() routine updates the inventory array and displays the response: “Taken.”
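Here’s a toy sketch of that kind of parser, in Python rather than BASIC. The object number 37 and the Take() routine come from the example above; the other names are just illustrative.

```python
# A toy two-word parser: match the verb and noun against a world model,
# then dispatch to an action routine that updates game state.

OBJECTS = {
    37: {"name": "brass lantern", "synonyms": {"LAMP", "LANTERN"}, "location": "room"},
}

inventory = []  # object ids the player is carrying

def find_object(noun):
    """Look the noun up in the world model; LAMP and LANTERN both resolve to 37."""
    for obj_id, obj in OBJECTS.items():
        if noun in obj["synonyms"]:
            return obj_id
    return None

def take(obj_id):
    """The Take() routine: move the object into the inventory array."""
    OBJECTS[obj_id]["location"] = "player"
    inventory.append(obj_id)
    return "Taken."

VERBS = {"GET": take, "TAKE": take}

def parse(command):
    words = command.upper().split()
    if len(words) != 2:
        return "I don't understand that."
    verb, noun = words
    action = VERBS.get(verb)
    obj_id = find_object(noun)
    if action is None or obj_id is None:
        return "I don't understand that."
    return action(obj_id)

print(parse("GET LAMP"))   # Taken.
print(inventory)           # [37]
```

The response “Taken.” falls out of an actual state change: object 37 is now in the inventory list.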
An AI chatbot has no stock of action routines or objects. It doesn’t model the player’s inventory as an array. You type “GET LAMP” and it generates output that it thinks should follow “GET LAMP”, word by word. That may be “Taken”, particularly if it was trained on IF transcripts, but there’s no internal state. There’s no way for the game to pick out that the Taking action or the Lamp object was involved. In particular, it doesn’t know that “LAMP” and “LANTERN” are synonyms for the same object; it just knows that they’re associated with similar sentence patterns in its training data.
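For contrast, here’s the chatbot side reduced to the same toy scale. The next_token() stub and its canned continuations are mine, standing in for the trained network; a real model returns a probability distribution over its whole vocabulary. But the shape of the loop is the point: the transcript is the only state there is.

```python
# A sketch of what the chatbot does instead: no verbs, no objects, no inventory.
# It just repeatedly asks "what token is likely to come next?" and appends it.

def next_token(text):
    # Stand-in for the trained network. A real model would score every token
    # in its vocabulary given the text so far; this stub just hard-codes a
    # plausible continuation of "GET LAMP".
    canned = {"> GET LAMP\n": "Taken", "> GET LAMP\nTaken": "."}
    return canned.get(text)  # None means "stop generating"

def generate(prompt):
    text = prompt
    while True:
        token = next_token(text)
        if token is None:       # model decides to stop
            break
        text += token           # the text so far is the only "state"
    return text

print(generate("> GET LAMP\n"))
# > GET LAMP
# Taken.
# Nothing was taken: no object was looked up, no inventory was updated.
```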
This is why people refer to these algorithms as “black boxes”.