No call to action here; I just thought it might be interesting for folks to see what some Inform authors try. (More specific context is near the end of the post.) One of my student teachers reported extreme frustration in the class with this simple example:
The Office is a room.
The Lobby is a room.
A wooden oak door is east of the Office and west of the Lobby.
The wooden oak door is a locked door.
The player carries a door key.
Test me with "actions / east / open door / unlock the door".
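(One side note, in case anyone pastes this in: as quoted, the source never says which key actually fits the lock, so even an unambiguous unlock attempt would presumably be refused. The manual's lock-and-key examples usually also include a line like the one below; the class's full source may well have had it.)

[Not in the quoted source; assumes the door key is meant to fit this lock.]
The matching key of the wooden oak door is the door key.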
So, just by itself: if the player didn't have the key, Inform would try to unlock the wooden oak door with the only other object in scope, and the only thing in scope was the wooden oak door itself. The friction here was that Inform's model was totally opaque to them. They didn't realize that Inform was just picking the only object in scope.
With the key, it seemed to make sense to them that Inform tried to unlock the door with the key. I had to explain that Inform was not choosing the key because it's called a "key"; it's just choosing an object that the player has access to. When that key isn't present, Inform falls back to using the wooden door.
This made zero sense to people and was an interesting source of friction. Once I explained what was going on, they got it. But then they pointed out that the Inform guide talks about "basic rules of realism." Isn't "you can't unlock something with itself" a basic rule of realism? Isn't using a door to unlock itself a bit unrealistic?
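For what it's worth, as far as I know the Standard Rules have no dedicated "you can't unlock something with itself" check. If an author wanted to enforce that particular rule of realism, a minimal sketch might be a check rule along these lines (my wording, not something the class wrote, and untested against their exact source):

[A hypothetical "can't unlock something with itself" rule; not part of the Standard Rules.]
Check unlocking something with something:
	if the noun is the second noun, say "You can't unlock [the noun] with itself." instead.

That said, a rule like this only refuses the action after the parser has already guessed the door as the tool; steering the guess itself is a disambiguation problem, which is what came up next.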
Questions like that lead into interesting discussions about simulation and world models. But then they came up with this:
Does the player mean opening the door key:
it is unlikely.
The challenge was then to handle the last command, "unlock the door", which still resulted in:
>[4] unlock the door
(the door key with the door key)
[unlocking the door key with the door key]
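For reference, the construct that usually steers this sort of thing is a "does the player mean" rule that names the specific objects involved, something like the sketch below (my guess at a fix, not what the class tried, and not tested against this exact source):

[Hypothetical disambiguation hints naming the objects from the example above.]
Does the player mean unlocking the wooden oak door with something: it is very likely.
Does the player mean unlocking the door key with something: it is very unlikely.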
The wording people in the class found sensible for a similar rule was:
Does the player mean unlocking a door:
it is likely.
Obviously that didn’t work. So here was the field report from the student teacher:
I should note that they are aware of actions and rules. However, this particular class is trying an approach of seeing how consistently a given language presents itself, and how consistent its code constructs are (or at least are perceived to be) at conveying intent and having that intent become implementation.
I have tons of notes on classes experimenting with Inform 7 like this and it’s really interesting.
Some of the context here is that this is a precursor to using AI-style tooling to describe these kinds of situations and have an AI test the game, meaning the AI would eventually design the "test me" statements. That's also leading toward the AI writing the game logic itself. So the game logic the students type is being tokenized and used as part of the training material for the AI. In this case, the AI saw the first successful construct:
Does the player mean opening the door key:
it is unlikely.
When it was then told about the problem of "unlock the door" being ambiguous, it tried to construct almost exactly the same logic the humans in the class had come up with.
Some of these field reports are really interesting, but this one stuck out as an early attempt that frustrated one particular class.