AI Class Case Study: Using Inform 7 (Unlocking and Ambiguity)

No call to action here. I just thought it might be interesting for folks to see what some Inform authors try. (More specific context near the end of the post.) One of my student teachers reported extreme frustration in the class with this simple example:

The Office is a room.
The Lobby is a room.

A wooden oak door is east of the Office and west of the Lobby.
The wooden oak door is a locked door.
The player carries a door key.

Test me with "actions / east / open door / unlock the door".

So just by itself, if the player didn't have the key, Inform would try to unlock the door with the only other object in scope, which was the wooden oak door itself. The friction here was that Inform's model was totally opaque to them: they didn't realize that Inform was just picking the only object in scope.

With the key present, it seemed to make sense to them that Inform tried to unlock the door with the key. I had to explain that Inform was not choosing the key because it's called a "key"; it's just choosing an object the player has access to. When that key isn't present, Inform falls back on the wooden door.

This made zero sense to people and was an interesting cause of friction. Once I explained what was going on, they got it. But then they pointed out that the Inform guide mentions "basic rules of realism." Isn't trying to unlock something with itself a violation of a basic rule of realism? Isn't using a door to unlock itself a bit unrealistic?

That leads into interesting discussions about simulation and world models. But then they got into this:

Does the player mean opening the door key:
	it is unlikely.

The challenge was then to handle the last command, "unlock the door," which still produced:

>[4] unlock the door
(the door key with the door key)
[unlocking the door key with the door key]

The seemingly sensible way people found to word a similar approach was:

Does the player mean unlocking a door:
    it is likely.

Obviously that didn’t work. So here was the field report from the student teacher:

I should note that they are aware of actions and rules. However, this particular class is trying an approach of seeing how consistently a given language is presented and how consistent the code constructs are (or at least are perceived to be) in terms of conveying intent and having that become implementation.

I have tons of notes on classes experimenting with Inform 7 like this and it’s really interesting.

Some of the context here is that this is a precursor to using AI-style tooling to describe these kinds of situations and have an AI test the game, meaning the AI would eventually design the "test me" statements. That's also leading to the AI writing the game logic itself. So the game logic the students type is being tokenized and used as part of the training material for the AI. In this case, when the AI saw the first successful construct:

Does the player mean opening the door key:
	it is unlikely.

And was then told that "unlock the door" was still ambiguous, it tried to construct almost exactly the same logic the humans in the class were trying.

Some of these field reports are really interesting but this one just stuck out as an early attempt that frustrated one particular class.

Here were two other things the AI tried that the human class did not:

Does the player mean unlocking something with itself:
    it is very unlikely.


Does the player mean unlocking a door with a key:
	it is very likely.

Notice how the AI's instinct is to generalize, whereas the humans tried to be more specific, such as with:

Does the player mean unlocking the wooden oak door with the door key:
    it is very likely.

None of it works, of course, but it’s kind of fascinating to see that play out.

What’s going to be interesting to see is if someone (human or AI) hits upon:

Does the player mean unlocking the door key with:
	it is very unlikely.

I’m not going to try to address what the AI types, as we have enough trouble figuring out what humans type. :)

There are a couple of underlying problems here:

  • The "does the player mean" (DTPM) rules can prioritize one action over another, but they cannot rule out an action entirely. So they are simply unable to prevent the "(the door key with the door key)" case.

  • The parser was designed for Infocom-like games with lots of items lying around. In some sense, the behavior when there’s exactly one object in scope is just a pathological case, and the correct solution is “give the player some junk”.

I realize that's not a very satisfying answer. It violates the "you shouldn't have to know weird shit to write your first Inform game" rule. However, that's the way the parser has worked since Inform 5, and there has been no work to address it.
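As a partial workaround: since the DTPM rules only weight the parser's choice, the self-unlocking outcome can at least be intercepted after parsing with an ordinary check rule. A minimal sketch (not from the thread, and note the parser will still print the "(the door key with the door key)" clarification before this fires):

Check unlocking something with something when the second noun is the noun:
	say "You can't unlock something with itself." instead.

This doesn't fix the disambiguation itself, but it turns the nonsensical attempt into a sensible refusal.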


I wonder how many people would notice if the “choose the only object in scope if there’s only one” feature was removed? GET without a noun isn’t as common as it used to be.


This is just one of the many issues with implicit actions. I don’t use or understand Inform 7, but in Inform 6, I disable implicit actions and do my own implicit actions when the indirect object is omitted. In the case of UNLOCK DOOR (without specifying the key), I merely extend the grammar to allow 'unlock' noun and replace the library’s code with my own. If the with_key object is in scope, it uses it, otherwise it says you don’t have the key. It’s a similar concept for LOCK. This avoids all the really stupid responses that Inform normally produces, like attempting to unlock the door with the dish cloth.

Surely, you can do something similar in Inform 7.
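For what it's worth, the closest Inform 7 analogue of that with_key logic would likely be a rule for the "supplying a missing second noun" activity. A sketch only, assuming the door key from the example upthread and the trailing-"with" action pattern shown later in this thread:

Rule for supplying a missing second noun while unlocking something with:
	if the player carries the door key:
		now the second noun is the door key;
	otherwise:
		say "You don't have the key."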

You can, but then you’re completely eliminating the “What do you want to unlock the door with?” disambiguation stage.

This may make sense for UNLOCK in some games (when keys aren’t a mystery). But I wouldn’t do it for all games, and certainly not for all two-noun actions. (“Who do you want to give the gift to?” is a perfectly sensible disambiguation question.)

What's been interesting is how much better the AI gets when it has a very refined form of the manual to tokenize and learn from. For example, I crafted a lot of examples that only show the technique being described, without a lot of the "extras" that many of the manual's Inform examples include.

Having the AI – and people, incidentally – learn from that has shown quite a bit of convergence on getting to better solutions more rapidly.

Here's a case where some descriptive text from the manual threw the AI as well as people: the AI wasn't clear why the second example in that passage would create a "one-way connection," and neither were the people who actually tried to use the source in an example.

But these refined examples can still lead to the unlocking ambiguity I showed above. So this led us to create an example that had the last construct I showed in place:

Does the player mean unlocking the door key with:
	it is very unlikely.

The key there is using "with" as the last word, with nothing after it. That gives the desired effect in the current context.
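Putting the thread's pieces together, the refined example ends up along these lines (assembled from the snippets above):

The Office is a room.
The Lobby is a room.

A wooden oak door is east of the Office and west of the Lobby.
The wooden oak door is a locked door.
The player carries a door key.

Does the player mean opening the door key:
	it is unlikely.

Does the player mean unlocking the door key with:
	it is very unlikely.

Test me with "actions / east / open door / unlock the door".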
