Lab Session (concluded): Giving commands to remote NPCs

I proposed a new series to the other Mad Scientists, the idea for which is that we all put our heads together to discuss and try to solve particularly thorny problems. The response was positive, and @Zed suggested a number of problems to examine (and also “Lab Session” as a title). The first on his list was the scenario of giving orders to NPCs that are in a location different from that of the PC, so we can use that for a trial run.

Note that the ultimate goal of this series would be to make suggestions for improving Inform, so please don’t hesitate to contribute your own questions and ideas to the mix, even if your interest in the technical side of things is limited. This isn’t supposed to be an exercise for mad scientists only – it’s more like “open house” for the mad science labs.


THIS THREAD’S TARGET

Dealing with an NPC that is in a different part of the world model from the PC has been tricky since the days of Inform 6 (see Ex 101 in DM4). Scope determination is fundamentally PC-centric (though some improvements have been made in I7), and the concept of scope is interlinked with both the in-world visibility relation and the knowledge states of in-world agents (PC, NPC) and of the out-of-world human player.

[EDIT: This Lab Session has been concluded. A status report is listed at the end of this post.]

DEFINING THE PROBLEM

So… it’s probably a good idea to get a clear definition of the problem before trying to solve it. These are the core issues as I understand them:

  1. The rules for NPC scope determination are not the same as those for PC scope determination.
  2. Floating objects (e.g. backdrops, two-sided doors) aren’t relocated when considering NPC scope.
  3. It’s hard to say how the parser should interpret a word when three different knowledge states (human player, PC and NPC) are being modeled.

(Are there any other aspects of the situation that I didn’t cover, Zed?)

BACKGROUND READING

This is a complicated subject that is addressed very little in the primary documentation for Inform 7. For an overview, I would suggest:

DEMONSTRATING THE PROBLEM

To better illustrate issues #1 and #2, consider the following:

Sample Scenario
"Scope for NPCs"

Cavern is a room. "A dark pit is in the center of the cavern. Rough stairs cut into its outer edge descend into darkness." The player is in Cavern.

There is a backdrop called a fissure. "It's pretty far up." It is everywhere. It is not scenery.

Rule for writing a paragraph about the fissure:
	say "High above, a slash of brightness marks the fissure from which you first entered the caves."

Pit is a dark room. It is down from Cavern.

A person called Robot A is in Pit.

A gold nugget is in Pit.

Alcove is a dark room. It is east from Pit.

To decide whether (O1 - object) is in scope for (O2 - object):
	(- (TestScope({O1}, {O2})) -).

To show complete scope for (P - person):
	let scoped be a list of objects;
	repeat with obj running through objects:
		if obj is in scope for P, add obj to scoped;
	say "In scope for [P]: [scoped]."

Every turn:
	repeat with P running through people:
		show complete scope for P.

Test me with "z / down / east".

which will produce output demonstrating some potentially unexpected behavior:

Sample Transcript
Cavern
A dark pit is in the center of the cavern. Rough stairs cut into its outer edge descend into darkness.

High above, a slash of brightness marks the fissure from which you first entered the caves.

>z
Time passes.

In scope for yourself: yourself and fissure.
In scope for Robot A: Robot A and gold nugget.

>down

Darkness
It is pitch dark, and you can't see a thing.

In scope for yourself: yourself.
In scope for Robot A: yourself, fissure, Robot A and gold nugget.

>east

Darkness
It is pitch dark, and you can't see a thing.

In scope for yourself: yourself.
In scope for Robot A: Robot A and gold nugget.

For issue #3, the key question is: If an item A is in scope for the NPC but not the PC, how should the game handle a human player’s command that tries to reference item A?


Let’s fire up those Bunsen burners and see what we can do!


CURRENT STATUS

Exciting things have been accomplished! So far, we’ve managed:

  • Three subproblems:
    1. Handling of scope is different between PCs and NPCs - done
    2. Floating objects don’t relocate for NPCs - done
    3. Mediation of knowledge state of PC vs. NPC vs. human player - done
  • Scope creep:
    4. Scope extension and visibility extension are conflated - done
    5. Visibility and touchability are PC-centric - done
    6. Determine object illumination relative to the viewer’s position - done
    7. Unify all sensory tests under the same object search framework - done
    8. Driving scope from senses available to the PC - done
    9. Extending senses beyond a range allowed by the world model - done
    10. Arranging better feedback about the cause of failed actions (i.e. preserving the reason the action failed) - done
    11. Support for extending the sensory model - done

The scope creep isn’t just overkill, because another item on Zed’s list of suggested topics is “achieving (or approaching) PC-NPC equivalence,” which can be considered a subproblem of the goal here.


A couple of things:

  1. You need to indicate that the pit is below the cavern.
  2. I get the following output: (Edit: I think you need to specify the backdrop as everywhere?)
Scope for NPCs
An Interactive Fiction
Release 1 / Serial number 240512 / Inform 7 v10.1.2 / D

Cavern
A dark pit is in the center of the cavern. Rough stairs cut into its outer edge descend into darkness.

>z
Time passes.

In scope for yourself: yourself.
In scope for Robot A: Robot A and gold nugget.

>d

Darkness
It is pitch dark, and you can't see a thing.

In scope for yourself: yourself.
In scope for Robot A: yourself, Robot A and gold nugget.

>

Oops – I modified the example source in I7 while composing but forgot to copy the changes to the post while editing. Fixed above.


The most glaring example of this is, as described in the link @otistdog gave above, that NPCs’ scope is unaffected by darkness and light: they can interact with things in darkness – things which, according to Inform, they ‘can’t see’ – in ways not open to the PC (and, occasionally, under very unusual circumstances, vice versa).


Hence:

"DarknessAnd Light" by PB

Lab is a dark room.

The sword is in the Lab. The dwarf is a person in the Lab.

The block giving rule does nothing.

Persuasion: rule succeeds.

After deciding the scope of the player:
	place the dwarf in scope.
	
Test me with "take sword/dwarf, take sword/dwarf, give me sword/i".

Darkness And Light
An Interactive Fiction by PB
Release 1 / Serial number 240512 / Inform 7 v10.1.2 / D

Darkness
It is pitch dark, and you can’t see a thing.

>test me
(Testing.)

>[1] take sword
You can’t see any such thing.

>[2] dwarf, take sword
>[3] dwarf, give me sword
>[4] i
You are carrying:
a sword


We can start by looking at the routine TestScope() to see how it works. Most of the statements are there to ensure that certain globals are undisturbed by the process and/or that their values don’t inadvertently interfere. After stripping out these, we are left with:

[ TestScope obj act a al sr x y;
	...
	if (act == 0) actor = player; else actor = act;	! set actor to player if not provided as parameter
	actors_location = ScopeCeiling(actor);			! determine highest scope ceiling for actor
	SearchScope(actors_location, actor, 0); ...		! look through all objects in that scope ceiling, excluding actor and children
	...
	return x;										! <-- will be true if an object found by SearchScope() is obj
];

ScopeCeiling() basically returns the highest enclosing object (room or closed opaque container) that would limit the PC’s visibility assuming that light is available. In special cases this is thedark or the player itself. It uses a routine VisibilityParent() to climb up through the object tree from the actor. (I note that VisibilityParent() arguably should use the same logic as IsSeeThrough(), but it doesn’t. I also note that ScopeCeiling() could avoid some calls to VisibilityParent() by using a temporary variable.)

Having found the visibility ceiling, the search for in-scope objects begins with it: the ‘deciding the scope of’ activity is run for it. There are by default no ‘for deciding the scope of...’ rules, but default processing is written into the template code. We can skip the branches relevant to parsing commands for now, leaving logic that looks something like:

  1. scan everything inside the visibility ceiling that is not the actor (including the visibility ceiling itself when it is not a room)
  2. scan everything inside the actor

The scanning process is handled by a trio of related routines: ScopeWithin(), DoScopeAction() and DoScopeActionAndRecurse().

ScopeWithin() basically iterates through the children of an object and submits each of them to DoScopeActionAndRecurse(). It excludes objects that are concealed (in the I7 sense), but it also overrides any attempt by the actor to conceal something. (Arguably, this override is undesirable. What if the author’s scenario involves having something surreptitiously placed upon the PC’s person?)
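For instance, an author might plausibly write something like the following (hypothetical names) and expect the planted item to stay hidden even from the PC; because of that override, though, the bug remains in the player’s own scope:

The Spy Office is a room. The player carries a tracking bug. [surreptitiously planted on the PC]

Rule for deciding the concealed possessions of the player:
	if the particular possession is the tracking bug, yes;
	otherwise no.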

DoScopeActionAndRecurse() specifically executes DoScopeAction() against the target object. Unless instructed not to, it recurses through things that are object tree children of the target object, excluding concealed things but overriding actor concealment as in ScopeWithin(). It also always recurses through component tree children of the target object, using the same rules regarding concealment. There is a third branch involving add_to_scope that I think may be purely vestigial – a holdover from the I6 library. (I don’t think there’s anything an author can do without I6 hackery that would involve this branch.)

The DoScopeAction() routine is the one that orders the actual work relevant to the scope search being conducted. In the case of TestScope(), this work is simply checking whether the object being examined is the one that is being scope-tested, i.e. the obj parameter of the TestScope() routine. (If the object is found, it notes this by setting a global value parser_two.)

ScopeWithin() and the subclauses of DoScopeActionAndRecurse() use very similar logic. I once put together a proof-of-concept to let it accept selector routines as parameters, so that it could be called from both places in DoScopeActionAndRecurse().

I note that there is no provision for stopping the search early if the object being hunted is found early. The complete scan is performed, and if it is found at any point, the global will have been set.

There is nothing obviously different about the scope search process for an NPC versus the PC in all this machinery. None of it actually even checks for the presence of light! The difference must arise in the handling below DoScopeAction() that applies only when actually parsing commands, i.e. via MatchTextAgainstObject(), or in the handling of scope at a higher level (like the top-level parser routines).

[EDIT: It turns out that there is some light-related code in there – it’s just easy to miss. See below.]


That’s not the only key question: What if B is in scope for the player but not the NPC? Currently, the command is rejected by the parser (because it can’t build an action for the NPC using B), so the parser then assumes it is an attempt at conversation and converts it into an Answering it that... action (e.g. Answering the dwarf that take sword). If the dwarf is not touchable (as, strangely, answering it that... requires a touchable noun), this fails the basic accessibility rule with the cryptic response ‘You can’t reach into <the dwarf’s location>’…


It’s perhaps worth musing for a moment about the philosophical meaning of scope.

Although, paraphrasing, it’s roughly practically defined in DM4 as ‘what’s in an actor’s possession, or visible to them, plus the compass directions’, its purpose is to constrain which objects can be used by the parser in constructing an action to subsequently be handed over to the action-processing rules and (potentially at least) performed by a given actor. Objects which can be used in such a way are said to be currently in scope for that actor in performing that action. It follows from this that the only true context of scope is actions.

Note that scope is not the same as being visible to the actor. For example, the compass directions are never visible but always in scope. In darkness, things carried by the actor are in scope but not visible. More philosophically, the concept of something being ‘visible to’ an observer is not an idea tied to the parser constructing an action.
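A small demonstration of the distinction (a minimal sketch):

The Cellar is a dark room. The player carries a lucky coin.

Instead of waiting:
	if the player can see the lucky coin:
		say "You can see the coin.";
	otherwise:
		say "You can't see the coin, yet DROP COIN is still understood, because carried things remain in scope in the dark."

Test me with "z / drop coin".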

It is also different to the constraints on objects applied by the action definition, e.g. Gluing it to is an action applying to two touchable things. These constraints (in this case that both the noun and the second noun be touchable) are applied after the parser has completed its work: the action has been constructed and is already being processed. They are applied by the basic accessibility rule and the carrying requirements rule (between the Before and Instead stages of action processing) and have nothing to do with the parser.

Scope is also not a matter for consideration by code-generated actions such as ‘try the baboon gluing the anchovy to the aspidistra’. Here, the parser is not involved and therefore no consideration of the baboon’s scope will be applied to the anchovy or the aspidistra- only constraints applied by the action definition itself or action-processing rules such as Before gluing something....

Generally, the objects that the parser can use are those currently directly accessible to the actor’s senses. Objects the actor may know about but can no longer see, touch or hear can’t be used – and elicit the famous "You can’t see any such thing." All the conventional actions acting on objects do so through the actor’s reach or senses, so this makes sense.

The room as a concept defines the conventional outer limits of an actor’s reach and senses and therefore the outer limits of scope. The room itself is not usually in scope because it is a metaphysical construct, not a physical one. It may have walls, a ceiling and a floor- but it may also be altogether more nebulous- such as ‘The Airless Void’ or ‘The Afterlife’. A closed opaque enterable container may constrain scope in a fashion similar to a room, but differs in that the container itself is in scope to its occupant.

Conventional scope can of course be easily extended to allow the actor to see things in an adjacent room, or to think about things once seen and now out of sight, or having only a metaphysical existence, or to speak to an NPC in the dark, or on a walkie-talkie.

The policemen of scope are the action tokens found in Understand phrases such as Understand "glue [thing] to [thing]" as gluing it to., where [thing] is an action token meaning a thing in scope, which doesn’t sound very constraining but excludes any object that is not a thing but which would otherwise be in scope, such as the compass directions. Similarly, [someone] further restricts scope to things which are people- or at least can be spoken to- whereas [person] restricts scope to things which are definitely people. The word ‘any’ prepended to an action token removes the conventional sensory restrictions placed on scope, such that for example [any door] brings into scope doors not in the actor’s current location, or which the actor can’t see because they are in darkness.
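To make the tokens concrete, here is a minimal sketch – the gluing action from above, plus a hypothetical ‘remembering’ action whose [any ...] token lifts the usual sensory restriction:

Gluing it to is an action applying to two touchable things.
Understand "glue [thing] to [thing]" as gluing it to.

Remembering is an action applying to one visible thing. [i.e. touch not required]
Understand "remember [any door]" as remembering.
Report remembering: say "You call [the noun] to mind, seen or unseen."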


A worthwhile consideration, certainly – and I agree with the philosophical outlook. However, it seems clear that scope and visibility shared the same evolutionary precursor way back in the primordial soup of Inform development. Visibility is still dependent on scope machinery in I7, while what you’ve said implies that the dependency should be the other way around.

The reason that the calculation for the PC differs is light-related handling: SearchScope() is being called with actors_location == thedark, which happens when actor == player && location == thedark:

[ ScopeCeiling pos c;
	if (pos == player && location == thedark) return thedark;	! <-- SPECIAL CASE
	c = parent(pos);
	if (c == 0) return pos;
	while (VisibilityParent(c)) c = VisibilityParent(c);
	return c;
];

I expanded the test scenario to include a flashlight and an enterable, closable cardboard box

The player carries a switched on lit device called a flashlight. Understand "light" as the flashlight.

Every turn:
	if the flashlight is switched off:
		now the flashlight is not lit;
	if the flashlight is switched on:
		now the flashlight is lit.

An enterable open openable container called a cardboard box is in Cavern.

and added a region to limit visibility of the fissure

Central Shaft is a region.

Cavern and Pit are in Central Shaft.

There is a backdrop called a fissure. "It's pretty far up." It is in Central Shaft. It is not scenery.

With a modified version of ScopeCeiling() that disables the special case, the PC now has the same darkness-defying scoping as NPCs.
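That is, with nothing more than the special case commented out:

[ ScopeCeiling pos c;
	! if (pos == player && location == thedark) return thedark;	! DISABLED
	c = parent(pos);
	if (c == 0) return pos;
	while (VisibilityParent(c)) c = VisibilityParent(c);
	return c;
];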

Test Transcript
>Z
Time passes.

In scope for yourself: yourself, flashlight, cardboard box and fissure.

In scope for Robot A: Robot A and gold nugget.

>TURN OFF LIGHT
You switch the flashlight off.

In scope for yourself: yourself, flashlight, cardboard box and fissure.

In scope for Robot A: Robot A and gold nugget.

>D

Darkness
It is pitch dark, and you can't see a thing.

In scope for yourself: yourself, flashlight, Robot A, gold nugget and fissure.

In scope for Robot A: yourself, flashlight, Robot A, gold nugget and fissure.

>E

Darkness
It is pitch dark, and you can't see a thing.

In scope for yourself: yourself and flashlight.

In scope for Robot A: Robot A, gold nugget and fissure.

The next step is to ensure that lighting limits scope the same way for PC and NPC alike.

In the template code, lighting calculations are handled with a different set of routines. The most immediately useful of these is called OffersLight(), which basically checks to see whether a portion of the object tree is lit, either by a room or an object. It only looks “leafward” on the object tree, so some care must be taken when using it. It also has an independent copy of the “see-through” logic, which it probably shouldn’t.

Fortunately, it is a perfect tool to decide whether the portion of the object tree bounded by the scope ceiling is lit. If it is, then the whole area “under” the scope ceiling will be visible. If not, then the scope ceiling should be limited to the object being checked (if we assume that everything plays by the same rules of always scoping its immediate parts and possessions). A quick modification to ScopeCeiling()

[ ScopeCeiling pos c tmp;
	! if (pos == player && location == thedark) return thedark;	! DISABLED
	c = parent(pos);
	if (c == 0) return pos;
	while (tmp = VisibilityParent(c)) c = tmp; ! MODIFIED
	if (~~OffersLight(c)) return pos; ! ADDED
	return c;
];

and we get pretty close. A problem has been introduced, though – even though the scope ceiling of an actor in darkness now resolves to the actor itself, that actor is not in scope.

>X ME
You can't see 'me' (nothing) at the moment.

The reason is the way that SearchScope() is written. There’s a section of code specifically for dealing with darkness:

            ! (c.5)
            if (thedark == domain1 or domain2) {
                    DoScopeActionAndRecurse(actor, actor, context);
                    if (parent(actor) has supporter or container)
                            DoScopeActionAndRecurse(parent(actor), parent(actor), context);
            }

We don’t want to depend on thedark, which is PC-centric, so we’ll change the key condition to (~~OffersLight(ScopeCeiling(actor))). (While we’re at it, we’ll modify OffersLight() to use IsSeeThrough() for consistency, which shouldn’t change anything functionally.) And now the PC and NPC are playing by the same light-respecting rules.
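In other words, the darkness branch sketched above becomes, roughly:

            ! (c.5) modified: darkness handling no longer depends on thedark
            if (~~OffersLight(ScopeCeiling(actor))) {
                    DoScopeActionAndRecurse(actor, actor, context);
                    if (parent(actor) has supporter or container)
                            DoScopeActionAndRecurse(parent(actor), parent(actor), context);
            }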

Test Transcript
>Z
Time passes.

In scope for yourself: yourself, flashlight, cardboard box and fissure.

In scope for Robot A: Robot A.

>turn off light
You switch the flashlight off.

In scope for yourself: yourself, flashlight, cardboard box and fissure.

In scope for Robot A: Robot A.

>D

Darkness
It is pitch dark, and you can't see a thing.

In scope for yourself: yourself and flashlight.

In scope for Robot A: Robot A.

>E

Darkness
It is pitch dark, and you can't see a thing.

In scope for yourself: yourself and flashlight.

In scope for Robot A: Robot A.

>TURN ON LIGHT
You switch the flashlight on.

In scope for yourself: yourself and flashlight.

In scope for Robot A: Robot A.

Alcove

>W

Pit
High above, a slash of brightness marks the fissure from which you first entered the caves.

You can also see Robot A and a gold nugget here.

In scope for yourself: yourself, flashlight, Robot A, gold nugget and fissure.

In scope for Robot A: yourself, flashlight, Robot A, gold nugget and fissure.

It’s quite possible that something has been broken along the way, but we’re still in proof-of-concept stage.


Moving on to the matter of floating objects, there are two built-in types: backdrops and doors. Their designs are rooted in space-saving techniques from the Z-machine era – a single object is relocated from room to room as the player moves around, giving the impression that it is present in multiple rooms.

Again, the root problem is the PC-centric nature of the code, so that’s what we’ll be trying to change.

The functionality of floating objects depends on the I6 found_in property, which is not discussed in any I7 documentation. (See DM4 for details.) There is a routine BackdropLocation() that can be used to test whether a backdrop “belongs in” a given room.
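For experimenting at the I7 level, it can be exposed with the same kind of wrapper used for TestScope() above (a sketch, assuming the (backdrop, room) argument order used later in this post):

To decide whether (B - backdrop) belongs in (R - room):
	(- (BackdropLocation({B}, {R})) -).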

We want backdrops to be in scope whenever the scope ceiling is a lit room. Since it’s possible for a backdrop to emit light, we also want to update the lighting code to ensure that this is taken into account.

First, SearchScope() gets some new locals and additional lines in its block for default scope handling:

		sc = ScopeCeiling(actor); sc_lit = OffersLight(sc); ! ADDED

		! BEGIN ADDITION
		! (backdrops)
		if (sc ofclass (+ room +) && sc_lit) {
			objectloop (i ofclass (+ backdrop +))
				if (BackdropLocation(i, sc))
					DoScopeActionAndRecurse(i);
		}
		! END ADDITION

Second, OffersLight() gets an early case to look for backdrops when applicable:

	! BEGIN ADDITION
	if (obj ofclass (+ room +))
		objectloop (j ofclass (+ backdrop +))
			if (HasLightSource(j)) rtrue;
	! END ADDITION

(EDIT: Note that there’s an error in the preceding; see following post.)

I’ve also added some details to the scenario to help with testing:

A persuasion rule: rule succeeds. [make robot obey]

[controls whether fissure is bright enough to illuminate]
After waving the flashlight:
	if the fissure is not lit:
		now the fissure is lit;
	otherwise:
		now the fissure is not lit;
	continue the action.

So far, so good in terms of transcript:

Test Transcript
>Z
Time passes.

In scope for yourself: yourself, flashlight, cardboard box and fissure.

In scope for Robot A: Robot A.

>WAVE LIGHT
You wave the flashlight.

In scope for yourself: yourself, flashlight, cardboard box and fissure.

In scope for Robot A: Robot A, gold nugget and fissure.

>G
You wave the flashlight.

In scope for yourself: yourself, flashlight, cardboard box and fissure.

In scope for Robot A: Robot A.

>D

Pit
High above, a slash of brightness marks the fissure from which you first entered the caves.

You can also see Robot A and a gold nugget here.

In scope for yourself: yourself, flashlight, Robot A, gold nugget and fissure.

In scope for Robot A: yourself, flashlight, Robot A, gold nugget and fissure.

>E

Alcove
In scope for yourself: yourself and flashlight.

In scope for Robot A: Robot A.

>W

Pit
High above, a slash of brightness marks the fissure from which you first entered the caves.

You can also see Robot A and a gold nugget here.

In scope for yourself: yourself, flashlight, Robot A, gold nugget and fissure.

In scope for Robot A: yourself, flashlight, Robot A, gold nugget and fissure.

>DROP LIGHT
Dropped.

In scope for yourself: yourself, flashlight, Robot A, gold nugget and fissure.

In scope for Robot A: yourself, flashlight, Robot A, gold nugget and fissure.

>E

Darkness
It is pitch dark, and you can't see a thing.

In scope for yourself: yourself.

In scope for Robot A: flashlight, Robot A, gold nugget and fissure.

>W

Pit
High above, a slash of brightness marks the fissure from which you first entered the caves.

You can also see a flashlight (providing light), Robot A and a gold nugget here.

In scope for yourself: yourself, flashlight, Robot A, gold nugget and fissure.

In scope for Robot A: yourself, flashlight, Robot A, gold nugget and fissure.

>GET LIGHT
Taken.

In scope for yourself: yourself, flashlight, Robot A, gold nugget and fissure.

In scope for Robot A: yourself, flashlight, Robot A, gold nugget and fissure.

>ROBOT, TAKE NUGGET
Robot A picks up the gold nugget.

In scope for yourself: yourself, flashlight, Robot A, gold nugget and fissure.

In scope for Robot A: yourself, flashlight, Robot A, gold nugget and fissure.

>E

Alcove
In scope for yourself: yourself and flashlight.

In scope for Robot A: Robot A and gold nugget.

>WAVE LIGHT
You wave the flashlight.

In scope for yourself: yourself and flashlight.

In scope for Robot A: Robot A, gold nugget and fissure.

Now we can look at doors.


For those following along, I’ve added new background reading in the top post. The new item on backdrops is an excellent write-up by drpeterbatesuk covering key technical details of their usage. I don’t think it’s listed anywhere on the “Inform 7 documentation and resources” post, but it probably should be.


Like backdrops, two-sided doors are floating objects. Their range is restricted to the two rooms on either side, however, so the code for them is written differently, for better speed.

There’s a routine IndirectlyContains(), which is the basis for the enclosure relation. It manually checks both sides and counts the room on either side as enclosing the door.

[ IndirectlyContains o1 o2;
	...
	if ((o1 ofclass K1_room) && (o2 ofclass K4_door)) {
		if (o1 == FrontSideOfDoor(o2)) rtrue;
		if (o1 == BackSideOfDoor(o2)) rtrue;
		rfalse;
	}
	...
];

I personally see no reason why the same should not be true of backdrops, even though that’s not the default behavior. We can modify the routine to use a similar pattern for them:

	if ((o1 ofclass K1_room) && (o2 ofclass (+ backdrop +))) {
		if (BackdropLocation(o2, o1)) rtrue;
		rfalse;
	}

and if we want extra efficiency we can merge the two blocks so that they only check for room-ness once.
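Merged, the two checks might look roughly like this (a sketch; as elsewhere, the (+ backdrop +) form assumes the change is written as an I7 inclusion):

	if (o1 ofclass K1_room) {
		if (o2 ofclass K4_door) {
			if (o1 == FrontSideOfDoor(o2)) rtrue;
			if (o1 == BackSideOfDoor(o2)) rtrue;
			rfalse;
		}
		if (o2 ofclass (+ backdrop +)) {
			if (BackdropLocation(o2, o1)) rtrue;
			rfalse;
		}
	}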

With that in place, the modified block in SearchScope() can be updated to use IndirectlyContains() for all floating objects.

			objectloop (i ofclass (+ backdrop +) or (+ door +)) {
				if (IndirectlyContains(sc, i))
					DoScopeActionAndRecurse(i);
			}

A few more additions to the scenario to help with testing:

A door called a metal door is east of Alcove.

Mysterious Cell is east of the metal door.

A person called Robot B is in Mysterious Cell.

An openable closed container called a chest is in Mysterious Cell.

A brass key is in the chest.

After jumping:
	if the chest is closed:
		now chest is open;
	otherwise:
		now chest is closed.

After waving hands:
	if the metal door is not lit:
		now the metal door is lit;
	otherwise:
		now the metal door is not lit.

… and the test transcript shows some bugs, which were traced to an oversight in the adjusted OffersLight(). I also notice that the routine wouldn’t handle lit doors, so we’ll fix that too with:

		objectloop (j ofclass (+ backdrop +) or (+ door +) && IndirectlyContains(obj, j))
			if (HasLightSource(j)) rtrue;

We can update the test me to be more rigorous:

Test me with "z / wave light / g / get box / d / robot, take nugget / e / w / robot, go east / wave / g / jump / g / e / robot, go east / e / open chest / drop box / robot b, enter box / close box / open box / put chest in box / put light in box / close box / jump / open box / open chest / put light in chest / close box / jump".

And now the test transcript looks pretty good:

Test Transcript
>Z
Time passes.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, fissure and Robot B.

In scope for Robot A: Robot A.

In scope for Robot B: metal door, Robot B and chest.

>WAVE LIGHT
You wave the flashlight.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, fissure and Robot B.

In scope for Robot A: Robot A, gold nugget and fissure.

In scope for Robot B: metal door, Robot B and chest.

>G
You wave the flashlight.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, fissure and Robot B.

In scope for Robot A: Robot A.

In scope for Robot B: metal door, Robot B and chest.

>GET BOX
Taken.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, fissure and Robot B.

In scope for Robot A: Robot A.

In scope for Robot B: metal door, Robot B and chest.

>D

Pit
High above, a slash of brightness marks the fissure from which you first entered the caves.

You can also see Robot A and a gold nugget here.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, fissure and Robot B.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget and fissure.

In scope for Robot B: metal door, Robot B and chest.

>ROBOT, TAKE NUGGET
Robot A picks up the gold nugget.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, fissure and Robot B.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget and fissure.

In scope for Robot B: metal door, Robot B and chest.

>E

Alcove
You can see a metal door here.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door and Robot B.

In scope for Robot A: Robot A and gold nugget.

In scope for Robot B: metal door, Robot B and chest.

>W

Pit
High above, a slash of brightness marks the fissure from which you first entered the caves.

You can also see Robot A here.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, fissure and Robot B.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget and fissure.

In scope for Robot B: metal door, Robot B and chest.

>ROBOT, GO EAST
Robot A goes east.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, fissure and Robot B.

In scope for Robot A: Robot A and gold nugget.

In scope for Robot B: metal door, Robot B and chest.

>WAVE
In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, fissure and Robot B.

In scope for Robot A: Robot A, gold nugget and metal door.

In scope for Robot B: metal door, Robot B and chest.

>G
In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, fissure and Robot B.

In scope for Robot A: Robot A and gold nugget.

In scope for Robot B: metal door, Robot B and chest.

>JUMP
In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, fissure and Robot B.

In scope for Robot A: Robot A and gold nugget.

In scope for Robot B: metal door, Robot B, chest and brass key.

>G
In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, fissure and Robot B.

In scope for Robot A: Robot A and gold nugget.

In scope for Robot B: metal door, Robot B and chest.

>E

Alcove
You can see Robot A and a metal door here.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door and Robot B.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget and metal door.

In scope for Robot B: metal door, Robot B and chest.

>ROBOT, GO EAST
Robot A goes east.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door and Robot B.

In scope for Robot A: Robot A, gold nugget, metal door, Robot B and chest.

In scope for Robot B: Robot A, gold nugget, metal door, Robot B and chest.

>E

Mysterious Cell
You can see a metal door, Robot A, Robot B and a chest (closed) here.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

>OPEN CHEST
You open the chest, revealing a brass key.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>DROP BOX
Dropped.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>ROBOT B, ENTER BOX
Robot B gets into the cardboard box.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>CLOSE BOX
You close the cardboard box.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, chest and brass key.

In scope for Robot B: cardboard box and Robot B.

>OPEN BOX
You open the cardboard box, revealing Robot B.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>PUT CHEST IN BOX
(first taking the chest)
You put the chest into the cardboard box.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>PUT LIGHT IN BOX
You put the flashlight into the cardboard box.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>CLOSE BOX
You close the cardboard box.

In scope for yourself: yourself, cardboard box, Robot A, gold nugget, metal door and Robot B.

In scope for Robot A: yourself, cardboard box, Robot A, gold nugget and metal door.

In scope for Robot B: flashlight, cardboard box, Robot B, chest and brass key.

>JUMP
In scope for yourself: yourself, cardboard box, Robot A, gold nugget, metal door and Robot B.

In scope for Robot A: yourself, cardboard box, Robot A, gold nugget and metal door.

In scope for Robot B: flashlight, cardboard box, Robot B and chest.

>OPEN BOX
You open the cardboard box, revealing a flashlight, a chest and Robot B.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

>ROBOT B, OPEN CHEST
Robot B opens the chest.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>ROBOT B, PUT LIGHT IN CHEST
(Robot B first taking the flashlight)
Robot B puts the flashlight into the chest.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>CLOSE BOX
You close the cardboard box.

In scope for yourself: yourself, cardboard box, Robot A, gold nugget, metal door and Robot B.

In scope for Robot A: yourself, cardboard box, Robot A, gold nugget and metal door.

In scope for Robot B: flashlight, cardboard box, Robot B, chest and brass key.

>JUMP
In scope for yourself: yourself, cardboard box, Robot A, gold nugget, metal door and Robot B.

In scope for Robot A: yourself, cardboard box, Robot A, gold nugget and metal door.

In scope for Robot B: cardboard box and Robot B.

[EDIT: Note that for easier testing the following rule can be added:

After deciding the scope of the player:
	repeat with R running through people that are not the player:
		place R in scope. 

which allows issuing commands to the robots from other rooms, but:

  1. It makes the player’s list of in-scope items always show the robots.
  2. If you reference something out of scope for a robot, you get a reaching-related error as noted by drpeterbatesuk above.

The first isn’t a bug, and the second will have to wait until Issue #3 is better defined.]

I’d say that Issues #1 and #2 are more or less solved (though there will probably be bugs to shake out). Issue #3 is tougher, especially given the good doctor’s point. There’s also the question of what the parser should do if a noun phrase in the player’s command could refer both to 1) an item A in scope only for the PC and 2) an item B in scope only for the NPC.

What makes sense to you?

As prelude to discussion of Issue #3

One of the most fundamental jobs of the parser is to match the words in the noun phrases of the human player’s command to objects in the game world. As an illustration of the deep problem here, consider the following scenario.

"The Meaning of Cup"

Chamber 1 is a room. The player is in Chamber 1.

An open unopenable container called a teacup is in Chamber 1. Understand "cup" as the teacup.

Chamber 2 is a room. A man called Adam is in Chamber 2.

An open unopenable container called the Stanley cup is in Chamber 2.

After deciding the scope of the player:
	place Adam in scope.

A persuasion rule: rule succeeds.

Test me with "x cup / adam, take cup".

Here’s what happens by default:

Chamber 1
You can see a teacup (empty) here.

>X CUP
The teacup is empty.

>ADAM, TAKE CUP
Adam picks up the Stanley cup.

By contrast (from the startup state):

Chamber 1
You can see a teacup (empty) here.

>TAKE STANLEY CUP
You can't see any such thing.

>ADAM, TAKE TEACUP
You can't reach into Chamber 2.

If you have trouble making sense of that last response (the same one that the good doctor pointed out above), that should not be surprising to anyone. (If you don’t understand his brief explanation, the riddle will be explained in greater detail later.)

But look again at the response to >ADAM, TAKE CUP. Think about how the parser is understanding the word “cup,” bearing in mind that, at startup, neither the human player nor the PC would be expected to know about the Stanley cup. Without foreknowledge, the human player in all likelihood assumed that “cup” would refer to the teacup. Did the parser do the right thing, or not?

Note that this could easily become a long and inconclusive discussion about craft and different approaches to it. While that might be interesting, I would urge that any such discussion be a split topic on the General Design Discussion subforum. The part that’s potentially solvable and on-topic here is establishing a set of conventions and getting the parser to enforce them in a manner that’s intuitive to a human player (and that’s communicated/reinforced by parser feedback if not intuitive).

Wow! Open house to the maddest parser-bending magiscienteknominds. Very interesting!

If it were real life, I would expect it to work this way: if I call you on the phone and ask you to pick up a cup, I expect you to pick up a cup in your house, not mine, or to tell me you can’t because there’s no such thing in your house. I would not at all expect “cup” to refer to my cup.

This is a stab at it that does surprisingly little with scope manipulation. Won’t work in 10.1, because the ‘substitutes for’ phrasing doesn’t work in 10.1. Works in current dev; probably works in 9.3/6M62.

lab is a room.

parlor is a dark room.
parlor is north of lab.

alice is a person in the parlor.

crate is a container in the parlor.
toy is in crate.

conservatory is south of lab.
char is a person in the conservatory.

platform is a supporter in conservatory.
wrench is on platform.
lucite box is a transparent closed container in the conservatory.
happy fun ball is in lucite box.
jug is a closed container in conservatory.
honey is in jug.

after deciding the scope of the player (this is the player can talk to anyone anywhere rule):
  repeat with p running through people who are not the player begin;
    place p in scope;
  end repeat.

persuasion (this is the svengali rule): rule succeeds.

To decide what person is the executant: (- actor -).

This is the advanced visibility rule:
  if the holder of the executant is a lighted room, make no decision;
  if the executant can see a lit thing, make no decision;
  now the noun is the executant;
  carry out the issuing the response text activity with the carry out requested actions rule response (A);
  say line break;
  rule fails.

The advanced visibility rule is listed after the basic visibility rule in the action-processing rules.
The advanced visibility rule does nothing when the executant is the player.

To decide if (X - thing) has line of sight to (Y - thing):
  if the common ancestor of X with Y is nothing, no;
  if Y is enclosed by a closed opaque container that does not enclose X, no;
  if X is enclosed by a closed opaque container that does not enclose Y, no;
  yes.

Reaching inside a room when answering someone that:
  unless the player has line of sight to the noun begin;
    carry out the issuing the response text activity with the carry out requested actions rule response (A);
    say line break;
    rule fails;
  end unless;

To decide if (X - thing) could poke (Y - thing):
  if the common ancestor of X with Y is nothing, no;
  if Y is enclosed by a closed container that does not enclose X, no;
  if X is enclosed by a closed container that does not enclose Y, no;
  yes.

this is the advanced accessibility rule:
  if the executant could poke the noun, make no decision;
  now the noun is the executant;
  carry out the issuing the response text activity with the carry out requested actions rule response (A);
  say line break;
  rule fails.

the advanced accessibility rule substitutes for the basic accessibility rule when the executant is not the player.

first player's action awareness rule (this is the clairvoyance rule): rule succeeds.

test me with "alice, get crate / char, get wrench / char, get honey / char, get ball".

produces:

>test me
(Testing.)

>[1] alice, get crate
Alice is unable to do that.

>[2] char, get wrench
Char picks up wrench.

>[3] char, get honey
Char is unable to do that.

>[4] char, get ball
Char is unable to do that.

>

Definitely would fail with backdrops; might fail with doors, too, but if so that case would be easy to fix, I think.

Also, advanced visibility shouldn’t apply if the action doesn’t require a noun or doesn’t require light, or if the nouns it requires are held by the actor and aren’t in a closed opaque container – with similar provisos for advanced accessibility. And I mostly ignored second nouns. So there are a lot of things it doesn’t cover.

A fundamental difficulty in establishing good conventions is that out-of-the-box Inform keeps no record of the human player’s out-of-world knowledge (which may derive from this playthrough or from previous playthroughs), the PC’s in-world knowledge or NPCs’ in-world knowledge. In the example ‘Meaning of Cup’, although the human player might when play begins have out-of-world knowledge about both Adam and the Stanley cup (either from past playthroughs or from examining the source code), the in-world PC has no knowledge of either.

It is a convention that, as a rule, IF should operate in a way that would be expected if the human player had no retained knowledge from past playthroughs, i.e. as if they are working only with in-world knowledge available to the PC from the current playthrough. So, for example, the PC shouldn’t be able to walk through a secret door just because the human player knows from out-of-world knowledge that it’s there- the PC should first have to perform whatever actions are required in-world to reveal it.

However, in the absence of an inbuilt epistemological model, there is little that the parser can do unaided to enforce this convention – in the ‘Meaning of Cup’ example, there is no way, without help, that the parser can know whether the PC should be aware of the Stanley cup, or of Adam. Of course, the inbuilt mechanism Inform has to assist the parser is the scoping mechanism – and ‘Meaning of Cup’ makes use of that by explicitly placing Adam in scope for the PC, thereby signalling to the parser that Adam, although in another room, is, at a basic parser level at least, available for interaction as if he were in the same room as the PC.

This at least hints, I think, that scope is the appropriate mechanism to assist the parser when communicating with an NPC, and therefore that prior knowledge of each of the human player, the PC and the NPC should by default all be discounted – which implies further that ALL objects in a command should be in scope for the actor issuing the command, while for the actor performing the commanded action, all objects in that action should be in scope.

In the ‘Meaning of Cup’ this would mean that the Stanley cup and Adam should both be in scope for the PC for the command to succeed.
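In code terms, under that convention the scope rule in the example would need to cover both – something like this minimal sketch added to ‘Meaning of Cup’:

After deciding the scope of the player:
	place Adam in scope;
	place the Stanley cup in scope.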

An interesting side-effect of this requirement would be that the dwarf could no longer be commanded to pick up the sword, because it is out of scope for the PC.

An implication is that ‘adam, take cup’ should, if the Stanley cup is out of scope for the PC, be interpreted by the parser as ‘adam, take teacup’.


Following on from that, the appropriate response from the parser to a command that fails because it requires the NPC to interact with an object which is out of scope for the PC is the classic “You can’t see any such thing” – and for the usual reason: this is a response designed to conceal whether an object so-named is simply out of scope or actually doesn’t exist anywhere in the game world. If it is in scope for the PC but not for the NPC, the response should probably be simply ‘The dwarf is unable to do that.’

It seems to me that the conversion of a ‘<NPC>, <text>’ command to an ‘Answering <NPC> that <text>’ action should only take place when <text> does not start with a recognised verb word.

I agree, and I think this exactly touches on the central points involved. Solving Issue #3 requires answering the question “How should the parser deal with commands originating in one scope issued to an actor in another scope?” This seems inherently difficult, because it implicitly depends on the answer to the question “What is the meaning of a word?”

Consider:

  • The normal command prompt could be thought of as prepending "PC, " to every command. In fact, what the parser does is functionally equivalent – a command like >ME, JUMP is translated into the exact same action as >JUMP. (The pronoun “me” always refers to the current player object.)

  • The parser constantly mediates the difference between the human player’s knowledge context and the fictional PC’s knowledge context. If there is a letter on a mantle and the mantle is written to conceal the letter, the parser handles a command such as >EXAMINE LETTER as though there is no letter present, even when the mantle is visible and touchable to the PC. It would be typical for a game with this setup to require specific actions to be taken to imbue the PC with knowledge of the letter object’s presence before it becomes in scope and the PC can interact with it. (Commonly, the advice to a new author for this type of scenario would be to keep the letter offstage until it is revealed and then move it within the world model to the mantle. From the human player’s side there is no observable difference, because in either case the effect is to change the PC’s scope with respect to the letter.)

  • The form of command that matters here is: >NPC, DO THIS ACTION. I believe that the typical human player’s conception is exactly that laid out by CrocMiam: a command like this is conceived of as a speech act, i.e. semantically equivalent to >SAY “DO THIS” TO NPC or >TELL NPC TO DO THIS. (Neither of those phrasings is actually supported, though. Also, using quotes for the words spoken is not supported by default, so technically the equivalent is >SAY DO THIS TO NPC; players new to parser IF must learn this convention.)

  • However, if the command is considered to be a speech act being performed by the PC, then there are actually three different knowledge contexts to mediate (human player, PC, NPC) instead of just two. The words of a noun phrase may mean different things to all three parties, though one would generally presume that in a typical well-formed game noun words will have the same meaning to both human player and PC. (By “well-formed” I mean one that communicates the PC’s situation accurately to the human player, in order to create alignment between their knowledge states. For the purposes of this discussion, I am deliberately excluding games designed to intentionally limit human-PC knowledge alignment – that kind of game would be part of the craft discussion.)

  • For the parser as currently written, if the human player’s command addresses a remote NPC, it mediates the difference between the human player’s knowledge context and the fictional NPC’s knowledge context in the same fashion that it would for a normal command targeting the PC. Specifically, it tries to map the words in noun phrases of the player’s command to objects in scope for the NPC, and fails to parse the entered text as a command addressing the NPC if it cannot. This seems to presume a direct two-party communication from human player to NPC. In direct contrast, however, the action that results is modeled as though it is a speech act, i.e. an asking <NPC> to try... action, which implies a three-party communication from human player to PC to NPC. The parser translates the human player’s command entered at the prompt (human->PC) into the speech act of the PC requesting the action (PC->NPC). Other game logic determines whether the NPC actually executes an action in response – and if it does, that is treated as a fundamentally separate action – but the objects involved in the two actions as noun and second noun do not change.

There’s a fundamental tension here: for >NPC, DO THIS, the parser resolves nouns using the NPC’s scope, yet the action it generates – the request – is one performed by the PC.
