Lab Session (concluded): Giving commands to remote NPCs

Yes, that’s another key point, in my view.

As usual, your phrasing is much more succinct than mine.

That’s the approach taken in Zed’s example, and it seems like a reasonable approach to me in terms of feedback to the human player, but what is the handling in terms of parsing and action-processing? Is there still a speech act executed (as currently done), is it a new kind of parser error (“[NPC] can't make sense of that command.”), or something else? (Here’s where I’m hoping for some basic consensus to propose as a “solution” for Issue #3.)

That is a very good idea.

This is interesting. It does seem worth exploring what can be addressed at the I7 level, though I think some I6 modifications will be involved no matter what approach is taken.

The implication of adopting this convention would be that in a scenario where it is necessary for the PC to issue a command to an NPC referencing an object outside the PC’s conventional scope, it is up to the author to arrange things such that said object is brought into the PC’s scope when appropriate.
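E.g., a minimal sketch (the control room and remote drone are hypothetical names standing in for whatever the scenario involves):

After deciding the scope of the player when the location is the control room:
	place the remote drone in scope.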

As the good doctor points out up above, the answering it that action requires touchability (which is usually a close-enough substitute for unmodeled audibility), while the parser’s NPC-ordering logic only requires a “visible” (really just in-scope) NPC. It’s this difference in action-governing assumptions, coupled with automatic reachability enforcement, that creates the hard-to-stomach difference between these two commands (from the “Meaning of Cup” example with >ACTIONS output enabled):

>ADAM, X CUP
[asking Adam to try examining the Stanley cup]
[(1) Adam examining the Stanley cup]
Adam looks closely at the Stanley cup.
[(1) Adam examining the Stanley cup - succeeded]

[asking Adam to try examining the Stanley cup - succeeded]

>ADAM, X TEACUP
[answering Adam that "X TEACUP"]
You can't reach into Chamber 2.
[answering Adam that "X TEACUP" - failed the basic accessibility rule]

The second response is the “riddle” response (back again, as promised), and for anyone confused by it above, it should make more sense now. The underlying cause is that there is no object matching the noun word (“teacup”) that is in scope for the NPC receiving the order (Adam in Chamber 2), so the command cannot be successfully parsed into the expected action (which in this case would be asking Adam to try examining the teacup). As a result, the player’s command is automatically transformed into the equivalent of >SAY X TEACUP TO ADAM. Since Adam is in Chamber 2 while the PC is in Chamber 1, the Adam object is not touchable to the PC, so the basic accessibility rule is helpfully trying to inform the human player why the PC can’t do what the player “told” it to. The feedback seems nonsensical because it addresses a different problem than the one most players and/or authors expect it to.

With a replacement of the Standard Rule definition for answering it that to:

Answering it that is an action applying to one visible thing and one topic.

we will, instead of the “riddle,” get the much more comprehensible response:

>ADAM, X TEACUP
[answering Adam that "X TEACUP"]
There is no reply.
[answering Adam that "X TEACUP" - succeeded]

In this case, there is a verb word meaningful to the game; it’s just that the teacup is out of scope for Adam. Let’s explore drpeterbatesuk’s suggestion about handling the case differently.

The conversion of failed NPC orders to answering it that happens in Parser Letter H. The 6M62 and 10.1 versions of this section differ somewhat, but they share the same essentials for this function:

! ==== ==== ==== ==== ==== ==== ==== ==== ==== ====
! Parser.i6t: Parser Letter H
! ==== ==== ==== ==== ==== ==== ==== ==== ==== ====

  .GiveError;
	...
	if (actor ~= player) {	! actor == NPC for whom parsing attempt failed
		...
	    parser_results-->ACTION_PRES = ##Answer;
	    ...
	    parser_results-->INP1_PRES = actor;
	    parser_results-->INP2_PRES = 1; ...
	    actor = player;
	    ...
	}

It’s easy enough to make the relevant code conditional on the verb_word not being recognized as a word marked as a verb in the dictionary (a sketch follows below). However, the parser will then proceed to print the message for the can't see any such thing error, which is the parser error internal rule response (E). That, too, is easy enough to change:

The parser error internal rule response (E) is "[The person asked] [can't] see any such thing."
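
For concreteness, the added guard might look like this minimal sketch (assuming the usual dictionary test, where bit 1 of a word’s #dict_par1 is the ‘verb’ flag; elisions as in the excerpt above):

  .GiveError;
	...
	if (actor ~= player && ((verb_word->#dict_par1) & 1) == 0) {
	    ! the first word is not a dictionary verb: convert to a speech act as before
	    parser_results-->ACTION_PRES = ##Answer;
	    ...
	}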

With those changes in place, we now have a set of responses that seem better to me:

>ADAM, X CUP
[asking Adam to try examining the Stanley cup]
[(1) Adam examining the Stanley cup]
Adam looks closely at the Stanley cup.
[(1) Adam examining the Stanley cup - succeeded]

[asking Adam to try examining the Stanley cup - succeeded]

>ADAM, X TEACUP
Adam can't see any such thing.

>ADAM, CAN YOU SEE THE TEACUP?
[answering Adam that "CAN YOU SEE THE TEACUP?"]
There is no reply.
[answering Adam that "CAN YOU SEE THE TEACUP?" - succeeded]

Is that close enough to “solved”?

I think that in the case of a command to an <NPC> beginning with a verb word, with a noun/second noun in scope for the PC but out of scope for the NPC, the ‘asking <NPC> to try …’ action should be generated (and, ideally, succeed if persuasion succeeds, regardless of whether <NPC> can perform the action), but the <NPC> action should fail with either ‘<NPC> is unable to do that!’ or ‘<NPC> can’t see any such thing!’ (I have a slight queasiness about the second, because it implies a degree of intelligent communication from <NPC> to <PC> that perhaps shouldn’t be assumed. What if <NPC> is, for example, a dog responding to ‘Dexter, fetch the ball’?)

This is indeed the fundamental problem: audibility is not directly modelled, so it is approximated inconsistently by either touchability or scope, both of which have their issues.

In the case of scope, it’s not obvious why a closed transparent container (which ‘transmits’ scope) should necessarily transmit sound better than an opaque one (which ‘blocks’ scope), although it might if it’s a cage, for example. In many if not most instances one might reasonably expect sound to be heard through a closed container whether transparent or opaque: most closed containers (a cupboard, a shower cubicle or a cardboard box, for example) are not soundproof, or even proof against conversation at normal volumes to listeners in the same room.

In the case of touchability, it makes no difference whether the closed container is transparent or opaque: sound will by default be blocked in both cases.

It’s possible to manipulate both scope and touchability: the former via action tokens and ‘after deciding the scope of…’ rules, the latter via ‘reaching inside’ and ‘reaching outside’ rules. There is one big fundamental difference between these approaches: the former takes place within the parser, the latter during action processing. Modelling audibility via scope therefore determines whether actions can even be built by the parser, but at the same time allows considerations of audibility to be bypassed by code-generated actions (e.g. ‘try asking Bob for the wrench’), which by default are untroubled by scope. Conversely, modelling audibility via touchability means that the associated rules will be applied consistently across parser-generated and code-generated actions, but may require scope to be manipulated in order to allow the parser to generate the actions in the first place (e.g. to place objects on the other side of a closed cardboard box into scope).
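
For illustration, a minimal sketch of each lever (the cardboard box is hypothetical). The first rule lets the parser match the box’s contents even while the box is closed; the second lets touch, here standing in for sound, pass through the closed box during action processing:

After deciding the scope of the player:
	repeat with item running through things in the cardboard box:
		place the item in scope. [the parser can now match the contents]

A rule for reaching inside the cardboard box:
	allow access. [the closed box is no barrier, e.g. to speech]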

Suppose we had:

The lab is a room.

The parlor is a dark room.
The parlor is north of the lab.

Alice is a person in the parlor.
Alice wears a scarf.
Alice holds a sandwich.

Bob is a person in the lab.
Bob wears a shirt.
Bob holds a pen.

The crate is a container in the parlor.
The toy is in the crate.

The conservatory is south of the lab.
Char is a person in the conservatory.

The platform is a supporter in the conservatory.
The wrench is on the platform.
The lucite box is a transparent closed container in the conservatory.
The happy fun ball is in the lucite box.
The jug is a closed container in the conservatory.
The honey is in the jug.

The fog is a backdrop.
The fog is everywhere.

And the player can magically communicate with all other people, is infinitely persuasive, and can see the outcomes of their actions. And (bear with me), let’s say that the player knows that the honey is in the jug, so the player can refer to it, but Char doesn’t know it’s there (what with the jug being a closed opaque container).

Given all that, this transcript would be pretty good, right?

>[1] alice, remove scarf
Alice takes off scarf.

>[2] alice, get crate
Alice is in darkness and can see no such thing.

>[3] char, get wrench
Char picks up wrench.

>[4] char, get honey
Char sees no such thing.

>[5] char, get ball
Char is unable to do that.

>[6] char, touch fog
Char touches the fog.

>[7] bob, drop pen
Bob puts down pen.

>[8] bob, touch fog
Bob touches the fog.

>[9] alice, touch fog
Alice is in darkness and can see no such thing.

>[10] char, x char
Char engages in self-examination.

>[11] alice, x alice
Alice engages in self-examination.

>[12] alice, x char
Alice couldn't understand that.

>[13] char, x alice
Char couldn't understand that.

npc.txt (8.7 KB)

This is terrible in several ways and wouldn’t scale, but I think the basic approach might be viable.

As written, this works as one would hope with the cup/teacup example.

>[1] x cup
The teacup is empty.

>[2] adam, take cup
(the Stanley cup)
Adam picks up the Stanley cup.

By the way, I do realize that I have badly re-created a conspicuously inefficient version of Epistemology. My longer-term notion, if I pursue this, would be to use something more like Andrew’s Optimized Epistemology, ideally along with a dream of mine I’ve tackled a couple of times: create an activity for moving a thing, and have every instance of moving something in the kits and standard rules use it, so that knowledge can be dynamically updated as appropriate with after moving something rules.
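
If I ever get there, the shape might be something like this sketch (all names hypothetical, and glossing over how the kits would be made to call it):

Relocating something is an activity on things.
The relocation destination is an object that varies.

To relocate (item - a thing) to (target - an object):
	now the relocation destination is target;
	carry out the relocating activity with the item.

For relocating a thing (called the item):
	move the item to the relocation destination.

After relocating a thing (called the item):
	[knowledge updates would hook in here]
	do nothing.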

Regarding the larger issues under discussion, I’m on Team Action Rules. (I don’t know what Bella sees in the Parser.) Rip things to pieces, do something to the effect of creating a scoped attribute where everything’s set unscoped before reading a command and then what’s supposed to be in scope gets marked scoped. Rewrite the grammar rules to use any scoped thing and re-specify the actions to apply to visible things. Write some new versions of the visibility and accessibility rules. We could throw in audibility rules, too…
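
As a back-of-the-envelope sketch of that first step (everything here hypothetical and unoptimized):

A thing can be scoped or unscoped. A thing is usually unscoped.

Before reading a command:
	now all things are unscoped;
	repeat with item running through things that can be seen by the player:
		now the item is scoped.

Understand "examine [any scoped thing]" as examining.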

Bring this stuff into the daylight where it’s comprehensible and modifiable instead of continuing to accept in perpetuum that the parser is too complicated and friable to mess with.

Or at least that’s my half-baked idle fantasy an order of magnitude larger than the moving something activity dream…



Is this an example of scope creep? (he he)


?? Is this a reference to Bella Swan in the Twilight saga ??


Having glanced at the definition of ‘touchable’ as an adjective in I7, I see that in order to be ‘I7-touchable’ something must be both in scope to the player (i.e. TestScope(<object>, player) returns true) and touchable to the player in the sense of ‘a touchable thing’ in action definitions (i.e. ObjectIsUntouchable(<object>, 1, player) returns false).

It seems fair to put the burden of customizing messages on the author so long as it is no more difficult than any other customization. Likewise, it seems fair to me to put the burden of scope modification on the author if the base rules are the same for every object in the model world.

There seems to be general agreement with CrocMiam’s stance that the noun words of a command to the NPC should be preferentially interpreted as meaningful from the perspective of that NPC. In my view, the three different responses in the most recent “Meaning of Cup” iteration satisfy the target of “establishing a set of conventions and getting the parser to enforce them in a manner that’s intuitive to a human player” because:

  1. The first preference is to resolve noun words within scope of the NPC. This may have a surprising result in terms of disclosing information about the model world, but it is in harmony with the theory of mind that everyone exercises all the time when communicating with people in the real world, so it is not an unintuitive result.

  2. The second preference is to resolve noun words within scope of the PC. This also discloses information about the model world (that the object in question is out-of-scope for the NPC), but this is again in accordance with intuition and not surprising. At the very least the parser is being consistent in providing feedback that an NPC, DO THIS command is interpreted relative to the NPC, while also being consistent in acknowledging that it was able to make sense of the human player’s intent.

  3. The third preference is to fall back on true topic-based conversation, in which the parser makes no attempt whatsoever to assign in-universe meaning to the words after the comma. This is consistent with historical convention and with other feedback to the human player, such as the response to >SAY DO THE WATUSI TO ADAM. The “There is no reply.” response does a better job of signalling what happened than the “riddle” response.

Still… the purist in me wants maximal consistency of the system. I agree with the good doctor that case 2 (out-of-scope for NPC, in-scope for PC) should ideally result in a speech act in the form of a request, which would ideally interact with other tools like the persuasion and unsuccessful attempt rules so that there could be better feedback in cases of requests for impossible actions.

However, in trying to implement this, I’m quickly discovering that the visibility/accessibility subsystems are also wholly PC-centric. This is where world model enforcement begins on the action-processing side, so it will require work of the kind drpeterbatesuk and Zed have been doing.

Also, the current system of handling scope extension via place ... in scope 100% conflates extended scope with extended visibility. I’m feeling inspired by Zed’s charge that maybe we should be more ambitious, so let’s look at that first.


The routine SearchScope() is set up to allow scope extension via the deciding the scope activity. All of the primary documentation shows how to use after deciding the scope... rules to extend scope, but the only uses of for deciding the scope... occur in a pair of examples (RB Ex 349 Four Stars 1 and RB Ex 363 Four Stars 2) with the apparent intent of allowing definitions of audible and scented (i.e. smellable) things that rely on touchability. It’s kind of a blunt instrument, in that it overrides scope for the compass, so use of for... rather than after... may have been an error.
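
The difference in a nutshell (the distant tower is hypothetical): a for rule preempts the normal scope calculation outright, while an after rule merely adds to it.

For deciding the scope of the player:
	place the location in scope. [replaces the normal scope search entirely]

After deciding the scope of the player:
	place the distant tower in scope. [the normal search runs first; this adds to it]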

What I’d like to do is make it possible to do a “visibility only” scope search, i.e. one that disallows scope extension. However, the current structure makes that difficult – I don’t want to blanket disallow after deciding the scope, because who knows what else an author might be doing there. What’s really called for seems to be separation of scope determination (which by default respects world model restrictions) from scope extension (which is driven by authors’ intentional violation of world model restrictions).

OK, then – a new extending the scope activity.

Extending the scope of something (future action) is an activity.

and a new line in SearchScope():

CarryOutActivity( (+ extending the scope +), actor);	! ADDED

and a change to our magic scope for robots rule:

For extending the scope of the player:
	repeat with R running through people that are not the player:
		place R in scope. 

That checks out in cursory testing, so we can make the carrying out of it conditional, which requires adding a flag to SearchScope() and TestScope() so that TestVisibility() can exclude scope extension:

[ TestVisibility A B;
	if (~~OffersLight(A)) rfalse;	! MODIFIED
	return TestScope(B, A, true);	! third parameter is new flag
];

Note that I misread the way that OffersLight() works up above – it doesn’t work in just one direction – so that change saves a little work if object A is lit.

That’s all in preparation for some more invasive surgery on the parser. The consensus of those who’ve spoken up seems to be that giving priority to NPC scope when matching noun words produces the more intuitive result, and that is the current behavior. However, we also want to allow for PC scope as a fallback, per the good doctor’s suggestion.

There are two seemingly relevant places where NounDomain() is used to match noun words to objects: Parser Letter F and Parse Token Letter D. The same principles apply in both places, so we’ll change both, but here I’ll focus on the latter, found in the “Case 1” section.

	l = NounDomain(actors_location, actor, token);

All that’s really required is another check in case NounDomain() fails:

	l = NounDomain(actors_location, actor, token);
	! BEGIN ADDITION
	if (l == 0 && actor ~= player) {
		wn = desc_wn;
		l = NounDomain(ScopeCeiling(player), player, token);
	}
	! END ADDITION

If the fallback fails, we’re no worse off than before. If it succeeds, we’re probably getting ready to issue a request for an impossible action, but that’s what we want.


I’m partial to tackling things bottom-up and top-down at the same time. Having now done a bunch of mucking around in code, I’m going to spend some time thinking about what would be a desirable design if we disregarded constraints of the existing system. I definitely agree, though, that ideally it would include tracking:

  • who knows (or at least has an opinion on) the location of what
  • who at least knows about the existence of something even if they have no idea of the location (a possible declaration for these two is sketched after the next list)

and checking:

  • who can perceive what in the moment (by sight, sound, smell, or technological, magic, extrasensory means to be named later)
  • who is capable of touching what in the moment (toward which, I think something like Jon Ingold’s Far Away, which deals with the common case of a thing that can be seen but not touched, should be built in)
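
For the two tracking bullets, a possible shape for the declarations (all names hypothetical; nothing here updates the relations yet):

Acquaintance relates various people to various things.
The verb to know of means the acquaintance relation. [who knows of the existence of what]

Tracking relates various people to various things.
The verb to track means the tracking relation. [who has an opinion on the location of what]

With those in place, conditions like “if Char knows of the honey” become available to both parsing and action rules.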

yup!

Yup. Never seen it or read it and yet still know about the existence of Team Edward and Team… 'tother one.

Done, thanks for the suggestion, and thanks @drpeterbatesuk for writing it.

We now have a situation where the parser is more than happy to construct impossible actions for remote NPCs, i.e. actions that they should not be able to perform due to a lack of visibility or touchability.

There is an existing system for preventing impossible actions for the PC; recall that this is the source of the “riddle” response when trying an answering it that action with a remote NPC as the noun. It should be possible to extend that to cover all actors. We’ll start with visibility.

The Standard Rules declare a rulebook for governing visibility, as discussed in WWI 12.19 Changing visibility:

Visibility rules is a rulebook. [16]
Visibility rules have outcomes there is sufficient light (failure) and there is insufficient light (success).

This rulebook is invoked via the basic visibility rule, which is part of the action-processing rules (placed between the before rules stage and the instead rules stage). This is the 6M62 version:

The basic visibility rule translates into I6 as "BASIC_VISIBILITY_R" with
	"[It] [are] pitch dark, and [we] [can't see] a thing." (A).

The basic visibility rule is listed in the action-processing rules.

The rule itself is written in I6. It more or less says: If the current action requires light, and the actor is the player, then follow the visibility rules. If that rulebook produces there is insufficient light, then carry out the printing a refusal to act in the dark activity.

We could tinker with the I6 rule, but I see an opportunity to leverage the I7 layer better. First, we’ll add some more outcomes to allow more generic language for success and failure.

Visibility rules have outcomes [MODIFIED]
	there is sufficient light (success),
	there is insufficient light (failure),
	visibility succeeds (success) and
	visibility fails (failure).

Second, we’ll allow more information to flow outward about the exact cause of a failure of visibility.

This is the basic visibility rule: [MODIFIED]
	anonymously abide by the visibility rules.

Third, we’ll cut out the I6 rule and install new rules to replace what the original did, broadening them to include all actors.

A visibility rule (this is the can't act in the dark rule): [ADDED]
	if the action requires light and the person asked is in darkness:
		carry out the printing a refusal to act in the dark activity;
		there is insufficient light.

For printing a refusal to act in the dark (this is the can't see in the dark rule): [ADDED]
	if the person asked is the player:
		say "[It] [are] pitch dark, and [we] [can't see] a thing." (A);
	otherwise:
		say "[The person asked] [can't see] a thing right now." (B).

(Note that I had to put together a phrase to allow the person asked is in darkness.)
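
A minimal sketch of such a phrase, reusing the same OffersLight() test as the modified TestVisibility() above:

To decide whether (P - a person) is in darkness:
	(- (~~OffersLight({P})) -).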

And, having prepared for the possibility of something in-scope but not visible, we can add:

A visibility rule (this is the can't act on non-visible things rule): [NEW]
	if the action requires light:
		if (the noun is not nothing and the person asked cannot see the noun) or
		   (the second noun is not nothing and the person asked cannot see the second noun):
			say "[The person asked] [can't see] [the noun]." (A);
			visibility fails.

After modifying the scope-for-robots rule to use the phrase option but not its contents, and modifying the printing of complete scope to print an asterisk after invisible things, the test me for the robots scenario is still looking pretty good! Scope and visibility have been successfully separated:

Test Transcript
>[1] z
Time passes.

In scope for yourself: yourself, flashlight, cardboard box, Robot A*, fissure and Robot B*.

In scope for Robot A*: Robot A*.

In scope for Robot B: metal door, Robot B and chest.

>[2] wave light
You wave the flashlight.

In scope for yourself: yourself, flashlight, cardboard box, Robot A*, fissure and Robot B*.

In scope for Robot A: Robot A, gold nugget and fissure.

In scope for Robot B: metal door, Robot B and chest.

>[3] g
You wave the flashlight.

In scope for yourself: yourself, flashlight, cardboard box, Robot A*, fissure and Robot B*.

In scope for Robot A*: Robot A*.

In scope for Robot B: metal door, Robot B and chest.

>[4] get box
Taken.

In scope for yourself: yourself, flashlight, cardboard box, Robot A*, fissure and Robot B*.

In scope for Robot A*: Robot A*.

In scope for Robot B: metal door, Robot B and chest.

>[5] d

Pit
High above, a slash of brightness marks the fissure through which you first entered the caves.

You can also see Robot A and a gold nugget here.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, fissure and Robot B*.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget and fissure.

In scope for Robot B: metal door, Robot B and chest.

>[6] robot, take nugget
Robot A picks up the gold nugget.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, fissure and Robot B*.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget and fissure.

In scope for Robot B: metal door, Robot B and chest.

>[7] e

Alcove
You can see a metal door here.

In scope for yourself: yourself, flashlight, cardboard box, Robot A*, metal door and Robot B*.

In scope for Robot A*: Robot A* and gold nugget*.

In scope for Robot B: metal door, Robot B and chest.

>[8] w

Pit
High above, a slash of brightness marks the fissure through which you first entered the caves.

You can also see Robot A here.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, fissure and Robot B*.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget and fissure.

In scope for Robot B: metal door, Robot B and chest.

>[9] robot, go east
Robot A goes east.

In scope for yourself: yourself, flashlight, cardboard box, Robot A*, fissure and Robot B*.

In scope for Robot A*: Robot A* and gold nugget*.

In scope for Robot B: metal door, Robot B and chest.

>[10] wave
You wave.

In scope for yourself: yourself, flashlight, cardboard box, Robot A*, fissure and Robot B*.

In scope for Robot A: Robot A, gold nugget and metal door.

In scope for Robot B: metal door, Robot B and chest.

>[11] g
You wave.

In scope for yourself: yourself, flashlight, cardboard box, Robot A*, fissure and Robot B*.

In scope for Robot A*: Robot A* and gold nugget*.

In scope for Robot B: metal door, Robot B and chest.

>[12] jump
You jump on the spot.

In scope for yourself: yourself, flashlight, cardboard box, Robot A*, fissure and Robot B*.

In scope for Robot A*: Robot A* and gold nugget*.

In scope for Robot B: metal door, Robot B, chest and brass key.

>[13] g
You jump on the spot.

In scope for yourself: yourself, flashlight, cardboard box, Robot A*, fissure and Robot B*.

In scope for Robot A*: Robot A* and gold nugget*.

In scope for Robot B: metal door, Robot B and chest.

>[14] e

Alcove
You can see Robot A and a metal door here.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door and Robot B*.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget and metal door.

In scope for Robot B: metal door, Robot B and chest.

>[15] robot, go east
Robot A goes east.

In scope for yourself: yourself, flashlight, cardboard box, Robot A*, metal door and Robot B*.

In scope for Robot A: Robot A, gold nugget, metal door, Robot B and chest.

In scope for Robot B: Robot A, gold nugget, metal door, Robot B and chest.

>[16] e

Mysterious Cell
You can see a metal door, Robot A, Robot B and a chest (closed) here.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

>[17] open chest
You open the chest, revealing a brass key.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>[18] drop box
Dropped.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>[19] robot b, enter box
Robot B gets into the cardboard box.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>[20] close box
You close the cardboard box.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B*, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, chest and brass key.

In scope for Robot B*: cardboard box* and Robot B*.

>[21] open box
You open the cardboard box, revealing Robot B.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>[22] put chest in box
(first taking the chest)
You put the chest into the cardboard box.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>[23] put light in box
You put the flashlight into the cardboard box.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>[24] close box
You close the cardboard box.

In scope for yourself: yourself, cardboard box, Robot A, gold nugget, metal door and Robot B*.

In scope for Robot A: yourself, cardboard box, Robot A, gold nugget and metal door.

In scope for Robot B: flashlight, cardboard box, Robot B, chest and brass key.

>[25] jump
You jump on the spot.

In scope for yourself: yourself, cardboard box, Robot A, gold nugget, metal door and Robot B*.

In scope for Robot A: yourself, cardboard box, Robot A, gold nugget and metal door.

In scope for Robot B: flashlight, cardboard box, Robot B and chest.

>[26] open box
You open the cardboard box, revealing a flashlight, a chest and Robot B.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B and chest.

>[27] open chest
You open the chest, revealing a brass key.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>[28] put light in chest
(first taking the flashlight)
You put the flashlight into the chest.

In scope for yourself: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot A: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

In scope for Robot B: yourself, flashlight, cardboard box, Robot A, gold nugget, metal door, Robot B, chest and brass key.

>[29] close box
You close the cardboard box.

In scope for yourself: yourself, cardboard box, Robot A, gold nugget, metal door and Robot B*.

In scope for Robot A: yourself, cardboard box, Robot A, gold nugget and metal door.

In scope for Robot B: flashlight, cardboard box, Robot B, chest and brass key.

>[30] jump
You jump on the spot.

In scope for yourself: yourself, cardboard box, Robot A, gold nugget, metal door and Robot B*.

In scope for Robot A: yourself, cardboard box, Robot A, gold nugget and metal door.

In scope for Robot B*: cardboard box* and Robot B*.

Shouldn’t that second response be:

say "[The person asked] [can't see] a thing right [now]." (B).

?


Sure! Good catch.

Could you please provide your whole example? It’s been assembled from a lot of pieces at this point. :grinning:

It’s in a not-quite working state at the moment, since I’m trying to replace touchability. Once I get that ironed out, I’ll update.


Having converted visibility governance to an I7 basis, we can try to do the same for touchability.

Some interesting tidbits I discovered while studying the existing I6 code:

  • In 6M62, the global untouchable_silence is weirdly involved in the I6 for the Glulx version of the quit the game rule (something fixed in 10.1), and the access through barriers rule has doubled-up commands related to failing the rule when handling attempts to touch directions (something not fixed in 10.1).

  • The ZRegion() routine, a legacy item from the I6 Standard Library, is used by only a pair of template routines (MoveFloatingObjects() and BackdropLocation()), and removing it from them improves their legibility. (Still that way in 10.1 template code.)

  • The very unlikely to mean taking what's already carried rule, one of the built-in DTPM (“does the player mean”) rules, is located in the section on Accessibility instead of along with the other rules for taking, as one might expect. (Still that way in 10.1 Standard Rules.)

Anyhow, the system around touchability isn’t very easy to follow. A good starting point is the basic accessibility rule:

The basic accessibility rule is listed in the action-processing rules.
The basic accessibility rule translates into I6 as "BASIC_ACCESSIBILITY_R" with
	"You must name something more substantial." (A).

The I6 basic accessibility rule handles attempts to use a direction as the noun or second noun of an action that requires touchability for one or both (issuing response A if that happens), plus it ensures that the noun and/or second noun are touchable by the actor, if required. The key routine is ObjectIsUntouchable(), which returns true when the actor cannot touch something, and which is the basis of the touchability relation.
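
(It’s also what powers I7-level can-touch conditions, as in this trivial usage sketch, borrowing the glass cage and pebble from the expanded scenario below:)

Every turn when the player cannot touch the pebble:
	say "The pebble sits tantalizingly out of reach."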

ObjectIsUntouchable() primarily works by invoking the accessibility rules. This rulebook has only one rule by default.

Accessibility rules is a rulebook. [13]

The access through barriers rule is listed last in the accessibility rules.
The access through barriers rule translates into I6 as
	"ACCESS_THROUGH_BARRIERS_R" with
	"[regarding the noun][Those] [aren't] available." (A).

The I6 access through barriers rule issues response A when the actor is not in the same room as the noun or second noun, or invokes the reaching inside rules and reaching outside rules as needed to check whether anything in the same room will prevent the actor from reaching the target object.

Reaching inside rules is an object-based rulebook. [14]
Reaching inside rules have outcomes allow access (success) and deny access (failure).

The can't reach inside rooms rule is listed last in the reaching inside rules. [Penultimate.]
The can't reach inside rooms rule translates into I6 as
	"CANT_REACH_INSIDE_ROOMS_R" with
	"[We] [can't] reach into [the noun]." (A).

The can't reach inside closed containers rule is listed last in the reaching inside rules. [Last.]
The can't reach inside closed containers rule translates into I6 as
	"CANT_REACH_INSIDE_CLOSED_R" with
	"[The noun] [aren't] open." (A).


Reaching outside rules is an object-based rulebook. [15]
Reaching outside rules have outcomes allow access (success) and deny access (failure).

The can't reach outside closed containers rule is listed last in the reaching outside rules.
The can't reach outside closed containers rule translates into I6 as
	"CANT_REACH_OUTSIDE_CLOSED_R" with
	"[The noun] [aren't] open." (A).

None of these rules really need to be written in I6 to do their jobs.

As with visibility, the system has some undesirable interdependencies. Chief among these is that the concepts of “accessibility” and “touchability” are intertwined – it’s all one lump handled under the basic accessibility rule. It’s not super-clear just what “accessibility” is supposed to mean as a concept, but we can try to reverse-engineer it from the code. Once the mechanical, world model-driven touchability concerns are separated out, “accessibility” seems to be the framework for organizing the necessary touchability checks.

The accessibility rules can’t be removed or modified to take a parameter, but we’re free to add, remove or change the rules within it. After much tinkering, I think I’ve worked out a pretty good framework:

  • There’s a new touchability rules rulebook. This is where all mechanical touch enforcement is handled.

  • The accessibility rules are tasked with driving the touchability rules (via ObjectIsUntouchable()) for each object that requires touchability.

The new rulebook:

Touchability rules is a rulebook.
The touchability rules have outcomes touchability succeeds (success) and touchability fails (failure).

The touchability rulebook has an object called toucher.
The touchability rulebook has an object called touchee.
The touchability rulebook has an object called touch apex. [i.e. common ancestor]
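
(There’s no built-in I7 phrase for finding that common ancestor, but the template’s CommonAncestor() routine, the same one the I6 code already uses, can be exposed with a one-liner like this sketch:)

To decide which object is the common ancestor of (X - an object) and (Y - an object):
	(- (CommonAncestor({X}, {Y})) -).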

… and a diagram of how it interacts with other parts:

ObjectIsUntouchable
	MoveFloatingObjects(LocationOf(toucher))
	FollowRulebook(touchability rules)

touchability rules
	establish touchability context	[rule for setting rulebook variables]
	can't touch immaterial objects	["You must name something more substantial."]
	can't touch off-stage things	["That isn't available."]
	can't reach inside rooms		["You can't reach into Cavern."]
	can't reach unreachable objects
		reaching outside rules
			can't reach outside closed containers	["The glass cage isn't open."]
		reaching inside rules
			can't reach inside closed containers	["The glass cage isn't open."]
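
Most of the individual touchability rules then become short I7 rules; for instance, the off-stage check might look like this sketch (wording approximate):

A touchability rule (this is the can't touch off-stage things rule):
	if the touchee is off-stage:
		say "[regarding the touchee][Those] [aren't] available.";
		touchability fails.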

The starting scenario has been expanded with some new objects, and the scope listing now includes information on visibility (non-visible suffixed with asterisk) and touchability (non-touchable prefixed with hash).

Cavern
A dark pit is in the center of the cavern. Rough stairs cut into its outer edge descend into darkness.

High above, a slash of brightness marks the fissure through which you first entered the caves.

You can also see a cardboard box (empty) and a glass cage (closed) (in which is a pebble) here.

>Z
Time passes.

In scope for Robot A*: Robot A*.

In scope for Robot B: Robot B, chest and metal door.

In scope for yourself: fissure, #Robot A*, #Robot B*, yourself, flashlight, cardboard box, glass cage and #pebble.

The important part is that all of that makes it easy to enforce touchability on NPCs the same way as for the PC.

One thing it doesn’t do yet is flow out information about why a touchability test failed. The only tool for allowing that is anonymously abide by..., and that doesn’t really work with this structure. I’ve been tinkering with ways around it, but it’s the kind of kludge that wouldn’t be desirable in the real code base, so that probably wouldn’t be a feature if any of this were adopted.

Zed asked for the current code, which is fair, but it’s super-messy at the moment, so I want to clean it up first. Also, there are some more design considerations that could qualify for additional scope creep.

For fun, though, this is the game file for the current build. (Note that there may still be cryptic debug output in certain places. Also note that the scope display uses bold text for lit objects. Italics is supposed to mean something not receiving light, but it’s not working – more on that later.)

labsession01.ulx (602.3 KB)

1 Like