Lab Session (concluded): Giving commands to remote NPCs

For those following along: I’ve added a “current status” section to the top post, which I’ll keep updated as we go.


The activity framework has been partly implemented (deciding and removing), but I had to hunt down a few bugs.

One of the bugs discovered is kind of a problem. The command >ROBOT B, X ROBOT A (when Robot A is not in the same place as Robot B) causes a parsing failure. The reason is that the word “robot” in the command portion matches Robot B. With only one matching object in scope, the parser is satisfied that the necessary noun has been found, but when it tries to wrap up the command, there is an extra word (the “A” at the end). Since the noun did actually parse, the simple fallback to the PC’s scope is not triggered, and parsing then fails with an “I only understood as far as...” parser error.

My first approach to fixing this didn’t work out. I have another idea that I’ll try soon.


OK! My second approach seems to be working out reasonably well, though it makes some potentially dangerous assumptions regarding the human player’s intent. On some level there’s no way to avoid that during parsing when there’s any ambiguity, so the fix seems satisfactory for now.

Once past that problem, I was able to finish the code for extending senses. I was also able to work out a way to smuggle the reason the action failed from the touchability rules past the basic accessibility rule, so that the unsuccessful attempt rules of an NPC will have something specific to work with. This makes it pretty easy to produce interactions like:

Cavern
A dark pit is in the center of the cavern. Rough stairs cut into its outer edge descend into darkness.

High above, a slash of brightness marks the fissure through which you first entered the caves.

You can also see a cardboard box (empty) and a glass cage (closed) (in which is a pebble) here.

>ROBOT A, GO UP
Robot A arrives from below.

>ROBOT A, TAKE PEBBLE
"Unable to reach the pebble," buzzes the robot.

>ROBOT A, GET IN CAGE
The robot bleeps plaintively. "Unable to comply."

>OPEN CAGE
You open the glass cage.

>ROBOT A, GET IN CAGE
Robot A gets into the glass cage.

>CLOSE CAGE
You close the glass cage.

>ROBOT A, TAKE BOX
The robot's head swivels to survey the situation. "The glass cage obstructs passage," it declares.

The controlling rule is:

Unsuccessful attempt by Robot A doing something:
	if the reason the action failed is:
		-- the can't reach inside closed containers rule:
			say "'Unable to reach [the noun],' buzzes the robot.";
		-- the can't reach outside closed containers rule:
			let obstruction be the touchability ceiling of Robot A;
			say "The robot's head swivels to survey the situation. '[The obstruction] obstructs passage,' it declares.";
		-- otherwise:
			say "The robot bleeps plaintively. 'Unable to comply.'" 

I’ve updated the current status in the top post.

There’s only one more major overhaul stage planned, which is to try to elevate the core system to I7 instead of I6. The goal is to be able to declare new senses and their governing rules in I7 source, e.g. “Olfaction is a sense.” and so on. Before I do that, I’m experimenting with the new system to try to find examples that demonstrate its advantages over the current scope system. (If anyone would like to suggest toy scenarios, I’m listening! Especially if they would be difficult and/or annoying to do in the current model.)
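For readers less familiar with the current model, here’s the sort of per-object patching that scope hacking usually requires today. This is plain standard-rules I7 of my own devising (not the new framework), sketching how SMELL CHEESE might be made to work on cheese shut inside an opaque box: scope has to be widened by hand, and the response supplied at the Before stage so the reaching checks never get a chance to object.

The Pantry is a room.
The cardboard box is a closed openable opaque container in the Pantry.
Some limburger cheese is in the cardboard box.

[Widen scope by hand so that SMELL CHEESE even parses:]
After deciding the scope of the player:
	place the limburger cheese in scope.

[Reply at the Before stage, ahead of the visibility and reaching checks:]
Before smelling the limburger cheese when the player cannot touch the limburger cheese:
	say "A pungent whiff is seeping out of the cardboard box." instead.

Every sense-defying object needs its own pair of rules like this, which is presumably the sort of bookkeeping a declarative sense model could make unnecessary.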


Unexpected bonus overhaul: reworking the lighting system because doors also have a “sidedness” when it comes to how they interact with light – something that should have been anticipated when I was handling inside-vs-outside for containers. This has turned out to be more difficult than expected, so complex lighting scenarios are only half-working until that’s straightened out.

The whole mess is a good example of what happens when requirements change mid-process. Making light-related logic fully observer-dependent introduces new questions like: If there is light in the room that is the back side of door D, but no light in the room that is the front side of door D, is door D visible from the front side? What if door D is transparent?

It seemed easier to deal with those questions in a reasonable way if the core logic were elevated to the I7 level, so I went ahead with that. Today I was able to add olfaction as a sense with about 25 lines of I7.

Transcript Excerpt
>SENSES
[Checking all senses for yourself.]

PROPRIOCEPTION
1: yourself (686726)
2: your hands (686598)

2 impressions via proprioception for yourself.

DIRECTIONAL ORIENTATION
1: the north (685894)
2: the northeast (685926)
3: the northwest (685958)
4: the south (685990)
5: the southeast (686022)
6: the southwest (686054)
7: the east (686086)
8: the west (686118)
9: the up (686150)
10: the down (686182)
11: the inside (686214)
12: the outside (686246)

12 impressions via directional orientation for yourself.

VISION
1: yourself (686726)
2: a flashlight (686758)
3: a dent (686630)
4: a reverse switch (686662)
5: your hands (686598)
6: a cardboard box (686790)
7: some limburger cheese (686822)
8: a label (686918)
9: a glass cage (686854)
10: a pebble (686886)
11: a fissure (686310)
12: a brass key (686566)

12 impressions via vision for yourself.

TACTION
1: yourself (686726)
2: a flashlight (686758)
3: a dent (686630)
4: a reverse switch (686662)
5: your hands (686598)

5 impressions via taction for yourself.

OLFACTION
1: some limburger cheese (686822)

1 impressions via olfaction for yourself.

Surprising discovery of the day: If the PC is inside a closed container with no light source, objects that are part of that container (e.g. a label) will be in scope under the standard model!
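A minimal way to see this for yourself, using only the Standard Rules (my own construction from the description above, not code from the lab project):

The Hold is a room.
The trunk is a closed openable opaque enterable container in the Hold.
The label is part of the trunk.
The player is in the trunk.

Test me with "look / touch label / take label".

With the trunk closed and opaque, the player starts in darkness, yet (per the discovery above) commands naming the label should still parse.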

Yep, it caused big problems in Death on the Stormrider development! If things are “part of” a closed container, they’re considered to be on the inside for lighting purposes, but on the outside for reaching purposes. So if you get on the lid of the trunk from Shimat’s cabin (an enterable supporter that’s part of a closed container), suddenly the outside world disappears!

Since I was working on lighting anyway, I decided to see what could be done about things that “glow” but aren’t really light sources in the same way that a lit thing is.

It turns out that English words to describe interactions involving light are surprisingly ambiguous – so much so that coming up with names that promote clarity has been difficult. Just about the only terms that have stuck are “radiance” (i.e. light strong enough to see other objects by) and “luminance” (i.e. light strong enough for its source to be seen, but not strong enough to see other objects by).

The basic framework is now in place: objects can take the property luminant, which makes them visible without illuminating the space in which they are located. Here’s the toy world for anyone interested in playing with it, but note that I’ve made only minimal efforts to modify the Standard Rules, so some responses may not fully take the new sensory model into account:

labsession01c.ulx (740 KB)
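For a sense of how the property reads in source, here’s a guess at typical usage (only the adjective “luminant” is confirmed above; the surrounding phrasing assumes the toy world’s framework and is not quoted from it):

The lantern is a lit thing in the Cavern. [standard property: lights up the whole space]
The foxfire fungus is a thing in the Cavern. The foxfire fungus is luminant. [visible in the dark, but illuminates nothing else]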

An observation: In practice, phrasing memory as a sense that informs scope has a significant drawback. Every object placed into scope via memory competes during parsing with objects that are already naturally in scope, which forces unwanted disambiguation questions.
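To make that concrete, the usual way to push remembered objects into scope looks something like this (standard I7; the “remembered” property is hypothetical, not something from the lab project):

A thing can be remembered.

[Every remembered thing becomes referable; any remembered item whose name matches something physically present will now trigger a "Which do you mean..." question.]
After deciding the scope of the player:
	repeat with item running through remembered things:
		place item in scope.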

Since we’re clearly far beyond the original problem, and since interest in this thread by the general public seems fairly limited, I’m calling an end to the thread. I believe that the original objectives were met (and then some), so the lab session was successful on that front, and it was certainly educational. Any future installments are likely to be private message threads for the MSC.

Thanks to everyone for participating!
