Release versus debugger

I am fairly far along in a game, and everything works fine in the debugger (I'm on macOS). But when I compile it into a .z8 file and run it in Lectrote or Spatterlight, I get constant (Technical trouble: Heap space exhausted. Attempting to recover with UNDO.) error messages on about half the commands I enter, and these are basic commands like 'examine book'. For about half the objects I try to "take" or "drop" I get the same message.

I realize this is complicated and there could be something globally wrong with my code (but if so, why does it work perfectly in the debugger?). Has anybody else run into this discrepancy, or have any idea what's going on?


The most straightforward solution to this problem is to increase the amount of heap space allocated. This can be done at compile time. If you enter this on the command line

./dialogc -h

you will see this (among other things):

--heap      -H    Set main heap size (default 1000 words).
--aux       -A    Set aux heap size (default 500 words).
--long-term -L    Set long-term heap size (default 500 words).

Unfortunately, I do not know of any way to determine how much these values should be increased except by trial and error.
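For example, a build with all three heaps enlarged might look like this (the file names game.dg and stdlib.dg are placeholders for your own project files, and the sizes are arbitrary starting points, not recommendations):

	./dialogc --heap 4000 --aux 2000 --long-term 2000 -t z8 -o game.z8 game.dg stdlib.dg

If the errors stop at some size, you at least know it is genuine heap pressure rather than a runaway loop, and you can dial the numbers back down from there.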

I have only ever exhausted the heap while manipulating long lists or using recursive rules. If you want to keep the default heap sizes, you could try a solution that avoids whichever of these is causing the problem.
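To illustrate what I mean (a contrived sketch, not code from any real project): a rule that builds a long list recursively keeps every pending invocation and list cell alive on the heap, so deep calls like this are a typical way to run out:

	%% Builds the list [$N, $N-1, ... 1]. Each recursion level holds
	%% a partial list cell on the heap until the whole list is done.
	(build countdown 0 into [])
	(build countdown $N into [$N | $Rest])
		($N minus 1 into $M)
		(build countdown $M into $Rest)

An iterative reformulation, or simply keeping such lists short, usually brings the usage back under the default limits.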


The debugger does not use any of the backends and has more heap space than they do, and I don't think its heap can be adjusted with command-line parameters the way the compiler's can. Even so, if it were runaway infinite recursion, the debugger would have barfed just like the Z-machine.

My suggestion is that you add

	(actions on)
	(trace on)

at the start of your source code, compile it, and then try the misbehaving actions in the actual interpreter. You might want to set Spatterlight or Lectrote to a fixed-pitch font to make the tracing info easier to follow.


Ahh, thanks. I'd forgotten, or never realized, that you can embed the debugger tools into the compiled build.

I'm still confused, though. The heap is overloading during parsing, specifically during

(parse object name $Words as $Result $All $Policy)

not when working through a list or doing anything recursive or fancy.

It's a complex game with a lot of timers/daemons, and I'm wondering if it's possible I've just gone beyond what the Z-machine can do? It works fine on the Å-machine, so maybe it's time to just make the move…

The recursion might be happening before (parse object name $ as $ $ $) is even reached. At which level do you see that rule entered? You can check in the debugger with (trace on) by counting the vertical bars |.

For example, in a simple test scenario that I ran, I see that rule entered at level 13:

| | | | | | | | | | | | | ENTER (parse object name [red fish] as $ 0 [5]) stdlib_mine.dg:5433

For an unambiguous single object in a simple verb phrase like "get red fish" or "examine red fish", you should see it entered at around the same level.

I have found that when a rule is entered at about 40 bars, the Z-machine gets close to heap exhaustion. Obviously it depends on other things as well.

Also, what happens when you try @Karona's suggestion and increase the heap size?

Edit: I forgot to mention one last tool for debugging memory issues. You can sprinkle (display memory statistics) into suspected rule bodies to monitor heap usage. Here is my simple scenario:

> get red fish
PONa start:
Peak dynamic memory usage: 189 heap words, 233 aux words, and 6 long-term words.
PONa after candidates:
Peak dynamic memory usage: 226 heap words, 234 aux words, and 6 long-term words.
You take the red fish.

I achieved that by touching the (parse object name $ as $ $ $) rule in the library like this:

	(parse object name $Words as $Result $All $Policy)
		PONa start: (display memory statistics)
		(filter $Words into $Filtered)
		(nonempty $Candidates)
		PONa after candidates: (display memory statistics)
		(apply policy to $Candidates $Policy $CleanList)

(display memory statistics) only works in compiled code, and only on specific backends; it won't work in the debugger.

Edit 2: If you build your game with stddebug.dg included, you can also use the meminfo command during play to see peak memory usage.
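Assuming the same placeholder file names as before, including the debugging extension is just a matter of listing it on the compiler command line ahead of the standard library, since in Dialog, files listed earlier take precedence:

	./dialogc -t z8 -o game.z8 game.dg stddebug.dg stdlib.dg

Remember to drop stddebug.dg again (and any (trace on)/(actions on) lines) before building a release version.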


so THAT’S what the vertical bars mean…

Yes, I increased the heap to the max and no change.

When I run the trace in the debugger, the most vertical bars I get is 19. At the actual point in the trace where the heap error occurs, it's around 17 or 18 bars.

Using meminfo, peak memory around the time of the error is 518 heap, 244 aux, and 17 long-term.

Which leaves me even more confused, since none of this seems to point to an exhausted heap.

Interesting. Is there any chance that you are overriding (parse object name $ as $ $ $) in your own source code but forgot a (just) in there, so that execution falls through to the library version as well? That rule is queried as a multi-query, so that could double the heap usage in one fell swoop.
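For reference, the shape I mean is something like this (a hypothetical override, not code from your game): putting (just) at the top of your clause commits to your definition, so the query doesn't go on to also try the library's rule on backtracking:

	(parse object name $Words as $Result $All $Policy)
		(just) %% without this, the library rule can also run
		%% your custom parsing goes here

If the (just) is missing, both your version and the library version do their work, which would roughly match the doubled heap usage you are seeing.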

If you are willing to share your code with me, you can send it to me in a DM and I would take a look. Other than that, I am afraid that’s all that comes to my mind for shooting in the dark.
