[i7 performance question] definition or property?

A question for memory-usage and run-time-performance gurus:

Often, when I need to create an either/or property that will only be relevant to a handful of objects in the game, I just create a definition for it instead. For example:

[code][instead of this:]

A thing can be perforated or nonperforated. A thing is usually nonperforated. The paycheck stub is perforated.

[I’ll do this:]

Definition: the paycheck stub is perforated: yes.[/code]

My reasoning is that, if I’ve got two or three hundred objects in the game world, then creating a new either-or property that applies across everything must require more memory (and/or bloat the game’s file size, and/or make the game run slower) than if I just create a definition that only gets called when it’s required during runtime.

However, I don’t really know much about these sorts of things, so my question is, am I overthinking this? Is a definition more efficient than an either/or property? (I realize there are situations where one works better than the other for a given set of circumstances; I mean that assuming either will suit the purpose, how does each affect performance?) Is there a point of diminishing returns, like if I end up having to write definition statements for a dozen different objects, should I just make it an either/or property instead?

Just curious, thanks.

(I wasn’t able to get “Definition: the paycheck stub is perforated.” to compile under 6M62. However, I was able to get it to compile if I added a tautological condition: “Definition: the paycheck stub is perforated if 1 < 2.”)

An either-or property declares a new attribute that all things possess. An object’s object table entry (for both zcode and glulx) contains a fixed-size bitvector of attributes (this size can be changed at compile time for glulx). Those bits will be there, whether used or unused, so it’s not burdensome to add a new attribute, but if there’s a chance of running out of attributes, you might not want to spend one on something that doesn’t apply to most objects.
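
To make that concrete, here’s a minimal hand-written I6 sketch of what an attribute amounts to (this is not what I7 generates; the names are invented for illustration):

[code]! hand-written sketch, not I7 output
Attribute perforated;          ! reserves one bit in every object's attribute bitvector

Object paycheck_stub "paycheck stub"
  with name 'paycheck' 'stub',
  has  perforated;             ! this object's bit starts out set

[ MakePerforated o;
    give o perforated;         ! setting the bit at run-time
];[/code]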

A definition doesn’t declare an attribute. It creates an adjective, which boils down to a routine that takes a thing as an argument and returns a boolean indicating whether or not the adjective applies to the thing. In this case (with my 1 < 2 hackery), the adjective looks like:

[ Adj_42_t1_v10 
    t_0 ! Call parameter: thing
    ;
    ! meaning of "perforated"
      if (t_0 == I126_paycheck_stub) return ((((1 < 2))));
    rfalse;
];

If we add a few more definitions, it looks like:

[ Adj_42_t1_v10 
    t_0 ! Call parameter: thing
    ;
    ! meaning of "perforated"
      if (t_0 == I126_paycheck_stub) return ((((1 < 2))));
      if (t_0 == I127_office_chair) return ((((1 < 2))));
      if (t_0 == I128_stapler) return ((((1 < 2))));
      if (t_0 == I129_copier) return ((((1 < 2))));
    rfalse;
];

So, if you had dozens of definitions, I suspect that it would be more efficient to locate an object’s entry in the object table and test an attribute than to perform many comparisons inside an adjective routine.
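
For contrast, if the adjective could test an attribute directly, its body would collapse to a single bit test no matter how many objects qualify. A hypothetical hand-written sketch (again, not actual I7 output; it assumes a "perforated" attribute has been declared):

[code][ Adj_attr_sketch
    t_0 ! Call parameter: thing
    ;
      if (t_0 has perforated) rtrue;   ! one bit test, however many objects qualify
    rfalse;
];[/code]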

Slight note: there has to be some condition, but you can make that condition be a test of identity.

Definition: something is perforated if it is the paycheck stub.

Oops, my bad. I meant to write, “Definition: the paycheck stub is perforated: yes.”, which has the same effect.

That makes sense. Thanks for the insight.

I looked more closely at this, and there’s more overhead than I initially thought when testing either-or properties in I7.

The generated I6 code not only declares an attribute, but also contains adjective routines for getting and setting the either-or property.

If we write an I7 test like “if the paycheck stub is perforated”, it doesn’t directly translate to an I6 attribute test like “if (I126_paycheck_stub has p59_perforated)”, but instead produces a call to an adjective routine like “if (Adj_86_t1_v10(I126_paycheck_stub))”.

Here’s the getter adjective routine for perforated:

[ Adj_86_t1_v10 
    t_0 ! Call parameter: object
    ;
    ! meaning of "perforated"
      if (t_0) return (GetEitherOrProperty(t_0, p59_perforated));
    rfalse;
];

This calls:

[ GetEitherOrProperty o p;
	if (o == nothing) rfalse;
	if (p<0) p = ~p;
	if (WhetherProvides(o, true, p, false)) {
		if (p<FBNA_PROP_NUMBER) { if (o has p) rtrue; rfalse; }
		if ((o provides p) && (o.p)) rtrue;
	}
	rfalse;
];

which is written to handle I7 either-or properties implemented either as I6 attributes or as I6 properties. (Perhaps they’re implemented as properties after we exceed the number of available attributes?) It’s here, after a call to WhetherProvides, that the attribute is finally tested with “if (o has p)”.

I don’t know that this changes the conclusion when there are dozens of definitions, but it might alter the threshold at which one approach outperforms the other.

If this goes beyond the hypothetical, and the game’s performing poorly, the thing to do is to run a profiler (glulxe has one) and see where the program’s spending its time.

The only time I ever noticed any significant performance difference was when I once wrote a grammar token that matched “any relevant thing”. In this case, “relevant” was a definition that actually ran through a few conditions before returning yes or no (as opposed to just automatically returning yes for a single object).

The game immediately went into slo-mo whenever the grammar token got checked, and I eventually realized it was running every object in the game (300 to 400) through the definition, trying to find a match.

When I went back and made “relevant” an either-or property instead of a definition, the lag vanished.
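
For anyone hitting the same thing, the shape of the two versions was roughly this (reconstructed from memory; the action and object names are made up):

[code]The Lab is a room. The player carries a widget. A pebble is in the Lab.

[Definition version - when the token is matched, the parser runs every object in the model world through this routine:]
Definition: a thing is relevant if it is carried by the player or it is in the location.

Pinging is an action applying to one visible thing.
Understand "ping [any relevant thing]" as pinging.
Report pinging: say "Ping."

[Property version - delete the Definition above and use a flag instead, so each candidate costs only a single property test:]
[A thing can be relevant or irrelevant. A thing is usually irrelevant.][/code]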

That’s correct.

(Glulx lets you increase the number of attributes, but I7 is not yet set up to take advantage of this. Yes, Graham knows. :)

How many attributes do you normally get in I7?

You get 48 with z8 and 56 with glulx. By my quick count, the standard rules consume 37 of these.

From the glulx technical reference:

Inform (as of 6M62) uses only 48 attributes regardless of platform.