Iterating through tables vs. iterating through lists: which is faster?

I was curious about that, so I ran this code.

r1 is a room.

xxx is a list of numbers variable. xxx is {1, 4, 7, 10}.

table of yyy
num
1
4
7
10

chapter tabing

tabing is an action out of world.

understand the command "tab" as something new.

understand "tab" as tabing.

carry out tabing:
	repeat with X running from 1 to 32000:
		repeat through the table of yyy:
			do nothing;
	the rule succeeds;

chapter lising

lising is an action out of world.

understand the command "lis" as something new.

understand "lis" as lising.

carry out lising:
	repeat with X running from 1 to 32000:
		repeat with X2 running through xxx:
			do nothing;
	the rule succeeds;

It appears that the TAB command is twice as fast as the LIS command: TAB takes one second, while LIS consistently takes two. But this is a very simple experiment.

I tried tables with more rows and didn't see much of a performance decrease, and I tried putting more complex things to search through in the list.

So I'm wondering: is what I'm seeing sensible, that it's the mere act of cycling through a list/table that takes the time hit, and not the size/complexity of each table row or list entry?

Is there any way to generalize my very empirical data to say "yes, tables are faster than lists"?

My ultimate test case is parsing through bigger tables or lists to divide up quests. I have one more way I might speed things up in Fourdiopolis, where I break a table up like so:

table of friends
friend-name	(other data)

table of friends you find after 3 moves
friend-name	(other data)

table of friends you find after 4 moves
friend-name	(other data)

And I wondered if lists would work better, since you could remove items/quests from a list once you found them, thus reducing the search time.

a quest is a kind of thing. a quest can be undone or solved.
ace, brad, chet, and dan are quests.
a quest-log is a kind of thing.
a quest-log has a list of quests called friends-all.
a quest-log has a list of quests called friends-left.
a quest-log has a list of lists of quests called friends-by-length-all.
a quest-log has a list of lists of quests called friends-by-length-left.
[when starting a quest, friends-left = friends-all and friends-by-length-left = friends-by-length-all, then you delete stuff]
friends is a quest-log.
friends-all of friends is { ace, brad, chet, dan }.
friends-by-length-all of friends is { {ace, dan}, {brad, chet} }.

The reason for having -all and -by-length-all is that the quest list you are given, where you don’t know the names, is alphabetized, so the things you get early help narrow down the later ones. So I want things alphabetized as well as ordered by length.
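For what it's worth, the bookkeeping hinted at in the bracketed comment might look roughly like this. This is an untested sketch against the declarations above; the "solve" phrase name is mine, not anything from the actual game:

```inform7
[Copy the master lists at the start of play, then prune as quests get solved.]
when play begins:
	now friends-left of friends is friends-all of friends;
	now friends-by-length-left of friends is friends-by-length-all of friends.

to solve (Q - a quest):
	now Q is solved;
	remove Q from friends-left of friends, if present;
	[also prune Q from each length-bucketed sublist]
	repeat with N running from 1 to the number of entries in friends-by-length-left of friends:
		remove Q from entry N of friends-by-length-left of friends, if present.
```

The "if present" form avoids a run-time problem when the quest has already been removed.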

As it is, converting from tables to lists might be a wash in terms of delay time, since lists take double the time to search through at first and then eventually they take far less time as more quests are solved.

So are there any guidelines or data from people who've worked with the core, as to which is faster and why? My gut feeling is tables are good enough either way, and I have a general solution, but I want to check I'm not stepping into anything nasty or missing a better solution before going forward.


On the whole, I would expect tables to be faster than lists, as you’ve determined.

A table is internally implemented as a set of parallel arrays (one for each column). Access to the table is by row number, which is an array index value common to all of the column arrays. Looking up a table row is just a set of array access operations. (There is a cost of scanning through the rows when looking for a row corresponding to a particular value.)

A list is internally implemented as a block value. A block value is stored as a segment or a set of segments within the heap memory array. Fetching a list entry involves scanning through those segments. Since multiple segments of a single block value take the form of a linked list, the code must scan through the segments (starting from the first one) to find the one that holds the value being looked up. (This is done for every access, even if the list is unchanged.) Overall, this tends to be more work.

EDIT: I should add that, as a practical matter, if what’s stored in the table rows is a block value (such as text), then to actually make use of what you find in that field will likely require incurring block value routine overhead, anyway, so there are situation-specific factors to consider.

In the multiple-table example that you’ve given, are the “(other data)” columns the same between the various tables? If so, then adding another column (perhaps called “turns to meet”) would let you combine them all into a single table.
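In case it helps, a merged version might look something like this. The names, numbers, and the lookup phrase here are placeholder guesses, not the game's actual data:

```inform7
table of friends
friend-name	turns to meet
ace	3
dan	3
brad	4
chet	4

to consider newly findable friends:
	repeat through the table of friends:
		if the turns to meet entry is the turn count:
			say "[friend-name entry] is now findable.";
```

This does mean scanning all 60 rows rather than one of the small split tables, so whether it's a net win depends on whether the per-row work or the per-scan overhead dominates.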


Thanks for the great explanation! It’s useful to me to know I’m not missing anything that could do a lot better, and it’s nice to know the whys even if I can’t dig into the details.

Yes, the columns are the same. The tables are split by design. It may seem a bit ugly, but when I had one table of big side quests, the parser got slow, especially on the web version. There are about 60 rows all told, and if I break them up by distance (3 to 9), that's an average of just under 10 rows to look through.

The speedup was significant when I ran tests through with Zarf’s scripts. I think I saw a 40% reduction in time tests took to run, and command response for the web version got to an acceptable speed.

So, I agree, it looks a bit hacky, but given practical considerations and player comfort, I think it’s worth it in this special case.