I didn’t have the $ trick when I wrote PQ, but that hasn’t stopped me from starting to use it when I’m going back over the PQ code looking for errors and omissions.
A person could probably also write a TADS function to iterate over the source files and look for duplicate tokens in vocabWords fields, eliminating one and pasting the $ before the other…
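For what it’s worth, here’s a rough sketch of one piece of that (hypothetical code, not anything from PQ: it doesn’t parse the source files or know where the $ should go, it just walks the compiled VocabObject instances and flags any token that occurs more than once within a single object’s vocabWords string):
// Hypothetical sketch:  report any token that occurs more than once
// in a single object's vocabWords.  Checks compiled objects instead
// of the source files, and doesn't try to edit anything.
function reportDuplicateVocabTokens() {
    forEachInstance(VocabObject, function(obj) {
        local seen;

        if(obj.vocabWords == nil)
            return;

        seen = new LookupTable();

        // vocabWords separates tokens with spaces, slashes, and
        // asterisks, so split on any run of those.
        obj.vocabWords.split(R'[ /*]+').forEach(function(t) {
            if(t == '')
                return;
            if(seen[t] != nil)
                aioSay('\nduplicate token "<<t>>" in
                    "<<obj.vocabWords>>"\n ');
            seen[t] = true;
        });
    });
}
Run from a debug action or from preinit, something like that would at least produce a list of candidates to go fix by hand.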
Bumping with a little observation that came out of some performance testing I’ve been doing. It’s not quite worth its own thread, but it’s probably worth knowing, and this is as close to a “general TADS3 discussion” thread as we’ve got.
Anyway: if you have a Vector instance, call it foo, and some element of it bar, then foo.removeElementAt(foo.indexOf(bar)) performs better than foo.removeElement(bar), which is not what I would’ve expected. The gap is much wider if bar is an object rather than, say, an integer, but it holds for integers too.
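One aside if you use the indexOf() form outside of a benchmark: removeElement() quietly does nothing when the value isn’t in the Vector (and, if I’m remembering the system manual right, it removes every matching element, which may be part of why it’s slower), while indexOf() returns nil for a missing value and passing nil to removeElementAt() is a runtime error. So a guarded version is safer when the element might be absent:
// Guarded form of the faster idiom.  indexOf() returns nil when the
// value isn't present, and removeElementAt() needs a real index, so
// check before removing.
local idx = foo.indexOf(bar);
if(idx != nil)
    foo.removeElementAt(idx);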
Here’s a little demonstration that creates a 65535-element Vector of sequential integers, creates two copies of it, shuffles the original into random order, and then traverses the shuffled list removing each element in it from one unshuffled copy via removeElementAt() and indexOf(), and then does the same only with removeElement() on the second unshuffled copy:
#charset "us-ascii"
#include <adv3.h>
#include <en_us.h>
#include <date.h>
#include <bignum.h>
class Foo: object
active = nil
isActive() { return(active == true); }
;
versionInfo: GameID;
gameMain: GameMainDef
count = 65535
newGame() {
local i, l0, l1, ref, ts;
// Create a vector of sequential integers.
ref = new Vector(count);
for(i = 0; i < count; i++) ref.append(i);
//for(i = 0; i < count; i++) ref.append(new Foo());
// Make two copies of it.
l0 = new Vector(ref);
l1 = new Vector(ref);
// Shuffle the original vector.
shuffle(ref);
t3RunGC();
ts = getTimestamp();
ref.forEach(function(o) { l0.removeElementAt(l0.indexOf(o)); });
aioSay('\nremoveElementAt()/indexOf() time:
<<toString(getInterval(ts))>>\n ');
t3RunGC();
ts = getTimestamp();
ref.forEach(function(o) { l1.removeElement(o); });
aioSay('\nremoveElement() time:
<<toString(getInterval(ts))>>\n ');
}
getTimestamp() { return(new Date()); }
getInterval(d) { return(((new Date() - d) * 86400).roundToDecimal(3)); }
// Fisher-Yates shuffle.
shuffle(l) {
local i, k, tmp;
for(i = l.length; i >= 1; i--) {
k = rand(l.length) + 1;
tmp = l[i];
l[i] = l[k];
l[k] = tmp;
}
}
;
On my machine that produces:
removeElementAt()/indexOf() time: 6.198
removeElement() time: 8.713
If you comment out the first for() loop (the one that populates the list with sequential integers) and uncomment the second (which fills the list with instances of the Foo class), it becomes:
removeElementAt()/indexOf() time: 8.603
removeElement() time: 18.365
Anyway, just figured this was worth knowing.
This came out of a bunch of refactoring I’ve been doing. Specifically, I’ve been re-working all the rule engine and derived modules because I discovered an assumption I’d made about performance was wrong. I’d been doing a bunch of juggling to keep the number of rules evaluated each turn low, keeping track of which rulebooks were active at any given time and adding or removing them from the rule engine’s lists when they changed state. That turns out to be substantially less performant than just leaving everything in the list(s) and evaluating each rule’s active property/method every turn.
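For anyone curious, the shape of the change was roughly this (a loose sketch with made-up names; DemoRule, demoRuleEngine, and fire() are stand-ins, not the actual classes from the modules):
// Stand-in sketch, not the real rule engine code.
class DemoRule: object
    active = true       // per-rule on/off flag
    fire() {}           // whatever the rule does when it runs
;
demoRuleEngine: object
    ruleList = static new Vector()
    addRule(r) { ruleList.append(r); }

    // Called once per turn.  Every rule stays registered all the
    // time; rules that are currently inactive are just skipped here,
    // instead of being added to and removed from the list as their
    // rulebooks change state.
    evalRules() {
        ruleList.forEach(function(r) {
            if(r.active)
                r.fire();
        });
    }
;
The point being that the per-turn cost is just one property check per rule, instead of doing bookkeeping on the engine’s lists every time a rulebook flips state.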
Here’s another minor scrap of TADS3 that came out of testing.
Often I find myself wanting to know what the upper bound on the size of some T3 data structure is, for example the maximum number of elements in a LookupTable. This is a slightly subtle question, as you can make LookupTable instances with numbers of elements that will not cause problems when created but will cause exceptions when accessed in various ways, for example if you do a table.keysToList().
Anyway, here’s a little code to find an upper bound by wrapping a callback function in a try/catch block and iterating until the maximum value (that doesn’t throw an exception) is found:
function findMaxValue(v0, v1, cb, maxTries?) {
    local i, j, n;

    // Binary search between v0 and v1 for the largest value that
    // doesn't make the callback throw an exception.
    maxTries = ((maxTries != nil) ? maxTries : 65535);     // max tries
    n = 0;                      // current tries

    while(1) {
        if(n >= maxTries)
            return(nil);

        i = (v1 + v0) / 2;      // current guess at value
        if(i == j)              // same guess twice, done
            return(i);
        j = i;                  // remember last guess
        n += 1;                 // increment tries

        try {
            cb(i);              // invoke callback
            if(i > v0)          // update lower bound
                v0 = i;
        }
        catch(Exception e) {
            v1 = i;             // update upper bound
        }
    }
}
The args are:
- v0 and v1: lower and upper integer values to try
- cb: callback function; it will be repeatedly called with integer arguments between v0 and v1
- maxTries: optional maximum number of iterations; default is 65535
Here’s an example of use. It finds the maximum number of keys in a LookupTable that won’t cause an exception when keysToList() is used:
v = findMaxValue(1, 65535, function(n) {
    local i, l, table;

    // Create a lookup table.
    table = new LookupTable();

    // Add n arbitrary elements to it.
    for(i = 0; i < n; i++) table[i] = i;

    // Get the keys as a list.  This has an upper
    // bound of 13106, for some reason.
    l = table.keysToList();

    // Meaningless conditional so the compiler doesn't
    // complain that we assigned a value to l and then
    // didn't do anything with it.
    if(l.length()) {}
});
In this case, v should, after a few seconds, turn out to be 13106.
It’s very curious, because 13106 multiplied by 5 gives 65530 and by 10 gives 131060; that is, the overflow exception seems to happen when reaching the next power of two (2^16 in the first case…), so I think the overflow is caused by overflowing at the end of a 64K two-byte array.
(ugly remnant of the accursed segmented architecture of the 8086?)
Best regards from Italy,
dott. Piergiorgio.