If I remember right from the Technical Manual, when this happens it actually compiles a static array containing [1, 2, 3], then replaces the call with anonymous_array-->random(3). The random function never actually sees those arguments.
Nice one!
I have a lot of sympathy for that, so I want to be clear that the rest of this post is intended to be informational, without any implication that you “should” or “shouldn’t” do something.
Apologies for inviting the “contentious subject”, I really should have seen that coming when I brought up PCG. I don’t have a strong opinion between the variants of PCG, xoroshiro, and xoshiro that are currently recommended by the respective authors. I believe it’s fairly uncontroversial that all of them are at least pretty decent, and much better than most earlier RNGs, all academic feuds about what’s best (or fastest) aside. You might as well roll dice to pick one if you don’t want to bother comparing them. None of these algorithms are so flawless that an expert can’t teach them a party trick that highlights some weakness of some weird variant or usage pattern, but those don’t seem to affect the “take a recommended algorithm, seed it in the recommended way, and then just use it as a single stream of random numbers” use case.
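For concreteness, here’s roughly what one of those recommended generators looks like in C. This is xoshiro128** as I remember the authors’ public-domain reference code, so treat it as a sketch and double-check the constants against the original before relying on it:

#include <stdint.h>

/* xoshiro128** sketch: state is four 32-bit words and must not be all zero. */
static uint32_t s[4];

static inline uint32_t rotl32(uint32_t x, int k) {
    return (x << k) | (x >> (32 - k));
}

/* Returns 32 random bits and advances the state. */
uint32_t xoshiro128ss_next(void) {
    uint32_t result = rotl32(s[1] * 5, 7) * 9;
    uint32_t t = s[1] << 9;
    s[2] ^= s[0];
    s[3] ^= s[1];
    s[1] ^= s[2];
    s[0] ^= s[3];
    s[2] ^= t;
    s[3] = rotl32(s[3], 11);
    return result;
}

The whole step is a handful of shifts, rotates and xors plus two small multiplications, which is a large part of why this family keeps coming up for interpreter use.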
All of Mac, Windows and Linux have some ways of getting high quality random numbers, but they also have heaps of legacy interfaces that one has to dodge because they’re not, in fact, any good by modern standards. Learning about all that baggage and how to avoid it isn’t fun, I’m hardly an expert about it myself, but it’s necessary if you actually want better randomness than what you can get out of the box from the algorithms discussed in this thread.
In particular, POSIX random() as currently used in Glulxe (seeded with time(NULL), and getting full 32 bits instead of just 31 via (random() << 16) ^ random()) fails the statistical test suite PractRand very consistently and very quickly, much more quickly than MT19937 for what it’s worth. At least that’s what I’m seeing on my machine with whatever implementation glibc uses (POSIX doesn’t specify much about it, and while I can find the implementation, there doesn’t seem to be any discussion of how good or bad it is) and a program I slapped together to feed PractRand:
hanna@eirin:~/testingRNG/PractRand-pre0.95$ cat glulxe_random.c
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <stdint.h>

/* Reproduce Glulxe's current usage of POSIX random() and stream the
   resulting 32-bit words to stdout for PractRand. */
int main(void) {
    uint32_t seed = time(NULL);
    fprintf(stderr, "time(NULL) seed = %u\n", seed);
    srandom(seed);
    while (1) {
        /* Combine two 31-bit outputs into one 32-bit word, as Glulxe does. */
        uint32_t w = (random() << 16) ^ random();
        fwrite(&w, sizeof(uint32_t), 1, stdout);
    }
}
hanna@eirin:~/testingRNG/PractRand-pre0.95$ gcc -O2 glulxe_random.c -o glulxe_random && ./glulxe_random | ./RNG_test stdin
time(NULL) seed = 1695919320
RNG_test using PractRand version 0.95
RNG = RNG_stdin, seed = unknown
test set = core, folding = standard(unknown format)
rng=RNG_stdin, seed=unknown
length= 128 megabytes (2^27 bytes), time= 3.3 seconds
Test Name Raw Processed Evaluation
[Low1/32]BRank(12):512(1) R=+238.9 p~= 6.1e-73 FAIL !!!!
...and 198 test result(s) without anomalies
Actually, now that I think about it, I don’t actually know of any seedable RNG provided by Linux (either the OS proper or glibc) that’s considered to be high quality by today’s standards. There are various sources of cryptographically secure randomness (/dev/urandom, arc4random, the getrandom syscall, etc.) but those don’t accept seeds. They are very good for getting an unpredictable seed for another RNG, avoiding the usual issues of seeding with time(NULL) or process IDs or the like, but on their own they can’t be used for the Glulx RNG. And all options I’m aware of for seedable RNGs are C library functions with serious defects due to being stuck with algorithms chosen decades ago.
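To illustrate, here’s a rough sketch (not code from Glulxe or any other terp) of one way to grab an unpredictable 32-bit seed: try the getrandom syscall first, then /dev/urandom, then fall back to time(NULL). The helper name and the fallback order are my own choices:

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#ifdef __linux__
#include <sys/random.h>   /* getrandom(), glibc 2.25+ */
#endif

/* Hypothetical helper: returns an unpredictable seed when the OS can supply
   one, and degrades to the current time otherwise. */
static uint32_t get_unpredictable_seed(void) {
    uint32_t seed;
#ifdef __linux__
    if (getrandom(&seed, sizeof seed, 0) == (ssize_t)sizeof seed)
        return seed;
#endif
    FILE *f = fopen("/dev/urandom", "rb");
    if (f) {
        size_t got = fread(&seed, sizeof seed, 1, f);
        fclose(f);
        if (got == 1)
            return seed;
    }
    return (uint32_t)time(NULL);   /* last resort: at least varies per second */
}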
Ok, good to know. I’ll take a look.
Yeah, I’m seeing that. Not thrilled about it.
That’s actually ok! The way I’ve got the code right now, the native RNG (which we want to be high-quality) will be used non-seeded. (That’s the call which is srandom(time(NULL)) right now, but that’s not optimal for several obvious reasons.)
The built-in RNG – xoshiro128 or whichever – is used with a seed. It’s a 32-bit seed to begin with, so there’s only so much we can do with it anyway.
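One note on that seeding: rather than copying the 32 seed bits straight into the generator’s state, the xoshiro authors suggest running the seed through SplitMix64 to fill the state, so that even numerically close seeds land on unrelated, well-mixed starting states. A sketch of that expansion for a 32-bit Glulx seed (the helper names are mine; the SplitMix64 constants are from the published version, but worth double-checking):

#include <stdint.h>

/* SplitMix64 step (Vigna's published constants), used only to spread a small
   seed over the 128 bits of xoshiro128-style state. */
static uint64_t splitmix64_next(uint64_t *x) {
    uint64_t z = (*x += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

/* Hypothetical helper: derive four 32-bit state words from one 32-bit seed.
   Only 2^32 of the 2^128 possible states are reachable this way, but each
   seed lands on a well-mixed, non-zero state. */
void seed_state_from_u32(uint32_t state[4], uint32_t seed) {
    uint64_t x = seed;
    uint64_t a = splitmix64_next(&x);
    uint64_t b = splitmix64_next(&x);
    state[0] = (uint32_t)a;
    state[1] = (uint32_t)(a >> 32);
    state[2] = (uint32_t)b;
    state[3] = (uint32_t)(b >> 32);
    if ((state[0] | state[1] | state[2] | state[3]) == 0)
        state[1] = 1;   /* paranoia: xoshiro state must never be all zero */
}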
I am really trying to limit the current task to “Update the Glulxe source code, and then maybe clone the changes to Git.” I know that leaves some interpreters behind.
(For what it’s worth, yes, a Glulx function can know how many arguments it has. See the PrintAnyToArray() library function, etc.)
Any RNG used for Glulx @random has to support seeding, even if it starts out in an unpredictable state by default, because @setrandom with non-zero L1 has to make the RNG behave deterministically (as a function of the seed, and of course only for that particular interpreter version). I7 exposes RNG seeding as a phrase for authors to use, and I believe the “Make random outcomes predictable when testing” setting in the apps relies on it.
Unless you’re talking about switching from one RNG implementation to the other dynamically during play, any time the game re-seeds the RNG? Yeah, I see now that’s what you’re doing in the branch you linked earlier.
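Just to restate that scheme in code, here’s a minimal C sketch of the dual-mode idea as I understand it: @setrandom 0 switches back to the unpredictable native generator, while a non-zero argument seeds the deterministic built-in one and switches to it. All names below are hypothetical, and the two stand-in generators are toys included only so the sketch is self-contained:

#include <stdint.h>

/* Toy stand-ins so the sketch compiles on its own; a real terp would call an
   OS source (getrandom, arc4random, BCryptGenRandom, ...) and a proper
   seedable generator (xoshiro128**, MT19937, ...) instead. */
static uint32_t native_random_u32(void) {
    static uint32_t x = 0x12345678u;          /* NOT unpredictable: toy only */
    return x = x * 1664525u + 1013904223u;
}

static uint32_t seeded_state = 1;
static void seeded_init(uint32_t seed) {
    seeded_state = seed ? seed : 1;           /* xorshift state must be nonzero */
}
static uint32_t seeded_next(void) {           /* Marsaglia xorshift32: toy only */
    seeded_state ^= seeded_state << 13;
    seeded_state ^= seeded_state >> 17;
    seeded_state ^= seeded_state << 5;
    return seeded_state;
}

/* Which generator @random currently draws from. */
static int use_seeded = 0;

/* @setrandom L1: zero selects the unpredictable native generator,
   non-zero seeds the deterministic built-in one and selects it. */
void do_setrandom(uint32_t l1) {
    if (l1 == 0) {
        use_seeded = 0;
    } else {
        seeded_init(l1);
        use_seeded = 1;
    }
}

/* @random (with L1 == 0) becomes a full 32-bit draw from the selected source. */
uint32_t do_random_u32(void) {
    return use_seeded ? seeded_next() : native_random_u32();
}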
Continuing the not-fun news, if you want to go this way then I don’t think any of the options mentioned are headache-free for this purpose:
- /dev/urandom is apparently more widely supported than I thought (including on macOS), but it’s still a file, and doing file I/O any time the game wants to flip a coin seems a bit silly. And since it’s a file accessed by a certain path, it may be unavailable or inaccessible (I believe that’s why Linux introduced getrandom later) or subject to other perplexing failure modes.
- getrandom is a Linux-ism and new enough that it’s apparently still good practice to check for its availability and fall back to something else if it’s missing.
- On the other hand, arc4random is not available on Linux (allegedly this will change, but it doesn’t seem to have reached my machine yet).
And that’s just Linux, maybe Mac too. What about other platforms such as “git/glulxe compiled via emscripten running in the browser/Lectrote”? Now I’m tired enough of all those headaches that I want to take a step back to point out: we’re now talking about using cryptographically secure pseudorandom number generators (CSPRNG) for an IF interpreter. Why? I fully understand wanting high quality randomness, but CSPRNGs don’t offer better randomness than their non-cryptographic peers in any sense that’s meaningful for us here.
The objective of CSPRNGs is to resist attempts to influence or recover the internal state and thereby predict past or future outputs. Non-crypto RNGs are insecure under such threat models, but good ones are no worse than CSPRNGs under any other measure of “quality of randomness” or “unpredictability”. And of course, adversarial threat models are comically out of place here: a player trying to break the RNG can just poke the RAM of the interpreter running on their computer to set up an RNG seed of their choice! That’s probably easier than painstakingly figuring out the state of an insecure RNG from in-game observations, but in my opinion the latter is more fun to watch in a speedrun.
CSPRNGs being hopefully indistinguishable from “true randomness” is just a corollary of the security objective, and in fact the randomness can be somewhat compromised without lowering the security to the point where it’s considered broken. Another example is the observation that CSPRNGs based on a block cipher in counter mode (encrypt s, then s+1, then s+2, etc. for some starting point s) will never repeat any output block across their whole period, because a block cipher never maps two different inputs to the same output. Intuitively, that’s not a property we’d like an unpredictable random number generator to have! It’s not a very practical issue, because the block size is at least 128 bits, so you’d need to see at least 2^64 blocks (much more for larger block sizes) for the lack of repetition to start becoming suspicious, and even then it wouldn’t help an attacker predict the next outputs. But it’s the exact sort of thing which, for any non-crypto RNG, would be taken as evidence that the algorithm is busted.
All of that is to say: there isn’t really a choice between dealing with these annoying platform differences vs. having worse randomness if you don’t. You can have your cake and eat it too with a simple, portable implementation of high quality randomness for both reproducible and unpredictable modes. That reduces the problem to getting a good “unpredictable” seed, which is much easier: fancy platform APIs can be used if desired and available, but not bothering and just throwing time(NULL) in there still picks a different high-quality stream of random numbers every second. (It just sucks for the “launched many times by a script” use case.)
Oh, I agree. “High quality” is the goal here. David only brought in a crypto-secure Windows API because Windows has it right there.
Ah, that’s what _vararg_count is for. Never used it for my own functions before.
It’s great to see such quick progress on updating Glulxe and Git!
However, I think this is important enough that I7 should do at least something small to account for the older terps, but you can tune that discussion out if you want.
Someone asked before whether it is only Windows terps that are affected. I’m pretty sure every Git terp is affected. I just looked at Git’s code for the first time, and saw that it doesn’t have any conditional code: it never tries to use an RNG from the OS/libc. It just has the same weak RNG that Glulxe used to have. I expect David is intending to fix this to use the same solution as Glulxe once that’s been decided.
For further testing: a new Windows build of Glulxe. This uses Windows’ RNG if no seed is set (or it is set to 0), and Zarf’s implementation of MT19937 otherwise.
glulxe_new_rng.zip (976.2 KB)
It seems to me there are 4 scenarios here:
- Old story file with old terp → may not work properly, but there’s not much we can do but advise using new terps wherever possible
- Old story file with new terp → this will require a better PRNG to be implemented within the new terp
- New story file with old terp → this will require a new in-house PRNG to be implemented
- New story file with new terp → unless the story has means of knowing whether it is running on a terp with a good PRNG, it will have to assume the worst and still default to an in-house PRNG
Conclusion: we need a new implementation within as many terps as possible for the sake of the existing back-catalogue of Glulx stories, but it may nevertheless be advantageous for all new Glulx stories to use an in-house PRNG to guard against being run on an old terp.
I can see four options for what Inform 7 could do:
- Implement some kind of xorshift-like and use it unconditionally. Good enough for lots of other languages!
- Determine (through a gestalt?) whether the VM’s built-in @random opcode is good enough, and if it isn’t then implement an xorshift-like.
- Do a minimal (perhaps <93) priming of the VM’s RNG, hoping it’s good enough for most casual random uses in Inform. Anyone with serious needs (for random combat perhaps) can implement an xorshift-like themselves.
- Do a maximal (<33K) priming of the VM’s RNG.
In this topic we’ve mostly been considering priming the VM’s RNG in order to increase the number of distinct sequences. But ignoring the question of how many distinct sequences there are, is Git’s current RNG actually good enough for the sequences it produces? Does it have a low-bit bias? If it’s not good enough then we really need to choose option 1 or 2.
If Inform 7 implements its own RNG, then how do we seed it? Given that Git’s current RNG, when it’s not adequately primed, returns a value closely derived from the timestamp, maybe we just directly seed it with glk_current_time.
Anyone know the name of the algorithm implemented in Git/Glulxe? Zarf is about to replace the Glulxe RNG, but here’s a permalink. If we had the name I’m sure it wouldn’t be hard to find an analysis of it to see if it has a low-bit bias etc. It only does one actual subtraction each round… which seems very weak.
Looks like a Lagged Fibonacci generator, specifically one recommended by Knuth. These are better than LCGs, but still weak by modern standards. I think, but am not certain, that it has a low-bit bias, meaning we probably want to avoid using it altogether.
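To make the low-bit concern concrete, here is the general shape of a subtractive lagged Fibonacci generator in C, using the common lags (55, 24); the lags and the direction of the subtraction are illustrative, not a claim about Git’s exact parameters. Because subtraction mod 2^32 only borrows upward, bit 0 of each output depends only on bit 0 of the two earlier outputs, so the least significant bit follows its own two-tap XOR recurrence, which is where the suspicion of low-bit weakness comes from:

#include <stdint.h>

#define LAG_BIG   55
#define LAG_SMALL 24

/* Circular buffer of the last LAG_BIG outputs; a real implementation must
   fill this from a seed (and "prime" it) before drawing numbers. */
static uint32_t lfg_state[LAG_BIG];
static int lfg_pos = 0;

/* x[n] = (x[n-55] - x[n-24]) mod 2^32: one subtraction per output. */
uint32_t lfg_next(void) {
    int i = lfg_pos;                                   /* holds x[n-55] */
    int j = (lfg_pos + LAG_BIG - LAG_SMALL) % LAG_BIG; /* holds x[n-24] */
    uint32_t x = lfg_state[i] - lfg_state[j];
    lfg_state[i] = x;                                  /* x[n] replaces x[n-55] */
    lfg_pos = (lfg_pos + 1) % LAG_BIG;
    return x;
}

A purely linear bottom bit is the sort of structure that binary-rank tests (like the BRank failure shown earlier in the thread) are designed to catch.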
Donning my “terp developer” hat, I don’t love the idea of a time-based seeding method being compiled into games produced with future I7 versions. It would make it significantly harder for anyone but the author to fix the RNG for testing (either debugging game behavior, or testing a terp). Right now, if I want to do that with the released version of a game, I just need the terp to offer a flag for that (example in Glulxe). Then the game makes the same random choices on every run, and if it doesn’t otherwise depend on the current time etc. (which most games don’t) then I get fully deterministic behavior. But if the random choices are hard-wired to depend on the result of glk_current_time, I’d have to modify the Glk implementation (and then rebuild the terp against it) to either remove that function or make it return something deterministic.
On the other hand, if that problem can be avoided, then compiling one RNG into every future game has the benefit of making games behave identically across terps w.r.t. random choices. This can also be considered a downside (what if the compiled-in RNG turns out flawed?) but it’s definitely a plus from the testing perspective, because it would mean you could more easily compare the output of two terps on the same game and same inputs.
I don’t especially like the idea of a gestalt for “our random number generator is not flawed” because, well…nobody intends their random number generator to be flawed. I’m sure if that gestalt had existed when these versions of Git were created, it would have returned true!
Yeah, you’re right that seeding it with a timestamp probably isn’t the best option. @DavidK also said that the Inform 7 test suite needs to be completely predictable. If we seed it with one call to @random, then even if the VM’s RNG is really bad, it can at least be controlled by the --rngseed option.
So I’ve been thinking about what it would take to reimplement the whole of the random system function in I7, and it won’t be hard, even the random(1, 2, 3, 4) form.
However, I’ve discovered that the Glulx system function does not replicate the ZCode behaviour! random as described in the DM4 will seed the RNG if given a negative number, but in Glulx it will instead just pass the negative number to @random (which returns a negative random number). The DM4 doesn’t mention what happens if you call random(0), but in ZCode it will tell the VM to re-seed its RNG randomly, while in Glulx it will generate a 32-bit random number.
This makes it hard to know which behaviour the new implementation should be replicating.
It would be nice if I7 put a wrapper around random in the template files, maybe VM_GetRandom(n). That would take care of any platform-dependent details the same way it already does for seeding the RNG.
That would free us from needing to conform to the old I6 behaviour, but on the other hand we want all randomness to be covered by the new algorithm, even if it’s in old I6 code deep in the templates or in extensions. (The only thing we can’t catch is calls to @random itself.)
The safest option is probably to replicate the current I6 behaviour, but it really needs to be documented somewhere (The Game Author’s Guide to Glulx Inform or the I6 Addendum?) that the Glulx random function is different to what the DM4 describes.
It’s complicated.
The @random opcode of the Z-machine accepts only one input parameter:
@random +ve parameter n → interpreter should provide a random number between 1 and n
@random 0 → interpreter should reseed RNG in as random a way as possible
@random -ve parameter n → interpreter should reseed RNG using n such that it becomes predictable
The I6 inbuilt function random(), called with a single parameter, is expected to behave in the same way, and if I6 is compiled to the Z-machine this is what happens.
The @random opcode of Glulx also accepts only one input parameter, but behaves differently:
@random +ve parameter n → interpreter should provide a random number between 0 and n-1
@random 0 → interpreter should provide a random number in the full 32-bit integer range
@random -ve parameter n → interpreter should provide a random number between n+1 and 0
Glulx has a separate opcode, @setrandom, for seeding the RNG:
@setrandom 0 → interpreter should reseed RNG in as random a way as possible
@setrandom +ve/-ve parameter n → interpreter should reseed RNG using n such that it becomes predictable
This means that for consistency, for a single parameter n, the inbuilt I6 r = random(n) should compile as follows:
Z-machine:
    @random n -> r;
    and return r
Glulx:
    if n is +ve:
        @random n r;
        and return r+1
    if n is 0:
        @setrandom 0;
        and return 0
    if n is -ve:
        @setrandom n;
        and return 0
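To restate that intended behaviour in one place, here is a small C model of the single-argument random(n) on Glulx; glulx_random() and glulx_setrandom() are toy stand-ins for the @random and @setrandom opcodes (following the ranges described above), not real interpreter functions:

#include <stdint.h>
#include <stdlib.h>

/* Toy stand-ins for the opcodes, following the ranges described above. */
static int32_t glulx_random(int32_t l1) {
    uint32_t r = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
    if (l1 > 0)
        return (int32_t)(r % (uint32_t)l1);            /* 0 .. l1-1 */
    if (l1 < 0)
        return -(int32_t)(r % (0u - (uint32_t)l1));    /* l1+1 .. 0 */
    return (int32_t)r;                                 /* full 32-bit range */
}
static void glulx_setrandom(int32_t l1) {
    /* 0 = "reseed as randomly as possible", non-zero = deterministic seed;
       rand()/srand() are used here only so the model compiles. */
    srand(l1 == 0 ? 42u : (unsigned)l1);
}

/* What the I6 built-in r = random(n) should do on Glulx to match the
   Z-machine behaviour described above. */
int32_t i6_random(int32_t n) {
    if (n > 0)
        return glulx_random(n) + 1;    /* uniform on 1 .. n */
    glulx_setrandom(n);                /* n == 0: reseed unpredictably;
                                          n < 0: seed predictably with n */
    return 0;
}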
This is what happens when I6 compiles for the Z-machine, but not when it compiles for Glulx. When I6 compiles for Glulx, r = random(n) compiles:
    if n is 0:
        @random 0 r;
        and return r (i.e. a random number in the full 32-bit range)
    if n is non-zero:
        @random abs(1-n)+1 r;
        and if n is +ve: return r+1; else return n+r
        (meaning that random(-1), instead of seeding the RNG with -1 and returning 0, instead returns a random number between -1 and 1)
        @random r;
        and return r+1;
        (meaning that random(-8), instead of seeding the RNG with -8 and returning 0, instead returns a random number between -6 and 1)
For more than one parameter, the inbuilt I6 random(n, o, p) should first compile a static array of length s, pseudoarray --> [n, o, p], then:
Z-machine:
    @random s -> r;
    and return pseudoarray-->(r-1)
Glulx:
    @random s r;
    and return pseudoarray-->r
This seems to happen consistently for both Z-machine and Glulx compilations.
However, there is an additional complication when using I6 inclusions in I7 from ver 10 onwards, in that r = random(n, o, p) does not survive the I6->Inter->I6 compilation, being reduced (without comment) to r = random(n): consequently random(2, 3, 4), whether compiled for Z-machine or Glulx, produces a random number between 1 and 2 rather than the intended random number between 2 and 4!
It seems to me that on one level the I6 inbuilt random() function should behave the same whether compiling for Z-machine or Glulx.
Or was the random(0) and random(-n) syntax long ago declared obsolete when writing I6 for compilation for Glulx, with the instruction to directly use the Glulx opcodes @setrandom 0 and @setrandom -n rather than random(0) and random(-n) in order to seed the RNG?
This is of course what the I7 compiler does in response to set the random-number generator to n, which doesn’t use random(0) or random(-n) whether compiling for Z-machine or Glulx. For a +ve n this phrase is differentially compiled (according to whether compiling to Z-machine or Glulx) to the function VM_Seed_RNG(n), containing the Z-machine/Glulx opcodes @random -n -> n or @setrandom n accordingly.