# Double-precision math in Glulx

In discussion, Graham brought up the idea of doubles (64-bit floats) in Glulx.

I never considered this idea because, obviously, Glulx values are all 32 bits! And I didn’t think that IF games would do enough math to bother with high-precision values.

But apparently I7 units code does enough floating-point math to see “drift” due to single-precision floats. So it’s worth considering after all.

The only possible way to do this is to encode every double value as two Glulx values. This means, for example, addition would look like:

```
@dadd xhi xlo yhi ylo resultlo resulthi;
```

Each argument is stored as two variables `xhi, xlo` and `yhi, ylo`. The result gets written to two variables `resultlo, resulthi`.

Yes, the output order is backwards from the input order. This is so that you can do arithmetic on the stack:

```
@dadd xhi xlo yhi ylo sp sp;
```

…and the results wind up on the stack in the correct order for the next operation!

For “convenience”, the compiler would provide opcodes (or more likely macros):

```
@dfrommem addr lo hi;
```

These copy a value pair (two variables or stack values) to or from a pair of memory locations, e.g. `array-->2` and `array-->3`.

To generate constants, the I6 compiler would support literals like `$>+1.5` and `$<+1.5`. These would be the high and low words of the double constant 1.5.

I realize this is ridiculous. No human wants to write assembly code this way. But then assembly is supposed to suck for humans. In practice this would all be handled by the I7 compiler, which already has a concept of multi-word objects. I6 doesn’t, so you’re stuck juggling pairs, but a library of wrapper functions could at least hide the stack-fiddling.

On the interpreter side, the only hard part of this is bit-tweaking two 32-bit integers into a native double and back. We already have code to do this for native floats, though, so it’s only a little bit hard.

(But this has to happen on every operation. Glulx will never be a speed demon for floating-point operations.)


Your concept reminds me of SWEET16, the virtual 16-bit processor Steve Wozniak wrote for the Apple II (to do 16-bit computations on an 8-bit computer).

In practice, his implementation ended up being almost never used, since Integer BASIC fell out of use quickly.