This is a question about writing a Glulx interpreter, but I suppose it is relevant for other VMs, too.
Various opcodes have to save or load the VM state (saveundo and so on).
The easy way to implement them is to keep full copies of the main memory, the stack, and the registers, but I find that quite wasteful. I mean, imagine the memory required to do that with a 4 GB game (the theoretical maximum size of a Glulx file).
So is there a more efficient way of saving the VM state? I thought about saving only the diffs between consecutive save states, but that seems a bit more complicated.
This hasn’t been a concern, because the biggest Glulx games are only a few MB. But if you wanted to make an interpreter that was future-proof and able to handle a GB-sized game, couldn’t you just compress the undo states? You could still XOR the state with the original RAM, as you would for a save file, but then gzip the result instead of using the run-length encoding.
I admit I hadn’t thought about compressing the undo states.
But wouldn’t that slow down the game since you’ll have to compress the state each turn? (Or maybe that’s not important on today’s computers.)
I’m not sure I understand this sentence. What’s the “standard” way of saving VM states (i.e. how do interpreters do it right now)?
Sure. Would you prefer to handle the memory uncompressed instead? That’ll be easier.
What’s the “standard” way of saving VM states (i.e. how do interpreters do it right now)?
The usual way is to run the standard Quetzal save code, but store the save file in a memory buffer instead of a disk file. Some interpreters use a simpler form which is faster but not as compact.
Those are basically your choices – optimize for speed or storage space. But, as Dannii said, the game files just aren’t that big by modern standards.
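For concreteness, here’s a rough sketch in Rust of the XOR-plus-run-length idea, in the spirit of Quetzal’s compressed-memory chunk. The encoding below (a zero byte followed by a count byte standing for a run of unchanged bytes) is my own simplification, not the actual Quetzal wire format — read the spec before implementing it for real:

```rust
/// Encode `current` RAM as a diff against the `initial` game image.
/// Unchanged bytes XOR to zero; runs of zeros (up to 256) are stored
/// as [0x00, count - 1]; any other diff byte is stored literally.
fn compress_state(initial: &[u8], current: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    let mut i = 0;
    while i < current.len() {
        // Bytes past the end of the initial image diff against zero
        // (Glulx RAM can be larger than the file's RAM section).
        let orig = |k: usize| initial.get(k).copied().unwrap_or(0);
        if current[i] ^ orig(i) == 0 {
            // Count the run of unchanged bytes (max 256 per record).
            let mut run = 0usize;
            while i + run < current.len()
                && run < 256
                && (current[i + run] ^ orig(i + run)) == 0
            {
                run += 1;
            }
            out.push(0);
            out.push((run - 1) as u8);
            i += run;
        } else {
            out.push(current[i] ^ orig(i));
            i += 1;
        }
    }
    out
}

/// Reverse the encoding, reconstructing a state of length `len`.
fn decompress_state(initial: &[u8], encoded: &[u8], len: usize) -> Vec<u8> {
    let mut out = Vec::with_capacity(len);
    let mut j = 0;
    while j < encoded.len() {
        if encoded[j] == 0 {
            // A run of (count + 1) unchanged bytes: copy the initial image.
            let run = encoded[j + 1] as usize + 1;
            for _ in 0..run {
                let i = out.len();
                out.push(initial.get(i).copied().unwrap_or(0));
            }
            j += 2;
        } else {
            // A changed byte: XOR it back against the initial image.
            let i = out.len();
            out.push(encoded[j] ^ initial.get(i).copied().unwrap_or(0));
            j += 1;
        }
    }
    debug_assert_eq!(out.len(), len);
    out
}
```

Since a typical turn only touches a small fraction of RAM, the encoded diff stays tiny even for a large game, which is exactly why Quetzal does it this way.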
I finally read the Quetzal specification, so I now understand the matter with XORing and the run-length encoding.
Thanks for the answers; I have a much better picture of how it’s done now!
One way that comes to mind that would be efficient (in both memory and speed) would be to simulate paging and Copy-on-Write. Keep track of the base state and handle incremental changes (per turn) as an overlay. Rolling back would be a matter of restoring an old page table.
This would be akin to how the Linux kernel handles memory after forking a child process. Maybe you could even harness OS-level paging to do it, but that would be less portable.
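To make the idea concrete, here’s a minimal sketch in Rust (hypothetical type and method names, not taken from any real interpreter) of a paged memory with copy-on-write snapshots, using reference-counted pages as the page table entries:

```rust
use std::rc::Rc;

const PAGE_SIZE: usize = 4096;

/// Copy-on-write memory: the "page table" is a vector of
/// reference-counted pages. Snapshotting clones the table (cheap:
/// it only bumps refcounts); writing to a shared page copies it first.
#[derive(Clone)]
struct CowMemory {
    pages: Vec<Rc<[u8; PAGE_SIZE]>>,
}

impl CowMemory {
    fn new(num_pages: usize) -> Self {
        let zero: Rc<[u8; PAGE_SIZE]> = Rc::new([0; PAGE_SIZE]);
        CowMemory { pages: vec![zero; num_pages] }
    }

    fn read(&self, addr: usize) -> u8 {
        self.pages[addr / PAGE_SIZE][addr % PAGE_SIZE]
    }

    fn write(&mut self, addr: usize, value: u8) {
        let page = &mut self.pages[addr / PAGE_SIZE];
        // Rc::make_mut duplicates the page only if a snapshot
        // still shares it; otherwise it mutates in place.
        Rc::make_mut(page)[addr % PAGE_SIZE] = value;
    }

    /// Saving for undo is just a clone of the page table.
    fn snapshot(&self) -> CowMemory {
        self.clone()
    }
}
```

Here a snapshot costs one pointer copy per page rather than a full memory copy, and pages only get duplicated when the live state writes to them afterwards; rolling back is just swapping an old CowMemory back in.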
I’m not sure I understood everything, but in any case I’ll keep it simple for the moment and reconsider it when the time for optimisation comes. Thanks anyway!
(By the way, are you the one who made the Glk and Glulxe C bindings for Rust? The interpreter I’m writing is in Rust! I don’t really aim to compete with the current interpreters, and my code is a bit dirty; I just thought it would be a fun project for learning Rust and the inner workings of Glulx, and since I couldn’t find an interpreter written fully in Rust, it seemed like a good idea. But a quick search now shows me that @Dannii is onto something with his IF decompiler and ifvms.js, and what he’s doing will very likely be way better than what I’m doing, but anyway.)