I am basically done with the necessary changes to support 1.1:
@set_true_colour is fully supported
@buffer_screen does nothing and stores 0
@set_text_style supports all combinations of styles, either all at once or from a succession of calls
The remaining hurdle is @set_font; the 1.0 standard implies that it’s optional, though the 1.1 standard appears to assume it is supported.
Is it OK to punt and just store 0 for all @set_font calls? @set_text_style seems to offer a superset of the functionality and the interactions between the two are not rigorously specified in the 1.0 standard.
I think you can get away with that under the specification, but I’d be surprised if no games make use of it. Most Frotz ports respect the call, I think: Windows Frotz uses the fixed width font if either the font has been set by @set_font, or the style has been set, or the header bit has been set.
I guess the @gestalt is super-minimal to encourage its implementation? But without any reason to actually call the @gestalt opcode, no game would ever include it, even if it were widely implemented. And if no game ever includes it, why should an interpreter implement it?
Collect some of the extensions that have been implemented and add selectors for them. The benchmark opcodes from Zoom come to mind. I’m sure there’s been others.
Well there should be a few new selectors quite soon actually. The transcripts collection proposal for example will need one, and I’m planning on adding some for experimenting with Parchment.
It would make sense to add selectors for existing extensions (it would also be good to have a list of them in one place!) I’ll go looking for the Zoom ones you mentioned, but I’m unaware of any others. If you know of more, please do share them.
Hmm, ok, so it looks like Zoom adds EXT:128-132. It also has @sound_data, an opcode removed from the final 1.1 spec, though it looks like a nop. I’ll change the range referred to in the private use area so that there’s no possibility of conflicts. And if we assign selectors for the Zoom opcodes then other interpreters could add them too, which might be good.
More fundamentally, are you planning on assigning ranges of opcodes to have a particular meaning if some given selectors return a non-zero value? It seems to me that pretty soon you’ll run out of spare opcodes that way. Glulx handles this rather neatly by isolating the VM layer from the I/O layer with the use of a single @glk opcode. Any such Z-machine system should probably be very cautious about assigning away more than a few of the remaining opcodes.
I also have to disagree with nitfol’s suggestion of including any other past Z-machine extensions that a trawl of interpreters drags up. If it’s to be of real use, rather than historical interest, a specification has to contain stuff that people will use, and the fact that no-one has used such past attempts suggests to me that the demand really isn’t there for them. The specification shouldn’t be a dumping ground for past ideas - if the 1.1 spec business has any lesson for us, it ought to be that specs based around hopeful guesses about what will be useful don’t get much support.
I still have my doubts about whether any of this will be useful. If it is to be useful, though, the only way I can see this working is if you use Parchment to prototype ideas, and once you’ve got something that people are actually using in games, then codify it in a spec. Doing the spec first won’t work.
Agreed. A gestalt system is handy for keeping track of extensions, but where are the extensions? And where will they be specified, if not in a future revision of the Z-Machine Standards Document for which we’d increment the standard number anyway? For example, Glulx’s gestalt codes are listed in the Glulx spec and used to test for the presence of optional features which are also described in the spec. If you have Z-machine extensions in mind, like the transcript submission feature, why not wait until you can put those in the spec too?
One extension which has proved useful on Glulx, and may also be useful on Z-code, is veneer acceleration. Z has opcodes for low-level object manipulation, but Inform games have still become increasingly reliant on veneer routines for property access and bounds checking.
Thanks, will change that. For the 1.0 spec, should I reference the Inform 6 site, Graham's site, or the archive?
I would expect those requesting a selector and opcode both to have a solid implementation planned and to be economical in their opcode use. I, for example, plan to use a single opcode for all of my future Parchment experiments. And if things get into very dire straits, then we could start specifying sub-opcodes: set one unused opcode to EXT_LONG, have its first operand be the sub-opcode, and the remaining operands be the arguments for that. We’d then have another 2^16 extra opcodes if needed.
It’s important to fully document what has actually been done. You can see this in the new HTML5 spec, which documents the previously unspecified error handling that browsers perform. Similarly, I’d want to document all of the unofficial extensions that people have made, even if they’re no longer of any real use. It matters because, if some new extension becomes highly popular, we don’t want Zoom excluded from supporting it just because it already has a custom opcode at that value. Instead we document what Zoom has, and ensure that any new extensions use unique opcodes. Just like the original 1.0 standard, really… there’s a whole lot of useless stuff there, some of it only relevant to one game or one interpreter, but it’s still been documented.
They can be specified wherever, and I’ll refer to them in the registry (which I’ll be keeping up to date), just like how FyreVM is referenced in the Glulx spec. There isn’t really any need for Zarf to keep incrementing the Glulx version. I guess the version number is more for interpreters than for the code itself, as it means an interpreter can go “there’s no way I can support enough of what this game needs, so I just won’t try.” With the gestalt system anything could have a workaround, but a too-high version number means writing workarounds would be too much work.
Agreed, although I’m also interested in trying pattern matching.
I agree, in principle – I could just use gestalt from here on out. I’ve been bumping the minor version number (which doesn’t affect any terp or game behavior), and I plan to keep doing that. Debugging is the only real reason.
If there is going to be a 1.2 release, is there interest in correcting/clarifying sticking points from 1.1? In implementing my interpreter I’ve found a couple of issues that I think would be good to fix in the standard (including one clear defect); I imagine other interpreter authors have run into issues as well.
Well, I think multiple attempts at updating the standard would be more of a problem than letting a few issues slide, so I don’t figure there’s much point, especially since the issues I’ve run across aren’t exactly earth-shattering.
The defect was that output streams 3 and 4 are listed as V5+ in the 1.1 standard (1.0 is self-contradictory on this point), but many Infocom stories that are V3 and V4 made use of stream 3 and/or 4. This doesn’t affect interpreters because they (at least the ones whose source I’ve looked at) don’t bother doing the version check. The only reason I found out about this defect is because I did do a version check, and Trinity failed almost immediately.
This might be jumping the gun a bit, commenting on this in my first post, but I have something I want to bring up. I’m currently about 80% done implementing a Z-machine, which is about the extent of my background.
Overall the idea seems pretty sound, but I think that using a number to identify the different extensions to the interpreter has the potential for trouble.
One thing I’ve seen elsewhere is that this sort of scheme can cause problems very quickly, especially if it catches on. Not only do you need to administer the ‘official’ selector codes, you have to worry about unofficial ones. While the spec does allow for a range of private codes, any private extension is as likely as not to use $F000 instead of picking a random number. Collisions in this space can and will cause problems.
So, if I might make a different suggestion: why not try using a string? It’s not much harder to handle than numbers; it could either follow the same embedding rules as @print and @print_ret, with the optional argument before the embedded string, or be a paddr to the string. The upshot is that you can use something like MIME types or URIs to identify the extensions, which can be descriptive and lower the chance of collisions (I would also suggest lower-case string comparisons).
For example you could have things like:
Standard Version: “Standard-Version” “uri://curiousdannii.github.com/if/zspec12.html”
Transcript: “Transcript-Protocol” “uri://curiousdannii.github.com/if/transcripts.html”
Zoom: “Zoom-Profiling” “uri://www.logicalshift.co.uk/unix/zoom/Profiling”
Zoom: “Zoom-StackDump” “uri://www.logicalshift.co.uk/unix/zoom/StackDump”
Private: “X-Whatever” “url://whatever.com/”
I have some other thoughts on the interpreter number, but as that’s not part of your proposal I’ll wait on that.