I made a .z V1-V8 to JavaScript compiler. The .js can be run via Node.js on the Linux command line or in a web browser. Load/save/crashdumps supported. Testers needed. See awohl dot com (I can't post links).
Welcome and thanks for your contributions!
My first attempts resulted in the following in Firefox 145.0.2 (64-bit) on Windows 11:
- lostpig.z8: files were generated. Opening the .html file shows the player, but no text appears and it is stuck on "loading".
- moonmist.z3 (r13): files were generated. Opening the .html file shows the player and the opening text, but then it gets stuck in some kind of loop, as if it were receiving empty input: I get the "I beg your pardon?" prompt repeatedly. I can sneak my own command in during a pause in the loop, but it takes quite a while to respond.
- moonmist.z3 (r13): using the -o option generated nothing.
edit: after moonmist accepts an input, further inputs are at first misinterpreted. For example, at the start I type "examine dragon" and it says, "What do you want to examine?" and then a few moments later I get the dragon description.
EDIT: I’m not going to retract this post, but I will definitely mea culpa for not paying closer attention to what’s going on. I just jumped in and tried it, and didn’t spend any time looking to see how this is built nor what level of attention was being made to its accuracy. Vibe-coding something and asking other people to try it out to see if it works is not a development strategy.
Why are external testers needed when the code doesn't pass the project's own internal tests (CURRENT_STATE.md)?
FIXED_AND_WORKING.md also mentions that “Games crash after ~12 instructions instead of running to completion” because return opcodes are not executed correctly. Despite this, Claude cheerily describes the project as “mostly working”.
Thanks for the bug report. I fixed some bugs, and the games are more playable. They are available at awohl dot com /moonmist and /lostpig
All the .z work does suffer from a lack of automated testing. I am used to building compilers for more conventional languages and having test suites. I am currently working on test automation that will play an initial 100 games, across all .z versions, from start to finish. I will post back when that is done with a list of known-working games.
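For what it's worth, one common shape for that kind of playthrough automation is to feed a scripted walkthrough to the interpreter and diff the transcript against known-good output. Here is a minimal sketch of the comparison step, in the same JavaScript the compiler targets; all names here (`normalize`, `compareTranscripts`) are illustrative, not part of the project:

```javascript
// Sketch: compare an interpreter's transcript against a known-good one.
// Whitespace is normalized so cosmetic line-wrapping differences don't
// count as failures; any textual difference reports the first bad line.

function normalize(text) {
  return text
    .split('\n')
    .map((line) => line.replace(/\s+/g, ' ').trim())
    .filter((line) => line.length > 0);
}

function compareTranscripts(actual, expected) {
  const a = normalize(actual);
  const e = normalize(expected);
  const n = Math.max(a.length, e.length);
  for (let i = 0; i < n; i++) {
    if (a[i] !== e[i]) {
      // Report the first divergence (1-based line in the normalized text).
      return { ok: false, line: i + 1, got: a[i] ?? '<end>', want: e[i] ?? '<end>' };
    }
  }
  return { ok: true };
}

// Example: extra internal spaces are ignored, so this passes.
const result = compareTranscripts(
  'West of House\nYou are standing in an open field.',
  'West of House\nYou are standing  in an open field.'
);
console.log(result.ok); // true
```

A real harness would wrap this around a scripted run (commands piped to the game, output captured), with random-number seeding pinned so the transcript is deterministic.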
In the meantime, if you want to test, I am happy to fix any bugs. Or wait a few days for a list of known-good games. If you know of any .z files that have challenged other interpreters, let me know and I will be sure to test with them.
I did a round of cleanup on the docs so they are in sync with the code. I will post a list of games played from start to finish when I get to 100 games.
I think the impression you are giving at the moment is that you are not actually looking at the output of your work at all. You have mentioned wanting to construct automated tests, but surely you would take at least a brief look with your own eyes before sharing? I tried playing the version of Lost Pig you posted, and immediately noticed that the location name appears in the prompt instead of in the status line, that the inventory description is littered with strangely-placed brackets, and that typing HELP causes the game to enter an infinite loop of output. Why are you asking people to test for you when there are so many obvious bugs that you could find for yourself in seconds?
I’m sorry to say that this project is a LONG distance from asking people to test it for you. I jumped in blindly and did a test, assuming you had done your own due diligence. That’s on me. You really need to do a lot more research about z-machine interpreters, how they’ve been developed over the years, how they’ve been tested, and so on.
You wrote, "If you know any .z files that have challenged other interpreters". The answers you're looking for exist even within this very forum, but it doesn't seem like you've bothered to look for them. Do the words "praxix", "etude" or "strictz" have any meaning for you? If not, then you have a lot of studying to do before asking others to look at your project.
I don’t say all of this to discourage you, per se, but rather that you don’t seem clear in your own mind what precisely you’re building. If you can’t describe that to yourself, then I’m not sure you could describe it with any accuracy to an AI and hope to get a useful result.
This is effectively a z-machine interpreter, but I’m afraid you’ll find that decoding text/opcodes, setting up memory and the stack and dealing with the object tree are the EASY part.
There are a huge number of subtleties to z-machine behavior that have to be dealt with that only in-depth knowledge of the standard and a decent amount of trial and error will get right. Comparing output to known good text will only get you partway there, as eventually you need to support colors, fonts, text styles, and separate output windows with unique behaviors. Don’t even think about Z6 support until you’ve nailed all the other versions because it is a unique beast.
As someone who's written several z-machine interpreters over the past couple of decades and who has experimented with (and mostly discarded) AI-assisted coding, I can confidently say that what Claude will build for you is a steaming pile of hot garbage. No offense - I'm just saying relying on your own human skill will produce far better results. AI is OK (sometimes) at taking a spec and producing a scaffold that you can flesh out, but it is simply terrible at handling the fine details and exacting behavior found in abundance in the Z-machine standard. The more you try to direct it with precise prompts, the worse it will mangle the code, until you are left with a soup that ALMOST does what you need but is impossible to fix.
The README says:
The compiler has been tested with:
- Zork I-III
- Planetfall
- Enchanter
- Mini-Zork
- Most Inform 6/7 compiled games
There are hundreds of Inform 6/7 compiled games, and very few interpreters can credibly claim to have been tested with a majority of them.
Could you be more specific about which games you’ve tested with, i.e., which games you’ve personally witnessed play to completion with no noticeable problems?
It looks like none at all: according to the documentation, a test is considered successful if it compiles without errors, without actually looking at the output. No actual testing (as in, making sure the result actually works) has been done.