This is a Glk library designed to be compiled with Emscripten, and as a demo I’ve used it to port Git. It presents the same API as Quixe/ZVM, and then calls GlkOte’s glkapi.js. Performance is adequate and can be improved with a less minimalistic port.
Porting other terps should hopefully be fairly straightforward, and will provide a simple way to add additional interpreters to Lectrote (and in the future, iplayif.com).
No demo page yet, sorry; I’ve just been testing it in the console. If anyone happens to have Emscripten installed already, then hopefully the Makefile will just work.
Git doesn’t have a real JIT (producing runnable machine code); I don’t think that would be possible under Emscripten. Instead, Git produces an IR which it can then interpret faster, because the cost of parsing the operands is paid ahead of time.
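A toy illustration of that decode-once idea (nothing like Git’s actual code; the two-operand bytecode here is entirely hypothetical):

```javascript
// Toy sketch: decode a bytecode stream once into an IR whose operands are
// already parsed, then run the IR repeatedly without re-parsing anything.
function decode(bytecode) {
  const ir = [];
  let pc = 0;
  while (pc < bytecode.length) {
    const op = bytecode[pc++];
    const a = bytecode[pc++], b = bytecode[pc++]; // operand parsing paid once
    ir.push({ op, a, b });
  }
  return ir;
}

function run(ir, regs) {
  for (const { op, a, b } of ir) {
    if (op === 0) regs[b] = regs[a];        // copy
    else if (op === 1) regs[b] += regs[a];  // add
  }
  return regs;
}
```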
I haven’t turned on any of Git’s optional settings. USE_MMAP would almost certainly work. (Edit: actually, it would probably be useless, because I’m not using the Unix startup code.) I’m not sure about USE_DIRECT_THREADING or USE_INLINE.
If we wanted to port Git properly, there would be more steps to take. (I don’t think there’s much point, because Quixe is faster, although Git might pull ahead if we did what I’m about to describe.) The C Glk API expects functions like glk_select to be synchronous, which isn’t possible when it’s talking to GlkOte. To get around this I have used the Emterpreter: for a limited set of functions (8 in Git’s case) it doesn’t compile them to asm.js, but instead produces its own IR and interprets that, which allows it to pause in the middle of a synchronous function. Every function on the call stack down to glk_select must be interpreted this way. That’s not many functions in Git’s case, but it does include startProgram, the central code loop. If most of that function could be split out into another function (everything except the part that calls Glk functions), it would run much faster. But that’s the most complicated code in Git, so I didn’t try. It would be easy to do for gidispatch_call, though…
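For reference, the Emterpreter whitelist is specified at link time. A hedged sketch of the flags involved (these options only exist in Emterpreter-era Emscripten, and the whitelisted names below are placeholders, not Emglken’s actual list):

```shell
# Sketch only: compile most code to asm.js, but interpret the whitelisted
# functions so execution can pause inside a synchronous glk_select call.
emcc git.bc -o git.js \
  -s EMTERPRETIFY=1 \
  -s EMTERPRETIFY_ASYNC=1 \
  -s "EMTERPRETIFY_WHITELIST=['_startProgram','_glk_select']"
```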
Too lazy to research it again myself: does Emscripten now support manual stack management? I wanted to create a JS TADS 3 terp in the past, but Emscripten couldn’t deal with custom stack manipulation (longjmp/setjmp).
It uses an Emscriptened CheapGlk instead of RemGlk, so it has only the basic Glk features. The frontend is the same one HugoJS uses, which also has an Emscripten engine.
I’ve done some rudimentary performance testing, and the Emscripten Git is surprisingly fast: on Chrome it’s about as fast as Quixe, and on Firefox, which supports asm.js natively, it’s 2–3 times faster. I haven’t measured Safari or the Microsoft browsers, but Safari feels a bit sluggish, and Edge should also be faster than Quixe because it too has built-in asm.js support.
I wonder if the good performance you’re seeing is down to using Asyncify rather than EmterpreterAsync. I didn’t go with Asyncify because it’s deprecated, but if it works for you then there’s probably no reason it couldn’t also work in Emglken.
How easy or natural would it be to integrate it with Text-to-Speech?
I remember there was a demo of espeak compiled with Emscripten that worked, at least in Firefox, back in those days. Even better if there were a choice: use a Google web service or use espeak.
The interpreter cores this will produce will have to be embedded in a larger interpreter framework like Parchment or Lectrote, and that’s where accessibility features will be implemented. But manually calling TTS shouldn’t be necessary, given how well screen readers interact with web browsers.
For Emscripten flags, I recommend: -O3 --closure 1. This slows compilation down a lot, but it reduces code size a lot, which is important.
If you don’t want to exit the runtime, I recommend: -s NO_EXIT_RUNTIME=1
-s ELIMINATE_DUPLICATE_FUNCTIONS=1 produces almost no benefit, and in some cases involving functions that return doubles/floats, it can produce incorrect code. This has been fixed in the very latest incoming branch. If you’re using 1.35, you probably won’t see this fix for the next year, so you should move to incoming through the SDK.
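Put together, the recommendations above amount to a link command along these lines (a sketch; the input and output file names are illustrative):

```shell
# -O3 --closure 1: slower builds, much smaller output.
# NO_EXIT_RUNTIME=1: keep the runtime alive after main() returns.
emcc terp.bc -o terp.js -O3 --closure 1 -s NO_EXIT_RUNTIME=1
# Avoid -s ELIMINATE_DUPLICATE_FUNCTIONS=1 on older releases: little gain,
# and it could miscompile functions returning floats/doubles.
```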
I’ve gotten TTS working perfectly with Firefox and NVDA. That’s a very standard combo. You need to add this to the container that contains the text to be read:
Then modify the text inside the container whenever you want new content to be read. This works without problems. Keep the scrollback outside the container; don’t leave it inside.
Other solutions like role="main" and focus() work very poorly.
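A hedged JavaScript sketch of the live-region technique described above, assuming the standard aria-live attribute ("polite" announces when the reader is idle; "assertive" interrupts):

```javascript
// Mark the story window as a live region so screen readers (e.g. NVDA)
// announce newly added text automatically. Attribute values are the usual
// choices for this technique, not necessarily the poster's exact setup.
function makeLiveRegion(container) {
  container.setAttribute('aria-live', 'polite');  // announce when idle
  container.setAttribute('aria-atomic', 'false'); // read only the new nodes
}

// Append new story text; the screen reader announces the new paragraph.
function printToStory(container, doc, text) {
  const p = doc.createElement('p');
  p.textContent = text;
  container.appendChild(p);
}
```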
Hmm… As a sighted person, I’m not very experienced with screen readers, but I still would like IF to talk to me sometimes.
As I understand it, Orca is the only option for Linux, and I totally couldn’t use it (maybe because I’m on MATE and not GNOME…), as neither of the “Orca modifier” keys worked.
Not only that: from the docs it’s clear that software like QTads (where you cannot move the caret over the text you want read) wouldn’t work, while a browser would require a lot of keystrokes. Maybe the latter isn’t the case with NVDA or JAWS.
However, Live Regions could potentially be useful for Lectrote/Parchment + Orca: help.gnome.org/users/orca/stabl … ns.html.en
I can imagine at least one scenario where calling TTS from inside the game would be meaningful: two characters, male and female, or British and American, talking to you.
In Lectrote, better-than-espeak TTS is possible either by enabling the Speech API and putting keys into environment variables (at least in theory; I did that with my Chromium) or by using a library like responsivevoice.org/api/
The same goes for Parchment, the only difference being that it can be run in another browser, where TTS support may be easier/better or harder/worse.
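The two-voices scenario above maps fairly directly onto the browser’s built-in Web Speech API. A minimal sketch (voice availability varies by browser and OS, and the language tags here are illustrative):

```javascript
// Pick a voice matching a BCP 47 language tag, falling back to the first
// available voice (or null if the platform offers none).
function pickVoice(voices, lang) {
  return voices.find((v) => v.lang === lang) || voices[0] || null;
}

// In a browser you would then do something like:
//   const utt = new SpeechSynthesisUtterance('Good evening.');
//   utt.voice = pickVoice(speechSynthesis.getVoices(), 'en-GB'); // British character
//   speechSynthesis.speak(utt);
//   // ...and 'en-US' for the American character.
```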
Finally, a quick check with PyWebkitGtk, another possible meta-interpreter platform, shows that it can support TTS via the ResponsiveVoice library, but only in fallback mode. I’m not sure what that means; maybe it requires more network traffic. On the other hand, a Python program can probably access the Microsoft or Apple TTS platforms via some modules even if WebKit cannot. (I’m making two conjectures in that last sentence, both of which might prove wrong.)
P.S. One more use case: one might want to develop a story with above-average TTS using SSML: w3.org/TR/speech-synthesis/
Glk probably has something like this already: one channel for pure text, another for markup?
It’s taken a while, but I’ve published version 0.3.0 of Emglken. Release notes:
Now uses the Docker image for Emscripten, as it’s much simpler to install and update Emscripten that way than with the emsdk
Updated the Emscripten version, and now compiles to WebAssembly
Switch to CMake
Compiles against RemGlk: (almost) the entire Glk API is now handled in C code (with just a few exceptions, like some Unicode functions), with JSON transferred to and from GlkOte. This does mean that support for stylehints and the Gargoyle text formatting functions has been dropped, though hopefully they will return soon!
Now uses unmodified interpreter submodules - the latest code of each is used directly
Switched to the 0branch/hugo-unix repository for Hugo as it is now the de facto home of Hugo.
For now, Glulxe is not being built with its profiler mode while I figure out the best way to initiate it
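To illustrate the RemGlk-to-GlkOte hand-off mentioned in the notes above, the JSON traffic looks roughly like this (a simplified, illustrative sketch; the real messages carry more fields, such as window layout and styles):

```javascript
// Illustrative sketch of the GlkOte JSON protocol (simplified).
// An input event sent from GlkOte to the interpreter:
const inputEvent = {
  type: 'line',      // the player entered a line of input
  gen: 3,            // generation number, echoed from the last update
  window: 1,         // which Glk window the input came from
  value: 'go north',
};

// An update sent back from the interpreter to GlkOte:
const update = {
  type: 'update',
  gen: 4,
  content: [
    { id: 1, text: [{ content: ['normal', 'You head north.'] }] },
  ],
  input: [{ id: 1, gen: 4, type: 'line', maxlen: 256 }],
};
```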
I don’t expect many people will use it, but if you happen to be on a computer with a recent version of Node.js and need to quickly install an interpreter, the emglken npm package will let you run Glulx, Hugo, and TADS games in a one-window console mode.
When you started this project a few years back, I thought (assumed?) your goal was to get C-based interpreters running in the browser. Is that the intended use case for this? There’s no web-based demo (yet), right?
FYI, it crashes right away with an OOM error when running Counterfeit Monkey Release 9, though it doesn’t crash on other, even larger, gblorbs like Anchorhead 2018.
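Purely a guess at the cause, not a confirmed diagnosis: an immediate OOM can happen when the Emscripten heap is a fixed size that’s too small for the game. If so, the link-time flags to experiment with would be:

```shell
# Speculative workaround, not a confirmed fix:
emcc ... -s ALLOW_MEMORY_GROWTH=1   # let the heap grow on demand
# or raise the fixed heap size (in bytes):
emcc ... -s TOTAL_MEMORY=134217728
```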