More Glk progress

I’ve written out the spec changes for the Glk 0.7.1 update, which will cover simple stuff:

  • glk.h header should use 32-bit typedefs for glui32 and glsi32. (Include stdint.h; see the sketch just below this list.)
  • A way to set line-input terminator keys, so the game can react to function keys and so on during line input. (This function is based on vaporware’s proposal from last year.)
  • A way to set whether line input is automatically echoed to the window, when input is completed or cancelled. (Currently it always is, but in certain cases you want to suppress that.)
  • The always-popular “draw a border between these windows” flag.
  • Calls to decompose and renormalize a bufferful of Unicode text.
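
For the first item, a minimal sketch of what the header change might look like, assuming stdint.h is available on the platform (the real glk.h may wrap this in platform checks):

    #include <stdint.h>

    /* Exact-width integer types for the Glk API. */
    typedef uint32_t glui32;   /* unsigned 32-bit */
    typedef int32_t  glsi32;   /* signed 32-bit */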

The Glk-Spec document is in my github repository, but you can see the changed sections on this page:

github.com/erkyrath/glk-dev/wik … ec-changes

I’ve implemented the Unicode decompose/normalize feature, and the 32-bit change for the header. Everything else is just spec so far. Next week I’ll start implementing them in GlkTerm and Quixe, and also writing the ever-popular unit tests. Once all that’s done, I’ll declare 0.7.1 final, and start thinking about the next round of changes.

Feel free to comment. However, I’m going to be wrapped up in the MIT Mystery Hunt through this weekend. I’ll be online, but I will only respond briefly if at all.

Some simple but important and good changes.

Looking forward to the profiling/timer APIs :wink:

Profiling isn’t an API, it’s an interpreter feature. I’ve already checked in those changes.

Time API – as in, “what is the current date and time” – will probably be next after this stuff. (Because it’s easy.)

Sorry, by profiling I meant a high-resolution CPU timer API, so that we can create programs to profile the terps. Creating terps that can profile the works they run is only one part of the story.

Just to make sure I understand how this works - the winmethod_Border/NoBorder flag would be a property of the split, that is, of the pair window, in the same way as the other winmethod flags, right?

So for example, if a window’s sibling is deleted, and therefore its original parent pair window goes away, then whether it gets a border with its new sibling depends on what its new parent pair window says.

And, I suppose, the game can turn a border on and off using glk_window_set_arrangement on a pair window?

I had not planned to have a high-resolution timer, nor one tied to the CPU (only to the general time-of-day-and-date facility). I am not opposed to this, but I’m not sure why you’d want to profile an interpreter from game code. Surely that’s upside down. The game would be looking at the CPU through so many layers of code that the values would be more cruft than result. If you need to profile an interpreter, you’ll be better off using the platform (or language) native profiling tools.

Right.

I went back and forth on this, but yes, I think that’s supported. The implementation work it requires is already needed for any opening or rearranging of windows.
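
A minimal sketch of that pattern in C, assuming the winmethod_Border/winmethod_NoBorder flags from the 0.7.1 draft (the constant names could still change); everything else is the existing window API:

    #include "glk.h"

    static winid_t mainwin, statuswin;

    void setup_windows(void)
    {
        /* Open a three-line status window above the main window,
           asking for a border on the split. */
        mainwin = glk_window_open(NULL, 0, 0, wintype_TextBuffer, 0);
        statuswin = glk_window_open(mainwin,
            winmethod_Above | winmethod_Fixed | winmethod_Border,
            3, wintype_TextGrid, 0);
    }

    void hide_status_border(void)
    {
        /* The border flag is a property of the pair window created by
           the split, so fetch the parent and change its arrangement. */
        winid_t pairwin = glk_window_get_parent(statuswin);
        glk_window_set_arrangement(pairwin,
            winmethod_Above | winmethod_Fixed | winmethod_NoBorder,
            3, statuswin);
    }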

We had discussed having at least a millisecond timer before, and you weren’t opposed to the idea…

Of course I wouldn’t be trying to profile the CPU or anything at that level, just the interpreter. Having two levels of profiling is actually quite common. Javascript frameworks like to compete against each other, and that level could be compared to profiling different versions of I7, or indeed comparing I7 to I6 and ZIL. But there are also profiling suites used to compare the Javascript engines themselves, which would be like comparing different versions of a terp, or several terps. The advantage of doing it from game code is that it’s much simpler: any compliant interpreter could be tested (and adding timers is a lot easier than adding profiling code!), and the results will be easy to compare, instead of having to figure out how each platform and language can be profiled, if they even can be. Lastly, language-level profiling systems generally disrupt the running of programs, so their results are not the same as they would be if the profiling weren’t happening. By profiling from bytecode, the terps run as they always do.

Obviously some things wouldn’t be very sensible to profile; I’m not going to want to know how long it takes to make five writes to memory. I was thinking of bigger things, like pathfinding, setting up large data structures and then using them (probably conversation ones), and other long algorithms. From such tests we could, for example, check whether storing multiple copies of the memory, so that all 32-bit values take the same time to read but require four writes, actually ends up being a win. Sure, you could write some custom code to test it, but I’d prefer a profiling game: fill it with natural real-world examples that touch on all areas of the VM and it will be useful for testing any potential optimisation.

I’m not opposed to it. As long as you’re not expecting nanoseconds.

This is generally held to be an advantage of language-level profiling, in that it tells you what is taking time in the system, as opposed to merely how much clock time is passing, which can involve all sorts of unrelated stuff that you can’t distinguish. Clock time can be measured using outside tools anyway. (“time” on Unix, etc. Or a stopwatch.)

None of this is an argument against having a sub-second gettimeofday-like API. I’m just not convinced it’s a useful approach.

No, really just milliseconds. That should be achievable in most languages.

I guess profiling isn’t the best word for what I’m planning; it’s more like interpreter performance testing, of the holistic sort, with the aim of encouraging a bit of friendly competition among terp writers.

Okay, I’ve posted a draft of the clock API:

github.com/erkyrath/glk-dev/wik … ec-changes

This should be easy to implement on top of the C and Javascript standard date facilities. My only “crazy uncle zarf” touch this time is the simple_time functions, which let you get a Unix time divided by any factor (to work around the 2038ocalypse). I think these will be easier to work with in certain cases. (“How long has the user been playing this game, in minutes, and please don’t make me do 64-bit subtraction?”) If you want the full time_t resolution, it’s available too.
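
For that minutes-played case, a minimal sketch, assuming the draft exposes a call along the lines of glsi32 glk_current_simple_time(glui32 factor), returning the Unix time divided by factor:

    #include "glk.h"

    static glsi32 start_minutes;

    void note_session_start(void)
    {
        /* Factor 60 gives the Unix time in whole minutes,
           which fits comfortably in 32 bits. */
        start_minutes = glk_current_simple_time(60);
    }

    glsi32 minutes_played(void)
    {
        /* Plain 32-bit subtraction; no 64-bit arithmetic needed. */
        return glk_current_simple_time(60) - start_minutes;
    }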

I left out date parsing and formatting, more or less for the same reasons that I left float parsing and formatting out of the Glulx spec. Too ugly to wire in at this level. Let game/extension code handle it.

As before, I am posting this in advance of releasing the spec or code, so feel free to comment…

Excellent. To use those structs we make word-arrays of the right length and pass the address? Easy!

One thing that might be useful would be day-of-year in the date tuple.

Day-of-year is in the C struct tm, but I don’t see how to get it in Javascript. (I guess I could do the leap-year computation… or not.)

If we think such a thing is likely to be useful (and I think it might be), then I think it would be better to do the leap-year calculations behind the scenes than to make people do them in their own code. Especially if the calculations wouldn’t be needed on every platform.

What is it useful for?

Any situation in which dates could be compared. But then… it might be easier and simpler to just work with day-scaled timestamps. Never mind that then.

There are no conversions between time and simpletime… is it worth having distinct functions, or should people just go via dates?

If you are comparing dates, you’re probably comparing (year, month, day) in the date structure. Having a day-of-year field only simplifies that slightly. (Julian dates are better all around for that, and that’s not exactly the same as simpletime with a 86400 factor. But it doesn’t seem worth adding Julian dates just for that.)

I can’t see a project using both time and simpletime. Maybe it would happen with different extensions?
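
If a project did need both, the go-via-dates route might look like this sketch (the conversion function names are taken from the draft and could still change):

    #include "glk.h"

    /* Hypothetical helper: reduce a full timestamp to a simple time
       by round-tripping through the date structure. */
    glsi32 timeval_to_simple(glktimeval_t *tv, glui32 factor)
    {
        glkdate_t date;
        glk_time_to_date_utc(tv, &date);
        return glk_date_to_simple_time_utc(&date, factor);
    }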

    typedef struct glkdate_struct {
        glsi32 year;     /* full (four-digit) year */
        glsi32 month;    /* 0-11, 0 is January */
        glsi32 day;      /* 1-31 */
        glsi32 weekday;  /* 0-6, 0 is Sunday */
        glsi32 hour;     /* 0-23 */
        glsi32 minute;   /* 0-59 */
        glsi32 second;   /* 0-60 */
        glsi32 microsec; /* 0-999999 */
    } glkdate_t;
Could you add the note “The seconds value may be 60 because of a leap second.” as an actual code comment in there? I did a double take.
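
For reference, a minimal sketch of filling that struct from the current time and printing it, assuming the draft’s glk_current_time() and glk_time_to_date_local() calls:

    #include <stdio.h>
    #include "glk.h"

    void print_current_date(void)
    {
        glktimeval_t tv;
        glkdate_t date;
        char buf[64];

        glk_current_time(&tv);              /* seconds and microseconds since the epoch */
        glk_time_to_date_local(&tv, &date); /* broken-down local date and time */

        /* Month is 0-based in the struct above; second can be 60 during a leap second. */
        snprintf(buf, sizeof(buf), "%04d-%02d-%02d %02d:%02d:%02d\n",
                 (int)date.year, (int)date.month + 1, (int)date.day,
                 (int)date.hour, (int)date.minute, (int)date.second);
        glk_put_string(buf);
    }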