I shied away from the NSPipe method early in development, and I forget exactly why. I recall reading some scattered reports online that NSPipe had a relatively low lifetime data limit: some tens of MB before it got clogged. I switched to Distributed Objects for moving data around, in much the same way cocoaglk does it, except not as well engineered.
I took another crack at CFRunLoop last night, and learned it’s safe to send Mach messages inside a signal handler (!!). So CFRunLoop can listen on a Mach port and wake up when it sees the message. Now everything works great - hardware sleep, lower CPU when idle - except timers, which is convenient since I have to rewrite that code anyhow.
The only discussions I see about NSPipe look like buffer size problems, not lifetime transfer problems. (Or people who are failing to read their buffers.) But if your plan works, great.
I am going to take this as a signal to go ahead and write up the sound API proposal.
Well, I got one obvious thing wrong: the glk_schannel_play_multi() call has no way to start two sounds on two different channels, which makes it pointless.
For technical reasons, the only practical way to define this capability is to say:
It seems a slightly arbitrary rule to introduce. Given that we're introducing pause and unpause calls, would it perhaps be better to have calls that pause and unpause several channels at once? These could then be the only calls that operate on multiple channels. If you want multiple sounds to start together: create the channels, pause them, play sounds on them, then unpause them all together. This would require the spec to make it explicit that playing a sound on a paused channel sets it up ready to go but doesn't unpause the channel, though.
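The workaround described above can be sketched as a toy model in C. None of these names are from the real Glk API; they just model the proposed semantics, where playing a sound on a paused channel arms it without making it audible, and unpausing starts it:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the proposed semantics (hypothetical names, not the
   real Glk API): playing on a paused channel queues the sound;
   unpausing then starts it. */
typedef struct {
    int paused;
    unsigned int queued;   /* sound armed, waiting for unpause; 0 = none */
    unsigned int playing;  /* sound currently playing; 0 = none */
} channel_t;

void channel_pause(channel_t *ch) { ch->paused = 1; }

void channel_play(channel_t *ch, unsigned int snd) {
    if (ch->paused)
        ch->queued = snd;  /* set up ready to go, but not audible yet */
    else
        ch->playing = snd;
}

void channel_unpause(channel_t *ch) {
    ch->paused = 0;
    if (ch->queued) {
        ch->playing = ch->queued;
        ch->queued = 0;
    }
}

/* Start several sounds in sync: pause all, arm each, unpause all. */
void play_in_sync(channel_t *chans, unsigned int *snds, size_t n) {
    for (size_t i = 0; i < n; i++) channel_pause(&chans[i]);
    for (size_t i = 0; i < n; i++) channel_play(&chans[i], snds[i]);
    for (size_t i = 0; i < n; i++) channel_unpause(&chans[i]);
}
```

The point of the model is that no multi-channel play call is needed; the pause/unpause pair is enough to get a synchronized start.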
However, I’m looking at the OSX/iOS audio toolbox API, and it doesn’t have a synchronized pause call. (There’s an “unpause at a given time” call, but using that to do synchronized unpauses would require a bit of guesswork.) So I suspect that the Mac/iOS implementation of this stuff will fall more naturally into a synchronized-play model, where the app code synchronizes its decoding of sound data to the audio queue rather than pausing the audio queue itself.
Possibly I’m overthinking this, since I haven’t looked at how the current Mac libraries do it.
I’ve updated the Glk header files to include the new sound API.
Because I don’t support any sound code myself, I will leave this in draft form for longer than usual. Once some of you good people have implemented it, and turfed out any stupid mistakes in my header files or dispatch code, I’ll formally release this as Glk spec 0.7.3. In the meantime, the code is in repository branches:
glk.h header: see github.com/erkyrath/cheapglk/tree/glk073 (this is a CheapGlk library with the new sound functions defined as stubs, which either do nothing or generate warnings).
By the way, the infglk.h header now defines “negative” I6 constants (for example keycode_Unknown) as large unsigned decimal numbers (4294967295). (In previous versions of infglk.h that constant showed up as -1.) This change was accidental – my new computer has a 64-bit environment – but I think it’s more correct this way so I left it. The I6 compiler treats the values exactly the same, anyhow.
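The equivalence can be checked in C: when cast to a 32-bit unsigned type, the literal -1 and the decimal 4294967295 name the same value (the glui32 typedef here is supplied for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Glk's glui32 is a 32-bit unsigned type. On a 64-bit toolchain,
   (glui32)-1 and 4294967295 are the same 32-bit value, which is why
   the two spellings of keycode_Unknown compile identically. */
typedef uint32_t glui32;

glui32 all_ones(void) {
    return (glui32)-1;
}
```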
I’m taking some steps towards implementing this - first up are the pause and unpause functions. Does the spec say anything about pausing or unpausing a channel that’s not playing a sound at all? Should this be illegal? If you create a channel, then pause it, then play a sound on it, should the sound not start until you unpause the channel?
Sorry, missed that. Another question: in glk_schannel_play_multi(), the spec says that “The notify argument applies to all the sounds.” Does that mean that if you play eight sounds at once with a nonzero notify argument, you get eight notifications, each with the value of the notify argument, as each sound finishes? Or you get one notification as soon as all eight have finished?
and have the arrays go out of scope at the end of the block, even though the sounds haven’t finished playing yet?
I think my implementation doesn’t need to access the arrays after starting the sound, so I should be fine if they stay in scope only during the call to glk_schannel_play_multi(). In other words, I don’t have a problem. Perhaps other implementations might need to work differently?
The caller may free or change the array contents immediately after the call. So the library needs to copy them if it wants to refer to the values later.
(This is generally true of Glk array and pointer arguments. The only exceptions are cases where the library will be returning data in the array later, e.g., open_memory_stream.)
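A minimal sketch of what that rule implies for a library implementation (the internal names here are hypothetical, not from any real Glk library): copy the caller's array before returning, so the caller is free to reuse or free its buffer immediately afterward.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical internal bookkeeping for a multi-sound play call.
   The library owns its own copy of the sound list, so the caller's
   array may go out of scope (or be reused) right after the call. */
typedef struct {
    unsigned int *sounds;  /* library-owned copy */
    size_t count;
} pending_multi_t;

pending_multi_t *multi_begin(const unsigned int *sounds, size_t count) {
    pending_multi_t *p = malloc(sizeof *p);
    p->sounds = malloc(count * sizeof *p->sounds);
    memcpy(p->sounds, sounds, count * sizeof *p->sounds);
    p->count = count;
    return p;
}

void multi_free(pending_multi_t *p) {
    free(p->sounds);
    free(p);
}
```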
If glk_schannel_set_volume_ext() is called with a non-zero duration while no sound is playing on the channel, is it OK for the library to do either of these two things?
1. Ignore the duration argument and change the volume instantaneously.
2. Start the volume change as soon as the next sound starts playing, so that the volume reaches its final value "duration" milliseconds after the sound starts playing.
Technical explanation: apparently GStreamer doesn’t make it easy to dynamically change properties on a pipeline (i.e. sound channel) that’s not playing. This also makes it difficult to meet the spec’s stipulation that pausing a channel does not pause a volume change in progress on that channel. The alternative is to have another timer for the volume running at the same time - seems wasteful not to piggyback on GStreamer’s internal timer, but perhaps I should go for that approach. (Anyone interested in the code, it’s at chimara-if.org/trac/browser/ … schannel.c in the new-sound-api branch.)
I’m going to say “no” – the volume change should be completely independent of what sounds are going on (or not). Sorry if this makes more work.
I think I was taking for granted that there would be a separate timer running for volume changes (or else code piggybacked on a fill-a-sound-buffer timer that was already in place). I’ll stick to that idea.
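That separate-timer idea can be sketched as follows (illustrative names, not from any real library): each timer tick computes the current volume by linear interpolation from elapsed time, so the ramp runs whether or not any sound is playing, and pausing a channel need not pause it.

```c
#include <assert.h>

/* A volume ramp driven by its own timer, independent of playback
   state. Volumes use Glk's 0x10000 = full-volume convention. */
typedef struct {
    unsigned int start_vol, target_vol;
    unsigned int duration_ms;  /* nonzero */
    unsigned int elapsed_ms;
} volume_ramp_t;

/* Advance the ramp by one timer tick; returns the current volume. */
unsigned int ramp_tick(volume_ramp_t *r, unsigned int tick_ms) {
    r->elapsed_ms += tick_ms;
    if (r->elapsed_ms >= r->duration_ms)
        return r->target_vol;  /* ramp complete */
    /* 64-bit intermediate avoids overflow with full-scale volumes */
    long long delta = (long long)r->target_vol - (long long)r->start_vol;
    return r->start_vol
        + (unsigned int)(delta * r->elapsed_ms / r->duration_ms);
}
```

The same tick routine also works when the target is lower than the start, since the delta is computed as a signed value.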
Well, not much more work - it was fairly easy to do. One last question, more out of curiosity than confusion, since I've now finished implementing the draft spec and haven't run into any further issues.
The smallest possible nonzero volume change duration is 1 millisecond. Is the library required to provide that level of accuracy? Say I wanted to run my volume change timer at 10 ms for some reason, would the spec allow me to round off the durations to the nearest 10 ms?
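If the spec does permit that granularity, the rounding itself is trivial; a hedged sketch (round half up to the nearest tick):

```c
#include <assert.h>

/* Round a requested duration to the nearest multiple of the timer
   tick, e.g. a 10 ms tick. Whether the spec allows this rounding is
   exactly the open question above; this only shows the arithmetic. */
unsigned int round_to_tick(unsigned int duration_ms, unsigned int tick_ms) {
    return (duration_ms + tick_ms / 2) / tick_ms * tick_ms;
}
```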