This might be tangential, but I have maybe 2-5 MB of audio files in a test project, and the page loaded it all instantly. The moment I added three encoded SVG files of tiny size (less than 2 KB each) to the CSS code, the whole thing started taking 2-3 seconds to load, on average, but I’m also not sure if this is relevant, because CSS might load stuff kinda weird?
EDIT: Audio file sizes are estimates; I don’t have any of these files handy right now, and sleep issues make for foggy memory.
I’d think it’s probably better to go back to the simpler solution and only look for something else if it actually proves to be a problem. I haven’t had that happen with Parchment yet! But maybe that’s because it loads a small number of larger resources rather than many very small ones.
Also is the wasm embedded in the HTML too? If so you’d really be better off going for a base64ed zip instead. Either way it’s just one big blob to decode.
Note that decompressing does take some time. It’s unlikely to really be very noticeable though. But if that was an issue, and you didn’t care about the file size you could also just bundle everything into an IFF, which are super quick to unbundle. A non-compressed zip might be okay too, especially if most of the resources are media that are already compressed.
So I decided to be cute and homebrew a variant of the super-simple WAD format. I’m choosing this mostly because the libraries I’m finding don’t really work when you merge all the JavaScript into a single file. Also, I’m wary of license complications from those libraries.
A quick summary of the WAD format: it begins with a directory listing the contained entries and their lengths in bytes. That directory starts with a fixed, known number of bytes describing the directory’s total byte length. This means you can stream the directory in, halt at its end, and then slice the known byte ranges out of the larger file.
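To make that concrete, here’s a minimal sketch of the directory scheme, assuming a hypothetical layout (not the real WAD spec or my actual build output): a 4-byte little-endian directory length, then per-entry records of name length, name, and data length, then the raw data for each entry in order.

```javascript
// Parse a hypothetical WAD-like bundle out of a Uint8Array.
function parseBundle(bytes) {
    const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
    const dirLength = view.getUint32(0, true); // total directory bytes
    let pos = 4;

    // Read directory entries until we reach the end of the directory.
    const entries = [];
    while (pos < 4 + dirLength) {
        const nameLength = view.getUint8(pos); pos += 1;
        const name = new TextDecoder().decode(
            bytes.subarray(pos, pos + nameLength));
        pos += nameLength;
        const dataLength = view.getUint32(pos, true); pos += 4;
        entries.push({ name, dataLength });
    }

    // Slice each entry's bytes out of the data region that follows.
    const files = {};
    for (const entry of entries) {
        files[entry.name] = bytes.subarray(pos, pos + entry.dataLength);
        pos += entry.dataLength;
    }
    return files;
}
```

Because the directory is length-prefixed, the parser never has to scan the data region; it knows exactly where every file starts and ends before touching a single payload byte.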
For this, I’m using the fetch() trick, as well as the methods that come with the Blob class. I’m going to avoid garbage collection as much as possible, and use built-in methods where I can, to lean on the browser’s optimized native code for speed.
Ahahaha, I just realized the header list can be a JavaScript object written during the build process, and only the bundled data itself would be contained in the base64 string.
The build process now generates a JavaScript list, which contains embedded file names, and their respective starting byte offsets within the “data dump”. I call this the “embedded manifest”.
The data dump is then written (in base64) to a property on the singleton GAME_INFO object.
During page load, an asynchronous function is called, which converts the data dump’s base64 string into a blob via the fetch() method.
The embedded manifest is iterated over; for each item, a MIME type is determined and Blob.prototype.slice() is called on the data dump’s blob.
This sub-blob is added to the list of known asset files.
The original base64 string property is set to null for garbage collection.
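The load steps above can be sketched roughly like this. The shapes here are assumptions for illustration: manifest records carrying name, mime, start, and end, and a GAME_INFO-style object with a dataDump property holding the base64 string.

```javascript
// Decode a base64 data dump into per-asset Blobs using an
// embedded manifest of { name, mime, start, end } records.
async function loadAssets(gameInfo, manifest) {
    // fetch() decodes a base64 data: URL into a Blob for us.
    const response = await fetch(
        "data:application/octet-stream;base64," + gameInfo.dataDump);
    const dump = await response.blob();

    const assets = new Map();
    for (const entry of manifest) {
        // Blob.prototype.slice(start, end, contentType)
        assets.set(entry.name, dump.slice(entry.start, entry.end, entry.mime));
    }

    gameInfo.dataDump = null; // free the base64 string for GC
    return assets;
}
```

The fetch() call is doing the heavy lifting: the browser’s native base64 decoder runs once over the whole dump, and every slice afterward is just bookkeeping against that one blob.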
From here, business occurs as before:
Each sub-blob gets some simple pre-processing, to make it interface with the engine’s asset library system.
When all sub-blobs are processed, the engine marks asset loading as done.
When the page window is done loading, the engine runs its “doReady” method.
Any intro text and start-of-game stuff is handled, and the “loading duration” performance metric is recorded.
Average loading duration, using the overkill WASM method: 1844 ms
Tap to reveal average loading duration, using the data dump fetch method...!
1739 ms!!!
Not only does everything still work, but we also have an average loading duration reduction of 5.7%!!!
(The output file is also 36 kB smaller!)
Hell yeah, Dannii!! Excellent solution proposal!!
I feel super silly for not realizing this solution myself, but I have so, so many systems and mechanics whirling through my brain regarding this engine, and that one experiment I was reading about seemed like a settled-and-done thing, especially at a time where I hadn’t yet hit my learning curve, lol.
It might be slightly faster still to put the base64 data in HTML, rather than JS, so the JS doesn’t have to parse it. I don’t know if it would really matter though. Most of that 1.7s is surely other things than the resource loading.
Does blob.slice reuse the underlying memory? Or are any of the resources things that might need to change in the future? Because you could use one arraybuffer instead to ensure there’s only one copy of the data. But blob.slice might already be doing that, I’m not sure.
Oh, good question… I’m…not sure…? My assumption is using the established methods would be more efficient than anything I’m implementing from scratch, probably.
Most of the ways the sub-blobs are being used is by converting them to an ArrayBuffer, which is then given to audio sources in the Web Audio API. At the moment, once I have the ArrayBuffer, I can reuse the same reference for every sound emission. I feel like there might be a way to streamline that a bit more…
Nope! There won’t be any resource editing in this engine…! All embedded data will be treated as static/constant!
EDIT: It looks like Blob.slice() does not reuse memory, from some of the info I’ve seen in a few StackOverflow topics…? It also seems like it gets handled on the user’s disk. Oof. That might cause some loading lag…
I’m pretty sure this will mean the data is copied, because it couldn’t ensure the blob would remain immutable if you could use it in an arraybuffer. So starting with an arraybuffer will be more efficient.
Also found this, which seems promising! I’ll be able to pull an ArrayBuffer from the fetch response (skipping blobs entirely), cast that to a Uint8Array (hopefully), and then maybe send sub-arrays from that to all the sound sources.
This is assuming that sound sources accept TypedArray for their buffers, which means I’ll never have to create memory or send memory to garbage collection, regarding game resources…!
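For reference, this is the memory-sharing behavior the plan relies on. Uint8Array.prototype.subarray creates a view over the same underlying buffer with no copying:

```javascript
// subarray() returns a view, not a copy: both typed arrays
// share one underlying ArrayBuffer.
const backing = new Uint8Array([10, 20, 30, 40, 50]);
const slice = backing.subarray(1, 4); // views bytes 1..3

// Writing through one view is visible through the other.
backing[2] = 99;
console.log(slice[1]);                        // 99
console.log(slice.buffer === backing.buffer); // true
```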
Okay, so it turns out that doing everything with TypedArray—split into memory-sharing subarrays—does not work with the Web Audio API! It absolutely requires an ArrayBuffer, and will reject all similar interfaces. Also, there’s no way to get two ArrayBuffers to share underlying memory, from what I’ve seen. Attempting to do this will invalidate all other buffers, except one. The connection between buffer and underlying data is required to be one-to-one.
Even with a SharedArrayBuffer, there is only one interface for the bytes underneath; it’s just thread-safe.
So the best I could do was stop using blobs, and directly work with the buffers underneath. When resource loading is done, the dump data and any in-between buffers are nulled and thrown into garbage collection, so that all resource data should never have a duplicate floating around.
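Since the Web Audio API demands independent ArrayBuffers, the cut-up step ends up looking something like this sketch (names are illustrative). ArrayBuffer.prototype.slice copies, so each resource gets its own buffer, and the dump can be nulled afterward:

```javascript
// Cut a dump ArrayBuffer into independent per-resource ArrayBuffers,
// using a manifest of { name, start, end } records.
function splitDump(dumpBuffer, manifest) {
    const buffers = new Map();
    for (const entry of manifest) {
        // ArrayBuffer.prototype.slice copies the byte range.
        buffers.set(entry.name, dumpBuffer.slice(entry.start, entry.end));
    }
    return buffers; // caller can now null the dump reference for GC
}
```

The copies are unavoidable here, but they only exist during loading; once the dump reference is dropped, each resource’s bytes live in exactly one place.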
New loading duration: 1468 ms!! We’ve dropped by about 15.6% from previous, and 20.4% from original!
I’m gonna keep poking at this, and see if I can reduce memory usage and loading times in other places.
I was actually reading about the web audio API earlier today. Yes, it’s a little weird that it doesn’t allow you to pass a typed array (ie, a slice of an arraybuffer). But from what I can see, it’s also not very memory efficient, and in particular BaseAudioContext.decodeAudioData eats the arrayBuffer.
So if there’s any chance of playing a sound twice, when you need to play some audio you basically need to make a fresh copy of the data and then pass that copy’s ArrayBuffer to BaseAudioContext.decodeAudioData(). So you could either have one ArrayBuffer and slice it on demand, or else cut it up at the beginning and then clone the audio buffers when needed.
So, something I’ve learned and tested earlier when implementing audio is that decodeAudioData does eat the buffer, but the decoded result which this creates is actually reusable…!
So I’m able to run decodeAudioData once—for all audio resources—then cache the results, and then just send a cached result to new sources, when I want to play a sound.
EDIT: A (reduced) example:
```javascript
// assetProfile contains the asset name and the sliced
// segment of the data dump ArrayBuffer.
if_octane_audio_context.decodeAudioData(
    assetProfile.buffer, // <- sliced from data dump
    // Once decoded, the result is passed into buffer arg:
    function(buffer) {
        // This list contains cached decoded audio data
        if_octane_loaded_audio_files.push({
            name: assetProfile.name,
            buffer: buffer // <- cache decoded result!
            // This can be reused!
        });
    },
    function(err) {
        console.log("err(decodeAudioData): " + err);
    }
);
```
EDIT 2: Oh, I wonder if I could hold off on decoding until the audio is needed, and then I can do caching upon the first play…
Hey @Dannii, I think I might have underestimated just how bad decodeAudioData is, when you front-load it for all resources at page load!
I just implemented a new way of handling audio data, where each audio resource is marked as either decoded or not, and when an audio resource is requested and it’s not decoded yet, it gets decoded on the spot, with the results cached for later (so decodeAudioData only ever needs to be called once per resource).
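The decode-on-first-play pattern boils down to a memoized async lookup. This is a reduced sketch, not my actual engine code; the decode step is parameterized here so the shape is visible (in the real thing it would wrap the audio context’s decodeAudioData):

```javascript
// Build a decode-on-demand cache. `decode` is an async function
// taking an encoded ArrayBuffer and resolving to a decoded result.
function makeAudioCache(decode) {
    const cache = new Map(); // name -> Promise of decoded audio

    return function getAudio(name, encodedBuffer) {
        // Only the first request triggers a decode; later requests
        // (and concurrent ones) share the same cached promise.
        if (!cache.has(name)) {
            cache.set(name, decode(encodedBuffer));
        }
        return cache.get(name);
    };
}
```

Caching the promise rather than the result also covers the case where a sound is requested twice before the first decode finishes: both callers await the same in-flight decode instead of kicking off a second one.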
Wanna guess how much of an impact this has?
The new loading duration is now an average of 391 ms!!!
That’s much better, yeah?
That is 78.8% down from my original super-cursed WASM build!! Wow!!
Again, thanks for chiming in, sharing wisdom, finding resources, and getting my brain to turn some ideas over!
Yes, but it can also be very memory intensive, because I think it’s basically uncompressed data (like a WAV file.)
And wow, that’s a good result. I hadn’t realised you were doing the audio decoding on startup, that’s definitely something that would take a long time.
Ohhhh, I misunderstood this. I thought you meant it makes garbage collection take a huge memory hit, which creates lag spikes.
Hm. That’s… gonna cause some issues when I have compressed environment ambience and music tracks loaded and decoded. I might need to create a subsystem where, if decoded audio passes some size threshold, it gets stored in a blob, maybe, where it can live in the disk cache if the browser finds it necessary.
I’ll need to look into this more.
I also plan to make a system using a couple of data dumps, limiting each to a size of 20 MB, which stays under the lower browser limits out there for inline data strings. I don’t think I would ever need more than 2 dumps in a game, based on the audio assets of the latest I Am Prey dev build.
I’ve never heard of Blobs being stored in the file cache. Where did you see that? You’re not mixing it up with Blobs/Files being a representation of disk files?
So I just tried to find the StackOverflow thread but now I can’t find it. I did find something about IndexedDB, though, which might work.
Somewhere in the thread, someone said that an array buffer only stays in RAM, but a Blob is cached on disk by the browser to keep RAM free. No link was provided in the thread for verification, so it’s very possible to be incorrect, and I’m assuming it is incorrect, because now I can’t find any source to confirm this.
Okay, apparently IndexedDB asks for permission to store stuff, which admittedly might look invasive to brand-new users, so here’s my backup plan:
Keep a copy of the compressed data on hand for larger audio files. When a larger audio is no longer needed, ditch the decoded buffer to make room. When it’s needed again, it’ll have to be re-decoded (unfortunately).
Audio file size categories can be unknown, small, or large. Once the size category is known, future uses of the audio will be more efficient in memory, because the system will be able to better-prepare accordingly.
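A rough sketch of that backup plan, with assumed names and an arbitrary threshold (everything here is illustrative, not engine code):

```javascript
// Decoded audio at or above this size is considered "large";
// the 1 MB figure is an arbitrary placeholder.
const LARGE_THRESHOLD = 1024 * 1024;

// Assign a size category once the decoded size is known.
// Assets start out as "unknown" until first decode.
function categorize(asset, decodedByteLength) {
    asset.sizeCategory =
        decodedByteLength >= LARGE_THRESHOLD ? "large" : "small";
}

// Large assets keep only their compressed bytes; the decoded
// buffer is dropped and must be re-decoded on next use.
function releaseDecoded(asset) {
    if (asset.sizeCategory === "large") {
        asset.decoded = null;
    }
}
```

Small assets stay decoded forever, so the re-decode penalty only ever applies to the big ambience and music tracks that caused the memory problem in the first place.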
Part of me is really tempted to divide large audio into chunks, and kinda decode the track a few chunks ahead of what’s playing, and then toss previous chunks, but… unless I can specify a buffer as a decode destination, I don’t think I can do that in a performant way by just relying on garbage collection.
Also, I’m not sure I can do that without stuttering.