Embedding large amounts of binary data

A project I’m working on needs byte-level access to several megabytes of data, which I would somehow like to embed in the story file. In an attempt to get something working quickly, I tried defining an I6 array with the data, but this is awkward because Inform enforces a length restriction on I6 inclusions.
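For concreteness, the sort of thing I tried looked roughly like this (the bytes here are made-up placeholders):

```inform6
! Hypothetical sketch of the embedded-array approach: the data as a
! raw I6 byte array, spliced into the source via an I7 inclusion.
! At several megabytes, the inclusion hits Inform's length limit.
Array EmbeddedData -> $4D $5A $90 $00 $03 $00;  ! placeholder bytes;
                                                ! the real data runs to megabytes
```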

Possibly the cleanest approach would be to include the data as a ‘BINA’ resource in the Blorb file. However, I would have to post-process the Blorb file produced by Inform to add the extra resource, which would make testing within the IDE impossible (or at least highly inconvenient). Alternatively I could modify cBlorb to add the resource, but that seems pretty hacky.

Does anyone have a better idea for how to do this?

I’d say: set it up to first try loading from a BINA resource, and if that fails (or if that capability is not available) try opening an external file with a given name. That should work in the IDE.
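Something along these lines, sketched with infglk-style wrappers; the resource number and file name are placeholders:

```inform6
! Sketch: prefer a Data (BINA) resource from the blorb, falling back
! to an external file. Assumes infglk-style glk_ wrappers and constants;
! resource number 1 and the name "mydata" are hypothetical.
Array DataFileName -> 'm' 'y' 'd' 'a' 't' 'a' 0;  ! null-terminated name

[ OpenDataStream  str fref;
    ! Resource streams may not be supported, so gestalt-check first
    if (glk_gestalt(gestalt_ResourceStream, 0)) {
        str = glk_stream_open_resource(1, 0);   ! Data resource #1
        if (str) return str;
    }
    ! Otherwise fall back to an external file with the given name
    fref = glk_fileref_create_by_name(
        fileusage_Data + fileusage_BinaryMode, DataFileName, 0);
    if (fref == 0) return 0;
    str = glk_stream_open_file(fref, filemode_Read, 0);
    glk_fileref_destroy(fref);
    return str;   ! zero if neither source was available
];
```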

I don’t remember where you’re supposed to put the external file, though.

Probably in the parent folder of the .inform folder.

Thanks! Following your suggestion, my plan was to use the external file approach for testing, and a Blorb resource for the final story, but having implemented it, I find that resource streams are not widely supported. I tried Glulxe and git (as included in Inform), Zoom for OS X, Gargoyle, Spatterlight, and Quixe, and only the last responds affirmatively to a gestalt_ResourceStream query.

Is it likely that resource streams will become more widely implemented in the near future? I would really prefer to use them: in addition to being cleaner, they also seem to be much faster. Otherwise identical code runs about six times faster reading from a resource stream in Quixe than from a file stream in Glulxe. This could be due to Quixe having the entire blorb in memory vs. Glulxe possibly making many disk accesses, but I’m not sure how to mitigate the latter. I have been reading the streams line by line with glk_get_line_stream, which is convenient for my parsing, but maybe reading larger blocks at a time would help. Ideally, though, I could just use resource streams and not bother with this.
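For what it’s worth, the block-reading idea would look something like this (chunk size and the parsing step are placeholders):

```inform6
! Sketch: pull the stream in large chunks with glk_get_buffer_stream
! rather than line by line, to cut down on per-call (and possibly
! per-disk-access) overhead. CHUNK_SIZE is an arbitrary choice here.
Constant CHUNK_SIZE 4096;
Array ChunkBuf -> CHUNK_SIZE;

[ ProcessDataStream str  len;
    for (::) {
        len = glk_get_buffer_stream(str, ChunkBuf, CHUNK_SIZE);
        if (len == 0) break;   ! end of stream
        ! ... split the len bytes in ChunkBuf into lines and parse ...
    }
    glk_stream_close(str, 0);
];
```

Line boundaries can then be found by scanning each chunk for newline bytes, carrying any partial line over into the next read.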

I really want to know what sort of data you’re reading.