Unicode is fine in JSON; the issue is that I7 only knows how to read and write ASCII. If you need higher codepoints, you'll need some custom file functions. (Maybe there is already an extension for that?) So I don't think I'll add escaping for higher codepoints, as it would just make strings so much longer. Except maybe for surrogate pairs, as they need to be escaped anyway? I guess they're easy enough to escape.
However the parser definitely does need to support Unicode escapes.
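For anyone curious what that involves, here's a minimal sketch (in Python, purely as an illustration of the logic rather than I7 code) of decoding a `\uXXXX` escape, including combining a high/low surrogate pair into one codepoint:

```python
def decode_unicode_escape(s, i):
    """Decode a \\uXXXX escape starting at s[i] (the backslash).
    Returns (codepoint, next_index), combining surrogate pairs."""
    assert s[i:i+2] == "\\u"
    unit = int(s[i+2:i+6], 16)
    i += 6
    # A high surrogate must be followed by a low surrogate escape.
    if 0xD800 <= unit <= 0xDBFF and s[i:i+2] == "\\u":
        low = int(s[i+2:i+6], 16)
        if 0xDC00 <= low <= 0xDFFF:
            unit = 0x10000 + ((unit - 0xD800) << 10) + (low - 0xDC00)
            i += 6
    return unit, i
```

For example, `decode_unicode_escape("\\ud83d\\ude00", 0)` yields codepoint U+1F600 (😀) and advances past both escapes.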
I thought about adding a traversal function, where you pass in a query string, kind of like the jq command: something like "key1 key2 [3] key4". I wonder if these extra utility phrases should be part of the main extension or in a second extension?
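The idea can be sketched like this (again Python as an illustration, assuming the space-separated query syntax suggested above, where "[n]" tokens index arrays and everything else is an object key):

```python
import re

def traverse(data, query):
    """Walk nested dicts/lists using a space-separated query string.
    "[n]" tokens index into lists; other tokens are object keys."""
    for token in query.split():
        m = re.fullmatch(r"\[(\d+)\]", token)
        if m:
            data = data[int(m.group(1))]
        else:
            data = data[token]
    return data
```

So `traverse({"key1": {"key2": [0, 1, 2, {"key4": "found"}]}}, "key1 key2 [3] key4")` returns `"found"`.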
So it’s actually quite easy to add other types into the system:
JSON room type is a JSON type.
To decide which JSON reference is a/-- JSON room (R - room):
(- JSON_Create((+ JSON room type +), {R}) -).
To decide what room is (R - JSON reference) as a room:
(- JSON_Read((+ JSON room type +), {R}) -).
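To show why this is easy, here's a rough Python analogue of the tagged-reference design those phrases rely on (the names `json_create`/`json_read` mirror `JSON_Create`/`JSON_Read`, but the backing store here is a hypothetical simplification, not the extension's internals): every reference is stored with a type tag, so supporting a new type only requires a new tag.

```python
JSON_ROOM = "room"   # a new type tag, analogous to "JSON room type"
_store = []          # backing store of (type, value) pairs

def json_create(jtype, value):
    """Box a value with its type tag; return an opaque reference (index)."""
    _store.append((jtype, value))
    return len(_store) - 1

def json_read(jtype, ref):
    """Unbox a reference, checking the expected type tag."""
    t, v = _store[ref]
    if t != jtype:
        raise TypeError(f"expected {jtype}, got {t}")
    return v
```

Reading a reference back with the wrong tag fails loudly instead of returning a misinterpreted value.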
But I’m considering splitting off the object model into a new extension, perhaps called Collections.
It would then be possible to make the object/map type support non-string keys: any non-reference key (that is, anything other than an array or object) would be easy to support. Would that be useful?
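The distinction is the same one Python dicts make, which illustrates why numbers, booleans, and null are easy keys while arrays and objects are not: reference types are mutable, so they can't be compared or hashed stably.

```python
d = {}
d[42] = "number key"       # numbers work as keys
d[True] = "boolean key"    # booleans work as keys
d[None] = "null key"       # null works as a key
try:
    d[[1, 2]] = "array key"  # mutable containers can't be keys
except TypeError:
    print("lists are unhashable")
```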