refly - Experimental Fluent C# IF Platform

NOTE: This is entirely vaporware at the moment, so if you’re looking for working code, come back in a month or two.

I’ve monkeyed with many different IF platform ideas over the last two-plus decades. Back in the late ’90s I created an editor for Inform. I tried XML, Visual Basic, and later on FyreVM (the credit for FyreVM really goes to Jeff Panici, a friend of mine who gave me the idea of “channels”, and Jesse McGrew, who took my ideas and made them real in C#). I was probably not a focused enough programmer to implement a virtual machine or my own compiler, so my efforts have always died in their infancy.

Even so, this is just one of those things that’s always been on my mind. I like building tools. I love telling stories too, but finding the time for story-writing is challenging. Tool-building is easier to jump in and out of on an erratic schedule.

One of the ideas I’m exploring is data-driven IF. So everything about the story would be stored in an in-memory database and a program simply runs based on that data. An object tree is never inflated. The story progresses by manipulating the data. For this I’m building a customized graph data structure (Vertex/Edge with properties).

Another idea is that output should be a first-class service: the ability to identify parts of text and types of text, and to provide the logic for how all of the text for a given “turn” is aggregated, formatted, and emitted.

The code itself is C# with fluent constructs. This just means that objects can be built up in steps, with functions that keep returning the same object with more detail added. It’s also very readable.
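As a sketch of what such a fluent construct might look like (the names here, Item, Named, Wearable and so on, are hypothetical illustrations, not the actual refly API):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical fluent builder: each method adds detail to the object
// and returns "this", so calls can be chained readably.
public class Item
{
    public string Name { get; private set; } = "";
    public string Description { get; private set; } = "";
    public HashSet<string> Traits { get; } = new HashSet<string>();

    public Item Named(string name) { Name = name; return this; }
    public Item DescribedAs(string text) { Description = text; return this; }
    public Item Wearable() { Traits.Add("wearable"); return this; }

    public static Item Create() => new Item();
}

public static class Demo
{
    public static void Main()
    {
        var cloak = Item.Create()
            .Named("velvet cloak")
            .DescribedAs("A handsome cloak of velvet.")
            .Wearable();

        Console.WriteLine($"{cloak.Name}: wearable={cloak.Traits.Contains("wearable")}");
    }
}
```

Each method mutates the object and returns it, which is what makes chained calls read like a sentence.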

Debugging: I really like the idea of being able to step through code and to write unit tests, to show that code is functioning in the order we expect and, when it doesn’t, to identify the flaws.

Rules: use some of the same concepts as Inform 7’s rules.

Eventually there’d need to be a text version of the story, but honestly that’s a future item. If I get past these initial design elements and get this working as I envision it, a more robust library would be required. So the verification steps are:

  • Make Cloak of Darkness work
  • Make Adventure work
  • Identify weaknesses, refactor, rebuild, repeat at least once (but probably more than that)
  • Create language syntax for import into C# program and export back out
  • Implement web version in Web Assembly (assuming C# compilation in Web Assembly will be further along by then)

All the work will be contained in a GitHub repository:

It wouldn’t be impossible to write an Inform implementation of GraphQL. (Nor for TADS3.) Just to put another idea out there.

That’s an interesting thought, but most of the “fun” here is building from scratch.

This is a great project, and you should definitely continue.

I’d like to put forward the case that modern IF is in desperate need of new, from-scratch technologies. I’m serious; the lack of them is why we’re seeing everything slowly go choice-based. As far as I can tell, there are no next-gen, world-model-rich systems out there. But there really should be! And imagine what kind of parser you could build on today’s technology: a lot more than some ancient LALR effort, I’m sure.

Data-driven IF is the way to go. But as it expands, how do you manage the growth of properties, their behaviours, and their inter-behaviours? I’ve been looking into this problem and have come up with two main ideas:

  • a global schema
  • modularisation

In your C# fluent approach, the global schema is essentially the set of functions you define, such as Wearable(). In my system, the global schema, or what I’m calling the base ontology, manifests as symbolic linkage between modules: globally, dog == dog.

Modularisation of terms and behaviour is, I think, another cornerstone of managing meaning. To pick up the wearable example, I want to put all of the code that supports a concept, say wearable, into a single module that can, if you like, plug in to any game.

I know that systems with extensions have been around for a long time, but I’m talking about taking that modularity to another level. Imagine you wrote the behaviour (and properties) for a pet dog, all in a single module that you could simply copy verbatim into another game. Whereupon, in that other game, characters would pet the dog and react to the dog, and the dog itself would forage for bones and do other dog-like things, automatically and driven entirely by data with no game-specific code.

That sounds impossible, but really it’s not for general purpose behaviour. However, it’s important to add here, that if the dog has some story specific purpose in the new game, that would, of course, need to be added somewhere. But that’s always going to be true.

Thanks. Once I get past the basics, refactoring priorities will definitely include modular concepts. I want “extensions” to actually be NuGet packages. So the author could theoretically “nuget install” and voilà, a dog NPC is now added to their story.

So the pattern I’m working on is:

Fluent constructs -> Services -> Repositories -> Graph Data Store


We inject IStoryService into the Story fluent class which gives us a StoryService instance. Within StoryService, IStoryRepository is injected. StoryRepository handles serialization within the graph.
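A minimal sketch of that layering, assuming constructor injection; only IStoryService and IStoryRepository are named above, everything else here is my own illustration:

```csharp
using System;
using System.Collections.Generic;

// Sketch of the Fluent -> Service -> Repository -> Store layering.
// InMemoryStoryRepository and the method names are hypothetical.
public interface IStoryRepository { void Save(string key, string value); string? Load(string key); }
public interface IStoryService { void SetTitle(string title); string? GetTitle(); }

public class InMemoryStoryRepository : IStoryRepository
{
    private readonly Dictionary<string, string> _graph = new();
    public void Save(string key, string value) => _graph[key] = value;  // stands in for graph serialization
    public string? Load(string key) => _graph.TryGetValue(key, out var v) ? v : null;
}

public class StoryService : IStoryService
{
    private readonly IStoryRepository _repo;
    public StoryService(IStoryRepository repo) => _repo = repo;
    public void SetTitle(string title) => _repo.Save("story:title", title);
    public string? GetTitle() => _repo.Load("story:title");
}

// The fluent class only sees the service, never the store.
public class Story
{
    private readonly IStoryService _service;
    public Story(IStoryService service) => _service = service;
    public Story Titled(string title) { _service.SetTitle(title); return this; }
    public string? Title => _service.GetTitle();
}

public static class Demo
{
    public static void Main()
    {
        var story = new Story(new StoryService(new InMemoryStoryRepository()))
            .Titled("Cloak of Darkness");
        Console.WriteLine(story.Title);
    }
}
```

The point of the layering is that each tier can be swapped out: the fluent class never touches the graph directly.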

Some of the Fluent constructs, like Rules will consume multiple services.

In cases where code is stored, it can be executed dynamically with reflection.

Everything will fire events, so no “loop” will be required. I’m still thinking about this, but:

  • The user enters a command, firing the “command” event.
  • The “command” event fires the “parse” event.
  • The “parse” event runs through the parser, identifies a list of actions, and fires the “execution” event.
  • The “execution” event fires all commands in order (which may in turn fire other events), altering the graph and firing “text” events.
  • “Text” events are aggregated by the “text generation service” with a given “template” and emitted as output.

Everything in this list has an interface and is replaceable and extensible.
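A rough sketch of that event chain using plain C# events (all names here are hypothetical; a real implementation would put each stage behind an interface, as described above):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the evented turn cycle: each stage only raises the next
// event, so every stage can be replaced or extended independently.
public class TurnPipeline
{
    public event Action<string>? CommandEntered;
    public event Action<List<string>>? ActionsParsed;
    public event Action<string>? TextEmitted;

    public string? LastOutput { get; private set; }
    private readonly List<string> _turnText = new();

    public TurnPipeline()
    {
        CommandEntered += cmd => ActionsParsed?.Invoke(Parse(cmd));
        ActionsParsed += actions =>
        {
            foreach (var a in actions) Execute(a);          // may alter the graph
            LastOutput = string.Join(" ", _turnText);       // aggregate all text for the turn
            Console.WriteLine(LastOutput);                  // emit
            _turnText.Clear();
        };
        TextEmitted += text => _turnText.Add(text);
    }

    public void EnterCommand(string cmd) => CommandEntered?.Invoke(cmd);

    private static List<string> Parse(string cmd) => new(cmd.Split(' '));  // toy parser

    private void Execute(string action) => TextEmitted?.Invoke($"[{action}]");
}

public static class Demo
{
    public static void Main()
    {
        new TurnPipeline().EnterCommand("take lamp");
        // prints: [take] [lamp]
    }
}
```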

By breaking the traditional loop into events, what you’re now doing is articulating an event space, so this can be more like a state machine than a sequence. Are these events themselves objects, and if not, can they be? In other words, there can then be functions that operate on the events themselves: for example, transforming one event into another, storing them for later, breaking one event into two, etc.

I’ve been looking at breaking up what you’re calling the “execution” event into many sub parts and stages. In traditional systems, execution is far too monolithic. Which is why things otherwise end up with code hooks and other bodges.

To take one concrete example, I want to separate the notion of intent from the notion of action, and both from the notion of output. Expressed in your terminology, the intent would result in an event that could be processed or intercepted by agents before it reached any action.

The purpose of this is to support active worlds. Traditional systems are way too passive. A new species of world modelling requires a concomitant evolution in agents and active state.

Yes. I was massively generalizing the logic. Output especially is a completely first class operation that happens after all of the emitted text has been cataloged.

Events would carry physical actions with content. Context is also a separation-of-concerns task.

I’m following this thread with interest. I too am experimenting with my own ideas for an IF platform that makes some different choices than the popular IF systems.

I recently read a comment by one of the early object-oriented pioneers (whose name presently escapes me). The gist was that object-oriented programming was supposed to emphasize message passing. Instead, we ended up emphasizing the objects themselves, in the sense that C++, C#, Java, and other languages have given us classes, objects, inheritance, polymorphism, etc. So this event approach is intriguing to me, and somewhat similar to one of the ideas I’m pursuing.

Another goal of mine is to see if I can avoid inheritance and classes altogether. I want to build up objects by composing units of behavior. In my case, these units of behavior end up like adjectives: the cardboard box is portable, flammable, and container-like.
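In C# terms (keeping one language for this thread), here is a sketch of that adjective-style composition; all names are my own illustration, not the actual system:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: a game object is just a bag of named behaviors ("adjectives"),
// with no class hierarchy and no inheritance.
public interface IBehavior { string Name { get; } }

public class Portable : IBehavior { public string Name => "portable"; }
public class Flammable : IBehavior { public string Name => "flammable"; }
public class Container : IBehavior
{
    public string Name => "container";
    public List<string> Contents { get; } = new();
}

public class Thing
{
    private readonly Dictionary<string, IBehavior> _behaviors = new();
    public string Name { get; }
    public Thing(string name) => Name = name;

    public Thing With(IBehavior b) { _behaviors[b.Name] = b; return this; }
    public bool Is(string behavior) => _behaviors.ContainsKey(behavior);
    public T? As<T>() where T : class, IBehavior =>
        _behaviors.Values.OfType<T>().FirstOrDefault();
}

public static class Demo
{
    public static void Main()
    {
        var box = new Thing("cardboard box")
            .With(new Portable())
            .With(new Flammable())
            .With(new Container());

        box.As<Container>()!.Contents.Add("lemon");
        Console.WriteLine($"{box.Name}: portable={box.Is("portable")}, holds {box.As<Container>()!.Contents.Count}");
    }
}
```

The behaviors compose at runtime, so the “cardboard box is portable, flammable, and container-like” description maps directly onto data rather than onto a type hierarchy.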

I do have a loop, but it’s not the traditional REPL. Mine sends a “take turn” message to each object in the world in turn. Most don’t do anything. But the player object prompts the user, interprets the command, and starts firing messages to the affected objects.

I’ve got a good start of a game prototyped in Python, and the ideas seem to work well. I can do a lot of things with relatively little code that I haven’t been able to figure out how to do in Inform 7.

I used Python because I was learning it when I started, but it’s not an ideal language to express the concepts I’m after, so it’s a bit verbose and there are some implementation kludges. While I tinker with ideas for a proper domain-specific language, I’m currently making a VM that natively handles some concepts that would take a ton of code in a traditional VM. Nothing like re-inventing the wheel, or the world (simulation).

I do the same in my system (it’s called xvan). The user input is sent to all objects in scope, and each object decides for itself whether it must respond. Objects can work together to generate the response for the user; e.g. with the “inventory” command, each object held by the player prints a line of the output. It requires a different way of thinking when writing a story, because there is no central process control other than distributing the user input.
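A toy sketch of that broadcast model (my own illustration, not xvan’s actual code):

```csharp
using System;
using System.Collections.Generic;

// Sketch: user input is broadcast to every object in scope, and each
// object decides for itself whether to contribute a line of output.
public class WorldObject
{
    public string Name { get; }
    public bool HeldByPlayer { get; }
    public WorldObject(string name, bool held) { Name = name; HeldByPlayer = held; }

    // Returns a line of output, or null if this object stays silent.
    public string? Respond(string command) =>
        command == "inventory" && HeldByPlayer ? $"  a {Name}" : null;
}

public static class Demo
{
    public static void Main()
    {
        var scope = new List<WorldObject>
        {
            new("lamp", held: true),
            new("sword", held: true),
            new("troll", held: false),
        };

        Console.WriteLine("You are carrying:");
        foreach (var obj in scope)
        {
            var line = obj.Respond("inventory");
            if (line != null) Console.WriteLine(line);  // each held object prints its own line
        }
    }
}
```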

I’m going to draw some diagrams to show the differences in execution. I think we’ve talked about three kinds now: REPL, pub/sub (?), and event-driven state machine.

I started designing something like this for Guncho, in the form of “bot realms”: an NPC running in one realm could appear in a different realm by passing actions back and forth, interacting with a virtualized copy of the remote realm’s world model.

A good implementation would either need a lot more introspection/reflection than Inform provides out of the box, or a big curated list of actions and attributes and such. It’s more work than I was motivated to do on my own for something nobody would use, but I think it’d be relatively straightforward.

So I’m working on the graph data store at the moment and trying to optimize properties.

(Note: this is about programming in C#, so TL;DR if you don’t code.)

For a graph, you have noun -> verb -> noun (leaving out the bidirectional use case for the moment).

However, the noun would be something like “item” or “location” so it also needs properties. Because this is a graph we have to link things together.

The problem is that within the story game, I’d like objects to be of a type (like Item, Location, Player). But they also need to be of type Vertex (noun) or Edge (verb).

I could do this by implementing IVertex and forcing all authored objects to implement the vertex required properties. Or all objects have to inherit a VertexBase class.

I’m undecided which way to go.

I was trying to use generics and that works until I need to combine object and vertex properties.

Any thoughts are appreciated.

In addition to this…if objects are C# classes, do I build a complex reflection process to pull property names and values out by their correct types and add them to a hash table? Or do I use a JSON serializer and then some search mechanism to identify JSON properties in a list of objects? Or is there another, better option?
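For what it’s worth, the reflection route need not be complex. A sketch, with hypothetical names, of pulling public properties into a dictionary:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Sketch: pull public property names/values off any object into a
// string dictionary via reflection, for storage as vertex properties.
public static class PropertyBag
{
    public static Dictionary<string, string> FromObject(object obj)
    {
        var bag = new Dictionary<string, string>();
        foreach (PropertyInfo p in obj.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            // Skip indexers; store everything else as a string for the graph.
            if (p.GetIndexParameters().Length == 0)
                bag[p.Name] = p.GetValue(obj)?.ToString() ?? "";
        }
        return bag;
    }
}

public class Item   // hypothetical authored object
{
    public string Name { get; set; } = "lemon";
    public string Color { get; set; } = "yellow";
}

public static class Demo
{
    public static void Main()
    {
        var props = PropertyBag.FromObject(new Item());
        Console.WriteLine($"{props["Name"]} is {props["Color"]}");  // lemon is yellow
    }
}
```

This only handles flat, string-convertible properties; nested objects would need either recursion here or the JSON route.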

I’d say hold off on anything involving the word “optimize” until you have a working proof of concept.

I’m having trouble envisioning how a graph like that is going to represent a game. Do you have a diagram?

One way around that is to use composition. Design the graph data store to be agnostic of your parser/world model, and just give every node a data field that can store whatever IF object you want.

Keep it simple for the proof of concept. Cloak of Darkness has like 4 rooms – you can get by with brute force for now.
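A sketch of that composition approach: the node type below knows nothing about the world model and just carries a payload (all names are hypothetical):

```csharp
using System;
using System.Collections.Generic;

// Sketch: the graph layer is agnostic of the parser/world model;
// each node just carries an arbitrary payload in its Data field.
public class Node<T>
{
    public Guid Id { get; } = Guid.NewGuid();
    public T Data { get; }                                   // the IF object lives here
    public List<(string Label, Node<T> Target)> Edges { get; } = new();
    public Node(T data) => Data = data;
    public void Connect(string label, Node<T> target) => Edges.Add((label, target));
}

public class Room { public string Name = ""; }  // world-model type, unknown to the graph

public static class Demo
{
    public static void Main()
    {
        var foyer = new Node<Room>(new Room { Name = "Foyer" });
        var bar = new Node<Room>(new Room { Name = "Bar" });
        foyer.Connect("south", bar);

        Console.WriteLine($"{foyer.Data.Name} -> {foyer.Edges[0].Label} -> {foyer.Edges[0].Target.Data.Name}");
        // Foyer -> south -> Bar
    }
}
```

This sidesteps the IVertex-vs-VertexBase question entirely: authored objects stay plain classes, and only the node wrapper knows about graph concerns.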

Perhaps consider the graph as a design abstraction rather than as an implementation guide. In 1996, I faced the same challenge, and this is what I came up with:

My “graph” consisted of concepts and connections, but the connections did not have names since it turned out that the relations implied by connections needed also to be concepts. It was thus unlabelled. That being the case, the “graph” collapsed into a set of properties per concept. Eventually, the meaning of the concept itself became defined by these properties and the concept itself just a label.

So, each concept is a set:

lemon  = {(color yellow) ...}
banana = {(color yellow) ...}
apple  = {(color red) ...}

For inverses, eg. “what things are yellow”, it was necessary to make a general rule; (R Y) in X => (Y X) in R.

Then we get:

color = {(red apple), (yellow banana), (yellow lemon) ...}

This quickly yields the results banana & lemon.

It seems there are no collisions from storing the inverse properties in the very same data structure as the forward properties. After all, which are really forward and which really backward? They don’t clash.

A corollary of (R Y) in X => (Y X) in R, must be (X Y) in R => (X R) in Y !

Writing this out, we get:

red    = {(apple color) ...}
yellow = {(banana color), (lemon color) ...}

I used to call these three projections; forward, backward and sideways.

lemon  = {(colour yellow) ...}   "the colour of lemon includes yellow"
colour = {(yellow lemon) ...}    "yellow-coloured things include lemon"
yellow = {(lemon colour) ...}    "lemons are yellow because of colour"

Implementation-wise, each concept was a standard tree/set or map.
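A sketch of how those three projections might be maintained automatically on each assertion (my illustration of the idea, not the original 1996 code):

```csharp
using System;
using System.Collections.Generic;

// Sketch: asserting (lemon color yellow) also records the backward and
// sideways projections, so "what things are yellow?" is a direct lookup.
public class ConceptStore
{
    private readonly Dictionary<string, List<(string A, string B)>> _concepts = new();

    private void Add(string concept, string a, string b)
    {
        if (!_concepts.TryGetValue(concept, out var list))
            _concepts[concept] = list = new List<(string, string)>();
        if (!list.Contains((a, b))) list.Add((a, b));
    }

    public void Assert(string x, string r, string y)
    {
        Add(x, r, y);  // forward:  lemon  = {(color yellow)}
        Add(r, y, x);  // backward: color  = {(yellow lemon)}
        Add(y, x, r);  // sideways: yellow = {(lemon color)}
    }

    // All B where (first B) is in the concept's set, e.g. Query("color", "yellow").
    public IEnumerable<string> Query(string concept, string first)
    {
        if (_concepts.TryGetValue(concept, out var list))
            foreach (var (a, b) in list)
                if (a == first) yield return b;
    }
}

public static class Demo
{
    public static void Main()
    {
        var store = new ConceptStore();
        store.Assert("lemon", "color", "yellow");
        store.Assert("banana", "color", "yellow");
        store.Assert("apple", "color", "red");

        Console.WriteLine(string.Join(" & ", store.Query("color", "yellow")));  // lemon & banana
    }
}
```

Storage triples per assertion, but every projection becomes a single lookup, which matches the collapse of the graph into per-concept property sets described above.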

This is the likely direction. There are two design requirements: make it usable, but also allow for a fluent interface.

The graph interface so far is:

public interface IGraph
{
    List<T> Match<T>(string vertex, Guid? id, Dictionary<string, string> props) where T : IVertex;
    void Save<T>(string label, T data, Dictionary<string, string> props) where T : IVertex, new();
    void Connect<T, U>(T nodeA, IEdge edge, U nodeB);
    void Disconnect<T, U>(T nodeA, IEdge edge, U nodeB);
}

If we follow the concepts idea, we start seeing things like:

  • Movement is an event fired when the player (or an NPC) attempts to move in a compass direction, or issues some other command meant to change locations.
  • Remove Skin is an event fired when the player (or an NPC) attempts to peel a lemon.
  • Some properties are abstract. (A lemon is yellow, which can’t be manipulated; I suppose some chemical mixture could alter the color of a lemon, but in IF it’s an abstraction.)
  • Some properties are real. (A lemon has a skin.)

I’m not sure about reverse and “sideways”. I’ll need to ponder those a bit more.

I’m thinking about continuing refly development on a twitch stream. It will help keep me focused and allow others to chime in.

I’m sort of stuck on a big decision at the moment. There’s a part of IF that allows you to “intervene” in any action with custom code. I’m trying to decide whether I want to enable that, or see if refly can be truly data-centric and declarative. My gut tells me staying purely declarative would close off a lot of reasonable authoring features. So if I do enable code, it would probably be C#. I just don’t want code to be so abstract that it kills the platform with complexity.

I’m just going to have to decide which direction I want to go and build it out that way. If it proves unwieldy, UNDO.

I don’t think it’s possible to do “real” IF without custom interventions – when every behavior is standard, it’s just a single-player MUD. The stories and puzzles in IF come from the places where the author deviates from standard behaviors.

But… who’s to say your custom interventions can’t be data-centric and declarative? A Turing machine is data-centric and declarative; it’s a table of state transitions. In other systems, state transitions are often modeled as workflow graphs. You have graphs!
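A sketch of what such a data-centric intervention could look like: the Cloak-of-Darkness-ish rule below is a pure transition table, with no imperative hook code (the states, actions, and texts are my own paraphrase):

```csharp
using System;
using System.Collections.Generic;

// Sketch: an author "intervention" expressed purely as data.
// The table maps (state, action) -> (new state, text); no custom code runs.
public static class CloakRules
{
    private static readonly Dictionary<(string, string), (string, string)> Rules = new()
    {
        [("dark", "read message")] = ("dark", "It's too dark to read."),
        [("lit", "read message")]  = ("lit", "The message reads: You have won."),
        [("dark", "hang cloak")]   = ("lit", "You hang the cloak; the room brightens."),
    };

    public static List<string> Play(params string[] actions)
    {
        var output = new List<string>();
        var state = "dark";
        foreach (var action in actions)
        {
            var (newState, text) = Rules[(state, action)];  // pure table lookup
            output.Add(text);
            state = newState;
        }
        return output;
    }
}

public static class Demo
{
    public static void Main()
    {
        foreach (var line in CloakRules.Play("read message", "hang cloak", "read message"))
            Console.WriteLine(line);
        // It's too dark to read.
        // You hang the cloak; the room brightens.
        // The message reads: You have won.
    }
}
```

The author “writes” the intervention by adding rows, not code, and since the table is data it could live in the graph alongside everything else.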

OTOH, if you do decide to add imperative scripting, you have other options that might fit better than C#: IronPython or F# would be nice and lightweight.