Techniques for visualizing logic flow and variables?

So, I’m working on a procedural generation investigative text adventure, and I’m planning on using Inform 7 and tables for this.

I initially thought I’d have to use Tracery or something similar but from what I saw here:

It should be possible to have a wide variety of information generated without needing to use anything outside of Inform 7!

Ryan was kind enough to supply the source code for the game described in the blog post; the snippets alone left out just enough that I couldn’t have worked out all the steps on my own.

Here’s the real question, though: when creating a bunch of variables and mechanics that work on those variables, are there any good guides for visualizing that outside of just writing the code?


You definitely want to create a master list or cheat-sheet of all the variables in your game to keep them straight, so you don’t accidentally start writing $hornLength everywhere when the variable started out as $horn-length. I also keep a list of numeric variables and what each level means:

Laura-relation (relationship progress with Laura)
0 - hasn’t met
1 - met briefly in intro scene
2 - friends after mall scene
3 - asked on a date (+1 to PCconfidence)
4 - went on a date, shenanigans
5 - Laura is an obsessed stalker

You can scribble this in a notebook - I like to use permanent composition books or even little spiral notepads. I also find a text note or document synced between my phone and computer can be really useful to bring up anywhere if I think of something interesting and want to remember it.

If it’s really complicated you might want to use a spreadsheet, but that can be overkill unless you like spreadsheets.
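The same cheat-sheet can also live in the code itself as a lookup table, so the meanings stay right next to the numbers. A quick sketch (in Python just for illustration; the names come from the Laura example above):

```python
# Relationship-progress cheat-sheet as a lookup table, so the code can
# explain its own magic numbers (from the Laura example above).
LAURA_RELATION = {
    0: "hasn't met",
    1: "met briefly in intro scene",
    2: "friends after mall scene",
    3: "asked on a date (+1 to PCconfidence)",
    4: "went on a date, shenanigans",
    5: "Laura is an obsessed stalker",
}

def describe_relation(level):
    """Turn a numeric relationship level back into its meaning."""
    return LAURA_RELATION[level]
```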


I don’t know how far into the weeds on theory you want to get, but under the hood procgen stuff is generally either

  • a grammar
  • a graph
  • something that should be re-framed to be one of the above

…where the first two are generally isomorphisms of each other and they both have a number of standard idioms for visualization.


Do you have recommended reading? Because I’m interested.


I agree with @BitterlyIndifferent !

Would love some recommended reading or at least subject guidelines for research!


There’s a book on Procedural Storytelling in Game Design, but it reads more like a collection of essays.

Some of the authors make interesting points, but it was tough to identify consistent best practices.

Hmm. I don’t know of good introductions to grammar or graph theory, certainly not ones that feel like they’d be useful for narrative design. You can use Graphviz to visualize graphs, and there are things like Chris Ainsley’s Puzzlon for visualizing puzzle dependency charts, but visualizations tend to get very unwieldy very fast, especially in fancier procedurally-generated situations. For analysis there are things like Zarf’s PlotEx, maybe? Emily Short has several related blog posts, of course: the puzzle chart from Counterfeit Monkey links to the associated post, and Pacing Storylet Structures links to the previous four posts in that series (though I think those visualizations were made by hand in… OmniGraffle? Mac only). Annals of the Parrigues has a substantial appendix (like 30% of the document?) describing how the game was made and the aesthetics of procedural text generation, and links to a corresponding video of a talk.

I tend to think once things get very complicated you want to go more statistical, with things like ChoiceScript’s randomtest: run the game thousands of times making random choices and compile the results, maybe into some kind of heat map, to find out whether everything is reachable, how the balance is, and whether certain pathways are very hard to find. Also maybe Jasmine Otto’s DendryScope and her associated PhD thesis and thesis-defense slides as sort of an intro. My brain works differently than hers and I find her work difficult to follow through the unfamiliar jargon… but it’s cool stuff. IIRC either Aaron Reed or Jacob Garbe may have written a little about working on The Ice-Bound Concordance? And Max Kreminski certainly has a couple of papers.

Not visualization, but I also suspect some things are more plot problems than technical ones. I feel like often the possibility spaces are so big that the tricky part is finding a way to measure what actually matters to you and to the player, and then it’s not actually too hard to visualize…


If you just want a general discussion of procgen in game dev in general, there’s the free e-book Procedural Content Generation in Games. It might be a little heavy going if you’re completely new to the subject, though.

Most of the basic concepts (formal grammars, graph theory) are basically engraved in my skull at this point so I’m not sure how intuitive the concepts are to everyone else. But the basic point I’m making is that most procgen stuff can be described via an EBNF-ish grammar.

To illustrate, first let’s consider a trivial two-word parser. Diving right in:

  • VERB: [ TAKE | DROP | LOOK ]
  • TAKE: [ ‘take’ | ‘get’ ]
  • DROP: ‘drop’
  • LOOK: [ ‘look’ | ‘examine’ | ‘x’ ]
  • NOUN: …

Here I’m using ALL CAPS to denote tokens and ‘single quoted strings’ to indicate literals, square brackets to indicate a set of alternatives, and a pipe (“|”) to delimit alternatives. The idea is you start out with a top-level production (“production” or “prod” is a term of art for this kind of substitution rule) and substitute tokens for other tokens until you have nothing but literals (or you can’t, in which case you have an error).
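To make that substitution process concrete, here’s a quick sketch in Python (not I7) that randomly derives a command from roughly the grammar above; the NOUN alternatives are invented, since the original elides them:

```python
import random

# Each token maps to a list of alternatives; each alternative is a list
# of tokens and/or literals. Anything not in the table is a literal.
GRAMMAR = {
    "COMMAND": [["VERB", "NOUN"]],
    "VERB": [["TAKE"], ["DROP"], ["LOOK"]],
    "TAKE": [["take"], ["get"]],
    "DROP": [["drop"]],
    "LOOK": [["look"], ["examine"], ["x"]],
    "NOUN": [["lamp"], ["sword"]],  # invented for the example
}

def derive(token, rng=random):
    """Substitute tokens for tokens until only literals remain."""
    if token not in GRAMMAR:
        return [token]
    out = []
    for part in rng.choice(GRAMMAR[token]):
        out.extend(derive(part, rng))
    return out

print(" ".join(derive("COMMAND")))  # e.g. "get lamp" or "x sword"
```

Run it the other way (checking whether an input string can be produced by the grammar) and you have a recognizer instead of a generator, which is the parser case.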

Okay, hopefully this example makes sense. Assuming it does, then the point is that you can describe more or less anything you want to procedurally generate the same way: a house, a dungeon, a dinosaur, whatever.

And the point after that is that a grammar (in this sense) can be represented by/is an isomorphism of a graph.

In the early days of roguelike procgen, a dungeon level was often generated by randomly picking some rooms, randomly placing them, and then randomly digging hallways. Then you’d have to check that the level was “solvable”: if you had an upstair (connection to the level above) and a downstair (connection to the level below), you had to make sure there was a path between the two. If there wasn’t, you would generally just re-roll the level.

Instead of doing that, say we start with a conceptual graph: two vertices (one labelled “up” and one labelled “down”) and an edge connecting them. We can then convert this into a “real” dungeon map by saying the vertices are rooms and the edges are hallways. In our graph so far, we know the level is solvable because we know the upstair and downstair have a path between them.

But now say we replace the single edge with a vertex, call it “plot room”, adding edges from “up” to “plot room” and from “plot room” to “down”. We now know that “up” and “down” are still connected, and we know that the path goes through “plot room”. We can then do the same thing to each of the (two) edges in the new graph. And we can keep doing this, recursively, until we have an arbitrarily complicated map. And as long as we follow some simple rules (like always replacing edges with subgraphs that are themselves traversable), we never have to check whether the level is “solvable”, because the graph is always in a consistent, solvable state. We can even use this to scatter keys and clues and so on and be sure that a key won’t end up behind the door it unlocks (although we do end up having to do a little additional bookkeeping that I’m not covering here).
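If it helps to see the mechanics, here’s a minimal Python sketch of that edge-replacement process (the room names and the connectivity check are my own illustration, not from any particular roguelike):

```python
import random

def generate_level(n_extra_rooms, rng=random):
    """Start with up/down joined by one hallway; repeatedly splice a new
    room into a random hallway. Every step keeps the graph connected."""
    rooms = ["up", "down"]
    edges = [("up", "down")]
    for i in range(n_extra_rooms):
        a, b = edges.pop(rng.randrange(len(edges)))  # pick an edge
        room = f"room{i}"                            # the "plot room"
        rooms.append(room)
        edges.append((a, room))
        edges.append((room, b))
    return rooms, edges

def reachable(edges, start, goal):
    """Sanity check: flood-fill from start and see if goal is reached.
    With the generator above this is always true, by construction."""
    seen = {start}
    changed = True
    while changed:
        changed = False
        for a, b in edges:
            if a in seen and b not in seen:
                seen.add(b); changed = True
            elif b in seen and a not in seen:
                seen.add(a); changed = True
    return goal in seen
```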

And the thing to recognize here is that what I described above with graphs and what I described above with a grammar are fundamentally the same process.

I hope that all makes sense. I’m certainly willing to dig into the subject deeper—I’m actually doing a lot of lightweight procgen stuff in my WIP, although it’s in TADS3 and not I7.


Absolutely fascinating so far!

I’ll admit I’m having a bit of a hard time following that section on the roguelike dungeon grammar, maybe because I’m reading terms like ‘up,’ ‘down,’ ‘plot room,’ etc., a bit too visually.

Going from that grammar example you showed, would it start to look something like this?


Also, are there any examples that might show both a graph and a grammar of the same thing, so as to see how they work together?

Probably like this, where you have a rule that follows the arrow, taking the arrangement on the left (dots are rooms, lines are paths between), and transforms it into the arrangement on the right.


You might also look up L-systems. The Wikipedia page isn’t the greatest introduction, but it has some examples, so you can probably see how they work if you stare at it for a bit. I like to visualize them as “take this line segment, replace it with (or add on) this shape.”
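For a taste of how simple they are under the hood, here’s Lindenmayer’s classic “algae” L-system as a few lines of Python (just the string rewriting; the turtle-graphics drawing layer is omitted):

```python
# An L-system is parallel string rewriting: every generation, each
# character is replaced by its rule (or kept if it has no rule).
def lsystem(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae system: A -> AB, B -> A.
print(lsystem("A", {"A": "AB", "B": "A"}, 4))  # "ABAABABA"
```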

But that’s talking about having grammars that build you a graph or map, which is different from… I think the connection jbg was talking about earlier is that grammars usually correspond to a finite state machine that will recognize or generate the language corresponding to the grammar.

Wait, does that make sense? I’m not sure where you’re at with the terminology. So… take the second example in the Tracery tutorial at Crystal Code Palace: the grammar is the set of rules with The #color# #animal# of the #natureNoun# is called #name# and rules for what color and animal and so on can be. The set of possible outputs is called the language associated with that grammar. But you can also use the grammar the other way, as in programming language parsers, and ask: does this sentence “match the grammar” or equivalently is this sentence “in the language” represented by the grammar?

And you can represent the machine to recognize or generate a sentence in the language as a graph, where each node is a state and the connecting edges represent matching or generating a letter or word or whatever. Regular expressions are often represented this way, so for instance scroll down a little in this article.
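As a toy version of that: here’s a little state machine as a graph in Python, recognizing the (arbitrary, made-up) language of the regex ab*c; states are nodes and each transition is an edge labelled with the character it consumes:

```python
# A tiny recognizer as a graph: (state, character) -> next state.
# This machine accepts the language of the regex ab*c.
DFA = {
    ("start", "a"): "middle",
    ("middle", "b"): "middle",
    ("middle", "c"): "accept",
}

def matches(word):
    state = "start"
    for ch in word:
        state = DFA.get((state, ch))  # no edge means reject
        if state is None:
            return False
    return state == "accept"

print(matches("abbc"), matches("cab"))  # True False
```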

Hopefully that helps a little…


Or more generally that our approach to both classes of problem is just a bunch of recursive substitutions. This is assuming that when we’re talking about procgen stuff we’re only interested in computable (partial) functions.

If anyone is having trouble getting a feel for what this means, it’s more or less the same thing you were taught in high school algebra: if you start out with a big ugly algebraic expression and apply a bunch of substitutions and simplifications and eventually end up with x = 4, then (assuming you made no mistakes), that means that the big ugly expression was just an elaborate way of writing x = 4.

With procgen stuff, you’re basically doing the reverse. You’re starting out with something like x = 4, and you’re recursively inverting algebraic simplifications to produce a big ugly expression.

Anyway, I don’t know of a reference that walks through the specific equivalence we’re talking about here, much less one that would work as an introductory text. E.g., the Dragon Book, one of the seminal references on compiler design, uses graphs to illustrate state transitions in lexical analysis, but a) not in a way intended to specifically illustrate the equivalence of the grammatical and graph representations, and b) the Dragon Book is a pretty crunchy read even if you’re really invested in learning compiler design, and is waaaaaay overkill as a general introduction to formal grammars.

I’m sure I’ve seen the subject discussed in e.g. gamedev presentations and so on, but I can’t think of a specific example at the moment.

In terms of the “dungeon” example:

In the terms I was using, the productions correspond to graph vertices, which I was imagining as rooms. The hallways are the edges, which in the simple grammar are implicit.

That’s not necessarily true (that is, I’m not trying to advocate for this specific kind of system); I was just trying to sketch out a specific use case. Older roguelike procgen map generators frequently used trial and error to dig maps. The point I was making is that you can avoid this by always having a map that’s in a consistent, “solvable” state. So instead of randomly placing rooms, randomly digging hallways, and hoping everything ends up connected, you can use a graph where the rooms are vertices and the hallways are the edges: if you start out with a graph where you know you can reach each vertex from every other vertex (because there are only two, joined by a single edge), and you expand the graph only by replacing edges with subgraphs that are themselves traversable, then you know the final graph will be “solvable” by the player.

In this case I’d start out with something like (I hope this is accessible):

     upstair - downstair

…and then say one of your building blocks (“known good” units of stuff you can use in your substitutions) is a pair of rooms connected to each other. We’ll call each set of rooms “foo” and “bar” and we’ll number them to keep track of the order we added them in. So after one substitution we’d have…

     upstair - foo0 - bar0 - downstair

…and after another we might have…

     upstair - foo0 - foo1 - bar1 - bar0 - downstair

…or, alternately…

     upstair - foo0 - bar0 - foo1 - bar1 - downstair


     upstair - foo1 - bar1 - foo0 - bar0 - downstair

In our simple grammar that would be something like:

  • LEVEL: ‘upstair’ FOOBAR ‘downstair’
  • FOOBAR: [ FOOBAR ‘foo’ FOOBAR ‘bar’ FOOBAR | '' ]

…making no provision for bookkeeping like our numbering system.

That is, our level always consists of an upstair and downstair connected by some stuff, where the stuff is recursively defined.
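Expanding that grammar mechanically looks something like this (a Python sketch; the depth cap is my own bookkeeping to keep the recursion finite, which the grammar itself doesn’t express):

```python
import random

def foobar(depth, rng=random):
    """Expand FOOBAR: [ FOOBAR 'foo' FOOBAR 'bar' FOOBAR | '' ]."""
    if depth == 0 or rng.random() < 0.5:  # the empty alternative
        return []
    return (foobar(depth - 1, rng) + ["foo"] +
            foobar(depth - 1, rng) + ["bar"] +
            foobar(depth - 1, rng))

def level(rng=random):
    """Expand LEVEL: 'upstair' FOOBAR 'downstair'."""
    return ["upstair"] + foobar(3, rng) + ["downstair"]

print(" - ".join(level()))  # e.g. upstair - foo - bar - downstair
```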

In this case all of the hallways are implicit. We could make them explicit via something like replacing the empty string in the FOOBAR definition with ‘hallway’, or we could (if we didn’t want to build simple linear “dungeons”) use some more elaborate syntax to accommodate rooms with more than two connections.

But if we wanted to do that, we’d probably be better off just using a graph, because that kind of information is generally easier to handle that way.

The opposite is generally true when you don’t care about the relationship between the individual parts, or the relationships are implicitly obvious/handled by something else. E.g., in the original example of creating procgen dinosaurs (in IF), you probably don’t care that the head is connected to the body and the body is connected to the legs but the head isn’t connected directly to the legs. So a grammar that doesn’t attempt to enunciate the relationship between the parts is probably all you need.


I also just remembered Mark Gritter’s 10-minute lightning talk at Roguelike Celebration 2020 about “emojiconomy,” his experiment in using graph grammars to make interesting (?) trading economies for games.

Mark Gritter - Procedurally Generating Economies with Graph Grammars (and Math)


I think I’m starting to get it…

In your ‘FOOBAR’ example, one might have a routine or loop that would build as many ‘foo,’ ‘bar,’ etc., as necessary until a certain condition was met?

To get specific:

Right now, my design involves generating a bunch of details about a potential ‘new student’ to an elementary school, particularly the contents of their locker.

One variable would involve their wealth level, which would then mean that details involving the description of the quality of their stuff would be different. From ‘pristine’ to ‘battered’ for example.

So the wealth variable, generated randomly, then affects another variable, which then chooses random descriptions from a table tuned to that element.

(The whole point of the game is to determine the nerd/jock ideology of the student, and most of these items will have some level of nerd/jock score. The player will assess all the details to ‘make a case’ for one or the other. I want to use proc-gen to build a LOT of details so it’s not as simple as seeing one item that solves the case.)

So would a grammar for this student begin something like:

  • WEALTH: [ ‘rich’ | ‘middle-class’ | ‘blue-collar’ | ‘destitute’ ]
  • NERDJOCK: [(spectrum between -10 and 10, with no zeros)]
  • LOCKERCONTENTS: [draw 20 things from tables with exclusions based on WEALTH and NERDJOCK]

This is really boiling down the logic I’m wanting to play with, but I’m trying to see if I’m getting what you’re talking about!
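In code terms, I picture the pipeline something like this (a Python sketch with made-up tables, just to check my understanding):

```python
import random

# All tables and numbers here are placeholders for the real datasets.
WEALTH_LEVELS = ["rich", "middle-class", "blue-collar", "destitute"]
CONDITION = {  # wealth tunes which quality words get drawn
    "rich": ["pristine", "brand-new"],
    "middle-class": ["well-kept", "slightly worn"],
    "blue-collar": ["worn", "patched"],
    "destitute": ["battered", "falling apart"],
}
ITEMS = ["comic book", "baseball", "calculator", "gym shorts", "library card"]

def generate_student(rng=random):
    wealth = rng.choice(WEALTH_LEVELS)
    nerdjock = rng.choice([n for n in range(-10, 11) if n != 0])
    locker = [f"{rng.choice(CONDITION[wealth])} {rng.choice(ITEMS)}"
              for _ in range(20)]
    return {"wealth": wealth, "nerdjock": nerdjock, "locker": locker}
```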

Right. In principle this could be enunciated in the grammar spec, but I was trying to supply an example with as few moving parts as possible. If you want to get more into the formal nuts and bolts, things like the graph grammars mentioned by @JoshGrams are probably what you want. That’s if you want to be able to prove/verify specific results involving whatever you’re generating (i.e. if you wanted to only generate dungeons that contained certain kinds of paths, or things like that).

The stuff I’ve been talking about is more aimed at just addressing the original question. If you’re doing procgen stuff and you’re looking for something to help you wrap your mind around the problem, graphs and grammars are two fairly intuitive “idioms” for working with most things we generally end up using procgen for in game dev.

It turns out that graphs and grammars tend to work for these sorts of things for very deep, fundamental reasons…and that means that we can often leverage the idioms “formally”, but the point I want to make is that you don’t need to go all the way down the theory rabbit hole if you’re just looking for something to help you visualize/internalize a procgen model you’re working on.

When I’m doing this sort of thing I sometimes go all in on implementing a grammar/compiler/whatever for handling procgen stuff, but usually I end up with a sort of hybrid system, where a lot of the crunchy “big” structural stuff is “explicitly” a graph or grammar, but all the fiddly one-off details are just implementation warts handled in the code for individual game objects.

In terms of your locker contents example, it looks like in your head you’re working more with a character-sheet kind of idiom. Which is cool, and if you’re actually interested in numerically parameterizing objects (like if you’re doing skill checks against numeric scores like in an RPG) that might be an idiom worth leaning into.

But if you’re looking at this as a “pure” procgen problem, I wouldn’t worry about generating numeric scores like that; the numeric scores, if you need them, should be computed from the generated object, not the other way around. So STUDENT (or I guess LOCKER) is a container for a bunch of stuff, you add stuff to it per some selection criteria.
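That is, score-from-stuff rather than stuff-from-score. A sketch of that direction (the item weights are invented):

```python
# Derive the nerd/jock score from whatever was generated, instead of
# generating the number first. Weights are invented for illustration.
NERDJOCK_WEIGHT = {
    "comic book": -3, "calculator": -2, "library card": -1,
    "gym shorts": 2, "baseball": 3,
}

def nerdjock_score(locker_items):
    """Sum the leanings of the generated contents; sign gives faction."""
    return sum(NERDJOCK_WEIGHT.get(item, 0) for item in locker_items)

print(nerdjock_score(["comic book", "baseball", "baseball"]))  # 3
```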

If you need/want to generate an object with specific properties—I don’t know if that’s just a numeric value or if there’s more elaborate logic involved (like one of those “Alice is taller than Bob. The shortest person is six inches shorter than Carol…” grid logic puzzles)—then you don’t really want to be building a model of the things, you want to be building a model of the puzzles. If that makes sense.

My interest in procgen is two-fold:

I want there to be an abundance of detail such that there is an emergent, implicit element of narrative that the player can construct. This wouldn’t be connected to any of the scoring mechanics, but I do want them to be influenced by the STUDENT biographical details. (If they play baseball, there’s a possibility of a catcher’s mitt, for example.)

Among those details are going to be elements about determining the Nerd / Jock placement of the student, and the gameplay involves collecting those details and assessing them to try to guess which one the student is.

So it’s not the same as building a roguelike dungeon, in that it’s trying to generate a specific, solvable puzzle governed by clear logic. (The whole aesthetic point is for the player to experience the tension of trying to judge someone with only partial information.)

So it’s also not quite a character-sheet idiom at least in terms of judging skill checks and such… Although we’re talking about a person, in terms of gameplay maybe the idiom is more like a topographical map? Some details are going to be higher or lower in altitude (biographical details) and some details are going to be closer or further apart (nerd/jock spectrum), following some general rules, with enough variety that it will create emergent, plausible connections that also won’t be the same every time.

It actually sounds more like you’re talking about three things:

  • Greeble (adding little details to “sell” the setting)
  • Emergent gameplay
  • Specific solvable puzzles

Which are all cool, but they tend to be distinct things. That is, if you want to be able to generate solvable puzzles of some particular format, the things you probably want to be modelling are the moving parts of the puzzle and the constraints that relate them.

That’s generally different from “just” generating bits and pieces of stuff and letting them interact in novel and unpredictable ways, or from generating details to give the environment a more “lived in” feel or whatever.

For example, if you’re writing a mystery then you probably want to start out with whatever the mystery is and the clues from which it can be deduced. If you add red herrings, you probably want that to be part of this process, not a separate process in which you’re just adding random details to the environment. Because you don’t want to create a situation where the random details just happen to line up in a way that points to an incorrect “solution” to the mystery.

If you look at the trivial dungeon generation “algorithm” I sketched out earlier, that’s implicitly built around a specific puzzle and solution: the player is in a dungeon, and wants to get from point A to point B. So you start out with just point A and point B and connect them. That’s the puzzle and its solution in a nutshell. And then bits are added in a way that can complicate the solution, but never break it.

For generating more complex sorts of puzzles, you generally want to do something similar: work out the actual steps the player will have to take in order to solve the kind of puzzle you’re implementing…as in, work out an algorithm or perhaps even jot down a sample transcript or that kind of thing…and then embellish the puzzle by adding additional necessary steps to that process.

I don’t know what your design looks like in terms of mechanics, but let’s think about a trivial mystery game where the mystery is who took a cookie from the cookie jar.

A very simple design would be to generate a single NPC with a cookie in their inventory and then an arbitrary number of NPCs without cookies in their inventory. Solution of the puzzle is just examining NPCs until the player finds the one with the cookie.

A more complex version would be to declare that the cookie jar was locked behind glass in a museum with various sensors and tripwires to thwart theft. Then you might have a spec for several random kinds of thieves’ tools, each consisting of many different parts. You generate a random mystery by generating one NPC with a complete set of one type of thieves’ tools, and then generating all the other NPCs with incomplete kits. A further embellishment would be to distribute the various NPCs’ possessions throughout their homes, places of work, and so on. Or to generate details about a specific theft implying a specific set of thieves’ tools, where random NPCs can be generated with complete sets of other tools, but not the tools required for the specific crime being investigated. And so on.
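A sketch of that culprit-gets-the-complete-kit idea (the tool names are invented):

```python
import random

TOOLS = ["glass cutter", "suction cup", "wire snips", "grappling hook"]

def generate_mystery(npc_names, rng=random):
    """One NPC gets the full kit; everyone else gets a strict subset."""
    culprit = rng.choice(npc_names)
    inventories = {}
    for name in npc_names:
        if name == culprit:
            inventories[name] = list(TOOLS)
        else:
            # 0..len(TOOLS)-1 items, so never a complete kit
            inventories[name] = rng.sample(TOOLS, rng.randrange(len(TOOLS)))
    return culprit, inventories
```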

This is different from what’s usually called emergent gameplay. In your example if you were going for emergent gameplay you might implement a game where there’s no notion of factions in and of themselves, there are just NPCs that tend to cluster together with other NPCs with similar interests and the factions just kinda happen without ever being explicitly designed.

This is fascinating! Thank you for engaging with me on this.

On the point of ‘solvable,’ this might require a bit more explication.

The aesthetic intent is not to generate a solvable mystery that then has a variety of ‘noise’ in terms of clues regarding that mystery. (There’s a game called Shadows of Doubt that I think does this pretty well already!)

I was thinking of something like the McCarthy trials (or the Salem Witch trials) where cases are being made against people on the basis of circumstantial evidence, but often an abundance of it. ‘This person had a ticket stub from a lecture, and one of the speakers at that event was a known Communist sympathizer,’ as an example of how those investigations appeared.

I chose a schoolyard and nerd/jock as a less politically charged (and less research heavy) framework for the prototype of the mechanic I’m envisioning.

So my hope for procgen is to generate all of the details in question about a new student, which the player will have to sift through, choosing certain items as ‘evidence’ of the nerd/jock affiliation, which are then tested at the endgame.

The intent is for there to be too much detail for the player to simply collect and rate it all, and for the details to never be as simple as a ‘smoking gun’ clue of (for example) a Lego X-Wing or a catcher’s mitt signed by a pro ball player.

Or to really boil it down: I want to make a procgen student full of details that all have some proximity to either Nerd or Jock, as well as details simply related to their biography. The player is examining their locker, and can choose 10 details (out of, say, 100) to make the case for either Nerd or Jock. The details are judged and the truth revealed. (Maybe by the teacher or something.)

Each time gameplay starts would create a different student, so these are not hand-tailored clues. (Though the datasets would be tailored.)

The emergent elements are (I hope) a certain level of player engagement with the details that help them build the ‘rest’ of the student in their minds, as well as finding interesting/entertaining combinations of the details.

(Also playing with the aesthetic or idea of ‘building a case’ as the main gameplay mechanic, like a lawyer preparing for trial. Those scenes where they’re in someone’s living room, boxes open, flipping through paper, trying to find a solid detail.)

I’m already thinking that I’ll have to make sure the details never point to the ‘middle,’ otherwise the details would be too vague.

With all of that explained, how does my topographical map hold up as an idiom? It is both Greeble and Emergent, as each detail generated would have both a ‘lived in’ related quality and a nerd/jock spectrum quality.

So is the intent for the “puzzle” to be unsolvable, or for it to be impossible for the player to actually correctly deduce a conclusion from the available evidence?

Because if the intent is to force the player to make a guess explicitly when they can’t have worked out the correct solution, then if you want to foreground this fact you can do a sort of Schrödinger’s procgen thing: instead of generating the “solution” at the start of the puzzle, only pick one specific option for any variable when the player does something that forces the selection. For the “reveal” you can then force the value of all the un-observed variables to be the opposite of whatever solution the player offered. That’s if you want to underline the fact that the player is making a judgement on bad data.
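A minimal sketch of that deferred-selection idea (the class and method names are mine, not any particular library’s):

```python
import random

class LazyVar:
    """A value that stays undetermined until something observes it."""
    def __init__(self, options):
        self.options = options
        self.value = None  # not yet decided

    def observe(self, rng=random):
        """First in-game observation pins the value down."""
        if self.value is None:
            self.value = rng.choice(self.options)
        return self.value

    def force(self, value):
        """At the reveal, still-unobserved values can be set to anything;
        already-observed values stay as they were."""
        if self.value is None:
            self.value = value
        return self.value

faction = LazyVar(["nerd", "jock"])
# The player never observed `faction` and guessed "nerd", so...
print(faction.force("jock"))  # the reveal contradicts the guess: "jock"
```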

If you’re not doing something like this—that is, you’re not making a game where thwarting the player’s attempts at solving the puzzle is “the point”, to make a thematic point about judging people or whatever—then you still probably want to do the sort of spec-driven procgen that I mentioned before. Because if you don’t, then you can potentially end up generating a set of clues that the player could reasonably make a specific inference from…that isn’t the “solution” you’re after. If that makes sense. That is, unless intentionally confusing the player is part of the game design (and sometimes it is), then you probably want to be very careful about generating random clues that might or might not have that effect. In other words, you want a generation scheme that will only confuse players in specific ways that are part of the game design, not in random other ways that will just frustrate them.

If neither of these apply and you want to do a “pure” random sort of thing (just rolling random students), then I don’t know enough about the rest of the design to say whether you’d prefer a top-down or bottom-up generation mechanism. That is, whether it makes more sense to decide faction membership and generate clues based on faction, or instead to generate clues first and let them determine faction membership. The first probably makes more sense if you’re trying to build an NPC population that’s stable over multiple iterations of the game loop (that is, the investigation cycle will be repeated by the player against a static population of “suspect” NPCs, and you want to ensure a particular distribution of properties across that population), and the second probably makes more sense if you’re more concerned with the distribution of “stuff” (clues, red herrings, and so on) in the environment.

Again, thank you for engaging with me on this! These questions are really honing both the design intent and how I’m going to need the variables to interact with each other.

There’s a few different aesthetic intents coming into play here, and possibly some assumed elements because the nature of the game is being posed as an investigation/mystery.

In terms of intent, as a designer, I want the player to feel as though they have to make a conclusion based on incomplete information. But I also want that information to imply a bigger picture. So the faction element is (in this concept) part of that bigger picture that is being implied, and also what the player needs to make a conclusion about.

To me, it’s important that the player feel that this incomplete-but-existing picture is fully present as they are investigating. (Which is to say, not changing based on what they choose to investigate.) I want to generate the student and clues / details first so that they are all present ahead of time. (Part of this is also to make sure the details can have some level of relationship that might work better generated all-at-once instead of along the steps of an investigation.)

It’s not about making the puzzle unsolvable, or obscuring the solution. It’s about generating a big picture (perhaps that’s the ‘solution’) and only giving them partial elements of it. With the idea that the generated student and their faction are the ‘big picture.’

Regarding assumed elements: it’s been tricky explaining this concept because the idea of investigation or mystery has often been represented in gameplay as: find clues x,y,z, which then solve the mystery. Or some variation thereof, but generally centering around a puzzle solve that unlocks the next part.

The win conditions here are a binary: pass/fail, but the method of getting there isn’t solving the puzzle, it’s about choosing the details for a faction and then finding out if your collection of details and conclusion about them were correct. The ‘puzzle’ element is more about looking at each of these details and making the decision to use it as part of ‘building your case.’

This will probably be confusing, but not in terms of red herrings, if that makes sense. Again, going with the faction-judging element, part of the dramatic tension will be about judging someone based on circumstantial evidence. Its lack of clear conclusions is meant to be part of that tension.

So my vision of procgen is to give the player a wide amount of details they have to sift through, deciding what’s relevant. And for it to not be just one iteration of that experience… for it to be replayable because it will generate new students each time.

There is an emergent quality to this as well because I hope the player will have some ‘greeble’ moments where they find certain details endearing or interesting in ways that make it come alive!

I think you’re confusing what you know as a developer with what the player knows (or believes) when playing the game. There isn’t any in-game measurable difference between rolling a five on the loot table to determine that there’s a copy of The Origin of Consciousness in the Breakdown of the Bicameral Mind in a locker during preinit versus when the player opens the locker. Unless there’s some side channel by which the player can obtain that information without opening the locker (in which case you’d roll the dice then instead).

Not trying to talk you into anything, just making the point that whatever relationships exist between the various bits of procgen stuff are entirely baked into the design of the procgen algorithm, and there are frequently very good reasons to defer generation until the “stuff” is accessed. I’m using a bunch of instanced seeded PRNGs with procgen stuff entirely to manage game/heap/savegame size, for example.
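For what that looks like in miniature (Python rather than TADS3, and the scheme rather than anyone’s actual code):

```python
import random

WORLD_SEED = 12345  # arbitrary; fixed per playthrough

def locker_contents(locker_id, table):
    """Regenerate a locker's contents on demand from its own seeded PRNG
    instead of storing them: same seed, same contents, every time."""
    rng = random.Random(f"{WORLD_SEED}:{locker_id}")  # str seeds are stable
    return [rng.choice(table) for _ in range(3)]

table = ["book", "mitt", "ruler", "pennant"]
print(locker_contents(7, table) == locker_contents(7, table))  # True
```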

Right, but if you’re generating these things via procgen (instead of using prebaked, hand-generated puzzles) then you’ve got to have some notion of what the actual mechanics of solving the puzzle look like. That is, what actual in-game steps the player is going to do in order to solve the puzzle.

That’s the thing you’re procedurally generating. The individual bits and pieces of the puzzle (like, in this case, the contents of lockers) are part of it, but they’re not the whole thing. Because you’re not just generating lockers that indicate this or that faction, you’re generating puzzle pieces that happen to be lockers that indicate this or that faction, and how that relates to the overall puzzle is literally the reason they exist in the first place. If that makes sense.

That’s all assuming that solving the puzzle is something that the game is actually tracking, as opposed to a “pure” emergent thing, where it’s basically just a side hustle that the player can take upon themselves to figure out, but the game isn’t actively cluing/tracking/rewarding/whatever.