Specific procedural generation tech?

(This is long, but it seemed like the best way to frame the question!)

Framing question

I’m trying to figure out how to incorporate procedural generation into my text game. I’m assuming Inform 7 for now, but this can change!

I need to figure out how to handle both the procedural generation itself and the player-driven ‘scoring’ of the generated information.

I had initially assumed Tracery or something similar, but I’m still pretty green and haven’t found a clear way to incorporate that into an Inform 7 game. I have read Emily Short’s article on the subject; though useful, it doesn’t engage at the gameplay level I’m looking at here.
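
For what it’s worth, I do know Inform 7 can fake a little Tracery-style expansion natively with its built-in text alternatives; a throwaway sketch, where the room and the locker are just placeholder names:

```
The Playground is a room. "A cracked stretch of asphalt behind the school."

The mystery locker is a container in the Playground.
The description of the mystery locker is "Inside there is a [one of]battered[or]brand-new[or]hand-me-down[at random] [one of]basketball[or]graphing calculator[or]stack of comic books[at random]."
```

But that only varies the surface text at print time; it doesn’t give me stored data I can score later, which is what the rest of this post is about.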

(In this post I explore the general idea of procgen and mystery in conversation.)

What are good text-adventure tools for generating a large amount of data that changes with every playthrough?

What follows is a sketch of the design, which may answer some of this, or at least help structure the answers!

Gameplay outline

I’m trying to make an ‘investigation’ game, which I see as somewhat distinct from a ‘mystery’ game. Mystery games are usually puzzles, where finding the right clues or details solves the puzzle.

This concept is more about the engagement with and assessment of a large amount of information, using it to build a ‘case’ around a particular decision. Like a lawyer or journalist assembling their files.

I had initially thought this could be a mechanic to examine something like the McCarthy trials, or the Salem Witch trials, where the information is rarely conclusive, and so assessment is key.

To simplify the design prototype and to avoid inherent politics confusing the design conversation, this prototype will involve a kid on the playground trying to figure out whether the new arrival is a ‘jock’ or a ‘nerd’ by examining the contents of their locker.

To generate the necessary amount of detail, I plan to use procedural generation. The intended gameplay loop is for the player to encounter a large amount of information, explore it, and decide which elements to keep.

The game will communicate that it’s not about using all the data (selection and assessment are key), and that the data is generated, so there is no single correct answer.

Generation of Data

This phase generates the following (a rough Inform 7 sketch of the data follows the list):

  • individual in question
    • biographical info as well as ideological
    • including ‘ideological affiliation’
  • variety of life details (based on what can be found in investigation phases)
    • with at least four ‘spectrum value’ elements each
      • ‘jock’ value
      • ‘nerd’ value
      • importance to individual (favourite item, indifferent item)
      • general description (colour, age, etc)
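
To make that concrete, here is roughly how I imagine the generated data looking, continuing the toy Inform 7 example above (every name, number range and table entry is a placeholder guess, not a settled design):

```
A locker item is a kind of thing.
A locker item has a number called jock value.
A locker item has a number called nerd value.
A locker item has a number called importance. [1 = indifferent, 3 = favourite]
A locker item has some text called cosmetic detail. [colour, age, etc]

The new kid is a person in the Playground.
The new kid has a number called true allegiance. [0 = pure jock, 10 = pure nerd]

The scuffed basketball and the algebra workbook are locker items in the mystery locker.

Table of Cosmetic Details
detail
"faded red"
"suspiciously new"
"held together with tape"

When play begins:
	now the true allegiance of the new kid is a random number between 0 and 10;
	repeat with clue running through locker items:
		now the jock value of clue is a random number between 0 and 5;
		now the nerd value of clue is a random number between 0 and 5;
		now the importance of clue is a random number between 1 and 3;
		choose a random row in the Table of Cosmetic Details;
		now the cosmetic detail of clue is the detail entry.
```

In a real build I would want a much larger pool of possible items and details (this is where I had hoped something Tracery-like would come in), but this is the shape of the data.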

Briefing of Issue

A new kid arrives, with a partial description that also adds to the mystery. The description includes generated details, and it prompts the player to investigate the new kid’s ‘playground allegiance’.
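
In practice the briefing should draw on the generated data; as a minimal placeholder in the toy Inform 7 sketch, it could just be a varied opening message (the contradictory details here are invented):

```
When play begins:
	say "A new kid showed up this morning, carrying [one of]a gym bag and a paperback with the cover torn off[or]a graphing calculator and a mouthguard[or]a trombone case covered in team stickers[at random]. Nobody can tell where they stand, and the older kids want an answer."
```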

Investigation Phase

Locker: a variety of personal details, all with a focus on scholastic pursuits or general playground politics.

This is where the generated data really kicks in, in the form of what the player finds and the amount of information they can sift through.

In this phase, the player will choose a variety of ‘datapoints’ in the information as things they believe offer clues to the ideology of the student.

They may do this by ‘taking’ the data into their inventory, or by transcribing the information into a notebook of some kind.
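
As a sketch of the notebook version, building on the toy Inform 7 above (the NOTE/COPY command and the noted/unnoted flag are all invented here):

```
The player carries a notebook.

A locker item can be noted or unnoted. A locker item is usually unnoted.

Noting is an action applying to one thing. Understand "note [something]" or "copy [something]" as noting.

Check noting:
	if the noun is not a locker item:
		say "That doesn't seem worth writing down." instead.

Carry out noting:
	now the noun is noted.

Report noting:
	say "You copy the details of [the noun] into your notebook."
```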

Assessment Phase

This is all about taking the data accumulated and scoring it.

Perhaps the player has to decide on the direction of the judgement before they assess, so as to frame the data scoring? (If Nerd, then the clues’ Nerd scores are tallied; if Jock, the clues’ Jock scores are tallied.)

That means each datapoint would need at least two scores.

Data scores would be based on the proximity of the datapoint to a conclusive ‘proof’ of the statement, although no datapoint should ever be a 1-to-1 proof (this applies equally at both the Jock and Nerd levels).
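
In the toy Inform 7 sketch, I imagine the chosen direction and the tally looking something like this (the jock-call / nerd-call names and the whole phrasing are my invention, not an established idiom):

```
Accusation is a kind of value. The accusations are jock-call and nerd-call.

The current theory is an accusation that varies. [set by the player before assessing, e.g. via an ACCUSE command, which is not shown here]

To decide which number is the tally for (theory - an accusation):
	let the total be 0;
	repeat with clue running through noted locker items:
		if theory is jock-call:
			increase the total by the jock value of clue;
		otherwise:
			increase the total by the nerd value of clue;
	decide on the total.
```

Each noted item contributes only along the chosen axis, which is why every datapoint carries both scores.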

Judgement / Submission Phase

The judge is an ‘older kid’ on the playground; the whole process is seen as a test for social capital.

The tallied scores are compared to the known quantity at the centre (the generated allegiance) to determine proximity.

Close proximity means a ‘winning’ result or a ‘high score’; medium proximity means ‘win, but with issues’; and far proximity means ‘you got the case wrong.’

There could also be a slower judgement phase, possibly even with extra rounds of investigation.

Or, if the dataset is too ‘middle’, the judge sends it back as not clear enough.
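
Sketching that banding in the same toy Inform 7 (the thresholds, the SUBMIT command and the responses are all arbitrary placeholders; where the bands should actually sit is the real design question):

```
To decide which accusation is the real answer:
	if the true allegiance of the new kid is greater than 5, decide on nerd-call;
	decide on jock-call.

To decide which accusation is the rival of (theory - an accusation):
	if theory is jock-call, decide on nerd-call;
	decide on jock-call.

Submitting the case is an action applying to nothing. Understand "submit" as submitting the case.

Carry out submitting the case:
	let the support be the tally for the current theory;
	let the counter be the tally for the rival of the current theory;
	if the current theory is not the real answer:
		say "The older kid laughs. You read the new kid completely wrong.";
	otherwise if the support minus the counter is at least 3:
		say "The older kid nods slowly. Case made.";
	otherwise if the support is greater than the counter:
		say "The older kid shrugs. 'I guess... but some of this is thin.'";
	otherwise:
		say "'Not clear enough. Go dig some more,' says the older kid."
```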
