Is there any good in-depth do's and dont's guide for IF?

The problem with calling the expectations of parser IF players unreasonable is that those expectations keep being met. You’ll note that almost none of the parsercomp games had pervasive parser or world model problems. People expect NPCs to be well-implemented because there actually is a decent number of games with well-implemented NPCs.

Honestly, if someone had a game with lots and lots of NPCs, and they decided to implement them according to the Bioware-esque convention that they all stand in one place all the time and are interacted with through conversation trees, I don’t think people would complain very much. They’d recognise the convention and play along if it was done correctly and consistently. A lot of things in the default world model of parser IF are conventions that don’t quite match up to reality or fiction, but which people play along with. There’s a difference between presenting something as a simplifying convention that makes the game easier to implement and play, and something being broken.

A Long Drink had NPCs that were, from what I could tell, supposed to move around, have some degree of autonomy, start scenes, show up and talk to the player of their own accord, and respond to an Ask/Tell conversation system. That set up a lot of expectations that weren’t met. If it had NPCs that stood in one place and talked to you through conversation trees, but that worked, it would have been much better.

I’ve explained at length in another thread the limited usefulness of that approach: Every room description will include nouns that aren’t concrete objects, or aren’t present, so the output from the tool would be mostly noise. You’d either ignore it, or have to go back and tell it to ignore half the nouns it finds. It makes more sense to simply bracket every noun.

This is one of my fundamental disagreements. In my experience, IF reviews, even by veterans, will mention this a lot, even in cases where it’s not important.

That’s fair enough, though I’m curious what the best practice is for implementing this. I’m not talking about bracketed nouns or lists of synonyms, but about how, as a programmer, to deal with multi-word objects and rooms that will be globally scoped. Some of the issues players encountered arose because I had different sets of printed names, true names, synonyms, etc. for the wine glass, glass shard, shattered glass, and so on. Inform’s “It’s so easy, it handles everything!” approach to natural language leads to some really weird interactions when it’s doing object resolution, and to the workarounds I ended up writing. Is there a reasonable coding standard for Inform 7 on how to handle this situation gracefully?

Yeah, I recognized this, and had a hard-coded list of everything I could think of that should act as valid input; it bypasses the entire verb system because I didn’t want to implement sensible responses for a ton of one-off verbs paired with a one-off noun. I passed it to a bunch of friends and beta testers here, and implemented the things they tried so that those would work, too. I didn’t want this to be a serious puzzle; I wanted literally the first thing people thought of to work. The commands “unbuckle” and “unbuckle myself”, which appeared in the error hint, would also have worked. I’m less interested in defending the puzzle or my design than in pointing out that several IF veterans, yourself included, encountered this and made specific assumptions about how it was implemented and why:

undo, remove, and unstrap would have worked as well, which my testers tried. I don’t know why I didn’t think of “take off,” but as you say, this may be a difference in expectation between player experience levels. There are some odd parser quirks and phrasings I think veterans are used to (“leave” comes to mind) that I didn’t anticipate.

Again, a matter of player expectations: a normal person does not know what “typical IF stuff” means, right? And the IF player does not always jump to “this game is implemented for a normal person” but often to “this game is not implemented.”

This is true; IF has a lot of entries that Break The Mold and remain internally consistent design-wise. But these are the exceptions: the Rules are long established and well codified. I don’t think the idea that the parser-playing community has certain (numerous) expectations is something I pulled out of thin air; the IF Theory Reader, the Inform 7 book, the body of reviews, and even the “How to win the IFComp” thread all bear this out.

I appreciate the design discussion, but the thing that makes me regret entering ParserComp is that I no longer seem like the right person to participate in the discussion. I don’t want it to be about “defensive parser game author forces critics to justify themselves over issues they brought up that were largely true.” I’ve seen and want to talk about trends, and the way we talk about parser games, and the expectations that a game will either advance the state of the medium or follow conventions (see also AltGames) in a way that other subgenres don’t, and I’ve wanted to for a long time. But now I can’t, without it being about my game.

When I mentioned automated testing, I meant, for instance, a good way to unit test NPCs. The standard advice these days is “NPCs are complex, avoid complexity.” I reject this completely. Complexity requires more rigorous testing. There has to be a happy medium between “the NPC conveniently is too busy to talk to you” and “good luck implementing something that passes the Turing test.” That’s a false dichotomy, and I don’t see why parser IF is the one medium that needs to be held to standards other games aren’t held to.


I don’t like Inform, but I like everything else a lot less. I thought about running screaming from I7, and then tried to think of what I would run to.

Could you perhaps give an example of what the kind of unit testing you’re thinking about would look like? The kind of unit testing I’m used to works by implementing tests for expected program behavior from a spec or from bug reports. The problems you’re describing seem to be specifically situations that the author has not anticipated beforehand, so it’s hard to visualize what kind of testing system you’re looking for.

Okay, fair enough.

Conventions/minimums:

Yes, it’s true that there are standard expectations in the parser IF community about minimum levels of parser behavior. Absolutely, that is the case. I am not convinced they’re as prescriptive as you think, but the emphasis on internal consistency is definitely there.

I think scenery implementation is sometimes taken as a quality litmus test, in the sense that people feel that if the scenery in the first room is not implemented, then there’s a good chance the author hasn’t put a lot of time and testing resources into the game. Other types of games absolutely have this type of litmus test as well; even in the Twine space, many audience members react notably better to pieces that have customized CSS so that the first thing you see isn’t the same old standard template. Is this because you can’t have a good Twine game in the standard template? Not at all: howling dogs was initially released that way, and it was so well received as to more or less spark a revolution. But “has custom CSS” in that space has become a kind of shorthand for “author put some customization work into this”.

There are a few spaces for parser IF that are explicitly exempt from these standards – no one expects SpeedIF to be well-implemented. Communities around certain tools – ADRIFT and Quest, notably – have also tended to embrace alternative sets of expectations, with the result that those forums have a different and less critical flavor. So there are a few spaces where one can go to escape from this, but I feel like part of the challenge is that, because of the tight mesh between fiction and mechanic, it’s really easy in parser games for “rough-hewn” to become “unplayable”, in contrast with many types of altgame in which (say) low-quality art doesn’t affect the mechanical experience much.

NPCs specifically

When it comes to NPCs, I think part of the reason for the “avoid complexity” feedback you see is that parser input plus the possibility of abstract content plus theory of mind generates a near-infinite, very hard to test state space. X NOUN is hard enough, but in conversation, ASK BOB ABOUT TOPIC, “TOPIC” could theoretically be any concrete or abstract thing; moreover, Bob’s response may need to vary depending on what Bob knows, how Bob feels, and whether you’ve asked Bob about that topic previously in the story. In a game with an advancing plot, hidden evidence, and possibly secret NPC motivations, the possible state space gets even bigger.

There are several strategies for coping with this:
– rigorous implementation and testing based on a really well understood state space and (ideally) a set of topics that are listed or heavily hinted to the player. If you want to be exhaustive and you are able to basically sit down and list all the relevant states your game can get into (per scene, etc.) then you can make a testing harness to set each of those state spaces in turn and then iterate through ASK NPC ABOUT ALLOWED TOPIC.
– crowdsourced authoring/beta-testing, which is something we did in the creation of Alabaster: involve other people really extensively (more than is usual for betatesters) in identifying content that could be augmented; this will be less rigorous than the previous possibility.
– cut complexity in some respect, often by going to a partially menu-based approach (which is what TC sort of does) or by limiting some aspect of the character (they can’t hear and can only be shown physical objects, e.g.).
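To make the first strategy concrete, here is a minimal sketch (in Python, not Inform; all names and responses are invented for illustration) of the “enumerate states times allowed topics” idea: every (state, topic) pair should produce a non-default response, and the harness reports the pairs that fall through.

```python
# Toy illustration of exhaustive ASK-ABOUT coverage testing.
# `ask` stands in for the game's ASK NPC ABOUT handler; in a real
# setup it would drive the actual game through a test harness.

DEFAULT = "Bob has nothing to say about that."

def ask(state, topic):
    """Hypothetical response table for ASK BOB ABOUT <topic> in a given state."""
    responses = {
        ("act1", "murder"):  "Bob pales. 'I was nowhere near the library.'",
        ("act1", "weather"): "'Fine for the season,' Bob says.",
        ("act2", "murder"):  "'I already told the inspector everything.'",
        ("act2", "weather"): "'Hardly the time for small talk.'",
    }
    return responses.get((state, topic), DEFAULT)

def coverage_gaps(states, topics):
    """Every (state, topic) pair that still falls through to the default."""
    return [(s, t) for s in states for t in topics
            if ask(s, t) == DEFAULT]

# 'alibi' is unimplemented in both acts, so it shows up as a gap:
gaps = coverage_gaps(["act1", "act2"], ["murder", "weather", "alibi"])
```

The point is that once the state space and topic list are written down explicitly, checking coverage is a loop rather than a playthrough.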

And, again, the reason why I think parser players worry about this more than players in many other genres is not that they’re all grotesquely unreasonable, but this question of design communication. In a space where I could type anything and the topic possibilities aren’t even constrained to nouns in the room description, how do I know what I should type in order to make the game go forward? A really rigorously implemented NPC says “talking to NPCs is important, so you should spend a lot of time on it.” A minimally implemented one can either direct my attention towards the space that is explorable (“this mechanic robot only answers questions about trucks, but you should definitely ask them about every truck in the game!”) or not try to give me a parser-style exploration experience per se at all (“here is a menu of choices you can pick from”). But a conversational NPC that does none of those things, and has a couple of keywords that are going to advance the game surrounded by a lot of unimplemented blank “Sally has nothing to say about that” is going to stump and frustrate a lot of players.

Moving-Parts games in general

I totally hear you about the desire for tools, testing mechanisms, etc. that make the kinds of thing you want to write more possible. This is huge.

I’m not sure that it’s possible to make parser-game tools that guarantee high-quality results, but I do think it would be possible to make much, much better support for the lots-of-moving-parts type of game. This is loosely related to things people asked for in the Missing Tools discussion a while back.

Threaded Conversation is one stab at handling part of this problem, but it’s got a significant learning curve of its own and would need to have some aspects built into Inform in order to become easy to use; it’s also fundamentally fighting the problem that it’s sort of a menu-based conversation system trying to live in a parser world. And at best it only deals with a bit of the issue. (I’m happy to hear about your struggles with it if you want to share; I’m not the maintainer of the extension, but I am interested in looking into how these things can be built out to be kinder to authors.)

What I’m wondering is: might we be able to provide better tools for specifying a game that runs on a semi-flexible schedule with a lot of moving NPCs? What would be the natural way of describing the schedule, the rules for where NPCs should go, their response states at different times? I think probably there are things we could do in this area (and I haven’t recently looked at all of TADS 3’s scheduling features, so maybe some of this is covered in the TADS 3 model a bit more deeply). Versu handled some of this, but it did so by having a very light conventional world model and focusing mostly on social situations.
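One low-tech answer to “what would be the natural way of describing the schedule” is a declarative table that the world clock resolves against. A sketch, in Python rather than any existing IF language, with all NPC names and rooms invented:

```python
# Hypothetical data-driven NPC schedules: each NPC gets a list of
# (start_turn, room) entries, and the engine derives location from
# the current turn instead of scripting movement imperatively.

SCHEDULES = {
    "cook":   [(0, "kitchen"), (10, "market"), (25, "kitchen")],
    "butler": [(0, "pantry"), (5, "hall")],
}

def location_at(npc, turn):
    """Latest schedule entry whose start turn has been reached."""
    room = None
    for start, where in SCHEDULES[npc]:
        if turn >= start:
            room = where
    return room
```

A declarative table like this is also exactly the kind of thing an automated test can iterate over, which is part of why it appeals to me as a format.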

I think there’s potential research space here, as well as in the automated tool area.

Criticism focused on advancement

On the issue of wanting to see parser IF that advances the discipline – guilty as charged, at least in my case. I know not everyone is coming to this in the same way or for the same reason, but my reason for being involved with IF is that I’m interested in advancing the art of interactive storytelling. So I want to look at games that do that in some way, and I want to talk about how they do it and how those developments are situated relative to other pieces in the history of interactive stories, and I want to draw other people’s attention to those games, and I want to be inspired by them. So the way I engage with stuff and the way I discuss it is pretty different from the way I engage with products where I’m more of a passive consumer. (Even within the game space, there are certainly genres of game, such as tablet puzzle games, where my feedback is much more on the order of “okay, I had fun with/was pleasurably frustrated by that” or else “that was not fun”, not “this ruleset was really derivative of PuzzleBlaster 2013”.)

One of the big things about Twine for me lately is that Twine is so new that innovations are still cropping up pretty much every two weeks. This is so much fun!

This is (I think) not the only kind of thing we talk about other than missing nouns/verbs – I’m thinking of Sam Ashwell on the theme of the monstrous in Krypteia, or Jenni Polodna on characterization in One Night Stand, or Liz England on the presentation features of Zest, or Victor Gijsbers on the philosophical disciplines underlying Metamorphoses, just off the top of my head.

So, take an NPC that’s based on a series of heuristics, a bunch of if/then statements. Essentially, they are given a set of inputs and have an expected output. The inputs might be the current scene, attributes of the player (“the player is wet”), the state of the player’s inventory, and so on. I’d love it if there were a way to codify these inputs, or run through an NPC’s behavior given a list of scenes. You could then assert that the outputs were working as expected. Things like:

  • For every scene, list the NPC’s available topics when the player has examined an object and holds a thing
  • Test NPC pathing, assert that the NPC is in a location, change the scene to “fire alarm pulled” and 3 turns later, assert that the NPC is outside.
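Those two bullets can be sketched directly as code. This is a toy (the Guard class, its scenes, and its pathing rule are all invented for illustration); the point is that the test sets state directly rather than playing through to it:

```python
# Toy NPC unit-test sketch: set a scene without a playthrough,
# advance turns, and assert on location and available topics.

class Guard:
    def __init__(self):
        self.location = "lobby"
        self.scene = "quiet"

    def set_scene(self, scene):
        self.scene = scene

    def tick(self):
        # Hypothetical pathing rule: head outside when the alarm sounds.
        if self.scene == "fire alarm pulled":
            self.location = "outside"

    def topics(self, examined_obj, held_obj):
        """Available conversation topics, conditioned on player state."""
        available = ["weather"]
        if examined_obj == "portrait":
            available.append("the portrait")
        if held_obj == "badge":
            available.append("security")
        return available

npc = Guard()
assert npc.location == "lobby"            # starting assertion
npc.set_scene("fire alarm pulled")
for _ in range(3):
    npc.tick()
assert npc.location == "outside"          # pathing assertion, 3 turns later
assert "security" in npc.topics("portrait", "badge")
```

In a real system the inputs would be the game’s actual scene and world state rather than constructor arguments, but the shape of the assertions is the same.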

There are similar cases where objects or the player or rooms change depending on player or game state. When these things have complex interactions, it’d be great to have a way to test the room as a unit and create the game circumstances that serve as its inputs, without having to “test me with” and essentially play the game to recreate those circumstances. As it stands, this means a lot of work when we want to test just a single piece as a “unit.”

Right now, we generally have to do this by end-to-end testing a case where the player could somehow observe each step of this, then find that branch in the skein, and bless it.

I’m a newcomer to IF. I just discovered it through Porpentine’s games a few months ago. So my opinion’s not too valuable about what games have done in the past or how player expectations have evolved, but I can chime in as a newcomer and say what appeals to me from what I’ve seen.

I mainly want to chime in after reading the comments here about developing more complex worlds and NPCs as opposed to simplifying them. The parser is such a wide-open format – when you load up a game, you can type anything – that I suppose making it receptive to more varied input is a natural goal for authors. But the games I’ve enjoyed most have pared-down actions, which you learn pretty soon into play, so that you don’t have to worry about tinkering around. You know your capabilities; you have an environment; you employ those capabilities.

I’ve never yet had an “aha” moment where I realized I had to do some new action to solve a puzzle. Normally when that happens, it produces an anticlimax because I’ve been fiddling around and have failed a few times already. I just want to get on with it.

But I do enjoy complexity in games, and I like talking to NPCs. Weird City Interloper was great for me for this reason, because your actions are super limited, and you know it, but that doesn’t stop the game from having a very colorful and diverse cast. Maybe not the fairest example, since the game is built explicitly around NPC dialogue, but I think it’s a good example to show how you can have a complex story and interesting NPCs while having the most (appealingly (to me)) bare-bones implementation.

So the answer in my view is not to avoid complex NPCs, but rather to figure out a system where a player has limited actions with wide-ranging applications. Even the basic commands are often too finicky for my tastes. I want “examine/search” to be the same. I want “talk to/tell about/ask about” to be bundled together. I want “push/pull” to be fused into something like “move.” A command like “engage” could even combine “push/press/pull/turn,” etc. But the main thing would be to spell this out when the game starts so that the player knows what tools they have to use. I imagine it would also free up the author, since the author wouldn’t have to worry about implementing various random things the player might attempt.

When it comes to unimplemented objects, I do find those “You can’t see that” responses annoying, but that’s easy enough to fix by writing a more flexible error message. Something like “Your attention would be better directed elsewhere.” I’m actually a little surprised that the default message isn’t already something like that.

Anyhow, that’s just one more opinion.

An example of what a testing library (Similar to RSpec or Jasmine) might look like for I7:

Book - Unit Tests (not for release)

Describe the palace guard:
  with:
    the default story state, and the player in the Royal Gallery;
  he gets mad when the player steals something:
    input "take portrait";
    expect the response to include "The palace guard glares impotently at you.";
    expect the palace guard to be angry;
  he can be bribed with booze:
    with the bottle of wine being carried by the player;
    input "give wine to guard";
    expect the palace guard to be drunk;
  he won't notice theft if he's drunk:
    with the palace guard drunk;
    input "take portrait";
    expect the response to not include "The palace guard glares at you impotently.";
    expect the palace guard to not be angry;
    expect the palace guard to be drunk.

An important part of this would be a test runner that could run through all of the tests, ideally in parallel for speed. This is why it can’t just be an extension; it needs, for instance, to be able to run tests across separate restarts of the game to be effective. Another aspect of such a testing library would be a Skein-like functionality to warn when a test produces different output even if it passes.
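The runner idea can be sketched independently of any particular interpreter. In this hypothetical version (all names invented), `new_session` is an injected factory standing in for launching a fresh interpreter process, so each named test gets its own restart, and tests run in parallel:

```python
# Sketch of a parallel test runner where every test runs against a
# *fresh* game session, modelling a full restart per test.
from concurrent.futures import ThreadPoolExecutor

def run_tests(tests, new_session):
    """tests: {name: [(command, expected_substring), ...]}.
    Returns {name: passed_bool}."""
    def run_one(steps):
        session = new_session()          # separate restart per test
        for command, expected in steps:
            if expected not in session(command):
                return False
        return True
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_one, tests.values()))
    return dict(zip(tests.keys(), results))

def new_session():
    """Fake game session for demonstration; a real one would wrap an
    interpreter subprocess."""
    state = {"lamp": "held"}
    def session(command):
        if command == "drop lamp":
            state["lamp"] = "on the floor"
            return "Dropped."
        if command == "look":
            return ("The lamp lies here." if state["lamp"] == "on the floor"
                    else "You see nothing special.")
        return "That's not a verb I recognise."
    return session

results = run_tests({
    "dropping": [("drop lamp", "Dropped."), ("look", "lamp lies here")],
    "fresh start": [("look", "nothing special")],  # passes only if restarted
}, new_session)
```

The “fresh start” test is the interesting one: it only passes because its session doesn’t inherit the dropped lamp from the other test, which is exactly the isolation-across-restarts property being asked for.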

Also notable, like all TDD libraries and similar things, this can’t save you from your own assumptions. And it definitely wouldn’t tool away the issue of world model insufficiency or parsing errors.

In a way, I can see how what you’re saying makes sense; the way in which parser IF is reviewed – essentially by comparison to The Canon, through which is established a set of conventions – does contribute to a conservative trend in design. Hunger Daemon took 1st place at IFComp last year – it’s a well-made, funny, enjoyable game – but no one seems particularly excited about it, or at least I don’t see continued discussion about it in the way I still see discussion about With Those We Love Alive and Creatures Such as We. But… criticism always operates from within convention. How else could someone review a game except by noting how well it follows conventions and/or innovates? I don’t think this is limited to parser IF. When IGN reviews the latest Call of Duty game, they mention weapon loadout, the ‘feel’ of the shooting, the smoothness of play, etc., all of which are shooter conventions. And there’s plenty of convention they’d only mention if it were mis-implemented: twin-stick controls, quality of visuals, ability to jump, existence of a pause menu, etc. The same applies to criticism of 4X games, strategy games, platformers, and so on, and beyond video games to movies and novels and music and fine art.

I agree that there are certain ways in which the current general world model for parser IF has never been experimented with. Adam Cadre, for instance, is brilliant at pushing the technology and conceiving ways to expand the idea of what parser fiction can or should do; every game he makes subtly tweaks the audience’s idea of how to play IF and why they’re playing it. But even his most experimental stuff (e.g. Shrapnel) doesn’t alter the world model much. I’m thinking specifically of maps that can be drawn with boxes connected by lines in compass directions, the sense of your PC ‘floating’ or instantaneously transporting from place to place, the PC as the player’s avatar, the lack of physics, the turn-based timing model and the sort of ponderous weight given to light quality and ability to support objects and containment. I was actually pretty excited by matt_w’s Terminator because it’s one of the very few parser games I’ve tried that has a physical sense of space and movement.

I’m also a bit surprised that the interface itself hasn’t received more love, though this isn’t by any means a unique complaint. We’re working with text, yes, but text can be styled; it doesn’t have to be static. Twine is great because it amply demonstrates how text that has shape and color and is dynamic can be used to punctuate the narrative. Taco Fiction, to me, has one of the greatest scenes in any IF (your initial entry into the taco shop), partly because it ratchets up the tension by playing with the player’s expectations of how the text will be presented.

Regarding I7: I’m sure there are good technical reasons for continuing to use the z-machine and glulx and T3, but they strike me as historical holdovers. Java has its own virtual machine, as does C#, and there are robust cross-platform implementations of both. I sometimes wonder if a Java API might serve the needs of some parser devs better than a custom programming language that compiles down to code meant to run on a bespoke VM. (But not too bespoke; all of the parser IF VM’s, as near as I can tell, are actually pretty generic machines, with z roughly analogous to a classic 8-bit micro, glulx to a micro with a more modern command set which betrays some knowledge of higher-level programming language conventions, and T3 to the Java VM.)

Part of it is just the fact that the Z-machine and Glulx are extremely lightweight because they don’t have to support a tenth of the features that the JVM or CLI have to support. There’s a fully-functional Glulx virtual machine written in Javascript that runs in browsers, for example. The only other conceivable virtual machine to use as a basis for parser IF would be a Javascript engine itself.

That approach would involve either a simple JS library with a conventional API (The Undum approach, which I’m not actually very fond of); writing a native JS or CoffeeScript DSL for defining parser games (More natural, probably, but also harder); or writing a transpiler from an IF-specific language to JavaScript (Very, very hard, especially since that language might have very different semantics from JS). Those are nontrivial tasks; yes, the boutique VMs that are used to run parser games are kind of a historical holdover, but they also work quite well and replacing them would be a pretty huge undertaking.

There is I believe some current work going into fixing the Vorple/Glulx compatibility issue, which would open up a whole new range of presentation possibilities.

My experience with testing Hadean Lands:

I didn’t use the I7 skein feature. It doesn’t work reliably for me. (Not everyone has this problem.)

I didn’t use TEST-ME testing facilities, because debug code can change the behavior of the game. We saw this with “Terminator” (a bug that was asymptomatic in debug mode). The reverse case has turned up with the “Otranto” example in the I7 manual – turning on ACTIONS debugging breaks the PULL command, because the code depends on the “mentioned” property in a fragile way.

Basically, if I was going to spend four years developing a game, I was damn well going to prove to myself that the golden-master release was playable and winnable. That meant end-to-end tests that did not depend on debug mode.

I also wanted a lot of unit tests – tests of specific game elements. (A ritual-by-ritual test list, for a start.) My solution there was somewhat outre, and won’t work for most people! I built a second game which included all of the ritual rules and portable objects of HL, but none of the map or scenery. (But I added a test workbench, test retort, test kiln, and so on.) Obviously this required putting big swathes of code into a private extension. I could then write “end-to-end” tests which covered individual rituals.

I think the most accessible solution is to rely on debug verbs for unit tests, while maintaining end-to-end tests that run in release mode. This is a lot of work, but testing is always a lot of work.

My test format (eblong.com/zarf/plotex/regtest.html) is butt-simple:

* TEST-NAME
> command
Should get this response.
> another command
Should get this response.

Repeat as needed. Each test section (starting with an asterisk line) is a separate interpreter run. Since it only analyzes game output, you have to put in appropriate examine commands to test object state – but this is usually possible. If not, use SHOWME inputs. To set up a particular situation, write debug verbs or use PURLOIN/GONEAR. It’s messy, but any other solution will be messy in different ways and ultimately no more reliable. Reliability is what your end-to-end release-mode tests offer.
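For illustration, a minimal reader for a format like the one above takes only a few lines. This is a sketch written against the description in this post, not the actual regtest tool; assumptions: `*` starts a test, `>` is a command, and other non-blank lines are expected output for the preceding command.

```python
# Minimal parser for a regtest-style script, per the format sketched
# above. Returns {test_name: [(command, [expected_lines]), ...]}.

def parse_regtest(text):
    tests, steps = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line.startswith("*"):
            # New test section; each section is a separate interpreter run.
            steps = tests.setdefault(line[1:].strip(), [])
        elif line.startswith(">"):
            steps.append((line[1:].strip(), []))
        elif steps:
            # Expected output attaches to the most recent command.
            steps[-1][1].append(line)
    return tests

script = """
* LAMP-TEST
> take lamp
Taken.
> rub lamp
A genie appears.
"""
parsed = parse_regtest(script)
```

A runner would then feed each section’s commands to a fresh interpreter run and substring-match the expected lines against the transcript.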

I eventually added an “>{include}” facility so tests could share chunks of test code (or just strings of setup commands). This wound up being less helpful than I’d hoped – just cutting and pasting might have been easier. It’s available though.

I agree with this. At this point I consider the Java VM to be permanently tainted(*). In terms of “platform for getting stuff to run”, there is nothing as massively available as the Javascript runtime. I’m not writing off Glulx yet – I am obviously fond of it – but if I ever have cause to invent a new IF platform, it will be built on Javascript.

Yes, the plan to integrate Glulx and Vorple moves slowly forwards. This afternoon’s plan is to get Quixe’s graphics capability polished off.

(* It was fine when the Windows JRE installer caught adware, but now that the Mac installer has gone the same way… kill it with fire().)
(That was a joke. The time to kill Java with fire was back when Oracle bought it. Everything since then has been symptoms of the morbidity.)

Yes, I’m pretty much at a loss as to why anyone would want to write new software targeting the JVM outside of servers. And the CLI just doesn’t support enough platforms to be viable as a target for an IF system.

Back in the Infocom days, this was (effectively) divided into two different error messages: the parser-level “I don’t know the word [WORD].” and the world-model-level “You can’t see any [WORD] here!” Later this was changed to have them both give the same response, to prevent “spoiling” future objects: if the game knows the word “dragon”, it tells you that there’s going to be a dragon somewhere up ahead. But one reason why I prefer the older style: it tells you exactly why your command failed. Interacting with something unimplemented will tell you directly that the word you’re trying isn’t used in this game.

If you have a single error message for both of these occurrences, though, “You can’t see any such thing.” seems significantly better than “Your attention would be better directed elsewhere.” or such.

Let’s say I’m playing a game which uses the message “Your attention would be better directed elsewhere.” If the game describes an (unimplemented) cloud in the sky, attempting to interact with it will indicate that the cloud isn’t important. That is reasonable. But if I set down my lamp somewhere and then mistakenly try to >EXAMINE THE LAMP (once I can no longer see it), I will get a misleading message saying that interacting with the lamp isn’t something that’s possible in this game. I might not realize my mistake until later, then get frustrated when I need to use the lamp to solve a puzzle.

This is an edge case, certainly. But I’m wary of giving the player false negatives like that, accidentally convincing them that they can’t do something when they actually can.

Java is the main development language for all Android platforms. I’ll agree that if Android didn’t run on hundreds of millions of mobile computers, Java would probably die (except for Minecraft mod development, which by my careful observation is about 72% of all programming done these days.) Android, however, has breathed endless and lasting life into it. Note that there is at least one Java VM implementation written in Javascript: int3.github.io/doppio/about. So you can (theoretically at least) run Java programs in your browser without installing Java.

There is no major platform without CLI support. Unity uses the CLI. Mono runs on everything.

I don’t think glulx and I7 are going anywhere; they’re too obviously useful to the community. I just sort of wish for a more conventional option, as someone who’d rather learn a new API than syntax, testing, and debugging paradigms for a new language.

Quixe, by the way, is awesome. I spent several hours the other day reading through the code base: a huge amount of work, but beautiful, highly readable, efficient code. Thanks for all the time you put into these projects zarf!

Which is why I said “the JVM,” and not Java. Keep in mind Android applications don’t run on the JVM; they run on the ART or Dalvik runtimes, which are actually incompatible with the JVM. Meaning you can’t target the JVM and get Android compatibility “for free” in any meaningful sense.

True enough, Mono will run on everything.

I’m not sure how big the market for that is, though. IFDB shows 11 results for Undum (I’m about to release the 12th), for example. And the first thing I did with Undum was write a wrapper library to give it a more DSL-like API that was friendly to CoffeeScript, because I wanted my code to resemble ChoiceScript more wherever I was just writing content as opposed to logic.

It’s true enough that problems like this could occur. I didn’t mean “Your attention would be better directed elsewhere” to be the absolute best alternative, only that an alternative must surely be out there. For every example like the lamp one you provided, there must be, what, a hundred other examples of people getting the “can’t see any such thing” message for something clearly in the room? Maybe a hundred is an exaggeration, but it doesn’t feel like it based on the games I’ve played. And that error message is no less annoying to me now than it was when I started playing IF a few months ago. Maybe after a few years I’ll be able to tune it out better.

I actually prefer the new way, and it goes back to a traumatic childhood experience. (Not really.) One of the very first IF games I ever played was Beyond the Titanic.

[spoiler]Early in the game, as you’re wandering the ice caves at the bottom of the ocean, you come across a room where you can see the Titanic through the ice above you. In the room description, this is mentioned as “the hull of the Titanic.” So naturally I type EXAMINE HULL. And what does the game tell me? “You can’t see the alien spaceship here.” OH IS THERE AN ALIEN SPACESHIP IN THIS GAME? GUESS SO.

I never finished the game. By all accounts it’s pretty mediocre anyway.[/spoiler]

By far the best solution, in my opinion, is to distinguish objects the player has seen from those the player has never seen (strictly, objects the player character has seen; you can never guarantee the player is paying attention to every word of the text). That way, for objects the PC has seen, the game can say “The lamp isn’t here” or “You left the lamp back in [room]”, while falling back to the default response if the player types an object that exists in the game but the PC hasn’t encountered yet. This is easily accomplished in I7, for example, and there are a couple of extensions that do it, but it’s up to the author to actually implement it.
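The bookkeeping behind that is simple; here’s a sketch of the idea in plain JavaScript rather than I7 (the object names and the `cantSeeResponse` function are made up for illustration, not any real library’s API):

```javascript
// Sketch of the seen/unseen distinction described above. Each object
// tracks whether the player character has ever had it in view; the
// "not here" message is chosen accordingly.
const world = {
  lamp:   { seen: true,  location: 'cellar' },
  saucer: { seen: false, location: 'hangar' }, // the PC has never seen this
};

function cantSeeResponse(noun, currentRoom) {
  const obj = world[noun];
  if (!obj) return "You can't see any such thing."; // not a real object
  if (obj.location === currentRoom) return null;    // it IS here; no error
  if (obj.seen) return `You left the ${noun} back in the ${obj.location}.`;
  // Never seen by the PC: use the default, spoiler-free message rather
  // than leaking the object's existence (no "alien spaceship" surprises).
  return "You can't see any such thing.";
}

console.log(cantSeeResponse('lamp', 'kitchen'));   // mentions the cellar
console.log(cantSeeResponse('saucer', 'kitchen')); // gives nothing away
```

The one subtlety is marking things seen at the right moment, typically whenever an object’s name is printed to the player, which is exactly the hook an I7 extension would use.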

And thus became the subject of an enormous lawsuit. This was my first big signal to get the hell away from Java. When Godzilla and Gamera are fighting, that is not the time to take up residence on one of Godzilla’s toenails.

This is one of those fundamental decisions that design requires, right? A static, unchanging game world versus a game where things can happen independent of the player’s ability to predict them or trace their direct cause. More than one reviewer felt like too many things “happened” outside of their control. Granted, most mystery games exist entirely as puzzles for the player to solve, but I wanted to go for something more dynamic and urgent, to imply there was a murderer still on the loose. Naturally, other IF games like Make It Good or Deadline have done this before (and better).

You brought this up in your review as well, and I’d like to talk about this from a design perspective:

All of these questions are answered, some of them only implicitly, in the course of the game. Of course, if technical issues kept you from turning the page, that’s on me. Some people picked up on things right away; some of my testers didn’t at all, even after multiple playthroughs.

There’s certainly a murder mystery, but as I said in the postmortem, I wanted that to be a backdrop. Who the protagonist is, how they got there, and what the deal is with Val and the victim are also mysteries for the player to investigate. It’s a sort of reverse dramatic irony, where the characters know more than the player. Similar things have certainly been done in games with an amnesia device, but I liked the idea of doing away with that and just starting in medias res. As I said, some players seemed to pick up on things and some didn’t, but how do I tune for too much or too little exposition? Aim for the middle of the bell curve? I think a well-organized mystery rewards rewatching or rereading: things you might only notice once you’re looking for them.

How much introduction is necessary for a game/book at the outset? I’m of the opinion that worldbuilding and character motivation shouldn’t really be thrown at the reader in big chunks. Don’t games like Anchorhead and Galatea start off with way more questions than answers? One of the IF games that’s stuck with me for years is All Roads (which I now realize is by Jon Ingold). That piece similarly has “who is the player character” as a central mystery, with very little explained at the game’s start.

One of my big influences was Gone Home, which gives you almost no information about the protagonist and relies on the player’s curiosity about its central mystery to move the game forward. An emotional connection to the sister character forms later through journal entries, but the father and mother also provide a good chunk of the mystery, and they are far harder to read from the beginning.