The first piece of advice on the IF Comp guidance for authors page is to playtest your games. But with Comp season coming up – and the concomitant testing requests – I realized that though there's copious advice for authors (not to mention prospective judges), there's not really much out there to help folks be good testers. So I thought it might be helpful to open up a thread where folks can share how they approach testing, or, for authors, things testers have done that were especially useful (or things to avoid), and hopefully create a useful resource going forward.
(My other motivation here is that while I feel like I've got a good handle on how to be a good tester of parser games, I'm less confident about how to approach choice-based games. So I'm hoping to get some advice here, especially on how – or whether – folks communicate the particular path they took through a choice-based game in the absence of built-in transcripts.)
To get things started, here are some things that have worked for me (at least I think they have – if folks whose games I've tested have feedback for me, I'd love to hear it!):
- Per the above, if a game has a transcript function, make sure to use it! This is a no-duh, of course, but I've personally messed this up in a few ways, usually by forgetting to re-enable transcript tracking after reloading or restarting the game (in Gargoyle at least, I think the transcript continues after typing RESTART for a Glulx game but not a z8 one…). So these days I try to open up the file after a couple moves at the beginning of each session to make sure everything's working (see the example commands after this list).
- When I have time, I usually like to do an initial playthrough more or less straight – mostly approaching things as a regular player, albeit one who likes to examine everything mentioned in room descriptions and try out lots of synonyms in parser games – and then a follow-up playthrough or two that's more focused on breaking stuff: taking everything even if it seems nailed down, cramming stuff into containers, stacking things onto each other (I had a lot of fun with this testing The Impossible Bottle), and making wildly suboptimal choices or ignoring clear prompts to do X, Y, or Z. The theory behind keeping these separate is that a feel for pacing and how the game flows can get really lost in break-everything mode.
- Speaking of that, I try to provide feedback on big-picture issues like pacing! In addition to sharing a transcript (or a detailed list of feedback, for a choice game), I usually write 3-5 bullet points summarizing my overall take on the game and flagging areas where I think the author could consider making more significant changes. Often this is where things like structure, voice, etc. come in, since it's hard to address those in a granular way.
- And then the flip side of the previous one is that it's of course also helpful to provide that super granular feedback. In parser games I like to drop lots of comments, earmarked by *'s, to flag typos, missing scenery items, buggy responses, etc., but also positive feedback where a joke lands or a puzzle clicks, so authors know not to mess with something that's already working (see the snippet after this list). I also like to provide a bit of running commentary – what I understand my current goal to be, or whether I'm frustrated by some busywork versus finding it no big deal to work through – since it can be hard to assess that kind of thing just from reading the transcript. For choice games, as I mentioned, this is often harder: I try to keep a text file where I paste in whatever chunk of the game I'm on and add comments from there, but honestly that can feel clunky.
- This one's a suggestion for authors, but I think it's helpful when they provide specific prompts for where they're looking for feedback beyond just "look for bugs and typos"; it helps focus my attention as I'm playing. The flip side is that alerting a tester to an issue can make it harder for them to assess it the way a player coming to it "cold" would, of course – but I think the tradeoff usually cuts in favor of asking for the feedback that will be most useful.
- For puzzle games, I try to avoid relying on hints to the greatest extent possible. That isn't always doable, but I find a good middle ground when stuck is to send the author a status update that lightly fishes for clues ("I made it to the inner cloister but now can't get past this one ornery monk – I think I need to get him to inadvertently violate his vow of silence so he'll give up the habit and let me through"). That lets the author step in if there's a bug or I'm wildly off base, or lets me trundle along on my merry way if things are basically fine.
- Lastly, I think it's important to communicate clearly with the author about your timeline. Sometimes testers can't get to a game for a couple of days, or even weeks, which is totally fine – we're all doing this for free (or so I assume!) – but it can be rough on authors not to know when to expect feedback, or whether they should get the tester an updated version reflecting the bug-fixing they've done in the meantime.
- For parser games specifically: examine anything mentioned in a location description, including using any adjectives mentioned, then examine anything mentioned in those descriptions, and so on until you run out. LISTEN and SMELL whenever it seems even slightly interesting to do so. Try any custom verbs on any object you can, especially inappropriate ones – in Inform at least, it's easy to write an action that applies to more kinds of things than you've written responsive logic for (see the sketch after this list). TAKE ALL whenever you can. Always X ME. Try to drop plot-critical items and leave them behind in inaccessible places. Try all the different potential conversation verbs (TALK TO, ASK ABOUT, TELL ABOUT, SAY…).
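Since the first bullet above promised an example, here's roughly what my start-of-session ritual looks like. The exact command varies by game and format – SCRIPT, SCRIPT ON, and TRANSCRIPT are all common spellings – so treat this as a sketch rather than gospel:

```
> TRANSCRIPT
[the interpreter asks where to save the transcript file]

> X ME
As good-looking as ever.

> RESTART
Are you sure you want to restart? yes

[re-enable recording here, since some format/interpreter combos stop it on restart]
> TRANSCRIPT
```

After a couple of moves I'll alt-tab over and open the file to confirm those moves actually landed in it.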
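And here's what the asterisk comments look like in practice. Some games implement * as an actual comment command that replies "Noted." or similar; even when they don't, the parser error is harmless, since the comment still ends up in the transcript:

```
> X PAINTING
You see nothing special about the painting.

> *the painting gets its own sentence in the room description, so it could
probably use a real description - maybe a line about the artist?
That's not a verb I recognise.

> *also, the banter in this scene is landing really well
That's not a verb I recognise.
```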
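Finally, to illustrate the Inform gotcha from the last bullet, here's a minimal hypothetical sketch (the room, objects, and FROB verb are all made up for illustration). The action is understood for any object, but only one of them has any response written:

```
The Lab is a room. The brass lever is a fixed in place thing in the Lab.
The rubber duck is a thing in the Lab.

Frobbing is an action applying to one thing.
Understand "frob [something]" as frobbing.

Report frobbing the brass lever:
	say "The machine shudders to life."
```

FROB LEVER works fine, but FROB DUCK (or FROB ME) succeeds with no check rules to stop it and no report rule to describe it, so the game prints nothing at all – exactly the kind of silent gap a tester can surface by trying custom verbs on everything in reach.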
I’m sure there are lots of others – and lots of things wrong with the stuff I’ve listed above – but I’ll stop there since I’m curious what works well for y’all!