Automated testing strategies

I’m familiar with the IF testing tools in the workspace and with writing test scripts. Are there any generally accepted, known-to-be-effective principles for organizing test scripts so that testing becomes a bit more automated? Is there a particularly good resource I can tap into to learn some proven testing strategies? Thanks in advance.

You might be interested in the discussion in this thread, or in connecting with the people there who are after the same kinds of tools you are: https://intfiction.org/t/inform-7-a-coders-approach-cli-dev/9353/1

The tests you most commonly run (for parser IF) are:

  • Is the game winnable? (Run a start-to-end command sequence. Only need to verify occasional highlights within the sequence, like scene-start texts and the final win message.)
  • Does the game react properly to all the possible actions in a given situation? (Might need a debug command to jump to that situation. Make sure you test both successful and unsuccessful actions.)
  • Is a custom game mechanic working correctly? (This is a traditional regression test situation. Write code, then write tests for it.)
  • Does the game correctly disambiguate (or ask correct disambig questions) to these possible player commands? (You spend a lot of time adjusting the parser to get this stuff right. Therefore, you should write a lot of tests to verify that it stays right. This is particularly important because one parser tweak sometimes screws up an apparently unrelated parser tweak.)
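
One way to make the situation-reaction and disambiguation tests above scriptable is a table of (setup commands, action, expected phrase) cases fed through a dumb-terminal interpreter. This is only a sketch under assumptions: it presumes a glulxe-style interpreter that reads commands from stdin, a testing (non-release) build so the GONEAR and PURLOIN debug verbs are available, and the object names and reply phrases below are made up for illustration.

```python
#!/usr/bin/env python3
"""Sketch of per-situation reaction and disambiguation tests, assuming a
dumb-terminal Glulx interpreter (e.g. glulxe built against cheapglk) and a
testing build of the game so the GONEAR and PURLOIN debug verbs work.
The binary name, story file, objects, and expected phrases are placeholders."""

import subprocess

INTERPRETER = ["glulxe", "MyGame-debug.gblorb"]   # assumption: interpreter on PATH, testing build

# (setup commands to reach the situation, action to try, phrase the reply must contain)
CASES = [
    (["gonear vault door"],                      "open door",
     "It seems to be locked."),
    (["gonear vault door", "purloin brass key"], "unlock door with key",
     "You unlock the vault door."),
    (["gonear vault door"],                      "eat door",
     "That's plainly inedible."),
    # Disambiguation: with two keys in scope, the parser should ask which one.
    (["gonear vault door", "purloin brass key", "purloin iron key"],
     "drop key", "Which do you mean"),
]

def transcript(commands):
    """Run one command sequence through the interpreter and return its output."""
    script = "\n".join(commands + ["quit", "yes"]) + "\n"
    return subprocess.run(INTERPRETER, input=script, capture_output=True,
                          text=True, timeout=60).stdout

failures = 0
for setup, action, expected in CASES:
    output = transcript(setup + [action])
    if expected not in output:
        failures += 1
        print(f"FAIL: {action!r} after {setup}: expected {expected!r}")
print("all cases passed" if failures == 0 else f"{failures} case(s) failed")
```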

If you mean how one should go about running all the tests sequentially: I couldn’t tell whether this was possible in the Inform 7 IDE, so I just wrote a Python wrapper that invokes a simple Glulx interpreter process and then checks the output; the thread is linked above. I just type “make” and it does the Inform 7 compile and runs through whatever tests I have defined, which at the moment is just “make sure the walkthrough still wins the game,” though I plan to add smaller-scope tests such as the ones zarf mentioned. Does that answer your question?
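
For what it’s worth, a minimal sketch of that kind of wrapper could look like the following. It assumes a dumb-terminal Glulx interpreter on the PATH (glulxe built against cheapglk, or similar); the story file name, walkthrough commands, and expected strings are placeholders rather than anything from the actual project. A `make test` target could then just invoke this script after the compile step.

```python
#!/usr/bin/env python3
"""Minimal sketch of the wrapper idea: pipe the walkthrough into a
dumb-terminal Glulx interpreter and grep the transcript for a few
milestones instead of comparing the whole text. The interpreter binary,
story file, commands, and expected strings are all placeholders."""

import subprocess
import sys

INTERPRETER = "glulxe"         # assumption: interpreter reads commands from stdin
STORY_FILE = "MyGame.gblorb"   # assumption: compiled story file in the working directory

WALKTHROUGH = [
    "take lamp", "north", "unlock door with key", "open door", "north",
    "quit",  # answer the final RESTART/RESTORE/QUIT prompt after winning
]

# Spot-check a few highlights rather than diffing the full transcript.
EXPECTED = ["Taken.", "*** You have won ***"]

def main():
    script = "\n".join(WALKTHROUGH) + "\n"
    result = subprocess.run([INTERPRETER, STORY_FILE], input=script,
                            capture_output=True, text=True, timeout=120)
    missing = [s for s in EXPECTED if s not in result.stdout]
    if missing:
        print("FAIL: missing expected output:", missing)
        sys.exit(1)
    print("PASS: walkthrough still wins the game")

if __name__ == "__main__":
    main()
```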

Testing a sequence of playthroughs: use the skein and blessed transcripts. The skein panel has a ‘play all’ button in the upper right corner.

Sorry, you’re right, it is totally possible to use the skein to automate tests. Where the skein excels is regression tests, with a full “expects” mechanism to make sure a given output does not change. My skein typically becomes very messy, though, and during active development I find I have to “re-bless” passages too often, which is why I’m pursuing a way to give myself unit tests as well as an “assert” syntax over specific game states. It’s of course possible to do this via the skein too, but I think it’s a matter of which interface one prefers.
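
A sketch of what “assert over specific game states” could look like, assuming a testing build where Inform 7’s SHOWME testing verb is available and the same kind of dumb-terminal interpreter as above; the object names and the exact wording of the SHOWME dump are assumptions, not a fixed format.

```python
#!/usr/bin/env python3
"""Sketch of asserting on game state rather than re-blessing transcripts:
drive the game to a point, dump an object's state with Inform 7's SHOWME
testing verb (non-release builds only), and assert on fragments of that
dump rather than on the prose the player sees. The interpreter, story
file, object names, and dump wording are assumptions."""

import subprocess

def play(commands, story="MyGame-debug.gblorb", interpreter="glulxe"):
    """Run a command sequence through the interpreter and return the transcript."""
    script = "\n".join(commands + ["quit", "yes"]) + "\n"
    return subprocess.run([interpreter, story], input=script,
                          capture_output=True, text=True, timeout=60).stdout

# Functional assertion: whatever the response text currently says, the door
# object itself should end up unlocked. Further asserts could check holdings,
# scene flags printed by a custom debug verb, and so on.
out = play(["gonear vault door", "purloin brass key",
            "unlock vault door with brass key", "showme vault door"])
assert "unlocked" in out, "vault door should be unlocked"
print("state assertions passed")
```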

Lock the ones you care about, then use Trim to remove all the other unimportant nodes from the skein.

No, I mean my skein gets messy because I “care” about all of them. That’s one of the reasons I’ve been looking for something that tests the functional change (what is the net game effect of this action?) separately from the user-facing behavioral change (what text is displayed as a result of this action?).
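
One possible way to keep those two checks separate, sketched under the same assumptions as above (dumb-terminal interpreter, testing build with SHOWME, placeholder names throughout): run the command sequence once, diff the full transcript against a blessed copy for the behavioral check, and assert only on the state dump for the functional check.

```python
#!/usr/bin/env python3
"""Sketch of splitting behavioral and functional checks over one run: the
behavioral check diffs the whole transcript against a blessed copy on disk
(brittle to wording changes, like re-blessing skein nodes), while the
functional check only asserts on a SHOWME state dump (survives rewording).
Binary, file names, and expected phrases are placeholders."""

import difflib
import pathlib
import subprocess

def play(commands, story="MyGame-debug.gblorb", interpreter="glulxe"):
    script = "\n".join(commands + ["quit", "yes"]) + "\n"
    return subprocess.run([interpreter, story], input=script,
                          capture_output=True, text=True, timeout=60).stdout

commands = ["gonear vault door", "purloin brass key",
            "unlock vault door with brass key", "showme vault door"]
transcript = play(commands)

# Behavioral: any change in the displayed text shows up as a diff.
blessed = pathlib.Path("blessed/unlock_vault_door.txt")
if blessed.exists():
    diff = list(difflib.unified_diff(blessed.read_text().splitlines(),
                                     transcript.splitlines(), lineterm=""))
    print("behavioral: " + ("unchanged" if not diff else f"{len(diff)} diff lines"))

# Functional: only the net effect on the world model is checked.
assert "unlocked" in transcript, "functional: door should end up unlocked"
print("functional: passed")
```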