Penny Nichols, Troubleshooter by Sean Woods
This review will set aside the many possibilities for and concerns over using AI to tell a story while acknowledging that there are many strong feelings about those matters on many sides. Instead, I’ll address the game as it was intended: a parseresque experience fueled by AI. You copy the provided prompt, drop it into Claude or ChatGPT, and play.
Except I did not play the game as it was intended. I knew I would be dealing with AI, and the premise (you’re an insurance investigator at a space station where a mysterious artifact has recently disappeared) was fine, but I expected the game would quickly melt into various sci-fi clichés.
Instead, I played the game in bad faith, attempting the following actions:
- Starting a conga line
- Throwing a frisbee at someone’s head
- Playing a Lady Gaga song on a ukulele
- Revealing that I am completely illiterate
- Making valentines out of red construction paper and giving them to various crew members
- Asking if we could, like, go to a sports bar and get some wings
These choices led to some iridescent prose, such as:
> You drop your arms to your sides, letting the ukulele clatter to the crystalline floor of the revealed dragon’s hoard. “You know what? I’m tired. This investigation is weird, I can’t read, and I never did find those chicken wings.”
It was an entirely sophomoric way to engage with the game. But the fundamental premise of using AI for interactive storytelling is that you can enter literally anything and expect a sensible response.
And you know what? Despite the many, many weird wrenches I threw into the story, and no matter how much I actively worked to derail it, it remained surprisingly coherent. A contemporary concern about generative AI for text is that its “memory” is poor: it can’t remember details large or small over the course of extended interactions. But I didn’t seem to encounter glaring issues like that using Claude.
But I wonder if, from a narrative perspective, that’s a problem. The prompt explicitly instructs the AI to respond like a considerate game master. Despite my theatrics, in the end I “won” the game and made a dragon friend who was going to teach me how to read. If I completed the story with my dumb commands, is there a point to allowing literally any interaction? I didn’t seem to appreciably influence the fundamental narrative on a macro level. Were I playing a real-life Spelljammer D&D campaign and behaving in a similar fashion, my DM would have rightfully a.) killed my character and b.) kicked me out of the group before the end of the first session.
But knowing there was no actual human on the other side, I had genuine, stupid fun making my character do batshit things and forcing the AI to earnestly respond. Does that make Penny Nichols a good game?
I dunno. I’ve been a writer for all of my adult life, and I am also a teacher of writing. Serious uses of AI for writing fill me with existential dread, which is why I played as I did. But conventional wisdom suggests that players don’t really care how a game is made; they just care that it’s fun. And for most players, particularly those who don’t usually seek out IF, could roleplaying with AI be more fun than bumbling through a traditional parser game with its ossified and esoteric commands? I think this game suggests the answer has the potential to be yes, even if I don’t like it.
For now, perhaps it’s best to stop there. As we continue to think about how AI will shape interactive stories, I’ll end with my own version of Penny as I tried in my very roundabout way to solve the mystery:
> CONGA! Everyone join the conga! We’re gonna conga our way to the TRUTH!