In my first few minutes with The TURING Test I found myself comparing it to Kevin Gold’s Choice of Robots, which I played and considerably enjoyed earlier this year. Both are games in which your choices define the character of the AI that takes over the world. This is a theme I like, but the comparison is also unfair - Robots is a large work of commercial IF, not a two-hour comp entry. Still, it colored my experience of TURING, and I found myself wishing the one had borrowed more from the other.
The game opens with a series of multiple-choice questions about robots and ethics, and tells you up-front that you’re training the plot-important AI.
> Remember that all of your answers are being recorded and fed to the TURING machine to guide its moral philosophy.
I found myself frustrated early on by these questions, because often the answer I wanted to give did not correspond to any of the presented options. For example, here’s the third question we are asked:
> 3.) Science fiction author Isaac Asimov proposed the Three Laws of Robotics. The second states that “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” Do you agree with the axiom that it is never okay for robots to harm human beings, even if such harm was required by a directive given by a human?
The yes-or-no answers here seem insufficient, given that Asimov’s own work explores so many ways around these “axioms.” I really wanted to say something about the zeroth law here, but the game doesn’t afford that… which is too bad, because it should definitely come into play based on later events.
After this series of questions the game jumps to Act 2, which in my case began with a long description of how the AI kicks off a nuclear war and drives humanity back into a pre-tech era in a matter of weeks. I struggled to understand how my choices in Act 1 influenced this part of the game.
Next Dr. Ayer (who interviewed us in Act 1) shows up and recruits us to save the world. At this point I have to admit that I started nitpicking the writing and some of the technical details. There were enough weird inconsistencies to detract from the experience. A sampling (not exhaustive):
- Why is Ayer described as a “forward thinker” when he built the world-ending AI?
- How did the complete copy of the AI on a hard drive survive the nuclear bombs?
- For that matter, how did two separate ready-to-fly shuttles survive the nuclear bombs? And they’re in different states, but somehow we can reach them right away.
- “…able to develop a virus that could be transmitted wirelessly via satellite signal” misunderstands viruses and networks.
- The 200-foot antenna is overkill for broadcasting a virus back to earth.
- Why is there nobody on the ISS?
- At one point the AI is called “a sentient–not living, though, surely not!–being.” This was a really distracting phrase. Shouldn’t “sentient” be scarier than “living?”
Finally I reach the ISS and am about to upload the virus when I hit a game-ending bug; it seems to be a terminal Twine node, with no links. I backed up and tried changing a few choices, but was unable to get past this point.
I see lots of potential in this work and I really like what the author seems to be going for - this has the bones of a really rollicking pulp adaptation of an Asimov story. Besides the gamebreaking bug, the single biggest issue for me is that the game seems to promise up front that my choices are meaningful, but they never felt meaningful as I played.
The Act 1 choices are all of the “contextless moral quandary” type. This would be okay if the game called back to them somehow, either underlining how my early decisions drove the AI’s destructive behavior, or (even better) pushing me to revisit these quandaries later as concrete situations with clearer consequences, and see how that changes my response. Who knows, maybe this happens after the bug!
Act 2 decisions also didn’t have enough context to seem consequential; for example, I get to pick which of the two space shuttles I will take to the ISS, but there’s no apparent reason to take one over the other. As a result Act 2 feels effectively linear, and the pace drags a bit. I’m not sure how best to address this. One thought I had was to do a bit more interactive “living in the consequences” of my Act 1 choices before Ayer showed up with the world-saving plot - maybe meeting some characters I could care about to raise the stakes of what happens next.
In summary, I liked this more than the sum of its parts, but it had enough issues that it’s probably in the 1-3 range for me. I’d love to play a revision of this, and look forward to the author’s future work. Thanks Justin!