A rule banning AI will be much, much more difficult to enforce than I think most people recognize.
As many of y’all know, I run Choice of Games, a publishing house for interactive novels, as well as its sister company, Hosted Games, a separate label where anyone can submit a game for us to publish on Apple’s App Store, the Google Play Store, and on Steam.
Two years ago, Hosted Games implemented a rule:
We can’t accept art, code, or prose that was generated via AI, due to ongoing legal uncertainty around the copyright of AI-generated content. Your submission, including all artwork, must be created by a person or group of people who have the legal rights to publish it, and we have to provide them credit in the Credits.
This rule has been hell on our staff.
How do you know whether a body of text has been generated by AI or not? By feel. Does this text “feel” like AI slop? Does it seem like this text uses the word “delves” too often? Too many em dashes? Oh, well, the new version of ChatGPT doesn’t have those stylistic indicators, it has these new stylistic indicators. Our staff is forced to keep up with the very latest in AI-detection techniques, to read every submission under a magnifying glass.
Our staff can do this because we’re getting paid to do it. Volunteer competition organizers would just have to do it out of the goodness of their hearts.
There are “AI detector” tools, but they are deeply flawed. (They are, themselves, AI.) We’ve tried using multiple such tools; they frequently disagree with each other, and they’ve produced numerous false positives on material that we know for certain was hand-written.
Enforcing a rule against AI commits you to “aivestigate” submissions. You’re pretty much guaranteed to make mistakes.
The rule also terrifies authors who want constant clarification about whether this use of AI is “OK.”
- “I used the Grammarly grammar checker to check my grammar. It fixed a bunch of grammar errors in my game, but I didn’t realize it was using AI to do that. Am I allowed to publish my game?”
- “I used Google Translate to translate a few passages in this game. Am I allowed to publish my game?”
- “I chatted with ChatGPT about the plot of my game. Am I allowed to publish my game?”
- “I used Claude to help me write the code of my game, but I wrote all of the prose by hand. Am I allowed to publish my game?”
These authors want assurance, but we can’t give them assurance without reviewing their game, which we can’t possibly afford to do until the game is totally done and ready for submission, by which point it’s too late.
(And, the author asking for reassurance might be lying.)
Banning AI also means we have to argue about it. Constantly. Users want to pipe up to tell us that they think an author has snuck one past us. “I’m a very good judge of AI slop, and this text is certainly AI. I can’t believe you guys let it through.”
When we get reports like that, we have to check on them. We have to reread the text, now with an eye towards the issues the user raised. Sometimes we do miss stuff, and then we have to go back and confront the author.
Sometimes, the author admits that their game was LLM-generated. We have even pulled a game post-publication over this, forcing us to discard all of the work of packaging the game for publication on every platform, submitting it for app-store review, and responding to app-store review.
But frequently, the author doubles down, claiming that they wrote the game themselves. Now what? A customer who thinks they’re very good at detecting AI slop thinks the author is lying. Now we have to judge whether the author is lying or not. And that either means arguing with the author or arguing with our own customers.
Sometimes authors have claimed that they just used Grammarly in a few places, but it seems pretty clear to us that they had an LLM generate the whole thing.
Then they tell other authors, publicly: “You won’t believe the nerve of those guys! They refused to publish my game even though all I did was use a grammar checker!!”
What do you do then? Do you correct their lies in public? (Are you sure they’re really lying? Will the public be able to be sure?)
AI policing requires an attitude of mistrust and suspicion. Assume they’re trying to pull the wool over your eyes, and then find the evidence. Enforcing an AI rule requires volunteer competition organizers to assume an antagonistic relationship with every author.
Sometimes authors admit to using AI, and then send us another submission, in which they claim they “rewrote the AI parts by hand.” Did they, really? They’ve admitted to lying once. Do we spend our time and money re-evaluating their game?
If not, are they effectively just banned for life? Because their text “feels” like AI slop?!?
IMO, anyone proposing an AI ban has to also propose their preferred enforcement mechanism. Here are a few options. (You can pick more than one of these.)
1. The IFComp volunteer organizer has to read and evaluate every game before it goes live.
2. Users can flag games that they suspect to be AI. The volunteer organizer then has to review those games, discuss the matter with the author and the public, and then make a final judgment.
   - We might decide to have those conversations privately, out of respect for the author.
   - Then again, we might have the conversation publicly, to ensure we get the community’s buy-in that a given game is or isn’t slop. (If we reject a game, the conversation will surely become public at that point.)
3. The author has to generate a transcript, and the volunteer organizer has to submit the transcript to one or more AI detectors (which ones?), then decide what to do when they flag a game. (AI detectors are frequently wrong, both ways.)
4. We could also try to trick authors into outing themselves. Leave a little checkbox on the submission form, or a box where the author explains how exactly they used LLMs in their work. If they admit to it, reject their submission.
5. No enforcement mechanism at all. Just add a rule to https://ifcomp.org/rules/, and force authors to check a box promising that they didn’t use AI. When someone submits a game with “obviously” AI-generated art and the game is all AI slop, even though that’s blatantly against the rules, we’d simply let judges rate those games poorly. (And when users complain on the forum of “obvious” cheating, we’d say “yeah, it’s a rule, but we never enforce it, and here’s why…”)
As I hope you can see, all of these options suck. #5 would be easiest on the organizers, but it’s not clear to me that it’s actually better than the rule about disclosing AI that we already have. (At least with the AI disclosure rule we have, authors aren’t directly incentivized to lie about their use of AI.)
So, to answer the question at the top of this thread: No, we cannot “just” ban AI content. AI bans require lots of hard work and lots of arguing about whether you did a good job or not, and they create an antagonistic relationship between the organizers and the authors. It’s miserable, unrewarding work.
Speaking for myself here, if I were organizing IFComp as a volunteer, and the community voted for the maximum-enforcement options #1 and #2, I would simply resign. Life is too short to waste it on forensic aivestigations for no reward.