We don’t have to speculate. There was a game this very year that had signs of being AI generated but didn’t have a disclosure, and nobody put it on blast – it just didn’t do very well. I raised this same concern back when the disclosures first became required and have been keeping an eye on it, and it hasn’t been an issue in the two years since. People here seem to have cooler heads than the Internet at large.
If you’re talking about ParserComp, while the game at the center of the controversy used AI, the actual issue was whether the author had engaged in vote manipulation.
An interesting question! I’d be surprised if there hasn’t been, honestly, since there’s cash on the line and people have recently proven they’ll cheat to get a line of text essentially saying “YOU’RE WINNER” attached to their IFDB profile. On the flip side I am also not surprised that IFComp doesn’t talk about any attempts publicly so that people can’t gather info about their cheating detection tools. (On the third hand, I love drama so I hope someone spills the tea!)
I very much don’t wanna get into specifics, and it’s a separate conversation anyway, but I can tell you that bad-faith ballot manipulation is absolutely a thing people have tried now and again in order to boost certain games. I think that’s the origin of the “Vote in good faith / Don’t encourage bad-faith voting” rule pair during my tenure, actually.
I believe my style is somewhat robotic and I don’t use an LLM. In any case, my point was about authors fearing their work may be mistaken for AI, not whether it actually would be.
I don’t think that’s a suitable comparison. Lying about AI use when AI is allowed is very different from lying about it when AI is banned. It’s more difficult to get angry at someone when they would have been fine just by admitting what they did.
That was the one I had in mind. I didn’t mean to suggest that AI was the cause. My point was only that we can’t rely on the community being above controversy.
Judging by some of the anger and bitterness that suspected AI use has caused over at the Choice of Games forums, with threads being locked and discussion banned, I’m not sure we can rely on this outcome.
I don’t recall there having been any conclusion to those discussions. I seem to remember people agreeing that it’s very difficult or impossible, but that isn’t a solution.
If Imperial Throne is typical of your work, then your style is very much the opposite of an LLM. Those things know no brevity.
I agree with you that enforcing the new rule could prove tricky in some circumstances, and controversy could result. No group is immune to it.
But I think people will just have to cross that bridge when they come to it. If one’s goal is to avoid controversy and acrimony, continuing to allow an unconstrained flow of AI stuff into the comp is a clearly worse alternative.
Deep down, I don’t think what you suggest is impossible. It’s just that I still feel it’s an edge case, and one which is now further towards the edge because of the newly clarified rules.
I do think the examples Encorm pointed out already are evidence that the comp or its judges can reasonably handle these edge cases as they come up. This forum and better logicians than myself have posited 100s of edge cases by now, and I’ve read 100s or 1000s of messages speculating about them across threads. And I’ve broadly come to agree with the occasionally-expressed view that where it is an edge case, and not a hole one can drive a truck through, it’s okay for it to be there. It may not even come up. It’s more important to clarify the centre and close big closeable holes.
No community is above controversy, obviously, and while people (including me) did get very mad at that author there was no witch hunting there. I think the community handled the controversy fairly well, so I don’t think this is a useful example? Unless you mean to say that we can’t trust every individual to be above controversy and the comp should have a plan for when someone inevitably tries to skirt the rules, which is my personal takeaway from the ParserComp drama.
Choice of Games has found themselves in a strange position where there is a strong perverse incentive to use AI – their products are paid, with a generous cut going to the author, and there’s a chance of really boosting one’s career and even winning major awards (CoG games have won the Nebula Award before), but a huge word count is required to even be considered. On the flip side, because the games are paid, people will have very strong opinions about games trying to sneak AI in under their noses. So yeah, while I don’t hang out there it doesn’t surprise me that this is a very contentious topic in their space.
But on the flip side, IFComp games are 1) free, 2) much shorter and 3) in a competition where you can give them a score. So while we’ve also had some knock-down-drag-out fights here about AI use in the abstract, I don’t see any reason why people wouldn’t do what they always do with bad/buggy/offensive games in IFComp, which is give a bad rating, optionally write a polite but negative review, and move on.
Choice of Games/Hosted Games does not allow the publishing of games with AI content for legal reasons. (They have not been entirely consistent with how they have applied this policy, but there is that.) So I wouldn’t say there is a strong perverse incentive. That said, from my own tests, ChatGPT is also pretty bad at writing choicescript code.
The AI-critical segment of the CoG community largely holds that position for ethical reasons (copyright, displacement of human labor). It’s nothing to do with products being paid.
I suspect @AlexMeow might have been talking about a case on the CoG forums where a forum user accused an unpublished writer of using AI writing, which unfortunately led to a lot of forum users jumping on the bandwagon and piling on said writer. At that time, the writer was dealing with RL issues and unable to log in and reply. When he finally logged in, he had a huge panic attack when he saw the mountain of angry AI accusations aimed at him, talk which had also spread to other sites like reddit. He confirmed (going up against countless angry comments) that he did not use AI to write his games. The thread was eventually locked as emotions were just running high. The incident is a very important cautionary tale on how AI accusations can spiral out of control.
I’ve no plans to participate in IFComp, so I have no skin in the game. Just clarifying this point.
I don’t think this is incompatible with what I said - CoG bans AI for very good reasons, but the other stuff I mentioned means it’s tempting for authors to try to use it without getting caught.
Anyway, that incident sounds really nasty and I’d hope that the ability to express frustration via voting + being a smaller community would prevent something like that here. The mods have been pretty good about keeping things civil.
That’s kind of you to say. I attempt conciseness, but also, when I see a list of common AI words, I think, ‘I would use all of these’.
Without a ban, people can say ‘I will give your game a low score’. With a ban, they can say ‘your game should be disqualified’. I think the latter is the one more likely to provoke conflict.
I agree insofar as I would prefer that people didn’t submit low effort AI games. I still think there’s a potentially irreconcilable conflict between giving people the benefit of the doubt and actually enforcing the rules.
It was bad enough to warrant trying to avoid anything like that happening again. Also, the outcome was essentially a compromise that I doubt could be replicated in another competition.
I think that proves my point: the stated AI detection policy is that the organizer will look for signs of AI. There’s no detail about how that can be accomplished.
They can also say “this should be banned” or “I’m never coming back to IFComp because it makes me sad to see this once-great event filled with stuff like this.”
If baseless public accusations against entries become a problem, the organizers do have an existing tool to deal with it - such behavior could be interpreted as a form of harassment and thus prohibited by the rule which states:
Participants who personally harass other participants may have their votes or entries disqualified from the competition, and may also be banned from further participation in the IFComp, at the organizers’ discretion.
Why this doesn’t work with IFComp’s judging rules has already been covered by Bruno Dias’ 2025 blog post and the previous exhausting thread on this topic.
(Briefly: it incentivises bad-faith voting – giving low scores for games you didn’t play, which is against IFComp rules – since abstention doesn’t actually penalise LLM works.)
Those things don’t require that organizers adjudicate individual entries, a task which may prove so difficult that no one wants to do it. There will (rightly) be a lot of scrutiny of any decision or lack of a decision. I don’t know about anyone else but I certainly wouldn’t want that job.
The problem is that there’s likely to be disagreement about what’s considered baseless.
It would be incumbent on people to play the games at least enough to reach a reasonable judgement.
I’m confused. There was overwhelming support for a ban, and we’ve agreed that the edge cases are merely that—edge cases—so why is there a vocal minority only now speaking up about the ban, after hundreds of messages discussing it last year and a consensus being reached by the organizers?
I’d take that job. I think it would be easy. Here’s how I’d do it, if the organizers for some reason decided to grant vast and unspecified powers to me. If an entry obviously had prohibited AI content, I’d disqualify it. If there was any reasonable doubt, I’d allow it and tell everyone to keep their suspicions outside the public sphere for the remainder of the comp, lest they incur the wrath of the anti-harassment rule.
If someone sneaks some prohibited content into their entry by editing it for plausible deniability, c’est la vie. I’d rather live with that than with a bunch of raw, obvious AI writing.
Well, to be fair, the survey’s bottom line indicates only about 70% of respondents supporting a rule at least as restrictive as the one that was chosen.
I’m very happy with the new rule, but I also think its detractors have something worthwhile to say. It’s good to consider how one might navigate risks and downsides, rather than acting as if there aren’t any.
Well, there was a minority of around 25% that had other ideas. And even if there was only one person with an opposing opinion, they’d surely be entitled to continue to voice it. That’s kind of how pluralistic spaces are supposed to work.
Also, we might have a case of everything having been said already, but not everyone having had their say yet.