Can we just ban AI content on IFComp?

This is the point of my choice, yeah. The majority of games in IFComp are not generated by genAI or LLMs, I am not trying to punish the humans who submitted their own creations born of hard work. But the IFComp organizers’ choice not to ban it by now is a problem. My choice not to participate is a protest of their decisions. If human authors feel the sting of several people opting out and thus push toward a ban for the sake of their own games, that’s a fine side effect.

11 Likes

Yeah, I will say, I played that game and while the ChatGPT disclosure made me worried going in, the prose quality was fine, and from the author’s comments it seems like it was used for translation in a way that might be hard to distinguish from other not-exactly-generative-AI tools. And I do think the participation of lots of authors whose mother tongue is other than English is a very positive part of the Comp (though as a caveat to the caveat, it’s often fun to see how folks like that sometimes use English in ways that wouldn’t have occurred to a native speaker, which would be ironed out by overuse of LLM translators). So there are some advantages to having run the experiment this year with just disclosure and seeing what results. But I like your proposed three rules (especially the valorization of dinky MS Paint covers, albeit that benefits me personally as well :slight_smile: )

7 Likes

This thread is repeating the same thread we had last year.

I’d personally prefer that there’s a voluntary AI disclosure rule with no enforcement. That’s not likely to happen.

More than that, I want IFTF/IF Comp organizers to take a clear stance on AI: to state whether their rules are set by their own ideological stance, by an overwhelming number of AI entries, or by community demand.

Hopefully that results in a rule that’s continually considered.

I can’t remember whether I submitted a survey response last year or not, but the AI rule as applied in 2024 wasn’t very eventful (apart from its surprise addition) and it was probably off of my mind by the end of the comp. This year looks more or less the same, but I’ll commit to submitting a survey response.

2 Likes

Yeah, the “incorrect” use of the English language can actually be a great boon for a game. I’m reminded of the Darkiss series, and especially the first game. Darkiss is translated from Italian, and although the English isn’t “perfect” from a native speaker’s perspective, it is perfect for the game. It has quirks and peculiar turns of phrase that wildly enhance the text. If Marco Vallarino had fed Darkiss through an AI chatbot to “smooth out” the translation, all the character would have been lost. I’m glad that he did not!

17 Likes

This suggests to me that the outcome of that discussion did not resolve the issue that the community is having. If the outcome of the discussion last year is that not enough people responded to the IFComp feedback survey (and so consequently no action was taken), perhaps we’ll try a different outcome this year to see how that changes the situation.

14 Likes

I don’t think this has come up in this thread yet, so you may be right, but speaking as someone who wants nothing to do with genAI in my daily life or hobbies, my list of concerns starts with energy and water usage, its increase in yearly emissions, the fact that the wealthy would like to use genAI to lay off as many people as possible, and that they can only train their models by stealing and profiting off of others’ labor. I also care about creative integrity, but it’s relatively far down the list.

I don’t blame anyone whose main concerns are creative, especially in a passion-driven hobbyist community like this, but casual use of genAI has material costs. People got help with inspiration, language and grammar, and code before genAI (I know this forum/community helps each other with all of these regularly). I think that these material costs (and possibly community cultural costs) are worth considering in the discussion.

32 Likes

Absolutely! But there’s more to be said in the realm of artistic concerns than whether the output is passable. Even if a computer program were able to meet some standard with regard to the craft of games, there would still be the question of what an automated generator of endless content means, artistically. That question isn’t separate from the negative externalities of generating that content in an expensive way. The word “content” itself points to the problem: the move to view every piece of writing, all music, every video as part of an undifferentiated stream of content, the most important attribute of which is that it never ceases to flow. Affirming the necessity of endless content is an affirmation of paying the cost of getting it, so the creative concern touches directly on the material concern.

The value we place on endless content determines whether spending resources on it is an unethical waste, or a cost of living. I would affirm that some investment in art is a cost of living. So discussing the artistic value of “AI games” doesn’t have to mean neglecting material ethical issues. I would guess that the cost of using these AIs is context for most critics. But I would feel on firmer ground when pointing out that cost if I had a firmer grasp of these games’ value. After all, while AI is being used for many problems which have cheaper solutions (such as opening a book once in a while), there’s nothing else that does what these “AI parsers” are doing. So if that were valuable, maybe AI would actually be the most ethical way to do it.

2 Likes

Perhaps my opinion counts for less as someone who doesn’t show up here often, but despite all the Sturm und Drang it’s not at all clear to me that there’s actually a significant problem that needs to be addressed?

- There are, by my count, seven out of 85 games that use genAI internally in a significant way. (I really don’t care about cover art.) There doesn’t seem to be any real risk that this kind of game will overrun the Comp.
- A percentage of the Comp has always been crap. Right now some of that percentage is taken up by poorly considered human-genAI collabs.
- The Comp has a long and storied tradition of being a big tent.

So aesthetically there is no crisis. And there are significant precedential reasons to take no action.

The crisis is social. I see some people saying that even the mere presence of a small class of works that use genAI discourages them from participating. And this is where I take a risk posting, because I just don’t respect that. That reaction is neither rational nor natural; it has been inculcated by frankly bad actors, and in fact it indicates that the sense of truth of those who feel it has been so compromised that it is likely to affect their own art for the worse. The extremist not-one-drop faction may be as small a loss to the Comp as the actively genAI-using faction.

None of this is an aesthetic defense of genAI. It is not able to produce worthwhile art by itself, because it cannot surprise. It may have some value as part of a larger human-directed piece, but experiments so far have not been promising. I deny only that AI-generated art and prose is uniquely bad to such an extent that it requires emergency measures.

(The “ethical” “objections” are not arguments I have a lot of time for. I’m not interested in pretending that shaking a box containing all of human culture to see what falls out violates anyone’s rights. And the “environmental concerns” have been widely debunked now.)

4 Likes

Source for that debunking? Lack of specific number aside, sources I’ve seen seem pretty sure that genAI uses more energy and water than pre-existing alternatives it’s trying to replace (Gemini’s text prompts may be more efficient than they were, but are they better than a search engine query? What about image generation for cover art that someone could’ve done with free assets and open-source editing software?), and Trump literally signed an exec order in April to encourage coal burning to power data centers. Also, hard to set aside ethical concerns when there’s so much money being made off of remixed creative works by corps that scraped them without consent or compensation.

I agree that a lot of it may come down to a community cultural issue, but I also wouldn’t dismiss those with concerns as bad actors or a small loss. People are here to talk about the ideas and code they’re making themselves, effort put in because they like doing it and enjoy talking to others about it; you can’t blame them for being discouraged at the ubiquity of tech that’s designed to bypass all that. (It’s like people showing up to a cross stitch group with an embroidery machine-- sure, you’re all going to end up with embroidery, but concerns about quality aside it’s going to dramatically change the space.)

18 Likes

You’re welcome to have and share your opinions on AI use—personally, I agree with you that a single-digit percentage of entries using AI isn’t really a crisis, and they’re clearly labelled so people can avoid them if they want—but we ask that you not attack other forum users. Saying that others in this thread only disagree with you because their “sense of truth” has been “compromised” is not acceptable.

Go after positions or opinions, not the people expressing them.

13 Likes

Well, this is flatly untrue, as evidenced by the enormous power consumption of AI companies’ data centers, and the steps they are taking to satisfy that consumption. OpenAI operates the world’s largest data center, currently drawing around 300 MW and on track to expand to 1 GW by 2026; Musk’s xAI draws around 250 MW; etc. To say nothing of Microsoft paying to reopen Three Mile Island, which I take somewhat personally because my mom spent much of the 80s protesting it.

Whatever other arguments may be made pro or con, it is undeniable that AI represents an environmental threat.

The briefest of searches will turn up a ton of reporting on this. Here’s just one article.

19 Likes

Look. If you’re going to insult me despite not knowing a single thing about how or why I’ve come to my decision, and you don’t know what I know about the topic, or where I’ve gotten my information, the least you can do is provide some evidence or source of your own claims (re: “And the environmental concerns have been widely debunked now”) instead of pulling it out of your ass, using random scare quotes, and telling us to just trust you.

20 Likes

Data centers consume a total of 1.5% of global power usage and produce only 0.5% of carbon emissions. See the IEA report. That’s going to keep going up for a bit, and some fossil fuel companies are trying to capitalise on it, but it’s not a lot at baseline, and one suspects it will peak sooner rather than later.

I see the argument that anything which increases total power usage is a problem until renewables are clearly dominant (as in, this is the wrong time to be building new power plants), but an increase in a single sector just doesn’t seem like the main issue or even a major one.

Also, bringing back nuclear power is good.

Saying that others in this thread only disagree with you because their “sense of truth” has been “compromised” is not acceptable.

I recognize it’s rather aggressive to say “my interlocutors are so wrong that it probably affects their other output” (which is what I think I was saying), and if that’s a problem I won’t say it again. I have a hard time controlling my tone in these conversations because - although I don’t even really like genAI - it’s distressing to me to see positions that are so obviously wrong (in my opinion) become so entrenched.

I find this odd, since several professional game designers in our community worked hard on AI based conversation systems in the past and no one ever considered those games off limits. If the concern is around unethical training, that does make sense, but as I said, there are ethical models trained on purely public information that can be further trained on your own writing. Does that still “cross the line”?

What if someone used Claude to write the code, but added the actual text themselves? Is that crossing a line?

I know this is going to be an exhausting debate, but I see it playing out with positive results. Think of it as augmentation, not replacement.

1 Like

Can you mention what these games/systems are? I’m ignorant about things-- thanks! :smiley:

What if the author uses GenAI to write code and tests, but all the ideas and text are theirs? This could enable authors to dig into much more complex ideas.

Speaking as a reviewer/judge and not as an author for this IFComp, I’ve recused myself from judging most generative AI-based work because I know I am deeply biased and cannot look at such a work in good faith. Since the rules specifically state I should engage the games in good faith and it would be unfair to the authors anyway, I just take the L and move on.

But as time goes on, my refusal is looking no different from me not having enough time or energy to play the other games. There are still titles I’d like to play, and I’ll get to them, likely after the competition. I’m sure many other people here feel the same. But since I’m not as active as I would like, my silence on the generative AI games will read the same as my silence on the games I want to play next.

This is what’s stressing me out. I’m trying not to be ideological because I think it’s a waste of time. There’s no point in writing a review that’s just me going “I don’t like this because it’s AI”, and it would be platforming something I don’t even care about. This is easily a Luddite position, and I don’t mind embracing it because it’s how I roll. But I feel uncertain about how I should respond to this new wave of games without legitimizing or maligning them. I want to be neutral in this sense: they’re not for me, so why should I care? But even my silence on them is pushing me to say something.

Indeed, throughout this year’s IFComp, I’m just unsure how to participate in the discussions or even know what to say in the post-comp survey. I wonder how other judges feel about this and how they approach generative AI content right now.

If this is too off-topic, I don’t mind it being moved to a new thread.

12 Likes

This is impossible to police so if we have to draw a line somewhere I’d be comfortable with it. That said, IF systems generally don’t have enough training data available for LLMs to generate useful code so the point may be moot for now.

3 Likes

This is wildly not true. Anthropic Claude Code is insanely good at writing code, as long as you know how to “direct it” and stick to first principles like TDD. I’m absolutely certain I’ll be able to port mainframe Zork to my platform Sharpee in a very compressed timeframe.

It’s true at least for a system like Twine, where the time spent fixing AI-generated code could probably be better spent actually figuring it out.

14 Likes