AI in competitions

The argument that games using generative AI are low quality is, I suppose, meant to allay fear or uncertainty in potential authors who are not using AI. But perhaps there is a risk that allowing it in the competition will turn off people who don’t want to be a part of it and don’t want their work compared to and lumped in with AI works. In that case the purported low quality of generative AI is actually detrimental to the competition, beyond any other AI-specific considerations. The disclosure requirement seems like a mild attempt to avoid this (among other things). I’m sure the organizers already know they aren’t going to be able to make everybody happy. Some people seem happy with the disclosure requirement. I’m still thinking about it.

2 Likes

The Text Adventure Literacy Jam does not allow AI-generated code or text, but does allow AI-generated art, provided this is acknowledged.

4 Likes

There are numerous ways to create cover art that don’t involve using someone else’s image without permission.

Many independent artists are open to negotiation. They might give you permission to use their image if you ask, or make an offer - I’ve paid $20 and game credit for an image.

It’s not really that hard to take your own picture and digitally edit it and layer in some title text. [1]

And there’s not much difference between typing a prompt into an AI generator and searching for “royalty free cat pictures” - just make sure the image is fair use.


  1. The image for Cannery Vale was a picture I snapped at my dad’s house at dusk. I added a free vector image of a ferris wheel and typed in some lettering in a font.




7 Likes

I like the framing that the AI declaration for IF Comp is just an extended version of the licensed content declaration. IF Comp can only do so much on policing, given limited resources.

In my limited experience, the negative reaction to AI art is much greater than any positive reaction to paid art (independent of price), and use of commons art seems neutral at best. I commissioned art for my IF Comp entry’s cover art last year (and paid significantly more than $20 and had to negotiate legal usage) to no comment. I also had Midjourney credited for two social media-like images in the game and had a bit of negative feedback (I cannot know if it extended to ratings).

On this very compelling anecdata, I think that for competitions, the optimal strategy is to not use any AI art, and probably pay as little for it as possible. If this means being an artist, then even better.

7 Likes

Even images generated by artists using Photoshop leverage AI to remove backgrounds or make other modifications. Additionally, in translating texts between various languages, tools like DeepL use AI. If we were to follow this logic, we would have to disqualify those who are not native speakers or those who simply remove backgrounds from images using AI. I don’t think we should fear AI; it is just a tool to enhance content. I appreciate all the works presented at IFComp and will try to play and evaluate the adventures for what they are, regardless of the medium used.

5 Likes

It relies on people being honest about their use of AI. But why would they be, if people are going to downvote or not play their games when they are honest?

To be clear, I’m not a fan of the use of AI in games entered for competitions. I’m still not convinced a lot of people would be able to spot AI-assisted text, though.

3 Likes

What about using AI to generate ideas? Not using the material itself, but as a sort of muse?

Also, is using cover art (for example from Adobe Firefly), when attribution is given, much different from using art from an artist?

Just curious…

1 Like

IMO, it’s mostly a problem when using it for extensive generation purposes. I say mostly. But you get what I mean.

2 Likes

I would mostly say using it as a muse could be good, but I tend to stay away from it.

I’m not sure what you mean about the attribution.

2 Likes

I am a bit baffled by the hate against “AI” (a ridiculous buzzword). But where do we draw the line with computer-assisted authoring? I mean, spell-checkers are allowed, right? What about grammar-checking? What about AI proof-readers? If ChatGPT proof-reads my work, do I have to list it as a beta-tester? :slight_smile:

Doesn’t Inform 7 write Inform 6 code for me? Is that “AI”?

What about other code-generator tools, like a tool to convert maps to Inform code?

I think this is a slippery slope to start trying to define what constitutes AI or not. I guess as long as everyone is crediting any and all “tools” used in their works, regardless of whether or not they think it’s “AI”, then that is fair.

7 Likes

I think, for most people, in a contest judging creative works, it is important that those works are truly by the people claiming to be their authors. If we’re judging the artistry, craft, conceptual, or literary merits of a work, then those are the things that need to be by its authors. Just as plagiarism from another human author wouldn’t be acceptable, neither is passing off the “intellectual” output of generative AI as your own. Even if it’s disclosed, many people will decline to play or review such a work, because most of us have not found genAI worth our time. In a community of largely amateur artists and hobbyists, the ethics of the major genAI systems is naturally a big concern. Were someone to use a genAI system trained solely on ethically sourced data, it would probably be much better received (especially if it could somehow attribute its output to individual sources, though I don’t think that’s really possible with current genAI designs).

Just as having beta testers and proofreaders is acceptable and doesn’t take away from your authorship of your work, using AI for similar editing is generally considered acceptable too. But just as testers should be credited, so too, probably, should those AI systems. Using AI for a first pass of a translation might save time, but it will still need to be heavily edited afterwards.

No one in the major competitions is judging the source code of a work, whether the original language or a transitional form like Inform 7’s I6 output. But occasionally there have been minicomps where the source code was judged. I’d expect that in such a competition any tools, whether judged as AI or not, would need to be disclosed.

An author might think that something like the cover is incidental to their IF work, and so why not use AI for it? Well, if it is incidental, then it can make do with a cover that is unquestionably ethically sourced, even if it’s more boring than one AI could generate. There are probably millions of creative people giving away their works to be used in just that way. And of course even more who would love to be commissioned to create something for you.

14 Likes

Certainly. I spotted and confirmed two cases of it in the last ParserComp. ChatGPT generated text especially has a certain smell to it. It’s repetitive, vague, clichéd, flat, and is grammatically correct but lacks style. Once suspected, it’s easy to think up a prompt to test and see if you can generate the same phrases.

9 Likes

Part of the problem here is that “AI” is a hot buzzword right now and is getting slapped on everything, regardless of the actual type of algorithm. Of these, generative AI is the one that people have ethical concerns about. This is because they’re, in a sense, large pattern recognition and generation engines, and in order to be “trained” they have to ingest large amounts of data scraped from the Web (most of which was used without permission). This scraped data is then remixed into the output. This particular learning model has a lot of controversial issues around intellectual property, copyright, fair use, etc. as well as the large amount of resources needed to run them.

Someone upthread mentioned Photoshop’s AI tools, some of which have been around for years under different names and only got rebranded to “AI” recently to capitalize on the zeitgeist. These are fundamentally different from generative AI so I think it’s important to keep track of the difference when we’re discussing this. Automated tools and algorithms (which most of these newly-labelled “AI” tools are) are generally good! It’s not an issue of wanting to punish authors for taking the “easy route”.

10 Likes

I agree with you about using AI to translate some difficult phrases. However, I do not agree with extensive use where AI replaces the artist and, for example, writes a book. Graphical elements are another matter; they are useful if they provide context for the text and do not become of primary importance.

Of course, people being dishonest throws a wrench into the whole affair. But were I to find out that someone had been dishonest- then it would be a terrible stain on their character.

I would remove their work from our event, push for banning their future participation, and announce very loudly as to why their work had been stripped so as to make an example out of their dishonesty and make it extremely clear as to why it had occurred.

I would not want to work or associate with someone who respected others so little that they found it acceptable to lie straight to their faces. I’d caution others about their lack of decency if they were contemplating collaborating with them and asked my opinion on the matter- because I would want to know if I was potentially in cahoots with a liar who felt no qualms about lying in order to twist things to their own benefit, and who disregarded the ethical concerns individuals might hold for their own stance on not wanting to interact with AI derived works.

Frankly, no matter how highly I might have thought of them before- I would lose any shred of respect or appreciation for them and their work. I would refuse to review, engage with, or recommend their works in perpetuity- whether they used AI or not. It would be a permanent mark on their record. Theft is one thing- but the utter contempt and disrespect it would take to lie to people about it? That’s inexcusable, and demonstrates a deplorable lack of character I can’t respect.

It is, of course, a gamble a hypothetical someone would have to personally weigh for themselves.

9 Likes

Extremely slippery slope, I agree, I mean what even is a computer? Me and the Haunt Displacers have been in the palace basement whispering forgotten memories into a cagefurnace of fallen angels we’ve chained to a Sinbeam electrified by pure hate such that each pulse of humanity’s loathing jolts through the chain as a distinct signature of soulless agonies, which we’ve been calculating according to the Disorigin Grimoire to ascertain the anointed hour of That Which Persists The Cessating, and I mean, is that a computer? It’s at the very least a difference engine.

12 Likes

Yes. “Please dole out more of your grammatically correct mind-numbing crap.”

-Wade

8 Likes

Even if the rules of a given competition are fine with that, I tend to think you’re better off finding other ways to come up with ideas… these statistical methods (LLMs) may look more “creative” on the surface since they make different connections than a human would make (and that touch of alienness can initially be very compelling), but once you start seeing the patterns, they actually have a NARROWER range than even an untrained human, let alone one with some practice coming up with ideas…

I have a bunch of programmer friends who are into this stuff, and every time I see something creative they’ve done and they say “isn’t this bit great? ChatGPT came up with that!”, it is, without fail, ALWAYS the bit that fell kinda flat for me - the bit I was trying to find a polite way to suggest they work on, because they usually come up with more interesting things than that.

And I’ve seen a couple research studies cross my social media timeline that had similar findings: both seemed (to me) to have enough flaws that I don’t want to cite them, but they were suggestive… ChatGPT, Midjourney, etc. might make you feel more creative, but in practice you probably aren’t doing yourself any favors, so if you can find any other options… maybe you could ask ChatGPT for ways to come up with ideas? :stuck_out_tongue:

In 2014, Raph Koster (Ultima Online, etc.) gave a GDC talk Practical Creativity – he’s kinda full of himself and I’m not sure he has all his facts straight, but as an intro to some of the basics I think it’s good enough…

15 Likes

Agreed. When people are like, “Oh that’s great, I couldn’t have done it better”, half of the time, if they had spent as much time writing the paragraph as they did tailoring the prompt and editing the output afterwards, it would be more interesting to read.

5 Likes

It’s the new/current AI consisting of black-box learning models, where the output is completely unquantifiable (we can’t trace it discretely), that I personally call AI for the purposes of these topics.

I think with pretty much all computer tools pre these boxes, the path from any input to result could be traced through all their code if we really wanted to see it. By their quantifiable nature, our surprise at their output should be and is limited. We can anticipate the output based on input. We wrote the code. Where we didn’t write the code, we still wrote the code that wrote it, and could follow those paths too.

If we produce random output and subject it to human taste (basically choosing “I’ll use this” or “I won’t”), we can also still see the rules that produced the random output.

I’m sure things will get slipperier, but it’s not complex to me at the moment. Of course, this is only my own thought process. Others may see lines in the sand in other places.

-Wade

9 Likes