Is AI Generated IF Discussion OK Here?

This is definitely right, but “might pose a danger to its users” is a category that can be broader than it first appears. Like, a couple days ago some folks released a “legal information” LLM*, and people messing around with it found that one of the first things it did when you said “so I did [X thing that could be a crime], should I be worried?” was ask “did you do that accidentally or intentionally?”

The trouble here is that bad intent (“mens rea”) is often one of the trickiest elements of a crime for prosecutors to prove, so someone saying “yeah, I did that intentionally” could be very powerful evidence against them. If you were having that conversation with an actual human attorney, attorney-client privilege would apply, so there’s (almost) no way a prosecutor could get that information. That privilege wouldn’t apply here, though, so one subpoena later the DA is reading all the incriminating stuff you typed into it. (I think this is a pretty credible worry: these days, when you’re suspected of a crime, investigators very often go for your internet history, and if they see you visited www.freelegaladvice.web3.com or whatever, it sure seems like they’d want to check that out too.)

Anyway, just to say that, as in many cases, the rate at which techy folks are creating new products and applications far outstrips the rate at which the actual risks, benefits, and impacts are being assessed.

* I am intentionally not linking to it.

10 Likes

Makes it sound like an AI should graduate from a University or a College before we give it any responsibilities. I wonder if an AI might actually take part in our academic studies and earn its place within society.

That might be an interesting Sci-Fi story… choice-based… what story format though… hmmmm… :wink:

1 Like

Or make it a closed model with restrictions on what it “learns”. Like if your AI is answering product questions, it can’t “learn” the product sucks, but it can provide statistics like “36% of people I talk to find they don’t like the product.” It can’t learn an opinion like “This product is for stupid people” and parrot it back directly, but it might increment the “displeased” metric for discussion.
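
For what it’s worth, that “aggregate metrics instead of learned opinions” idea is easy to sketch. Here’s a toy Python version (the class name and the crude keyword check are made up for illustration; a real system would use an actual sentiment model and real storage):

from collections import Counter

class ProductQA:
    """Answers product questions but never stores or parrots raw opinions;
    it only keeps aggregate counts it can report back as statistics."""

    def __init__(self):
        self.sentiment = Counter()  # e.g. {"pleased": 12, "displeased": 7}

    def record_feedback(self, text: str) -> None:
        # Crude keyword check standing in for a real sentiment model.
        displeased = any(w in text.lower() for w in ("sucks", "hate", "stupid", "broken"))
        self.sentiment["displeased" if displeased else "pleased"] += 1
        # The raw opinion text is deliberately NOT kept or learned from.

    def satisfaction_stat(self) -> str:
        total = sum(self.sentiment.values())
        if total == 0:
            return "I don't have enough feedback yet."
        pct = 100 * self.sentiment["displeased"] / total
        return f"{pct:.0f}% of people I talk to find they don't like the product."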

My aforementioned spaceship scenario would benefit from, essentially, “safe words” that allow manual override and prevent it from locking into a dangerous course of action. If you’ve seen the movie M3GAN, a major plot point is when the AI decides to only pretend to shut down on voice command, remaining inert but still listening so it doesn’t miss anything people say in pursuit of its given “prime directive” of protecting another character.

3 Likes

Kind of sort of not really?

AI-generated interactive fiction combines technology and storytelling by using artificial intelligence to create dynamic narratives that adjust based on the reader’s choices. Platforms like NovelAI and Squibler offer tools for AI-assisted authorship, allowing users to construct personalized stories with the help of AI algorithms trained on real literature. These platforms provide a sandbox environment where users can steer the narrative, emulate famous writers’ styles, or explore specific thematic directions. AI Story Generators like NovelistAI craft instant narratives for novels, scripts, or screenplays, offering a range of genres and styles for a personalized reading experience. The key to AI-generated interactive fiction lies in the AI’s ability to remember and adapt to user choices, maintaining consistency while providing a sense of freedom and exploration. This blend of human creativity and AI computational power offers a unique storytelling experience that can be both entertaining and thought-provoking.

1 Like

I would put that in the “not even close” category, but thanks for humouring me. :slight_smile:

1 Like

Yes, I will definitely add that, probably this evening. What I copied is actual Zork1, with the output text “augmented” (I asked the LLM to make the terse text more ‘interesting’). Without those augmentations, the output is standard Zork, so I won’t copy it here. The commands given were (translated from my obfuscated input):

take mailbox
southeast
open windows
east

Overall, LLMs are good at “language translation” tasks if you give them enough context. Here, I give the LLM the user’s input (the “convoluted means of asking for something”), all of the ZIL-game-file verbs and prepositions, and all of the noun phrases the user can “examine” in the current room. (I should also give it some of the game context; it would do even better then, but I haven’t done that yet.)

Notably, I don’t give the LLM all of the syntax clauses from the ZIL files, but that doesn’t seem to hurt. At least ChatGPT-4 seems to generally know how to construct IF command syntax; it even leaves out the article ‘the’.

Then I ask the LLM to translate the user’s input into a valid game command, and I give it five tries to get it “right.” I assume the LLM has gotten it “right,” for some definition of right, when the game’s response isn’t some sort of ‘parser error’ response. I use bocfel’s push/pop save-state feature to avoid changing the game state while making these tries.
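
Roughly, that retry loop looks like the sketch below. The helper names and the game object’s save-state interface are hypothetical stand-ins for my actual glue code and the bocfel calls, not real APIs:

import re

MAX_TRIES = 5

# A few phrases a Zork-era parser prints when it doesn't understand a command
# (illustrative, not exhaustive).
PARSER_ERRORS = (
    "I don't know the word",
    "You can't see any",
    "That sentence isn't one I recognize",
)

def is_parser_error(game_response: str) -> bool:
    return any(phrase in game_response for phrase in PARSER_ERRORS)

def translate_command(build_prompt, ask_llm, game):
    """Give the LLM up to MAX_TRIES attempts to rewrite the player's input
    into a command the game parser accepts.

    build_prompt(failed) returns the full prompt (including any commands that
    already failed), ask_llm(prompt) returns the LLM's reply text, and game
    exposes push_save_state / pop_save_state / run_command.
    """
    failed = []
    for _ in range(MAX_TRIES):
        reply = ask_llm(build_prompt(failed))
        match = re.search(r"\+\+\+(.+?)\+\+\+", reply, re.DOTALL)
        if match is None:
            continue                        # LLM ignored the +++ format; ask again
        suggestion = match.group(1).strip()
        game.push_save_state()              # so test runs don't change real game state
        response = game.run_command(suggestion)
        game.pop_save_state()
        if not is_parser_error(response):   # "right" for some definition of right
            return suggestion
        failed.append(suggestion)
    return None                             # give up and pass the raw input through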

Below is the concrete LLM prompt I used yesterday.

parser_preamble = '''
This response is due to a limitation of the game engine and its text parser.  

The game understands the following verbs and prepositions, where all words on the same line are synonyms.
{{{verbs}}}

The game understands the following noun phrases, where all noun phrases on the same line are synonyms.
{{{nouns}}}

[NOTE: In the above, sometimes only the initial six-character word prefix is used.]
'''

parser_rewrite_tries = '''
The following alternative commands (one per line) are NOT accepted by the game:
{{{alternative_commands}}}
'''

parser_suffix = '''
Can you rewrite the player's command into one that the game accepts?
When making your suggestion, try to align as closely as possible with the
player's intent.

Please enclose your suggested new command in triple plusses, as follows:
+++SUGGESTION+++
'''
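
And here is roughly how the {{{…}}} placeholders get filled in to assemble the full prompt. Again just a sketch: the function name, the simple .replace() substitution, and the two lines of context at the top are illustrative, not my exact glue code or wording.

def build_prompt(player_command, game_response, verbs, nouns, failed):
    """Assemble the full prompt from the template pieces above."""
    prompt = (
        f'The player typed: "{player_command}"\n'
        f'The game replied: "{game_response}"\n'
        + parser_preamble
            .replace("{{{verbs}}}", "\n".join(verbs))
            .replace("{{{nouns}}}", "\n".join(nouns))
    )
    if failed:
        prompt += parser_rewrite_tries.replace(
            "{{{alternative_commands}}}", "\n".join(failed)
        )
    return prompt + parser_suffix

Wiring the two sketches together is then something like translate_command(lambda failed: build_prompt(cmd, resp, verbs, nouns, failed), ask_llm, game).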

This reminds me of Racter

It was a very old game. About the only thing I remember is “The policeman’s beard is half constructed.”

4 Likes

“I’m afraid of jazzy church mice.” Never seen this one before, was cool to look into! Thanks for sharing!

1 Like

Regardless of the reasons provided, I think the real criterion for a topic being banned is whether it has generated a flame war. That depends on whether the pros outweigh the cons for a particular group of people, and on whether it generates strong emotions or beliefs among some of those people. For text generation, perhaps the low-level disdain for someone who sends you an AI-generated email or relies on AI to generate ideas does not rise to the necessary level, or at least not yet. There are absolutely people whose livelihoods are being threatened by text generation, and we will all suffer, as well as benefit, from it. For IF, creating a more flexible parser is perhaps the least objectionable application of AI, and not at all representative of the promise or threat it poses.

2 Likes

That was kind of my impression. You can converse with an AI chatbot, but in my brief experience with ChatGPT it’s like having a friend who only answers questions in short book-report form.

5 Likes

Speaking of mechanical story idea construction, does anybody have any comment about Tarot Story Maker? Why do I keep thinking about SeedComp?

PS: I broke down and got myself a tarot card deck. It’s been interesting.

Edit: Also look at Paizo’s Pathfinder Harrow Deck.

Edit: Fable maker

There’s a Tarot+ChatGPT at 10:27

1 Like

I dunno about SeedComp, but the SingleChoice Jam had a Tarot-like entry:
Le plaisant jeu du Dodéchédron de Fortune

4 Likes

It’s interesting that this is a common topic of discussion regarding AI art (here and elsewhere), yet it’s not as widely discussed that ChatGPT is trained on text “stolen” from the original authors. Although there are suits pending against OpenAI contending just this.

It’s a huge problem for sites that depend on advertising if their content can be summarized by Bing or Google using AI.

4 Likes

I’ve also been wondering why people treat text generation and image generation so differently. One aspect could be the different use cases. Text generation has many use cases and can be used in more tool-like ways, or its use could be hidden entirely. On the other hand, image generation seems mostly to get applied as a direct replacement for artists creating an image, although there are lots of other possible applications for images: creating audio and music from images of waveforms, image processing (I’m thinking of the image-processing precursors of AlphaGo), …

3 Likes

Without intention of fostering debate about how one form of plagiarism might be “okay” or “more acceptable”…

The simple answer is “most everyone can and does write; many fewer people can create original graphic visual art.” If someone asks me what a turtle is, I can easily write a serviceable description myself. If I need to show them visually, I’m likely going to share or link to someone else’s photo or illustration (unless I happen to have my own photo in the camera roll of my phone), without claiming to have taken the photo or drawn the art and without benefitting from it commercially.

Writing is made of words that other people also use...

Words are made up of recombinations of 26 letters in English, and you can’t really claim plagiarism if someone uses the same words you do in a different order, especially in conversation that isn’t benefitting anyone commercially. We’re all using the words “the” and “for” and “misanthrope.” It’s also possible to quote people with attribution, or to adopt turns of phrase like “a stitch in time saves nine.”

Most people know how to write, and using similar phrasing to another person or author doesn’t necessarily imply plagiarism unless it gets to the level of lifting entire paragraphs and document structure. And unless I’m making a speech and claiming I came up with “The only thing we have to fear is fear itself,” or trying to sell a book where I copied and pasted sections wholesale from someone else’s writing without attribution, prose is always going to use “pieces” of something someone else has already written.

Visual art is not created from a fundamental common alphabet or phrase language. Even though there are shared styles, like “anime” or “surrealism”, and setting aside that art does have its own version of remixing known as collage (where sources are credited or well known enough that it’s obvious), it’s much easier to lay one piece of art over another and agree “yep, that’s copied.” Where it’s not credited collage or outright theft, there is also appropriation (link NSFW - fine art nude model photo further down the page), which is physically copying or re-drawing an artwork. That dips into the “fan art” realm, but it can become a problem if someone claims renderings of someone else’s art or characters as their own.

The legal system had to arbitrarily draw a line in music plagiarism, since there are only 12 fundamental notes to create melodies from. If a melody or song isn’t attributed as a “remix” with credit to the original, a composer can only “quote” up to seven consecutive notes of another melody before they are potentially liable for plagiarism. It’s agreed that an accidental lift likely won’t last eight notes or longer unless it’s via cryptomnesia (inadvertent, unconscious plagiarism; the inverse of “great minds think alike”), for which a composer would still be liable even if they didn’t intend it.

9 Likes

This brings up the interesting idea that the primary users or beneficiaries of the technology will be (a perhaps small number of) experts in the respective field - e.g. artists who can use image generation effectively and have the skill to touch up the generated images.

From a legal point of view the question I think is about fair use and copyright, which itself is based on the idea of fostering productivity and creativity. How exactly this applies to AI is not clear.

In any case I’m not sure there are simple answers or complex answers, but I like the points you’re making.

1 Like

Never truer words spoken. It all relates to the level of damage perceived by another, and then affirmed by others with a vested interest. How’s that for a vague and accurate statement? :wink:

That’s the crux of the argument: if you’re going to hire an artist or an author anyway, why even bother with machine-generated art or prose?

I do understand that AI might have a place and be useful during pre-production for concept art/mood-board/placeholder text, but none of that should make it into the finished commercial product.

To swerve this hopefully back into the IF lane: AI text and story generation is great as a tireless resource for brainstorming ideas, but ideally an actual author then takes bits of that as inspiration for their writing rather than copy-pasting it wholesale.

If you’re going to use AI within an IF engine itself, it might be good for making NPCs have unique and engaging things to say, rather than text-generative randomized atmosphere messages, but they’d need to be directed and fully aware of the narrative structure. I’d worry a bit about an AI construct getting too creative and involved in their improv roleplay and derailing the story, or deciding they actually have a different role than specified, if they can just make up anything as they go.

I could see AI working really well in an experimental format similar to Aisle, which was a one-move game that basically became a wildly different story, with varying resolutions, based on the player’s single allowed action command.

4 Likes

To me the difference is that most people can write something better than ChatGPT can output, but most people can’t draw something better than Stable Diffusion or whatever can output. So ChatGPT isn’t being used to replace writers, it’s being used to replace knowledge workers of various sorts…which is its own problem, seeing how confidently it produces absolute nonsense.

8 Likes

I could list hundreds of technological advances that wiped out entire fields of jobs, companies, and products. Interactive fiction should understand that better than anybody. I’ve yet to see why AI will be any different. Unemployment in the US has been at historic lows for some time now, so I hardly feel worried that there is any kind of real threat from AI. If the question is whether a small community of independent writers and game designers with limited resources could embrace AI to help produce more professional and polished games, then I think the answer is yes.

1 Like