Is AI Generated IF Discussion OK Here?

Hey, I’m new here and I was wondering if it’s alright to discuss tools that use AI to make an interactive fiction game. It doesn’t appear to be listed in the rules, but I know AI can be a controversial subject so I wanted to check first before stepping on any toes unintentionally.


Welcome to the forum!

There’s no rule against talking about these kinds of tools (note: I am not a mod!), but conversation about them has often gotten a little contentious, and my sense is that the community is generally pretty down on them, especially “AI”-generated art. There have been some threads about folks using LLM tools to try to write IF code, sometimes just for the comedy value, sometimes seriously. Those don’t tend to attract much controversy, though they don’t tend to get much engagement either.

Sorry to be a bit of a downer – and I should admit I’m pretty firmly unexcited by this stuff, so others with a different perspective might have a different view! Basically I guess I’d just say that this is a friendly crowd and discussions of these tools are totally fine by the rules, but it probably wouldn’t hurt to be mindful of the concerns when posting about them.


There’s a discussion that’s started up a few times on the subject of “could LLM-based language parsing improve parser command recognition?” (Not trying to generate big chunks of a game with AI – just doing a better job of turning player input into game commands.)

My answer to that is “probably yes, but it will be a lot of work; it’s not something that ChatGPT can just magically do.” And the discussion hasn’t progressed into the specifics.

  • Procedural text generation has always been a tool used in IF and is fine.
  • Discussion of AI text and conversation generation and command-parsing as it relates to games is also perfectly fine.
  • AI generated Art discussion - we’d like to avoid as many of our community are working artists with strong opinions since these utilities are known to scrape the internet for reference images indiscriminately without notifying, crediting, or compensating the original authors. We had a discussion here and it quickly devolved into a flamewar. Arguing the merits and cons of graphical AI art will likely be quickly shut down.
  • AI-generated IF discussion - While you can discuss the subject and usage of AI as it relates to textual IF, what we do not want is someone using an AI to masquerade as a real person/forum user with an account, having conversations with other unwitting people in a manner that is confusing and potentially deceptive.

If you post text or conversation generated by an AI here for demonstration purposes, we ask that, if it’s not already clear the material is AI-sourced, you cite the source as you would a quote from a person or reference material.

If you use AI in an assistive manner to write an essay (for example), it should be attributed as your own words, and you should ideally also disclose which AI tool was used in writing the material. You, as the actual live person who posted any AI-generated prose, are responsible for the posted material.

The reason for this: An AI construct or LLM cannot itself legally agree to the Code of Conduct and cannot be moderated in the usual sense if it happens to offend someone or misbehave.

TL;DR: Discussion of AI text and tools is fine, AI art-generation is not, and this is not the place to field-test your AI to see if it can fool people and pass the Turing test.


I don’t see any reason not to talk about it. I for one am quite interested to keep up with the state of AI and its potential applications to IF.

I recognise some people don’t want to talk about it, but then they don’t have to read those threads, I guess.


Games have been developed with AI assistance before. BJ Best made You Will Thank Me As Fast As You Thank a Werewolf, a game whose text was AI-generated using his own writing as a base. It was an interesting experiment; legally sourced, but not very coherent to play. And more recently, The Fortuna used AI-generated writing and imagery, which it caught flak for.

Creative communities do have a strong pushback against AI, so be aware of it. For instance, the Neo-Interactives forbid it in their jam submissions. It’s possible to discuss it, but it’s not highly regarded by a lot of users.


I feel using AI for writing code is fine, but the actual story should remain human.


Actually, it isn’t hard at all to get an LLM to rewrite input commands into something the game parser understands. Below is an example from Zork. It even fixes typos. Getting it to work in my LampGPT experiment took only a few hours.

West of House
You find yourself in a sprawling open field, the air fresh with the scent of
adventure. To the west lies a mysterious white house, its front door boarded up
as if hiding secrets or guarding against unwelcome visitors. Nearby, a quaint
mailbox stands, seemingly waiting for your curiosity to explore its contents.

>with a fierce spirit uproot the communication receptacle
With all the might you can muster, you attempt to uproot the mailbox, but it
remains firmly anchored in the ground, as if mocking your efforts. The mailbox
stands unyielded, a silent guardian of correspondence.

>traverse the ground towards the southeasterly direction
South of House:
You have moved to the south side of the white house. Here, the walls are
unyielding; no doors invite entry, and windows, firmly boarded, deny even a
glimpse inside.

>unbar the windows
The windows here are all securely boarded up, leaving no way to unbar them from
the outside.

>head in the direction of the rising sun
Behind House:
You find yourself at the rear of the white house. A narrow path snakes into the
dense forest to the east, promising adventure. Near you, in a secluded corner of
the house, a small window beckons with its slightly ajar stance, hinting at
secrets within.
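For anyone curious how that layer fits together, here’s a minimal sketch in Python. To be clear, this is a hypothetical illustration, not LampGPT’s actual code: the prompt text, the `call_llm` function, and the canned lookup table are all my own inventions, with `call_llm` stubbed out so the example runs without a network connection. In a real setup that function would be a request to whatever model API you’re using.

```python
# Sketch of an LLM "command rewriting" layer in front of a classic parser.
# The idea: before the game parser sees the player's input, ask a language
# model to restate it in the terse verb-noun style the parser understands.

REWRITE_PROMPT = (
    "You are a preprocessor for a 1980s text adventure. Rewrite the player's "
    "input as a short command the game parser understands (e.g. 'open mailbox', "
    "'go east'). Fix typos. Reply with the command only.\n\n"
    "Player: {inp}\nCommand:"
)

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (an API request in practice).

    Stubbed here with a lookup table so this sketch is self-contained.
    """
    canned = {
        "with a fierce spirit uproot the communication receptacle": "take mailbox",
        "traverse the ground towards the southeasterly direction": "go southeast",
        "head in the direction of the rising sun": "go east",
    }
    # Recover the player's line from the prompt, then "answer" it.
    player_line = prompt.split("Player: ")[1].split("\nCommand:")[0]
    return canned.get(player_line, player_line)  # fall through unchanged

def rewrite_command(player_input: str) -> str:
    """Turn free-form player input into a parser-friendly command."""
    return call_llm(REWRITE_PROMPT.format(inp=player_input)).strip().lower()
```

In practice you’d probably also echo the rewritten command back to the player, and fall back to the raw input whenever the game parser rejects the model’s output.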

In Alberta, the colleges are allowing students to use AI to supplement their work assignments. Most instructors are wary of AI these days and can spot suspicious usage. In the technical courses, a teacher can challenge a student and if the student cannot explain their reasoning in an essay or provide a reasonable understanding of the programming code they submitted, they will immediately fail that class. I feel this is the proper approach given how prevalent and useful AI is now. I personally don’t use AI for anything, but I can appreciate its utility.


Is the actual Zork 1 even involved here? It’s hard to tell from your example.

Can you get it to display the generated commands? (That the original game would understand.)


Thanks for the response. I’ll have to consider if I’ll post it here, then. The main focus of what I would like to discuss would definitely be the text generation but the platform itself doesn’t have an option to disable the resulting images so it would possibly come up at some point.

Additionally, thanks to all others for their personal takes as well!


That’s one of the reasons I want to discuss this particular platform here - the creator has obviously spent time working on getting them to create something like a coherent story. The better models he hosts are able to create fairly coherent games that successfully remember the player’s past actions and dynamically suggest future courses of action based on what the player has done so far.

Of course, there are a lot of conditional statements in there… It’s definitely nowhere near ready to replace an actual curated experience by a good interactive fiction writer, but it’s getting a lot better.

Personally, that’s the sort of thing I’d actually be very wary of. AI is usually right-ish, and VERY authoritative about what it “knows”, which can be a very misleading situation… I could easily see it becoming a problem when a student starts trusting it too much to double check its knowledge elsewhere, especially if that habit continues out of the classroom and into a professional job.

That’s not an issue with obvious IF, thankfully. It’s pretty hard to confuse a procedurally generated game about defeating a demon lord with a historical textbook.


Agreed. Schools want to foster creativity and critical thinking. We all want smart students. Employers want to trust the schools are graduating capable students. What we don’t want is a generation of students who know how to manipulate AI in an effort to masquerade as critical thinkers.

This is not to say that people who use AI are being disingenuous, but my fear is as time goes on, an over-reliance on AI from a young age will create a “dumber society”. We have to use AI responsibly. This is what I like to talk about when it comes to AI.


“Is AI Generated IF Discussion OK Here?”

Discussion of AI Generated IF: Probably.

AI Generated Discussion of IF: Please no. :wink:


Too late… and not just students, either. People in important positions have already been caught relying on AI instead of vetting information themselves. I’ve said before: current AI is really stupid, but it presents information with no hesitance or uncertainty. If it were human, it would be the worst kind of pathological liar. But it isn’t human and doesn’t think; it just spits out words according to probabilities.


AI-generated interactive fiction is a fascinating blend of technology and storytelling, where artificial intelligence is used to create dynamic narratives that adapt to the choices of the reader. This form of fiction allows for a personalized experience, as the story can branch in various directions based on user input, leading to multiple endings or story paths.

Platforms like NovelAI offer tools for AI-assisted authorship, enabling users to construct unique stories with the help of AI algorithms trained on real literature. These platforms often provide a sandbox environment for imagination, where users can steer the narrative, emulate famous writers’ styles, or drive their writing in specific thematic directions.

AI Story Generators like Squibler help craft narratives instantly, transforming ideas into captivating stories for novels, scripts, or screenplays. Similarly, NovelistAI uses AI to craft distinctive novels and interactive books, offering a range of genres and styles for a personalized reading experience.

The key to AI-generated interactive fiction is the AI’s ability to remember and adapt to the user’s choices, maintaining consistency in the story while providing a sense of freedom and exploration. This creates a unique storytelling experience that can be both entertaining and thought-provoking, as it merges the creativity of human writers with the computational power of AI.


That certainly hasn’t been my experience. AI even lies about lying. :grin:


@Hidnook Ha! You’re funny. :wink:

I wonder if all that generated text could be summarized into 2 or 3 sentences by an AI and actually capture the crux of what is being said. Like, can AI show some level of artificial understanding?

I just looked up Artificial Understanding and got a hit…



I think that’s the entire crux of it. For an AI to simulate “human understanding”, there has to be a lot of data. And it also depends on what the goal of the AI is.

The original reduction I’ve heard was:

A computer is a fast, accurate moron;
A human is a slow, error-prone genius.

A computer can only parrot back math and information from the database it’s been given, without having any opinion about which part of the data is important to the query.

For years and years, the concept of an AI was that it could “answer all your questions conversationally” and “know everything”; in SF, an AI could essentially be tasked with monitoring, say, a spaceship and making logical decisions based on the data it receives. Is it dry in the garden and plants are dying? Increase the watering schedule and humidity to prevent that.

The problem is this isn’t really AI - it’s a complicated flowchart that makes weighted decisions based on the data it’s fed and goals such as “prevent the plants in the garden from dying.”

Recently the focus has been more on a model that also learns over time from interactions as a human does, and makes associations from its interactions. If the AI makes a wrong assumption, you can tell it it’s wrong and it should remember that - actually adjust the weighting of giving that piece of advice again, or perhaps include disclaimers “I think [this] but it’s possible that it’s actually [this] [as I’ve been told].” If you want your AI to have “personality” it can’t really know everything and be perfect.

So it depends on the purpose of the AI: It can be a knowledge base on a specific subject it’s been fed all factual information about, such as a consumer product and it can answer sort of unexpected questions: “Is the blue version of this thing pretty?” “All our products are designed to be attractive. In fact 58 percent of customers I’ve also talked to have expressed they like the blue version and find it attractive.”

Or it can be allowed to learn from mistakes (as told) and have opinions about the quality and relevancy of its own information and what the correct answer should be based on previous experience.

The problem is that the LLM has to have conversations with regular people to learn things, and people carry biases, and bias is learnable. This is why, so many times when a company releases an AI character the public can interact with, trolls will invariably feed it wrong information or teach it a bunch of racist or sexist ideas, and it gets pulled down.

Which technically does make it akin to a human, in that it can have wrong ideas. That’s where the techno-horror comes in, with fictional situations like HAL deciding the humans interfering with it are dispensable, or WarGames, where the AI decides that mutually assured nuclear destruction is inevitable and works to assist the process.

In short, if you want “human-like conversational learning” AI - it’s going to make mistakes. Part of conversation is learning things you didn’t know, are wrong about, and potentially changing your opinion or personal knowledge. And seriously, you don’t want that when your spaceship decides the best route to your destination is straight through the sun it’s been told is “not really as hot as the others” and the efficiency overrides the need for the passengers not to immolate during the journey.

To avoid techno-horror, you don’t put “learning” AI in charge of anything that might pose a danger to its users.