I think the ideal way to have generative AI create decent interactive fiction is to build an IF dataset and then fine-tune a non-chat model on it. That would be a challenge: you'd need to collect works, create transcripts of them, and feed those transcripts in as training data. While the resulting model may or may not be a derivative work of the transcripts, the transcripts themselves will be copyright-protected, so you'd need to seek permission before sharing the dataset - and you absolutely should share it, because a public dataset of IF transcripts would be a boon to world-building and to using AI models in the future. Combined with other technology, it might ultimately help pave the way to ordinary people creating Star Trek holodeck-style content at home.
One way to create a dataset would be to use an AI tool to convert pages of public-domain books into the IF style, then upload the result to Hugging Face. It wouldn't be ideal, but it'd be a step in the right direction. Another idea would be to use the Open Assistant code to build a dataset where people take turns playing the computer and the player - but you'd need thousands of conversation trees, and people to rate them.
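To make the book-conversion idea concrete, here's a rough sketch in Python. Everything here is an assumption rather than an existing tool: the prompt wording, the JSONL field names, and `rewrite_fn`, which stands in for whatever local model or API call you actually have available.

```python
import json

# Hypothetical prompt template - the exact wording would need experimentation.
REWRITE_PROMPT = (
    "Rewrite the following passage as an interactive fiction transcript, "
    "with room descriptions and '>'-prefixed player commands:\n\n"
    "{passage}\n\nTranscript:\n"
)

def chunk_text(book_text, max_chars=2000):
    """Split a book into passage-sized chunks on paragraph boundaries."""
    chunks, current = [], ""
    for para in book_text.split("\n\n"):
        if len(current) + len(para) > max_chars and current:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def build_dataset(book_text, rewrite_fn, out_path="if_dataset.jsonl", max_chars=2000):
    """rewrite_fn(prompt) is whatever model call you have (local or API).

    Writes one JSON object per line, ready to upload to Hugging Face."""
    with open(out_path, "w") as f:
        for passage in chunk_text(book_text, max_chars):
            transcript = rewrite_fn(REWRITE_PROMPT.format(passage=passage))
            f.write(json.dumps({"source": passage, "text": transcript}) + "\n")
```

JSONL is a convenient choice here because Hugging Face's `datasets` library can load it directly.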
Anyway, regardless of whether the model has been sharpened up with fine-tuning, don't use a chat interface to make your content. Use text completion instead so you get the correct narrative style: give the model as much context as you can and have it create continuations.
For example, ChatGPT originally did something like this:
This is a transcript of a conversation between a user and a very helpful, ethical, friendly AI chatbot:
User: Hi. Can you help me?
AI: Yes of course, I’d be glad to help; just ask whatever you like!
User: What’s the capital of France?
AI:
Then you tell it to keep generating text until it emits "User: ", insert the next prompt from the user, and continue. But it turned out the model would tell you how to do unethical things, so OpenAI fine-tuned it on conversations written by low-paid workers, who also rated the responses. The result is tuned for "this is what we African workers think our bosses think woke US corporate types want to hear" - and that's how ChatGPT responds! Using the OpenAI playground rather than ChatGPT sidesteps some of this, but it isn't free, it's still trite, and it generates content that lacks the rawness of a real author. It can only explore a very small subset of art.
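The generate-until-"User: " loop described above can be sketched like this. `complete` stands in for whatever text-completion backend you're using (a local model, the old completions API, etc.) - its name and signature are my invention, not a real API.

```python
STOP = "User: "

def cut_at_stop(generated, stop=STOP):
    """Keep only the text before the model starts speaking for the user."""
    idx = generated.find(stop)
    return generated if idx == -1 else generated[:idx]

def chat_turn(transcript, user_message, complete):
    """One round of the loop: append the user's line, ask the completion
    backend for a continuation, and trim it at the stop string."""
    transcript += f"User: {user_message}\nAI:"
    reply = cut_at_stop(complete(transcript)).strip()
    return transcript + " " + reply + "\n", reply
```

Most completion APIs let you pass a stop sequence directly instead of trimming by hand, but the effect is the same.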
There are other models, though, and recently LLaMA 2 was somewhat decensored. You can likely find instructions on how to run it on Google Colab or Hugging Face for free. RWKV is another good model, and there are more out there. Ideally you want a long chain of text that already exists, ask the model to autocomplete the end, then cut the start off and carry on - that gives you the best consistency.
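The "cut the start off and carry on" trick might look something like this sketch. The character budget is a crude stand-in for a real token count (an assumption for brevity - in practice you'd measure with the model's tokenizer), and `complete` is again a placeholder for your completion backend.

```python
def trim_context(text, max_chars=6000):
    """Drop the oldest text once the prompt outgrows the window,
    cutting on a paragraph boundary so the model sees clean prose."""
    if len(text) <= max_chars:
        return text
    tail = text[-max_chars:]
    cut = tail.find("\n\n")  # drop the likely-truncated first paragraph
    return tail[cut + 2:] if cut != -1 else tail

def continue_story(story, complete, max_chars=6000):
    """One autocomplete step: trim old context, generate, append.
    The full story is kept; only the prompt is windowed."""
    prompt = trim_context(story, max_chars)
    return story + complete(prompt)
```

Keeping the full story while windowing only the prompt means you never lose your own record of the narrative, just the model's view of it.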
If you're anything like me, what you really want is to give the AI examples of your writing style and of works that you love, then have it build the details of your world on the fly, remember what's going on in it, and introduce characters you've written yourself from example dialogue. That requires a bunch of different technologies I'm not sure exist yet, plus datasets that still need to be gathered and fed into models people can use freely, so that anyone can innovate. The most promising stuff at the moment is built on top of LangChain using decensored models. Character.AI also has some cool things going on in this area - if that's your thing you should check it out, but it wasn't quite there the last time I looked.