The line between IF and text-based RPGs

This isn’t necessarily entirely true. There is, for example, some interesting work providing strong evidence that a transformer trained to model valid sequences of moves in the game Othello actually learns to model the board state (and therefore, by extension, also learns which positions are legal to play, what happens to the board after each move, et cetera).

Obviously that’s a toy example that doesn’t directly translate into any particular strong claims about larger LLMs, but the point is that it’s possible, in principle, for models with this architecture to implicitly learn some sort of world model if it helps with the sequence prediction task they’re trained on.

3 Likes

Global variables and “Every turn” rules in Inform 7… I wish I knew how to quit you :pensive: :heart:

I would say because they cause surprising dependencies. Say you write an application that displays text in a box, and you implement it with global variables. You’re in for a very bad time if you later decide that the app needs two text boxes. You’ll have to untangle everything and define some sort of context that you can have two of. It’s easier to design that way from the start.

The rule is almost completely irrelevant for IF games. If you’re writing an Inform game, just go with globals for everything. You might want to attach properties to an object, which is sort of the same question, but you’re really starting from a different place.
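For instance, the two idioms look something like this (a quick sketch, with names made up for illustration):

```
[A global: declared once, usable from any rule.]
The suspicion level is a number that varies. The suspicion level is 0.

[The same sort of state attached to an object as a property.]
The Lab is a room. Dr Finch is a woman in the Lab.
A person has a number called suspicion. The suspicion of Dr Finch is 2.

Instead of examining Dr Finch:
	increase the suspicion level by 1;
	increase the suspicion of Dr Finch by 1;
	say "She notices you staring."
```

Either works; the property version only really starts to pay off when several people each need their own suspicion.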

Other kinds of games, it really depends on the framework and what you’re doing. RPGs tend to have multiple instances of things, so object-oriented thinking is natural.

9 Likes

Now he tells me! :wink:

I’ve always been cautious in Inform because people kept telling me globals were bad.

Re: locals, apart from times when they’re systematically demanded, it can be mentally nice to have a variable for a character or object attached to that character or object. But then you have to write things like ‘now the chapter 3 door-slammed-in-anger of Jane is true’. You could write a ‘to decide’ phrase to make this easier to parse, but you don’t want to do that every time.

So I’d often turn it into a global with a more programmy name. ‘now c3-jane-doorslam is true.’ (Half the people reading this will probably think, ‘That’s not a great improvement’. But it no longer has an ‘of’ clause, and it’s about half the typing.)
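Roughly, the two shapes look like this (just a sketch; the names are only for illustration):

```
[Attached to the character, with a 'to decide' phrase to keep later code readable:]
Jane is a woman.
Jane has a truth state called chapter 3 door-slammed-in-anger.

To decide whether Jane slammed the door in chapter 3:
	if the chapter 3 door-slammed-in-anger of Jane is true, decide yes;
	decide no.

[Versus the global with the programmy name:]
C3-jane-doorslam is a truth state that varies.
```

With the ‘to decide’ wrapper you can later write ‘if Jane slammed the door in chapter 3’, which reads nicely, but as I said, writing one of those for every flag gets old fast.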

Yeah, off in completely different Lua-land where I wrote that Roblox game, I’d never use a global and I understand all the reasons why.

-Wade

5 Likes

I thought making a variable global without a good reason was bad because it opens up the potential for the variable to be modified by code that shouldn’t be modifying it. Similar to why you make an object’s properties private and make non-class code go through getters and setters to access said properties.

Of course, the thing about rules of this sort is that they are generally good advice that isn’t necessarily universal across all languages, paradigms, and types of program, and sometimes, breaking the rule is arguably better than slavish adherence.

2 Likes

I’ve been wondering about that. The current state seems to be that you can’t give an LLM/AI full access to the internet (it may simply be too much data, better served by an intelligent search query than by AI interpretation), and a “learning” AI can become biased by learning from random conversations with people.

But I do think AI would be more useful if given a smaller dataset to work in. As I said, chatbots and AI and LLMs seem to be most useful for summarizing a huge amount of data, so it might work best if, say, instead of giving the public access to query the AI, it were used internally by customer service agents as a knowledge-base query tool.

At my work, we have a Wiki resource explaining hundreds of our processes and specific information about insurance plans. If I need information about a specific thing, it’s sometimes scattered, duplicated, and inconsistently updated across 10 different wiki entries where it might only be mentioned tangentially, and it can take me 10-15 minutes of paging through to find the specific detail while my caller waits. An AI would be really useful if I could ask “What are the specific restrictions of this type of Medicaid plan?” and have it pluck everything relevant to my question out of those multiple database entries and present it to me directly. It wouldn’t have to quote the source word for word, but it would help me find the one paragraph of information I need that’s buried in a 20-page source document.

The other problem is having an AI that “learns” from talking to other people who might troll it, as we’ve seen with some public chatbots that distressingly begin to believe the racist things people say to them on purpose. If the AI only had the dataset of what’s in a game and didn’t share learned knowledge between players/users, that would be less of a problem.

2 Likes

The current state seems to be that you can’t give an LLM/AI full access to the internet (it may simply be too much data, better served by an intelligent search query than by AI interpretation)…

It seems a bit like you’re thinking of LLMs as primarily a search technology, when that’s not really a good way to think about them IMO.

Modern LLMs are essentially trained on “the entire internet” (GPT-2 was trained on every outgoing Reddit link, GPT-3 uses Common Crawl + some other corpora, GPT-4 who the hell knows?), but they don’t have access to the “live” internet at runtime, and their ability to “memorize” the training data is both limited and, to a certain extent, in conflict with the goal of generalization that’s core to the idea of LLMs as a general AI tool.

There are, of course, specific AI products that access the internet in some way (for example, by doing a web search and then adding the results into the LLM prompt behind the scenes) but that’s essentially building an additional feature on top of an LLM, not part of the LLM technology itself, and it’s always gonna be limited by the quality of the search tool, which will not itself be an LLM.

At my work, we have a Wiki resource explaining hundreds of our processes and specific information about insurance plans. If I need information about a specific thing, it’s sometimes scattered, duplicated, and inconsistently updated across 10 different wiki entries

The sort of problem you’re talking about - searching for relevant data within a corpus - is a specific field of research called Information Retrieval, which certainly involves plenty of machine learning and NLP techniques nowadays, but LLMs specifically are not really a tool intended for IR.

The other problem is having an AI that “learns” from talking to other people who might troll it, as we’ve seen with some public chatbots that distressingly begin to believe the racist things people say to them on purpose. If the AI only had the dataset of what’s in a game and didn’t share learned knowledge between players/users, that would be less of a problem.

Unfortunately, the biases in an LLM don’t just come from talking to users. Even pretrained models (that don’t continue to learn after being deployed) consistently display all sorts of biases. This has actually been a problem in NLP since even before the current LLM hype began; see this article from 2017 about how following standard-practice steps to train a sentiment analysis model still results in racism.

You’re right that using a dataset restricted to facts about your game would probably mitigate that; unfortunately, the generalization / “few shot learning” abilities of LLMs fundamentally rely on a truly massive amount of data (e.g. “basically the entire internet” as mentioned previously). You simply cannot train something with the same abilities on a small dataset. You can finetune an LLM on a smaller dataset to be more focused on your particular task, but this doesn’t necessarily get rid of undesirable biases; conversely, there are ML/NLP things you can do with a small dataset that aren’t LLMs, but then, well, it’s a different thing and not an LLM.

(EDIT to elaborate on that last issue: in particular, the ability to interpret an arbitrary natural-language prompt and respond in natural language is something you can only expect from an LLM trained on an ungodly amount of data)

4 Likes

Beyond Zork interrogates where that line is, with stats and levelling and lots of combat. But that requires randomness, and either limited/no UNDO or tolerating very easy save scumming, which makes something similar unfashionable.

Are you familiar with GNS Theory in tabletop roleplaying games? IF can be written to be Narrativist, Simulationist, or Gamist, but the first of these – Narrativism – is what really unlocks the power of the genre. Your question seems to presume Gamism as the definition of what an RPG is, which overlooks a lot of really cool modern indie games.

Take a look at Hamlet’s Hit Points, by Robin Laws. He divides scenes into different kinds based on their function, with the main two being dramatic scenes (interaction between characters) and procedural scenes (a character doing something uncertain with the environment or an opponent). IF has been described as “a crossword at war with a narrative,” with the puzzles serving as the procedural part more than randomized combat with stats.

If you mean “RPG” in the sense of gamist randomization with stats, IF can do that, but it’s not the best use of the medium. If you take a wider view, you can totally write IF in the style of a game like DramaSystem or Gumshoe.

4 Likes

I’m getting this framed.

3 Likes

I’ve always understood ‘IF’ to be a classification of the medium/presentation first and foremost. So a text-based RPG would still be IF, albeit unconventional IF. (And I don’t recall anyone questioning whether Skybreak or Lost Coastlines qualify as IF when they released, despite both having very strong RPG elements.)

Plenty of works that are widely considered IF have about as much storytelling substance tying together the puzzles as your average Call of Duty campaign has story tying together the shootouts (that is to say, almost definitely less than Elder Scrolls or Dragon Age or …), so I’m not quite sure where you’re trying to go with that comparison.

(I could go on but I’d just be reiterating what @VictorGijsbers has already said so much more succinctly than I would have.)

9 Likes

This is tangential but something interesting I’ve noticed about the relationship between the two.

IF has such a wonderful and supportive constellation of authoring tools designed primarily for an audience of fiction writers with minimal programming experience that it tends to attract a lot of people who love and want to make RPGs.

Especially in the IF Reddit forums (for Twine, Inform, general IF), it’s incredibly common to have people asking questions about how to add RPG mechanics, and especially RPG combat, so much so that I’ve considered the possibility of writing a Twine or Inform plugin specifically to help people do so.

I often say to them something like: “Hey, if you have a cool story idea, then why not consider seeing how well you can make your story or adventure work WITHOUT RPG mechanics, and then see if it’s worth sprinkling them in a little later.” I actually feel a little bad for the sea of would-be RPG authors who are dying to make their own RPG and wish the tooling was there for them. Obviously there’s RPG Maker and other tools, but it’s stunning how many people love the usability of Twine and want to leverage it to make their own RPGs.
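For what it’s worth, “sprinkling in” a light stat in Inform doesn’t have to be much code. A rough sketch, with invented names:

```
A person has a number called health. The health of a person is usually 10.

The Cave is a room. A goblin is a man in the Cave.

Instead of attacking someone (called the target):
	decrease the health of the target by 2;
	if the health of the target is at most 0:
		say "[The target] collapses.";
		now the target is nowhere;
	otherwise:
		say "[The target] staggers back."
```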

5 Likes

A part of me wants to make a Twine-like RPG that can stand on its own without a story. Sacrilege, I know! :wink:

2 Likes

Personally, I take the view that “interactive fiction” sensibly encompasses anything in which you interact with a fictitious world and your choices shape which of multiple outcomes you get. I figure the text/graphics distinction is just an implementation detail. This embraces a huge number of roguelikes and visual novels and CRPGs and other videogames. (I’m not making an argument for changing criteria for the IFComp or for what’s included in the IFDB or anything, just noting that this is what makes sense to me as taxonomy.)

5 Likes