Thank you! This is a worthy thread. I have had to place a bookmark since I am too exhausted right at the moment to read it carefully.
Jeff
That some game may access an external server during its basic game loop is a very specific implementation detail that could be relevant to devising some technical taxonomy, but it doesn't seem promising to me as a basis for categories of interest to players.
Perhaps a more promising angle for the distinction I think you're getting at is one of coherence: is the game's behavior ultimately out of the hands of its creators, such that they have limited confidence in their ability to predict its behavior? But that still gets very sticky when you figure in procgen, random elements, and good old-fashioned bugs…
This is horribly rudimentary, but I feel like the main thing that defines "parser game" in the collective consciousness, so to speak, is just the fact that… you type stuff.
Player types stuff. Game prints response. Player types more stuff.
That's it.
Whatever is happening under the hood might interest a subsection of the audience, but when people in general talk about parser games, they're talking about games where you type stuff. The typing of stuff is the "parser aesthetic." Featuring that aesthetic, even without any underlying model, would be enough for many people to say something is a parser game.
At least, it would've been enough in the past. With the introduction of AI games, more players might care about technical distinctions. (Unless they don't care, and humans and AI get lumped together under the same aesthetic.)
It feels to me like the main issue here is that "parser game" is a well-understood quantity, but now there are "live AI" tools that can make things that sorta look like parser games, but generally don't have the same affordances (or if they do, they're pretty bad at them due to hallucination, indeterminacy, etc).
[I think we're all leaving aside the case where AI is used to generate assets (code, text, images) for a regular-degular parser game, since of course those are just regular-degular (probably not very good) parser games. But that's why I'm saying "live AI" here]
To my mind, the fruitful thing is to think about games that use live AI as a separate kind of thing: IMO they're way more similar to Eliza than Zork, so I tend to call them "chatbot games". To the extent a chatbot game apes a parser game in certain ways, those similarities I guess can be interesting to dig into, but these fish aren't fowl and it's probably less confusing to think of them as their own category.
I also feel like this is a kind of unpleasant conversation to be having right now given how things went down with ParserComp, but I might feel differently later on; there have been some reasonably fun chatbot games, after all.
I mean, maybe? Is this a parser game?
"ParserGame" by Philip Riley
Example Location is a room.
After reading a command:
	let X be a random number between 1 and 10;
	say "[X].";
	reject the player's command.
I bring you all a parser:
Sorry, LARPing Diogenes with the plucked chicken there, but it's hard not to when my fellow fish get into discussing the noted difficulty of categorization and classification.
Well, there's Nick Montfort's Amazing Quest, which basically does exactly that. It ignores the player's input and prints something at random. Is it a parser game? On a purely literal level, you could argue that it's not. It's not exactly parsing anything, is it? But rigidly defined genre definitions aren't very useful, I think, and it's certainly a game that's in conversation with the parser genre.
Meanwhile, I don't think anyone would (seriously) argue that pinkunz's The Typing of the Dead is a parser game. Why? Is it because its response to the player's input doesn't involve words? Neither does Gent Stickman, and that didn't get kicked out of ParserComp. But Typing of the Dead is a game that was created completely outside the parser game community, and any resemblance it has to parser mechanics is a coincidence.
Classification is hard.
I would say so. I'd also say this is a parser game.
Here's the thing. While we can sit here and discuss what's a man and what's a processed chicken nugget all day, I don't care.
I don't really care at the end of the day if AI parsers are parsers or not.
My objections to LLMs are based on all the baggage that comes with using them and the palpable, measurable harm they cause and will likely continue causing. I don't like further legitimizing them and don't have any desire to engage with them.
If someone developed a magic virtual-reality machine, like, say, the Matrix made real, but it turned out it saves a copy of users' minds to a private server, the owners have bought out all politicians and any potential regulation, and the entire operation runs solely on unsolicited used-car warranties and orphan tears, I'd prefer a book.
The quality of the Matrix is irrelevant. My objections aren't going to change if they make a more realistic version.
On the other hand, others, I fear, would check Amazon for the current price on orphan tears.
That's not a mismatch on definitions, that's a mismatch on values.
Just in case I have ever been unintentionally ambiguous.
Yeah, it isn't a "type action, print response, type action" gameplay loop. The visuals are also dominant. Whenever you have dominant visuals, you're skirting more towards the "video game" side of the spectrum. But if someone happened to cook up something similar as an extremely modified Vorple game? I'd be willing to call it a parser game, technically.
That's the distinction I was trying to make. Technical versus aesthetic. Normally, it feels as though people are referring to the aesthetic. With AI games, the technical stuff may draw more focus.
David Cope, the pioneer of AI musical composition, cautioned us against mistaking randomness for creativity. To paraphrase his definition of creativity, a creative work will make a connection between things which weren't formerly considered actively connected. As you might expect, he was very interested in arguing that creative computer programs could exist (that creativity doesn't come from the human soul somehow).
Current AI chatbots generate random text, not creative text. It's random within certain parameters: the output is highly plausible, specifically unremarkable English text, which could plausibly follow whatever text is in its context window. But this is the opposite of creative text, which needn't be grammatical or on topic, and certainly shouldn't be a sort of warmed-over average of what's typically associated with the preceding text. It doesn't need to be coherent in the way that the output of an AI chatbot is, but it does need to be coherent in ways that the output of an AI chatbot is not: that is, having a sense of the overall work it's located in, and having a sense of where that work fits into the broader artistic landscape.
I just want to politely say, I'm not sure how this contributes to the discussion? It's impossible for ChatGPT to have opinions; it doesn't understand the text it's "reading", and can't possibly "say" anything meaningful in response.
I've deleted the post so it won't continue to bother you.
It was less about it bothering me and more about wanting us to consider what kind of conversational norms we want to establish here.
Hmmm, the term "AI-Assisted IF" feels more like an AI which helps the player to play the game (solve the puzzles), not unlike how a parent can help their child to play a game.
Maybe a term like "AI-parsing" could represent the idea of an AI understanding what a player is trying to do, and understanding what the world model is, and being able to decide whether that is a valid action or not. Something which helps lower the impedance mismatch between a player and the world model, without giving too much away about the solutions to puzzles.
This would be an interesting and worthwhile AI research topic (unlike the generation side of things), and I don't think anything like that really exists yet…
Yeah, I feel like there are a few very different ways that an LLM could be used in a parser game (which others have also been saying in this thread, but I like making bulleted lists):
- generating the game's prose and descriptions,
- interpreting the player's freeform input and mapping it onto the game's actions,
- running the world model itself and improvising the game's responses.
Of these, the second one seems the most promising to me (since it's letting the LLM play to its strengths: analyzing freeform English text), but none of them have really impressed me yet. For the first one, AI-generated prose tends to be singularly unimpressive and unhelpful, and for the third, LLMs are notoriously bad at sticking to a consistent world model (since, well, they don't have any such model under the hood!).
I can easily imagine a language model being trained specifically (for example) to convert any English sentence typed by the player into one of the Inform grammar lines available in a particular game, picking the most appropriate one or erroring out with "that's not possible in this game". I don't think this particular thing has been tried yet, but it's very feasible for the technology. That's what LLMs are best at, analyzing the structure of text based on how people use it, and it's a strength that could definitely be leveraged more!
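For the shape of the idea, here is a minimal sketch. Everything in it is hypothetical: the grammar lines, the cutoff, and the matcher itself, where difflib's fuzzy matching stands in for the imagined fine-tuned model just so the skeleton runs.

```python
import difflib

# Hypothetical grammar lines a particular game might expose.
GRAMMAR_LINES = [
    "take lantern",
    "open mailbox",
    "go north",
    "read leaflet",
]

def normalize(player_input: str, cutoff: float = 0.5) -> str:
    """Map freeform input to the closest known command, or reject it.

    A trained model would do this semantically; difflib's string
    similarity is only a runnable stand-in.
    """
    matches = difflib.get_close_matches(
        player_input.lower().strip(), GRAMMAR_LINES, n=1, cutoff=cutoff
    )
    return matches[0] if matches else "That's not possible in this game."

print(normalize("take the lantern"))  # → take lantern
print(normalize("do a backflip"))     # → That's not possible in this game.
```

The key design point is that the model's output is constrained to the game's existing grammar, so the underlying world model stays authoritative and the AI can't hallucinate actions the game doesn't support.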
I would call that a hybrid AIxClassic IF system or perhaps a Weak (vs Strong) AI IF system.
Or during development, an AI playtester could be very powerful. You don't need a smarter parser if you have a way to check more thoroughly for missing synonyms, unimplemented objects, and so forth. It might be hard to get a general chatbot to limit itself to proper IF commands, but maybe it would be possible to filter the output, or to train a smaller LLM for the purpose.
Yeah, having a language model, for example, take your walkthrough, generate a hundred possible synonyms for each command, and see how many of them your parser accepts would also be a reasonable use case. There are still the ethical and environmental issues, but you could find or train a smaller model for a specific purpose like this rather than using ChatGPT.
I was thinking you could probably even accomplish that with a simpler thesaurus-based program. But the neat thing an LLM could be trained to do is intuit a large number of obvious commands just from the game's output. You'd have an infinitely patient, experienced, but kind of dumb player to help you check that things you intended to be obvious weren't actually lateral-thinking puzzles, or to check your scenery for unintentional red herrings.
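The simpler thesaurus-based version of that playtester can be sketched in a few lines. Everything here is an illustrative stand-in: the synonym table is made up, and the toy accepts() function pretends to be a real interpreter accepting or rejecting a command.

```python
# Hypothetical verb-synonym table a tester might maintain.
SYNONYMS = {
    "take": ["get", "grab", "pick up"],
    "look at": ["examine", "inspect", "x"],
}

# Toy stand-in for the game's actual vocabulary.
ACCEPTED_VERBS = {"take", "get", "examine", "x", "look at"}

def accepts(command: str) -> bool:
    """Stand-in for running a command through the game's parser."""
    return any(command.startswith(verb + " ") for verb in ACCEPTED_VERBS)

def fuzz(walkthrough):
    """Yield (variant, accepted) for every synonym expansion of each command."""
    for cmd in walkthrough:
        for verb, alts in SYNONYMS.items():
            if cmd.startswith(verb + " "):
                rest = cmd[len(verb):]
                for alt in alts:
                    variant = alt + rest
                    yield variant, accepts(variant)

report = dict(fuzz(["take lamp", "look at rug"]))
# "grab lamp" and "pick up lamp" come back rejected here,
# flagging synonyms the toy parser is missing.
```

An LLM-based version would replace the fixed SYNONYMS table with generated paraphrases, but the accept/reject harness around it would look much the same.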
Just to poke @pinkunz, I'm going to reiterate my claim that Baba Is You is a parser game. Well, I also happen to think it counts, so it's not just to poke Dan.