Escape the room bounded/constrained AI stats game

Hey everyone, hope you are having a nice day in these unprecedented times :grin:

I am new to intfiction, but it seems like the perfect place to find people who might be interested in something like this. At the moment there are game systems like Ren'Py/Twine that offer a sandbox environment with character progression as you go through certain scenes, plus maybe minigames, repeatable scenes, stat progression, and random elements. These kinds of games need to be crafted by hand piece by piece: the developer has to write the story, curate the images/audio/animations, program the stats/animations/random events, and then manage the timelines of the different events.

Then on the other hand you have AI games that are basically an LLM with some context on top for the world setting and characters. Here I am of course talking about SillyTavern, character-tavern, infiteworlds, and things like that. The problem with these is that it is up to the players themselves to show restraint in order to enjoy the game, because you can just go “And then I kill the evil darklord with a slap, find the dragonballs, and become supreme emperor of mankind for all eternity” all in one go. Plus the story tends to be limited, and it doesn’t really feel like a game with an artist’s vision; it just feels like what it is: an LLM with some flavor.

I was thinking of creating something that bridges the gap between these two worlds: a system very similar to Twine/Ren'Py, but instead of crafting the dialogue for each sequence, you as the game developer would create checkpoints, with pre-set dialogue/images/scenes, at specific milestones.

I have created a functioning simple game, as a demo of such a system: github[dot]com/AymanJabr/Escape-the-room-AI-stats-game

This uses Claude Haiku, so it’s very fast and cheap to run too. I would love for you to try it out, or at least give me your opinion/ideas about what you would expect this to eventually do.

If there is enough interest, then I will put it online as an application that you can try directly, and transform the system into a game engine. However, please consider that this takes months if not years to properly develop and iterate through, and I want to build it open-source, so there will be a lot of back and forth with the community, which adds time.

Ok, took me a few days, but here is the live link to the game itself @jkj_yuio. As requested, I made it so that you can choose the model that you want (Anthropic or OpenAI) @joelburton.

You will need to provide your own Anthropic or OpenAI key. If you could create a new one and then delete it afterwards, that would be great; I am not storing keys, and I don’t want to be accused of anything.

Live link => escape-room-fldd[dot]onrender[dot]com

[Wanted to include images, but apparently new users can’t :crying_cat:]


Hello. Looks interesting, but is there a way you could put the demo on a web page?

Yeah, probably should have done that from the start, thanks for the suggestion, appreciate it :flexed_biceps:

It’s interesting, @NoobGames . I got to the end, where I escaped. I took a look at the code as well, and I think you’ve got an interesting structure for other games.

The challenge I found was that the responses sometimes felt very much like AI-speak: everything had the same kind of description, of the same length, and never pithy. Perhaps it would help to seed some of the descriptions of the objects in the room directly, so the AI's output could resemble those?

Similarly, the ghost's replies sometimes drifted toward metaphysical blather (hardly your fault, since you didn't write them). But perhaps if the ghost had more of a backstory given to the LLM, it would have a more consistent and interesting narrative voice.

Anyway, it’s an interesting attempt. I don’t think it would be much fun as an actual game at this point, but it’s not an unworthy start. Thanks for sharing!


Thank you for taking the time, Joel, especially since you got to the end. I truly appreciate it :flexed_biceps:

What you are talking about is THE main challenge with LLMs. I want this system to work on fast and cheap models, which means dumb models with limited context windows. I am trying to build an example of a potential system for game designers: if the problem is with the description of a specific NPC's character, then the system should allow the artist/game dev to freely change that.

github[dot]com/AymanJabr/Escape-the-room-AI-stats-game/blob/master/stories/escape_room/characters.json => think of everything in these .json files as something that the developer can change.
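To make the data-driven idea concrete, here is a rough sketch of how a character entry from a file like characters.json could be folded into the LLM's system prompt. The field names and the ghost's details below are my own invention for illustration, not necessarily what the repo actually uses:

```python
import json

# Hypothetical character entry; the real characters.json may use different fields.
CHARACTER_JSON = """
{
  "ghost": {
    "name": "The Ghost",
    "backstory": "A librarian who died locked in this room in 1912.",
    "voice": "dry, clipped sentences; never uses modern slang",
    "goal": "nudge the player toward the hidden key without naming it"
  }
}
"""

def build_system_prompt(character_id, data=CHARACTER_JSON):
    """Fold an editable character entry into the LLM system prompt."""
    char = json.loads(data)[character_id]
    return (
        f"You are {char['name']}. Backstory: {char['backstory']} "
        f"Speak in this voice: {char['voice']}. "
        f"Secret goal: {char['goal']}"
    )

print(build_system_prompt("ghost"))
```

The point being: a writer can sharpen the ghost's voice by editing one JSON file, with no code changes at all.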

Right, and I did look at those. I think it's clever to break that out into easily understood and editable files. That's a good part!

Perhaps it would help for you to extend the background info for the example game? To see if you can get sharper answers under Haiku?

Something that might be useful is to break out the model as a variable you can pass in on the CLI. I wouldn't mind spending a few more tokens to use Sonnet (it's also fast) if it produces sharper responses. But right now, you've got the model ID hardcoded in a bunch of places.

(I paid ~11 cents to play the game under Haiku. I wouldn’t mind if that was 20 cents under a different model, for example)
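Breaking the model out can be as small as one argparse flag with a single default, instead of IDs hardcoded in several places. A minimal sketch; the flag name and model IDs here are just suggestions, not what the project currently uses:

```python
import argparse

# Illustrative default; substitute whatever Haiku model ID the project actually uses.
DEFAULT_MODEL = "claude-3-haiku-20240307"

def parse_args(argv=None):
    """Parse CLI options; every LLM call then reads args.model instead of a literal."""
    parser = argparse.ArgumentParser(description="Escape-the-room AI stats game")
    parser.add_argument(
        "--model",
        default=DEFAULT_MODEL,
        help="LLM model ID to use for all calls (e.g. a Sonnet ID for sharper replies)",
    )
    return parser.parse_args(argv)

# Example: a player who wants sharper responses overrides the default.
args = parse_args(["--model", "claude-3-5-sonnet-latest"])
print(args.model)
```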

Ok, I will do that directly in the server, so: choose model provider (OpenAI or Claude, which is what most people have) => enter API key (going to need to encrypt this and tell the user to delete it afterwards) => choose model.

11 cents still feels like a lot to me for a single playthrough. I don't know if there is some way to reduce the cost; even open-source models are still expensive to run, and I don't think they will save the player that much. Running it locally on people's computers/phones is not going to be an option for at least the next 5 years either.
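That provider => key => model flow can start as little more than a lookup table. A sketch under stated assumptions: the key prefixes and model IDs are illustrative, and a real server would hand the key straight to the SDK for the session rather than persisting it anywhere:

```python
# Hypothetical provider table; real code would substitute current model IDs.
PROVIDERS = {
    "openai":    {"default_model": "gpt-4o-mini",             "key_prefix": "sk-"},
    "anthropic": {"default_model": "claude-3-haiku-20240307", "key_prefix": "sk-ant-"},
}

def start_session(provider, api_key, model=None):
    """Validate the provider/key pair and build an in-memory session config.

    The key lives only in this dict for the session; nothing is written to disk.
    """
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider!r}")
    info = PROVIDERS[provider]
    if not api_key.startswith(info["key_prefix"]):
        raise ValueError(f"{provider} keys are expected to start with {info['key_prefix']!r}")
    return {
        "provider": provider,
        "model": model or info["default_model"],
        "api_key": api_key,
    }
```

A quick sanity check on a key prefix up front saves the player from burning a request just to find out they pasted the wrong provider's key.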

11 cents doesn’t seem bad to me.

Looking through the code, it looks like the NPC interactions are the only places where you're feeding history into the LLM call, so I imagine the largest part of the token cost is there. It may be that a different adventure, one that focuses a bit more on exploring a space, doing things with objects, etc., would not go through tokens as fast. This game is almost entirely talking with the ghost.

Yeah, unfortunately I am more of a builder/dev, not that much of an artist :smiling_face_with_tear:. I would love to create another similar test project around someone's vision of a better game. Do you know anyone in the community who already has a story that would fit well in such a system, and who would be interested in adapting it to something like this?

Would it be better to start a new Topic, or is that frowned upon?

https://escape-room-fldd.onrender.com/

(Note: looks like you have to provide an API key, i.e. it’s not free to try out.)

No, I am poor as shit :smiling_face_with_tear:, I can't float the cost of people playing. But this should work well even with Haiku or older, cheaper models; it should only be around 10-15 cents per playthrough, which can take you anywhere from 15 to 30 minutes.

I’m not so sure about this. I agree that a phone is probably not an option, but I’ve had some pretty encouraging results with other LLM stuff on my Intel Arc B580. The main issue is that the context window I can manage locally is much smaller, which may cause the game to degrade as time goes on.

Personally I don’t have an API key anywhere, but if there were some way to run this locally I’d at least want to try it out.

It needs a bit of adjustment to run local models, but the original code is open-source, so you can change it relatively easily. If there is interest, I will bring it to a game engine like Ren'Py or Godot, with the ability to use local models running on a port.

Right now I just want to get this into the hands of people to try it out.