I didn’t really see it so much as an “introductory game” jam, even though it did produce plenty of those. I saw that as a byproduct of the actual point: a space for IF tutorial innovation and experimentation. Encouraging this open trial of tutorials creates new tools and approaches for other authors to adapt into their own games. If none of these techniques get used anywhere other than where they were trialed, we sort of failed, tbh.
There are many established ways to approach this problem, and certainly even more that haven’t yet been tried. I supported the TALP specifically to help encourage innovation and implementation on this front, as I saw it as integral both to solving the issues you laid out above and to the outreach and growth of the parser niche as a whole.
These points are all well taken, though I admit I’m personally rather fatalistic about this kind of thing. Most works of parser IF these days are quite short (1-2 hours), so adding even a 10-minute tutorial section could meaningfully impact the pacing and narrative of a game, as well as representing a nontrivial amount of additional labor. On the flip side, there’s a low likelihood that any particular contemporary parser game will be somebody’s first; it’s a questionable proposition that any individual author will do a better job of explaining things than the “how to play IF” postcard and the instructions built into Emily Short’s Inform extension already do; and, most damningly to me, a huge amount of effort was expended 10-15 years ago to make parser games friendlier to newcomers, none of which seemed to have much of an impact (those who were around back then, please correct me if I’m wrong!).
I dunno, I’m usually not the Negative Nelly around here. And I think for a game with idiosyncratic mechanics, player aids, including a short tutorial, can often be helpful. But it’s hard for me to believe that, overall, the game is worth the candle.
EDIT: the more optimistic take is that actually, people do seem to come across parser IF all the time. Often they’re people who played Infocom stuff back in the day but never knew folks were still making these games, but often they’re coming to it brand new. IME, though, these days they’re usually coming either because they saw a recommendation via a forum or a general-interest video game site, usually for something high-profile and newbie-friendly like Hadean Lands or Photopia, or because they’re into playing or making choice-based IF and decide to take the plunge and check out the parser side of things; in the latter case my sense is they’re often sufficiently embedded in the community that they either know, or can find out, what the good places to start might be.
To make either pathway work, it’s definitely important that there be a critical mass of good, newcomer-friendly parser IF that explains its mechanics well and that, in total, will resonate with a broad range of potential players in terms of genre, game mechanics, writing style, etc. But to my mind it doesn’t follow that either pathway would be noticeably more effective if that critical mass were much bigger than it currently is because the large majority of new games had tutorials; at least, not as compared to other efforts aimed at bringing more new folks into the fold.
Also, this is as good a time as any to make sure this is known here: the third annual TALJ needs a new organizer this year. The previous organizer has too many prior commitments, but would still like it to happen if possible. So, if encouraging innovation on this front seems important to you, perhaps join or form a committee to see TALJ #3 through?
It’d be a shame to drop the jam after it’s been on a roll for a couple of years. These sorts of things live and die on consistency.
The ruleset and framework of the jam have been pretty well established at this point, so this would be more like minding the store than building from scratch like SeedComp.
As a friendly bit of advice, having 3-5 people co-organizing is a blessing, tbh. People are all busy at different times, so with 3 or more organizers, someone’s always on the ball.
Excellent! Thank you, Garry! Any other takers out there? It’s more a matter of whoever’s willing first than anything else.
I’m reading up on it. It all sounds pretty straightforward. I’ve checked the dates for all the other comps, and it looks like the best time frame is mid-April to the end of May for submissions and June for judging/voting. I’ll work on it over the weekend and send you a PM. I’ve also sent a message to @adventuron to get his blessing or otherwise.
Agreed on the timing. Previous runs overlapped with Spring Thing’s submission window, so it’d be nice not to do that again. I appreciate you reaching out to Chris; best to make sure his feelings on the topic haven’t changed since he posted on the Discord. Thank you again, Garry.
Writing a tutorial for every game is a horrible, huge PITA. If people ignore the instruction, do you repeat it? Or move on?
It would be amazing to have a short one-room tutorial game that authors could simply stick into their games. Then they can ask players if they want a tutorial. They could tweak it to their own needs or simply let it be. Something standard and freely available.
Zarf had an integrated tutorial extension that I don’t think ever got an official release, and I tried to make an extension inspired by it; you can see it in action in Enigma. The basic idea is, every turn, to prompt an action that’s contextually relevant and hasn’t been tried yet.
This is the gist of it:
To give the current tutorial:
    if initial tutorial is true:
        say "[tut]Type commands for your character to carry out in the game world. For example, you can LOOK to look around and get a description of your surroundings.[/tut]";
    otherwise if we have not examined and there is another portable thing (called the item) in the location:
        let T1 be "[the item]" in upper case;
        let T2 be "[item]" in upper case;
        say "[tut]Use the EXAMINE command to take a closer look at something. For example, EXAMINE [T1] (or just X [T2] for short).[/tut]";
    otherwise if we have not examined:
        say "[tut]Use the EXAMINE command to take a closer look at something. For example, EXAMINE ME (or just X ME for short).[/tut]";
    otherwise if we have not taken and there is another portable thing (called the item) in the location:
        let T be "[the item]" in upper case;
        say "[tut]Many puzzles in interactive fiction involve manipulating the objects around you. Try to TAKE [T].[/tut]";
    otherwise if we have not taken inventory and something is carried:
        say "[tut]To see what you're currently carrying, type INVENTORY (or I for short).[/tut]";
    otherwise if we have not gone and there is a tut-viable direction (called the way):
        let T be "[way]" in upper case;
        say "[tut]You can navigate around the map with compass directions, like GO [T] (or just [T]).[/tut]";
    otherwise:
        say "[tut]You've got the hang of this now! If you get stuck, remember to EXAMINE anything that stands out, and see what you can do with those things: take and drop them, open and close them, push and pull them, climb them, put things inside them, and so on. If something goes horribly wrong, you can always UNDO and try something else.[/tut]";
        now tutorial mode is false.
Before reading a command when tutorial mode is true: give the current tutorial.
For Enigma I added a few specific prompts:
    otherwise if we have not examined the lantern:
        say "[tut]The lantern seems important. Why not EXAMINE it for clues as well?[/tut]";
    otherwise if we have not switched on the lantern:
        say "[tut]From the description, it sounds like the lantern can be manipulated. Maybe we can turn it on and get some more light to see by.[/tut]";
    otherwise if we have not examined the notebook:
        say "[tut]The notebook seems important. Why not EXAMINE it for clues as well?[/tut]";
For a public release, I would turn this into a rulebook, so that authors can add their own rules to it easily. But a lot of my testers hadn’t played parser IF before and seemed to appreciate this.
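For a sense of what I mean (untested sketch; the rulebook name and the "tut-helped" flag are invented for illustration), the phrase above could become something like:

```inform7
Tutorial prompting is a rulebook.

A tutorial prompting rule when initial tutorial is true:
    say "[tut]Type commands for your character to carry out in the game world.[/tut]";
    rule succeeds.

A tutorial prompting rule when we have not taken inventory and something is carried:
    say "[tut]To see what you're currently carrying, type INVENTORY (or I for short).[/tut]";
    rule succeeds.

Before reading a command when tutorial mode is true:
    follow the tutorial prompting rules.
```

Authors could then slot game-specific prompts (like the lantern and notebook ones) in as additional tutorial prompting rules, without editing the extension's source.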
(Should we split the tutorial discussion off to a separate thread?)
@pinkunz, I’m not sure it’s clear to authors that TALJ games are a resource they could be learning tutorial-writing techniques from. Or at least, I wouldn’t have connected those dots without hearing it from you.
Granted. Here’s one of the goals of the Text Adventure Literacy Project, which I should note is separate from the Jams themselves:
" To teach a new generation the skills required to play classic text adventure (interactive fiction games), and in doing so, encourage a new generation to create their own games."
Creating more games with tutorials is simply a means to an end.
True, but there’s also a low likelihood that a particular newcomer-friendly game will be the game that interested the newcomer enough to make them check the genre out. If you tell somebody they have to meet some preconditions to play a game, they just won’t play it. For the same reason that sequel novels generally have to recap the previous books, games should expect to have to train players.
I’m well aware that writing tutorials is a PITA (thanks, @AmandaB, for saying it). But we shouldn’t focus only on tutorials. Take the “how to play IF” postcard, for instance: it would not be a huge extra effort to start a game with a “how to play this game” postcard, even some lightly edited boilerplate. It’s just that certain customs have developed about how a game should open: optionally a few introductory paragraphs to set the tone, then a title/copyright page, then the first room of the game. You just don’t make players page through information on how to play.
In a culture of newcomer accessibility you wouldn’t open a game like that. “The Only Possible Prom Dress” opens with a [press any key to continue] note explaining how players using screen readers can configure the game for playability. That’s because IF has a better culture of screen-reader accessibility than most software. It’s very possible to push IF orientation harder in individual games. Yes, a “how to play” preface impacts pacing, and a tutorial more so. But so does smacking your head against a parser until you give up on the game.
What I’d like to call for is that authors reconsider the aesthetic possibilities of tutorial material, whether that’s a full tutorial sequence or a more modest “how to play” card. Outside IF, sometimes I start up Thief’s tutorial level just to fool around, because I like it. Skyrim’s developers were confident enough in their tutorial sequence to make it unskippable (as far as I know). Galactic Hitchhiker (1980) managed to fit some tutorial material into its intro text. As I recall, Thaumistry could be mentioned here as well.
Also, something that I feel is very important to remember:
Not every IF should be required to follow the gameplay formula of “the tutorial game”. IF should be able to innovate wildly, so it makes more sense to have as many games as possible include some tutorial material; not just because any game could be someone’s first, but also because there is more than one possible set of game mechanics for a parser game.
Right. And when you get right down to it, what is a puzzle but a carefully structured lesson in that puzzle’s solution? There doesn’t need to be a firm distinction between teaching the player how the game works on the UI level, and how it works on the conceptual level (assuming a puzzle game). A well-designed puzzle game mostly consists of showing the player how to complete the game.
And yes, also because you can never be sure whether a game is an “x cabinet” game or a “search behind cabinet” game until it comes up. If you know you’re making one or the other, that’s the kind of thing you can clear up early, even without a formal tutorial.
Or a game with combat mechanics, or stealth mechanics, or vehicle mechanics, etc. If a game goes wildly off the usual gameplay style, it will absolutely need to include a specialized tutorial.
Chris has given his blessing. To quote:
"I’m happy if you would like to take over organisational duties of the TALP 2023.
Let me know if you would like my blessing written somewhere in particular - happy to accomodate."
So, we will make sure that this happens. As I said above, I’ll take a closer look at it this weekend and make some sort of announcement calling for expressions of interest in a couple of days. We will certainly stick to the original goal of encouraging authors to write a text adventure suitable for newcomers to the genre. I think the rules will be essentially unchanged, but I’ll try to simplify them as much as possible. More soon…
Please let me know what I can do to help. I didn’t intend to simply dump this on you and walk away. I’d like to assist in any way I can.
Also, feel welcome to PM me if you wish.
Of course. I just want to get my thoughts together, check the logistics and what have you, then I’ll PM you. I don’t imagine there’ll be a lot of work, but it will be good to share thoughts and opinions and also to share the workload.
Bumping this thread since I’ve recently had very similar thoughts to those in the OP—
I’m surprised by the overall pessimism in this thread about using a large language model to replace the front end of parser IF. My impression is that:
- doing so is not nearly as challenging technically as many suppose (I would wager that GPT-4 with an appropriate preamble, explaining (minimalist) syntax rules and providing a list of objects currently in scope as well as implemented verbs, already works quite well);
- doing so would have much stronger benefits for the accessibility of parser IF than many suppose. Of course, people who have been using Infocom-style parsers for years and understand the conventions of the genre with regard to typical phrasings and commonly implemented verbs won’t see as much benefit as someone totally new to IF.
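As a rough illustration of that first point (the function name, prompt wording, and example objects here are all my own invention, and nothing below actually calls a model), the per-turn preamble might be assembled like this:

```python
def build_turn_prompt(scope_objects, verbs, world_state):
    """Assemble the per-turn preamble for a language-model front end:
    minimalist syntax rules, the verbs the game implements, and the
    objects currently in scope, so the model's output stays grounded
    in what the game can actually do."""
    lines = [
        "You are the command parser for a text adventure.",
        "Translate the player's request into exactly one command of the",
        "form VERB or VERB NOUN, or reply UNCLEAR if that is impossible.",
        "Implemented verbs: " + ", ".join(sorted(verbs)) + ".",
        "Objects in scope: " + ", ".join(sorted(scope_objects)) + ".",
        "Current state: "
        + "; ".join(f"{k} is {v}" for k, v in sorted(world_state.items()))
        + ".",
    ]
    return "\n".join(lines)

# Example turn: a real implementation would send this as the system
# message of a chat-completion request, with the player's raw typed
# input as the user message, and feed the model's reply to the game.
prompt = build_turn_prompt(
    scope_objects={"brass lantern", "notebook"},
    verbs={"examine", "take", "turn on", "go"},
    world_state={"brass lantern": "unlit", "location": "Cellar"},
)
```

Rebuilding the prompt every turn is the point: the model never has to memorize the game, only translate one utterance against the current scope list, which is exactly the information a conventional parser already has.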
Even more exciting to me would be a system along the lines sketched by @zarf, where a language model maps natural language input to a set of game actions, conditioned on the current world state, and synthesizes context-appropriate failure messages as needed. (I would not build this system on top of the z-machine but instead seek a more “ML-friendly” representation of game state.) I agree that lack of training data is a significant obstacle but I don’t think a fatal one once fine-tunable general models become widely available.
It’s probably wise to remember that much of this thread predates the moment when AI development truly went vertical.
Still, despite the rapid advance, large language models aren’t quite reliable yet. The newer GPT-4 model, for example, is better at avoiding mistakes than previous models: it now has an 80% chance of giving a correct, plausible, and well-grounded response. Unfortunately, that means it still runs a 20% risk of returning a confident answer not supported by its training data, the current situation, or reality.
Despite this, I find it more difficult to dismiss the instances of actual intelligent or seemingly intelligent behaviour in recent AI models.
The trend seems clear. Looking at the progress AI has made, it’s not far-fetched to imagine these models doing well in the role of gamemaster.