Keeping track of success and failure

I can’t seem to find any documentation that will tell me how to determine if an action succeeded. I’d like to do something like this:

The factory is a room.

The getting things done rule is listed instead of the advance time rule in the turn sequence rules.

This is the getting things done rule:
	If the action succeeded, increment the turn count.

But this does not compile. Is there a phrase that means “the action succeeded” (or “the action failed”)?


This is tricky for a couple of different reasons.

I may be forgetting something, but as far as I can determine there’s no simple built-in way to tell whether an action failed or not. Part of this may be due to complications: what if one action triggered another, or multiple characters are performing actions, or the player combined several actions into one command like TAKE ALL?

One approach is to check what the value of the “reason the action failed” variable is. Try something like this:

Every turn: showme reason the action failed.

Unfortunately, it doesn’t seem that there’s a way to check whether the value of this variable indicates a failed action or not. If the action succeeds, it seems to be set to “procedural rulebook”. I’m not sure why this is, exactly: the default value for rulebooks according to the Index panel should be the “little-used do nothing rule.” At any rate, since procedural rules are not long for this world, something like this (which currently more or less* works) seems dangerous:

This is the getting things done rule:
	If reason the action failed is procedural rulebook, increment the turn count.
* I say “more or less” because the issue of whether an action succeeded or failed is sometimes an unexpectedly gray area. The library rules that most actions which do nothing, like “jump” (“You jump on the spot, fruitlessly”), are actually failures, even though the text seems to imply the player’s intentions have been carried out. (Waiting, conversely, is counted as successful, even though it likewise does nothing.) Likewise, anything you give custom successful behavior with an “instead” rule will be marked as having failed, so something like this would not count as a success:
Instead of touching the control panel:
	say "It lights up -- success!";
	now the control panel is activated.

One solution is to add “rule succeeds” to the end of any rules like this that appear.
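Applied to the example above, that would look like this (note that “rule succeeds” also ends the rule it appears in, so it needs to come last):

Instead of touching the control panel:
	say "It lights up -- success!";
	now the control panel is activated;
	rule succeeds.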

Wow, that’s much more complicated than I expected!

I had already thought of using “rule succeeds,” although that might end up just as messy as manually advancing the time for each action.

Perhaps it would be better to wipe out the advance time rule completely and then add something like this:

After time-consuming behavior: increment the turn count; continue the action.

… and then define lots of actions as time-consuming behavior.
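Naming a kind of action is a one-liner per action; a sketch, with a purely illustrative list of which actions count:

Taking something is time-consuming behavior.
Going is time-consuming behavior.
Eating something is time-consuming behavior.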

I guess that wouldn’t work with Instead rules for “successful” actions at all, though…

Belatedly, part of the trick here is that any given turn might have seen multiple actions happen: the player’s action might have expanded out to several actions (TAKE ALL; EAT with an implicit TAKE; etc.), and one or more NPCs might also have taken action. So while it might seem that this is something that could be tracked unambiguously, it’s actually trickier than it sounds in the general case.

Even if there were an unambiguous idea of what single action happened during the turn (see my previous post), this is probably still not the best approach to the problem, because some failed attempts could take time (“You step out onto the rope bridge, but a quarter of the way across it starts to sway dangerously, sending you back hastily the way you came…”), while some actions trigger subsidiary actions that might also take time. (Are we counting taking something as time-consuming? Should it be just as time-consuming to take something implicitly as to take it explicitly?)

So I recommend having a look at Eric Eve’s Variable Time Control extension, which lets you easily tag actions or pieces of actions with the amount of time they should take, and then adds everything up at the end.

Thanks! That looks useful.

An update for posterity: I believe, all the caveats above in mind, this should more or less work:

To decide whether the action succeeded: (- (reason_the_action_failed==0) -).

The variable gets set to 0, which as of 6G60 is the procedural rulebook; so this may break once procedural rules are withdrawn, but I’m cautiously optimistic that whatever is assigned to 0 to replace it will be a default value.
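With that phrase defined, the rule from the original question compiles as written (all the caveats above about what counts as “success” still apply):

The getting things done rule is listed instead of the advance time rule in the turn sequence rules.

This is the getting things done rule:
	If the action succeeded, increment the turn count.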

It shouldn’t be trickier if actions are granularized. The question should be: did the player’s action succeed? It’s a yes or a no. If something stopped it from happening, then no; if nothing stopped it from happening, then yes. But you can also ask: “How much of the player’s action succeeded?” Inform’s documentation (confusingly) talks about intent, and this would actually be a better model.

So if I did TAKE ALL and I could take four objects in the room but a fifth I was prevented from taking (maybe it’s too heavy), then the action TAKE ALL failed. Parts of it, however, did succeed. But the action, as a whole, failed. That kind of granularity should not be hard for a system to recognize. If I try EAT APPLE (when I don’t have it) and this leads to an implicit TAKE action (which then fails), the action EAT APPLE failed. I was not able to eat it because I was not able to take it. There’s no ambiguity there.

If an NPC takes an action, it doesn’t matter unless the action taken prevents the action the player took from completing. Go with EAT APPLE. Let’s say I try but the NPC sees me about to do this and takes the apple before it can be eaten. Then the action EAT APPLE has failed. Or maybe I was able to eat a portion of it but then the NPC stopped me. The action thus partially succeeded but was not completed. Or again with TAKE ALL. I start taking everything in the room but the NPC takes the apple before I can get it. In that case TAKE ALL strictly failed – except certain parts of it did succeed. I took some things but could not take others. Again, that level of granularity should not be hard for a system to model.

So it is very unambiguous when you model things as “did the action as a whole succeed” and then realize that “as a whole” allows for some elements to succeed while others did not. This also means “success” and “failure” may not be the best ways to model the end states. Perhaps “entirely completed”, “somewhat completed”, and “not completed at all” are more viable. Again, more granularity does allow for this, and I think Inform makes a very poor design decision in how “success” and “failure” are modeled. (The documentation at one point even indicates that it’s not quite how you might think of “succeeding” or “failing.” The fact that this had to be documented as such should have told someone it was a poor design idea.)

If you’re offering granularity, this would be on my wish list:

Each action has a “success count” which is reset to zero after reading a command.

Whenever the carry out rulebook is called for an action, that action’s success count is incremented. If you wanted to get fancy, you could also record the last actor carrying out the action.

For even finer granularity, actions could also have an “attempt count” that gets incremented when the check rulebook is called (and/or a reason the action failed property for the last attempt on each particular action).

Perhaps this is something that could be done in an extension.
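A much cruder per-turn version can be sketched using the “the action succeeded” test from earlier in the thread; it only sees the last action of each turn, which is exactly why the per-action tally described above would need deeper hooks (or an extension):

The success count is a number that varies.

After reading a command: now the success count is 0.

Every turn: if the action succeeded, increment the success count.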

Save us from more extensions! :) Seriously, Inform is way too loaded with extensions. When I first started looking at it and was directed to some extensions, the page on the Inform site that listed them was a bit overwhelming. That was before I started using them. Then I found that some extensions conflicted. When I tried to learn how certain techniques were done, I found the different coding styles of different authors really precluded that, and thus precluded easy modification.

Something like the notion of how an action “succeeds” or “fails” should be a built-in part of Inform, and any sort of granularity that stems from those concepts should similarly be built in. It’s too systemic an aspect, in my opinion, to rely on an extension being the driver of it. Now, the granularity mechanism itself could be built such that it can be extended. That would be fine. Inform design needs to settle on a notion of what is and is not an extensible mechanism of the platform. Looking at the extensions page as a whole, you can see that from a design perspective it’s all over the place. (Programmatically, I’d say Inform development confuses the distinction between orthogonal and non-orthogonal extensions, with suitable modifications for the language domain. Think Extension Methods in C# versus, say, Epsilon Extension Language. Or maybe just think of the programmatic distinction between extendable vs. extensible.)

In any event, I’m still curious why the poster I quoted would say “it’s actually trickier than it sounds in the general case.” Conceptually it’s not and programmatically it doesn’t have to be. So is that comment made because of a misperception of granularity or because of how granularity seemingly isn’t accounted for in Inform?

I think the goal of Inform is not to be a simulation. In a simulation, granularity is desirable because it increases the accuracy and flexibility of the simulation. But as with static fiction, attention to irrelevant details in the story world actually takes away from the experience, hence there are a lot of extensions to choose from depending on what aspects of the world are relevant to one particular work. I think a detailed model of every action that goes on during a turn would be a waste of time and memory for at least 95% of all interactive fictions - that’s why I suggested it as an extension.

The quality and compatibility of extensions is of course of utmost concern, and I’ve seen my share of conflicts myself. But for the most part, I’d say I7 extensions are an order of magnitude easier to manage than the old I6 extensions were. I like knowing that the I7 site only contains extensions that have been reviewed by an official team, so I don’t have to choose between 10 different extensions for NPC movement, for example. And if you report conflicts as you discover them, you can help to improve the state of extensions further…

Personally, I like to read the source of extensions and include only the bits that I actually want to use. That’s a lot easier in I7 than with more procedural or object-oriented languages.

The goal of Inform as I understand it is to produce games that are a type of simulation of some sort of model world.

What do you mean by “static” fiction? Do you mean a book? Inform isn’t about producing that. The “irrelevant details” don’t have to be used even if they are part of Inform’s built-in set. That’s not an argument for making an extension. Imagine if the .NET library or MFC or the JFC were all treated as individual extensions to the language. It would be madness.

Built-in elements do not detract from the “experience” as long as there are ways to make sure the built-in elements you don’t want to use do not intrude. A good system makes sure of that without having to rely on extensions. Certainly the notion of whether actions succeed or fail would be relevant to all model worlds, even if how the granularity is used would differ.

I think we’re talking about a different aspect of “detailed model.” I was just pointing out that “success” and “failure” do not have to be as ambiguous as some claim (and as Inform seems to treat them).

The whole point of extensions would be to allow you to choose between ten such different NPC movement models if they were all different. That’s why you have extensions. If the goal is to center around an established “official” movement model, then that should be incorporated – but also extensible. If this official team is reviewing the extensions, as you’re saying, I don’t understand why I still find bugs in some (or why some say I shouldn’t use them at all anymore). Or why some don’t work with “use no deprecated features.”

Keeping the thread on track, though, I still don’t see why keeping track of success and failure is considered to be so ambiguous or problematic and I’m surprised more people didn’t question that assumption when it was brought up.

It sounds like TADS 3 might be a better language for you. It’s object-oriented and there’s a lot of world modeling built in.

I’ve always assumed that Inform’s extension model was required to keep the Z-Machine formats viable, by limiting the library size to something that could fit in z5.

Graham Nelson’s “Afterword” article in the IF Theory Book says that Glulx will become the default format soon, and that floating-point number support will be added to the core language - the first such feature that will not work for the Z-Machine format. So there’s some hope that Inform will become more feature-rich out of the box.

That said, he seems to view Inform’s extension ecosystem as a strong advantage:

I would second the recommendation to look at TADS 3. It is better aligned with expectations of a thorough but extensible core library. I’ve found that its reflection capabilities permit a “rules light” style, giving me what feels like the best of both worlds.

The “236” indicates some of the problem right there. Look at that from a new person’s perspective, or that of someone who hasn’t been with Inform 7 as it has evolved. Then look at how extensions can clash. Or how different authors have different ways of writing them. Or how extending and using extensions can differ based on what you have to know, or the level of Inform knowledge required in some cases. There’s definitely a good argument to be made for systems that can be extended, but a community-driven approach like this, along with a seemingly inconsistent approach to what is considered “core” behavior – and thus not an extension – hampers that quite a bit, at least in my opinion.

I did in fact do this, hence the lateness of my reply here. Thanks for pointing this out. TADS 3 is indeed a very powerful system, and I completely agree with you on the reflection aspects and the fact that it’s an extensible system rather than just one that can be extended. It’s a pity TADS 3 doesn’t have the apparent momentum that Inform 7 does, because I do think TADS 3 is ultimately a more robust system with a much more cohesive design philosophy, at least so far as I can discern from implementation alone. One issue I can see with it, in looking over the VM specs, is that it could be difficult to port to various formats. A good example might be a browser-based format, which would seem very simple with Z-machine and not too much more horrible with Glulx.

The last update for TADS seems to have been 5 May 2009. I don’t know if that’s because it’s more “done” than Inform or because active development is not going on any more.

The plus side is that Inform 7 and TADS 3 are so apparently different in so many ways that it does give people a nice compare and contrast as well as a way to choose a tool that works best for them.

That’s too bad - I had the impression that TADS was still going strong. Maybe it needs more passionate, dedicated authors and programmers to give it a breath of life…

Development for TADS 3.1 is underway. That link goes to a series of posts from last April / May covering the planned features, including web publishing capabilities and a new library.

Also, there’s definitely an active community of authors using TADS 3. Just reading through the comments on that series of blog posts, I recognize a lot of the names from their participation in the broader IF scene or from reports on the Gargoyle issue tracker.

Part of the reason TADS has a lower profile on the forums is because it isn’t as attractive to newcomers. There are some technical reasons for that. Lack of browser support is cited as the big drawback these days, but it’s only in the last couple years that Parchment has come along, and it’s been less than a year since Quixe’s release. Before that, it was the absence of a full-featured interpreter on Linux and OS X. The official IDE isn’t as nice, and is still Windows-only.

Even in a world where those technical differences did not exist, Inform 7 would still have more “curb appeal” than TADS 3. It’s designed to be more attractive to writers, and the evidence suggests that it is. However, that distinction blurs somewhat among the initiated; a number of established authors are proficient with both systems.

Much of the buzz here can be attributed to the shortcomings in Inform’s design and documentation. If you took away all the “How do I do this?” and “Why doesn’t this work?” questions, you’d be left with a similar volume of posts in the Inform and TADS areas. It’s not that TADS authors don’t have similar questions, but there you can answer just about anything using the Library Reference Manual, and presumably everyone figures that out eventually or gives up in frustration.

Furthermore, if you look at the ADRIFT forum, you will see just as many novice authors and arguably a richer discussion of game design in theory and practice. Their community is relatively insular and the average quality of their games is somewhat low, but they have a lot of repeat authors, a lot of competitions, and a lot of games published every year. By nearly every non-technical standard, ADRIFT has greater momentum. It only lacks the promotional efforts of community celebrities like Emily Short and Andrew Plotkin.

Hence I see the decision to use Inform or TADS as making a choice to privilege the technical aspects of craft over convenience. As a consequence, implementation quality and portability count for a lot among the authors here. No one wants to put more effort into the programming side, only to be faced with crippling platform-specific bugs on release.

TADS has historically had a very high quality but monolithic reference implementation, and fared rather poorly on the portability metric. Inform is doing better these days, but two or three years ago the picture was different:

  • Git, the preferred Glulx interpreter, languished for several years after Iain Merrick moved on to other things. (His departure also halted development of HyperTADS for OS X, which would have changed the landscape for multimedia TADS.) Missing features and various bugs plagued the interpreter until David Kinder took over maintenance in 2009.

  • Inform’s indexed text feature was implemented for Glulx in a way that depended on Unicode support in the Glk API; as a result, only a handful of the existing Glk implementations can properly run new Inform games. All of the libraries in that list that don’t mention support for the 0.7.0 API are effectively obsolete now. This had the unhappy result of breaking the Glulx games in IF Comp 2008 for many players.

The small number of developers working on infrastructure means there will always be a lag between the desired availability of features and the corresponding implementation. Inform has mostly caught up at the moment, but there’s at least one major shift planned - the promised CSS / HTML extensions to Glk - and essentially no guarantee that everything will come along for the ride.

Excellent post and very informative!

Seriously? There are established authors writing with Inform? If true, that’s impressive. (I assume you don’t mean just authors of interactive fiction games since that wouldn’t be as indicative.) I’m not entirely sure what they would find so accommodating about it but it would be good to hear. If it’s just the natural language gloss, I would think they’re in for a rude awakening as they become more entrenched in detailed implementation.

I would actually be surprised if someone who wanted to write a book or something would be wasting their time with either TADS 3 or Inform 7, to be honest.

Portability is an interesting one. How portable – in terms of platform – do these systems need to be? Are there stats of how many players (as opposed to game writers) are using systems like Linux, OS X, Windows, etc? Are there stats indicating whether people do or don’t prefer playing in browsers (and, if so, is the preference equal for working experiences in IE, Firefox, Chrome, etc)?

As far as implementation quality, I can definitely see that. I’ve often worked with languages that are quite a bit more difficult in a lot of respects – e.g., C++ over C# – because I knew my implementation quality could be better given the possibilities of the one language over the other. Likewise, I can see the portability argument in this same context. I would use Ruby or Python if I had to do something cross-platform rather than rely on, say, C# and Mono. Graphically, I’d use toolkits like wxWidgets (or bindings such as with wxPython). I would only do that if portability mattered, of course. And portability between authoring and portability for those playing can be quite a different beast.

So going with implementation quality – and considering the original nature of this thread – I’m still confused why some people were saying that Inform couldn’t or doesn’t treat “success” and “failure” more granularly or, at the very least, use “success” and “failure” a bit more consistently. From an implementation side on the game player’s perspective, that doesn’t matter as long as the effect works. For a game author I suppose the same could be said to be true. So I wonder if there’s a common set of expectations (from game authors) around how to express concepts.

Adam Cadre has a novel. I assume he’s still in the Inform camp although he’s been away from the scene for a few years. (In Get Lamp he promised that he’d return once he had a year to spare for a new project.)

Jon Ingold has a new Kindle book I’ve been meaning to check out. He’s also had stories published in Interzone. He is an active Inform 7 author.

S. John Ross is a Notable Figure in the tabletop RPG industry. (He also wrote a delicious recipe for General Tso’s Chicken that I discovered years before I played my first IF game.) He uses Inform 7.

Eric Eve has two nonfiction books on New Testament scholarship. He wrote much of the TADS documentation and has done quite a bit in Inform 7 as well.

Jim Aikin is the most widely published author hereabouts, in both fiction and nonfiction. You can read some of his stories here. He’s used Inform 6, TADS 3, and Inform 7.

You can get a rough sense of the player distribution by looking at Gargoyle’s downloads. Both Mac and Linux users are likely under-represented; Mac users because the OS X port is new to this release, and Linux users because many use distribution-specific packages hosted elsewhere. Still, I’d guess it breaks down to about 25% OS X, 10% Linux, 65% Windows.

It seems to be the kiss of death to release a Windows-only game for IF Comp. Such games nearly always finish last. Cross-platform support is arguably more important because of the focus on competitions as the venue of choice for new authors; you aren’t likely to win if 30-40% of potential players can’t play your game.

There’s something of a platform divide in terms of the major IF language implementors: Graham Nelson, Andrew Plotkin and Emily Short use Macs; I assume MJR uses Windows. This matters more on the TADS side now, but it’s been relevant for Inform in the past. (See this discussion, which explains why Glk went from 2000-2005 without adding new features.)

I’m not aware of any statistics. I think most people who play IF now prefer to do it with a desktop interpreter. But there’s been a lot of selective evolution over the years, and anyone who found the desktop offerings unpalatable likely checked out a while ago. On the other hand, when discussing potential new players, I don’t think preferences enter into it - if it’s in the browser, they’ll play it; if not, not.