Why Are There So Few Parser Games Now?

I strongly suspect, but don’t have the time to confirm, that if we considered all the parser systems (Alan, ADRIFT, TADS, Hugo, &c.) the graph would revert more to the mean and the ups and downs would be less extreme. If someone has a better way of generating that dataset than manually running searches on IFDB, I’d welcome it.

Quest is a complication, though, because not all Quest games are parser games; and because of the (incorrect) assumption that the development system necessarily determines whether a work has a parser interface, IFDB doesn’t tag parser games as such. However, I am working from the assumption that Inform is probably the most popular development system overall, and thus a representative enough sample for the purpose of pointing out that Twine certainly hasn’t killed parser games; it’s very likely that 2012 (the year of Howling Dogs) was the most active year ever for parser development, or the second most active.

Another thing to note: before Twine was around, Inform and other “parser” dev systems were regularly being used to write gamebooks. At least one 2001 Inform game that I’m aware of (A Dark and Stormy Entry) is a multiple-choice game and not a parser game. I’m comfortable making the assumption that Twine did put an end to almost all of that.

There are going to be fewer parser games than other types, because parser games take a good deal of time to create and test if an author is serious, and not writing speed-IF.

The types of games that AAA developers released ten years ago are now being made by one person and released independently on an iPhone. A major release that someone will pay $60 for requires millions of dollars and person-hours of input. This is what has happened on a smaller scale with IF - development systems used to have a huge learning curve and required dedicated people to learn the process and write with them. Now one can learn enough Inform in a day or so to build an interactive environment pretty easily, and there are easier hypertext/choice-based platforms.

It’s possible to turn out a complete Twine game in a day, and if the idea and the writing are good, it can have moderate success. Hypertext choice games are often like potato chips, though: you can experience a whole bunch of them in a short period of time. Parser fiction of normal length is more akin to a meatloaf - it’ll be around a while. Not every person who has a game/story in them necessarily needs objects and doors and inventory, so they’re not going to bother with a parser system.

Also, very few parser games are released cold on IFDB, because they would sink like a stone and garner no attention. If I write a nifty little Twine experience in two days, that’s not such a bad thing, because it is meant as a quick and disposable experience. If I spend months on a parser game, and I’m not Emily Short or Andrew Plotkin, there’s no way I’m going to waste that by releasing it as a cold IFDB entry.

And that is the answer to my question! :smiley: I see the logic now, haha.

Sorry to derail a bit, but that regression line is entirely inappropriate. Do you really think that, given the trend, there were actually more than 40 Inform games released in 1994?

Even eyeballing it, you can see the trend looks something like this:

(sorry for the hand-drawn line but I don’t have the original data)

Running average with a five-year window makes the most sense for this topic, I’ve found.

(That’s what I used for eblong.com/zarf/pic/tally-ifcomp.png – the thinner lines – or was that four-year? Gotta update that graph.)

Yes, definitely better. It has the problem, though, that it maybe overfits the data, so that blips due to, say, the aforementioned IntroComps have too strong an influence on the general trend; also, such peaks result in a sort of “delayed reaction”, in which the running average peaks after the actual count does, as in the early 2000s.
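To make that lag concrete, here’s a quick sketch (with made-up yearly counts, not real IFDB or IF Comp data): a trailing five-year window, which averages each year with the four before it, peaks a couple of years after the real peak, while a centered window keeps the smoothed peak aligned with the actual one.

```python
def trailing_avg(xs, w=5):
    """Average of each value with the w-1 values before it (trailing window)."""
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - w + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

def centered_avg(xs, w=5):
    """Average over a window centered on each value."""
    half = w // 2
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

# Made-up yearly game counts with a peak in year index 3:
counts = [5, 10, 20, 30, 20, 10, 5]
t = trailing_avg(counts)
c = centered_avg(counts)
# The trailing average peaks two years after the actual peak,
# while the centered average peaks in the same year.
print(t.index(max(t)), c.index(max(c)))  # → 5 3
```

(The trade-off with a centered window is that the most recent couple of years can’t be fully smoothed yet.)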

Anyway, it’s a) not terribly important, b) a bit pedantic and c) not relevant to the central theme of the thread, so I’ll resume lurking rather than banging on about it!

Looking at the graph, I see that parser games in the IF Comp have been steadily declining for nearly three years (strange, it started after Taco Fiction took first place. Is this a conspiracy?? :open_mouth: Kidding about that, haha). I hope that we can see more of the parser’s face this year!

Not really, though–they declined for two years from 2011 to 2012 and then from 2012 to 2013, but I think there were at least as many parser games in 2014 as in 2013. (Maybe more depending on how you count Caroline.)

I thought there were far fewer parser games in the comp last year than the previous year, though I can’t remember where I read that. Didn’t the “IF is Dead” thread mention it?

It didn’t count the two Quest games or the TADS game that was released as an .exe. It also didn’t compare the numbers to the year before (at least not in the original post).

I would be more interested in graphs that only included games that are, say, rated 3 or better on IFDB, perhaps also with a minimum number of ratings. That might control for SpeedIF and other circumstances that result in larger numbers of below-par games being released in a given period. Because really, the desire is not “more parser games”, it’s “more good parser games”, right?

Who gets to decide if a game is good, though? Just because people rated it less than 3 doesn’t make something a bad game.

IFDB ratings are very uninformative. What exactly “three stars” means varies wildly by user, and the data mostly consists of tiny, noisy samples. All my games on IFDB have three-star ratings; my Shufflecomp title has a higher one. I don’t really think all of those games are of the exact same level of quality, and they involve drastically different levels of effort.
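Just to illustrate how noisy tiny samples are, here’s a simulated sketch (not real IFDB data, and assuming for simplicity that raters pick 1-5 stars uniformly at random): the range that the average rating wanders over is far wider with 5 ratings than with 50.

```python
import random

random.seed(0)

def spread_of_average(n_ratings, trials=2000):
    """How far apart the highest and lowest observed averages land when
    each rater independently picks a 1-5 star rating at random."""
    avgs = [sum(random.randint(1, 5) for _ in range(n_ratings)) / n_ratings
            for _ in range(trials)]
    return max(avgs) - min(avgs)

# With only 5 ratings the average wanders over a much wider range
# than with 50 ratings of the same underlying "quality".
print(round(spread_of_average(5), 2), round(spread_of_average(50), 2))
```

Real raters aren’t uniform, of course, but the point stands: a three-star average from a handful of votes tells you very little.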

when people give Dinner Bell three stars I get SO MAD and I want to go to their house and say “what about this was a THREE-STAR EXPERIENCE for you and why didn’t you TELL me so I could FIX it” and then I am like “okay calm down be a grown-up”

three stars though

oooooooooh

(I was coming here to agree that star ratings mean different things to different people but I got distracted)

Absolutely, the ratings mean different things to different people, but I would expect that the games that only average 2/5 stars (with say a minimum of 10 voters) would generally be considered not-great games according to the community consensus, no? You can’t draw any deep, solid conclusions from this data collection method, but I still think it would be interesting to see if this changes the shape of the graph by much.
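If anyone wanted to try it, the filter is trivial to sketch. This is a hypothetical example with made-up field names and data; IFDB doesn’t expose a dataset like this out of the box, so you’d have to scrape or export the ratings yourself.

```python
def filter_games(games, min_ratings=10, min_avg=3.0):
    """Keep only games with enough ratings and a high enough average."""
    return [g for g in games
            if g["num_ratings"] >= min_ratings and g["avg_rating"] >= min_avg]

def count_by_year(games):
    """Tally surviving games per release year, for re-plotting the graph."""
    counts = {}
    for g in games:
        counts[g["year"]] = counts.get(g["year"], 0) + 1
    return counts

# Made-up example records:
games = [
    {"title": "A", "year": 2012, "avg_rating": 3.5, "num_ratings": 32},
    {"title": "B", "year": 2012, "avg_rating": 2.0, "num_ratings": 15},
    {"title": "C", "year": 2013, "avg_rating": 4.2, "num_ratings": 3},
]
# Only "A" survives: "B" is rated too low, "C" has too few ratings.
print(count_by_year(filter_games(games)))  # → {2012: 1}
```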

I usually look more at the ratings, but if there are written reviews of the game, of course we give the reviews more trust! Let’s take the game ‘Tapestry’ as an example. I loved the game, but some people gave it 1-2 star ratings on the website. These things revolve around opinions, rather than the real fact of whether or not the game is actually good.

Sure, opinions differ, but the average across 32 ratings is 3 1/2, which seems like a solid consensus that the game is pretty good.

I don’t know of any objective measure for “actually good” other than an aggregation of opinions.

Mmmmh… I don’t know, really.
On the first quote: my speedIF has averaged 3 1/2 stars, and it is definitely not a “good game”.
Also: the Andromeda games fare barely over 4, which would mean they are almost as good as Lisey, which I would argue against :slight_smile:
I think the object of the discussion – the kind of game itself – should be considered.

Also – hoping this won’t ignite a flame war – imo, no: an aggregation of opinions is NOT a measure of “actually good”. So many masterpieces have so few fans compared to the average lousy blockbuster, in every medium. It really depends on what “actually good” means. Good for the crowd/for sale? Then OK. Good as in “good for real”… well.

True. Good ratings don’t make a good game. Has anyone checked out the Quest site? There are literally hundreds of games there with multiple 5-out-of-5 ratings, and a good deal of them are unplayable messes. Some can’t be finished due to bugs, some crash all the time, some have typos in their titles, yet still a lot of people thought they were worth 5 out of 5.

Unless everyone rates games the same way, a game having a good rating means nothing.

Or maybe use a 10-point rating scale instead of a 5-point one, to allow for more nuance.