If there’s a game missing from these search results, it could be that the game has not been added to IFDB, or hasn’t been labeled on IFDB with its authoring system, language, or publication year. Or I may have made a mistake in the search link; if so, please let me know.
These links are not part of the awards; I don’t know what the actual award categories are going to be, and I’m not the organizer. I just decided to do this on my own.
There are games on our 2022 list at CASA that aren’t in IFDB. Looking quickly they mostly seem to be recent translations of old games (which wouldn’t really count) and smaller experimental titles. http://solutionarchive.com/list/year%2C2022/
Well, that was an interesting exercise. I clicked on the first link to see all games published in 2022. The search said there were 378 results (just updated to 379). It’s supposed to show all results, but it stops at ‘Overrun’, and there is no Next link to see the rest.
I clicked on Show by Page and it only shows 20 games at a time, so I have to click through 19 pages to see all the games. There is no way to get back to the Showing All page, unless I use the browser’s Back button.
Then I tried doing my own search by publication date. This defaults to sorting by highest rating, but the sort order bears little resemblance to the displayed star rating. A game with 4.5 stars may appear on the third page, after various games with 3 and 3.5 stars. What the hell is going on there? IFDB must have some weird formula for rating games.
I also noticed that a few games have duplicate or triplicate entries. On further investigation, it looks like they are multi-part games with the same title or the same game in multiple languages. This could be made clearer in the titles.
I noticed games in my database that were missing on CASA and/or IFDB, so I’ll try to add those in the coming days, even if it’s just a skeleton entry that can be expanded later.
As far as I can tell, when sorting by rating IFDB uses a version of the average that’s adjusted somehow for how many individual ratings contributed to it, such that, e.g., a game with a 3.5-star average from 40 ratings is listed ahead of a game with a 4-star average from 10 ratings. It is a little confusing.
It’s an algorithm similar to what IMDB uses. I believe it’s a version of Bayesian averaging, although I don’t remember which formula was eventually settled on. I do know the version we use is very similar to the one used and explained on the IFDB Top 100 page: IFDB Top 100 - Recommended List
The problem with just using average ratings is that a game like Counterfeit Monkey, with an average rating around 4.9 from hundreds of voters, would come behind all 246 games that have a perfect 5-star rating from one or two votes. Whatever system is used needs to take into account both the average rating and the number of votes.
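For anyone curious what this kind of weighting looks like in practice, here’s a minimal sketch of an IMDB-style Bayesian average. The parameter values (a prior weight of 5 votes and a site-wide mean of 3.5 stars) are made-up placeholders for illustration, not IFDB’s actual numbers:

```python
# Sketch of an IMDB-style Bayesian (weighted) average.
# m (prior weight) and site_mean (site-wide average rating) are
# illustrative assumptions, not IFDB's real parameters.

def weighted_rating(avg, votes, m=5, site_mean=3.5):
    """Pull a game's raw average toward the site-wide mean when it
    has few votes; converge toward avg as the vote count grows."""
    return (votes / (votes + m)) * avg + (m / (votes + m)) * site_mean

# A ~4.9 average over hundreds of votes outranks a "perfect" 5.0
# built from only two votes:
print(weighted_rating(4.9, 300))  # stays close to 4.9
print(weighted_rating(5.0, 2))    # pulled well below 4.9
```

The effect is exactly the one described above: a handful of 5-star votes can’t outrank a large, consistent body of ratings, because the small sample gets pulled toward the site-wide mean.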
To see games with high average ratings, you can add rating:4.5- (or whatever) to your search.
Almost all of the games with multiple 5 star votes and a perfect rating of 5.0 on IFDB right now are from voter manipulation, most from Chooseyourstory when they raided IFDB, but some from people who get their friends to give 5 stars to games.
(For comparison, here is a list of all 14 games released in 2022 with a rating of 4.9 or higher: Search for Games)
Oh, I agree the weighted ratings are more useful! I just meant that, since it’s a little unusual to sort by weighted average but display only the raw average, I can see why someone would look at that and go “wait, is this working right?”
I had put &pg=all in the link to try to get IFDB to show all the results on the same page, but it looks like IFDB shows only the first 250 results. I’ve changed the search link now so that it goes to the usual 20 results per page, so at least people will not be confused about whether they’ve seen all the results. If someone comes up with a way to show more results at a time, I can edit the link again.
I always preferred the simple average, as IFComp uses. But this year we have to accept it, I guess. Then we have a whole year to discuss that topic.
EDIT: Mathbrush and volunteers could still postprocess the results in, e.g., Excel or a publicly readable (not writable) Google Doc. Mathbrush might need a second pair of eyes before publishing the results. I’m pretty busy, but if no one else volunteers, I could probably take a look.