So the project being discussed uses randomness internally to influence / determine which choices (storylet links) are displayed to the end-user for selection, and its author wanted to balance the frequency of those random(ish)ly determined choices.
One thing that is unclear is whether the display order of the “current” list of choices is consistent, or whether that order is also randomly influenced / determined. That can also be important, due to the “first / last item selection” bias that can occur in surveys / questionnaires.
eg. if the current randomly determined list of choices consists of “apple, banana, cherry”, are those choices always displayed in that order each time that specific combination is produced? Or does the system randomise the order, so that it might display them as “banana, apple, cherry” or “cherry, apple, banana” etc…
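If the system does display a given combination in a fixed order, a standard Fisher–Yates shuffle applied just before rendering would be one way to counter that position bias. A minimal sketch (the `shuffleChoices` function name and the fruit list are mine, not from the project):

```javascript
// Hypothetical helper: randomise the display order of the already
// selected choices, so no choice is consistently first or last.
function shuffleChoices(choices) {
    const result = choices.slice(); // copy; don't mutate the original list

    // Fisher–Yates shuffle: walk backwards, swapping each slot with a
    // randomly picked slot at or before it.
    for (let i = result.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [result[i], result[j]] = [result[j], result[i]];
    }
    return result;
}

// Same three choices as the example above, in a (possibly) new order.
const shuffled = shuffleChoices(["apple", "banana", "cherry"]);
```

Every ordering of the same three choices remains equally likely, so over many displays no single choice benefits from the first / last position.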
In the case of the above type of project I can see how randomly selecting links could help determine the frequency of each choice’s availability, but I would argue that it is the number of test runs performed that is the critical part in determining that frequency.
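To illustrate why the run count matters: estimating each choice’s availability frequency is a simple Monte Carlo exercise, and the estimate only stabilises as the number of runs grows. A sketch, where `pickChoices` is a made-up stand-in for the project’s actual selection logic (here: each storylet offered independently with 50% probability):

```javascript
const storylets = ["apple", "banana", "cherry"];

// Hypothetical stand-in for the project's random selection logic.
function pickChoices() {
    return storylets.filter(() => Math.random() < 0.5);
}

// Run the selection `runs` times and report how often each storylet
// was offered, as a fraction of runs.
function estimateFrequency(runs) {
    const counts = Object.fromEntries(storylets.map(s => [s, 0]));
    for (let i = 0; i < runs; i++) {
        for (const s of pickChoices()) {
            counts[s] += 1;
        }
    }
    const freq = {};
    for (const s of storylets) {
        freq[s] = counts[s] / runs;
    }
    return freq;
}

const freq = estimateFrequency(100000);
```

With only a handful of runs the observed frequencies can be far from the true 50%; with 100,000 runs they land very close to it, which is the point about the run count being the critical factor.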
However the original question was about testing for (coding?) errors within a SugarCube based project…
…and there was no mention about needing to determine the availability frequency of each potential choice.
So I’m still unsure how randomly selecting links can guarantee high enough code coverage to find the majority of “broken code” (1) type errors currently in a project, unless the number of test runs performed is very high.
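A toy model makes the coverage concern concrete: treat the story as a graph of passages, “play” it by clicking random links, and count how many distinct passages get visited. The passage names and link structure below are purely illustrative, not from the project:

```javascript
// Illustrative story graph: each passage maps to the passages its
// links lead to. "End" has no outgoing links.
const story = {
    Start: ["A", "B"],
    A:     ["C", "End"],
    B:     ["C", "End"],
    C:     ["End"],
    End:   []
};

// One play-through choosing links uniformly at random; returns the set
// of passages visited along the way.
function randomPlaythrough() {
    const visited = new Set(["Start"]);
    let current = "Start";
    while (story[current].length > 0) {
        const links = story[current];
        current = links[Math.floor(Math.random() * links.length)];
        visited.add(current);
    }
    return visited;
}

// Fraction of all passages reached at least once across `runs` play-throughs.
function coverageAfter(runs) {
    const seen = new Set();
    for (let i = 0; i < runs; i++) {
        for (const passage of randomPlaythrough()) {
            seen.add(passage);
        }
    }
    return seen.size / Object.keys(story).length;
}

const oneRun = coverageAfter(1);      // a single run can miss whole branches
const manyRuns = coverageAfter(200);  // many runs are needed to approach 100%
```

Even in this five-passage example a single random play-through can visit as little as 60% of the passages; a real project with deep branching, and with passages gated behind rare random choices, needs a very large number of runs before random clicking plausibly reaches the majority of the “broken code”.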
And I definitely don’t see how such testing can be used to find any “logic” errors, because that kind of testing generally requires knowledge of the expected outcome.
But the project isn’t mine, and I’m not doing the testing, so what I think doesn’t really matter.
(1) where “broken code” represents error types like invalid syntax; using misnamed variables / macros / functions; etc…