Branching from the discussion at Pouring it into, Does the player mean - #21 by otistdog…
OK, calling all Mad Scientists: Let’s fix the inference problem.
Based on my experiments, this is both easier and harder than it sounds. Certain relevant parsing features are not exposed at the I7 level and are not easy to represent, but that can be worked around. The bigger problem is imposing an intuitive model of the inference process on top of what's actually going on, so that the result is easy for authors to use.
@Draconis: You have pretty much written off inference as a nuisance best removed. What would it take to make the subsystem useful to you?
@Zed: You’ve had a long-standing interest in “sanity check rules.” What form would those take, and can they be made applicable here?
Some approaches I’ve tried:
- Fine-grained inference scoring control
  - author can write arbitrary rules to assign inference score bonuses and penalties
  - pros: complete control over the score used to infer objects
  - cons: hard to anticipate how combined scores will stack up for a given object
- Fine-grained control of existing model
  - author can write rules to decide whether or not scoring components are applied, but not dictate scores
  - pros: fairly easy to eliminate "interference" of automatic rules in particular situations
  - cons: have to understand the underlying model in detail to know which rules to write
- Double duty for DTPM ("does the player mean") rules
  - the inference process consults DTPM rules when deciding inference scoring
  - pros: seems closest to the intuitive model most people have of what DTPM rules should do
  - cons: makes understanding the full impact of DTPM rules even harder
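To make the contrast concrete: the first block below is a standard DTPM rule as it exists in stock I7 today, while the second is a sketch of what the author-facing syntax for the first approach might look like. The "inference scoring" rulebook and the "increase the inference score" phrase are invented names for illustration only; nothing like them exists yet.

```inform7
[Real, existing I7 syntax: a DTPM rule nudging disambiguation.]
Does the player mean taking the brass key:
	it is very likely.

[Hypothetical syntax for fine-grained scoring control -- these
 names are placeholders, not part of stock I7:]
An inference scoring rule for an open container when pouring:
	increase the inference score by 20.
```

Under the first approach, any number of such scoring rules could fire for a single object, which is exactly where the "hard to anticipate how combined scores will stack up" problem comes from.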