(I hope these posts aren’t distracting. I won’t post too many of these, I promise. I just find it fascinating and I hope others at least find it somewhat interesting.)
So the AI was instructed to craft examples for the manual based on what it had learned. Here’s something that was really interesting to me: it suggested that the locations of Zork should be used, instead of what the manual currently shows, to demonstrate room connections and regions.
When prompted for the correct way to do things, and asked to surface its reasoning via the Explainable AI approach, the AI came up with:
There is a room called West of House.
A room called North of House is north of West of House.
A room called South of House is south of West of House.
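As an aside, that “a room called” phrasing matters more than it might look: these names all start with direction words, and my understanding is that a bare assertion like “West of House is a room.” risks Inform reading it as “west of a room called House.” For anyone who wants to paste this in and try it, here’s a minimal sketch that should compile as-is; the title line and the test command are my additions, not the AI’s:

"Zork Sampler"

There is a room called West of House.
A room called North of House is north of West of House.
A room called South of House is south of West of House.

Test me with "north / south / south".

Note that each of those map assertions is two-way by default: saying North of House is north of West of House also gives you the return connection going south, which is why the test command can walk back and forth.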
I thought this next bit was interesting as well. The AI came up with:
Behind the House is east of North of House and east of South of House.
When asked to explain:
Where it got even more interesting was with regions, given the rooms above. The AI first tried this:
Outside is a region.
It realized that wasn’t working; my own guess is that the name clashes with Inform’s built-in “outside” direction. When asked why, it responded:
It then used:
Outdoors is a region.
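The payoff of a region, for what it’s worth, is that rules can then be scoped to every room inside it at once. Something like this (my illustration, not anything the AI produced) works once rooms have actually been placed in the region:

Instead of sleeping in the Outdoors: say "The ground is far too cold for that."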
So then it tried to put the above locations into the region:
West of House, South of House, North of House and Behind the House are in the Outdoors.
This didn’t work. The AI didn’t really explain why, except to say that it assumed Inform was treating the statement as directional commands even though the rooms were now defined explicitly. (Most likely the direction words opening those room names were the culprit; prefixing “The rooms” presumably forces Inform to read them as names rather than as map directions.) It ended up coming up with this:
The rooms West of House, South of House, North of House and Behind the House are in the Outdoors.
Which, of course, works. (Although there is still a problem with it! A problem that neither the AI nor the humans, as of yet, seem to have noticed.)
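For anyone following along at home, here’s the whole thing consolidated into one source text. This is my own assembly, not the AI’s output: the AI’s statements are kept verbatim, with the title line, the region rule from earlier, and a test command added so you can see the region actually doing something:

"Zork Sampler"

There is a room called West of House.
A room called North of House is north of West of House.
A room called South of House is south of West of House.
Behind the House is east of North of House and east of South of House.

Outdoors is a region.
The rooms West of House, South of House, North of House and Behind the House are in the Outdoors.

Instead of sleeping in the Outdoors: say "The ground is far too cold for that."

Test me with "north / east / sleep".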
So, again, the context here was the “Explainable AI” approach. The AI was tasked with helping craft examples that would showcase Inform logic, with some test cases (for lack of a better term) to show what can go wrong and what can go right.
It’s harder to get it to work in reverse. Meaning, if the AI finds a solution that works, it tends to stick with it (exploiting over exploring). But when it hits on a solution that doesn’t work, it definitely does quite a bit more exploring.
Finally, when the AI was queried as to why it made sense to use Zork, the response:
Altogether, not too bad of a response!