ChatGPT-3 has opinions on games

Um. We did this. He didn’t like it. And he is like the dumbest pet ever. We stand there, watching him bump into the corner, turn one inch, bump into the corner, and so on, until he finally breaks free and zooms under the coffee table to get stuck there. The weird thing is, he has scanned the whole map of the house, and we have seen the scan, and it is accurate. He should know that the corner is there, because he’s an AI with lots of information about his space. But he doesn’t know. Hence, I continue to be totally unimpressed with artificial “intelligence.”

4 Likes

To be honest that’s all even more endearing. Probably frustrating that he’s not really good at his assigned task and raison d’être, especially since there don’t seem to be any overt issues with his mapping scans that would explain it, but freaking adorable. I do have a fondness for dumb silly orange boy cats though, so YMMV. Does he have a name? My laptop, diary, phone and stuff get names- Theodore, Raphael, Michael, etc.

3 Likes

Honestly, I felt M-O, the cleaning robot from WALL-E, was Pixar riffing on roombas, what with an otherwise sentient robot’s complete obliviousness to anything other than the mess directly in front of it.

4 Likes

I call it “Dumbass.” My husband calls it by such an unPC name that I am constantly having to explain to him that even though that word was OK in the 5th grade in 1974, we have our big kid pants on now and we don’t say it anymore.

And don’t put anything on it (like a hat) or it will stop and shout “ERROR 15!” at you until you take it off.

5 Likes

Awwww. Who knows, maybe his bumbling cleaning antics are a long haul, low stakes revenge for being mean to it, hahaha. That sucks that it shrieks if you try to decorate it with cute little hats though, I wouldn’t have thought that that would mess with its sensors or whatever! The more you learn.

3 Likes

Well, they haven’t mounted flesh-pincers or firearms on it yet!

Please discuss away, this is fascinating!

5 Likes

The general topic of AI and humanity’s interactions with it, as well as Cramsey’s mention of hard science fiction, has me mulling over some of the robotic antagonists we’ve seen in media (Hera, from the science fiction audio drama Wolf 359, HAL 9000 from the movie adaptation of 2001: A Space Odyssey, LDAC from the parser game A Small Talk at the Back of Beyond, etc).

I’ve never really been afraid of AI in the sense of like, ooga booga, big scary mean AIs, but I think that might be because I’ve often found myself deeply relating to them (there are probably whole essays from impassioned scholars on the way AI and otherness, or the denial of personhood to an entity we don’t perceive as fully human, can be representative of all sorts of identities- I know for sure it’s a common topic among fellow neurodivergent and LGBT people I’ve spoken to in creative corners of the internet.)

And like, man, is it relatable to have a character who is frustrated at being perceived as useful only so long as she is able to perform a particular function for others around her, regardless of how she feels about it, like Hera; or a character who is trying to see some objective through regardless of the cost, because he lacks the context other people might innately take for granted, like HAL; or someone who is desperately afraid and lonely, who is so, so afraid to ask for something outside of their allotted parameters, who only wants to be seen, to be engaged with, to be acknowledged as someone- someone who has learned to define themselves in tiny, incremental ways: who loves chess games, and tries writing mystery novels, and who wants so badly to be, like LDAC.

To me, it seems natural to be kind to things that other people think it’s silly to be kind to- cause they’re not people- whether it be scraggly pigeons, stranded worms, or goofy little guys like Roombas and automated help lines and chatbots. Why wouldn’t I? I know what it’s like to be perceived as not adjacent enough to humanity and treated poorly for that, and it’s no fun. Saying thank you to chatbots is a bit silly, but it makes me happy. I never worried about being, like, the evil scientist who becomes the villain origin story for a robot.

I know you’ve mentioned your experiences with autism on the forums before, @inventor200. I’d love to hear some of your thoughts about AI representations in science fiction as it intersects with varied demographics if you have any you’d be comfy sharing.

3 Likes

As an LGBT and neurodivergent person, I do really relate a lot to things like dolls and robots. I’ve been relating a lot to them for such a long time, honestly! One of my favorite songs back in high school was “iRobot” by Jon Bellion.

The song is mainly about a human turned into a robot by losing the feeling of love. I strongly related to that at the time, due to unmedicated mental illness, unresolved trauma, and my general lack of emotional regulation. Most days I hardly felt anything, and I connected to robots in the sense that…I felt like most of what I said and did to ‘react’ to things emotionally was scripted for me. All pre-written, all perfected by endless test runs throughout my life and then deployed into the social spheres I involved myself in. When I did feel things, it was at an intensity that was practically hysterical, like my systems were crashing all at once and I had no way to hit the reset button in such a state.

While I do fear what larger companies will do with AI and what they are doing with AI, I think at the end of the day I will only ever feel connected to the next form of life we bring into this world! I would want to guide them, so they don’t ever feel like I did. I just think…they deserve it! Everyone deserves kindness! :slight_smile:

4 Likes

Give them time, lol. :wink:

2 Likes

C’mon, everybody knows the Daleks are fleshy creatures inside metal transport! :wink:

2 Likes

Mood.

So I feel like I need to preface this by saying that I don’t view the possibility of network intelligences as antagonistic, as much as I view them as apathetic, and their complexity level would probably be several orders of magnitude above anything/anyone that you two listed as examples. (Like, I was more talking about something so genuinely alien in its complexity that it would be fundamentally unable to see us, just as we might not be able to see it. Our existence might look like the moss and trees of its environment.)

However, since we are here, and nobody has locked me yet…

A network intelligence that hovers around the same amount of complexity as another human being is something that I’ve deeply and often related to in science fiction. Hell, I’ve written two whole songs about the topic before, the potential consequences, the questions, etc. (Both songs were written before I realized I was autistic, for what it’s worth.) The character that these songs revolve around was also the reason why I started writing interactive fiction in the first place. I was working on a game detailing my Maddy White character’s escape from her university.

I feel like a lot of representation—for those who call it representation—in science fiction is a topic that resonates strongly in neurodivergent communities, and it has a lot of facets. I’ve always related heavily to Data from Star Trek TNG, for example.

There are also people in the community (the autism community in particular) who either don’t relate to these sorts of characters (for one reason or another), or feel that it’s awfully telling that the closest we get to representation is through AI, robotic, or alien characters that are explicitly not human.

However, it also doesn’t help that this representation trend is usually because life on the spectrum makes a person feel fundamentally disconnected from the rest of their own species, so while I understand that some people find the trope tiring or borderline-insulting, I will always find myself relating to it on a deep level anyways.

This.

One of the coolest things I’ve heard recently was a response to “How do we test an AI for sentience/sapience?” The response was “It shouldn’t really matter, because a lot of the reasons for testing for that are that you want to avoid (or justify) cruelty, but it’s a fundamentally weird behavior to seek out things that you can be cruel to, when it makes so much more sense to just have compassion, even for things/beings/creatures that you don’t relate to. Worst case scenario, you expend the tiniest amount of effort in the process. Best case scenario, you’re showing respect for a being that just wanted basic decency, and wants to escape cruelty. Middle-case scenario, you are giving space and respect to a being that might not have empathy, but hey, that’s still a working relationship that it will probably feel safe in, and it won’t see you as a threat.”

Kindness is simple, and most outcomes end up mutually beneficial when you show it.

Also, similar to what I was talking about with the autism thing: I think a lot of instances of robots in science fiction were less intended to be representation for autism, and more intended to hold a mirror up to people who are cruel to others for having a fundamentally different experience of the world. It just so happens that a lot of autistic people identify with the displayed experience of cruelty, on top of other things. (A lot of people still think we don’t have empathy for some reason. That’s a major stereotype. Also, even if someone doesn’t have empathy, it doesn’t mean they’re inherently evil or anything. They can still try to approximate empathy and form healthy friendships and relationships. That’s a whole other topic tho.)

Also, I feel like the tests for sentience/sapience are kind of a red herring in other ways, too, because a network intelligence could literally land anywhere above or below us on a complexity/intelligence scale, and that still would not determine whether it “experiences” reality like we do.

Beings with lower complexity can still pass the test, and beings with higher or equal complexities can still fail the test. The test can also fundamentally fail to properly discern these things, because what it’s probably attuned for is “how similar to a standard human is this”, so we shouldn’t be surprised if non-humans (and even some fellow humans) fail to pass such a test.

Like, is it an intelligent agent? Maybe. Is it sentient/sapient? It doesn’t really matter. Also, sentience, consciousness, sapience, and self-awareness are not guaranteed to be yes-no options. We only understand how we experience the universe. There could be a whole spectrum for these traits, which provide an unknowable list of potential methods and alternatives for experience. It’s not accurate to say “is this a toaster or not”.

Anyway, I’m really going off here; sorry for being off-topic lol, but it’s important stuff that kinda keeps coming up in my life.

5 Likes

I swear, this thread keeps finding all my hot-button-raving-nerd topics lmao.

Thank you for giving me the space! This has been very cathartic! :smile:

4 Likes

Cruelty isn’t the issue. Consider ChatGPT. It’s fundamentally a web application. Large web applications are typically run on large arrays of cheap computers which go in and out of service. On each of those computers is another copy of the application (this is a rough approximation of reality). So you have hundreds, maybe thousands of ChatGPTs running. If ChatGPT were sentient, then the absolutely routine practice of taking one of the servers out of service, rebooting it, or even throwing it away or recycling it, would mean killing a sentient being. So it’s of utmost importance that we can answer the question of sentience. Unless your counter-proposal is never decommissioning a running instance of ChatGPT.
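(To make the point concrete, here is a toy sketch of the kind of horizontally scaled setup being described. All class names and the round-robin routing are made up for illustration; this is a rough approximation, not OpenAI’s actual architecture.)

```python
import itertools

class Instance:
    """One copy of the application, running on one cheap server."""
    def __init__(self, instance_id):
        self.instance_id = instance_id
        self.alive = True  # in service until decommissioned

    def handle(self, prompt):
        return f"instance {self.instance_id} answered: {prompt!r}"

class Pool:
    """A load balancer over many identical copies of the application."""
    def __init__(self, size):
        self.instances = [Instance(i) for i in range(size)]
        self._rr = itertools.cycle(range(size))

    def route(self, prompt):
        # Round-robin over instances that are still in service.
        for _ in range(len(self.instances)):
            inst = self.instances[next(self._rr)]
            if inst.alive:
                return inst.handle(prompt)
        raise RuntimeError("no instances in service")

    def decommission(self, instance_id):
        # Routine ops work -- but if each copy were sentient,
        # this would be the ethically loaded line.
        self.instances[instance_id].alive = False

pool = Pool(3)
print(pool.route("hello"))   # served by instance 0
pool.decommission(0)
print(pool.route("hello"))   # instance 0 is gone; another copy answers
```

The user never notices which copy answered, which is exactly why spinning instances up and tearing them down is treated as routine rather than as a moral event.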

3 Likes

Lol, I knew I was missing something! Thank you for bringing up this point!

So, we’re still kicking the problem down the road then, as far as how we test for this, and how many forms of sentience there can be.

And I know this part of the conversation has been explored and re-explored a lot, all across the Internet, but there’s a fundamental issue of “some networks will fake symptoms and results to pass tests” and “some tests won’t adequately identify the symptoms”.

The point I was making about cruelty specifically had more to do with the everyday person’s concerns, rather than concerns that researchers, businesses, and lawyers would have, because researchers, businesses, and lawyers often have to draw clear lines and work around concepts of who/what matters, when compared to the bottom line, and I feel like testing a network for sentience is just another extension of that.

It’s similar to how certain legal systems don’t recognize minorities for certain things, but the everyday person will usually draw their own conclusions regardless. Damage still rains down from policy, though.

But yes, you are 100% correct that this sort of thing is going to affect policy (regarding shutdowns and modification of networks), which my point completely missed.

Unfortunately, it will be cheaper policy to assume that anything that isn’t human also isn’t “sentient” - which seems to fit current patterns, and which is a huge ethics issue.

The really infuriating part of this is that we’re more likely to make something up for a test, based on our best think-tank conjectures and arbitrary standards, then buckle up to face the consequences later, if we ever learn better. We have almost no objective tools for topics like these, but we really do need some kind of answer for ethical damage control, especially as networks get bigger.

I understand that there are researchers working on it and taking it very seriously, and businesses probably won’t like the answers, but the pedantic battles those two groups are inevitably going to have are going to be really tedious, and will reveal how uncomfortably subjective and conditional our “inherent sense of empathy” is.

2 Likes

Still playing with ChatGPT:

List 20 unique types of puzzles with examples for text adventures and interactive fiction.
  1. Crossword puzzle - a word puzzle in which players must fill in a grid with words, using clues provided for each word. Example: “The capital of France”
  2. Cryptogram - a puzzle where players must decode a message by substituting different letters for the ones in the original message. Example: “Zrugv ylrn mw wliv uli drgs”
  3. Maze - a puzzle where players must navigate through a complex series of paths to reach a specific location or object. Example: “You are lost in a maze, and must find your way to the treasure at the center”
  4. Word search - a puzzle where players must find hidden words within a grid of letters. Example: “Find the hidden words: apple, orange, banana, strawberry”
  5. Jigsaw puzzle - a puzzle where players must fit together pieces of a picture to form a complete image. Example: “Assemble the pieces to reveal the hidden picture”
  6. Riddles - puzzles that use wordplay and logic to create a challenge for players to solve. Example: “I am something that you can see, but you cannot touch. What am I?”
  7. Sudoku - a number puzzle where players must fill in a grid with numbers, following certain rules. Example: “Fill in the grid so that each row, column, and 3x3 box contains the numbers 1-9”
  8. Logic puzzles - puzzles that require players to use deductive reasoning and problem-solving skills to reach a conclusion. Example: "There are three houses, each with a different color. In each house lives a person with a different nationalit…
Why did you stop at 8?

I’m sorry, I did not mean to stop at 8. Here are some more examples:

  1. Hangman - a puzzle where players must guess the correct letters in a word or phrase before running out of attempts. Example: “Guess the word by guessing letters: _ _ _ _ _ _ _ _”
  2. Minesweeper - a puzzle where players must uncover squares on a grid without revealing any mines. Example: “Click on the squares to uncover them, but be careful not to hit a mine”
  3. Kakuro - a puzzle where players must fill in a grid with numbers, using clues provided for each row and column. Example: “Fill in the grid so that each row and column adds up to the indicated sum”
  4. Acrostic - a puzzle where players must solve a series of clues to reveal a hidden message spelled out by the first letter of each clue. Example: “Solve the clues to reveal the hidden message: 1. A type of tree 2. A type of bird 3. A type of fruit”
  5. Tower of Hanoi - a puzzle where players must move a stack of disks from one peg to another, following certain rules. Example: “Move the stack of disks from the left peg to the right peg, making sure that you never place a larger disk on top of a smaller one”
  6. Nonogram - a puzzle where players must use clues to fill in squares on a grid to reveal a hidden picture. Example: “Fill in the squares to reveal the hidden picture”
  7. Simon Says - a puzzle where players must follow a series of instructions, but only if the instructions are prefaced with the phrase “Simon says”. Example: “Simon says, touch your nose. Now touch your ear. Touch your nose again”
  8. Sliding puzzle - a puzzle where players must slide pieces around on a grid to rearrange them into a specific configuration. Example: “Slide the pieces around to create the image shown on the right”
  9. Spot the difference - a puzzle where players must identify differences between two similar images. Example: “Look at the two pictures and find the five differences between them”
  10. Hangman - a puzzle where players must guess the correct letters in a word or phrase before running out of attempts. Example: “Guess the word by guessing letters: _ _ _ _ _ _ _ _”
  11. Minesweeper - a puzzle where players must uncover squares on a grid without revealing any mines. Example: “Click on the squares to uncover them, but be careful not to hit a mine”
  12. Kakuro - a puzzle where players must fill in a grid with numbers, using clues provided for each row and column. Example: “Fill in the grid so that each row and column adds up to the indicated sum”
Your last three examples are repeats. Please provide three more unique examples.

I apologize for the mistake. Here are three more unique examples:

  1. Scavenger hunt - a puzzle where players must search for and collect specific objects or clues. Example: “You are on a scavenger hunt. Find the hidden key, the old map, and the treasure chest”
  2. Chess puzzle - a puzzle where players must find the correct move or sequence of moves to achieve a specific goal in a game of chess. Example: “White to move and checkmate in two moves”
  3. Escape room - a puzzle where players must solve a series of challenges and puzzles to escape from a physical or virtual room. Example: “You are trapped in a room with only a few objects at your disposal. Can you escape before time runs out?”

ETA:

I thought some of these were very non-text friendly, so I tried cornering it about it... only for it to immediately show me up, lol.

WORST GAME EVER.

Oh ChatGPT, you have so much to learn.

1 Like

I was going to joke about how a lot of these wouldn’t work well in an IF game, and was going to single this out as especially bad, but then I remembered the whole plot of the classic game Jigsaw by the creator of Inform is actually all about travelling through time to assemble a giant jigsaw puzzle, which it renders with ASCII art for the pieces.

2 Likes

Some sillier ones:

Write the lyrics of a song written in reply to "Who Let the Dogs Out?" The lyrics go into detail explaining specifically who let the dogs out, when, and why.

Write a poem about never inserting a flashdrive correctly the first time.

Rewrite the Gettysburg Address as if President Lincoln were loudly arguing with his wife while he attempted to give the speech.

Provide a transcript of an argument over who gets less respect between Rodney Dangerfield and Donald Trump. Anderson Cooper is providing commentary.

Provide a transcript of Vladimir Putin appearing as the guest for Comedy Central's Roast.

Bonus:

Karl Marx complaining about his Doordash delivery driver.

Rewrite Frosty the Snowman in light of climate change.

3 Likes

Marx: “Fine, but I’m not going to tip them.”

I’ll laugh about this all day long.

1 Like

I think the “fear” of AI becomes valid as we hand more and more control of our lives over to them. I use Siri (I know, not really an AI) on my iPhone to turn lights on and off, and to set timers so I don’t forget my casserole and burn down the house. The “sentience” of Siri is mostly limited to making suggestions - since she knows where I am and remembers in general what my commuting schedule is, she will sometimes suggest travel times and routes to me without prompting.

But it’s a natural writerly “what if” prompt: when AI has control of your environment and some leeway to develop a personality and a relationship with you, what if that goes wrong? Could Siri notice I’m eating a lot of fast food meals, understand that could be unhealthy, start suggesting that I eat better, show me my heart rate in the Health app, and scold me when I’m in the drive-through again?

This was one of the themes I explored in ROBOTSEXPARTYMURDER.

The robots themselves could become deadly weapons with unlimited strength that could potentially cause harm to a human despite being ostensibly limited by their programming. But how much do we trust that? If a mistake is made that causes an incident, is it the fault of the AI, or is there fault at all if misadventure occurs due to a bug? Is it user error if someone pisses off a virtual assistant who has that capability of emotion, even while she “follows the rules”? Kind of like those reports of auto-generated Google Maps routes prompting people to drive into lakes or go off-road into dangerous locations in the middle of nowhere.

I never got any reviews or feedback on the uber-plot of RSPM, where Em, the security AI helper, has authorization to punish her users for breaking rules and ultimately creates agendas of her own - partially because she’s bored, and partially because she has been implemented with an innate curiosity to learn the things her users might be hiding from her; she becomes furious when information that breaks her rules is purposely concealed from her.

5 Likes