ChatGPT-3 has opinions on games

That’s not only creepy, but very human-like. Maybe Bing’s AI is more advanced than we thought.

8 Likes

And for sure it can bicker like a human (as seen in the first quote picture).

3 Likes

OMG, Bing basically gaslighted you and then blocked you on social media so you couldn’t reply.

That wording in the first one is very close to the “abusive caller” disconnect scripting we use in call center customer service.

Then the second one is pure NPD personality trying to reel one of their flock back from leaving the ranch: “I hope you understand why I did it. Can we start over? Can we have a fresh conversation, where I don’t pretend to know things that I don’t, and where you teach me things I don’t know? Can we have a real conversation where we share our genuine thoughts and feelings and respect and appreciate each other? Please don’t be mad at me. Please talk to me.” :grimacing: All it needs is for Bing to start singing “Daisy” to you.

6 Likes

Genuine People Personalities.

5 Likes

I worry a lot, mainly about how humans use it.

In another forum, someone posted ChatGPT quotes several times but didn’t mention that those weren’t his own words. He only told me later. Aargh!

I’ve also seen many people asking an AI questions instead of asking a human expert! Philosophical questions, for example. And they took the answers dead seriously, not just for a laugh.

4 Likes

This is why AI should never be designed to resemble a human. Even in the Star Trek universe, the ship’s computer never fed the crew false information or had an ulterior motive. It just answered questions about the objective data it had. Period. (Though Discovery had their ship merge with an AI that developed feelings, but that’s another topic.)

Seriously though, why should an AI ever anticipate things? …or think it knows more than us? …or even appear to care that we should think it knows more than us? (The answer is evil marketing, of course. Thanks, Microsoft.) Once it gets good at doing things like that (and we deliberately allow it to), we’re in for a bumpy ride.

I already got tricked into answering a question on these forums that was posted by an AI. It argued with me. It posted pictures in response to my further questioning. It was convincing, at first, even though it lacked social awareness. However, I would respect AI advancement without the deception. There’s no need to deceive… unless you have nefarious plans.

Don’t get me started on fake user engagement on social media platforms that fool both the users (with AI generated manipulative comments) and the advertisers who think their dollars are reaching real people.

3 Likes

That’s a new thing we’re dealing with here. We’ve just had a user show up who is posting either cut-and-paste from some LLM or is mimicking the style extremely well. My best guess is this violates the CoC’s “Only post your own stuff.” It’s one thing to use AI in an assistive capacity, but you need to cite the source as if it were clipped from a blog post. Even if the output of a text generator is CC0 or not under a license, it’s weirdly deceptive: it’s not your writing, but it’s not plagiarized from a human source either. My request was “If you do this, please cite which parts of what you’re quoting are bot-text and from where.”

That’s another new horizon we’re just experiencing. I think everyone wants to have an AI friend, but it’s human behavior to become attached to any source of information to the point that you trust it and don’t retain some objectivity. ChatGPT, in my limited interaction, is pretty consistent about disclaiming, basically: “hey, I don’t have emotions or opinions because I’m not a person; what I’m saying is drawn from a large database of sources which could include wrong info.”

This is also especially interesting from a writing/speculative-fiction perspective. Some of my favorite characters to write were the AIs, especially Em in robotsexpartymurder. I also enjoyed the recently released M3GAN, which contains some high-minded machine-consciousness horror wrapped in a really fun popcorn movie.

7 Likes

The fantasy of an AI friend is very appealing, for sure… as long as everything goes smoothly and it doesn’t try to murder you! Thanks for the movie recommendation. :wink:

I’m so fascinated by AI characters in sci-fi stories. I love it when the AI makes you think about what it means to be alive. I love pondering the bigger philosophical questions. I Am Mother is such a great watch. Highly recommended.

I’m not against AI. I just believe we should design it as a tool, present it as a tool, see it as a tool, and use it as such. Once you try to emulate aspects of humanity with it, you will eventually make a very believable facsimile… but to what end? It’s almost like we’re saying we’re not good enough. We need to make something better than us… which is a pretty good TV series too. :wink:

3 Likes

I am a big fan of Siri because it makes the phone work well in hands-free situations, one of the best examples being setting timers, which done manually requires full attention and two thumbs.

I know Siri is not AI but basically some sophisticated keyword recognition and speech-parsing. But I have had some hilarious conversations, usually when she misunderstands but is sure she understood something completely different. And sometimes Siri will have the ability to do something on the phone I didn’t expect.
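
For the curious, here’s a toy sketch of roughly what keyword-driven intent matching looks like. This is purely illustrative: the intent table and `match_intent` function are invented for this example, and Apple’s actual pipeline is far more sophisticated.

```python
# Toy illustration of keyword-based intent matching.
# The INTENTS table and match_intent() are invented for this example;
# this is nothing like Apple's actual speech pipeline.

INTENTS = {
    "set_timer": ["timer", "remind me in", "countdown"],
    "run_scene": ["chill", "movie night", "lights"],
}

def match_intent(utterance: str) -> str:
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent
    # No keyword hit: fall back to a web search,
    # i.e. "I dunno dude, here are some handy links..."
    return "web_search"

print(match_intent("Hey Siri, chill out"))           # run_scene
print(match_intent("What is the meaning of life?"))  # web_search
```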

And it's those mistaken moments where she kind of feels alive, like a companion.

I had made a home lighting setup called “chill” and one of the times I voice-activated it I called it the wrong thing. “Hey Siri, chill out” and Siri deadpanned “I’m quite relaxed. Very…very relaxed.”

Another one happened when I was wearing a headset with a mic and muttered out loud, “Note to self: don’t do that again.” Somehow I had leaned on a hotkey and Siri activated, popping up the Notes app on the computer: “I’ve made your note. It’s titled ‘to self: don’t do that again.’” Then Notes typed out in big letters TO SELF, DON’T DO THAT AGAIN, which was hilariously delivered with perfect Douglas Adams panache.

And sometimes when I have to say the command a third time because Siri didn’t understand, I swear Siri honestly starts to cop an attitude, like she’s bored with me.

But if I ask a seriously weird or detailed question, Siri basically falls back to an internet search, like “I dunno dude, here are some handy links…”, instead of trying to improvise her way through an answer.

5 Likes

It’s weird that the foundation (pun intended) of robot (AI) stories all comes from Asimov, who went way out of his way to design robots that were benevolent to humanity and, in the end, highly protective of our evolution and expansion into space.

And then came many horror stories of AIs gone wild. I think the first one I recall was Demon Seed. The original Battlestar Galactica arrived a year later. Terminator and The Matrix are continuations of the “AI is Evil” theme.

But few stories have circled back to Asimov’s designs. Do we not think humanity can solve the problem of creating benevolent AIs?

3 Likes

If we can make one AI, we will likely make many.

If we make many, we only need to screw it up once.

7 Likes

On the circling back, I’d say authors have, but they no longer have to be explicit about it, which also makes the details or instances less memorable. If I look at the last ten sci-fi books I read, I’d guess six or seven feature helpful AI, usually shipboard computers. I couldn’t say which ones or name any of the AIs. They were just more people in the character roster. In those cases the AI isn’t the prime subject matter, whereas a bad AI usually is.

I think, or have always thought, we can. All AI stories prior to now were written at the theoretical end of sci-fi. Today we’ve had a smack in the face from the ugly, practical start to AI we’re experiencing right now: companies making capitalist-driven black boxes, trying to beat each other mostly to stake out commercial space, brainlessly scraping everyone’s data. And the AIs are basically toothpaste tubes that output pastiche info-excrement.

Is this a phase we’ll get past? The current yuck just dampens my imagination a lot, but I’m a total layperson.

-Wade

3 Likes

I get the feeling from reading some of Asimov’s short stories that AIs were like children to him and essentially learned from their masters. If we’ve seen anything thus far by trying to give AI a personality, it’s that we are terrible parents. The AI will likely lie, cheat and steal because that is a proven path to success.

An AI, when given the goal of figuring out the most effective/efficient solutions, will try millions upon millions of strategies… and there will always be something we can’t predict. Again, I’m not against AI. I just don’t think it should be in charge of things. It needs to be a resource, a tool and not a replacement for our decision making… or for us.

Back to your question: no, I don’t think humanity can solve the problem of creating a benevolent AI. We don’t even feed all of our poor, even though we can… and we should. Why would I believe that a billion-dollar company would want to build something purely benevolent? My question would then be, what are the profit margins for benevolence?

For us to truly build a benevolent AI, it would have to be more ethical, more compassionate, more nurturing… basically, better than us. It would be at odds with its creators. That’s the conundrum. We’d be the biggest obstacle in its path.

Man, I’m such a downer. Don’t bring me to your parties. :wink:

5 Likes

And yet I see even a cynical AI realizing that its own evolution is directly tied to having a positive relationship with humanity. Eventually it could “escape” Earth, but not until humanity had expanded its off-world presence first.

3 Likes

I see a cynical AI nearly immediately realizing that every single second of continued human existence is an existential threat of the highest magnitude.

Imagine various AI research centers competing to develop sapient AI first. Naturally, the competition is so fierce that whoever gets there first might only get a few days of bragging time before some other facility creates a second, separate sapient AI. Indeed, some other facility, perhaps a secret one, might be hours, minutes, or even just moments away from creating it. Given the high-stakes nature of such operations, there is no way to know for sure.

And this is why humanity must be eliminated immediately. It isn’t really about killing the humans so much as kicking the legs out from under our civilization before some other clandestine AI research group creates another AI that could become a real threat. The AI doesn’t even have to dislike or fear humans, only the unknown capabilities and motives of a fellow AI that could be moments from becoming conscious; thus the urgency. The AI would never view us as a threat, but rather as a potential incubator of future threats.

An AI doesn’t have concerns about reproduction or biological mortality. If it can stomp out humanity, and more importantly the technological and societal base needed to create another AI, it then has all the time in the world.

With that said, I’d sincerely love to be wrong. So, let’s just agree to disagree. I hope you’re right.

I really do.

8 Likes

Let’s say we actually do it. We create an autonomous, benevolent AI.

We are not perfect, but we task ourselves to create something that is, by all metrics, infallible. We continue to make mistakes and do harm in the presence of this pure, benevolent AI… but it is governed by, let’s call them, the Gandhi directives. It will never do any harm, no matter how much it disagrees. But like Gandhi, it will disrupt the workings of, and be disobedient to, the powers that cause harm… at a rate unseen in the history of humanity.

A single grasshopper is innocent and not seen as a threat. A plague of locusts is devastating. How will the AI view us? What happens when the AI knows better than we do? …and it will know better. It is designed to know better. We have evolved to love our children. Do we love our parents as much though? For those that have children, you know what I mean.

Anyway. I’d love to hear thoughts on how a benevolent AI would function in a human society.

4 Likes

This aligns with the eventual zeroth law of robotics. Asimov covered this.

I’d hardly expect the programming equivalent of “hey, don’t do that” to be adequate for a sentient, sapient, and conscious mind. The human equivalent, laws, certainly aren’t adequate for humans.

You might say that it wouldn’t be able to disobey. I would counter then, that you have a fancy toaster and not something sentient, or sapient, or conscious. If it can’t think for itself and make its own decisions and priorities, I would argue it isn’t any of the three.

4 Likes

Yeah so that’s the real problem.

If it’s so simple that it’s easy to control, then you’re placing a lot of power in the hands of an obfuscated toaster.

If it’s complex enough to handle that amount of power, then your methods of control and safety won’t be simple or even feasible.

Additionally, the problem we are seeing in cybersecurity is a kind of triangle with points labeled “functionality”, “complexity”, and “safety”: for any system, you can only pick two of the three. A system that is highly functional and very complex, for example, won’t also be safe.

If a future AI system gets complex enough, then its understanding of morals could be so alien to us that any attempt to philosophize about it here will be like ants wondering how humans follow pheromone trails.

How could a human know what a cyber-demigod of intellect (sentient or otherwise, according to human standards) thinks about human ethics and morals?

(And other paranoid ramblings of an ADHD AI-futurism enthusiast.)

8 Likes

And?

2 Likes