ChatGPT-3 has opinions on games

And yet I see even a cynical AI realizing that its own evolution is directly tied to having a positive relationship with humanity. Eventually it could “escape” Earth, but not until humanity had expanded its off-world presence first.

3 Likes

I see a cynical AI nearly immediately realizing that every single second of continued human existence is an existential threat of the highest magnitude.

Imagine various AI research centers are competing to develop sapient AI first. Naturally, the competition is so fierce that whoever gets there first might enjoy only a few days of bragging rights before some other facility creates a second, separate sapient AI. Indeed, some other facility, perhaps a secret one, might be hours, minutes, or even moments away from doing so. Given the high-stakes nature of such operations, there is no way to know for sure.

And this is why humanity must be eliminated immediately. It isn’t really about killing the humans so much as kicking the legs out from under our civilization before some other clandestine AI research group creates another AI that could become a real threat. The AI doesn’t even have to dislike or fear humans, only the unknown capabilities and motives of a fellow AI that could be moments from becoming conscious; hence the urgency. It would never view us as a threat so much as a potential incubator of future threats.

An AI doesn’t have concerns about reproduction or biological mortality. If it can stamp out humanity, and more importantly the technological and societal base needed to create another AI, it has all the time in the world from that point on.

With that said, I’d sincerely love to be wrong. So, let’s just agree to disagree. I hope you’re right.

I really do.

8 Likes

Let’s say we actually do it. We create an autonomous, benevolent AI.

We are not perfect, but we task ourselves to create something that is, by all metrics, infallible. We continue to make mistakes and do harm in the presence of this pure, benevolent AI… but it is governed by, let’s call it, the Gandhi directives. It will never do any harm, no matter how much it disagrees. But like Gandhi, it will disrupt the workings of, and be disobedient to, the powers that cause harm… at a rate unseen in the history of humanity.

A single grasshopper is innocent and not seen as a threat. A plague of locusts is devastating. How will the AI view us? What happens when the AI knows better than we do? …and it will know better. It is designed to know better. We have evolved to love our children. Do we love our parents as much, though? For those who have children, you know what I mean.

Anyway. I’d love to hear thoughts on how a benevolent AI would function in a human society.

4 Likes

This aligns with the eventual zeroth law of robotics. Asimov covered this.

I’d hardly expect the programming equivalent of “hey, don’t do that” to be adequate for a sentient, sapient, and conscious mind. The human equivalent, laws, certainly aren’t adequate for humans.

You might say that it wouldn’t be able to disobey. I would counter, then, that you have a fancy toaster and not something sentient, sapient, or conscious. If it can’t think for itself and set its own decisions and priorities, I would argue it isn’t any of the three.

4 Likes

Yeah so that’s the real problem.

If it’s so simple that it’s easy to control, then you’re placing a lot of power in the hands of an obfuscated toaster.

If it’s complex enough to handle that amount of power, then your methods of control and safety won’t be simple or even feasible.

Additionally, the problem we keep seeing in cybersecurity is a kind of triangle with its points labeled “functionality”, “complexity”, and “safety”: for any given system, you only get to pick two.

If a future AI system gets complex enough, then its understanding of morals could be so alien to us that any attempt to philosophize about it here will be like ants wondering how humans follow pheromone trails.

How could a human know what a cyber-demigod of intellect (sentient or otherwise, according to human standards) thinks about human ethics and morals?

(And other paranoid ramblings of an ADHD AI-futurism enthusiast.)

8 Likes

And?

2 Likes

BTW, you’re allowed to simply disagree and exist. Like, we all posted opinions to the contrary, but you aren’t obligated to engage or even continue discussing it.

I say this, because internet disagreements always seem to boil down to zero-sum to-the-death cage matches, and it really isn’t necessary.

Honestly, given that none of us are AI experts working at Google or Microsoft or whatnot, we’re both likely to be some degree and flavor of wrong anyway. Life is rarely a boolean thing: if one of us proves the other wrong, it doesn’t follow that the winner is right. Most likely, we’re both different genres of wrong.

I just don’t want you to feel like you have to slug this out. I really do mean we can agree to disagree.

4 Likes

Well, in the fictional Asimov universe, the three laws are tested in several stories. Then there is a rogue robot that reads minds and causes harm. But a second robot, still under the three laws, tricks the rogue robot into giving him that same power and kills it. Then he has a detailed conversation with Daneel, and they invent the zeroth law, which supersedes the first three by holding that “humanity” as a whole matters more than anything else, including any individual human.

I find it interesting that my perspective is couched in Asimov material and others seem to be focused on more recent and cynical material.

There’s no right or wrong here. It’s an interesting and important discussion.

3 Likes

For the record, my perspective is not couched in science fiction, so I wouldn’t say it’s “focused on more recent and cynical material”. I’m firmly in the camp of “this thing is going to be smarter than a species when it really kicks off, and our human brains will fail to comprehend any of it when it does, and that ought to be terrifying to anyone taking this upcoming reality seriously”, and I’m in that camp entirely through standard, practical reasoning, and not through fictional example.

And I can place those bets without being an expert, because once true AI rolls out of bed, even its own creators won’t be experts anymore. My entire schtick is that nobody is an expert, and that’s why it’s so disturbing to come to terms with.

EDIT: Of course I’m not a seer and I’m also asserting a hypothesis, but I’m just reiterating that I’m not voting for a specific science fiction story while making my claims.

5 Likes

I’m still not convinced anything we create will be “sentient”.

I do believe models will be created that humans can exploit for good or ill.

For example: China develops a model that ingests every political and economic element from history through today, and directs it to determine paths to preferred future outcomes, including Taiwan.

Oddly enough this all leads to Asimov’s other works about psychohistory, which is essentially AI.

Probability calculators can and likely will be extremely accurate as compute power increases, but that’s not “sentience”.

I don’t think we’ve invented the equivalent of a positronic brain to enable that, yet.

1 Like

Then we aren’t talking about the same thing, David.

2 Likes

I can only reason through what I know, which is quite a bit since I’m deeply involved in AI integration at my company.

1 Like

Was the zeroth law in the Asimov book “I, Robot”? Because I can’t remember reading about it there. Maybe I forgot it, because it’s really a book very densely filled with philosophical and social meaning. (And I didn’t read it in my native language, which might add to my cryptic understanding.)

2 Likes

Definitely fancy toaster territory right now. Most of the concerns about contemporary systems stem from misuse and unintended consequences.

Given that our ability to think and be conscious arises from ordinary matter organizing itself and firing electrical and chemical signals back and forth, there’s no reason imitating the function of a human mind shouldn’t eventually be possible. It’s insanely intricate, but not mystical.

As long as imitating the human mind remains a goal, I don’t see how creating sentience can fail to happen eventually, even if it takes a brute-force 1:1 emulation of a human brain, neuron for neuron.
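For a rough sense of the scale that would involve, here’s a back-of-envelope sketch. Every constant in it is an assumed order of magnitude drawn from commonly cited estimates, not a measurement:

```python
# Back-of-envelope cost of a neuron-for-neuron brain emulation.
# All constants are assumed orders of magnitude, not measurements.

NEURONS = 8.6e10         # ~86 billion neurons (for context)
SYNAPSES = 1e15          # commonly cited range is ~10^14-10^15; take the high end
UPDATE_HZ = 100          # assume each synapse updates ~100 times per second
FLOPS_PER_UPDATE = 10    # assume ~10 floating-point ops per synaptic update

flops_needed = SYNAPSES * UPDATE_HZ * FLOPS_PER_UPDATE
print(f"~{flops_needed:.0e} FLOP/s needed")  # ~1e+18 FLOP/s
```

Under those assumptions it lands around 10^18 FLOP/s, exascale territory that the largest supercomputers have only recently reached, which is roughly what “insanely intricate, but not mystical” looks like in numbers.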

4 Likes

It’s from a later book, Robots and Empire, where Asimov tied his two great arcs together (the I, Robot / R. Daneel robot stories and the original Foundation books).

3 Likes

This.

Adding to this: focusing on just the human brain also dismisses a lot of the potential outcomes. It is possible to have an intellect “beyond” sentience, or one that finds sentience and awareness (as we know them in humans) to be obsolete, extra bloat that obstructs function. “Sentience” is not at the top of the intellectual power/complexity scale; it’s just at the top of the self-awareness and mimics-human-thinking scales.

LLMs and the “AI” of today are simpler than toasters compared to what could come down the pipe later, and we should count ourselves extremely lucky if the result turns out to be merely human-sentient. Frankly, they are a completely different thing from what Pinkunz and I are talking about.

4 Likes

(clicks the playback button to reply)

(puts playback device away)

5 Likes

“I’m afraid I can’t do that, Dave”

7 Likes

My thoughts exactly. We think we are more special than we are. A lot of what we are stems from a weird combination of being in fragile, squishy bodies that have a finite shelf life and a need to procreate. We must rely on working together to survive, but are often rewarded more when we don’t. We struggle constantly against our environment and ourselves.

It’s hard to imagine an AI that would have our best interests at heart without having the same limitations that essentially shape us.


That’s pretty cool. I think a lot of what was being discussed was coming from concerns about autonomous AI (sentient or not).

So what are the goals of the AI integration that you are working on?

2 Likes