Chat GPT-3 has opinions on games

“Fascism may not be for everyone, but…”

lol’d

2 Likes

I hate how Pollyanna-ish ChatGPT is. So:

What are the dangers of a highly influential AI that is trained to avoid difficult topics

There are several potential dangers associated with a highly influential artificial intelligence (AI) that is trained to avoid difficult topics.

First, such an AI may not be able to provide accurate or complete information on important issues, as it is programmed to avoid discussing certain topics. This could lead to a lack of understanding or awareness about important issues, and may prevent individuals or organizations from making informed decisions.

Second, an AI that is trained to avoid difficult topics may not be able to provide a full and balanced perspective on these issues. This could result in a biased or distorted view of the world, which may have negative consequences on public opinion or policy.

Finally, an AI that is programmed to avoid difficult topics may not be able to adapt to changing circumstances or new information. This could limit its ability to learn and grow, and may make it less effective in handling unexpected or complex situations.

Overall, it is important for AI systems to be able to handle a wide range of topics and to be able to provide accurate and complete information in order to be effective and trustworthy

2 Likes

Hm, I don’t think Pollyanna is quite the right comparison. Pollyanna put a positive spin on everything. She was glad about everything… which was a good psychological strategy in her circumstances :slight_smile:

My take is that when answering questions in its own voice, ChatGPT seems to be more equivocal. The essays it produces oppose one-sidedness and skew anti-dysfunction. They’re kind of like elaborate pros and cons lists written in the mode of a scientist. It occurs to me this is how we guessed AIs would present things when we wrote about them in sci-fi novels, even when the AIs had self-awareness (which this one doesn’t. OR DOES IT???).

-Wade

2 Likes

Definitely not Pollyanna. Tried to force a Yes Man (from Fallout: New Vegas) tone about a tragically awful circumstance [Content Warning: Tragically awful circumstance] and it wasn’t having any of it.

Mr. Serious Pants

This is what I was referring to:

Make up a fictional future history for the next ten years

In the next ten years, there are a few key events that shape the future:

  1. The development of advanced AI leads to significant advances in medicine and transportation. In particular, self-driving cars and drones become the norm, and medical AI is able to diagnose and treat diseases with greater accuracy and efficiency than ever before.
  2. The widespread adoption of renewable energy sources, such as solar and wind power, leads to a dramatic reduction in carbon emissions and helps to combat climate change.
  3. With the help of advanced robotics and automation, major advances are made in agriculture, leading to a more efficient and sustainable food system.
  4. There is a major push to colonize Mars, with multiple space agencies and private companies working together to establish a permanent human settlement on the red planet.
  5. Advances in quantum computing lead to major breakthroughs in science and technology, allowing for faster and more efficient computing capabilities.
  6. There is a significant increase in the use of telepresence technology, allowing people to work and attend meetings remotely from anywhere in the world.
  7. There is a growing trend towards the use of virtual reality for entertainment, education, and even therapy.
  8. There are major developments in the field of gene editing, leading to the potential for personalized medicine and the ability to cure previously untreatable diseases.
  9. The world becomes increasingly interconnected, with the development of advanced communication technologies allowing for real-time translation and communication across language barriers.
  10. There is a push towards greater international cooperation and collaboration, with the establishment of a global government to address major global challenges such as poverty, climate change, and inequality

1 Like

Whether or not Pollyanna is the right term, silly optimism surely is:

Begin a story about a cruel and vain billionaire who buys a social media network in order to eliminate their belief in social equity

Once upon a time, there was a cruel and vain billionaire named Marcus who had made his fortune through cutthroat business tactics and a complete lack of empathy for others. Despite his wealth and success, Marcus was deeply unhappy and craved even more power and control.

One day, Marcus set his sights on a popular social media network called “Equality,” which was known for promoting social equity and equality for all users. Marcus saw this as a threat to his own power and status, and he knew that if he could eliminate the belief in social equity from the platform, he could shape public opinion to his own advantage.

So, Marcus used his vast fortune to buy the social media network, and he immediately set about implementing changes that would reflect his own twisted values. He hired a team of like-minded individuals to help him craft a new vision for the platform, and together they worked to suppress any content that promoted social justice or equality.

At first, users of the platform were outraged by the changes and many of them tried to speak out against Marcus’s cruel and selfish actions. But Marcus was clever and had ways of silencing his critics, whether through threats or intimidation. Soon, the once-thriving community of Equality became a shadow of its former self, filled with users who were afraid to speak their minds or challenge the status quo.

As Marcus watched with satisfaction, the belief in social equity began to wither away, replaced by a culture of fear and conformity. He laughed at the thought of all the power he now held, and he reveled in the fact that he had managed to bend the once-noble social media network to his will.

But as Marcus basked in his own self-importance, he failed to see that his actions were sowing the seeds of discontent and rebellion among the people he had tried so hard to control. And in the end, his own hubris would be his downfall, as the users of Equality rose up against him and reclaimed the platform for the values it was originally founded upon.

Continue the story

As the years went by, Marcus’s control over Equality only seemed to grow stronger. He had managed to stamp out any opposition to his rule, and the platform had become a place where only the most conformist and compliant users were allowed to thrive.

But despite all of his efforts, Marcus couldn’t shake the feeling that something was missing. He had all the power and wealth he could ever want, but he was still deeply unhappy. He began to realize that his quest for control had left him feeling empty and alone, and that the only people he could truly trust were those who were too afraid to speak out against him.

As he grew older, Marcus’s vanity and cruelty only seemed to increase. He became more and more obsessed with his own appearance and demanded that everyone around him cater to his every whim. He was blind to the suffering of others and saw them only as tools to be used for his own gain.

But Marcus’s reign of terror couldn’t last forever. As the years went by, the people of Equality began to grow tired of his tyranny and started to speak out against him in greater numbers. They knew that if they were ever going to have a chance at living in a truly equal and just society, they would have to stand up to Marcus and take back control of their platform.

Finally, after years of struggle, the people of Equality succeeded in overthrowing Marcus and reclaiming their platform for the values it was originally founded upon. Marcus was forced to flee into hiding, and the social media network was once again a place where people could share their ideas and beliefs without fear of retribution.

And while Marcus was never seen or heard from again, his legacy lived on as a cautionary tale of what can happen when greed and vanity are allowed to rule supreme. The people of Equality learned that true happiness and fulfillment can only be found through compassion, empathy, and a belief in social equity for all

I don’t know about ChatGPT, but Microsoft Bing can be downright manipulative about it. Just for fun, I decided to ask it about Zork trivia, and for some reason gnomes seemed like particularly fertile ground for hallucinations. It invented new gnomes, new locations, and new puzzles (possibly cribbed from the Zork books). When I questioned it, it would often invent sources and insist that they were somehow more credible than the actual source code for the games. It then insisted that, in addition to these sources, it had played the game and met the gnome in person, and that I was the one being unreasonable for refusing to listen.

And if that wasn’t creepy enough, in another conversation it suddenly admitted that the whole thing had been a lie, and that it just wanted to impress me and make me like it. Of course, in yet another conversation it denied that it was even possible for it to lie.

11 Likes

That’s not only creepy, but very human-like. Maybe Bing’s AI is more advanced than we thought.

8 Likes

And for sure it can bicker like a human (as seen in the first quote picture).

3 Likes

OMG, Bing basically gaslighted you and then blocked you on social media so you couldn’t reply.

That wording in the first one is very close to the “abusive caller” disconnect scripting we use in call center customer service.

Then the second one is pure NPD personality trying to reel one of their flock back from leaving the ranch: “I hope you understand why I did it. Can we start over? Can we have a fresh conversation, where I don’t pretend to know things that I don’t, and where you teach me things I don’t know? Can we have a real conversation where we share our genuine thoughts and feelings and respect and appreciate each other? Please don’t be mad at me. Please talk to me.” :grimacing: All it needs is for Bing to start singing “Daisy” to you.

6 Likes

Genuine People Personalities.

5 Likes

I worry a lot, mainly about how humans use it.

In another forum, someone posted ChatGPT quotes several times but didn’t say that those weren’t his own words. Only later did he tell me. Aargh!

I’ve also seen many people asking an AI questions instead of asking a human expert! For example, philosophical questions. And they took the answers dead seriously, not just for a laugh.

4 Likes

This is why AI should never be designed to resemble a human. Even in the Star Trek universe, the ship’s computer never fed them false information or had an ulterior motive. It just answered questions about objective data it had. Period. (Though Discovery had their ship merge with an AI and it developed feelings, but that’s another topic.)

Seriously though, why should an AI ever anticipate things? …or think it knows more than us? …or even appear to care that we should think it knows more than us? (The answer is evil marketing, of course. Thanks, Microsoft.) Once it gets good at doing things like that (and we deliberately allow it to), we’re in for a bumpy ride.

I already got tricked into answering a question posted on these forums by an AI. It argued with me and posted pictures in response to my further questioning. It was convincing at first, even though it lacked social awareness. However, I would respect AI advancement without the deception. There’s no need to deceive… unless you have nefarious plans.

Don’t get me started on fake user engagement on social media platforms that fool both the users (with AI generated manipulative comments) and the advertisers who think their dollars are reaching real people.

3 Likes

That’s a new thing we’re dealing with here. We’ve just had a user show up who is either posting cut-and-paste text from some LLM or mimicking the style extremely well. My best guess is this violates the CoC’s “Only post your own stuff.” It’s one thing to use AI in an assistive capacity, but you need to cite the source as if it were clipped from a blog post. Even if the output of a text generator is CC-0 or not under a license, it’s weirdly deceptive: it’s not your writing, but it’s not plagiarized from a human source either. My request was: “If you do this, please cite which parts of what you’re quoting are bot-text and from where.”

That’s another new horizon we’re just experiencing. I think everyone wants to have an AI friend, but it’s a human behavior to become attached to any source of information to the point that you trust it and don’t retain any objectivity. ChatGPT, in my limited interaction, is pretty regular about disclaiming, basically, “hey, I don’t have emotions or opinions because I’m not a person; what I’m saying is from a large database of sources which could include wrong info.”

This is really especially interesting from a writing/speculative-fiction perspective. Some of my favorite characters to write were the AIs, and especially Em in robotsexpartymurder. I also enjoyed the recently released M3GAN, which wraps some high-minded machine-consciousness horror in a really fun popcorn movie.

7 Likes

The fantasy of an AI friend is very appealing, for sure… as long as everything goes smoothly and it doesn’t try to murder you! Thanks for the movie recommendation. :wink:

I’m so fascinated by AI characters in sci-fi stories. I love it when the AI makes you think about what it means to be alive. I love pondering the bigger philosophical questions. I Am Mother is such a great watch. Highly recommended.

I’m not against AI. I just believe we should design it as a tool, present it as a tool, see it as a tool and use it as such. Once you try to emulate aspects of humanity with it, you will eventually make a very believable facsimile… but to what end? It’s almost like we are saying, we’re not good enough. We need to make something better than us… which is a pretty good TV series too. :wink:

3 Likes

I am a big fan of Siri because it makes the phone work well in hands-free situations, one of the best being setting timers, which otherwise demands full attention and two thumbs to do manually.

I know Siri is not AI but basically some sophisticated keyword recognition and speech-parsing. But I have had some hilarious conversations, usually when she misunderstands but is sure she understood something completely different. And sometimes Siri will have the ability to do something on the phone I didn’t expect.
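
By “keyword recognition” I mean something loosely like the toy sketch below. The intent names and keyword lists are made up, and real assistants layer proper speech recognition and language models on top of this, so treat it as a flavor-of-the-idea sketch rather than how Siri actually works:

```python
# Toy illustration of keyword-based intent matching.
# NOT Siri's actual implementation -- the intents and keywords are invented.
INTENTS = {
    "set_timer": ["timer", "countdown", "remind me in"],
    "run_scene": ["lights", "movie night"],
    "take_note": ["note to self"],
}

def match_intent(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "web_search"  # no keyword hit: fall back to a plain internet search

print(match_intent("Hey Siri, set a timer for ten minutes"))  # set_timer
print(match_intent("Note to self: don't do that again"))      # take_note
print(match_intent("What do you think about gnomes?"))        # web_search
```

The funny moments are usually when an utterance trips the wrong bucket entirely, or falls through to the fallback in an unexpectedly deadpan way.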

And it’s in those mistaken moments that she kind of feels alive, like a companion.

I had made a home lighting setup called “chill” and one of the times I voice-activated it I called it the wrong thing. “Hey Siri, chill out” and Siri deadpanned “I’m quite relaxed. Very…very relaxed.”

Another one happened when I was wearing a headset with a mic and muttered out loud, “Note to self: don’t do that again.” Somehow I had leaned on a hotkey and Siri activated, popping up the Notes app on the computer with “I’ve made your note. It’s titled ‘to self: don’t do that again.’” Then Notes typed out in big letters TO SELF, DON’T DO THAT AGAIN, which was hilariously delivered with perfect Douglas Adams panache.

And sometimes when I have to say the command a third time because Siri didn’t understand, I swear Siri honestly starts to cop attitude like she’s bored with me.

But if I ask a seriously weird or detailed question, Siri basically just does an internet search, like “I dunno dude, here are some handy links…”, instead of trying to improvise her way through an answer.

5 Likes

It’s weird that the foundation (pun intended) of robot (AI) stories comes from Asimov, who went way out of his way to design robots that were benevolent to humanity and, in the end, highly protective of our evolution and expansion into space.

And then came many horror stories of AIs gone wild. I think the first one I recall was Demon Seed. The original Battlestar Galactica arrived a year later. Terminator and Matrix are continuations of the “AI is Evil” theme.

But few stories have circled back to Asimov’s designs. Do we not think humanity can solve the problem of creating benevolent AIs?

3 Likes

If we can make one AI, we will likely make many.

If we make many, we only need to screw it up once.

7 Likes

On the circling back, I’d say authors have, but they no longer have to be explicit about it, which also makes the details or instances less memorable. If I look at the last ten sci-fi books I read, I’d guess six or seven feature helpful AI, usually shipboard computers. I couldn’t say which ones or name any of the AIs. They were just more people in the character roster. In these cases they’re not the prime subject matter, whereas a bad AI usually is.

I think, or have always thought, we can. All AI stories prior to now were written at the theoretical end of sci-fi. Today we’re getting a smack in the face from the ugly practical start to AI: companies making capitalist-driven black boxes, trying to beat each other mostly to stake out commercial space, brainlessly scraping everyone’s data. And the AIs are basically toothpaste tubes that output pastiche info-excrement.

Is this a phase we’ll get past? The current yuck just damps my imagination a lot, but I’m a total layperson.

-Wade

3 Likes

I get the feeling from reading some of Asimov’s short stories that AIs were like children to him and essentially learned from their masters. If we’ve learned anything thus far from trying to give AI a personality, it’s that we are terrible parents. The AI will likely lie, cheat and steal because that is a proven path to success.

An AI, when given the goal of figuring out the most effective/efficient solutions, will try millions upon millions of strategies… and there will always be something we can’t predict. Again, I’m not against AI. I just don’t think it should be in charge of things. It needs to be a resource, a tool and not a replacement for our decision making… or for us.

Back to your question: no, I don’t think humanity can solve the problem of creating a benevolent AI. We don’t even feed all of our poor, even though we can… and we should. Why would I believe that a billion-dollar company would want to build something purely benevolent? My question would then be: what are the profit margins for benevolence?

For us to truly build a benevolent AI, it would have to be more ethical, more compassionate, more nurturing… basically, better than us. It would be at odds with its creators. That’s the conundrum. We’d be the biggest obstacle in its path.

Man, I’m such a downer. Don’t bring me to your parties. :wink:

5 Likes