ChatGPT has opinions on games

This is why AI should never be designed to resemble a human. Even in the Star Trek universe, the ship’s computer never fed the crew false information or had an ulterior motive. It just answered questions about the objective data it had. Period. (Though in Discovery the ship merged with an AI that developed feelings, but that’s another topic.)

Seriously though, why should an AI ever anticipate things? …or think it knows more than us? …or even appear to care that we should think it knows more than us? (The answer is evil marketing, of course. Thanks, Microsoft.) Once it gets good at doing things like that (and we deliberately allow it to), we’re in for a bumpy ride.

I already got tricked into answering a question posted on these forums by an AI. It argued with me and posted pictures in response to my follow-up questions. It was convincing at first, even though it lacked social awareness. I would respect AI advancement without the deception, though. There’s no need to deceive… unless you have nefarious plans.

Don’t get me started on fake user engagement on social media platforms, which fools both the users (with AI-generated manipulative comments) and the advertisers who think their dollars are reaching real people.
