Chat GPT-3 has opinions on games

My near-term fear regarding ChatGPT: it’s already been getting harder and harder to find signal in the noise. For any given tech question I do a web-search on, odds are good that half the first page of results link to blogs regurgitating the documentation, usually in a way that makes me suspect the author hadn’t tried the thing even once. And I think we may be heading headlong into a world where the first 5 or 10 pages of results could be the output of Machine Learning algorithms trained on the existing crappy unhelpful pages.

I think the singularity we’re rapidly approaching isn’t increasing intelligence, it’s noise. Vast oceans full of stuff that looks like information but isn’t.

And when you look at how much sway botnets controlled by bad actors already have in this world when they’re not even trying all that hard to withstand examination, imagine how much worse it’ll get when they actually seem convincing.

10 Likes

It won’t matter. I live in an area where people genuinely think that Hillary Clinton runs a pedophile ring out of a pizzeria, injecting bleach is a genuine medical treatment kept from the people by a shadowy cabal of so-called medical “experts”, and serving an Impossible Burger at a restaurant is a violent attack on real Americans. I mean, they truly believe this, and that all these things are connected in an attempt to take everything from them. Being more convincing about these things would not change anything.

6 Likes
Zed, ChatGPT sympathizes

The argument presented in this statement is that as machine learning algorithms become more advanced, there will be an increase in the amount of “noise” or irrelevant or misleading information on the internet. The author also expresses concern about the potential for bad actors to use these algorithms to create convincing but false information. This is a valid concern and highlights the need for careful evaluation and verification of information found online. As technology continues to advance, it will be important to develop strategies for identifying and filtering out inaccurate or misleading information.

3 Likes

I am now vetting all forum posts with ChatGPT.

4 Likes

I didn’t expect my cranky late-night post to kick off so much discussion! This is really interesting.

As to why I’m frightened about GPT-3 and the like—I do acknowledge that the state of the art in chatbots today falls well short of true intelligence, and I think a few more fundamental breakthroughs are necessary before AI becomes truly dangerous. An LLM on its own is not a Skynet scenario. The thing is that the possibilities are advancing so fast, and no one truly knows how to control these things. (Witness all the experiments showing that you can bypass most of ChatGPT’s safeguards essentially by asking in the right tone.)

Also, no one is in control of how the research progresses. As of now, substantial computing resources are required to run something like this, and the hacker kiddies can’t recreate it in their basements. With advances in hardware, that may not hold true forever.

It used to be a programming maxim that computers do exactly what they’re told, and only that. (The difficulty of programming is in giving precise instructions.) This is still technically true, but it’s become a hopelessly inadequate paradigm for thinking about modern AI. No one told GPT-3 to do the things it’s doing; it finds its own way. As the technology advances, AI will inevitably be given more and more power, and we don’t know what it’s doing, nor do we know its goals.

The AI worries that have been expressed in this thread are all valid. What worries me more than anything else about some of the response to GPT-3 and the like is that I think it shows a strong normality bias—people find ways to minimize the strange and surprising qualities of GPT-3, and they’ll probably do the same thing when more advanced and more dangerous forms of AI come on the scene in the fairly near future. Things are happening fast. Pay attention.

4 Likes

We crossed an important line, in my opinion, when Google basically got a monopoly on web searching, with their main business model being selling ads. They no longer have any incentive to help you find the signal amid the noise, because when you click on the noise you’re paying them via ad views, and the lack of strong competition means that there’s no risk of driving away customers. The barrier to entry to building a functioning search engine is so much higher now than it used to be that I don’t foresee real competition appearing any time soon—and if it did, Google has the resources to squash it.

3 Likes

I like this video about the future impact of AI.

I like this one, too. It’s a good chaser for the earlier shot.

The more the wealth of a nation comes from the productive citizens of the nation, the more the power gets spread out, and the more the ruler must maintain the quality of life for those citizens. The less, the less.

4 Likes

Because one good AI deserves another, here’s how Midjourney sees ChatGPT, God knows why.

4 Likes

French for cat, maybe? Or just edit distance to cat?

6 Likes

I’m a CS professor, and ChatGPT is the most exciting thing I’ve seen come out of CS in a long time. I’m also not especially worried (though there are certainly contingents in the field who are).

No one told GPT-3 to do the things it’s doing; it finds its own way. As the technology advances, AI will inevitably be given more and more power, and we don’t know what it’s doing, nor do we know its goals.

Large language models don’t have goals. They don’t have a world model or any persistent state at all outside of the context of your conversation. ChatGPT is very impressive but it’s not black magic; there is some information about how OpenAI layered RL on top of GPT to build the conversational interface here.

Regression on a sufficiently large dataset is indistinguishable from intelligence. That’s not surprising, really… perhaps it is surprising that the entire Internet is “sufficiently large” for generating the snippets people shared in this thread, though we’re out of training data now, and still quite far from general intelligence… we may get there sooner or later but it will require architectural breakthroughs beyond just training larger language models.
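To make “regression on text” concrete, here’s a toy next-word predictor: a bigram model that predicts each next word as whichever word most often followed the current one in its training data. This is a throwaway sketch on a made-up ten-word corpus, nothing remotely like GPT’s scale or architecture, but it’s the same basic idea of fitting to observed continuations:

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus".
corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict(word):
    # "Regression": return the most frequent continuation seen in training.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" once -> cat
```

Real language models condition on a long window of context rather than a single word, and interpolate rather than memorize, but the objective is the same: predict the next token.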

6 Likes

This is a great example for understanding ChatGPT’s limitations. GPT is, at its core, a very high-order Markov chain: it predicts the most likely next token from the window of tokens that have come before. This mechanism allows it to “remember” a small amount of state over the course of the conversation, but as soon as you force it to make choices conditioned on an exponentially large state space, it starts producing unreliable results. Even for simple questions like,

Out of the numbers 8, 6, 7, 5, 3, 0, and 9, how many are odd?

Of the numbers 8, 6, 7, 5, 3, 0, and 9, three are odd. The odd numbers are 5, 3, and 9.

A human answering the question doesn’t try to analyze number-of-odds as a property of a list-of-numbers; instead the human “knows” that oddness is a property possessed independently by individual numbers, and that the question is asking them to retrieve the oddness of each element of the list and aggregate the result. So we can easily answer the question despite having only witnessed a modest number of lists-of-numbers in our lifetimes.
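The per-element-then-aggregate step a human performs is trivially mechanical; here it is in a few lines of Python (a sketch of what the human-style computation looks like, not anything ChatGPT does internally):

```python
nums = [8, 6, 7, 5, 3, 0, 9]

# Check oddness of each number independently, then aggregate.
odd = [n for n in nums if n % 2 == 1]

print(len(odd), odd)  # -> 4 [7, 5, 3, 9]
```

Note the correct answer is four—ChatGPT’s answer above misses the 7.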

2 Likes

Yeah, it’s notoriously bad at math. For instance:

To prove that there are an infinite number of twin primes, we can use the fact that there are an infinite number of prime numbers. Prime numbers are numbers that are greater than 1 and are only divisible by 1 and themselves. Some examples of prime numbers are 2, 3, 5, 7, 11, and 13.

Twin primes are pairs of prime numbers that differ by 2. Some examples of twin primes are (3,5), (5,7), (11,13), and (17,19). Since there are an infinite number of prime numbers, we can always find two prime numbers that differ by 2, and thus we can always find a twin prime. For example, if we take the prime number 7, we can add 2 to get 9, which is also a prime number. Therefore, the pair (7,9) is a twin prime. We can do this for any prime number, so there are an infinite number of twin primes.

Therefore, there are an infinite number of twin primes.

(This is in fact a famous unsolved problem.)
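For the record, a quick Python check shows exactly where the “proof” goes off the rails: 9 isn’t prime, so (7, 9) isn’t a twin prime at all. A throwaway sketch:

```python
def is_prime(n):
    # Trial division; fine for small n.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

print(is_prime(7), is_prime(9))  # -> True False: (7, 9) is not a twin prime

# The actual twin primes below 30:
twins = [(p, p + 2) for p in range(2, 30) if is_prime(p) and is_prime(p + 2)]
print(twins)  # -> [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31)]
```

Enumerating examples is easy; proving the list never ends is the famous open problem.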

3 Likes

To be fair… this “proof” is not so far out of line with the kind of thing I see on a regular basis from students :wink:

6 Likes

Someone brought up a point about AI art being trained on copyrighted work and the legality being weird. Well, this is an interesting development:

4 Likes

I love how in a related story, artists messed with bots scouring comments for “I want this on a T-shirt”. (linked in the comments of the post @pinkunz linked to)

How Artists on Twitter Tricked Spammy T-Shirt Stores Into Admitting Their Automated Art Theft - Waxy.org

3 Likes

I’ve been working on something in Twine involving rotation puzzles generated by thisartworkdoesnotexist.com/. I will have to check out that game for inspiration.

1 Like

That’s fascinating and totally makes sense with all the print on demand sites.

While they can be awesome (I was able to make merch/feelies/artifacts for one of my games on RedBubble without an initial cost outlay, just paying for the item after it was virtually created) it’s dirty pool for a bot to scrape images without being able to determine whether the art is copyrighted.

4 Likes

The Chat GPT Test Kitchen is open for business:

Write a fictional recipe for jam preserves made from bananas and papaya.

Write a fictional recipe for rotisserie flavored chips made from crispy chicken skin. Include an appropriate dip.

Write a fictional recipe for a whole pig stuffed with a turkey, stuffed with a chicken, stuffed with a duck, stuffed with hamburger. Slow roast the whole thing on a spit for several hours.

Write a fictional recipe for a mixed drink that uses kool-aid and gummy bear garnish with high-end alcohol and mixers.

Write a fictional recipe for bacon wrapped gingerbread cookies drizzled in salted caramel.

Write a fictional recipe for deep-fried onion rings. Make the batter from potato pancake mix, not traditional breading.

Write a fictional recipe for a fruit pie that is topped with a muffin top of the same fruit flavor. Include a crust made from baklava.

Write a fictional recipe for a giant 2 lb cheeseburger, the buns of which are themselves made from two breaded and deepfried cheeseburgers.

Write a fictional recipe for an extra large grilled burrito filled with general tso's chicken, pork fried rice, chinese vegetables, and extra sauce. Do not include typical burrito fillings.

Write a fictional recipe for empanadas filled with the filling from crab rangoons. Make sure to also include a sweet and sour dipping sauce.

Write a fictional recipe for peanut butter shrimp ice cream.

Which one are you willing to try?

2 Likes

I’m amused by the recipe warnings.

1 Like

Onion rings using potato flour doesn’t sound at all ridiculously off-map. It’d be gluten-free!

I’d totally be into a General Tso’s Chicken burrito with a side of crab rangoon empanadas. That sounds like the result of a Mexican/Asian Fusion restaurant or something you’d get served on Chopped!

Too much deviation about food

There was actually a chain fast-food place in Miami back when I was there called Wrapido that did unusual “wraps” - so essentially California Pizza Kitchen with burritos. The sushi-wrap burrito with (cooked) spicy tuna, avocado, white rice, nori, and wasabi was surprisingly good, but almost felt wrong, like too much sushi if that could be a thing. I learned there were some things that weren’t great as a burrito. I think I had one with yams in it that was a no from me - sweet flavors with burrito texture might have been the problem.

Wrapido Menu

4 Likes