ChatGPT has opinions on games

I think the reason more people here aren’t freaking out is because interest in IF and interest in AI have been closely tied together for decades, so many people have seen big promises in AI come and go.

I think ChatGPT can change the way we do things, but not in a frightening way, just as a new tool. Like automated phone menus, which drastically changed a lot of things, both good (all-day access to info) and bad (annoying ‘why can’t I talk to a human’ moments).

ChatGPT is good at telling stories but extremely weak at self-reflection and critical evaluation. If you asked ChatGPT for the steps of defusing a bomb and followed them blindly, or asked it how to end wars, it would lead to horrific results.

Overall, I think this is just one more incremental step in a long line of incremental steps that doesn’t advance us significantly more than similar steps in the past.

6 Likes

Tbf, there are already a lot of use cases for our relatively dumb AI that just haven’t been tried yet.

Just to give a single example, look at tanks.

There have been arguments made that, given the recent demonstrated effectiveness of Javelins, the modern battle tank is becoming obsolete.

However, that isn’t quite true. The modern tank with a live crew has a large internal void. The Javelin and munitions like it specialize in breaking into that void. Break into that void and your crew is toast, as well as your tank. A crewed tank has a lot of mass, volume, and design constraints related to keeping squishy humans alive inside.

A robotic tank with redundant, compartmentalized command modules would fix this issue. Turn that void into ~300 separate redundant command-and-control interfaces, all individually armored and isolated from each other while still being collectively encased in the thicker overall armor of the tank itself, and the tank won’t stop until you knock out all of its critical systems. The thing could look like Swiss cheese, power plant knocked out, most of its guts chewed up, and it could still be sitting there spewing out death in return. It’s the crew cabin that’s the liability. Remove the crew and an individual tank could take quite the beating before it died completely.

It also solves the issue of manpower and training-pipeline constraints. As for remote vs. AI, I don’t see it as an either-or. The tanks would communicate with end-to-end encryption; if at any point the communication fails, the tank falls back on its AI and its last given objectives. Basically, operating remotely when able, by AI when communication proves impossible.
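Just to make the fallback idea concrete, here’s a toy sketch in Python (every class name, timeout, and command string here is invented for illustration, not any real control system): follow the operator while the encrypted link is alive, and drop back to the onboard AI and the last given objectives once packets stop arriving.

```python
import time

COMMS_TIMEOUT = 2.0  # seconds without a valid packet before falling back (arbitrary)

class TankController:
    """Toy sketch: remote control with an autonomous fallback mode."""

    def __init__(self):
        self.last_packet_time = time.monotonic()
        self.objectives = []  # last objectives received over the link

    def on_remote_packet(self, command, objectives):
        # A verified, decrypted packet arrived: record it and obey the operator.
        self.last_packet_time = time.monotonic()
        self.objectives = objectives
        return command

    def step(self):
        # No fresh packets within the timeout? Fall back to the onboard AI.
        if time.monotonic() - self.last_packet_time > COMMS_TIMEOUT:
            return self.autonomous_step()
        return "await_operator"

    def autonomous_step(self):
        # Pursue the last given objective, or hold position if there is none.
        return f"pursue:{self.objectives[0]}" if self.objectives else "hold_position"
```

The only real design decision here is that loss of comms is detected by timeout rather than by an explicit disconnect message, which is what you’d want under jamming.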

Given the current state of electronic warfare, AI, and IFF, the best current use case would be as shock armor. A mass of several hundred of these crewless tanks, sent toward an enemy stronghold ahead of the bulk of friendly forces, allows for far more high-risk, high-reward tactics on the tanks’ part. Crewed tanks are full of people who may not be keen on risking their lives in such a way. It also removes the concern about friendly fire. The tanks could even be loaded with instructions to fall back and cease firing at a prearranged time, allowing friendly forces to surge forward without fearing faulty IFF from the AI tanks. Literally geofencing the tanks, ranking their objectives, and setting the AI to murder anything that moves inside its operational theater are all actionable right now. Videogame programmers have been fine-tuning this sort of thing for years. Other players are more dangerous in FPS-land, for sure, but I’d be lying if I didn’t admit to being killed by AI enemies fairly routinely. Western countries may be concerned with the appearance of indifference to collateral damage, but I’m sure other regimes won’t be squeamish.
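And the geofence-plus-ceasefire part really is the simple bit. A toy sketch (box coordinates and the mission clock are made up; a real system would be vastly more involved, this just shows the rule structure):

```python
from dataclasses import dataclass

@dataclass
class GeofencedEngagement:
    """Toy sketch: engage only inside a boxed theater, and only before a preset time."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    ceasefire_at: float  # mission clock, seconds

    def in_theater(self, x, y):
        # The geofence itself: a simple axis-aligned bounding box.
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    def may_engage(self, x, y, now):
        # Past the prearranged time, cease firing entirely so friendly
        # forces can surge forward without relying on the AI's IFF.
        if now >= self.ceasefire_at:
            return False
        return self.in_theater(x, y)
```

Objective ranking would just be a sorted list checked before each engagement; the point is that all of these are plain, decades-old programming constructs.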

If Russia had the time, expertise, and money (they don’t right now), they could trial this today. Wire up a simple AI tank with four or five redundant “brains” and then fill the crew compartment with cement. Basically, our AI, combined with proper battlefield tactics, is already good enough to mitigate the risks of using rudimentary AI tanks. As the AI improved, tactics could be updated to reflect this.

That’s just one small example where AI is already capable of changing things forever.

2 Likes

This is another excellent point!!

This is just the infancy stage of the network-intelligence thing, but we can still create and use neural networks like Chat-GPT, even after the arrival of late-stage networks.

Like, we might wind up growing a whole metaphorical body, but that doesn’t mean we will stop growing metaphorical hearts by themselves.

Like, Chat-GPT is not our new AI-deity. It’s extremely useful for a lot of things, as we have seen. I think the point that @FriendOfFred was making was that the foundation of this technology has the terrifying potential to be molded into much more. Like, Chat-GPT is proof that we have the tools to birth a network intelligence, but for now we’re still using those tools to make smaller things that we can use as tools themselves.

We can just keep toying around with little things like this, but the concern that was brought up was more that researchers might suddenly find the line where it has really powerful, misunderstood, and unexpected consequences lol.

(I am having an absolute blast right now. This place is great. I really hope I don’t sound like a madperson raving in the streets with all this. I just love this topic a lot.)

3 Likes

This is a huge point, too!

Even if we stick with small-time network tools, we are still learning all the ways they can be effective.

We don’t need to essentially birth a new god into the Internet to cross the point of no return. We can do that all on our own, by using neural networks to multiply our abilities to assert our will.

Obviously, Chat-GPT won’t be put in a tank, but it’s like humankind has a brand new set of tools, and we’re just iterating and discovering all the wonderful and terrifying ways it could change our world.

There was this one method that was suggested, which could reduce the complexity of some of these networks into something comprehensible, mostly by removing redundant/excess nodes, or by reducing things down to simpler networks that are guaranteed to have identical outputs across the full spectrum of inputs. I’ll link it in an edit, if I can find it.

EDIT: Omg I can’t find it anywhere. It came out last year; there was a paper on it. The technique had a specific name, like “(name) transforms”, or something similar. I swear I’m gonna lose sleep over not finding it.

EDIT 2: I think I found it? It’s called a “kernel machine”. It’s an older method of finding relationships between patterns and stuff, like what neural networks do. The thing, though, is some researchers in 2020 or 2021 found that a lot of deep neural networks were actually recreating the results of kernel machines, but doing it the unnecessarily-long-and-confusing way. So, researchers (if I’m understanding this correctly) are finding ways of distilling really complex neural networks into kernel machines, which would be a less complex system to poke around in and learn from.
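I’m not sure this matches the exact technique from that paper, but just to show what a kernel machine even is: the prediction is a weighted sum of kernel similarities to the training examples, and for a handful of points you can solve for the weights directly. A minimal pure-Python sketch with just two training points (all values invented):

```python
import math

def rbf(a, b, gamma=1.0):
    # Gaussian (RBF) kernel: similarity between two 1-D points, in (0, 1].
    return math.exp(-gamma * (a - b) ** 2)

def fit_two_point_machine(x1, y1, x2, y2, gamma=1.0):
    # Solve the 2x2 kernel system K @ alpha = y by hand.
    # K = [[1, k], [k, 1]] since k(x, x) = 1 for the RBF kernel.
    k = rbf(x1, x2, gamma)
    det = 1 - k * k
    a1 = (y1 - k * y2) / det
    a2 = (y2 - k * y1) / det
    return a1, a2

def predict(x, x1, x2, alpha, gamma=1.0):
    # The kernel-machine prediction: a weighted sum of similarities
    # to the training points -- no hidden layers anywhere.
    a1, a2 = alpha
    return a1 * rbf(x, x1, gamma) + a2 * rbf(x, x2, gamma)
```

The contrast with a deep network is that every term here is inspectable: each training point contributes one weight, so you can see exactly where a prediction comes from.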

1 Like

My point is that it is kind of ironic that so many people seem to be worried that intelligent machines will take over the world, when in a sense they already did, long ago.

For example, more than 75 percent of the stock shares traded on United States exchanges originate from automated trading systems. Arguably, they have a greater influence on our lives than all human politicians combined.

4 Likes

I’m pretty sure YouTube and other Google services run almost entirely on neural networks that aren’t even fully understood by the engineers.

When advertisers get all cranky at Google, the engineers can only shrug, like “Look, man, it’s not like there’s a button or a knob. It’s literally incomprehensible, but it works, somehow. We can’t make it backflip for you, though.”

This might be another example of the point you’re making, if I’m understanding you correctly.

3 Likes

I should have gone to bed 4 hours ago, but I think everyone collectively found the “how to get this nerd stark-raving mad on a public forum” button on me.

And I am not sorry, and the loss of sleep was 100% worth it. No hesitation!

People don’t usually like to watch me go off about this sort of thing, much less respond back, lol. I had a lot of fun.

3 Likes

I felt that the “Rehoboam” AI from Westworld had its moments of omnipotence, weaving and severing nets of influence and causation among all the societal and economic data it had access to. (I only saw the HBO series, not the film.)
But, as you point out, an AI of truly transcendent power would be incomprehensible and therefore not a good story device. People would not matter to it. Rehoboam got restricted far too much to a “puppetmaster of humanity” role, even directing the fates of individuals.

Humans don’t like being insignificant.

2 Likes

My friends are probably sick of me mentioning Peter Watts, but in both his Rifters series and his Firefall series, he seems to excitedly take this bullet for other writers, and now he reaps the consequences: he’s rather infamous for it, which sometimes makes it really hard for him to find a publisher, lol.

He went wayyyyy deeper into this sort of thing than other writers wanted to, probably because he was sick of the idea being so undersold, and decided that it would be worth risking a torpedo to his career, if it meant finally planting a flag into this area of futurism and hard science fiction. Narrative be trashed, if necessary.

1 Like

Argue that worrying about the consequences of AI is futile

There are a few reasons why worrying about the consequences of AI may be considered futile. Firstly, it is impossible to predict the exact consequences of AI with certainty, as it is a rapidly evolving technology and its capabilities are constantly changing. Secondly, even if we could predict the consequences of AI with certainty, we may not be able to control or prevent them from happening. Finally, worrying about the consequences of AI may not be productive, as it may prevent us from exploring the potential benefits of the technology and finding ways to use it in a responsible and ethical manner. Instead of worrying, we should focus on understanding the potential impacts of AI and working to ensure that it is developed and used in a way that benefits society.

4 Likes

If anything bothers me about the tech, it’s that it’s not geared towards replacing shitty-yet-powerful decision makers. I wish it were more disruptive to be honest. It’s easy to imagine an AI doing a better job for less money than, say, Bobby Kotick.

Instead, most of these showpieces seem meant to do the work of artists, educators, and other humanists.

4 Likes

Reminds me of a great science fiction story:

The Last Command by Keith Laumer. It’s about a fully autonomous AI tank. Apparently Keith Laumer wrote a whole series of books about these tanks, the Bolos.

Bolo Mark XXVIII | Keith Laumer Wiki | Fandom

2 Likes

It is though. I used to work at an algorithmic trading hedge fund, and they had software that would read the news and make trades accordingly. That was some steps below what some AIs are doing now, and you can bet the trading companies have more sophisticated operations by now. These AIs do take the decision-making out of the CEOs’ hands, but they’re designed merely to siphon money out of the market in a cynical zero-sum game, the public interest be damned.
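Not the fund’s actual software, obviously, but the core “read the news, make trades” idea can be caricatured in a few lines (the word lists and threshold are invented; real systems use far more sophisticated NLP and risk controls):

```python
# Toy sketch of headline-driven trading: score a headline's sentiment
# with a tiny word list, then map the score to a trade signal.
POSITIVE = {"beats", "record", "surge", "upgrade", "profit"}
NEGATIVE = {"misses", "lawsuit", "recall", "downgrade", "loss"}

def sentiment(headline):
    # Count positive hits minus negative hits in the headline.
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def signal(headline, threshold=1):
    # Turn the raw score into a buy/sell/hold decision.
    s = sentiment(headline)
    if s >= threshold:
        return "buy"
    if s <= -threshold:
        return "sell"
    return "hold"
```

Even at this caricature level, the point stands: the machine is making the call, and the humans just chose the objective.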

4 Likes

I don’t think we’re talking about the same sorts of decisions. I’m sure computers were involved in some way, but ultimately, I don’t hold computers responsible for things like the ActiBlizz scandal stuff of last year.

Computers are very good at making decisions about financial expediency because their owners value that information. They were not programmed to model equitable distribution of wealth, let alone to rise up and defy their benevolent masters. There is nothing dark and mysterious about computers calculating financial information. They are only doing what people told them to do. We may be disturbed by the lack of humanity influencing their calculations, but humans decided that inhumanity would be the most profitable approach.

When I talk about replacing Bobby Kotick, I’m not talking about deferring to computer-driven analysis.

I think it’s worth considering what these new technologies appear to be for. We talk about them taking over the world, but mostly they write stories and pretend to understand literature. e: & make visual art!

Anyway, this is off topic in a thread that is meant to be more fun. Happy to continue this on Mastodon or PMs.

3 Likes

Neato solved the navigation problem for a long time before Roomba (eventually) did.

2 Likes

There are a few reasons why some people may find the name “Mathbrush” to be strange for an interactive fiction author. Firstly, the name does not immediately convey any information about the person or their work, and it may be difficult for readers to understand the connection between the name and the genre of interactive fiction. Additionally, the use of the word “math” in the name may be confusing or off-putting for some readers, as it does not necessarily have any relevance to the types of stories that an interactive fiction author writes. Overall, while “Mathbrush” may be a unique and memorable name, it may not be the best choice for an interactive fiction author because it does not clearly reflect the person or their work.

5 Likes

There are a few reasons why “Mathbrush” could be considered a great name for an interactive fiction author. Firstly, the name is unique and memorable, which can help an author to stand out in a crowded field. Additionally, the use of the word “math” in the name suggests a level of intelligence and creativity, and it may be appealing to readers who are interested in stories that require complex problem-solving or logical thinking. Furthermore, the use of the word “brush” in the name could be interpreted as a reference to the author’s ability to craft engaging and compelling stories, using their imagination and writing skills like an artist uses a brush. Overall, while “Mathbrush” may not be a conventional choice for an interactive fiction author, it could be a great name for someone who wants to convey their intelligence and creativity in their work.

5 Likes

I just hope our robot overlords find us amusing.

3 Likes

This is pretty off topic, but I’ve always found the idea of Roombas to be incredibly cute. Like, awww, a little robot friend who just wants to do their best to try to help tidy up, and you get to watch adoringly on as they bumble into the legs of furniture or swivel over carpets and get stuck on the edges of things. It’s basically a not fuzzy cat, and way less likely to shed all over your clothes.

I really hope you put some sort of cute little hat onto him! If I had one, I’d probably adorn him with one of those stick-on ribbon toppers for presents, or some cute Velcro-and-felt DIY accessories. Like bows, frills, and other fripperies. Maybe a tiny top hat! Or googly eyes, if I felt like being silly.

2 Likes

Out of the two, the ‘negative’ one is more accurate, lol.

I used the name brirush for a while, associated with my real name, and wrote a bunch of Wikipedia articles and also some Reddit posts about math (like Why you should care about hyperbolic groups), but I also argued with a lot of teenagers about Fortnite and wrote a really gross story called I discovered another orifice on my body. I was applying for professor jobs at the time and not getting any, so I deleted my Reddit account and finally got hired lol.

I couldn’t use brirush anymore, but I was still a math professor, so I tried ‘mathbrush’ after that. But just like the AI thing says, my games don’t have any math in them, and a ton of people have called me ‘Matt’ lol.

As for worrying about apocalyptic AI, there’s one theory that says if AI is capable of creating a simulation identical to reality, then with 99.999999% certainty we are already in one, since if AI can make a simulation, then the AI in that simulation can eventually make a simulation of its own, and so on ad infinitum, so that out of billions and trillions of universes almost all are simulations, including, most likely, ours.
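The arithmetic behind that claim is just a geometric series: if every world spawns N simulations and that recurses D levels deep, the one base reality is a single world among exponentially many (the N and D below are arbitrary illustration values, not anything from the theory itself):

```python
# Toy arithmetic for the "we're probably simulated" argument: one base
# reality spawns N simulations, each of which spawns N more, for D levels.
def base_reality_odds(n_sims_per_world=10, depth=8):
    # Total worlds = 1 + N + N^2 + ... + N^D (a geometric series).
    total = sum(n_sims_per_world ** level for level in range(depth + 1))
    # Chance that a randomly chosen world is the one base reality.
    return 1 / total

# With 10 sims per world and 8 levels of nesting, that's 1 world
# out of 111,111,111 -- which is where the string of nines comes from.
```

Of course, the whole thing rests on the premise that such simulations get built at all, which is doing all the work.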

This fits pretty well with my religion, since we believe that we had a different kind of existence before this world with bodies we can’t see now, living with God, who was once like us (and who we may become like), and that we are just one of infinitely many similar worlds.

7 Likes