A Large Disclaimer for Silverbacks

While working on the worldbuilding for the WIP Silverbacks game, I’ve come to feel that some aspects should be made transparent now, both for transparency’s sake and so I can take notes on any discussion that follows.

The story of Silverbacks takes place in a world made entirely of machines. No humans can be found anywhere. Machine agents perform tasks, and are orchestrated by network intelligences.

My approach to worldbuilding and storytelling has me running the numbers and doing the math to inform how certain things are described, how certain dynamics play out, and what lands in the ballpark of feasible versus comedically outrageous.

Additionally, the worldbuilding leans heavily on a very strong special interest of mine, so it’s not enough for me to pull numbers directly from real-world machines and plop them into wildly incompatible environments.

For this reason, I want to write the software which helps me crunch the numbers and find deeper, comprehensive answers, because exploring this idea is intrinsically fun for me, and I feel there is value in this method of storytelling, despite the disagreement I have typically gotten in the past.

This is where the disclaimer comes in.

In the story, neural networks—guided by genetic algorithms—are used for informing the majority of administration decisions, which then go on to shape a lot of design and logistics.

The equations and formulas I have been using are very rough approximations, giving me anywhere from 25% to 65% accuracy compared to posted results. To push that higher with available tools, I would either be shelling out hundreds of dollars or spending years of my life making a FOSS version of the industry software for myself. A lot of the other paid computation software (with more accessible models) also doesn’t factor in the variables I need to account for.

So, I am writing—for myself—a neural network system that uses a genetic algorithm to augment the equations I’ve been using, and intuitively skew the results to fit trends and patterns found in available open-source engineering results. It’s not going to be precise or reliable enough for real-world engineering, but it will boost the accuracy of the computations to something that I find more useful and engaging from an authenticity standpoint.

I intend to use this system to compute the engineering stats of various speculative machines, by iterating on physics simulations. Again, I am writing these systems myself, because I want to enforce a specialized framework. Also, it’s a fun coding challenge.
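To make the idea of “augmenting an equation” concrete, here’s a minimal sketch. Everything in it (the base formula, the correction parameters, and the reference numbers) is an invented placeholder, and a crude grid search stands in for the genetic algorithm:

```python
# Sketch only: a rough base equation is kept intact, and added
# correction parameters are tuned until the combined result tracks
# published reference numbers. All values here are invented.

def base_equation(x):
    """Stand-in for a rough real-world approximation formula."""
    return 2.0 * x

def corrected(x, a, b):
    """The base equation augmented with tunable parameters a and b."""
    return base_equation(x) * a + b

# Invented stand-ins for posted experimental results.
reference = [(1.0, 2.7), (2.0, 5.1), (3.0, 7.5)]

def error(a, b):
    """Mean absolute error of the corrected equation vs. the references."""
    return sum(abs(corrected(x, a, b) - y) for x, y in reference) / len(reference)

# A crude grid search stands in for the tuning step; the real project
# uses a genetic algorithm here instead.
best = min(
    ((a / 10, b / 10) for a in range(5, 16) for b in range(0, 11)),
    key=lambda p: error(*p),
)
print(best)  # lands on (1.2, 0.3), which matches the reference trend
```

The point is that the hand-written formula stays in charge of the overall shape; the learned part only absorbs the error between the rough estimate and reality.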

Part of the reason I’m doing this is that the network’s method of design iteration is something I’d be doing by hand anyway: entering different numbers, over and over and over again, into a massive spreadsheet of test cases until every metric came back as close to 100% as possible, then feeding the results into a second spreadsheet, where the process would repeat all over again with an extra order of magnitude of complexity.

However, I hope you can understand that if I continued this glorified guess-and-check process by hand, this project would take approximately 20 years of burnout, of which only a few months would have been spent writing the story. It would be like refusing to use a calculator while designing a submarine. I don’t feel this is a case where the technology is the problem, and no human would want to spend that much time repetitively grinding through this mind-numbing (and honestly disorienting) multi-stage formula iteration, chasing a mysterious set of output numbers.

So, to be clear:

  1. The story will be of my own creation.
  2. All the prose will be written by me.
  3. All the audio will be recorded, processed, and mixed by me.
  4. The game cover is already done, and was modeled, textured, lit, processed, etc., entirely by me.

The neural networks will be designed by me (no pre-existing machine-learning API will be used), and they will be trained on open-source experimental results found on Wikipedia and the Engineering ToolBox. The networks will have a minor role in this project: they do not create game logic, prose, audio, or graphics. They are just helping me compute speculative machine measurements and metrics (like “15 meters” or “160 kilograms”), which I can use as guideposts and numeric details to sprinkle into the story and gameplay.

That’s the extent of their use, and that’s as far as it will go in this project. Creating the story, writing the prose, recording the audio, writing the music, creating the cover art, and writing the game code is fun, and I would prefer to do all of that myself.

I’ll leave the math computations about these fictional, speculative machines to the network—which I will directly and ethically design for myself.

There’s also a certain amount of whimsy in echoing the in-universe reasons why stuff in the story appears to be a certain way.

That’s all. I’ve put my cards on the table. I understand how this technology works enough to implement it myself, and I am also an interdisciplinary artist who has felt the economic consequences of machine learning being thrown at literally anything a CEO could recklessly point at. Given these insights, I feel confident in my ability to make informed and ethical decisions on how to use this technology as one of many tools for creating speculative science fiction.

If you disagree, you’re always free to reply below or contact me elsewhere.

Anyway, as always: Pay your artists.


To me, this is really interesting, and considering its use and creation, I would more than happily back up this sort of project if I could. I like the idea of realistic measurements to add to the story, and using a bit of homemade neural networks shouldn’t stop you. If it were artwork and prose and didn’t have a really good reason, then that would be another thing*.

* - That wasn’t really directed at you. Also, if you want to argue, please note that every search engine you use relies on fairly large quantities of neural networks, IIRC.


Yeah, I like to try to model and number-crunch for a lot of aspects of my stories. For example, I planned out the energy/heat cycle of the facility in I Am Prey, and outlined where all the various life support systems, plumbing, and other systems would go.

The problem I’ve been having is that the kind of machines I’m trying to run numbers for apparently haven’t seen as much progress in computer-aided design. It seems a lot of this stuff is just built small-scale and put through rigorous scientific testing to get the numbers, which are later plugged into software to help iterate the next design. It doesn’t seem possible to stay theoretical/speculative for the entire planning process in this industry.

Which means I have to use formulas that provide really rough ballparks, and then systematically narrow them down using processes that aren’t being explored at all in the real-world industry (for precision reasons). It also means nobody else seems to have published anything on this, because who would create highly detailed speculative machines without intending to build them for real-world industrial purposes? So, for lack of what the industry calls a “practical purpose”, I’m forced to swim through waters that aren’t strictly sanctioned or explored in science and engineering, in order to get really close to these machines without the risk of having to build them in real life.


Also, because I have already gotten this question twice before:

No, this is not something that can be published to make bank in the relevant industry.

I am factually and mathematically not making something precise enough for the industry to find useful. It gets pretty close, but not close enough to be applicable in real-world manufacturing.

This is only useful for science fiction purposes.

Some people have been obsessed with turning all of my hobbies into businesses and sources of income. Just let me have fun with autism-driven speculation. Gosh.


This sounds really cool! And I’d be shocked if anyone (at least anyone who’s really familiar with the technology) is pushing to ban neural networks altogether. I think they’re absurdly over-applied today (“we don’t need to understand the problem if we can just throw more GPU-hours at it!”) but they’re incredibly useful for the things they do, and wanting to ban genetic algorithms because of the problems with DALL-E and its ilk seems like a ridiculous overreach.


Um. Why would this be bad? Is it because it’s AI? Like, you’re writing an AI? How cool is that? Why does this need explanation or defense?

I confess that I get lost in all the “Thou shalt nots” today and often need guidance.


I don’t think anyone has a problem with all AI/ML techniques; or if they do, they need to learn to differentiate between types. Doesn’t sound like you’re doing any generative AI, though, so I can’t see any reason one would object.


I largely posted this to follow a transparency policy, because I also believe in that policy. More just “This game used AI in its development process; here’s where it was used, for the awareness of the player”. :grin:


AI is a large subject. I don’t think anyone will complain if you’re using AI for pathfinding, for example. A closely related subject is AI heuristics for playing board games, such as chess. Another step forward would be a sophisticated Markov chain for text generation. So, if I’m making a David Copperfield ghostwriter project, I’d be patterning my Markov chain on his texts.

Is a Markov chain considered AI? It can be. Or it can be considered statistical probability, which is not AI at all.
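To illustrate the “statistical probability” reading: a word-level Markov chain is just a table of observed transitions plus a random walk over it. A minimal sketch, with a made-up corpus standing in for an author’s texts:

```python
import random
from collections import defaultdict

# Tiny made-up corpus standing in for an author's texts.
corpus = "the fog crept in the fog rolled out the night crept in".split()

# Count which word follows which: pure observed statistics.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length=8, seed=0):
    """Walk the chain, picking each next word from observed followers."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:  # dead end: no observed follower
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("the"))
```

Every word it emits was observed in the corpus; there is no model beyond the frequency table, which is why it can reasonably be called statistics rather than AI.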

Here’s another thing: I’ve been researching Artificial Personality, meaning synthetic people to inhabit game worlds. By all definitions, they’re clearly AI. If I can’t have that kind of AI, then the world will be filled with cardboard characters. So boring!

The current AI tech uses lots of computing power, both for data and for training. However, I can categorically say that it’s not necessary. A game isn’t like real life, and is thus very simplified. Depending on the quality of the data, a few dozen case studies are enough for passable behavior; more is better, but a few hundred should be sufficient for “good enough” quality.

So, where do we draw the line? Honestly, I don’t know. I know ChatGPT and the like are frowned upon. But the field of AI is so vast that there’s no clear defining line that must not be crossed.


Exactly. I still felt it would be useful/informative to outline the specifics and how it contrasts with the generative stuff, just in case. ML is a large umbrella.

EDIT: Got sniped by @ramstrong


This. I am fortunate in my use case that I won’t need to find a huge amount of data for this purpose, which means I can use freely-posted example/reference data of a size that other models might consider tiny. Especially because I’m not designing my framework to be super generalized.


It just so happens that I had a machine learning lecture in uni.

Can you give an example data point and the variables you want to predict? Implementing neural-network training yourself is really a last resort, and there are many FOSS libraries out there already, like PyTorch and TensorFlow. I want to see if rolling your own is really necessary for your use case.


Nope! I already let slip too much in the NeoInteractives Discord, and I want to keep Silverbacks unspoiled for the remaining development process~ :grin:

Well, a good part of my use case is having fun, and implementing this is fun, so…

Also, I’m nearly done implementing it anyway, and it’s one of those things where the ML can’t be allowed to handle the entirety of the computation; it needs to work as a component that augments a hand-written process. I’m also setting up the nodes to have specialized roles, with connections that expose a few tweakable settings. It’s intentionally not the usual setup where nodes and connections are general-purpose and it’s up to the weights to sort everything out. I’m dictating what the neurons do and how they connect, and then leaving it to the genetic algorithm to tune the exposed variables of the different nodes and connections. This is mostly because I know which variables relate to each other, but not by how much. So I’m building the structure myself, and the algorithm operates within it, mostly to prevent it from using generalized space to find direct connections outside the structure of the problem and overfitting in a way that completely breaks everything else.
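To sketch what “fixed roles, tunable exposed variables” could look like (purely hypothetical node types and wiring, not my actual code):

```python
# Hypothetical sketch of a fixed-structure node network: the nodes and
# their wiring are hand-written (encoding known relationships), and only
# the exposed parameters are left for a genetic algorithm to tune.

class ScaleNode:
    """Multiplies its input by a tunable gain."""
    def __init__(self, gain=1.0):
        self.gain = gain  # exposed to the tuner

    def forward(self, x):
        return self.gain * x

class BlendNode:
    """Mixes two inputs with a tunable ratio (0..1)."""
    def __init__(self, ratio=0.5):
        self.ratio = ratio  # exposed to the tuner

    def forward(self, a, b):
        return self.ratio * a + (1.0 - self.ratio) * b

class FixedNetwork:
    """The wiring below is dictated by hand; only parameters evolve."""
    def __init__(self):
        self.drag = ScaleNode()
        self.lift = ScaleNode()
        self.mix = BlendNode()

    def exposed(self):
        return [self.drag, self.lift, self.mix]  # what the GA may touch

    def forward(self, x, y):
        # Structure is fixed: drag acts on x, lift on y, then they blend.
        return self.mix.forward(self.drag.forward(x), self.lift.forward(y))

net = FixedNetwork()
net.drag.gain, net.lift.gain, net.mix.ratio = 2.0, 3.0, 0.25
print(net.forward(1.0, 1.0))  # 0.25*2.0 + 0.75*3.0 = 2.75
```

The genetic algorithm only ever adjusts the exposed parameters; it can never rewire the graph, so it can’t invent shortcut connections that sidestep the known structure of the problem.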

But that’s less me arguing my case and more explaining it for your curiosity. I would have used the FOSS APIs if I wanted to.


I’m not sure I’ve understood the AI/math issue well (to me, math and AI are very different fields requiring very different hardware; I follow Asimov here, who posited distinct hardware for AI, his “positronic brain” being clearly distinct from computing hardware…), but it seems your disclaimer is out of place: true AI needs non-deterministic hardware capable of handling abstract concepts not reducible to numbers and formulas, so even the most advanced “AI” is actually pseudo-AI, because of the limitations of the underlying hardware.

Pseudo-AI can assist in coding, because coding implies, even in IF, dealing with numbers and formulas. So I guess that Joey has done the right thing.

Best regards from Italy,
dott. Piergiorgio.


This is precisely why I hate how machine learning started getting called “AI” by marketing teams.

The structure I’m working with is:

  1. An equation, which only gives me a low-precision approximation.
  2. I add variables to parts of the equation.
  3. These variables are changed by a network of linked operation nodes.
  4. A Darwinian trial-and-error process attempts to solve the equation using candidate values for the added variables. If the result does not match the values found in real-world experiments, it changes how the node network derives the added variables.
  5. The most accurate attempts share notes and randomly introduce minor deviations.
  6. This process repeats until the network is able to assign values to the added variables which account for lost precision in the original equation.
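In code, that loop might look something like this (the equation and the “experiment” numbers are invented placeholders, not the real ones):

```python
import random

rng = random.Random(0)

# Step 1: a deliberately imprecise base equation (hypothetical example).
def rough_equation(x, a, b):
    # Steps 2-3: a and b are the added correction variables.
    return x * x * a + b

# Stand-ins for published real-world experiment results.
experiments = [(1.0, 1.4), (2.0, 4.7), (3.0, 10.2)]

def fitness(params):
    a, b = params
    # Step 4: score an attempt against the experimental results.
    return -sum(abs(rough_equation(x, a, b) - y) for x, y in experiments)

# A population of candidate values for the added variables.
population = [(rng.uniform(0, 2), rng.uniform(0, 1)) for _ in range(30)]

for generation in range(200):
    # Keep the most accurate attempts as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = []
    for _ in range(20):
        # Step 5: "share notes" (crossover) plus minor deviations (mutation).
        pa, pb = rng.sample(parents, 2)
        child = (pa[0] + rng.gauss(0, 0.05), pb[1] + rng.gauss(0, 0.05))
        children.append(child)
    # Step 6: repeat until the added variables absorb the lost precision.
    population = parents + children

best = max(population, key=fitness)
```

With the placeholder data above, the population converges toward the values that make the rough equation reproduce the experimental numbers, which is the whole job of the added variables.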

If you want some inspiration, look at Kenneth Stanley’s NeuroEvolution of Augmenting Topologies (NEAT) algorithm: a genetic algorithm that evolves increasingly complex neural networks to solve problems. His approach addresses some of the issues you may run into, and it isn’t nearly as complicated as it sounds.

Stanley is also keen on novelty and creativity, rather than the standard “optimize an objective function” approach of ML; these help avoid the local-minima problem of pure optimization.


As someone who had to work with an LLM (OpenAI) for my job and got very familiar with how neural networks, embeddings, et al. work… who is also an artist and writer who can and will shout my hatred of “AI”, its training sets, and how it’s used to shove creative people out of their industries till the cows come home…

I have absolutely 0 problem with this.

The technology isn’t the problem. The innovations and science behind ML are fucking amazing, cutting edge brilliance.

The training sets scraped from copyrighted data, private health information, and more DEEPLY traumatizing /triggering content like ISIS executions and nonconsensual porn are the problem. And the companies paying Kenyans $2/hr to search through said deeply traumatizing training data are the problem. And the massive amounts of computing power for generative AI that have an enormous carbon footprint are the problem. And the countless numbers of artists, translators, writers, gamedevs, programmers, et al getting their labor replaced by plagiarized, sub-par, machine-generated content are the problem. And the academic publications written by LLMs, and the search engines trained on AI degrading reliable digital truth, and the deepfake kidnapping scammers, and and and the list goes on (and I am elucidating this so that other people know these problems-- I am confident that you, Joey, know of them already).

Seeing as your model has NONE of these problems, it’s genuinely fine. The technology is still amazing, and the cat’s not going back in the bag. So if you’re using it ethically (and you are), why not use it?

Excited to see what you do with this!


That’s the best way to do it, imo. Apply human knowledge to break the problem down into pieces before letting the machine learning handle the tuning!

Not many people realize why ChatGPT is considered so world-changing when the GPT underlying it never drew much attention. It’s the human-coded boilerplate over it, which turns “continue this text” into “hold a conversation”, tapping into humans’ perceptual biases and making us overlook its flaws.


Trying to close the political-social side of this debate, I note that the core trouble is, as Joey pointed out earlier, one that applies in general, and all the more to people engulfed in what I call the “terminal illness of capitalism”, which is not only greed and profiteering. The points noted by Aster are symptoms of this illness.

Ethics can’t be enforced, especially under so-called common law, and enforcing ethics by statute is an extreme, last-resort measure; so banning AI’s abuses, if not outright banning AI, should be kept as a last measure, implementable and enforceable only under Roman law.

That said, this specific debate isn’t only OT, but is currently unsolvable, so it’s best to put it on the shelf, at least for a few years.

Best regards from Italy,
dott. Piergiorgio.