Artist(s) sought for silly science-fictionish paranormal parser game

I’ve been writing a game called Bureau of Strange Happenings (BOSH) which has a lot of AI art in it. Not long ago there was a somewhat contentious and emotional thread on intfiction about AI art; the short version is that some people aren’t too happy about it. So I thought I’d just rip the art out of the game, but I don’t like the game that way, and I went away and sulked for a few months. Recently, though, I’ve really had the urge to work on the game again.

So I’m hoping to find someone who would agree to split the byline and reinterpret the art I already have (not copying it; I’d want any collaborator to bring their own vision). I’m not clear on what style I’m aiming for, but I’ll post a few of the existing images at the bottom of this post.

If you’re interested, DM me. Can’t wait to hear from you!





Just an idea: how about making collages of faces? Since it’s the bureau of STRANGE happenings, putting together some strange faces from magazine pictures seems appropriate. You know, two different eyes, someone else’s mouth on a cut-out head. Totally legit way to art, no drawing skills necessary. Then just scan them.


From the samples, it sounds like you want a photo-real style. I could maybe do some renders? I love IF that has images (and sound)!

My problem with AI is not a moral one; I just think it’s not very good! At first glance it looks realistic, but when you look closely you see the faces look like plastic, and there is no skin detail. There are also mistakes.

All the AI art I’ve seen is low-res. So, a question for people with paid AI art accounts: can you make hi-res (e.g. 4K) images, and if so, do they have photo-real detail? I’d quite like to know. And how much does it cost to churn out 4K AI images?

I’m looking at your “faraji” portrait; here’s one of mine. Now bear in mind this is the “plain jane” render: no facial expression and no accessories. Adding these makes the images a lot more natural, but I make the “plain janes” as starting points.

It takes a lot of effort to visually create a character. A lot of details go into it.

Here’s a character I created today. She’s not finished, but it’s going well. This is “Zulin Chen” from my forthcoming “spiderman spoof”.

Some people think the hot pink is too much, what do you think?

Edit: you might want to zoom the images.


oh man, those freak me out. :scream: noooooooooooo (please understand I’m screaming in the most respectful way.)


Photorealism is not 100% necessary. But renders are a definite possibility, especially given the potential to use different shaders to get varied rendering styles.

I’d agree that the pictures don’t exactly look realistic, but the amount of detail has increased – this is something I just made a few minutes ago: Midjourney

I have a paid account on Midjourney. They just added resolutions equivalent to 4096x4096. I pay $30/month to make as many as I want (although I only get a certain number of “fast” minutes before my jobs are put in a lower-priority queue). The photo linked above is 4096x4096.

I was assuming above that rendering with different shaders is possible. Is it possible, for instance, to get non-realistic cartoon styles, or maybe a rotoscoping effect? That might go well with the feel I want to get.


Hey, thanks for posting the 4K Midjourney image. This is the first high-res image I’ve seen from AI.

OK, so this image is very good. The hair and eyebrows are great. The facial skin is a bit over-smooth and the eyes are lacking some detail, mostly in the irises.

I have been wondering how feasible it is for AI to make the same character with different expressions or different poses. For me, this has been one of the showstoppers for games like IF. Can MJ store the context for your character so you can then ask for different versions of the same character?

Say you wanted to draw the same person smiling?

Regarding renders, once you make your character “puppet”, you can then pose them really easily, and give them different facial expressions.

Regarding shaders, this actually doesn’t work for rendering. Well, not that I’ve managed, at least. Originally I thought it would be easy to have, say, a “toon shader” or a “manga shader”. But actually, you can’t. Attempts I’ve made come out very poorly, especially for people. Some backgrounds kind of work, but that’s it.

There are various filters in Photoshop and others that claim to give certain looks. Mostly my results here have been awful. You wind up with subtle imbalances in the faces that either change the expression or make the eyes go terribly wrong. One bigger than the other, that sort of thing.

Something I forgot to mention earlier: if you want a specific art style for your images (i.e. not realistic), then AI (or something else that isn’t a render) is the answer. Personally, since I often like to make sci-fi, realistic renders work better when you have shiny surfaces, glass, and emissives.

Not close enough.

Diffusion methods can’t really reproduce images. They don’t model the content in any way; they just know how well their output matches other images with similarly-labeled content. That’s also why their output tends to look a bit too smooth: they average out interesting features, i.e., those not necessarily representative of their subject matter. For the pictures I’ve used, I selected one I liked and (after some failed attempts) didn’t try to make other pictures of the same person.
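
To make the “averaging” point concrete, here’s a toy numpy sketch of the kind of denoising loop a diffusion model runs at generation time (a simplified textbook DDPM sampler, not Midjourney’s actual pipeline; `eps_model` is a hypothetical stand-in for the trained noise-prediction network):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def sample(eps_model, shape):
    """Generate by iteratively denoising pure noise."""
    x = np.random.randn(*shape)                     # start from random noise
    for t in reversed(range(T)):
        eps = eps_model(x, t)                       # predicted noise at step t
        # Move x toward the model's best guess at the clean image...
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * np.random.randn(*shape)  # ...then re-add noise
    return x
```

Nothing in that loop stores “this is the same person as last time”; every step just pulls the image toward what typical prompt-matching images look like, which is exactly the averaging behavior described above.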


Thanks for that. It’s impressively close, although you can tell it’s not quite the same person. The hair is good, though the glasses are a very slightly different shape. I’m still impressed by how close it is.

An annoyingly small difference you’d have to fix manually: the shirt collar has buttons in one image and not the other. Not a huge problem; you could edit out the buttons easily in post. Also the lapel buttonhole.

This is one of my concerns with AI art: how much post-editing is needed? Small stuff can be done by non-artists, but anything else would need much more skill.

So, correct me if I’m wrong, but it would be really nice to somehow give an AI result a name, and then make more art based on that name. For example, by calling the character “Larry”, you could say “draw Larry riding a bike” or something.

I’m assuming you can’t (currently) do that, which is one of my main reasons to conclude AI art is not yet ready for use in real productions.

To be sure, I’m ready to be convinced otherwise. Rendering is not easy either!

That would be nice, but it would require something substantially different from plain diffusion techniques. A diffusion system knows that certain linear functions detect people, but it takes thousands of images to train it to recognize a specific person, and since we have only the one picture of Larry, it can’t really recognize Larry at all.

That said, it can reproduce celebrities well:
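
Here’s a tiny runnable toy (all names and numbers invented) of why a celebrity name works but “Larry” doesn’t: prompt words only steer the model through embeddings it learned during training, so a name it has never seen falls back to a generic “person” with no stable identity to reproduce across prompts.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 8  # toy embedding size

# Identities the model saw thousands of times get their own learned vector;
# any unknown name falls back to the generic "person" embedding.
learned = {"celebrity": rng.normal(size=EMB), "person": rng.normal(size=EMB)}

def embed(prompt: str) -> np.ndarray:
    """Average the learned embeddings of the prompt's tokens."""
    vecs = [learned.get(tok, learned["person"]) for tok in prompt.lower().split()]
    return np.mean(vecs, axis=0)

print(np.allclose(embed("celebrity"), embed("celebrity")))  # True: stable identity
print(np.allclose(embed("larry"), embed("bob")))            # True: both just "person"
```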


When do you want the art by? I could maybe do something (emphasis on maybe) though it depends a lot on the timeframe. I’m pretty busy for the next few weeks, but if I had more time…

Hi! My timeframe is completely open right now. The game goes back (in some form) nearly two years and I’ll probably spend another year working on it.

I looked at EviscerateThisGirl.com and need to spend more time on it. Seriously impressed.

Also, I have never won at Brogue. I kind of thought nobody did. So impressed there too.

If you’d like to look at it I can send you the current build of the game.


Sure, go ahead! I’ll see what I can do in the next [unknown period of time hopefully not longer than a month, who knows]. Also thanks, haha. Winning at Brogue isn’t that much of an accomplishment if you consider how many years I’ve been playing that game for (it’s a lot of years).

I’ve always been fond of variational autoencoders, which can sort of do this… but I’ve never seen a good way to deal with the fuzziness that comes with them.


I got ChatGPT to give me a very dumbed-down version of the differences between a VAE and a diffusion model. Interesting.


I haven’t done much with computer science since starting my PhD program (since now I’m in linguistics first and foremost), but at that point, the two big technologies for image generation were autoencoders and GANs (generative adversarial networks). All the attention was on GANs because they could make sharper output, but the dirty secret was that 99% of the time, GANs would crash and burn during training (“mode collapse”, also sometimes known as “the Helvetica scenario”). Which meant all the real GAN development was restricted to the big companies who could afford to retrain their model a hundred times to get the 1% that actually worked.
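
For the curious, the adversarial setup itself is short to write down; the fragility is in the training dynamics, not the code. A minimal PyTorch sketch with toy networks (nothing here is from a real production model):

```python
import torch
import torch.nn as nn

# Toy generator and discriminator: noise -> 2-D point, point -> real/fake logit.
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor):
    """One adversarial round: D learns real vs. fake, G learns to fool D."""
    n = real.shape[0]
    # Discriminator: push real samples toward 1, generated samples toward 0.
    fake = G(torch.randn(n, 64)).detach()
    loss_d = bce(D(real), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: push D's verdict on fresh fakes toward 1 ("real").
    fake = G(torch.randn(n, 64))
    loss_g = bce(D(fake), torch.ones(n, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # Mode collapse: if G finds one output that reliably fools D, nothing in
    # loss_g rewards it for producing anything else.
    return loss_d.item(), loss_g.item()
```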

I know a lot less about the new generation of diffusion models, but I’ve always liked autoencoders because of how much control you have over the output, and hoped they’d make a comeback. Their big strength is that you can do what we’re talking about here: say “take this image, but make him look down and to the right, and give him glasses” without completely changing the person’s look.
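
That kind of edit is usually done with latent-space arithmetic. A runnable toy sketch (the latent codes here are random stand-ins; in a real VAE they’d come from a trained encoder, and you’d decode the edited code back to an image):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 16-D latent codes. Pretend these came from encoding labeled portraits.
z_glasses = rng.normal(size=(100, 16)) + 2.0   # codes of glasses-wearers
z_plain = rng.normal(size=(100, 16))           # codes of everyone else

# The "glasses" direction: difference of the class means in latent space.
glasses_dir = z_glasses.mean(axis=0) - z_plain.mean(axis=0)

z = rng.normal(size=16)            # latent code of our portrait
z_edited = z + 1.0 * glasses_dir   # same code, nudged along "add glasses"
# decoder(z_edited) would then render the same face, now with glasses;
# the rest of z is untouched, so the person's overall look is preserved.
print(z_edited[:4])
```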

