Fractal Softworks Forum


Author Topic: Working With AI Art Tools  (Read 1089 times)

xenoargh

  • Admiral
  • Posts: 5078
  • naively breaking things!
Working With AI Art Tools
« on: March 03, 2023, 01:53:20 PM »

These last few weeks, when I'm not coding on my game or finishing the particle-system design tool it spawned from, I've been playing around with AI art tools. I wish I had time to provide a detailed step-by-step guide for this... but much like my last experiments w/ using a 3D process, I don't think I can justify writing out a big document for newbies at this time. I'm certainly not an expert on this; I don't think anybody really is. This is a fast-moving field, if ever I've seen one. However, most of the "advice" I've seen online is either "try these prompts, bro" or it's hopelessly technical. I'm hoping this helps people understand how to use these things a bit better.

I'm not nearly as interested in the pure Text-To-Image systems like DALL-E, where people just give the AI long strings of text and hope for the best, as I am in systems that can be guided with image prompts to arrive at specific design outcomes. For visual artists, this is where AI goes from being a novelty toy to becoming a power tool.

1. Nightcafe has been a really good starting place for this, and it's sorta-free and easy to use. They have multiple generators available and there's lots of prompt guidance.

Advantages: if you've locked down the AI and are sure about the style you want, you can kick out dozens of minor design variations with ease and kitbash / paint from there. Also has the smartest upscaling tech I've seen.

Input (and text prompts, like "spaceship, top view", etc.; this prompt was kept quite "boring", and I limited the AI's "creativity" (described as "noise" in Nightcafe) to get a result that was fairly close to the source material):

[image]

Output:

[image]
Does the input art have to be that polished? No. You can't quite get away with MS-Paint "drawings", but you can get pretty close. What the tech needs to do its thing is largely blocks of color, with some noise, and guides to borders and shapes: airbrush some black lines over rough geometry and put the whole image on black at the end for best results. You can get away with pretty simple stuff if you aren't concerned about smaller details or are going to fill them in yourself; it'll extrapolate from there. Nightcafe's implementation of Stable Diffusion "likes" images to be pointing upwards for some things, but works better if the image is turned 90 degrees to the "right" or "left" for others.
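If you'd rather script that prep than do it by hand, here's a tiny sketch with Python's Pillow library (the filenames and the 512x512 working size are my assumptions, not anything Nightcafe requires):

```python
from PIL import Image

# A rough concept made of blocks of color (placeholder filename)
concept = Image.open("rough_blocks.png").convert("RGB")

# Put the whole image on black, centered, at the model's working size
canvas = Image.new("RGB", (512, 512), (0, 0, 0))
canvas.paste(concept, ((512 - concept.width) // 2,
                       (512 - concept.height) // 2))
canvas.save("input_on_black.png")

# Some subjects work better rotated 90 degrees; it's cheap to test both
canvas.rotate(90, expand=True).save("input_on_black_rot90.png")
```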

Disadvantages: not really free, for serious use. Is Nightcafe worth paying for? For some things, yes.

Nightcafe's support of artistic styles is also less broad than I'd prefer. Output, once you've gotten your prompts set properly, is pretty consistent, in terms of style and tone, but there's definitely a learning curve with all of these tools.
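For the curious: that "noise" dial from the example above isn't unique to Nightcafe. In the open-source Stable Diffusion stack it's usually called "strength". Here's a minimal sketch using Hugging Face's diffusers library; this is not Nightcafe's actual code, and the model ID, filenames, and prompt are just placeholders:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a base Stable Diffusion checkpoint (placeholder model ID)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The rough concept art, on black, at the model's working size
init = Image.open("ship_concept.png").convert("RGB").resize((512, 512))

# strength is the "noise"/"creativity" knob: low values keep the output
# close to the input image, high values let the AI wander from it
out = pipe(prompt="spaceship, top view", image=init, strength=0.35).images[0]
out.save("ship_out.png")
```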



2. AI Composition, localized: If you have a CUDA-capable GPU and you're fairly brave, Easy Diffusion is a nice alternative to using Nightcafe.

Advantages: 100% freeware; access to many more painting styles and different SD models; and your queries, data, and processing all happen locally, under your control. In theory, this setup gets to evolve w/ Stable Diffusion.

For coders: if you want to mod it, it's under a permissive license, and they take commits. It's much easier to set up than some of the other direct implementations of Stable Diffusion with some sort of UI (I tried a couple of other ones and had all sorts of installer problems and Python problems and... yeah, I don't have time for this, lol).

Disadvantages:  It's just plain harder to use. This feels bleeding-edge and somewhat unpredictable. I eventually got it to produce pretty polished-looking work from concepts, but it was like wrestling an oily alligator at first. It has various little things you can dial in that aren't explained well, but make a big difference.

Biggest issue: try it now, or you may not get to use it. IDK how long it'll last; I get the impression it's largely a one-person show, and much of what's going on in this scene suggests early adopters are getting burned out as everything moves fast and breaks. There's also a serious issue of trust: it's basically running a bunch of Python, and it wants to talk to the Internet. It could potentially wreck your computer or <other really nasty malware stuff>. How long until it's dangerous or just quits working... IDK, lol. So, caveat emptor!

But it's a good example of having access to Stable Diffusion with good training data at home, and you can feed it different models as they're released. I suspect a more polished application like this will arrive within the year or so, one that automatically updates as models do, etc.
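As a concrete (and hedged) illustration of what "feeding it different models" looks like in code: with a recent version of the diffusers library, a downloaded .safetensors checkpoint can be loaded directly. The path is a placeholder, and this shows the general technique, not Easy Diffusion's own internals:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline

# Load a locally downloaded checkpoint file (placeholder path); swap the
# file to swap models as new ones are released
pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "models/some_community_model.safetensors", torch_dtype=torch.float16
).to("cuda")
```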

Input (note: I blacked out the background and expanded it a bit). The prompt was fairly "wild"; I wanted to see what weird places the AI might go. IIRC it was "insect spaceship" plus a bunch of qualifiers (style notes, etc.). I was hoping for something more organic-feeling than the original, which I kitbashed pretty quickly from SS art and my own.

[image]

Output:

[image]
A more "guided" version, where I didn't let the AI go as "wild" and told it to obey the prompts more:

Input (again, after placing on black):

[image]

Output:

[image]
This could be worked up into a serviceable piece of pixel-art with a modest time investment. As you can probably see, using "stricter" guidance means that stylistic touches tend to be a bit more consistent; this can be a good or a bad thing, but it's great if you already have a basic idea ("little brown spaceship that vaguely looks like Star Trek") and can put together a simple concept for the AI to chew on.
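For what it's worth, in diffusers terms (continuing the sketch from the Nightcafe section, and again only as an assumed equivalent of what these UIs expose), "obey the prompts more" maps to a higher guidance_scale, while keeping the AI less "wild" maps to a lower strength:

```python
# Reusing pipe and init from the earlier sketch; values are illustrative
wild = pipe(prompt="insect spaceship", image=init,
            strength=0.75, guidance_scale=6.0).images[0]
guided = pipe(prompt="insect spaceship", image=init,
              strength=0.4, guidance_scale=12.0).images[0]
```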

Just how simple can the input be? Quite simple. Here's a Romulan Bird of Prey schematic I did some very fast, sloppy rework on, as the input:

[image]

Output:

[image]
3. Relight. The basic version allows users to build and manipulate 3D lights that affect 2D images, with "height" extrapolated. It's not quite fancy enough to cast shadows, but it's very impressive and easy to use. I'm sure there's a Photoshop plugin using this tech... somewhere. If not, there will be soon; it's pretty incredible and very, very useful for quickly reworking AI-generated images: tuning the lighting to be a little more consistent, adding rim lighting, and so forth.

The Death Frog, above, has been relit with one white light up front and two pinkish lights set "low". It's not perfect, but it works surprisingly well. It probably works more reliably with scenes that involve humans; their examples were all photography retouching work.
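I don't know what Relight actually runs under the hood, but the geometric core of this kind of tool is easy to sketch: treat brightness as a rough height map, derive surface normals from its gradients, and shade with point lights. A toy numpy version follows; real tools likely use learned depth/normal estimation, and the filenames and light positions here are made up:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("ship.png").convert("RGB")).astype(np.float32) / 255.0
height = img.mean(axis=2)  # crude "height" extrapolated from luminance

# Surface normals from the height-map gradients
gy, gx = np.gradient(height)
normals = np.dstack([-gx, -gy, np.ones_like(height)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

def shade(light_pos, color, strength=1.0):
    """Lambertian contribution of one point light at (x, y, z)."""
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    to_light = np.dstack([light_pos[0] - xs, light_pos[1] - ys,
                          np.full_like(height, light_pos[2])])
    to_light /= np.linalg.norm(to_light, axis=2, keepdims=True)
    lambert = np.clip((normals * to_light).sum(axis=2), 0.0, None)
    return strength * lambert[..., None] * np.asarray(color, np.float32)

# One white light up front and two pinkish lights set "low", as in the post
lit = img * (shade((256, 256, 300), (1.0, 1.0, 1.0))
             + shade((64, 480, 60), (1.0, 0.6, 0.7), 0.5)
             + shade((448, 480, 60), (1.0, 0.6, 0.7), 0.5))
Image.fromarray((np.clip(lit, 0, 1) * 255).astype(np.uint8)).save("ship_relit.png")
```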

***********************



That said, these are fabulous power tools for anybody who wants to get art built to polish later, or to explore design variations before commissioning something from a professional. They will not build perfectly-polished, pixel-art spaceships for Starsector or similar contexts without skilled work at the end, but they're really useful for getting to a starting place quickly. With the more typical spacecraft designs, like the Nightcafe examples above, I can cut out, blackline, and fix areas quickly on a large-scale version, then shrink to production size and do the pixel-art work, which typically takes much less time if the blacklining is already done.
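And a one-liner sketch of that last shrink step with Pillow, where the filenames and the divide-by-8 are placeholders for whatever your production scale is:

```python
from PIL import Image

big = Image.open("ship_worked_up.png")
# Shrink the large worked-up version to sprite size; LANCZOS keeps soft
# detail, while Image.NEAREST would keep hard pixel edges instead
sprite = big.resize((big.width // 8, big.height // 8), Image.LANCZOS)
sprite.save("ship_sprite.png")
```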
Please check out my SS projects :)
Xeno's Mod Pack

Fontanius

  • Ensign
  • Posts: 3
Re: Working With AI Art Tools
« Reply #1 on: February 09, 2024, 09:19:05 AM »

Wow, I'm surprised nobody has replied to this yet. I've been following your work with AI art for a while now. Awesome work! I'd be interested in sharing prompts and stuff.
If you'd like, follow me on Nightcafe: J03 @Fontanius.