Would you mind sharing how you generated these? Using existing images to generate new ones in the same art style is interesting to me. Thank you.
First you need a local installation of Stable Diffusion; I used the Automatic1111 Web UI, which has an automatic installer. Your PC obviously needs to meet the hardware requirements:
A graphics card (preferably NVIDIA) with 4 GB of VRAM as a bare minimum (which will limit you quite a bit in how many images you can generate; I recommend at least 6 GB)
16 GB of RAM, or 8 GB with the pagefile enabled (loading and switching between models is where it will eat all the RAM).
Then you need Severian Void's SD model. You need the CKPT file (the model itself), which you put in the models/Stable-diffusion folder inside your Stable Diffusion files. I also HIGHLY recommend the hypernetwork file he provides to push it the extra mile (an example of why is below); that one goes in models/hypernetworks.
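As a rough sketch, the folder layout ends up like this (the file names here are just examples; use whatever the downloads are actually called):

```
stable-diffusion-webui/
└── models/
    ├── Stable-diffusion/
    │   └── SeverianVoid_Starsector.ckpt   <- the model file
    └── hypernetworks/
        └── SeverianVoid_Starsector.pt     <- the hypernetwork file
```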
If you are here, time for the meat.
You can leave these settings as they are if you are just looking for the Starsector portraits.

Some basic info:
Resolution: the 1.5 version of Stable Diffusion was trained on 512x512 images, so I highly recommend you don't touch that.
Sampling Steps: the number of steps (duh) that the AI will take to generate the image. How many depends on the sampling method and the type of image you are going for. For this, leave it at 30.
Sampling method: the algorithm used to generate the image, which can potentially lead to different results. For the Starsector model, leave it at Euler a.
Using txt2img, we use a prompt that looks like this:
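Something along these lines works as a starting point; the trigger words are explained in the notes below, and the subject description at the end is just a made-up example to fill in:

```
starsectorportrait, sabattier effect, brush strokes, portrait of a fleet officer
```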
Some notes:
starsectorportrait: always there to trigger the model to go for a Starsector portrait. Without it, it is just a normal SD model.
sabattier effect, brush strokes: these two were recommended by Severian Void in order to get to the art style of vanilla Starsector, but they are not required, especially if you want to experiment with a different art style.
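If you would rather script the generation than click through the UI, the Web UI also exposes an HTTP API when launched with the --api flag. A minimal sketch using the settings above, assuming the default local address (the prompt is the example one; selecting the hypernetwork is still done in the Web UI settings, as described below):

```python
import base64
import requests

# Minimal txt2img call against a local Automatic1111 Web UI started with --api.
# Note: this does not select the hypernetwork; that is set in the Web UI settings.
payload = {
    "prompt": "starsectorportrait, sabattier effect, brush strokes",
    "steps": 30,                # sampling steps, as recommended above
    "sampler_name": "Euler a",  # sampling method for the Starsector model
    "width": 512,               # SD 1.5 was trained on 512x512
    "height": 512,
    "seed": -1,                 # -1 = random; fix the seed to reproduce an image
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()

# The API returns the generated images as base64 strings.
for i, img_b64 in enumerate(r.json()["images"]):
    with open(f"portrait_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
```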
Hypernetwork example:
Let's say you didn't enable the hypernetwork inside the WebUI Settings, as seen here:
With the hypernetwork and the prompt example, I generated this Luddic fellow:
Without it, even with the same seed, this is what gets generated:
If I enable the hypernetwork:

We have our Luddic warrior again, plus a couple of portraits that are also present in my mod. All the portraits are much closer to the vanilla Starsector art style, and the number of bad portraits is reduced (for the most part).
From here, when it comes to txt2img, it is all about experimenting with your prompt.
img2img is better used for iterating on generated images. While it can potentially work with unrelated images or photos, the results can look very weird and forced (if not just straight up broken).
Low denoising strength will look forced, but high denoising strength can deviate so much that you probably won't be happy with the result. Iterating may help.
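For the scripted route, the same API has an img2img endpoint; denoising_strength is the knob discussed above. A minimal sketch, reusing the example file names from before:

```python
import base64
import requests

# Minimal img2img call: feed a previous generation back in and vary denoising_strength.
with open("portrait_0.png", "rb") as f:  # example input from the txt2img step
    init_image = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [init_image],
    "prompt": "starsectorportrait, sabattier effect, brush strokes",
    "steps": 30,
    "sampler_name": "Euler a",
    "denoising_strength": 0.5,  # low stays close to the input, high deviates a lot
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()

with open("portrait_iterated.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```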

Trying to generate a portrait directly at 128x128 resolution (the base game's portrait size) goes badly.
Omega is not happy with you.

It's better to reduce the image size in something like GIMP or any other third-party app.
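If you want to script that too, a quick sketch with Pillow (the file names are just examples):

```python
from PIL import Image

# Downscale a generated 512x512 portrait to the base game's 128x128 portrait size.
# Lanczos resampling keeps the downscale reasonably sharp.
img = Image.open("portrait_0.png")           # example input file
img = img.resize((128, 128), Image.LANCZOS)
img.save("portrait_0_128.png")
```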
In any case, you are probably going to generate a lot of images, as most of them will not be mod-ready. If you have basic Photoshop/GIMP knowledge, you can use it to combine different images to get a good result or finish the job. If you are an artist, you can go above and beyond editing the generated image, or create a very good base for the AI to finish. Hell, you could TRAIN a new model to generate portraits in your style.