SUBJECT: Questions regarding settings for generating good-looking portraits
Hi to everybody who is reading this,
Thank you, Severian Void, for creating this model and getting me into Stable Diffusion; I have already had a lot of fun with it.
Now I would like to use it properly, to create AI portraits for a mod I am working on.
I managed to get Stable Diffusion running with the Automatic1111 web UI, installed all the files you provided, and can now generate portraits.
But I am unsure how to proceed and have some technical questions regarding the generation of vanilla-looking portraits.
It would be great if you or somebody else could answer them for me.
As I understand it, the nine-month-old LoRA can be used with Stable Diffusion 1.5 or any other model one likes the look of.
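For reference, this is roughly how I am invoking the LoRA right now, using the standard Automatic1111 prompt syntax. The file name, weight, and the rest of the prompt are just placeholders for what I typed, so please correct me if any of this is the wrong approach:

```
portrait of a man, <lora:ssportrait:1.0>
Negative prompt: blurry, low quality
Steps: 20, Sampler: Euler a, CFG scale: 7
```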
1. What model do you recommend using alongside the LoRA to get vanilla-looking portraits?
2. What does the hypernetwork "HN_ssportrait_v2_1.5_13431.pt" do, and should it be used alongside the LoRA or only with the older standalone v2.1 model?
3. How many steps, what CFG scale, and which sampler did you use for your images?
4. What model and settings does the Inference API on your Hugging Face repository use?
I would really like to match the quality and look of the Hugging Face Inference API images, as I have already managed to get some usable results there (see my pfp). Any tips would be appreciated.
Kind regards
NoSTs
PS: The mod I am working on aims to implement biological AIs into Starsector: basically uplifted animals such as blue lobsters, wolves, and monkeys, along with story missions with multiple endings.