Welcome to our deep dive into Flux, the latest in AI image generation. This article distills key insights from our recent webinar with Sebastian Kamph.
Whether you're new to AI art or a seasoned pro, this FAQ will help you understand Flux's potential and how to use it on platforms like ThinkDiffusion. Let's explore the most common questions from our webinar, with expert answers and practical tips to get you started.
Frequently Asked Questions about FLUX AI
How long does it take to generate images with Flux on ThinkDiffusion?
On a high-end machine (48GB GPU), generating a 1024x1024 image with the Flux Dev FP16 model takes just a few seconds. There's an initial setup time of a couple of minutes while the models load, but once that's done, each new image generates rapidly.
If you're using a lower-end system, generation times may vary. For more details on optimizing your Flux setup, check out our Flux workflow guide.
Can you use LoRA models with Flux?
Yes, but you need Flux-specific LoRAs; older Stable Diffusion LoRAs won't work. Here's how to use a LoRA in your workflow:
• Add a "Load LoRA" node and connect it after the "Load Diffusion Model" node; the purple model connection then continues from the LoRA node into the "ModelSamplingFlux" node.
• Route the CLIP connection through the LoRA node as well: connect the CLIP output from the CLIP loader into the "Load LoRA" node, then connect the LoRA node's CLIP output to the "CLIP Text Encode (Prompt)" node.
• Select your Flux LoRA in the node's settings.
You can chain multiple LoRAs for more complex effects.
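If you prefer to see the wiring outside the GUI, here's a minimal sketch of the same connections as a ComfyUI API-format prompt fragment built in Python. The node IDs, filenames, and prompt text are placeholders (assumptions, not taken from the webinar workflows), and the fragment only covers the loader, LoRA, and text-encode portion; the sampler, latent, and VAE nodes would follow downstream.

```python
# Minimal sketch of the LoRA wiring as a ComfyUI API-format prompt fragment.
# Filenames, node IDs, and the prompt text are placeholders; a full graph
# would continue with ModelSamplingFlux, a sampler, and VAE decode/save nodes.
import json

workflow_fragment = {
    "1": {  # "Load Diffusion Model"
        "class_type": "UNETLoader",
        "inputs": {"unet_name": "flux1-dev.safetensors", "weight_dtype": "default"},
    },
    "2": {  # CLIP loader used in Flux workflows
        "class_type": "DualCLIPLoader",
        "inputs": {
            "clip_name1": "t5xxl_fp16.safetensors",
            "clip_name2": "clip_l.safetensors",
            "type": "flux",
        },
    },
    "3": {  # "Load LoRA" sits between the loaders and everything downstream
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": "my_flux_lora.safetensors",  # must be a Flux-specific LoRA
            "strength_model": 1.0,
            "strength_clip": 1.0,
            "model": ["1", 0],  # MODEL output of the diffusion-model loader
            "clip": ["2", 0],   # CLIP output of the CLIP loader
        },
    },
    "4": {  # "CLIP Text Encode (Prompt)" now takes the LoRA node's CLIP output
        "class_type": "CLIPTextEncode",
        "inputs": {"text": "a cinematic portrait photo", "clip": ["3", 1]},
    },
    # The LoRA node's MODEL output (["3", 0]) then feeds "ModelSamplingFlux".
}

print(json.dumps(workflow_fragment, indent=2))
```

To chain LoRAs, you'd add a second "Load LoRA" node whose model and CLIP inputs come from the first LoRA node's outputs.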
Is there a way to choose whether or not to use the upscaler in the workflow?
Absolutely! It's easy to toggle the upscaler on and off. Just select the upscaler node and press Ctrl+B (or Cmd+B on Mac) to bypass it. The node will turn purple when it's disabled. This is great for quickly generating images without the extra time needed for upscaling.
How does Flux compare to Stable Diffusion?
Flux, being newer, has surpassed Stable Diffusion in several ways:
- Better out-of-the-box generations
- Improved text understanding and generation
- 16-channel VAE (compared to Stable Diffusion's 4-channel), resulting in better details
- Native resolution of 1024x1024
- Improved handling of hands, feet, and other details

However, Flux is more resource-intensive and has fewer extensions available due to its newness.
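If you want to verify the VAE difference yourself, here's a small sketch using the diffusers library (not part of the webinar workflows). It assumes you have access to the FLUX.1-dev repository, which is gated and requires accepting its license on Hugging Face; SDXL is used here as the 4-channel Stable Diffusion comparison.

```python
# Quick check of latent channel counts via diffusers.
# Assumes FLUX.1-dev access has been granted (gated repo) and you're logged in.
from diffusers import AutoencoderKL

flux_vae = AutoencoderKL.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="vae"
)
sdxl_vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="vae"
)

print(flux_vae.config.latent_channels)  # 16
print(sdxl_vae.config.latent_channels)  # 4
```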
Can we edit the LLM-generated text in the workflow?
In the specific workflow demonstrated, you can't directly edit the LLM output. However, you can:
- Copy the LLM-generated text.
- Disable the LLM node.
- Paste the copied text into a regular text input node.

This allows you to modify the prompt as needed before generating the image.
What's the difference between Forge and ComfyUI, and which one should I use?
Both have their strengths:
- Forge: Similar to Automatic1111, better for tasks like inpainting, and easier for beginners or those familiar with Photoshop-style interfaces.
- ComfyUI: More flexible and powerful, allowing for custom workflows and advanced features. Better for complex tasks or creating unique generation pipelines.
Sebastian uses both, depending on the task. For quick edits or inpainting, Forge might be preferable. For custom workflows or advanced techniques, ComfyUI is the go-to option.
How does Flux handle consistent characters across multiple generations?
Flux generally maintains character consistency better than older Stable Diffusion models. For the best results, though, explore techniques like LoRAs trained on specific characters or img2img workflows that start from a base character image. Check out our Flux guide for more details.
Are there ways to make characters look less perfect or idealized?
Yes, you can adjust your prompts to achieve less idealized characters. Try adding descriptors like "imperfect", "natural", "realistic", or specific features like "wrinkles", "blemishes", or "asymmetrical features". Experimenting with negative prompts can also help achieve more natural-looking results.
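As a purely illustrative example (these strings aren't from the webinar), the descriptors might be combined like this:

```python
# Illustrative prompt strings showing how "imperfection" descriptors can be
# folded into a prompt; adjust the subject and details to taste.
positive_prompt = (
    "candid photo of a middle-aged man, natural skin texture, wrinkles, "
    "slight facial asymmetry, unretouched, soft indoor lighting"
)
negative_prompt = "airbrushed, flawless skin, doll-like, perfectly symmetrical face"
```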
Remember, the key to mastering Flux AI is experimentation. Don't be afraid to try different workflows, prompts, and settings. With the resources provided in this guide and our Discord community, you're well-equipped to start your journey into the world of advanced AI image generation.