The 4 parameters that control every AI generation
Four parameters — guidance scale, inference steps, seed, and denoising strength — shape every AI generation more than anything else. Understanding them saves you time, money, and wasted renders.
When to use this in your workflow
- Dialing in quality vs. cost: Inference steps and guidance scale let you find the sweet spot between output quality and generation cost.
- Reproducing a good result: Save the seed when you get something you like — same seed + same settings = same output.
- Getting controlled variations: Change only the seed to see different interpretations of the same prompt, faster and cheaper than rewriting.
- Preserving your reference in edits: Denoising strength controls how much an image-to-video or image-to-image model changes your source material.
How it works in modelBridge
Guidance scale
What it does: Controls how literally the AI follows your prompt. Think of it like directing an actor — low values give creative freedom, high values demand rigid adherence.
- Low (1–3): Loose interpretation. Good for abstract or experimental visuals.
- Medium (5–7): Balanced. Follows your prompt closely but still makes natural choices. Sweet spot for most work.
- High (10+): Rigid. Often leads to oversaturated colors, harsh artifacts, and unnatural compositions.
Recommended settings:
- Start with the model’s default (usually 5–7)
- Lower it if outputs feel too literal, overcooked, or oversaturated
- Raise it slightly if outputs feel too random or ignore your prompt
Common mistakes:
- Cranking it to 15+ thinking it means “follow my prompt better.” Above 10 usually causes artifacts.
- Adjusting guidance scale (often labeled CFG, for classifier-free guidance) when the real problem is a vague prompt. Fix the prompt first.
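Under the hood, most diffusion models implement this knob as classifier-free guidance: the final noise prediction is the unconditioned prediction pushed toward the prompt-conditioned one, scaled by the guidance value. A toy pure-Python sketch of that formula (illustrative numbers only, not modelBridge internals):

```python
def apply_guidance(uncond_pred, cond_pred, guidance_scale):
    """Classifier-free guidance: push each value toward the
    prompt-conditioned direction by guidance_scale."""
    return [u + guidance_scale * (c - u)
            for u, c in zip(uncond_pred, cond_pred)]

# Toy noise predictions with and without the prompt.
uncond = [0.10, 0.20, 0.30]
cond   = [0.12, 0.26, 0.42]

low  = apply_guidance(uncond, cond, 1.5)   # gentle push: loose interpretation
high = apply_guidance(uncond, cond, 12.0)  # values overshoot the valid range
```

The overshoot at high scales is exactly the oversaturation and harsh artifacts described above: the prediction is pushed past what the model can render cleanly.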
Inference steps
What it does: Controls how long the AI spends refining its output. Think of it like render passes — more passes means more detail, but with diminishing returns.
Recommended settings:
- 20–30 steps for most models. This is where quality plateaus.
- 1–4 steps for Flux Schnell and other fast/turbo models (they’re designed for it).
- Start with the default (usually 20–28) and only increase if output looks rough.
Common mistakes:
- Setting steps to 50–80 thinking more = better. Quality plateaus at 30 for most models. You’re paying more for no visible improvement.
- Not checking the cost badge — more steps on some models means higher cost.
Rule of thumb: If you can’t see the difference between 25 steps and 50 steps, you’re wasting money on those extra 25.
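The plateau is easy to see with a toy model: if each step removes roughly a fixed fraction of the remaining noise, the gains shrink geometrically while cost grows linearly with step count. This is an illustration of the shape of the curve, not any model's actual schedule:

```python
def remaining_noise(steps, removal_rate=0.25):
    """Toy model: each step removes a fixed fraction of remaining noise,
    so improvement per step shrinks geometrically (illustration only)."""
    noise = 1.0
    for _ in range(steps):
        noise *= (1.0 - removal_rate)
    return noise

for steps in (10, 25, 50):
    # Residual noise falls off fast; cost keeps rising linearly.
    print(f"{steps} steps: noise={remaining_noise(steps):.6f}, cost ~{steps}x")
```

By 25 steps the residual is already a rounding error; doubling to 50 doubles the cost for a change you cannot see.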
Seed
What it does: Every generation uses a random number (the seed) as its starting point. Same seed + same settings = same result, every time.
Two ways to use it:
- Reproducing results: When you get a generation you like, note the seed. Regenerate the exact same output later, or use it as a starting point with small prompt adjustments.
- Getting variations: Change only the seed (keep everything else identical) to see different interpretations of the same prompt.
Recommended settings:
- -1 (or “random”): The AI picks a new starting point each time. Use this when exploring.
- Specific number: Set one when you want consistency or are iterating on a good result.
Common mistakes:
- Ignoring the seed entirely — then you can never reproduce a good result.
- Changing multiple parameters at once instead of just the seed when you want variations.
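The reproducibility guarantee can be demonstrated with Python's standard `random` module standing in for a generation call; `toy_generate` is a hypothetical stand-in, not the modelBridge API:

```python
import random

def toy_generate(prompt, seed):
    """Stand-in for a generation call: fully deterministic
    given the same (prompt, seed) pair."""
    rng = random.Random(f"{prompt}|{seed}")
    return [round(rng.random(), 3) for _ in range(4)]

a = toy_generate("neon city at night", seed=42)
b = toy_generate("neon city at night", seed=42)  # same seed: reproduced exactly
c = toy_generate("neon city at night", seed=43)  # new seed only: a variation
assert a == b
assert a != c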
Denoising strength
What it does: Only appears on image-to-video and image-to-image models. Controls how much the AI changes your source material.
- 0.0–0.2: Almost identical to your input. Subtle refinements only.
- 0.3–0.5: Noticeable changes while keeping composition and major elements. Good for style transfers and light edits.
- 0.5–0.7: Significant changes. The AI uses your input as a rough guide.
- 0.8–1.0: Your input is mostly ignored. Nearly a fresh generation.
Recommended settings:
- 0.3–0.7 for most editing work
- 0.4 as a starting point for style transfer — adjust by 0.1 in either direction
- Below 0.3 is usually too subtle to notice
- Above 0.7 starts ignoring your reference
Common mistakes:
- Setting strength to 0.9+ and wondering why the output doesn’t match your input. At that level, your reference is basically ignored.
- Not adjusting strength when switching between different types of edits (light touch-up vs. major restyle).
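Conceptually, strength decides how far the output can drift from your reference. Real pipelines implement this by re-noising the source image proportionally to strength before denoising, but a crude linear blend shows the same fidelity trade-off (toy pixel values, not an actual pipeline):

```python
def apply_strength(source, generated, strength):
    """Crude illustration: blend source values toward the model's output.
    0.0 keeps the input untouched; 1.0 ignores it entirely."""
    return [(1.0 - strength) * s + strength * g
            for s, g in zip(source, generated)]

source    = [0.2, 0.5, 0.8]   # your reference image (toy pixels)
generated = [0.9, 0.1, 0.4]   # what the model would draw from scratch

subtle  = apply_strength(source, generated, 0.1)   # barely changed
restyle = apply_strength(source, generated, 0.4)   # style-transfer zone
ignored = apply_strength(source, generated, 0.95)  # reference mostly gone
```

At 0.95 the output sits almost entirely on the model's side of the blend, which is why high strength "doesn't match your input."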
All four parameters appear in the advanced settings panel when you expand a model’s options. modelBridge renders them as sliders with the model’s recommended range already set.
Not every model exposes all four. Text-to-video models typically show steps, guidance scale, and seed. Image-to-video models add denoising strength. Some newer models hide guidance scale entirely because they handle it internally.
Exploration workflow: Start with defaults and a random seed. Once you find a direction you like, lock the seed and fine-tune guidance scale and steps. For image-based workflows, dial in denoising strength last.
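The exploration workflow above can be sketched in a few lines. `generate` here is a hypothetical, deterministic stand-in, not the real modelBridge client; the parameter names mirror the sliders described in this guide:

```python
import random

def generate(prompt, seed, guidance_scale=6.0, steps=24):
    """Hypothetical stand-in for a modelBridge generation call:
    deterministic for a fixed seed + settings."""
    rng = random.Random(f"{prompt}|{seed}|{guidance_scale}|{steps}")
    return {"seed": seed, "frames": [round(rng.random(), 3) for _ in range(3)]}

# 1. Explore: random seeds, default settings.
candidates = [generate("glass skyline at dusk", random.randrange(2**32))
              for _ in range(4)]

# 2. Lock: keep the seed of the result you like best.
best_seed = candidates[0]["seed"]

# 3. Fine-tune: same seed, adjust guidance scale and steps one at a time.
refined = generate("glass skyline at dusk", best_seed,
                   guidance_scale=5.0, steps=28)
```

Changing one setting per regeneration is what makes the locked seed useful: any difference in the output is attributable to that one change.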
See also: Prompting for video editors for how your prompt interacts with these parameters.
Quick answers
Should I always max out inference steps? No. Most models plateau at 20–30 steps. Higher step counts cost more and take longer with little visible improvement. Start with the default.
What guidance scale should I use? Start with the model’s default (usually 5–7). Only adjust if outputs feel too literal (lower it) or too random (raise it slightly).
How do I reproduce a result I liked? Use the same seed, same prompt, and same settings. modelBridge shows the seed used for each generation in the output details.
What denoising strength for style transfer? Start at 0.4 and adjust by 0.1 in either direction. Below 0.3 is too subtle. Above 0.7 starts ignoring your reference.
Do all models use these parameters? Most do, but not all. Some newer models manage guidance scale internally. modelBridge only shows parameters that the model actually supports.