CONDUIT
CONDUIT implements speculative generation for diffusion models—a technique borrowed from LLM inference that works surprisingly well for image generation.

The core idea: instead of generating one image and hoping it's good, generate 4 candidates in parallel, evaluate them at regular checkpoints (every 5 denoising steps), and kill the branches that aren't working. By step 15, you're down to 1-2 candidates. By step 25, you've committed to the best one.
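The checkpoint-and-prune loop can be sketched as follows. This is an illustrative toy, not CONDUIT's actual API: `init_fn`, `step_fn`, and `score_fn` are hypothetical callables standing in for latent initialization, one denoising step, and the discriminator, and the drop-one-branch-per-checkpoint policy is just one plausible schedule (the real pruning is score-driven).

```python
def speculative_generate(init_fn, step_fn, score_fn,
                         num_branches=4, total_steps=25, checkpoint_every=5):
    """Toy sketch of speculative generation with early pruning.

    init_fn()   -> a fresh candidate state (e.g. a noised latent)
    step_fn(s)  -> state after one denoising step
    score_fn(s) -> float, higher = better prompt alignment
    """
    branches = [init_fn() for _ in range(num_branches)]
    for step in range(1, total_steps + 1):
        branches = [step_fn(b) for b in branches]
        # At each checkpoint, drop the weakest branch (illustrative
        # policy; a score-threshold schedule would also work).
        if step % checkpoint_every == 0 and len(branches) > 1:
            branches.sort(key=score_fn, reverse=True)
            branches.pop()
    return max(branches, key=score_fn)
```

With 4 branches and a checkpoint every 5 steps, this schedule is down to 2 branches by step 10 and committed to a single one by step 15, matching the timeline described above.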

Why this works: early denoising steps establish composition and major features. If a candidate has fundamentally broken composition at step 10, it's not going to recover. Killing it early saves compute without losing quality.

The scoring function is the key innovation. We use a lightweight CLIP-based discriminator that evaluates prompt alignment at each checkpoint. It's trained to predict which candidates humans prefer—not which are "better" in some abstract sense, but which match the prompt intent.
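The trained discriminator itself is not shown here; as a rough, dependency-free illustration of the underlying signal, prompt alignment can be approximated as cosine similarity between a candidate's CLIP image embedding and the prompt's text embedding. The function names and the embedding inputs below are assumptions for the sketch, not CONDUIT's real interface.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def alignment_score(candidate_embedding, prompt_embedding):
    """Higher = candidate is closer to the prompt in embedding space.
    The real discriminator is additionally trained on human preference
    data, so it is not raw similarity; this is only the base signal."""
    return cosine_similarity(candidate_embedding, prompt_embedding)
```

Ranking candidates at a checkpoint is then `sorted(candidates, key=lambda c: alignment_score(embed(c), prompt_emb), reverse=True)` for some embedding function.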

The result: roughly 2x speedup with no quality loss on most prompts. On complex prompts with many constraints, the quality actually improves because the scoring function catches constraint violations early.

Features

  • 4-branch parallel generation with early pruning
  • Checkpoint-based evaluation every 5 steps
  • CLIP-based discriminator for prompt alignment scoring
  • Adaptive branch allocation based on prompt complexity
  • Memory-efficient candidate management
  • Compatible with ControlNet and LoRA workflows
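The adaptive branch allocation listed above could look something like the heuristic below: prompts with more distinct constraints get more branches, clamped to the configurable 2–8 range. The clause-counting proxy and the function name are invented for illustration; the actual allocation policy is not documented on this page.

```python
def adaptive_branch_count(prompt, min_branches=2, max_branches=8):
    """Crude heuristic: more clauses -> more constraints -> more branches.
    Counts comma- and 'and'-separated clauses as a proxy for prompt
    complexity (illustrative only)."""
    clauses = [c for c in prompt.replace(" and ", ",").split(",") if c.strip()]
    return max(min_branches, min(max_branches, 2 * len(clauses)))
```

A single-clause prompt gets the minimum of 2 branches, a two-clause prompt gets the default of 4, and heavily constrained prompts saturate at 8.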

Technical Details

  • Requires 2x base VRAM for parallel branches
  • Discriminator adds ~200ms overhead per checkpoint
  • Best results with 30+ step samplers (DDIM, Euler)
  • Branch count configurable (2-8, default 4)
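The per-checkpoint figures above make the worst-case discriminator overhead easy to estimate: checkpoints every 5 steps at ~200 ms each. The helper below is a hypothetical calculation, not part of CONDUIT, and assumes the discriminator runs at every checkpoint (in practice it can stop once a single branch remains).

```python
def discriminator_overhead_ms(total_steps, checkpoint_every=5,
                              per_checkpoint_ms=200):
    """Worst-case added latency from checkpoint evaluation, in ms."""
    return (total_steps // checkpoint_every) * per_checkpoint_ms
```

For a 30-step sampler this gives 6 checkpoints, or about 1.2 s of overhead, which is why the roughly 2x end-to-end speedup depends on pruning branches early enough to amortize it.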