Generative Lighting
Training AI on analog photography
The challenge with AI-generated imagery isn't capability—it's control. Current diffusion models can produce stunning results, but "stunning" means nothing in production when you need consistent, directable lighting that matches a specific creative vision.
This research focuses on training custom LoRA models using medium-format film references as ground truth. The hypothesis: if we can encode the characteristics of analog lighting—the way Kodak Portra renders skin, how Fuji Velvia handles saturation in shadows—we can create AI tools that respond to lighting direction the way a skilled cinematographer would interpret notes on set.
The goal isn't to replicate film aesthetics superficially. It's to create a controllable system where "warm, soft key with cool fill" produces predictable, reproducible results across batches.
Key Points
- Custom LoRA training on a curated film reference library
- Deterministic lighting response to directional prompts
- Batch-invariant inference for reproducible lighting generation
- Integration with existing ComfyUI pipelines
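The batch-invariant idea above can be sketched as deterministic seed derivation: hash the lighting directive to a fixed seed, so the same prompt always initializes the same latent noise regardless of batch. This is a minimal illustration with a hypothetical helper name (`lighting_seed`), not the project's actual implementation:

```python
import hashlib

def lighting_seed(prompt: str, salt: str = "lighting-v1") -> int:
    """Derive a deterministic 64-bit seed from a lighting directive.

    The same prompt always maps to the same seed, so a directive like
    "warm, soft key with cool fill" initializes identical noise across
    batches. The salt lets you version the mapping without changing code.
    (Hypothetical helper for illustration.)
    """
    digest = hashlib.sha256(f"{salt}:{prompt}".encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big")

# Same directive, same seed, in any batch or session:
seed_a = lighting_seed("warm, soft key with cool fill")
seed_b = lighting_seed("warm, soft key with cool fill")
assert seed_a == seed_b

# A different directive maps to a different seed:
assert lighting_seed("hard top light, deep shadows") != seed_a
```

In practice the derived seed would feed the sampler's noise generator (e.g. a seed input on a ComfyUI KSampler node), decoupling reproducibility from batch order or worker scheduling.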