Photoshop’s Generative Fill arrived with significant fanfare and has settled into something more nuanced: a genuinely useful tool in specific workflows, and a genuinely limited one in others. The distinction is worth mapping.
What it gets right: background extension and scene completion in images where the generated content will not be scrutinized at high resolution. Removing objects from complex backgrounds — the use case that previously required careful clone-stamp work and repeated content-aware fill passes — is now often a one-step operation that produces usable results. Sky replacement and environmental extension for social media, web, and presentation use scale well. For these applications, Generative Fill is a legitimate time-saving tool that replaces hours of manual compositing work.
What it gets wrong, or more precisely where it falls apart: any situation requiring output at high resolution with physical accuracy. Print-scale images, product photography, architectural visualization — anywhere the generated pixels will be examined closely — reveal Generative Fill’s reliance on statistical approximation rather than optical accuracy. Generated surfaces look plausible in isolation but lack consistency in lighting and texture across the fill. Generated elements at the edge of selections show tiling artifacts and texture discontinuities that are invisible at web resolution and damaging at print resolution.
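The web-versus-print gap is partly simple arithmetic: an image displayed on screen is usually downsampled, so each on-screen pixel averages several generated pixels and smooths over seams, while a 300 PPI print reproduces the generated pixels at near-full detail. A minimal sketch, assuming the conventional 300 PPI print standard; the function names and the example dimensions are hypothetical, chosen only to illustrate the scaling:

```python
def downsample_factor(source_px: int, displayed_px: int) -> float:
    """How many source pixels collapse into one displayed pixel (hypothetical helper).

    A factor well above 1 means artifacts are averaged away on screen."""
    return source_px / displayed_px


def max_print_width_inches(source_px: int, print_ppi: int = 300) -> float:
    """Largest print width before the image drops below the assumed 300 PPI standard."""
    return source_px / print_ppi


# Hypothetical example: a 4000 px wide fill result.
web_factor = downsample_factor(4000, 800)     # shown at 800 px in a web layout
print_width = max_print_width_inches(4000)    # inches of print at full 300 PPI

print(f"{web_factor:.1f}x downsampling on the web")   # 5.0x: seams are averaged out
print(f"{print_width:.1f} in max print width")        # ~13.3 in: every pixel inspected
```

The same fill that survives a 5x downsample in a browser is reproduced one-to-one on a 13-inch print, which is why artifacts surface only in the second context.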
The subject generation capability is the weakest point. Generating new subjects or significantly modifying existing ones — the use case most promoted in Adobe’s marketing — produces results that require substantial cleanup in professional contexts. The tool works best when it is filling space the eye does not examine carefully. It struggles when its output is itself the subject of examination.
Used for the right jobs, Generative Fill is a meaningful productivity tool. Used as a replacement for craft in high-stakes work, it surfaces its limitations quickly. The mistake is expecting it to perform uniformly across both contexts.