AI image generation has become extraordinarily good at producing images that look real. That is not the same thing as producing images that are photographically real, and the gap between those two things is worth understanding precisely.
Photorealism in the traditional sense is about the physics of light: how it wraps around surfaces, reflects, scatters, falls off with distance, interacts with atmospheric conditions. A photograph captures this because a camera is a physical instrument recording physical phenomena. An AI image generator approximates it statistically — it produces pixel arrangements that pattern-match to what real-looking images look like based on its training data. The result can be visually convincing without being optically coherent.
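One of the physical regularities a camera records for free is inverse-square falloff: illuminance from a point source drops with the square of the distance. A generator has no such constraint built in; it only reproduces falloff to the extent its training data did. A minimal sketch of the law itself (the function name and units are illustrative, not from the original text):

```python
def illuminance(intensity_cd: float, distance_m: float) -> float:
    """Illuminance (lux) at distance_m metres from a point source of
    luminous intensity intensity_cd (candela), via the inverse-square law:
    E = I / d^2."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return intensity_cd / distance_m ** 2

# Doubling the distance quarters the light -- a ratio a physical scene
# always obeys and a statistical generator only approximates.
near = illuminance(100.0, 1.0)  # 100.0 lux
far = illuminance(100.0, 2.0)   # 25.0 lux
print(near / far)               # 4.0
```

The point of the sketch is the hard constraint: in a photograph the 4:1 ratio is guaranteed by optics; in a generated image it is merely probable.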
The tells are in the physics. Light sources implied by the image but rendered inconsistently across the frame. Reflections that correspond to no identifiable source. Shadows cast at angles no sun or artificial light could produce. Specular highlights that ignore the material’s actual reflective properties. To a casual viewer these are invisible. To a photographer or cinematographer trained to read light, they register immediately.
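The shadow tell is the easiest to make concrete: under a single distant source like the sun, every shadow in a frame points in roughly the same direction. The check a trained eye performs implicitly can be sketched as a circular-statistics spread over measured shadow azimuths. This is a hypothetical illustration, not an established detection tool; the function name and the thresholds in the usage lines are assumptions.

```python
import math

def shadow_spread_deg(azimuths_deg):
    """Circular spread (degrees) of a set of shadow directions.
    Near 0 for parallel shadows, as one sun would cast; large values
    flag a frame whose shadows imply no single light source."""
    n = len(azimuths_deg)
    s = sum(math.sin(math.radians(a)) for a in azimuths_deg)
    c = sum(math.cos(math.radians(a)) for a in azimuths_deg)
    r = math.hypot(s, c) / n          # mean resultant length, in [0, 1]
    r = min(max(r, 1e-12), 1.0)       # guard the log below
    # Circular standard deviation: sqrt(-2 ln r), converted to degrees.
    return math.degrees(math.sqrt(-2.0 * math.log(r)))

consistent = shadow_spread_deg([131, 133, 130, 132])  # roughly parallel
impossible = shadow_spread_deg([40, 130, 220, 310])   # four-way shadows
print(consistent < 5, impossible > 45)  # True True
```

A real analysis would have to measure the azimuths from the image first, which is the hard part; the sketch only shows why mutually inconsistent shadows are a physical impossibility rather than a stylistic quirk.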
This is not a criticism of AI generation — it is a clarification of what it is. The images are often beautiful, often useful, often more than adequate for many applications. The point is that “photorealistic” as a descriptor is doing a specific technical job that most AI images do not fully perform, even when they pass casual inspection.
The models are improving rapidly on exactly these failure modes. Coherent lighting and physically accurate materials are active research priorities. The gap is narrowing. It has not closed.
When you need real physics, use a camera. When you need a convincing visual, the generator is usually sufficient. Knowing which situation you are in matters.