Would someone mind clarifying why again "deferred" shading is preferable? I understand how the modified "G-buffer" can be used to efficiently/compactly store a lot of useful data, like depth, normals, and other data needed for lighting, but I don't completely see why we would use one rendering approach versus another if the entire scene still needs to be lit and shaded. Does adding z-buffers rendered for each light save work?
@amilich Deferred shading allows every output screen pixel to be lit/shaded only once, instead of many times, which matters when objects overlap in a scene (overdraw). With forward shading, fragments that are later occluded still get fully lit before the depth test discards them; with deferred shading, the geometry pass only fills the G-buffer, and the expensive lighting pass runs once per visible pixel. The lighting cost is therefore roughly constant and predictable — proportional to screen resolution and the number of lights, and independent of the number of triangles in the scene.
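To make the cost difference concrete, here is a minimal CPU sketch (a hypothetical toy, not a real renderer — the fragment lists, `shade_ops` counter, and dictionary "buffers" are all illustrative stand-ins) that counts lighting operations under forward vs. deferred shading when fragments overlap:

```python
def forward_shading(fragments, num_lights):
    """Forward: every rasterized fragment is lit, even if a later
    fragment at a smaller depth will overwrite it."""
    shade_ops = 0
    depth = {}   # (x, y) -> nearest z seen so far
    for (x, y, z, normal) in fragments:
        shade_ops += num_lights          # lighting runs per fragment
        if depth.get((x, y), float("inf")) > z:
            depth[(x, y)] = z            # depth test after shading
    return shade_ops

def deferred_shading(fragments, num_lights):
    """Deferred: pass 1 fills a G-buffer (depth test only, no lighting);
    pass 2 lights each covered screen pixel exactly once."""
    shade_ops = 0
    gbuffer = {}                         # (x, y) -> (z, normal)
    for (x, y, z, normal) in fragments:  # geometry pass: cheap
        if (x, y) not in gbuffer or gbuffer[(x, y)][0] > z:
            gbuffer[(x, y)] = (z, normal)
    for _pixel in gbuffer:               # lighting pass: once per pixel
        shade_ops += num_lights
    return shade_ops

# Three overlapping fragments on one pixel, one on another (overdraw = 3).
frags = [(0, 0, 0.9, "n1"), (0, 0, 0.5, "n2"),
         (0, 0, 0.1, "n3"), (1, 0, 0.3, "n4")]
print(forward_shading(frags, num_lights=8))   # 32: 4 fragments x 8 lights
print(deferred_shading(frags, num_lights=8))  # 16: 2 covered pixels x 8 lights
```

Forward shading pays for every fragment generated; deferred shading pays only for pixels that end up visible, which is why its lighting cost stays flat as scene depth complexity grows.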