Radiosity


Radiosity is a rendering algorithm used in 3D computer graphics, and was one of the first (and for a time the most popular) global illumination methods. Radiosity was introduced in 1984 by researchers at Cornell (Goral, Torrance, Greenberg, and Battaile) in their SIGGRAPH paper "Modeling the Interaction of Light Between Diffuse Surfaces".

The basic radiosity method has its basis in the theory of thermal radiation, since it relies on computing the amount of light energy transferred between surfaces. To simplify the algorithm, radiosity assumes that this amount is constant across each surface; this means that in order to compute an accurate image, geometry in the scene description must be broken down into smaller areas, or patches, which can then be recombined for the final image.
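The subdivision step can be illustrated with a minimal sketch. The function below (a hypothetical helper, not from any particular radiosity implementation) splits a quadrilateral into an n-by-n grid of smaller patches using bilinear interpolation of its corners:

```python
def subdivide_quad(p0, p1, p2, p3, n):
    """Split a quadrilateral, given by four corner points in order,
    into an n x n grid of smaller quadrilateral patches."""
    def lerp(a, b, t):
        # Linear interpolation between two points.
        return tuple(x + (y - x) * t for x, y in zip(a, b))

    def point(u, v):
        # Bilinear interpolation across the quad at parameters (u, v).
        return lerp(lerp(p0, p1, u), lerp(p3, p2, u), v)

    patches = []
    for i in range(n):
        for j in range(n):
            u0, u1 = i / n, (i + 1) / n
            v0, v1 = j / n, (j + 1) / n
            patches.append((point(u0, v0), point(u1, v0),
                            point(u1, v1), point(u0, v1)))
    return patches
```

In practice, production radiosity systems subdivide adaptively, refining patches where the illumination gradient is steep rather than using a uniform grid.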

After this breakdown, the amount of light energy transfer can be computed using the known reflectivity of the reflecting patch, combined with the form factor of the two patches. This dimensionless quantity is computed from the geometric relationship between two patches, and can be thought of as the fraction of the total energy leaving the first patch that arrives directly at the second. Early radiosity methods used a hemicube (an imaginary cube centered upon the first surface, onto which the second surface was projected) to approximate the form factor. Other techniques have been proposed, including the use of ray tracing.
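For two patches small relative to the distance between them, the form factor can be approximated point-to-point. A minimal sketch, assuming each patch is represented by a center point, a unit normal, and an area (the function name and parameters here are illustrative, not from the original paper):

```python
import math

def point_form_factor(p1, n1, p2, n2, area2):
    """Approximate the form factor from a small patch at p1 (unit normal n1)
    to a small patch at p2 (unit normal n2, area area2) using
    F = (cos(theta1) * cos(theta2) * A2) / (pi * r^2)."""
    r_vec = tuple(b - a for a, b in zip(p1, p2))
    r2 = sum(c * c for c in r_vec)          # squared distance between patches
    r = math.sqrt(r2)
    direction = tuple(c / r for c in r_vec)  # unit vector from patch 1 to patch 2
    cos1 = sum(a * b for a, b in zip(n1, direction))
    cos2 = -sum(a * b for a, b in zip(n2, direction))
    if cos1 <= 0.0 or cos2 <= 0.0:
        return 0.0  # patches face away from each other; no direct transfer
    return cos1 * cos2 * area2 / (math.pi * r2)
```

This ignores occlusion: a full implementation must also test visibility between the patches, which is exactly what the hemicube projection (or ray casting) provides.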

Radiosity is very computationally expensive, because ideally form factors must be derived for every possible pair of patches, leading to a quadratic increase in computation with added geometry.
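The quadratic cost becomes visible once the form factors are assembled into a linear system: each patch's radiosity B_i equals its emission E_i plus its reflectivity times the light gathered from every other patch. A minimal sketch of solving this system by simple Jacobi-style iteration, assuming the full n-by-n form factor matrix F has already been computed (names are illustrative):

```python
def solve_radiosity(emission, reflectivity, F, iterations=100):
    """Iteratively solve B_i = E_i + rho_i * sum_j F_ij * B_j,
    given per-patch emission, reflectivity, and form factor matrix F."""
    n = len(emission)
    B = list(emission)  # initial guess: patches only emit
    for _ in range(iterations):
        # Each pass gathers one more bounce of light; the inner double
        # loop over all patch pairs is the source of the O(n^2) cost.
        B = [emission[i] + reflectivity[i] *
             sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B
```

Progressive refinement radiosity addresses this cost by repeatedly "shooting" light from the brightest patch instead of storing the whole matrix, producing a usable image long before full convergence.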