In Praxis: Surface Shading
In this installment of In Praxis I’m going to talk about how we shade opaque surfaces. Beware of extremely technical mumbo-jumbo.
Reset has a fully deferred renderer: all information required for lighting is first rendered into a set of screen-space textures (collectively called the G-buffer), and lighting is then applied using only those textures, without re-rendering the geometry. This provides a clean separation between materials and lights and allows for a slightly simpler design than traditional forward rendering. It also makes more exotic features, such as deferred decals, easy to implement.
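To make the split concrete, here is a minimal sketch of the two passes in Python. All names and data layouts are illustrative, not our actual engine code; the point is only that the lighting pass reads nothing but the G-buffer:

```python
# Minimal deferred-rendering sketch: the geometry pass fills per-pixel
# attribute buffers; the lighting pass reads only those buffers.
# (Illustrative names, simple Lambert lighting; not Reset's actual code.)

def geometry_pass(scene, width, height):
    """Rasterize once, storing surface attributes instead of final color."""
    gbuffer = {
        "albedo": [[(0.0, 0.0, 0.0)] * width for _ in range(height)],
        "normal": [[(0.0, 0.0, 1.0)] * width for _ in range(height)],
    }
    for obj in scene:
        x, y = obj["pixel"]  # stand-in for real rasterization coverage
        gbuffer["albedo"][y][x] = obj["albedo"]
        gbuffer["normal"][y][x] = obj["normal"]
    return gbuffer

def lighting_pass(gbuffer, light_dir):
    """Shade from the G-buffer alone; the geometry is never touched again."""
    out = []
    for alb_row, nrm_row in zip(gbuffer["albedo"], gbuffer["normal"]):
        row = []
        for albedo, normal in zip(alb_row, nrm_row):
            ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
            row.append(tuple(a * ndotl for a in albedo))  # Lambert diffuse
        out.append(row)
    return out
```

The key property is that `lighting_pass` never sees the scene, only the screen-space buffers, so adding a light never costs extra geometry work.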
Due to our complex shading the G-buffer is fairly fat. It includes depth, normal, color, roughness, specular normal reflectance, specular coverage and metalness. We also have a wet layer on top of everything, with its own normal, roughness and amount. Finally, we have a translucency amount for thin objects like leaves, where a portion of the light shining on one side actually leaks through to the other. These are all encoded to fit into 20 bytes per sample (with 4 bits to spare!). Still, multisample antialiasing is going to be rather expensive with this setup: at 2560×1600 with 4× antialiasing, for example, the G-buffer alone will eat 313 MiB of video memory. Although I personally dislike the recently popular post-processing based antialiasing methods (MLAA, FXAA, SRAA, what have you), we will probably have to provide one of them as an option.
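For the curious, the quoted figure is easy to verify with a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the G-buffer size quoted above.
width, height = 2560, 1600
samples_per_pixel = 4    # 4x multisample antialiasing
bytes_per_sample = 20    # the packed G-buffer layout described above

total_bytes = width * height * samples_per_pixel * bytes_per_sample
mib = total_bytes / (1024 * 1024)
print(f"{mib:.1f} MiB")  # 312.5 MiB, i.e. ~313 MiB as stated
```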
In physically-based shading, materials that conduct electricity reflect light quite differently from materials that don’t. In plain English: metals and plastics look different, and we wanted to replicate that difference. And although in the real world everything is at least a little bit shiny, we also wanted to support materials without any specular sheen at all, to allow artistic emphasis on really dry and rough substances like sand.
The metalness attribute in our G-buffer blends between metallic and non-metallic shading. Metals do not have a diffuse term at all and instead reflect all light specularly. The color of the reflection is the color of the surface, except near grazing angles where the reflection is left unaffected by the surface color.
The specular coverage attribute blends between shiny and matte non-metallic shading. Matte materials have just a diffuse term. Shiny materials blend between diffuse and specular terms according to the angle of incidence. The diffuse term has the color of the surface, while the specular term is unaffected by it.
For the diffuse term we use the good old Oren–Nayar BRDF [1], which supports freely adjustable surface roughness. For the specular term we use the specular portion of a BRDF developed by Kelemen and Szirmay-Kalos [2]. Compared to the more popular Cook–Torrance BRDF [3] it has the special property of having a closed-form function for importance sampling. For blending between diffuse and specular (non-metals), as well as colored and uncolored specular (metals), we use the good old cheap approximation of the Fresnel equation by Schlick [4].
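As a rough illustration of how these blends fit together, here is a scalar, per-channel sketch in Python. The real shading uses the full BRDFs named above; what this shows is only the blending structure, with Schlick’s F = F0 + (1 − F0)(1 − cos θ)^5 driving both the diffuse/specular mix and the colored/uncolored specular mix. The function names and parameterization are my own, not our shader code:

```python
def schlick_fresnel(f0, cos_theta):
    """Schlick's approximation: F = F0 + (1 - F0) * (1 - cos_theta)^5."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def shade(albedo, metalness, coverage, f0, cos_theta, diffuse, specular):
    """Per-channel blend sketch of the scheme described above.

    diffuse/specular are the evaluated BRDF terms for this sample.
    Non-metal: matte part is pure diffuse; shiny part mixes diffuse and
    specular by Fresnel, controlled by the specular coverage attribute.
    Metal: specular only, tinted by the surface color except near
    grazing angles, where Fresnel pushes the tint toward white.
    """
    fresnel = schlick_fresnel(f0, cos_theta)
    dielectric = (1.0 - coverage) * albedo * diffuse + coverage * (
        (1.0 - fresnel) * albedo * diffuse + fresnel * specular)
    metal = ((1.0 - fresnel) * albedo + fresnel * 1.0) * specular
    return (1.0 - metalness) * dielectric + metalness * metal
```

Note how at grazing angles (cos θ → 0) the Fresnel term goes to 1, so the metal path loses its surface tint, matching the behavior described above.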
Speaking of importance sampling, one final component of significance in our shading is environment-mapped glossy specular reflections. Especially since metals are specular-only, not taking reflections into account would just look wrong. The typical method of sampling an environment map at a lower mip level to get glossy reflections may look better than nothing, but it is not really convincing. Instead, we take multiple mipmapped samples from our environment map, spread according to the distribution of important directions as defined by the BRDF. This results in much more lifelike reflections. The basic algorithm, albeit using the Phong BRDF, is presented in the GPU Gems 3 article GPU-Based Importance Sampling by Colbert and Křivánek [5].
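A hedged sketch of the idea, using the simpler Phong lobe from the Colbert–Křivánek article rather than our actual BRDF. Names are illustrative, and a real implementation runs on the GPU and also derives a mip level from each sample’s probability density instead of averaging plain lookups:

```python
import math
import random

def sample_phong_lobe(shininess, u1, u2):
    """One importance-sampled direction in the lobe's local frame
    (z axis = reflection vector), using the standard Phong mapping:
    cos(theta) = u1^(1/(n+1)), phi = 2*pi*u2."""
    cos_theta = u1 ** (1.0 / (shininess + 1.0))
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * u2
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)

def glossy_reflection(env_lookup, shininess, num_samples=16, rng=random):
    """Average a few environment samples spread by the lobe; the real
    method also picks a mip level per sample to suppress noise."""
    total = 0.0
    for _ in range(num_samples):
        d = sample_phong_lobe(shininess, rng.random(), rng.random())
        total += env_lookup(d)
    return total / num_samples
```

Higher shininess concentrates the directions around the reflection vector, so sharp materials get mirror-like reflections while rough ones get wide, soft ones, which is exactly the behavior a single fixed mip lookup fails to capture.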
That sums up our way of shading surfaces at a high level. Feel free to ask about any details. I’ll try to answer any questions, provided they’re not about any of our secret knowledge. Just kidding. As is always the case, even we are just standing on the shoulders of giants. Future posts about other parts of our tech are also being planned. Stay tuned.
[1] Nayar, S.K. and Oren, M., Generalization of the Lambertian Model and Implications for Machine Vision, International Journal of Computer Vision, Vol. 14, No. 3, pp. 227–251, April 1995.
[2] Kelemen, C. and Szirmay-Kalos, L., A Microfacet Based Coupled Specular-Matte BRDF Model with Importance Sampling, EUROGRAPHICS 2001.
[3] Cook, R. and Torrance, K., A Reflectance Model for Computer Graphics, ACM Transactions on Graphics, Vol. 1, No. 1, January 1982.
[4] Schlick, C., An Inexpensive BRDF Model for Physically-based Rendering, Computer Graphics Forum, Vol. 13, Issue 3, pp. 233–245, August 1994.
[5] Colbert, M. and Křivánek, J., GPU-Based Importance Sampling, GPU Gems 3, 2007.