In Praxis: Surface Shading

In this installment of In Praxis I’m going to talk about how we shade opaque surfaces. Beware of extremely technical mumbo-jumbo.

Reset has a fully deferred renderer, which means that all information required for lighting is first rendered into a set of screen-space textures (collectively called the G-buffer), and lighting is then applied using only those textures, without re-rendering the geometry. This provides a clean separation between materials and lights and allows for a slightly simpler design than traditional forward rendering. It also makes more exotic features, such as deferred decals, easy to implement.
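
If you’ve never implemented one, the overall flow is easy to picture. Here’s a deliberately minimal sketch in C++; every type and function in it is an illustrative stub, not code from our engine:

    #include <vector>

    struct Mesh {};
    struct Light {};
    struct GBuffer {};     // the screen-space attribute textures
    struct Framebuffer {}; // the lit result

    // Pass 1 helper: rasterize one mesh, writing depth, normals, material
    // attributes etc. into the G-buffer (stubbed out here).
    static void writeGBuffer(GBuffer&, const Mesh&) {}

    // Pass 2 helper: shade the screen for one light, reading only the
    // G-buffer (stubbed out here).
    static void accumulateLighting(Framebuffer&, const GBuffer&, const Light&) {}

    static void renderFrame(const std::vector<Mesh>& opaqueMeshes,
                            const std::vector<Light>& lights,
                            Framebuffer& out)
    {
        GBuffer g;
        for (const Mesh& m : opaqueMeshes)   // geometry is rendered exactly once
            writeGBuffer(g, m);
        for (const Light& l : lights)        // lights never touch the geometry
            accumulateLighting(out, g, l);
    }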

Due to our complex shading, the G-buffer is fairly fat. It includes depth, normal, color, roughness, specular normal reflectance, specular coverage and metalness. We also have a wet layer on top of everything, and we store a separate normal, roughness and amount for it. Finally, we have a translucency amount for thin objects like leaves, where a portion of the light shining on one side actually leaks through to the other side. These are all encoded to fit into 20 bytes per sample (with 4 bits to spare!). Still, multisample antialiasing is going to be rather expensive with this setup. At e.g. 2560×1600 with 4×AA, the G-buffer alone will eat 313 MiB of video memory. Although I personally dislike the recently popular post-processing based antialiasing methods (MLAA, FXAA, SRAA, what have you), we will probably have to provide one of them as an option.
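
For anyone who wants to check the math on that: 2560 × 1600 pixels × 4 samples per pixel × 20 bytes per sample = 327,680,000 bytes = 312.5 MiB, which rounds to the figure above.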

In physically-based shading, materials that conduct electricity reflect light quite differently from materials that don’t. In plain English: metals and plastics look different. We wanted to replicate that difference. Also, although in the real world everything is at least a little bit shiny, we wanted to support materials without any kind of specular sheen at all. This allows us to give artistic emphasis to really dry and rough substances like sand.

The metalness attribute in our G-buffer blends between metallic and non-metallic shading. Metals do not have a diffuse term at all and instead reflect all light specularly. The color of the reflection is the color of the surface, except near grazing angles where the reflection is left unaffected by the surface color.

The specular coverage attribute blends between shiny and matte non-metallic shading. Matte materials have just a diffuse term. Shiny materials blend between the diffuse and specular terms according to the angle of incidence. The diffuse term has the color of the surface, while the specular term is unaffected by it.

Matte non-metal, shiny non-metal and metal materials.
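
Here is a rough sketch of how those three modes could be combined (illustrative C++ standing in for shader logic, not our actual shader; in particular, using the Fresnel factor for the metal grazing-angle blend is an assumption on top of what’s described above):

    struct Vec3 { float x, y, z; };
    static Vec3 scale(Vec3 v, float s)        { return {v.x * s, v.y * s, v.z * s}; }
    static Vec3 add(Vec3 a, Vec3 b)           { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 mul(Vec3 a, Vec3 b)           { return {a.x * b.x, a.y * b.y, a.z * b.z}; } // per channel
    static Vec3 lerp(Vec3 a, Vec3 b, float t) { return add(scale(a, 1.0f - t), scale(b, t)); }

    // 'diffuse' and 'specular' are the already-evaluated, uncolored diffuse
    // and specular terms; 'fresnel' is the Fresnel factor for the current
    // angle of incidence (see the BRDF discussion below).
    static Vec3 shade(Vec3 surfaceColor, Vec3 diffuse, Vec3 specular,
                      float fresnel, float coverage, float metalness)
    {
        Vec3 matte = mul(surfaceColor, diffuse);      // matte non-metal: diffuse only
        Vec3 shiny = lerp(matte, specular, fresnel);  // shiny non-metal: angle-dependent blend
        Vec3 nonMetal = lerp(matte, shiny, coverage);
        // Metal: no diffuse term at all; the reflection takes the surface
        // color except near grazing angles, where it goes uncolored.
        Vec3 metal = lerp(mul(surfaceColor, specular), specular, fresnel);
        return lerp(nonMetal, metal, metalness);
    }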

For the diffuse term we use the good old Oren-Nayar BRDF [1], which supports freely adjustable surface roughness. For the specular term we use the specular portion of a BRDF developed by Kelemen and Szirmay-Kalos [2]. Compared to the more popular Cook-Torrance BRDF [3], it has the special property of having a closed-form function for importance sampling. For blending between diffuse and specular (non-metals) as well as colored and uncolored specular (metals) we use Schlick’s cheap approximation of the Fresnel equations [4].
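
For reference, Schlick’s approximation is simply F(θ) = F0 + (1 − F0)(1 − cos θ)^5, with F0 the specular reflectance at normal incidence. And here is a sketch of the Kelemen/Szirmay-Kalos specular term in the form it commonly takes in real-time work (this is the formulation used e.g. in the GPU Gems 3 skin-rendering chapter, not necessarily our exact shader code, and the Beckmann distribution is an illustrative choice): the Cook-Torrance geometry factor and denominator collapse into a division by h·h, with the half-vector h = L + V left unnormalized.

    #include <cmath>

    // Schlick's Fresnel approximation [4].
    static float fresnelSchlick(float f0, float cosTheta)
    {
        return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
    }

    // Beckmann microfacet distribution; m is the roughness (RMS slope).
    static float beckmannD(float nDotH, float m)
    {
        const float pi = 3.14159265f;
        float c2 = nDotH * nDotH;
        return std::exp((c2 - 1.0f) / (m * m * c2)) / (pi * m * m * c2 * c2);
    }

    // Kelemen/Szirmay-Kalos specular term [2]: D * F / dot(h, h), where
    // hDotH = dot(h, h) for the unnormalized half-vector h = L + V.
    static float kelemenSpecular(float nDotH, float vDotH, float hDotH,
                                 float m, float f0)
    {
        return beckmannD(nDotH, m) * fresnelSchlick(f0, vDotH) / hDotH;
    }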

Speaking of importance sampling, one final significant component of our shading is environment-mapped glossy specular reflection. Especially since metals are specular-only, not taking reflections into account would just look wrong. Also, the typical method of sampling an environment map at a lower mip level to get glossy reflections may look better than nothing, but it does not really look convincing. We take multiple mipmapped samples from our environment map, spread according to the distribution of important directions as defined by the BRDF. This results in much more lifelike reflections. The basic algorithm, albeit using the Phong BRDF, is presented in the GPU Gems 3 chapter GPU-Based Importance Sampling by Colbert and Křivánek [5].
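
To illustrate the idea, here is a sketch that follows the Phong-lobe variant from [5] rather than our actual BRDF; the environment-map fetch is a stub, and the mip-selection heuristic (the solid angle a sample covers versus the solid angle of one texel) is one common choice rather than necessarily ours:

    #include <algorithm>
    #include <cmath>
    #include <random>

    struct Vec3 { float x, y, z; };
    static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
    static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 cross(Vec3 a, Vec3 b)  { return {a.y * b.z - a.z * b.y,
                                                 a.z * b.x - a.x * b.z,
                                                 a.x * b.y - a.y * b.x}; }
    static Vec3 normalize(Vec3 v)      { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

    // Stub: a real version would do a trilinear cube-map fetch at 'lod'.
    static Vec3 sampleEnvMap(Vec3 /*dir*/, float /*lod*/) { return {0.0f, 0.0f, 0.0f}; }

    // Glossy reflection via BRDF importance sampling, in the spirit of [5]:
    // draw directions from a Phong lobe of exponent 'shininess' around the
    // reflection vector R, and pick each sample's mip level from the solid
    // angle it covers relative to the solid angle of one base-level texel.
    static Vec3 glossyReflection(Vec3 R, float shininess, int numSamples, int cubeSize)
    {
        const float pi = 3.14159265f;
        Vec3 up = std::fabs(R.z) < 0.99f ? Vec3{0.0f, 0.0f, 1.0f} : Vec3{1.0f, 0.0f, 0.0f};
        Vec3 t = normalize(cross(up, R)); // orthonormal basis around R
        Vec3 b = cross(R, t);

        std::mt19937 rng(1234); // a real renderer would use a low-discrepancy sequence
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);

        Vec3 sum = {0.0f, 0.0f, 0.0f};
        for (int i = 0; i < numSamples; ++i) {
            float u1 = uni(rng), u2 = uni(rng);
            float cosT = std::pow(u1, 1.0f / (shininess + 1.0f)); // Phong lobe sample
            float sinT = std::sqrt(1.0f - cosT * cosT);
            float phi = 2.0f * pi * u2;
            Vec3 dir = add(scale(R, cosT),
                           add(scale(t, sinT * std::cos(phi)),
                               scale(b, sinT * std::sin(phi))));

            float pdf = (shininess + 1.0f) / (2.0f * pi) * std::pow(cosT, shininess);
            float omegaS = 1.0f / (numSamples * pdf);                // per-sample solid angle
            float omegaP = 4.0f * pi / (6.0f * cubeSize * cubeSize); // per-texel solid angle
            float lod = std::max(0.0f, 0.5f * std::log2(omegaS / omegaP));

            sum = add(sum, sampleEnvMap(dir, lod));
        }
        return scale(sum, 1.0f / numSamples);
    }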

That sums up our way of shading surfaces at a high level. Feel free to ask about any details. I’ll try to answer any questions, provided they’re not about any of our secret knowledge. Just kidding. As is always the case, even we are just standing on the shoulders of giants. Future posts about other parts of our tech are also being planned. Stay tuned.


[1] Nayar, S.K. and Oren, M., Generalization of the Lambertian Model and Implications for Machine Vision, International Journal of Computer Vision, Vol. 14, No. 3, pp. 227-251, April 1995.

[2] Kelemen, C. and Szirmay-Kalos, L., A Microfacet Based Coupled Specular-Matte BRDF Model with Importance Sampling, EUROGRAPHICS 2001.

[3] Cook, R. and Torrance, K., A Reflectance Model for Computer Graphics, ACM Transactions on Graphics, Vol. 1, No. 1, January 1982.

[4] Schlick, C., An Inexpensive BRDF Model for Physically-based Rendering, Computer Graphics Forum, Vol. 13, Issue 3, pp. 233-245, August 1994.

[5] Colbert, M. and Křivánek, J., GPU-Based Importance Sampling, GPU Gems 3, 2007.


Comments (35)
    • Ricky
    • June 11th, 2012 2:24pm

    Not too sure what half of this means, but it looks like it’s going to be a fantastic game! Keep up the good work.

  1. How are you handling transparent objects? (If at all)

      • Mikko Kallinen
      • June 11th, 2012 5:06pm

      Due to a miraculous coincidence we don’t have transparent surfaces like glass in Reset at all. All windows are actually one-way mirrors. The design does call for particles like smoke, and we do have some ideas for handling the lighting on them. The same ideas might work for glass too, but we’ll have to see.

      • Nice :)

        I’m asking because I’m about to plunge head first into deferred rendering myself and this is one of the biggest hurdles I’ve seen (apart from using very different materials on different objects) – I’d appreciate it if you could point me to some resources/white papers dealing with the more common problems of deferred rendering.

          • Mikko Kallinen
          • June 11th, 2012 9:15pm

          Well, there doesn’t seem to be a silver bullet for dealing with transparent stuff in deferred renderers. I guess the typical approach is to render transparent objects using forward rendering, which of course places a limit on the number of lights since you practically have to render everything in a single pass. Then there is order-independent transparency, but the full implementation with per-pixel fragment lists combined with a fat G-buffer is probably unusable due to high memory consumption and an upper bound on the total number of fragments. There are recent approximations to OIT like stochastic transparency and adaptive transparency, but they have their own issues as well. It’s a mess.

          Tile-based forward rendering seems like an interesting approach, but may be challenging to implement on pre-D3D11 hardware.

    • Memz
    • June 11th, 2012 3:28pm

    You guys must be geniuses if you wrote this engine from scratch! Seriously, KUDOS! (it inspired me to try my own lol)

    • Grzesiek
    • June 11th, 2012 5:24pm

    Deferred rendering is trendy ;)
    Do you plan Reset to be for Windows only, or for Linux and Mac too?

    • Currently Windows PC only, never know what the future might bring though :)

    • Jemlee
    • June 11th, 2012 9:29pm

    Yay a new blog post! Didn’t understand a word but sounds good!

    • Olly
    • June 12th, 2012 11:25am

    Fascinating stuff, and it’s amazing how much your engine’s shaders seem to be replicating the attributes and physical correctness of shader setups in full CG software packages, like the mental ray shaders in Maya. At what point in your workflow do you assign and tweak shaders? Is it in Blender during creation, or once you get it inside the game engine?

    Nice work,

    Olly.

      • Mikko Kallinen
      • June 12th, 2012 12:20pm

      Materials are all set up in our editor. We only use the names of materials from the FBX scene and then allow the user to bind an in-engine material to each of the names in the scene.

  2. Wow! Your demo looks very impressive.

    If that’s not a state classified secret, I’d be interested in knowing what G-Buffer organization you used to cram all that stuff in it. ;)

    Regarding the environment maps, if I understood correctly, you’re not using any preintegration at all?! Did you look at tools like AMD CubeMapGen that can produce properly filtered cubemap mip chains? How many envmap samples are you taking per fragment?

    Do you use a single envmap for the whole scene or do you have another method of choosing envmaps for objects (or even render them at runtime?) I don’t know if your game includes many indoor scenes (which generally make a single envmap look pretty damn bad.)

    Your engine tech is looking sweet. Unorthodox asset production methodology too. Looking forward to this beauty. :)

      • Mikko Kallinen
      • June 12th, 2012 9:35pm

      The G-buffer consists of five textures:

      • D24_UNORM_S8_UINT: regular depth and stencil
      • R10G10B10A2_UNORM: normal in spherical coordinates in RG, translucency in B, alpha is unused
      • R10G10B10A2_UNORM: wet normal in spherical coordinates in RG, wet amount in B, alpha is unused
      • R8G8B8A8_UNORM_SRGB: color in RGB, roughness in A
      • R8G8B8A8_UNORM: specular normal reflectance in R, specular coverage in G, metalness in B, wet roughness in A
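
      Transcribed into DXGI format enums (just for illustration, this isn’t our actual setup code), each plane is 32 bits per sample, which is where the 20 bytes per sample mentioned in the post comes from:

          #include <dxgiformat.h>

          // The five G-buffer planes; each format is 32 bits (4 bytes) per
          // sample, so one G-buffer sample costs 5 * 4 = 20 bytes.
          static const DXGI_FORMAT kGBufferFormats[5] = {
              DXGI_FORMAT_D24_UNORM_S8_UINT,   // depth + stencil
              DXGI_FORMAT_R10G10B10A2_UNORM,   // normal (RG), translucency (B)
              DXGI_FORMAT_R10G10B10A2_UNORM,   // wet normal (RG), wet amount (B)
              DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, // color (RGB), roughness (A)
              DXGI_FORMAT_R8G8B8A8_UNORM,      // spec. reflectance, coverage, metalness, wet roughness
          };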

      The environment map is indeed just a standard mipmapped cube texture. Preintegration can only support Phong-like circular highlights which don’t stretch anisotropically when viewing the surface near a grazing angle. We currently take 16 samples for the reflection of the surface proper, and another 16 samples for the wet layer (it usually has a different normal and roughness.) The distribution of samples is not very good so it might be possible to do e.g. 8+8 samples with the same quality using a better distribution.

      We have a single environment map which is rendered dynamically from the position of the camera. GTA IV used a similar trick and it works surprisingly well, at least in outdoor scenes and with the camera close to ground level.

  3. How long have you worked on this (the renderer)? Very interesting posts.

      • Mikko Kallinen
      • June 13th, 2012 10:42pm

      A little over a year now. There’s of course been code work on things not strictly related to the renderer, such as the editor, the content pipeline and even some physics and gameplay. None of it happens in parallel because even I am a mere mortal. :)

      • Cool!

        Btw, are you planning on hiring some more programmers / artists later on, or will it just be ‘your project’?

        • Unsure at the moment. It might be just the two of us for this project, but if we get to move to the next project, definitely more people on it :)

    • GraphiX
    • June 15th, 2012 12:41am

    This is a little late and off topic, but how did you handle the little splashes from raindrops in the demo video?

      • Mikko Kallinen
      • June 15th, 2012 9:31am

      They are just sprites displaying an image sequence. They do react in an approximate way to sun and sky light as well as any dynamic lights.

    • Grzesiek
    • June 21st, 2012 1:43pm

    It looks like my question got lost somewhere. I will ask again: do you plan to release Reset only for Windows, or for Linux and Mac OS X too?

      • Indloon
      • June 21st, 2012 1:48pm

      The engine must use the OpenGL rendering API if Linux/Mac OS X support is needed ;D

      • Mikko Kallinen
      • June 21st, 2012 10:20pm

      Only a Windows version is in the plans.

        • Indloon
        • June 22nd, 2012 12:29pm

        You used Direct3D for the engine?

        It would be easier if you could use OpenGL; it takes less time + everything is in shaders.

    • Only Windows in the plans at the moment.

    • the wonderer
    • June 26th, 2012 5:39pm

    Maybe a little bit technical, but I would like to know which language you are working in, and whether there is any framework you may be using, or is everything developed from scratch?

    Since in previous posts you mentioned that you don’t have much of a budget, and since it takes too many resources to actually build a AAA title, you started by throwing away all previous stuff… it would be very interesting to know more details.

    • the wonderer
    • June 27th, 2012 4:39pm

    Would you please let us know if you are using a framework or if everything started from scratch? Which language are you writing this game in? C++? C#? Just wondering.

    thanks! and good luck!

      • Mikko Kallinen
      • July 5th, 2012 2:03pm

      Our language of choice is C++. We use libraries like Boost, DirectX, FBX SDK, FreeImage, FreeType 2, HACD (Hierarchical Approximate Convex Decomposition) and PhysX.

    • Execta
    • June 29th, 2012 9:23pm

    Didn’t understand a thing but I’m happy to hear the project’s alive and doing well. Keep it up, can’t wait for some more media of this stunning game! :)

    • Erick Beebe
    • July 1st, 2012 11:39pm

    I may have missed it, but is the software you’re using to do your work off the shelf (Lightwave, Blender, etc.) or is it all custom?

      • Mikko Kallinen
      • July 5th, 2012 2:04pm

      Blender is used for producing geometry and animation. See the first installment of In Praxis for more details.

  4. This looks like a fantastic project first of all! Awesome you’re doing these posts as well, v. interesting!

    Just a quick question, are you using a DirectX implementation or an OpenGL one?

    • victor
    • October 12th, 2012 12:21am

    Hey. Really good work on Reset.
    I’m interested in what kind of maps and values you guys export from the modelling software.
    The usual things we see are diffuse, normal, specular, metallic, etc. Would you let us know which things are maps and which are computed in real time in the engine?

    • Hey. We use color, normal, metalness, smoothness, specular normal reflectance and specular coverage maps. This set forms one layer, and we use several layers to form the actual surface attributes. On most objects we have base, dirt, scratch and moss layers. On top of that we have a separate wet layer. We’ll try to find time to write a technical post on the subject.
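
      As a very rough illustration of the layering (a speculative sketch: the per-layer blend mask and all of the names here are assumptions, not our actual compositing code):

          // Speculative sketch of layered surface attributes: each layer
          // carries the map set named above plus an assumed blend mask, and
          // later layers composite over earlier ones.
          struct SurfaceAttribs {
              float color[3];
              float normal[3];
              float metalness, smoothness, specReflectance, specCoverage;
          };

          struct MaterialLayer {
              SurfaceAttribs maps; // sampled from the layer's textures
              float mask;          // assumed per-texel blend weight in [0, 1]
          };

          static float mixf(float a, float b, float t) { return a + (b - a) * t; }

          static SurfaceAttribs composite(const MaterialLayer* layers, int count)
          {
              SurfaceAttribs out = layers[0].maps; // base layer
              for (int i = 1; i < count; ++i) {    // dirt, scratch, moss, ...
                  float t = layers[i].mask;
                  for (int c = 0; c < 3; ++c) {
                      out.color[c]  = mixf(out.color[c],  layers[i].maps.color[c],  t);
                      out.normal[c] = mixf(out.normal[c], layers[i].maps.normal[c], t);
                  }
                  out.metalness       = mixf(out.metalness,       layers[i].maps.metalness,       t);
                  out.smoothness      = mixf(out.smoothness,      layers[i].maps.smoothness,      t);
                  out.specReflectance = mixf(out.specReflectance, layers[i].maps.specReflectance, t);
                  out.specCoverage    = mixf(out.specCoverage,    layers[i].maps.specCoverage,    t);
              }
              return out;
          }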

  5. Hey,

    Interesting news!

    I wonder why you don’t go for PBR, as it would massively enhance the materials and the visual style in general, making it look as realistic as possible.

    PBR also enhances the workflow tremendously, while your approach will lead to the “good old” material variations due to changing atmosphere/lighting and thus save lots of texture memory.

    tl;dr: why don’t pros like you use PBR?

  6. Just re-reading that, I realize I could be misunderstood:

    PBR saves tex-mem, not the “good old” way to do it.