In Praxis: Atmosphere

Ok so, first the bad news. We’re not ready to show gameplay stuff yet. It’s taking longer than expected, and since it’s a key part of the upcoming crowdfunding campaign, we’re putting a lot of effort into it. But the good news is that we can distract you with pretty pictures 😀

Oh, you weren’t distracted… bummer. Anyway, besides the gameplay, people have been asking us about our volumetric lighting and atmosphere. So we’re going to cover that topic right now.

Why did we choose to elaborately simulate the atmosphere instead of just using sky domes? Two reasons. First, the game has a day and night cycle, so some animation would have been required anyway. Second, we wanted to light the game world with the atmosphere and really make the atmosphere a part of it. We knew we couldn’t spend time lighting the game world manually, so this is essentially a procedural approach to creating variable moods and atmospheres.

We made a quick video showing some of the features of the tech; you’ll find it at the end of the article. But before you scroll to it, I’ll let Mikko get serious with the technical stuff.

– Hey Mikko.
– What?
– Take it away.
– What?
– You know, the article…
– What article?
– The volumetric stuff. You know… the volumetric stuff!
– Aaaaaaa, well why didn’t you say so.

The atmosphere in Reset is divided into three distinct parts. Outermost is the clear sky that is free of any weather phenomena. Only the direction of the sun affects how it looks. The colors in the sky are determined by the scattering of light by the particles in the atmosphere.

Rayleigh scattering accounts for particles that are smaller than the wavelength of light, such as gas molecules. Mie scattering takes into account bigger particles such as water vapor and dust. As mentioned in our post In Praxis: Lighting, this is implemented using precomputed lookup tables [1], making it extremely inexpensive.
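
For the curious, the angular behaviour of the two scattering types can be sketched with their standard phase functions. This is general background rather than code from Reset; the Henyey-Greenstein function below is a common stand-in for a full Mie solution, and g = 0.76 is an assumed aerosol value:

```python
import math

def rayleigh_phase(cos_theta):
    # Standard Rayleigh phase function: symmetric, slightly favoring
    # forward and backward scattering over sideways scattering.
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta * cos_theta)

def hg_phase(cos_theta, g=0.76):
    # Henyey-Greenstein phase function, often used in place of a full
    # Mie solution; g controls how strongly light scatters forward.
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)
```

Both functions are normalized so that integrating them over the whole sphere of directions yields 1, which is what makes them usable as probability densities in a scattering integral.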

Below that is a layer of clouds. The shapes of the clouds are determined by a procedural function that combines 12 octaves of Perlin noise with thresholding and other math.

We use a tiling 3D texture with 6 octaves of noise baked into it and sample it at two different scales to get a total of 12. The sky shader marches through that volume along the view rays. The clouds are lit by the sun and the sky above them.

Fully accurate illumination would require taking into account how much light reaches each point inside a cloud from every possible direction, but that is still a bit too expensive for real-time graphics, so we approximate. We take the average radiance of the sky straight above and propagate it down through the cloud like a directional light. In addition to that we have the direct light from the sun, which is naturally handled as a directional light as well. Both lights take multiple forward scattering into account, similar to [2]. Sunlight uses something similar to Opacity Shadow Maps [3], while skylight approximates the amount of cloud between the point being shaded and the light source using a dynamically updated height map of the cloud layer. We warp both maps in creative ways to get them to cover the entire sky all the way to the horizon with decent resolution.
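
As a rough illustration of the opacity-map idea used for the sunlight: optical depth is accumulated layer by layer along the light direction, and shading a point then only needs a single lookup plus an exponential. The step size and extinction coefficient here are assumed values, and the real maps are warped 2D shadow textures rather than a single density column:

```python
import math

def opacity_layers(densities, step, sigma=0.05):
    # Accumulated optical depth per layer, marching along the light
    # direction, as an opacity-shadow-map style structure stores it.
    acc, layers = 0.0, []
    for d in densities:
        acc += d * step * sigma
        layers.append(acc)
    return layers

def shadow_transmittance(layers, i):
    # Beer-Lambert law: fraction of sunlight surviving down to layer i.
    return math.exp(-layers[i])
```

The deeper a point sits inside the cloud, the larger its accumulated optical depth and the darker it is lit, which is exactly the self-shadowing visible in the renders.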

Below the clouds is where all weather effects take place. Pure air and variable amounts of fog (Rayleigh and Mie scattering respectively) receive light from the sun through holes in the cloud cover and also from all over the environment in the form of dynamic directional ambient lighting, as described in In Praxis: Lighting.

A 3D texture is warped to fill the view frustum and dynamically updated with the density of air and fog at each texel. Each texel of the resulting volume is illuminated independently into a second 3D texture. Finally the illumination is accumulated into a third 3D texture so that each texel contains the amount of light scattered towards the camera along that direction and up to that distance.

This is equivalent to ray marching, but due to the texture being warped to fit the view frustum, the implementation is as simple as iteratively summing each Z-slice of the 3D texture with the previous slice. A fourth 3D texture with the same mapping contains the amount of light reaching the camera after removing absorbed and out-scattered light. That’s a lot of 3D textures, but they have extremely low resolution. The scattering gets applied on top of the full resolution geometry by sampling the appropriate 3D textures at the screen position and depth of the pixel.
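
The slice-by-slice accumulation can be sketched as follows. The per-slice lighting here is a deliberately crude stand-in (in the real pipeline each texel is lit by sun and ambient in a separate pass); only the front-to-back accumulation recurrence reflects the technique described above:

```python
import math

def accumulate_slices(density, step, sigma_t=0.02, sun=1.0):
    # Front-to-back accumulation over the Z slices of a frustum-aligned
    # volume: each slice is summed with the previous one, which is
    # equivalent to ray marching along every view ray at once.
    n = len(density)
    inscatter = [sun * d * step for d in density]               # toy per-slice lighting
    transmit = [math.exp(-sigma_t * d * step) for d in density]  # per-slice transmittance
    acc_S, acc_T = [0.0] * n, [0.0] * n
    prev_S, prev_T = 0.0, 1.0
    for i in range(n):
        acc_S[i] = prev_S + inscatter[i] * prev_T  # light scattered toward camera up to slice i
        acc_T[i] = prev_T * transmit[i]            # light surviving from slice i to the camera
        prev_S, prev_T = acc_S[i], acc_T[i]
    return acc_S, acc_T
```

`acc_S` corresponds to the third 3D texture (accumulated in-scattering) and `acc_T` to the fourth (transmittance); a pixel at a given depth samples both and composites them over the geometry.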

And here is the video. The tech is still work in progress with known issues; for example, the volume shadow resolution (aliasing) needs some work. The tech is designed to work well close to sea level, but in the video the camera is lifted almost to cloud level to show you how the volume works. The tech begins to break down the higher the camera is lifted.

[1] Bruneton, E. and Neyret, F., Precomputed Atmospheric Scattering, Computer Graphics Forum, Volume 27, Issue 4, pages 1079–1086, June 2008.

[2] Harris, M. and Lastra, A., Real-Time Cloud Rendering, Computer Graphics Forum (Eurographics 2001 Proceedings), 20(3):76–84, September 2001.

[3] Kim, T.-Y. and Neumann, U., Opacity Shadow Maps, Proceedings of the 12th Eurographics Workshop on Rendering Techniques, pages 177–182, 2001.


  • Comments (66)
    • Fr3ak
    • December 19th, 2012 1:43am

    OMG, these are the most realistic artificially and digitally created clouds I have ever seen – epic! Can’t wait for your next update and of course the final game. People, please spread this all over the interwebz. Legendary clouds are legendary!

    • Carlos Ballesteros
    • December 19th, 2012 1:53am

    Just beautiful!
    The engine is looking really good. Seems very promising, keep up the good work guys.

    • Samout
    • December 19th, 2012 1:57am

    I would like to have these in 1920×1080 so I could use them as wallpapers.

  1. Simply amazing! After this game is released I’m definitely going to do a review of it as soon as I get a computer capable of running it on full graphics!

    • heckuva
    • December 19th, 2012 2:08am

    The atmosphere and weather are so realistic. Can’t wait for you to present a main game mechanic.

    • Jochem
    • December 19th, 2012 2:36am

    Beats CryEngine on atmospheric lighting imi!
    Great work, your distraction has worked 😛

    Thanks for the tech explanation, I really like reading it!

    • Jochem
    • December 19th, 2012 2:37am

    Imo* I meant imo, stupid iOS keyboard at midnight …

    • TheAppleFreak
    • December 19th, 2012 7:42am

    I am absolutely blown away by what you guys are able to do with this game. This lighting system looks absolutely insane.

    On a related question, does this game use DirectX or OpenGL?

      • Mikko Kallinen
      • December 19th, 2012 6:45pm

      Thanks for the praise! Reset uses the DirectX 11 API and requires DirectX 10.0 or higher hardware.

    • Husam
    • December 19th, 2012 2:28pm

    It’s amazing how people can simulate nature so closely with just algorithms and numbers. And all that done in real-time!

    I’m blown away!

      • Rani
      • December 20th, 2012 6:53pm

      Now let’s hope that when this game comes out it will set the standard for gfx, and no more skyboxes. What will the game be about?

      Amazing work!

    • y3o
    • December 19th, 2012 6:05pm

    That’s just stunning. Visually, the game will be one of the best I’ve ever seen. I just hope it’ll run on lower-end GPUs, too.

    • BoyC
    • December 19th, 2012 8:29pm

    Very nice insights, I love reading these 🙂

    What kind of resolutions do you mean by extremely low for the 3D textures? Also I’m guessing their x-y-z resolutions aren’t uniform?

      • Mikko Kallinen
      • December 19th, 2012 8:50pm

      Currently 160×90×128. That’s 160×90 in screen resolution and 128 slices into the screen, so pretty low. Then again, it’s over 1.8 megatexels…

  2. Congrats. Very good results and insight info. I am trying to do the same thing with Quest3D. I have not started on the cloud part yet, but the day/night cycle is almost done. So I have a few questions, if you don’t mind answering.

    1. Since I’m on D3D9, what are the benefits of using D3D11 instead of D3D9 for this part (atmospheric lighting and clouds)?

    2. Are you sampling and rendering the clouds at half or quarter resolution and then upsampling (due to the heavy use of transparency)?

    3. What is your general solution for handling lots of post-process RTTs? I am using RGBM.

    Kind regards,
    Ali Rahimi.

      • Mikko Kallinen
      • December 20th, 2012 12:50pm

      1. A much higher maximum shader length comes to mind first. Being able to have 3D texture render targets is also nice. It’s been so long since I last used D3D9 that I don’t remember all the limitations.

      2. Clouds still have full resolution. I have some ideas for dropping the lighting resolution without losing the detailed edges but nothing concrete yet.

      3. We just go with 16 bit floating point mostly. The temporary buffers for the FFT bloom and our depth of field effect are 32 bit floating point. I wonder if there was some way to make them work with 16 bit…

    • Eduardo
    • December 20th, 2012 3:23pm

    Freaking amazing! Of course there’s still a lot to do, but it’s already good work!! Congratulations!!

  3. Thanks a lot Mikko. I still don’t get it. How do you handle transparency? It covers the whole screen, and since it’s all volume slices we are dealing with a huge number of transparency layers, which is very expensive in D3D9. So is there any trick for that, like using jittering or clipping instead of transparency, or is it related to D3D11? Also, it would be great if I could see your mesh for these volume textures.

    Btw our engine in progress.

    Thanks in advance.

      • Mikko Kallinen
      • December 21st, 2012 12:21am

      If by transparency you mean blending, the sky does not use it. It’s a single pixel shader pass on a rough skydome mesh and the output is both the sky above the clouds and the clouds themselves.

  4. Congratulations! Looks great!
    I see you’re having some problems with shadows, as do I (my tech is very similar to yours):
    Do you mind if I contact you so we can discuss this, or would you rather not? ^^
    Also, are you rendering the clouds in full resolution?
    What’s the framerate and what’s your hardware?


    . Pom .

      • Mikko Kallinen
      • December 29th, 2012 11:29pm

      Hi there and thanks! Your stuff does look similar. I wouldn’t mind discussing the differences and similarities between our methods at all.

      We are currently rendering the clouds in full resolution. I’m hoping to change that at some point. We’re not ready to discuss framerates since everything is work in progress. I can reveal that my development system has a GTX 470, so we’re not completely speed blind. 😉

    • bbtt
    • December 20th, 2012 8:06pm

    Hey guys, those are really beautiful clouds, you are doing a good job!
    I have a question: “Both lights take into account multiple forward scattering similar to [2]”. So are you using precalculated multiple forward scattering? If it’s realtime, can you give us more detail about that? Thanks.

    • Ari Rahikkala
    • December 21st, 2012 4:12am

    This looks awesome, but: have you considered using 4D noise and updating the cloud texture to get clouds that change over time and don’t just get translated across the sky?

      • Mikko Kallinen
      • December 29th, 2012 11:23pm

      It’s not really visible in the video but we can scroll the noise vertically. The cloud layer is shallow enough that with a suitably slow speed it looks like the clouds are changing shape on the fly. 4D noise is rather expensive, especially considering we can now have multiple octaves of 3D noise baked into a 3D texture…

    • c45j
    • December 21st, 2012 5:14am

    Who is doing the 3D character animation in this game?

    • Dejay Clayton
    • December 21st, 2012 6:48am

    Found a pic tonight that you could use to validate some of the assumptions behind your technical approach:

    • jake
    • December 21st, 2012 9:28am
      • Mikko Kallinen
      • December 29th, 2012 11:19pm

      We do get the same artifacts with point lights and smooth surfaces as in that UE4 presentation. We don’t use Phong though, so either we have to figure out how to approximate area lights with our BRDF or switch to that simpler approximation…

    • Vincent D.
    • December 22nd, 2012 12:12am

    Just.. Wow…

    • Sebb
    • December 22nd, 2012 3:28am

    Hi Mikko,

    Very very impressive job!
    I am doing (trying to do) almost the same kind of atmospheric rendering, but your results are far better than mine. (I think yours could not be better, to be honest) 🙂

    I don’t understand what you store in the 4th 3D texture. Doesn’t the result of the computation in texture 2 already handle absorbed and out-scattered light for each texel? Does the accumulated light need to be processed that way again?

    I am not sure I understand why you need a first texture to store air and fog density. Why don’t you do your first lighting computation on these values before storing anything?

    Thanks for sharing all these beautiful things!

      • Mikko Kallinen
      • January 4th, 2013 10:16am


      The third texture contains in-scattering and the fourth the transmittance. The rendered geometry is multiplied with the value from the transmittance texture and the in-scattering is added on top.

      The densities are stored separately to decouple density computation from lighting, so it’s pretty much like deferred rendering for volumetrics. We can splat density from multiple localized sources without caring about lights and then light the whole thing without caring about where exactly the densities came from.
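
A sketch of the compositing step described above, in Python for clarity (assuming a scalar transmittance per pixel; the actual textures may well store per-channel values):

```python
def composite(scene_rgb, transmittance, inscatter):
    # Apply the volumetrics on top of the full-resolution geometry:
    # the scene light that survives the medium, plus the light
    # scattered into the view ray in front of the surface.
    return tuple(c * transmittance + s for c, s in zip(scene_rgb, inscatter))
```

With full transmittance and no in-scattering the scene passes through unchanged, which is the sanity check any such compositing step should satisfy.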

        • Sebb
        • January 6th, 2013 2:49am

        Thank you for your answer.

        So in the 3rd texture, you have your inscattered light integrated over the [camera, texel] segment, ok.
        But what do you store in the second texture, in “Bruneton’s language”? 🙂
        As a matter of fact, what we precompute in his paper is the integrated inscattered light for the segment [x, x0].
        In the article you said “each texel is illuminated independently”. How do you do that? Is that just the inscattered part of the illumination?

        Thank you again for sharing.

          • Mikko Kallinen
          • January 14th, 2013 11:12am

          Yes, the second texture contains the inscattered light for each particular texel independently of all others.
          The third and fourth texture (accumulated inscattering and transmittance respectively) are computed together from the first two textures as follows:

          accumulated_transmittance[i] = accumulated_transmittance[i-1] * transmittance[i]
          accumulated_inscattering[i] = accumulated_inscattering[i-1] + inscattering[i] * accumulated_transmittance[i-1]

          where i is the Z slice being computed.

            • Sebb
            • January 15th, 2013 3:32pm

            So you finally have 5 textures?
            – air/fog density
            – transmittance
            – accumulated_transmittance
            – inscattering
            – accumulated_inscattering

            I suppose the intensity is multiplied this way:
            acc_inscattering[i] = acc_inscattering[i-1] + inscattering[i] * acc_transmittance[i-1] (<- "-1"? really?) * intensity[i]

            I don’t know if you use a texture for the transmittance or the analytic formula, but the texture we precompute, as Eric Bruneton did in his paper, represents the transmittance from a point to infinity along the viewing vector.
            So, the transmittance for a segment (between two slice texels) would be:
            T[i-1, i] = T[i-1] / T[i]
            And the accumulated transmittance for slice i would simply be:
            T[i] = transmittanceTexture.sample()

            It would explain why you don’t really need 2 textures for transmittance but just one: the accumulated one (= the precomputed one).

            But perhaps I am wrong.
            I don’t really understand where your nice reddish sunlight color at sunset/sunrise comes from, while mine is just orange. Perhaps I don’t understand your formula well, or perhaps
            acc_transmittance[i] = transmittanceTexture.sample(i) * acc_transmittance[i-1], and the orange would become red through accumulation over distance. But that is not the way I understand the math behind it. 🙁

            Thank you again.

            • Mikko Kallinen
            • January 17th, 2013 4:56pm

            Transmittance can easily be computed on the fly during accumulation from the density texture so it doesn’t need a separate texture.

            The -1 is correct because we accumulate front-to-back.

            • Sebb
            • January 16th, 2013 10:43am

            Two other small questions:

            – Do you “slice” your frustum linearly in Z for your 3D texture, or do you use the inverse transforms for every interpolated point?

            – How do you accumulate your 3D texture slices efficiently? Pixel or compute shader?

            • Mikko Kallinen
            • January 17th, 2013 4:51pm

            Linear isn’t good, because then slices near you are too long and slices far away are unnecessarily short. Using the inverse of the projection matrix isn’t good either, because that distribution tends to put way too much detail near the camera and not enough far away. Logarithmic distribution is best as it exactly matches the perspective foreshortening.

            Currently I have a ps_4_0 shader that accumulates 8 slices by rendering to 8 render targets at the same time. A ps_5_0 or cs_5_0 shader could just accumulate all slices in one pass, writing the results to a RWTexture3D.
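
The logarithmic slice distribution Mikko describes can be sketched as follows (the near/far planes are whatever range the volume covers; the defining property is a constant depth ratio per slice):

```python
def slice_depths(z_near, z_far, n):
    # Logarithmically distributed slice boundaries: each slice is a
    # constant factor thicker than the previous one, matching
    # perspective foreshortening (roughly constant on-screen size).
    r = (z_far / z_near) ** (1.0 / n)
    return [z_near * r ** i for i in range(n + 1)]
```

For example, `slice_depths(1.0, 1000.0, 128)` spans three decades of depth with 128 slices while keeping the relative thickness of every slice identical.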

            • Sebb
            • January 16th, 2013 1:24pm

            I think my problem is that I think in terms of the S table of Bruneton’s paper. As you do your own discrete integration of the inscattering, I think I should do my calculations with a J table (but we only calculate a temporary DeltaJ).

            When you speak of “the inscattered light for each particular texel independently of all others”, I think you don’t mean the precomputed table S, which is already an integration. You mean “the radiance J of light scattered at y”. Right?

            Inscattering is not a point or texel thing, it’s a path or segment thing. That’s why I don’t understand everything you are kindly trying to explain to me 🙂

            But I WANT to understand! I’ve spent so much time on this algo…

            • Mikko Kallinen
            • January 17th, 2013 5:01pm

            Sorry, I don’t really speak Bruneton. It’s been a long time since I implemented it.

            Inscattering is indeed not a point thing. The texels in a 3D texture are 3D volumes themselves. In this case when I say “texel” I mean the segment from the front of the texel to the back of the texel.

            • Sebb
            • January 16th, 2013 3:13pm

            After re-reading Bruneton’s article, I (re)discovered he gives a formula to compute just a slice (~a texel) of inscattering from S (= accumulatedInscattering) with:

            inscattering[i] = T(x, xi) * AccInscattering[i] - T(x, xi+1) * AccInscattering[i+1]

            So I suppose we can modulate this value by the air density for this texel and re-accumulate the slices into the final texture.
            But I can’t see why we should multiply each inscattering slice by the accumulatedTransmittance again!? I think it is wrong, but I also think this is why you get your nice reddish color (orange × orange = reddish). So it could become the “right” formula 🙂

            I’ll try this tonight… and I’ll stop spamming (for the moment) 🙂

            • Sebb
            • January 20th, 2013 7:32pm

            Thanks Mikko for all your answers.
            I am starting to have something acceptable, but I still have a lot of work! 🙂

            A last question on this subject: how do you handle the precision issue near the horizon (viewdir and camerapos ~= 0)?
            Because we compute “texels” of inscattering and accumulate them instead of doing a direct computation for the entire segment [eye, target], the artefacts are far more present. 🙁

            And thank you again!

    • Alex
    • December 23rd, 2012 5:23am

    I agree! It does look much better than CryEngine… I have never seen such a realistic mixture of sunlight and clouds creating shadows and lighting. Looks very nice! As a game development student, I get really inspired by this 😛

    • Nicolas
    • December 24th, 2012 11:11am

    I simply can’t imagine all the scientific concepts that stand behind the picture. Huge.

    • StarGeezer
    • January 11th, 2013 9:27pm

    Just found some 1997-ish quote:

    “Here are a few of the ambitious ideas I have for games to be developed in the hopefully not so far future…

    3D action adventure involving tactical thinking. Would be best based on a Quake-style full 3d engine, which is actually possible on the currently highest end Amigas. Gameplay would mix parts of Marathon, System Shock and the like…
    Need for Speed clone. Also possible on high end machines.
    3D space combat simulator in the vein of X-Wing, but with full texturemapping and Gouraud-shading. Quite possible of course.”

    Good to see you guys stay focused on this project!

    • Ferro
    • January 13th, 2013 5:03pm

    Very good looking engine, beats CryEngine. I have one question.

    Is there any real-time volumetric cloud shadowing? It’s amazing to have a cloud darken your screen sometimes 🙂

    Anyway, keep up the good work 😀

      • Mikko Kallinen
      • January 14th, 2013 10:57am

      If you look closely at the video, you may see that we do indeed have real-time volumetric cloud shadows. It’s precisely why the lighting behaves so believably that a huge cloud will sometimes seem to darken everything. 🙂

        • Ferro
        • January 14th, 2013 2:33pm

        OMG, this is going to be good! I really get inspired by this! You beat Frostbite and rival CryEngine!

        I have another question: have you already improved the game physics?

        And I have two little suggestions for graphics improvements. One: could you improve the water by adding waves on seas, rivers and shores? Make the game’s water wilder! And could you improve the atmosphere by adding winds that trigger randomly (like Oblivion’s Gamebryo engine did), but more interestingly, with leaves or other entities blown about a bit?

        Anyway, keep up the awesome work guys, really love it.

    • Gruubal
    • January 15th, 2013 12:51am

    Any chance of you guys uploading the high-quality version of this to

    • epsilon
    • February 2nd, 2013 8:36pm

    Absolutely impressive work:
    Your engine does in realtime what the ‘offline’ physically based volumetric renderer I am currently developing takes minutes per frame to do on a 16-core workstation… kudos^2 🙂

    I have one technical question if you don’t mind:
    How do you render that nice sun-ray/glare effect, as seen for example in your ‘vanilla_morning’ render above?

    Normally I am a bit opposed to such effects, as they are often overdone and too “cheesy” for my taste.
    But I like the way the sun glare/rays are beautifully natural in your images… adding just the right amount of visual spice 🙂

    Do you use a texture-based approach or a more sophisticated physically/optics-based algorithm?
    Do you also simulate the subtle rotating motion of the sun’s glare/rays during camera motion?

      • Mikko Kallinen
      • February 6th, 2013 9:58am

      Thanks! The glare effect is implemented using FFT. We have an image representing the glare we want and we transform it into frequency domain using FFT. This is only done once, so the glare pattern is static. Then each frame we transform the rendered image similarly using FFT. The result is then multiplied with the frequency domain glare pattern and that result transformed back using inverse FFT and behold: every pixel in the frame has a glare stamped around it!
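
The frequency-domain stamping Mikko describes is ordinary FFT convolution; a minimal sketch, ignoring the padding/windowing a real implementation needs to avoid wraparound at the screen edges:

```python
import numpy as np

def precompute_glare(kernel):
    # FFT of the glare pattern, done once; the pattern stays static.
    # ifftshift moves the kernel's center to the (0, 0) texel so the
    # glare stamps around each pixel instead of offsetting it.
    return np.fft.rfft2(np.fft.ifftshift(kernel))

def apply_glare(image, kernel_fft):
    # Per frame: transform the rendered image, multiply with the glare
    # spectrum, transform back. Multiplication in the frequency domain
    # is (circular) convolution in the spatial domain, so every bright
    # pixel gets a copy of the glare pattern stamped around it.
    return np.fft.irfft2(np.fft.rfft2(image) * kernel_fft, s=image.shape)
```

Convolving a single bright pixel with the kernel reproduces the kernel centered on that pixel, which is exactly the "glare stamped around every pixel" behaviour described above.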

    • Xekrix
    • February 14th, 2013 11:44pm

    Man, I’m living for these days. Give me this real-time tech in an RPG!!!

    • Aaa
    • May 2nd, 2013 11:16pm


    First of all, excellent work.

    I am trying a similar thing in my free time and I have a question about the texture warping for the cloud pass: “We warp both maps in creative ways to get them to cover the entire sky all the way to the horizon with decent resolution.”
    How do you prevent artifacts when the camera is moving, if the light volume differs from one frame to the next?
    Or maybe my version is too naive (computing the projection of the frustum boundaries onto the cloud boundaries along the opposite light direction and taking its bounding box).


    • Noffski
    • June 12th, 2013 1:12am

    Absolutely amazing work. I am just starting computer science at my university, and after reading some of the comments here, intimidating as they are, I can see just how wonderful programming can be 🙂

    I hope to be as creative and algorithm-smart as you two someday.

  5. Hi,

    This is amazing!
    I am working on something very similar and I am having trouble with the performance.
    I am curious how thick your cloud layer is and what the sample length is. Does your sample size increase linearly as it gets further away from the camera?


    P.S. Here is a link to my videos in case you are interested:

      • Mikko Kallinen
      • November 11th, 2013 12:33pm

      Thanks! Looks like currently our clouds range from 1000m to 3300m in altitude. The shader takes 32 samples along each ray segment within the cloud volume. Those samples are indeed logarithmically distributed, so that they get farther apart in the distance.

      • Thanks for the quick answer. I have one more question though.

        Do you employ any other optimization technique (like empty space/homogeneous space skipping) when marching the ray through the cloud volume or when rendering the opacity depth maps?

    • Brian Ford
    • December 24th, 2013 5:17pm

    Just curious, could a plane or other object in the scene properly fly through the clouds using this technique? I guess that would be the same as a mountain penetrating a low hanging cloud deck.

  6. I thought I recognized the look of the results from Eric Bruneton and Fabrice Neyret’s paper. Good on you for citing your sources of information during development. Very few people do that and I applaud you for it.

    • justin
    • April 25th, 2018 11:51pm

    I was wondering if you had any updates on this – I’m beginning to research how to make volumetric clouds and your implementation seems really slick. If I asked specific reproduction questions, would you be comfortable with answering?

