A conceptually simple(r) way to derive exponential shadow maps + sample code

A few months ago, while working on an improved version of exponential shadow maps, I stumbled on a new way to derive the ESM equations which is simpler and more intuitive than previous attempts.

There is no need to invoke Markov’s inequality, higher order moments or convolutions. In fact, all we have to do is write the basic percentage closer filtering formula for n equally weighted occluders o_i and a receiver r:

\displaystyle\frac{1}{n}\sum_{i=1}^{n}H(o_i-r)
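As a quick numerical illustration of this formula (a minimal sketch in Python, not shader code; the sample depths are made up):

```python
# PCF as an average of hard depth tests: (1/n) * sum of H(o_i - r).

def heaviside(x):
    """Step function H(x): 1 for x > 0, 0 otherwise."""
    return 1.0 if x > 0.0 else 0.0

def pcf(occluders, r):
    """Average the per-occluder depth test results over the filter window."""
    return sum(heaviside(o - r) for o in occluders) / len(occluders)

# Four equally weighted occluder samples around a receiver at depth 0.5;
# two of the four depth tests pass, so the filtered term is 0.5.
print(pcf([0.2, 0.4, 0.6, 0.8], 0.5))
```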

The role of the step function H(x) is to perform a depth test on all occluders; the depth test results are then averaged together to obtain a filtered occlusion term. There are many ways to write H(x), and a limit of exponential functions guarantees a fast convergence:

\displaystyle H(o_i-r) = \lim_{k \to +\infty} \frac{e^{ko_i}}{e^{ko_i}+e^{kr}}

We can rewrite the original PCF equation as:

\begin{array}{ccc} \displaystyle\frac{1}{n}\sum_{i=1}^{n}H(o_i-r)&=&\displaystyle\frac{1}{n}\sum_{i=1}^{n}\lim_{k \to +\infty} \frac{e^{ko_i}}{e^{ko_i}+e^{kr}} \\ &=&\displaystyle\lim_{k \to +\infty}\frac{1}{ne^{kr}}\sum_{i=1}^{n}\frac{e^{ko_i}}{e^{k(o_i - r)}+1} \end{array}

If we make the hypothesis that our shadow receiver is planar within the filtering window, we are also implicitly assuming that the receiver is the most distant occluder (otherwise it might occlude itself, which can’t happen given our initial hypothesis), thus we have r > o_i.
Armed with this new assumption, we observe that the term e^{k(o_i - r)} quickly converges to zero for all occluders:

\begin{array}{ccc} \displaystyle\lim_{k \to +\infty}\frac{1}{ne^{kr}}\sum_{i=1}^{n}\frac{e^{ko_i}}{e^{k(o_i - r)}+1} &\approx&\displaystyle\lim_{k \to +\infty}\frac{1}{ne^{kr}}\sum_{i=1}^{n}e^{ko_i} \\ &\equiv&\displaystyle\lim_{k \to +\infty}\frac{E[e^{ko}]}{e^{kr}} \\ \end{array}

As we already know, k controls the sharpness of our step function approximation and can be used to fake soft shadows. Ultimately we can drop the limit, and we obtain the ESM occlusion term formula:

\displaystyle \frac{E[e^{ko}]}{e^{kr}}
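The formula is straightforward to evaluate numerically. The sketch below (illustrative Python; the depths and values of k are made up) checks that, for a planar receiver with r > o_i, the ESM term approaches the hard PCF result as k grows:

```python
import math

def esm(occluders, r, k):
    """ESM occlusion term E[exp(k*o)] / exp(k*r), computed as the
    average of exp(k*(o - r)) to keep the exponentials in range."""
    return sum(math.exp(k * (o - r)) for o in occluders) / len(occluders)

def pcf(occluders, r):
    """Reference hard test: average of H(o_i - r)."""
    return sum(1.0 if o > r else 0.0 for o in occluders) / len(occluders)

occluders = [0.2, 0.3, 0.4]   # all occluders in front of the receiver,
r = 0.5                       # so r > o_i: the planar-receiver case

# PCF gives exactly 0 here; the ESM term tends to 0 as k grows,
# while smaller k gives a softer (brighter) falloff.
for k in (10.0, 40.0, 160.0):
    print(k, esm(occluders, r, k))
```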

Exponential shadow maps can be seen as a very good approximation of a PCF filter when all the occluders are located in front of our receiver (no receiver self shadowing within the filtering window). There is not much else to add, except that this new derivation clearly shows the limits of the technique, and that any future improvement will necessarily be based on a relaxed version of the planar receiver hypothesis.

For unknown reasons some old and buggy ESM test code was distributed with ShaderX6. You can grab here the FxComposer 2.0 sample code that was originally meant to be released with the book.


27 Responses to “A conceptually simple(r) way to derive exponential shadow maps + sample code”

  1. Andrew Lauritzen Says:

    That’s a neat way to derive it actually – I like it 🙂

  2. Damsku Says:

    Why is it that doing a simple gaussian blur gives the very exact same result as doing the blur in logarithmic (log_conv) space? 🙂

  3. Damsku Says:

    Otherwise, I did get ESM to work nicely in my project. But… (of course there is a but :)) how can I keep edges as soft as with an overdarkening factor of 1.f when I need to use a big factor (60-80.f)?
    It results in roughly the same loss of softness as using a linear step or a power function with VSM.
    Light leaking got much more violent than with VSM, and that is the reason I need such a big factor… A depth bias doesn’t solve the leaking problems well enough.
    If there were a way to get a much darker shadow inside the shadow itself while conserving a very soft and wide edge/penumbra: mixing ESM with VSM would completely get rid of light bleeding at pretty much the cost of an exp(). (Assuming I would use a linear blur so that I could share the same shadow depth between ESM and VSM :p)

  4. Marco Salvi Says:

    Hi Damsku,

    I need more details in order to answer your questions.
    When you say that a gaussian blur gives you exactly the same result as doing the blur in log space, what do you render in the shadow map? And what formula do you use to compute occlusion in both cases?

    Filtering in log space should give you exactly the same results you get when filtering in linear space, and it enables you to work over long distances.
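A sketch of what log-space filtering buys here (illustrative Python; the kernel weights and depths are made up): the filter over exp(k*d) is evaluated with the log-sum-exp trick, so no intermediate value overflows even when k*d is large.

```python
import math

def filter_linear(depths, weights, k):
    """Filter exp(k*d) directly and return the result as a depth;
    overflows once k*d gets large."""
    return math.log(sum(w * math.exp(k * d) for d, w in zip(depths, weights))) / k

def filter_log(depths, weights, k):
    """Same filter evaluated in log space (log-sum-exp): factor out
    the largest exponent so every term stays in range."""
    m = max(k * d for d in depths)
    s = sum(w * math.exp(k * d - m) for d, w in zip(depths, weights))
    return (m + math.log(s)) / k

depths = [10.0, 12.0, 11.5]    # linear depths from the light
weights = [0.25, 0.5, 0.25]    # blur kernel weights
k = 80.0

# k*d reaches 960 here: exp(960) overflows a double, so filter_linear
# fails, while the log-space version returns the filtered depth fine.
print(filter_log(depths, weights, k))
```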

  5. Damsku Says:

    Yep, I suspected it would mainly affect long distances.
    I simply use exp(k * (OccluderDepth – PixelDepth)) to evaluate the occlusion factor, and I store the linear depth from the light source in the shadow map before applying the blur.

  6. Marco Salvi Says:

    If you store linear depth you have to use log filtering; a gaussian filter in this case will give you incorrect shadows (and it doesn’t make any mathematical sense either, so I’m not surprised to hear it doesn’t work 🙂 )

    Do you also multiply your linear depth by some factor? If you do, remember that PixelDepth has to be multiplied by the same factor too.

  7. Damsku Says:

    I did try multiplying my linear depth by a factor and PixelDepth by the same factor; that’s what I incorrectly referred to as depth bias in my previous post 🙂 It helps, but even combined with the darkening factor it still gives me more light leaking than VSM does (though it completely removed the light bleeding). To get a result close to VSM, I need a darkening factor of at least 60, which sacrifices most of the shadow’s softness (it still helps with edge aliasing quite well though).

    “so I’m not surprised to hear it doesn’t work” <- well, that’s where I am surprised, because it does give quite acceptable results (at least at “short range”).
    Replace your tap filter in LogBox1D_NTaps_PS() with:

        float accum = 0;
        for (int i = 0; i < taps; i++)
        {
            accum += sample[i];
        }
        accum /= (float)taps;

    It will give incorrect but visually acceptable results. I don’t think I would notice it is incorrect if I weren’t aware of the difference between the two filters.

  8. Marco Salvi Says:

    Umh.. what’s the difference between light leaking and light bleeding in your case? 🙂
    Keep in mind that with ESM you will have light bleeding on non-planar receivers.

    A few screenshots would help me understand what is not working. (you can send them to my email: marcotti at gmail dot com )

  9. Damsku Says:

    Light bleeding being the thing that makes VSM hard to use (when you get the penumbra of one shadow inside another shadow), I refer to light leaking as the fact that the shadow doesn’t start at the contact point of the caster. 🙂

  10. Andrew Lauritzen Says:

    Just wanted to note that you can indeed combine ESM and VSM -> there’s some information on “exponential variance shadow maps” at the end of the layered VSM paper and more info in the Beyond3D thread. In general it works very well (as you correctly speculated), although it uses more memory than stock ESMs.

    You can also use “layering” with ESM if you wish – i.e. partition the light-space depth range into layers and use a single-component ESM for each. Reconstruction just selects the proper layer. It’s pretty cheap performance-wise, although of course it also uses more memory and probably doesn’t scale as well in memory as something more clever.

    To that end, I know Marco’s been fiddling with some more ideas… ready to give away any hints on that stuff yet Marco? 🙂
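The layer selection Andrew describes might look something like this (an illustrative sketch only; the layer count, depth range, and function names are made up, not from any shipped implementation):

```python
import math

NUM_LAYERS = 4
NEAR, FAR = 0.0, 1.0   # light-space depth range covered by the layers

def layer_of(depth):
    """Index of the depth partition that contains `depth`."""
    t = (depth - NEAR) / (FAR - NEAR)
    return min(int(t * NUM_LAYERS), NUM_LAYERS - 1)

def esm_layered(filtered_maps, receiver, k):
    """Evaluate the ESM stored for the receiver's own layer.
    `filtered_maps[i]` holds the pre-filtered E[exp(k*o)] for layer i."""
    return filtered_maps[layer_of(receiver)] / math.exp(k * receiver)

# A receiver at depth 0.55 falls in layer 2 of 4:
print(layer_of(0.55))
```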

  11. Damsku Says:

    Thank you Andrew. I did dig into EVSM already, but it seems the requirements are heavier in the end, precision-wise for sure… If I could use R32F targets and they were filterable in DX9, I would be using EVSM; the results were pretty promising 🙂

    Concerning ESM, my major issue is that I cannot get the shadows to be as dark at the caster’s contact point as they are far from the caster, which forces me to use a large overdarkening factor/depth scale combination and thereby kills part of the softness. It visually just looks like the power function in VSM, which reduces light bleeding at the cost of edge softness.

  12. Andrew Lauritzen Says:

    Yep fair enough with respect to the trade-offs. I think EVSM will be pretty interesting in the 1-2 year sort of time frame for games, but it’s certainly a bit resource-heavy for games right now.

    As an aside, 32-bit float formats *are* filterable in DX9 if your card supports it. They just aren’t guaranteed to be… but if you have a G80+ or R600+ you’re laughin’ 🙂

  13. realtimecollisiondetection.net - the blog » Posts and links you should have read Says:

    […] summary (of PCSS, PSSM, and screen-size shadow masks/accumulation buffers) and Marco Salvi’s twist on ESMs and his faster PCF post. Marco also talked about shadows (based off this teaser) at GDC as part of […]

  14. Viik Says:

    Agree with Damsku – it’s definitely nice that we aren’t getting the penumbra from one caster bleeding onto another one, like in VSM, but the caster contact point is definitely shaded improperly. Look at these screenshots:

    In both cases I’m using linear depth and a separable 5×5-tap gaussian blur with the log filter, and depth is multiplied by 10 (with a smaller value an even bigger overdarkening factor is needed). I think something really important is missing in the FX file.

  15. Viik Says:

    I’m taking my words back: everything became fine right after I switched back to back-face culling, which I had completely forgotten about. I’m switching from VSM to ESM for now, thank you for your great research!

  16. Zirka Says:

    Hi Marco!

    ESM definitely looks like a very useful technique. After implementing VSM with cascades, the results in my complex scene were not as good as I needed, so I started looking into ESM. I was hoping to check out your ESM demo, but the link you posted for the correct ShaderX6 demo is no longer functional. Could you repost it?

    Thanks a bunch!

  17. Jan Helleman Says:

    Can anyone re-upload the ShaderX6 code, or send it to me? slaaitjuh [AT] gmail [dot] com

  18. Alejandro Martinez Says:

    Hi there!
    Nice job tackling shadows. As I understand it (I’m still really bad at understanding those tricky math equations), the binary shadow test is replaced by an exponential evaluation.
    I have some questions:
    – If the stored distance in the shadow map is already exponential, how would the shadow occlusion term be evaluated? Something like exp(receiver) – occluder? -> since the occluder is already exponential. And how does the darkening factor fit in?
    – Would any other exponential work? Something like exp2?
    – Looking through some slides of the GDC08 talk you gave, there is an add-on at the end, constant-based light bleeding vs distance-based light bleeding (which looks GOOD!). What’s the trick behind it? Is it the darkening factor that controls light bleeding, or the depth scale factor?

  19. Marco Salvi Says:

    Hi Alejandro,

    Occlusion is computed as exp(occluder – receiver), which you can rewrite as exp(occluder) / exp(receiver). If you render exp(depth) in the shadow map then you simply sample it and divide it by exp(receiver). Other exponential functions work, but they generate occlusion terms that change under translation. The effect may be negligible for practical purposes.

    That pic was generated by modulating the scale factor for the receiver as a function of the distance from the light. It looks very nice for some special scenes, but it really breaks down in the general case; that’s why I don’t talk about it in the presentation 🙂
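The equivalence of the two forms in Marco’s reply is easy to verify numerically (an illustrative sketch; k and the depths are made-up values):

```python
import math

k = 30.0                        # overdarkening factor (made up)
occluder, receiver = 0.40, 0.55

# Form 1: store plain depth, exponentiate the difference at lookup time.
occ_a = math.exp(k * (occluder - receiver))

# Form 2: store exp(k*depth) in the shadow map, divide at lookup time.
stored = math.exp(k * occluder)
occ_b = stored / math.exp(k * receiver)

print(occ_a, occ_b)   # identical up to floating-point rounding
```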

  20. Alejandro Martinez Says:

    Hi Marco,

    My bad, I totally forgot some same-base exponent Maths 101: in multiplication exponents are added, in division they are subtracted! :p

    You’re right, for some scenes it just looks great.
    Maybe to make it more general (and make it work with directional lights without faking a finite “position”), the scale factor could be modulated by some function of the distance between receiver and occluder, instead of the receiver’s distance to the light; that way, when the occluder is close to the receiver it will cast sharper shadows than when it is farther away.
    For example: for a tree on a plane, the base of the trunk will cast sharp shadows and the leaves of the tree will almost fade away.

    The problem I see with this is that overlapping occluders will completely break the effect, as only the one closest to the light source will be stored. Maybe change the depth test to keep only the farthest of the overlapping occluders.

    Of course, this is just crazy thinking not based on anything, and it isn’t real shadows; maybe something for stylized/NPR shadowing.

    I will play around with this immediately, but I would like to know your opinion on whether it is correct/possible/doomed to fail? :p (That way I can spend hours tweaking knowing that THERE IS something “right” at the end.)

    I believe that this type of exponential testing may open for some couple of crazy tricks to pull off… whether scene specific or not, shadow map related or not.

  21. Erwin Coumans Says:

    The sample code link is broken. Could you please update it?

    Thanks!

  22. Erwin Coumans Says:

    (Never mind, I found the link. I was blind 🙂)

