A few months ago, while working on an improved version of exponential shadow maps, I stumbled on a new way to derive the ESM equations which looks simpler and more intuitive than previous attempts.

There is no need to invoke Markov's inequality, higher order moments or convolutions. In fact, all we have to do is write the basic percentage closer filtering formula for N equally weighted occluders z_i and a receiver at depth d:

f = (1/N) * Σ H(z_i - d)

The role of the step function H is to perform a depth test on each occluder; the depth test results are then averaged together to obtain a filtered occlusion term. There are many ways to write H, and a limit of exponential functions guarantees a fast convergence:

H(x) = lim(k→∞) exp(k*x), for x <= 0

We can rewrite the original PCF equation as:

f = lim(k→∞) (1/N) * Σ exp(k * (z_i - d)) = lim(k→∞) exp(-k*d) * (1/N) * Σ exp(k*z_i)

If we make the hypothesis that our shadow receiver is planar within the filtering window, we are also implicitly assuming that the receiver is the most distant occluder (otherwise it might occlude itself, which can't happen given our initial hypothesis), thus we have z_i <= d for every occluder in the window.

Armed with this new assumption we observe that the term exp(k * (z_i - d)) quickly converges to zero for all occluders located strictly in front of the receiver.

As we already know, k controls the sharpness of our step function approximation and can be used to fake soft shadows. Ultimately we can drop the limit and we obtain the ESM occlusion term formula:

f ≈ exp(-k*d) * (1/N) * Σ exp(k*z_i)

Note that the filtered quantity (1/N) * Σ exp(k*z_i) no longer depends on the receiver, which is what lets us pre-filter the shadow map.
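To see the convergence numerically, here is a quick sketch (Python rather than shader code, with made-up depth values) comparing the exact PCF term with the finite-k ESM approximation when all occluders sit in front of the receiver:

```python
import math

# Made-up depths: receiver at d = 0.8, occluders all in front (z_i <= d),
# as the planar receiver hypothesis requires.
occluders = [0.20, 0.45, 0.60, 0.79]
d = 0.8

# Exact PCF: average of binary depth tests H(z_i - d).
pcf = sum(1.0 if z >= d else 0.0 for z in occluders) / len(occluders)

# ESM: drop the limit and evaluate with a finite sharpness k.
def esm(k):
    return math.exp(-k * d) * sum(math.exp(k * z) for z in occluders) / len(occluders)

print(pcf)       # 0.0 -- fully in shadow, every occluder is in front of the receiver
print(esm(10))   # soft, faked-penumbra approximation
print(esm(80))   # approaches the hard PCF result as k grows
```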

Exponential shadow maps can be seen as a very good approximation of a PCF filter when all the occluders are located in front of our receiver (no receiver self-shadowing **within** the filtering window). There's not much else to add, except that this new derivation clearly shows the limits of this technique, and that any future improvements will necessarily be based on a relaxed version of the planar receiver hypothesis.

For unknown reasons some old and buggy ESM test code was distributed with ShaderX6. You can grab the FX Composer 2.0 sample code that was originally meant to be released with the book **here**.


June 15, 2008 at 10:29 am

That’s a neat way to derive it actually – I like it 🙂

June 17, 2008 at 4:09 am

Why is it that doing a simple gaussian blur gives the exact same result as doing the blur in logarithmic (log_conv) space? 🙂

June 17, 2008 at 7:01 am

Otherwise, I did get ESM to work nicely in my project. But… (of course there is a but :)) how could I keep the edges as soft as with an overdarkening factor of 1.f when I need to use a big (60-80.f) factor?

It results in roughly the same loss of softness as when using linear step or power with VSM.

Light leaking got much more violent than with VSM, and that is the reason I need such a big factor… A depth bias doesn't solve the leaking problems enough.

If there was a way to get much darker shadow inside the shadow itself and conserve a very soft and wide edge/penumbra: Mixing ESM up with VSM would completely get rid of light bleeding at pretty much the cost of an exp(). (Assuming I would use a linear blur so that i could share the same shadowdepth between ESM and VSM :p)

June 17, 2008 at 7:15 am

Hi Damsku,

I need more details in order to be able to answer your questions.

When you say that a gaussian blur gives you exactly the same result as doing a blur in log space, what do you render in the shadow map? What formula do you use to compute occlusion in both cases?

Filtering in log space should give you exactly the same results you get when filtering in linear space, and it enables you to work over long distances.
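To illustrate the equivalence (a Python sketch with made-up numbers, not the actual log_conv shader): filtering exp(k*z) directly and filtering in log space via the usual log-sum-exp trick give the same answer, but the log-space version never stores the huge exp(k*z) values, which is what makes large depth ranges workable:

```python
import math

k = 80.0
depths = [0.20, 0.45, 0.60, 0.79]   # linear depths inside the filter window

# Linear-space box filter of exp(k*z), then back to log space.
linear = math.log(sum(math.exp(k * z) for z in depths) / len(depths))

# Log-space box filter: log-sum-exp, never exponentiating anything large.
m = max(k * z for z in depths)
logspace = m + math.log(sum(math.exp(k * z - m) for z in depths) / len(depths))

print(abs(linear - logspace))   # agrees to floating-point precision
```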

June 17, 2008 at 7:30 am

Yep, I was suspecting it would mainly matter at long distances.

I simply use exp(k * (OccluderDepth – PixelDepth)) to evaluate the occlusion factor and I store the linear depth from the light source in the shadow map before applying the blur.

June 17, 2008 at 7:44 am

If you store linear depth you have to use log filtering; a gaussian filter in this case will give you incorrect shadows (and it doesn't make any mathematical sense either, so I'm not surprised to hear it doesn't work 🙂 )

Do you also multiply your linear depth by some factor? If you do, remember that PixelDepth has to be multiplied by the same factor as well.

June 17, 2008 at 8:03 am

I did try to multiply my linear depth by a factor and PixelDepth by the same factor; that's what I incorrectly referred to as depth bias in my previous post 🙂 It helps, but this plus the darkening factor still gives me more light leaking than the VSM does (though it completely removed the light bleeding). To get a result close to the VSM, I need a darkening factor of at least 60, which sacrifices most of the softness of the shadow (it still helps the edge aliasing artifacts quite well though).

“so I’m not surprised to hear it doesn’t work” <- well, that’s where I am surprised, because it does give quite acceptable results (at least at “short range”)

Replace your tap filter in LogBox1D_NTaps_PS() by:

```hlsl
float accum = 0;
for (int i = 0; i < taps; i++)
{
    accum += sample[i];
}
accum /= (float)taps;
```

It will give incorrect but visually acceptable results. I don't think I would notice it is incorrect if I weren't aware of the difference between the two filters.
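For what it's worth, the two filters really do compute different things; a tiny numerical sketch (Python, made-up depths) shows how far apart they can get when the filter window contains very different depths:

```python
import math

k = 30.0
d = 0.8                  # receiver depth
depths = [0.10, 0.79]    # two occluders at very different depths

# Correct ESM filtering: average exp(k*z) first, then attenuate by exp(-k*d).
correct = math.exp(-k * d) * sum(math.exp(k * z) for z in depths) / len(depths)

# Filtering the raw depths instead (the box filter above), exponentiating after.
mean_depth = sum(depths) / len(depths)
naive = math.exp(k * (mean_depth - d))

print(correct)  # dominated by the occluder closest to the receiver
print(naive)    # far darker: the mean depth sits well in front of the receiver
```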

June 17, 2008 at 8:10 am

Umh.. what’s the difference between light leaking and light bleeding in your case? 🙂

Keep in mind that with ESM you will have light bleeding on non-planar receivers.

A few screenshots would help me understand what is not working. (you can send them to my email: marcotti at gmail dot com )

June 17, 2008 at 8:16 am

Light bleeding being the thing that makes VSM hard to use (when you get the penumbra of a shadow inside another shadow), I refer to light leaking as the fact that the shadow doesn't start at the contact point of the caster. 🙂

June 18, 2008 at 8:35 pm

Just wanted to note that indeed you can combine ESM and VSM -> there's some information on "exponential variance shadow maps" at the end of the layered VSM paper and more info in the Beyond3D thread. In general it works very well (as you correctly speculated), although it uses more memory than stock ESMs.

You can also use "layering" with ESM if you wish – i.e. partition the light-space depth range into layers and use a single component for an ESM for each. Reconstruction can just select the proper one. It's pretty cheap performance-wise, although of course it also uses more memory and probably doesn't scale as well in memory as something more clever.
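A rough sketch of the layering idea (hypothetical Python, not from any shipped implementation): split the light-space depth range into uniform slices, remap depth inside each slice to [0,1), and reconstruct by evaluating ESM in the receiver's slice, so the exponential gets a full layer's worth of precision:

```python
import math

LAYERS = 4
k = 30.0

def layer_of(depth):
    # Which uniform slice of [0,1) light-space depth this value falls in.
    return min(int(depth * LAYERS), LAYERS - 1)

def rescaled(depth):
    # Depth remapped to [0,1) within its own layer.
    return depth * LAYERS - layer_of(depth)

receiver, occluder = 0.65, 0.62      # made-up depths, in the same layer

# Reconstruction: select the receiver's layer and evaluate ESM there.
occlusion = math.exp(k * (rescaled(occluder) - rescaled(receiver)))
print(occlusion)
```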

To that end, I know Marco’s been fiddling with some more ideas… ready to give away any hints on that stuff yet Marco? 🙂

June 19, 2008 at 1:20 am

Thank you Andrew. I did dig into EVSM already, but it seems the requirements are heavier in the end, precision-wise for sure… If I could use R32F targets and they were filterable in DX9, I would be using EVSM; the results were pretty promising 🙂

Concerning ESM, my major issue is that I cannot get the shadows to be as dark at the caster contact point as far from the caster, which forces me to use a large overdarkening factor/depth scale combination and thereby kills part of the softness. It visually just looks like the power function in VSM, which reduces light bleeding at the cost of edge softness.

June 19, 2008 at 6:32 pm

Yep fair enough with respect to the trade-offs. I think EVSM will be pretty interesting in the 1-2 year sort of time frame for games, but it’s certainly a bit resource-heavy for games right now.

As an aside, 32-bit float formats *are* filterable in DX9 if your card supports it. They just aren’t guaranteed to be… but if you have a G80+ or R600+ you’re laughin’ 🙂

September 2, 2008 at 8:34 am

[…] summary (of PCSS, PSSM, and screen-size shadow masks/accumulation buffers) and Marco Salvi’s twist on ESMs and his faster PCF post. Marco also talked about shadows (based off this teaser) at GDC as part of […]

December 8, 2008 at 6:16 am

Agree with Damsku – it's definitely nice that we are not getting the penumbra from one caster bleeding into another one, like in VSM, but the caster contact point is definitely shaded improperly. Look at these screenshots:

In both cases I'm using linear depth, a gaussian separable 5×5 tap blur with the Ln filter, and depth is multiplied by 10 (with a smaller value an even bigger overdarkening factor is needed). I think something really important is missing in the FX file.

December 10, 2008 at 10:13 am

I'm taking my words back; everything became fine right after I switched back to back-face culling, I completely forgot about that. I'm switching from VSM to ESM for now, thank you for your great research!

May 26, 2009 at 11:11 am

Hi Marco!

ESM definitely looks like a very useful technique, and after implementing VSM with cascades, the results in my complex scene were not as good as I need, so I started looking into ESM. I was hoping to check out a demo of ESM from you but the link you posted for the correct ShaderX6 demo is no longer functional. Could you repost it?

Thanks a bunch!

May 27, 2009 at 1:14 am

Zirka,

I uploaded the ShaderX6 material to another website and I have updated the old link with a new one.

Marco

June 9, 2009 at 6:44 pm

can anyone re-upload the ShaderX6 code, or send it to me? slaaitjuh [AT] gmail [dot] com

June 15, 2009 at 11:01 pm

Free hosting web sites never work as advertised, I purchased some space and the link to ShaderX6 code should be working now.

July 20, 2011 at 2:52 pm

I did figure out how to do the log space prefiltering. I wrote about it here if anyone else had the same confusion as me:

http://www.olhovsky.com/2011/07/exponential-shadow-map-filtering-in-hlsl/

April 19, 2010 at 7:11 pm

Hi there!

Nice job tackling the shadows. As I understand it (I'm still really bad at understanding those tricky math equations), the binary shadow test is replaced with an exponential evaluation.

I have some questions:

– If the stored distance in the shadow map is already exponential, how would the shadow occlusion term be evaluated? Something like exp(receiver) – occluder? -> since the occluder is already exponential. And how do you plug in the darkening factor?

– Might any other exponential work? Something like exp2?

– Looking through some slides of the GDC08 talk you gave, there is an add-on at the end: constant-based light bleeding vs distance-based light bleeding (which looks GOOD!). What's the trick behind it? Is it the darkening factor that controls light bleeding, or the depth scale factor?

April 28, 2010 at 11:35 pm

Hi Alejandro,

Occlusion is computed as exp(occluder – receiver), which you can rewrite as exp(occluder) / exp(receiver). If you render exp(depth) in the shadow map then you simply sample it and divide it by exp(receiver). Other exponential functions work, but they generate occlusion terms that change under translation. The effect may be negligible for practical purposes.
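As a trivial numerical check (Python, made-up depths) that storing the pre-exponentiated depth changes nothing:

```python
import math

k = 30.0
occluder, receiver = 0.55, 0.80   # made-up depths

# Shadow map stores exp(k * occluder); occlusion becomes a ratio at lookup time.
stored = math.exp(k * occluder)
from_map = stored / math.exp(k * receiver)

# Same thing evaluated directly from the depth difference.
direct = math.exp(k * (occluder - receiver))

print(abs(from_map - direct))   # ~0
```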

That pic was generated by modulating the scale factor for the receiver as a function of the distance from the light, which looks very nice for some special scenes, but it really breaks down in the general case; that's why I don't talk about it in the presentation 🙂

April 29, 2010 at 3:51 pm

Hi Marco,

My bad, I totally forgot some same-base exponential maths 101: in multiplication exponents are added, in division they are subtracted! :p.

You’re right, for some scenes it just looks great.

Maybe to make it more general (and make it work with directional lights without faking a finite "position"), the scale factor could be modulated by some function of the distance between receiver and occluder, instead of the receiver's distance to the light; that way, when the occluder is close to the receiver it will cast sharper shadows than when it is farther away.

For example: A tree on a plane, the base of the trunk will cast sharp shadows and leafs of the tree will almost fade away.

The problem I see with this is that overlapping occluders will completely break the effect, as only the one closest to the light source will be stored. Maybe change the depth test to keep only the farthest of the overlapping occluders.

Of course, this is just crazy thinking not based on anything, and it is not real shadows; maybe useful for some stylized/NPR shadowing.

I will immediately play around with this, but I would like to know your opinion on its correctness/feasibility/being doomed to fail :p. (That way I can spend hours tweaking knowing that THERE IS something "right" at the end.)

I believe that this type of exponential testing may open the door to a couple of crazy tricks… whether scene specific or not, shadow map related or not.

June 8, 2011 at 1:30 pm

The sample code link is broken. Could you please update it?

Thanks!

July 20, 2011 at 3:23 pm

Link fixed!

July 20, 2011 at 3:47 pm

I must be blind, but now I don’t even find the link anymore, where is it 🙂 ?

July 20, 2011 at 3:48 pm

(never mind, I found the link. I was blind 🙂)