A (not so) little teaser

When it comes down to real-time computer graphics, shadow rendering is one of the hottest topics. Shadow maps are currently the most used shadowing technique: they are reasonably fast, simple to implement (in their basic incarnations at least!) and, unlike stencil shadows, they work even with non-manifold meshes. Unfortunately shadow maps can look ugly as well: their resolution is never enough and all those aliasing and ‘acne’ problems drive people insane. This is why every year a lot of research goes into new shadowing techniques or into improving current ones. Papers and articles about shadow mapping often fall into two distinct groups:

  1. How to improve virtual shadow map resolution (for example, all the work on warped projection schemes belongs to this category).
  2. How to improve shadow map filtering quality/speed for hard-edged and/or soft-edged shadows.

Personally, I find this second class of problems more interesting, because it is still unknown territory and there is a lot of work left to do.

A popular shadow map filtering algorithm is percentage closer filtering (PCF), first developed by Pixar and later integrated by NVIDIA into their graphics hardware. It is based on taking a number of samples (the more, the better) from a shadow map, performing a depth test with each of them against a shadow receiver and then averaging together all the depth test results. The key idea here is that filtering a shadow map, unlike what we do with colour textures, is not about directly filtering a bunch of depth values. Averaging depth values together and performing a single depth test on their mean wouldn’t make any sense, because a group of occluders which are not spatially correlated can’t be represented by a single averaged occluder. That’s why, to get meaningful results, filtering has to happen after the depth tests, not before.
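To make the ordering of the operations concrete, here is a minimal CPU-side sketch of PCF (the function name, the square box kernel and the array layout are illustrative assumptions, not any particular GPU API):

```python
import numpy as np

def pcf_occlusion(shadow_map, u, v, receiver_depth, radius=1):
    """Percentage closer filtering over a (2*radius+1)^2 box of texels.

    Each shadow-map depth is tested against the receiver first;
    only the binary test results are averaged, never the depths themselves.
    """
    h, w = shadow_map.shape
    results = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x = min(max(u + dx, 0), w - 1)
            y = min(max(v + dy, 0), h - 1)
            # 1.0 = lit (stored occluder is farther than the receiver), 0.0 = shadowed
            results.append(1.0 if shadow_map[y, x] >= receiver_depth else 0.0)
    return float(np.mean(results))
```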

PCF is still the most used technique to filter shadow maps, but a new algorithm called Variance Shadow Maps (developed by William Donnelly and Andrew Lauritzen) has deservedly attracted a lot of attention. VSM is based on a somewhat different and extremely interesting approach: the depth values located around some point of a shadow map are seen as a discrete probability distribution. If we want to know whether a receiver is occluded by this group of depth samples we don’t need to perform many depth tests anymore; we just want to know the probability of our receiver being occluded or not. Constructing on the fly a representation of an (a priori) unknown discrete probability distribution is basically an impossible task, so Donnelly and Lauritzen’s method doesn’t try to compute the exact probability of being in shadow.

Chebyshev’s inequality is used instead to calculate an upper bound for this probability. The inequality characterizes a probability distribution using only two numbers, mean and variance, which can easily be computed per shadow map texel. Incidentally, what we need to do in order to compute mean and variance is similar to what we do when we filter an image. So this new approach allows us, for the first time, to filter shadow maps in light space as we do with colour images, and also enables us to use the GPU’s hardwired texture filtering capabilities to filter a variance shadow map.
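In code the per-texel evaluation boils down to something like the sketch below, using the one-sided Chebyshev (Cantelli) bound as in the original VSM paper; the tiny variance clamp and the names are my own illustrative choices:

```python
def vsm_visibility(moments, receiver_depth):
    """Upper bound on the probability that the receiver is lit.

    `moments` holds (E[z], E[z^2]) already filtered over the kernel,
    e.g. fetched with hardware bilinear/trilinear filtering from a
    two-channel variance shadow map.
    """
    mean, mean_sq = moments
    variance = max(mean_sq - mean * mean, 1e-6)  # clamp to avoid a negative/zero variance
    if receiver_depth <= mean:
        return 1.0  # receiver is in front of the average occluder: fully lit
    d = receiver_depth - mean
    # One-sided Chebyshev bound: P(z >= t) <= var / (var + (t - mean)^2)
    return variance / (variance + d * d)
```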

No wonder VSM triggered a lot of interest (a rapidly growing number of games is using this new technique), as it has made possible what many, myself included, thought was impossible. We can now apply filters to the variance shadow map and work with it as if we were working with a colour texture. Here is an example of the kind of high quality shadow that can be achieved by filtering a variance shadow map with a 7×7-tap Gaussian filter. In this case the filter is separable, so we only need to work on 2N samples per shadow map texel instead of the N×N samples that PCF would require.

Variance Shadow Maps - 1

Variance Shadow Maps – 7×7 Gaussian Filter
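For reference, the separable prefilter used above can be sketched like this, blurring a two-channel map of (E[z], E[z^2]) with one horizontal and one vertical pass; the tap count, sigma and the NumPy layout are illustrative assumptions, and on the GPU these would simply be two fullscreen passes over the moment texture:

```python
import numpy as np

def gaussian_kernel_1d(taps=7, sigma=1.5):
    x = np.arange(taps) - (taps - 1) / 2.0
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def separable_gaussian_blur(moment_map, taps=7, sigma=1.5):
    """Blur an (H, W, 2) map holding (E[z], E[z^2]) per texel with a separable
    Gaussian: one horizontal and one vertical pass, i.e. 2N taps per texel
    instead of the N*N taps a full 2D kernel would need."""
    k = gaussian_kernel_1d(taps, sigma)
    out = moment_map.astype(np.float64).copy()
    for c in range(out.shape[2]):
        # horizontal pass (filter each row), then vertical pass (filter each column)
        out[..., c] = np.apply_along_axis(np.convolve, 1, out[..., c], k, mode='same')
        out[..., c] = np.apply_along_axis(np.convolve, 0, out[..., c], k, mode='same')
    return out
```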

Memory requirements and hardware filtering issues aside, VSM’s only significant flaw is light bleeding/leaking: a problem that manifests itself when shadows cast by occluders that are not relatively close to each other overlap over the same receiver.

For example, in this screenshot a triangle (not visible in the image) located above the teapot is casting a shadow over the rest of the scene, and we can easily tell the shape of this new occluder by the lighter areas that are now inside the shadow cast by the teapot.

Variance Shadow Maps - Light Bleeding

Variance Shadow Maps – Light bleeding caused by a second occluder

Occluders that overlap within a filtering region and that are not close to each other generate high depth variance values in the variance shadow map. High variance deteriorates the quality of the upper bound given by Chebyshev’s inequality, and since the inequality only gives a conservative bound, it cannot simply tell us whether a point is in shadow or not. The amount of light leaking into shadows grows with the filter kernel size; this is what we get using a wider filter:

Variance Shadow Maps - Even More Light Bleeding

Variance Shadow Maps – More light bleeding with a 15×15 Gaussian Filter

A well-known workaround for this issue is to over-darken the occlusion term computed via Chebyshev’s inequality in order to shrink the areas affected by light leaks. Over-darkening can be achieved via 1D texture look-ups that encode a pre-defined, artist-controlled darkening curve, or we can explicitly use functions such as x^n (with n > 1) or x^{1/x}.
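As a sketch of what such a remap can look like, here are two illustrative choices applied to the visibility term in [0, 1]; the exponent, the `amount` parameter and the linear-rescale variant are examples of mine, not the exact curves used by any particular implementation:

```python
def reduce_light_bleeding(p_max, amount=0.3):
    """Over-darken the VSM visibility term p_max (1 = fully lit, 0 = fully shadowed).

    Two illustrative remaps: a power curve (an exponent > 1 pushes values toward 0)
    and a linear rescale that treats everything below `amount` as fully shadowed.
    The curve shape and `amount` are artist-tuned knobs, often baked into a 1D LUT.
    """
    pow_darkened = p_max ** (1.0 + 4.0 * amount)
    linstep_darkened = min(max((p_max - amount) / (1.0 - amount), 0.0), 1.0)
    return pow_darkened, linstep_darkened
```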

This method doesn’t guarantee to eliminate light bleeding, because variance can’t really be bounded in the general case (think about a mountain casting a shadow over a little town). Over-darkening also destroys high frequency information in shadow silhouettes, to the point that it can turn high quality shadows into ill-defined dark blobs, which can be a nice side-effect in those applications where we are not trying to produce realistic shadows!

When I first found out about variance shadow maps I was so excited about this new way to address shadow map filtering that I thought there had to be a way to improve the technique in order to reduce light leaks without resorting to over-darkening. It is obvious that Chebyshev’s inequality is in some cases failing to produce a good upper bound for the occlusion term because it does not have enough information to work with. The first and the second raw moments are not sufficient to describe a probability distribution; in fact it is fairly easy to construct whole families of distributions that generate the same first and second order moments! This is a clear case of bad lossy compression: we discarded too much information and now we realize more data is necessary to describe our depth distribution.

At the beginning the solution seemed so simple to me: two moments are not enough, so higher order moments need to be accounted for. How many of them? I didn’t know.. but I also quickly realized that computing higher order moments and deriving any other quantity from them (skewness, kurtosis, etc.) is an extremely difficult task due to numerical issues.
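The trouble is easy to see even in a toy sketch like the one below: the interesting shape descriptors are recovered from the raw moments through subtractions of nearly equal quantities, which collapse quickly at render-target precision (the code and its names are purely illustrative):

```python
import numpy as np

def raw_moments(depths, order=4):
    """Raw moments E[z^k], k = 1..order, of the depths inside a filter window."""
    z = np.asarray(depths, dtype=np.float32)  # typical render-target precision
    return [float(np.mean(z ** k)) for k in range(1, order + 1)]

def shape_from_raw_moments(m1, m2, m3, m4):
    """Variance, skewness and kurtosis recovered from the first four raw moments.

    Each line below subtracts nearly equal quantities, so precision collapses
    quickly unless depths are carefully rescaled (e.g. to [0, 1]) beforehand.
    """
    var = m2 - m1 ** 2
    mu3 = m3 - 3.0 * m1 * m2 + 2.0 * m1 ** 3
    mu4 = m4 - 4.0 * m1 * m3 + 6.0 * m1 ** 2 * m2 - 3.0 * m1 ** 4
    var = max(var, 1e-12)
    return var, mu3 / var ** 1.5, mu4 / var ** 2
```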

There is also another problem to solve, and not a simple one by any means. What are we going to do with the extra moments? How do we evaluate the probability of being in shadow? I couldn’t find in the statistics and probability theory literature any inequality that can handle an arbitrary number of moments. Moreover, even though inequalities that handle three or four moments exist, they are mathematical monsters and we don’t want to evaluate them on a per-pixel basis. In the end I decided to give it a go, only to find out that this incredibly slow and inaccurate extension to variance shadow maps was only marginally improving the light bleeding problem, and in some cases the original technique looked better anyway, thanks to a numerical stability that was sadly lacking in my own implementation.

Having found myself in a cul-de-sac I tried something completely different, such as defining a priori the shape of the (no longer) unknown depth distribution. The shape of this new object is controlled by a few parameters that can be determined by fitting, per pixel, its raw moments to the moments of the real depth distribution. Unfortunately even in this case it is extremely difficult to come up with a function that is flexible enough to fit any real world distribution and that at the same time is not mathematically too complex (we like real time graphics, don’t we?). I ran experiments with a few different models; the most promising one was a four-parameter model built upon a mixture of two Gaussian distributions sharing the same variance but having different mean values. It worked very well when only two distinct occluders fell inside the filtering window, but it looked awful every time something a bit more complex came into play.
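Written out, the obvious parameterisation of such a model uses a weight w, two means μ1 and μ2 and a shared variance σ², and the per-pixel fit amounts to matching its first four raw moments against the filtered moments of the depth distribution (this is a sketch of the idea described above, not necessarily the exact system I solved in practice):

```latex
p(z) = w\,\mathcal{N}(z;\mu_1,\sigma^2) + (1-w)\,\mathcal{N}(z;\mu_2,\sigma^2)

\begin{aligned}
E[z]   &= w\mu_1 + (1-w)\mu_2 \\
E[z^2] &= w(\mu_1^2 + \sigma^2) + (1-w)(\mu_2^2 + \sigma^2) \\
E[z^3] &= w(\mu_1^3 + 3\mu_1\sigma^2) + (1-w)(\mu_2^3 + 3\mu_2\sigma^2) \\
E[z^4] &= w(\mu_1^4 + 6\mu_1^2\sigma^2 + 3\sigma^4) + (1-w)(\mu_2^4 + 6\mu_2^2\sigma^2 + 3\sigma^4)
\end{aligned}
```

Four equations in the four unknowns (w, μ1, μ2, σ), with the left-hand sides read from the filtered moment maps.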

Ultimately I had to change direction a third time, going back to a variation of the first method, which I had previously (and stupidly) discarded without evaluating it. The approach is relatively simple, no monstrous equations involved, and in a very short time I got something up and running:

ESM

A new shadow map filtering method

As you can see, this image is quite similar to its counterpart rendered with variance shadow maps. The new method does not handle non-planar receivers as well as VSM does (note the shadow cast by the handle over the teapot), but it only requires two or four bytes per shadow map texel, just like canonical shadow map implementations. Mip mapping and filtering in shadow map space work fine too, but the most relevant advantage of this new technique is that it is completely free of light leaks at overlapping shadows (and no over-darkening is required, it just works!). The only observable light bleeding is uniformly diffused and only depends upon the relative distance between casters and receivers. It is completely controllable via a scalar value, so we can trade sharper (but still anti-aliased) shadow edges for fewer light leaks.

ESM - no bleeding

the malicious triangle is back but light leaks are no more!

Evaluating the occlusion term is also very cheap: it just requires a handful of math ops and a single (at least bilinearly filtered) sample from the shadow map, so this technique can be implemented even on not very fast GPUs, as long as they support Shader Model 2.0. Although at this point I know why it doesn’t always do an amazing job, I still have to find a way to fix the remaining problems without incurring severe performance degradation. It is also easy to derive a ‘dual’ version of the original method that does not exhibit any of the previous issues, but it over-darkens shadows a bit too much.

I will post more about it in the future, adding more information (I’m still trying to improve it!), but all the details will be available next February when the sixth iteration of the ShaderX book series ships to bookstores.

28 Responses to “A (not so) little teaser”

  1. Jim Tilander Says:

    A nice little teaser I think. And this is also the first time I’ve seen more elaborate comparison pictures from you. How goes the article writing? I have my notes for you, remind me during the week to bring them in, they’re molding in my backpack right now.

    Also, I would love to see some more complex scenes rendered with this new technique 🙂 Start coding Marco!

  2. marco Says:

    Hello Jim, bring your notes with you tomorrow, pls 🙂
    Complex scenes will hopefully arrive soon, though I’d like to get the current technique to its full potential first.
    Unfortunately at this time I don’t know if what I’m looking for is mathematically possible or not.
    I guess I have to study some additional math..

  3. Rok Says:

    I was wondering when you were gonna publish this stuff somewhere 🙂 Looking forward to the full article too.

  4. Joshua Says:

    Nice job Marco! Very well explained and very interesting. It looks promising, and my eyes thank you 🙂

  5. ShootMyMonkey Says:

    Hmmm… the results you’re getting look strangely similar to some of those that I’m getting with a filtering method I’m playing around with as well, so I’m kinda curious as to whether you’re thinking along the same lines. Still some bleeding, all right, but nothing as bad, and it only seems to show up between occluders near each other and/or receivers near their occluders (I’m also seeing the same thing).

    As a hint, the approach I’m playing around with uses some differential filters and is sort of derived from Taylor Series.

  6. marco salvi Says:

    At the moment I have two different ways to derive my method: one is statistical, the other one uses signal theory. Even though the starting points are completely different, both derivations lead to the same formulas. No Taylor series are involved so far, even though a Taylor series can be used to explain why it works/makes sense.
    How many values are you rendering per shadow map pixel?

  7. purpledog Says:

    I’m extremely curious about the improvement you are thinking about.
    Imo, and strictly speaking, there’s no way to improve light-bleeding if you stay with the 2 moments, as the information has been lost once and for all.

    Btw this is why I really-really admired what you tried to do to improve the basis in which the distribution function is represented. I’d be tempted to say that this is a great contribution by itself (even if the conclusion is “don’t go there!”).

    Anyway, I can see how a different “estimation strategy” can lead to shadows behaving much better in real world examples. To be continued 🙂

  8. marco salvi Says:

    Hi purpledog,
    I agree with you, two moments are not enough and unfortunately even ten or twenty are not enough! A probability distribution can be reconstructed from its moments (when they exist..), but if you see it as a series expansion you could say that it converges to the ‘true’ distribution very slowly.
    I guess you’re cooking something as well, care to say a bit more about it? 🙂

  9. ShootMyMonkey Says:

    I’m rendering 3 values per shadowmap pixel myself — essentially the equivalents of z, z’, and z”. I do progressively smaller filters on the 3 components (9×9 on z, 3×3 on z’, none on z”). I base my evaluation on the difference between a linear relationship with z’ and a quadratic with z”, as you’d expect.

    Differences in display contrast and gamma settings show some rather interesting discrepancies. I look at your screens as well as my own at home, and I see virtually no light bleeding except when the occluder is right up against the receiver, and that’s pretty mild. At work on those so-called 3000:1 LCDs we have, it’s quite obvious (though not as common or as severe as the regular VSM version).

  10. marco salvi Says:

    That’s interesting, you have a local second order approximation of your depth, but does this eliminate light bleeding when depth has big jumps (i.e. high variance)?

    On the laptop I’m using now I can see some light bleeding on my images that was not visible when I authored them on my desktop computer. Not a big issue imho cause the amount of bleeding in the image can be tweaked by changing a scale parameter, even though making everything too sharp can make shadows around the edges shrink and disappear (this is due to a particular shortcut I took..)
    I’m quite happy with it cause I only render a single depth value per shadow map texel; the algorithm itself is really cheap right now, still working on an extension of it that can fix the remaining problems without making it too slow.
    Dunno if such a thing exists though 🙂

  11. ShootMyMonkey Says:

    It actually does reduce the issue with large jumps most of the time. In common cases (common for us, anyway). The problems come when one of the components gets much larger than the other. For instance, if you have a large jump between two roughly light-facing polygons, then z’ is large, but z” is small. If you have two relatively close occluders and one is nearly lightfacing, and the nearer (to the light) is cylindrical or spherical, then you have a large z” and a small z’, and you just get sharp shadows.

    Though whether the jump is large or small, I still get light bleeding when the receiver isn’t far from the occluders. One of the nice freebies I get is progressive softening as the receiving point is farther from the light, and that’s without having to construct any mipmaps or have variable filter sizes — obviously it’s bounded in how much softening I get, though. Still, I may still have to tear it down because some of the trouble cases I get might be a little too common for some of our projects. Either that, or I need to mess with how I evaluate z’ and z”…

  12. marco salvi Says:

    Thanks SMM, your idea sounds really interesting, I never thought about addressing the filtering problem that way. One last question: does your light bleeding at contact points also depend upon the relative distance between receiver+occluder and the light? What I mean is: if you move your light away from the scene does the light bleeding get worse (or better..)?

  13. Andrew Lauritzen Says:

    Aww now you’re gonna make me buy ShaderX 6 just to read your chapter too 😉

    Seriously though, it sounds like you’ve done some really cool work on shadow filtering and I’m looking forward to reading about it in detail! I’ve been hoping that someone (smarter than me :D) would take the idea of VSM and similar algorithms and run with them and it looks like you’ve come up with some really useful extensions.

    The extensions that I’m coming up with seem to work pretty well so far as well, but they require storing more data (although only reading back one or two samples still). That said it seems like you’ve actually reduced the amount of data storage/access so I’m really looking forward to reading about how!

    Kudos, and keep up the great work 🙂

  14. marco salvi Says:

    Hi Andrew,
    you made me buy (GP;))GPU Gems 3, which is very expensive! 🙂

    Jokes aside, thanks again for your work, cause it has introduced a new way to address shadow map filtering and sparked a lot of interest in this subject.
    Can you say when we will be able to know more about your extensions to VSM?
    At this point I don’t know if I should call my algorithm an extension to VSM or not, cause in some sense it extends that idea, but in another sense it departs from it.
    BTW.. I also tried to extend VSM using ‘more of the same’ (seeing variance as a distribution and computing mean and variance of variance itself) but I couldn’t properly put all these data together (and numerical stability was poor as well..)
    Maybe you succeeded in this..

  15. ShootMyMonkey Says:

    I’ve only experimented in simple scenes like yours (at best, a character standing in 2 lights), and in scenes like that, the bleeding behavior is a little weird. If the light is at a shallow angle with respect to the receiver and a not shallow angle with respect to the occluder(s), the bleeding seems to decrease with distance… otherwise, there’s little change. I get the impression that this is due to the relative impact of the filter size.

    I still need to play around with it in more complex scenery and also see what sort of impact resolution has on it. In any case, you’ve piqued my curiosity, so I’m all too interested in trying out what you’ve done. Mine is simply the product of a dream I had about Taylor series and Laplacians and Sobel filters and VSMs and Euler’s Method… yeah, I have weird dreams. You’ve obviously put more thought into yours… I hope. 😉

  16. marco salvi Says:

    SMM, frustration can do miracles! I tried so many different approaches and variants (that didn’t work properly..) that I thought there was nothing left to do to improve VSM (light-bleeding wise).
    I put a lot of effort into very complex methods, but simple ideas win most of the time.
    I don’t know if I put more thought into this stuff than you did (but I certainly thought a lot about it!).
    The main problem is that sometimes you need to mess with very complex stuff to understand where simplicity is 🙂
    Anyway, I’m really interested in your method and I hope you will publish more details about it soon!

  17. Andrew Lauritzen Says:

    Yeah I’ll be happy to talk about the research I’ve been doing once I’ve fiddled around some more (and fully understood what’s going on). I’m planning to submit a paper (probably to I3D in October) so if that happens I’ll certainly be releasing more info soon after that, and the paper itself if it gets accepted.

    As you mention for your work, the stuff I’ve been doing doesn’t exactly have to be an extension to VSMs (in some ways it can generalize a lot of shadow filtering algorithms), although it is designed to eliminate light bleeding in particular which isn’t necessary for algorithms that don’t have that artifact.

    In any case I’m a fan of anything that makes shadows prettier and faster, so I’m really looking forward to reading your work Marco, as well as yours SMM.

    Cheers!

  18. marco salvi Says:

    It seems that even purpledog is working on something, hopefully he will spill some beans 🙂

  19. ShootMyMonkey Says:

    Just as an update, I’ve been messing around with my hack a little bit, and I’m seeing it will pretty well need a nice load of thought and experimentation to look like I’d want it to… I’ll have to play around with the interval construction and filtering methods to find something that works. Seems we’re all in the same shoes.

    Scene complexity doesn’t really make much difference in the behavior, and lower shadowmap resolution does decrease bleeding strength, but of course the same bleeding covers larger real estate.

    BTW Marco, if you ever feel the urge to head down to Redwood City and talk shop over a few beers, come on by. This is in NO WAY an attempt to try and get you into a drunken stupor so that you spill secrets more easily… nothing of the sort. 🙂

  20. purpledog Says:

    I’m between two jobs (well, almost) and you can probably guess that from my peaking beyond3d activity. I’ll have one month of “holidays” and two major projects: shadows and eternity 2.

    As soon as I have something to show I’ll give you some hint. That’s more along the line of “warped projection” as you call them.

    I’m making good progress with eternity 2.
    http://uk.eternityii.com/try-eternity2-online
    I’m open to collaboration here 🙂

  21. marco salvi Says:

    SMM, thanks for your invitation, I will surely pay a visit to Redwood City sooner or later. (I’m in half-crunch mode at the moment..)

    Purpledog: I had really bad experiences with warped projection schemes and I now think that it would be just better to partition the screen in a few regions and apply regular projections on each area (as ppl do with cascaded shadow maps..)
    Interested to hear about your own take on the subject anyway, any ETA for that? 🙂

  22. Jim Tilander Says:

    SMM,

    Make sure that you get Marco drunk as well when he is down there, I’ve been trying now for a couple of months but it’s hard. Maybe I’ll give it one more try 🙂

  23. purpledog Says:

    Well, my take on the subject is exactly like yours, yoda:
    “it would be just better to partition the screen in a few regions and apply regular projections on each area” :-)))))

    The general idea is to code an efficient “deformable” grid which keeps some good properties:
    – converges very fast to the “areas of interest” (a bit like a magnifying glass really)
    – allows holes (or maybe not)
    – is stable over time (or maybe not…)

    The “areas of interest” are things like the bits the eye can see, viewed from the light.

    One big problem comes from the fact that once the grid is done, one has to render polygons which can span many cells. The geometry shader can help by subdividing until at most 2×2 cells are intersected.

    Ok, so that’s the rough idea I’m trying to push further. Sylvain already helped me and I’m hoping to steal more of his time when I have some code written.
    http://www-sop.inria.fr/reves/personnel/Sylvain.Lefebvre/

    As I said, no code has been written yet, so that’s not even “a very early stage”, that’s more “non-existent” 😦
    Anyway… Along the lines of those methods which sneakily try to turn rasterizing into ray-casting 🙂

    Any feedback more than welcome of course…

  24. ShootMyMonkey Says:

    Jim,

    I’ll do what I can… 🙂 when you say “hard,” do you mean hard to push him (i.e. he has too much self-control) or hard to get him anywhere towards that point (i.e. he can hold his liquor)? You’re welcome to join in — we can double-team him ;).

  25. Marco Salvi Says:

    to purpledog: I’m very interested in your idea, cause I don’t like warping schemes for shadow map projections (I believe they are all a big waste of time..). What you’re exploring is the way to go imho: ideally we want to make sure the number of pixels needed to sample visibility from a punctiform light converges to the number of pixels we employ to sample the scene from the camera. While you’re working on your idea don’t forget to factor in shadow map filtering algorithms that work in light space -> give away just a bit of shadow map resolution and leave some space on the image boundaries so that visibility reconstruction filters have full support 😉
    Congratulations on also working with one of the most brilliant young researchers around these days, the stuff he did with Hoppe is pretty amazing.
    Interesting, your last comment: the irregular z-buffer paper shows how to turn rasterization into ray-casting/first-hit ray tracing, but it requires hw support, construction/maintenance of a spatial subdivision structure and it kills every opportunity to have cheap smooth/soft shadows, which means we won’t see it running in realtime anytime soon. So we definitely need something in the middle that leverages common/fast/cheap rasterizers!

    to SMM: What my friend Jim is trying to say is that I don’t drink. Don’t worry..you can have your beer, I’ll have an orange juice..:)

  26. multisample Says:

    So… any more info than this little teaser? It’s definitely got me intrigued, but you have kept the important info to yourself 🙂 Any idea when we will find out more?

  27. Time for a brief update « Pixels, Too Many.. Says:

    […] January 23, 2008 — Marco Salvi Four months ago I tried to stimulate your curiosity with this post. Next month I will try to stimulate it even more with this talk at the upcoming Game Developer […]

