Stochastic Rasterization and Deferred Rendering

After discussion with Repi, and reading the recent Decoupled Sampling paper, I’ve been thinking about how deferred rendering techniques can interact with stochastic sampling for defocus and motion blur. We all know that deferred techniques don’t play very nicely with MSAA, but the issues are generally solvable, for a few reasons:

  • The number of samples is usually small (2 or 4, typically)
  • The number of pixels that need multiple samples is relatively low (i.e. only edge pixels)
  • The mapping of shading samples to visibility samples is straightforward – the shading sample is the pixel that contains the visibility samples (a small sketch of this follows the list).
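
To make that last point concrete, here’s a minimal C++ sketch of deferred shading for a single 4x MSAA pixel. All of the types and names are invented for illustration; the point is just that interior pixels can shade once and reuse the result for every visibility sample, and even edge pixels only need a handful of per-sample evaluations.

    #include <array>

    // Toy types -- not from any real engine.
    struct GBufferSample { float depth; unsigned materialId; /* normal, albedo, ... */ };
    struct Colour { float r = 0, g = 0, b = 0; };

    // Stand-in for the lighting calculation.
    Colour ShadeMaterial(const GBufferSample&) { return { 1.0f, 1.0f, 1.0f }; }

    // Deferred shading of one 4x MSAA pixel. The shading sample is simply "the
    // pixel": interior pixels shade once, and only pixels whose samples disagree
    // (edge pixels) take the per-sample path, capped at 4 evaluations.
    Colour ShadeDeferredPixel(const std::array<GBufferSample, 4>& samples)
    {
        bool edgePixel = false;
        for (int i = 1; i < 4; ++i)
            if (samples[i].materialId != samples[0].materialId)
                edgePixel = true;

        if (!edgePixel)
            return ShadeMaterial(samples[0]);        // one shading sample per pixel

        Colour sum;                                  // per-sample shading, then resolve
        for (const GBufferSample& s : samples)
        {
            Colour c = ShadeMaterial(s);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
        return { sum.r / 4, sum.g / 4, sum.b / 4 };
    }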

Stochastically-sampled defocus and motion blur blow all of these out of the water.

  • To get decent looking blurs, you need a large number of samples.
  • Potentially all pixels need multiple samples (e.g. fast camera movement).
  • The mapping from shading samples to visibility samples is far from trivial.
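
For context, each visibility sample in the stochastic case carries more than a subpixel position: it also needs a lens position for defocus and a time within the shutter interval for motion blur. Here’s a toy C++ sketch of generating such a sample set (pure random for brevity; a real implementation would use stratified or low-discrepancy patterns, and all the names are mine):

    #include <random>
    #include <vector>

    // One stochastic visibility sample.
    struct StochasticSample
    {
        float px, py;   // subpixel offset in [0,1)
        float u, v;     // lens position in [-1,1] (defocus)
        float t;        // time within the shutter interval, in [0,1) (motion blur)
    };

    std::vector<StochasticSample> GenerateSamples(int count, unsigned seed)
    {
        std::mt19937 rng(seed);
        std::uniform_real_distribution<float> unit(0.0f, 1.0f);
        std::uniform_real_distribution<float> lens(-1.0f, 1.0f);

        std::vector<StochasticSample> samples;
        samples.reserve(count);
        for (int i = 0; i < count; ++i)
            samples.push_back({ unit(rng), unit(rng), lens(rng), lens(rng), unit(rng) });
        return samples;
    }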

So, the blunt answer to “how does stochastic sampling interact with deferred techniques” is “it doesn’t.” And by “deferred techniques” I don’t just mean deferred shading. I mean anything that uses existing scene contents to alter the scene outside of the main render pass and isn’t a pure post-process, including things like deferred fog, soft particles, projected decals and SSAO. The last one’s a bit of a kicker, since there’s no way to do it in anything other than a deferred fashion.
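
Soft particles are probably the simplest example of this dependency: the particle shader reads the scene depth under the pixel and fades the particle out as it approaches the opaque geometry behind it. A rough sketch of the fade term, assuming linear view-space depths (the function name and fade-range parameter are made up for illustration):

    #include <algorithm>

    // 0 = fully faded out (particle is at or behind the scene geometry),
    // 1 = fully visible (particle is at least fadeRange in front of it).
    float SoftParticleFade(float sceneDepth, float particleDepth, float fadeRange)
    {
        float separation = sceneDepth - particleDepth;
        return std::clamp(separation / fadeRange, 0.0f, 1.0f);
    }

The technique only works because there is a single, meaningful scene depth per pixel to read back.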

The problem is that once the render target has been resolved to pixels, non-colour values no longer make any sense. This is exactly the same problem as with MSAA, but everywhere. If a fast-moving foreground object passes in front of a distant object, the depth values for the pixels it blurs across end up somewhere in between the two objects, which is clearly not usable. Just taking the nearest depth isn’t a solution either, especially when most of a pixel’s samples come from the “far” object, as the motion blur will leave a shadow of incorrect shading.
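
A quick numeric illustration, with entirely made-up sample values:

    #include <algorithm>
    #include <array>
    #include <cstdio>

    int main()
    {
        // Depth samples in one pixel crossed by a fast-moving foreground object
        // (depth 1) in front of a distant object (depth 100).
        std::array<float, 8> depths = { 100, 100, 100, 100, 100, 100, 1, 1 };

        float average = 0;
        for (float d : depths) average += d;
        average /= depths.size();

        float nearest = *std::min_element(depths.begin(), depths.end());

        // The average (~75) matches neither surface; the nearest (1) hands the
        // whole pixel to the foreground object even though most samples saw the
        // background, so any deferred pass shades a "shadow" of the blur.
        std::printf("average depth = %.2f, nearest depth = %.2f\n", average, nearest);
    }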

The current solution for this is to perform calculations at sample frequency for pixels that need it. However, this isn’t really feasible in this case, since (a) all pixels need it and (b) there’s a huge number of samples.
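
Some back-of-the-envelope arithmetic (with numbers that are purely illustrative, not measurements) shows how different the two situations are:

    #include <cstdio>

    int main()
    {
        const long long pixels = 1920LL * 1080;       // 1080p

        // 4x MSAA deferred: everything shades once per pixel, plus an extra
        // 3 evaluations for the ~10% of pixels flagged as edges (assumed).
        long long msaaShades = pixels + (pixels / 10) * 3;

        // Stochastic defocus/motion blur: assume every pixel needs all of its
        // 32 samples shaded (the sample count is an assumption).
        long long stochasticShades = pixels * 32;

        std::printf("MSAA-style per-sample shading: ~%lld evaluations\n", msaaShades);
        std::printf("stochastic per-sample shading: ~%lld evaluations\n", stochasticShades);
    }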

I’ve been trying to think up ways around this problem, along the lines of decoupled sampling, though to be perfectly honest I’ve not made much progress. You could try storing a separate render target that is effectively a full-screen version of the shading space described in the paper. This would hold information for an unblurred/aliased version of the scene, which could be mapped to and from the final version in the same way the paper does at shading time. Firstly, however, that mapping is not trivial, so it would have to be stored somewhere as it is built. More critically, failing the depth test in this unblurred version of the scene does not mean you failed the depth test in the final version. And worse still, the number of fragments from a given pixel in the unblurred version that map to contributing samples in the final version is unbounded: consider n pixel-sized balls all in the same position at time t=0, but spread out in different directions over the shutter interval such that each contributes at least one sample to the final image; there is no theoretical limit on n.

So, this implies you need some form of list at each pixel of shading samples that contribute to the final image, and the location of the samples that they contribute to. Which is all beginning to sound (a) complicated (b) messy and (c) slow.
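
For what it’s worth, that list would look a lot like the per-pixel linked lists used for order-independent transparency. A sketch of the kind of structure involved (all names invented, and only a sketch):

    #include <cstdint>
    #include <vector>

    constexpr uint32_t kEndOfList = 0xFFFFFFFFu;

    // One node: a reference to the shading sample that contributed, which of
    // this pixel's visibility samples it landed in, and a link to the next node.
    struct ContributionNode
    {
        uint32_t shadingX, shadingY;   // where the shading sample lives
        uint16_t visibilitySample;     // which sample within this pixel it covers
        uint32_t next;                 // index of the next node, or kEndOfList
    };

    struct ContributionLists
    {
        std::vector<uint32_t>         heads;   // one list head per pixel
        std::vector<ContributionNode> nodes;   // shared node pool, unbounded in size
    };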

So, assuming someone cleverer than I am doesn’t come up with a solution to the problem, the unfortunate conclusion seems to be that, if we want stochastic sampling for defocus and motion blur, we may need to sacrifice some of the techniques games have come to rely on over the last few years. That, or we’re stuck with post-processed depth of field and motion blur for a while longer, which isn’t a very happy thought.
