User:Cdecoro/Filtering

Density Resampling

A problem with density filtering for outlier rejection is that while we are able to remove path samples that lead to excessive error, we are not actually contributing any additional information to the pixels, and so cannot reduce pixel-to-pixel variance. One approach would be to consider the point samples in path space as merely defining a density function, which can itself be integrated. We can then draw additional points from the density function (which correspond to predicted points in path space), evaluate their densities, and splat them into the image.
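
In pseudocode, the resampling step is roughly the following minimal sketch (a Gaussian KDE over per-sample feature vectors; the function and parameter names, and the kernel choice, are illustrative assumptions rather than the exact implementation):

 import numpy as np
 
 rng = np.random.default_rng(0)
 
 def resample_density(samples, values, bandwidth, n_new):
     # samples   : (n, d) path-space feature vectors (position, normal, ...)
     # values    : (n,)   radiance carried by each sample
     # bandwidth : (d,)   Gaussian kernel std-dev per feature dimension
     # n_new     : number of predicted points to draw from the density
     n, d = samples.shape
     # Drawing from a Gaussian KDE: pick a source sample uniformly,
     # then jitter it by the kernel.
     idx = rng.integers(0, n, size=n_new)
     new_pts = samples[idx] + rng.normal(scale=bandwidth, size=(n_new, d))
     # Evaluate the kernel-weighted radiance at each predicted point,
     # ready to be splatted into the image.
     new_vals = np.empty(n_new)
     for i, p in enumerate(new_pts):
         w = np.exp(-0.5 * np.sum(((samples - p) / bandwidth) ** 2, axis=1))
         new_vals[i] = np.dot(w, values) / max(w.sum(), 1e-12)
     return new_pts, new_vals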

While it doesn't look like much yet, the image below is the result of resampling the density function defined by the existing samples.

Resampling.jpg

The challenge is to preserve the hard edges; I got some mileage out of manually tweaking the various weights for normals/incident directions/etc., which allowed features like the corners to show up more sharply (admittedly, they're still very soft). My next step is to consider the variance of the neighbors about each sample and use that anisotropic distance when resampling the density. I think this is the right point to apply the anisotropic distance, by the way -- not after aggregating all the points to a single pixel as before, which ignores the multiple modes.
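
For reference, a minimal sketch of the neighbor-variance idea (a Mahalanobis-style distance from a local covariance estimate; the names and the covariance regularization are my own assumptions):

 def anisotropic_weights(samples, p, neighbor_idx, eps=1e-6):
     # Estimate the spread of p's neighbors and measure distance in that
     # frame, so the kernel stretches along an edge and stays narrow
     # across it, rather than blurring isotropically.
     nbrs = samples[neighbor_idx]
     cov = np.cov(nbrs, rowvar=False) + eps * np.eye(samples.shape[1])
     inv = np.linalg.inv(cov)
     diff = samples - p
     d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)  # squared Mahalanobis distance
     return np.exp(-0.5 * d2)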

Averaging Samples

I also implemented the idea we talked about where the samples are averaged in path space before splatting. The effect is that samples are "pulled" across the screen towards similar areas, and the boundaries delineating regions become black. Shown below are the original image (with 100k samples) and the sample-averaged image for two screen filter widths (the second is wider). Note that the gammas are a bit off, just from the way I exported the images. An interesting takeaway is that it becomes very clear which areas are undersampled, which might guide an adaptive approach.
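
A sketch of the averaging pass as one mean-shift-like step (each sample's screen position is replaced by the similarity-weighted mean of its neighbors'; the windowing and names are assumptions, not the exact implementation):

 def average_in_path_space(samples, screen_xy, bandwidth, filter_width):
     # Samples drift toward regions with similar path-space features,
     # and the boundaries between regions empty out (go black).
     shifted = np.empty_like(screen_xy)
     for i, p in enumerate(samples):
         w = np.exp(-0.5 * np.sum(((samples - p) / bandwidth) ** 2, axis=1))
         # only average within the screen-space filter window
         w = w * (np.abs(screen_xy - screen_xy[i]).max(axis=1) < filter_width)
         shifted[i] = (w[:, None] * screen_xy).sum(axis=0) / max(w.sum(), 1e-12)
     return shifted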

Averaged original.jpg Averaged 32.jpg Averaged 64.jpg

Confidence

I've tried to come up with some measure of the confidence/error of a given integral/pixel without simply resorting to the sample variance at that pixel; more specifically, I want to draw this from the density. One approach I've tried is estimating an x%-confidence interval (e.g., x = 95) at each sample in path space; when reconstructing a pixel, we can aggregate the confidence intervals to get an idea of the overall confidence of the integral. In the image below, for example, the roof has the widest confidence interval (largest error); regions in the penumbra and silhouette of the sphere have high error, as do the regions of the walls and cone that receive the most color bleeding (note the increase in error left-to-right across the cone; this goes down with a less reflective right wall). Currently this is not too different from what we would get just using variance, but I like that it is not tied to unimodal assumptions; I need to see something like this on the multimodal yellow-blue-lights scene.
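
A sketch of the per-sample interval estimate (a kernel-weighted percentile interval; it assumes each sample already has kernel weights over its path-space neighbors, and the names are hypothetical):

 def confidence_interval(values, weights, lo=2.5, hi=97.5):
     # Weighted percentile interval of the neighbors' radiance values;
     # the interval width serves as the per-sample error estimate and
     # makes no unimodality assumption about the local density.
     order = np.argsort(values)
     v, w = values[order], weights[order]
     cdf = np.cumsum(w) / np.sum(w)
     return np.interp(lo / 100.0, cdf, v), np.interp(hi / 100.0, cdf, v)

Aggregating the interval widths of the samples falling in each pixel then gives the per-pixel confidence visualized below.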

Confidence.jpg