Varying Filter Size
I'm using the per-sample density as a measure of confidence, and increasing the splat width for points with low density. It does a reasonably good job of reducing moderate noise; note the penumbra below the sphere and the highlights. If you look closely, noise in the (non-highlight) reflections is also significantly reduced. The noise on the roof is only slightly, but still noticeably, reduced. Turning up some smoothing parameters can significantly reduce the roof noise, but it causes the highlights to bleed over the edges. While most edges are well preserved, in some cases there is blurring, especially in the checkered squares on the upper half of the background (lower down, the confidence is higher, so points are better preserved).
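The idea can be sketched roughly as follows: estimate each sample's local density from its k-nearest-neighbor distance in path space, and widen the splat where that density is low. The function name, the k-NN density proxy, and the clipping range are all my illustrative choices, not the exact scheme used here.

```python
import numpy as np

def splat_radii(samples, k=8, base_radius=0.01, max_radius=10.0):
    """Assign a splat radius per sample: low local density -> wide splat.

    `samples` is an (N, D) array of path-space feature vectors. The
    distance to the k-th nearest neighbor serves as an inverse density
    estimate; the radius law and clipping are illustrative only.
    """
    n = len(samples)
    radii = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(samples - samples[i], axis=1)
        knn = np.sort(d)[k]  # k-th nearest neighbor distance (index 0 is self)
        # splat radius grows with k-NN distance (i.e. with low density),
        # clipped so isolated outliers cannot smear arbitrarily far
        radii[i] = np.clip(knn, base_radius, max_radius)
    return radii
```

A confident (dense) region keeps tight splats, while sparse regions such as the roof get wide, smoothing splats.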
Mean Shift Filtering
The three images show the original, followed by outlier rejection, followed by outlier rejection plus shifting.
These two show a before-and-after comparison of mean shift filtering for a section of ceiling to the right of the light.
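For reference, a minimal version of the mean shift step looks like the following: each sample is iteratively moved to the kernel-weighted mean of its neighbors, which converges to the local mode of the density. This is the generic Gaussian-kernel formulation, not the specific path-space feature weighting used here.

```python
import numpy as np

def mean_shift(points, bandwidth=0.5, iters=20):
    """Shift each point toward the local mode of the kernel density.

    `points` is an (N, D) array; the original points stay fixed as the
    density's support while the shifted copies move uphill. Gaussian
    kernel with a single scalar bandwidth (a simplifying assumption).
    """
    shifted = points.copy()
    for _ in range(iters):
        for i in range(len(shifted)):
            d2 = np.sum((points - shifted[i]) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))
            # kernel-weighted mean of all samples = one mean shift step
            shifted[i] = (w[:, None] * points).sum(axis=0) / w.sum()
    return shifted
```

Samples within one mode collapse together (reducing variance), while well-separated modes stay separate, which is what distinguishes this from naively averaging everything at a pixel.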
A problem with the results of density filtering for outlier rejection is that while we can remove path samples that lead to excessive error, we do not actually contribute any additional information to pixels that would reduce pixel-to-pixel variance. One approach is to treat the point samples in path space as merely defining a density function, which can itself be integrated. We can draw additional points from that density function (corresponding to predicted points in path space), evaluate their densities, and splat them into an image.
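One simple way to draw from such a density is to treat the samples as a kernel density estimate: pick an existing sample uniformly, jitter it by the kernel, and assign the new point a kernel-weighted average of the neighbors' values. This is a generic KDE-resampling sketch under my own assumed names and bandwidths, not the exact procedure used here.

```python
import numpy as np

def resample_density(samples, values, n_new, bandwidth=0.1, seed=None):
    """Draw predicted path-space points from the KDE implied by `samples`.

    `samples` is (N, D), `values` is (N,). Each new point is an existing
    sample jittered by the Gaussian kernel; its value is a kernel-weighted
    average of the original samples' values (an illustrative choice).
    """
    rng = np.random.default_rng(seed)
    idx = rng.integers(len(samples), size=n_new)
    new_pts = samples[idx] + rng.normal(scale=bandwidth,
                                        size=samples[idx].shape)
    new_vals = np.empty(n_new)
    for j, p in enumerate(new_pts):
        d2 = np.sum((samples - p) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        new_vals[j] = (w * values).sum() / (w.sum() + 1e-12)
    return new_pts, new_vals
```

The predicted points can then be splatted exactly like real path samples, adding information to sparsely sampled pixels.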
While it doesn't look like much yet, this is the result of resampling the density function from the existing samples.
The challenge is preserving hard edges. I got some mileage out of manually tweaking the various weights for normals, incident directions, etc., which let features like the corners show up more sharply (admittedly, they are still very soft). My next step is to consider the variance of the neighbors about each sample, and use this anisotropic distance when resampling the density. I think this is the right point to apply the anisotropic distance, by the way, rather than after aggregating all the points to a single pixel as before, which ignores the multiple modes.
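Using the neighbors' variance as a distance amounts to a Mahalanobis-style metric: distances stretch along directions where the neighborhood varies a lot (e.g. along an edge) and shrink across them. A minimal sketch, assuming the local covariance is estimated directly from the neighbor set (the actual feature weights above were hand-tuned):

```python
import numpy as np

def anisotropic_dists(center, points, neighbors, eps=1e-6):
    """Mahalanobis distance from `center` to each row of `points`,
    under the covariance of `neighbors` (the local neighborhood of
    `center`). `eps` regularizes a near-singular covariance.
    """
    dim = neighbors.shape[1]
    cov = np.cov(neighbors.T) + eps * np.eye(dim)
    inv = np.linalg.inv(cov)
    diff = points - center
    # sqrt(diff @ inv @ diff) per row
    return np.sqrt(np.einsum('nd,de,ne->n', diff, inv, diff))
```

An offset along the neighborhood's long axis then counts as "close" (same feature), while the same offset across it counts as "far" (across an edge), so resampling is less likely to blur over the edge.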
I've tried to come up with a measure of the confidence/error of a given integral/pixel without simply resorting to the sample variance at that pixel, and more specifically, to derive it from the density. One approach I've tried is estimating the x%-confidence interval (e.g., x = 95%) at each sample in path space; when reconstructing a pixel, we can aggregate the confidence intervals to give an idea of the overall confidence of the integral. For example, in the following, the roof has the widest confidence interval (largest error); regions in the penumbra and silhouette of the sphere have high error, as do the regions of the walls and cone that receive the most color bleeding (note the increase in error left-to-right across the cone; this goes down with a less reflective right wall). Currently, this is not too different from what we would get by just using variance, but I like the idea of not being tied to unimodal assumptions; I need to test something like this on the multimodal yellow-blue-lights scene.
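One concrete stand-in for such a per-sample interval is the width of the central x% of the weighted empirical distribution of neighbor values around the sample, aggregated per pixel by the splat weights. The exact construction here isn't specified, so the following is only a plausible sketch; unlike variance, a percentile-based width makes no unimodality assumption.

```python
import numpy as np

def sample_ci_width(values, weights, x=0.95):
    """Width of the central x% interval of the weighted neighbor values
    around one path-space sample. `values` and `weights` are (N,) arrays;
    this is a percentile-based spread, not a parametric (Gaussian) CI.
    """
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) / w.sum()
    lo = v[np.searchsorted(cdf, (1.0 - x) / 2.0)]
    hi = v[np.searchsorted(cdf, 1.0 - (1.0 - x) / 2.0)]
    return hi - lo

def pixel_confidence(ci_widths, splat_weights):
    """Aggregate per-sample interval widths into one per-pixel error
    value, weighted by each sample's splat contribution (an assumed
    aggregation rule)."""
    return np.average(ci_widths, weights=splat_weights)
```

Pixels fed by samples with wide intervals (like the roof) then read as high-error, regardless of whether the underlying value distribution is unimodal.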