A problem we have with the results of density filtering for outlier rejection is that while we are able to remove path samples that lead to excessive error, we are not actually contributing any additional information to pixels that would reduce pixel-to-pixel variance. One approach would be to consider the point samples in path space as merely defining a density function, which itself can be integrated. We can draw additional points from the density function (which correspond to predicted points in path space), evaluate their densities, and splat them into an image.
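A minimal sketch of this idea, treating the existing samples as an isotropic Gaussian kernel density estimate, drawing new points from it, predicting a value at each new point by kernel-weighted averaging, and splatting into the image. The function name, the single shared bandwidth, and the Nadaraya-Watson-style value prediction are my illustrative choices, not details from the notes above:

```python
import numpy as np

def resample_density_splat(samples, values, width, height,
                           bandwidth=0.05, n_new=4096, rng=None):
    """Resample a KDE built from path samples and splat predictions.

    samples : (N, d) array; columns 0,1 are screen (x, y) in [0, 1),
              remaining columns are extra path-space features.
    values  : (N, 3) radiance associated with each sample.
    """
    rng = np.random.default_rng(rng)
    n, d = samples.shape
    # Drawing from a KDE mixture: pick a component uniformly, then
    # jitter by the kernel.
    idx = rng.integers(0, n, size=n_new)
    new_pts = samples[idx] + rng.normal(scale=bandwidth, size=(n_new, d))
    # Predict a value at each new point as a kernel-weighted average
    # of the original samples' values.
    d2 = ((new_pts[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    pred = (w @ values) / np.maximum(w.sum(1, keepdims=True), 1e-12)
    # Splat into the image using the first two dimensions as screen coords.
    img = np.zeros((height, width, 3))
    cnt = np.zeros((height, width, 1))
    xs = np.clip((new_pts[:, 0] * width).astype(int), 0, width - 1)
    ys = np.clip((new_pts[:, 1] * height).astype(int), 0, height - 1)
    np.add.at(img, (ys, xs), pred)   # unbuffered add handles repeated pixels
    np.add.at(cnt, (ys, xs), 1.0)
    return img / np.maximum(cnt, 1.0)
```

The brute-force pairwise distance is O(n_new × N); a real implementation would use a spatial index, but this keeps the sketch self-contained.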
While it doesn't look like much for now, this is a result of resampling the density function based on the existing samples.
The challenge is to preserve the hard edges; I got some mileage out of manually tweaking the various weights for normals/incident directions/etc., which let features like the corners show up more sharply (admittedly, they're still very soft). My next step is to consider the variance of the neighbors about each sample, and use this anisotropic distance when resampling the density -- I think this is the right point to apply the anisotropic distance, btw, and not after aggregating all the points to a single pixel as before, which ignores the multiple modes.
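One way to realize that next step: estimate a covariance from each sample's nearest neighbors and use its inverse as a Mahalanobis-style metric when evaluating kernel weights during resampling. This is a hypothetical sketch (brute-force neighbor search, a regularization term `eps` I added for invertibility), not the eventual implementation:

```python
import numpy as np

def neighbor_covariance_metrics(samples, k=8, eps=1e-3):
    """For each sample, estimate the covariance of its k nearest
    neighbors and return the inverse covariance as a per-sample
    anisotropic metric tensor, shape (N, d, d)."""
    n, d = samples.shape
    d2 = ((samples[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]  # skip self at index 0
    metrics = np.empty((n, d, d))
    for i in range(n):
        diffs = samples[nn[i]] - samples[i]
        cov = diffs.T @ diffs / k + eps * np.eye(d)  # regularized
        metrics[i] = np.linalg.inv(cov)
    return metrics

def anisotropic_weight(x, center, metric, scale=1.0):
    """Gaussian kernel weight under the sample's local metric:
    exp(-0.5 * (x - c)^T M (x - c) / scale^2)."""
    diff = x - center
    return np.exp(-0.5 * diff @ metric @ diff / scale ** 2)
```

The effect is that the kernel stretches along directions where the neighborhood has high variance (e.g., along an edge) and stays narrow across it, which is what should keep the edges from washing out.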
I also implemented the idea that we talked about where the samples are averaged in path space before splatting. What you get is that samples are "pulled" across the screen towards similar areas, and the boundaries delineating regions become black. What I've shown below is the original image (with 100k samples) and the sample-averaged image for two sizes of screen filter width (the second is wider). Note that the gammas of each are a bit off, just from the way I exported the images. I think an interesting thing to take from this is that it becomes very clear which areas are undersampled, which might guide an adaptive approach.
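The path-space averaging step can be sketched roughly as follows: each sample's screen position is replaced by a weighted average over all samples, where the weight combines a screen-space filter of some width with similarity in path-space features. The parameter names (`filter_width`, `feature_width`) and the simple nearest-pixel splat are my assumptions, just to make the "pulling" effect concrete:

```python
import numpy as np

def pathspace_average_splat(features, screen_xy, width, height,
                            filter_width=0.05, feature_width=0.1):
    """Average sample screen positions in path space, then splat counts.

    features  : (N, d) path-space features per sample.
    screen_xy : (N, 2) screen positions in [0, 1).
    Returns the count image and the averaged positions.
    """
    d2_feat = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    d2_scr = ((screen_xy[:, None, :] - screen_xy[None, :, :]) ** 2).sum(-1)
    # Joint bilateral-style weight: nearby on screen AND similar in path space.
    w = np.exp(-0.5 * (d2_scr / filter_width ** 2
                       + d2_feat / feature_width ** 2))
    avg_xy = (w @ screen_xy) / w.sum(1, keepdims=True)
    # Samples in distinct path-space regions pull toward their own region's
    # mean, leaving the boundary between regions depleted (black).
    img = np.zeros((height, width))
    xs = np.clip((avg_xy[:, 0] * width).astype(int), 0, width - 1)
    ys = np.clip((avg_xy[:, 1] * height).astype(int), 0, height - 1)
    np.add.at(img, (ys, xs), 1.0)
    return img, avg_xy
```

Since the total sample count is conserved while positions contract toward region interiors, the depleted boundaries directly expose which areas have few similar samples nearby, which is the signal an adaptive sampler could use.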