Denoising with Kernel Prediction and Asymmetric Loss Functions

Supplementary Materials – SIGGRAPH 2018

Thijs Vogels, Fabrice Rousselle, Brian McWilliams, Gerhard Röthlin, Alex Harvill, David Adler, Mark Meyer, Jan Novák

Disney Research, Pixar, Walt Disney Animation Studios

Temporal Denoising

We show denoising results with varying numbers of temporal frames and compare them to a high-quality reference and several baselines.


[Interactive image comparisons for Shots 1–7]
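For concreteness, here is a minimal sketch of the temporal filtering step these comparisons rely on: a kernel-predicting network outputs one filtering kernel per pixel and per frame of the temporal window, and the denoised frame is the kernel-weighted sum over all frames. The function below is an illustrative reimplementation, not the production code; the 5×5 kernel size and the assumptions that neighbor frames are already motion-warped and kernel weights already normalized are ours.

```python
import torch
import torch.nn.functional as F

def apply_temporal_kernels(frames, kernels, k=5):
    """Filter a temporal window of frames with predicted per-pixel kernels.

    frames:  (T, C, H, W)   motion-warped frames of the temporal window
    kernels: (T, k*k, H, W) per-frame kernel weights, assumed normalized
             (e.g. by a joint softmax over all T*k*k weights per pixel)
    returns: (C, H, W)      the filtered center frame
    """
    T, C, H, W = frames.shape
    out = torch.zeros_like(frames[0])
    for t in range(T):
        # Gather the k*k spatial neighbors of every pixel in frame t.
        patches = F.unfold(frames[t:t+1], kernel_size=k, padding=k // 2)
        patches = patches.view(C, k * k, H, W)
        # Kernel-weighted sum of the neighborhood for this frame.
        out += (patches * kernels[t].unsqueeze(0)).sum(dim=1)
    return out
```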

Source-aware Encoder

Source-aware encoders enable straightforward adaptation of a trained model to new content. We train new source-aware encoders for the Tungsten renderer on a training set built from publicly available scenes.


[Interactive image comparisons: Country Kitchen, Curly Hair, The Modern Living Room, Modern Hall, Glass of Water, The Grey and White Room]
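To make the encoder idea concrete, here is a minimal sketch of the overall structure, assuming a PyTorch-style setup: each data source gets its own small encoder that maps that renderer's buffers into a common feature space, and a single shared backbone consumes those features. Module names, channel counts, and the single-convolution encoders are illustrative assumptions, not the paper's exact architecture.

```python
import torch.nn as nn

class SourceAwareDenoiser(nn.Module):
    """A shared denoising backbone behind small per-renderer encoders."""

    def __init__(self, shared_channels=64):
        super().__init__()
        # One lightweight encoder per data source, mapping that renderer's
        # buffers (color, albedo, normals, ...) into a shared feature space.
        # Input channel counts are illustrative guesses.
        self.encoders = nn.ModuleDict({
            "hyperion": nn.Conv2d(17, shared_channels, 3, padding=1),
            "renderman": nn.Conv2d(14, shared_channels, 3, padding=1),
        })
        # Stand-in for the paper's deep residual backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(shared_channels, shared_channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(shared_channels, 3, 3, padding=1),
        )

    def forward(self, buffers, source):
        # Route the input through the encoder matching its data source.
        return self.backbone(self.encoders[source](buffers))
```

Adapting to a new renderer then amounts to registering one more encoder, as the sketch at the end of this page shows.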

Asymmetric Loss

We propose asymmetric loss functions that control the trade-off between over-blurring and leaving residual noise in cases where the network cannot perform well. We show denoised images at multiple settings of the run-time slope parameter λ.


[Interactive image comparisons for Shots 1–11]
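As a sketch of how such a loss can be implemented: the base loss is scaled by λ wherever the denoised value lies on the opposite side of the reference from the noisy input, i.e. wherever the network overshot the reference. The L1 base loss and the mean reduction below are illustrative choices; since λ is a run-time parameter, the full method also exposes it to the network as an input, which the sketch omits.

```python
import torch

def asymmetric_l1(denoised, reference, noisy, lam):
    """L1 loss scaled by lam where the denoised and noisy values lie on
    opposite sides of the reference.

    lam > 1 penalizes overshooting the reference, which biases the
    network toward leaving residual noise rather than over-blurring;
    lam = 1 recovers the ordinary symmetric loss.
    """
    base = torch.abs(denoised - reference)
    # Heaviside gate: 1 where (d - r) and (r - i) share the same sign,
    # i.e. where d and i lie on opposite sides of r.
    gate = ((denoised - reference) * (reference - noisy) > 0).float()
    return (base * (1.0 + (lam - 1.0) * gate)).mean()
```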

Network Size Experiments

We use a deep network (48 layers) with residual blocks. This combination gives the best results among networks with 12, 24, or 48 layers, trained both with and without residual blocks. The convergence plots below illustrate this.

[Convergence plots: Cars (validation) and Moana (validation)]
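For reference, a minimal sketch of the residual-block pattern compared above, in PyTorch; the kernel size, channel count, and activation placement are illustrative assumptions rather than the paper's exact configuration.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions wrapped in a skip connection."""

    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        # The skip connection keeps gradients well-conditioned in deep
        # stacks, which is what makes the 48-layer variant trainable.
        return self.act(x + self.body(x))

# 24 blocks of two convolutions each give a 48-layer backbone.
backbone = nn.Sequential(*[ResidualBlock() for _ in range(24)])
```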

Source-aware Encoders and a Comparison to the NFOR Denoiser

This is an extension of Fig. 8 in the paper. We plot the DSSIM error of our network relative to NFOR when: 1) training a Tungsten-aware encoder for a pre-trained network with frozen weights (orange line; the original network was trained on Moana and Cars data), and 2) training a Tungsten-specific network from scratch with freshly initialized weights (blue line). We vary the training set size (indicated on the horizontal axis) and average the error metrics over sampling rates of 32 to 256 spp. For smaller training sets, training a Tungsten-aware encoder for an existing network gives better results than training from scratch, and both approaches are more robust than NFOR.

[Plots: DSSIM error relative to NFOR vs. training set size]
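The first setting (orange line) corresponds to freezing every pre-trained weight and optimizing only the new encoder. A hypothetical sketch of that procedure, reusing the `SourceAwareDenoiser` sketch from above (the 13 input channels and the learning rate are illustrative):

```python
import torch
import torch.nn as nn

# Pre-trained model; in practice restored from a checkpoint trained
# on Moana and Cars data.
model = SourceAwareDenoiser()

# Register a freshly initialized encoder for Tungsten's input buffers
# (the 13 input channels are an illustrative guess).
model.encoders["tungsten"] = nn.Conv2d(13, 64, 3, padding=1)

# Freeze all weights, then re-enable gradients for the new encoder only.
for p in model.parameters():
    p.requires_grad = False
for p in model.encoders["tungsten"].parameters():
    p.requires_grad = True

# Optimize just the encoder parameters (learning rate illustrative).
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```

The from-scratch setting (blue line) instead reinitializes all weights and trains the full network on the Tungsten data alone.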