Source-aware Encoder / Bitterli The Grey And White Room
Source-aware encoders enable straightforward adaptation of a trained model to new content. We train new source-aware encoders for the Tungsten renderer on a training set built from publicly available scenes.
The interactive viewer below compares results from three networks:
- one trained from scratch (random initialization) on the full set of 1200 frames,
- one trained from scratch (random initialization) on a small subset of 75 frames, and
- one for which only a new source-aware encoder was trained from scratch, using the 75-frame subset, for a network previously trained on Moana and Cars.
The results are compared against the NFOR denoiser (Bitterli et al. 2016). In most scenes, training only a new frontend on the small dataset yields performance similar to training from scratch on the full dataset; the sketch below illustrates this frontend-only adaptation.
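A minimal PyTorch sketch may help make the frontend-swap procedure concrete. This is a sketch under assumptions: the layer widths, the input channel count (`in_channels=10`), and the simple encoder/backbone split are illustrative, not the architecture actually used here.

```python
import torch
import torch.nn as nn

class SourceAwareDenoiser(nn.Module):
    # A per-source encoder (frontend) feeding a shared backbone.
    # The encoder maps renderer-specific inputs (color plus auxiliary
    # features such as albedo and normals) into a common feature space
    # that the source-agnostic backbone consumes.
    def __init__(self, in_channels, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(          # source-aware frontend
            nn.Conv2d(in_channels, feat, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1),
        )
        self.backbone = nn.Sequential(         # shared backbone
            nn.Conv2d(feat, 64, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),    # denoised RGB output
        )

    def forward(self, x):
        return self.backbone(self.encoder(x))

# Adapting to a new renderer (here: Tungsten): keep the backbone that
# was trained on Moana and Cars frozen, and optimize only a freshly
# initialized encoder on the small 75-frame set.
model = SourceAwareDenoiser(in_channels=10)    # channel count is hypothetical
# model.backbone.load_state_dict(pretrained_state)  # reuse trained backbone
for p in model.backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.encoder.parameters(), lr=1e-4)
```

Because only the encoder's parameters are optimized, the adaptation problem stays small, which is consistent with the 75-frame subset sufficing in the results below.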
Images
[Interactive image comparison viewer; zoom, pan, and keyboard controls are not reproduced here.]

Charts
Relative error, obtained by dividing each method's reconstruction error by the error of the default path-traced rendering. For each method we show, from left to right, results at 32 and 256 samples per pixel. For all metrics, lower values are better.
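Concretely, using the MrSE numbers from the table below as a worked example:

```python
# Relative error = a method's error divided by the error of the
# path-traced input at the same sample count.
input_mrse = 0.23303        # "Input" row of the MrSE table, 32 spp
scratch_1200 = 0.00270      # "From scratch (1200)" row, 32 spp
print(scratch_1200 / input_mrse)  # ~0.0116, about 1.2% of the input error
```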
Error metrics
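For reference, here is a minimal sketch of how these two metrics are commonly computed (the epsilon in the MrSE denominator and the use of scikit-image's SSIM are assumptions; the exact implementation behind the numbers below may differ):

```python
import numpy as np
from skimage.metrics import structural_similarity

def mrse(img, ref, eps=1e-2):
    # Mean relative squared error: squared error normalized by the
    # squared reference, averaged over all pixels. eps (assumed value)
    # guards against division by zero in dark regions.
    return np.mean((img - ref) ** 2 / (ref ** 2 + eps))

def dssim(img, ref):
    # Structural dissimilarity: (1 - SSIM) / 2, so 0 is a perfect match.
    ssim = structural_similarity(
        img, ref, channel_axis=-1, data_range=float(ref.max() - ref.min())
    )
    return (1.0 - ssim) / 2.0
```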
MrSE
| Method | 32 spp | 256 spp |
|---|---|---|
| Input | 0.23303 | 0.03361 |
| From scratch (75) | 0.00465 | 0.00125 |
| Frontend (75) | 0.00304 | 0.00091 |
| From scratch (1200) | 0.00270 | 0.00085 |
| NFOR | 0.00692 | 0.00197 |
DSSIM
| Method | 32 spp | 256 spp |
|---|---|---|
| Input | 0.39725 | 0.25114 |
| From scratch (75) | 0.02953 | 0.01425 |
| Frontend (75) | 0.02238 | 0.01184 |
| From scratch (1200) | 0.01974 | 0.01144 |
| NFOR | 0.03787 | 0.01993 |