Compositional Neural Scene Representations for Shading Inference

Jonathan Granskog, Fabrice Rousselle, Marios Papas, Jan Novák

ACM Transactions on Graphics (Proceedings of SIGGRAPH 2020), vol. 39, no. 4

Our neural image generator takes a compositional neural representation of a scene, extracted from three scene observations (left-hand side), and translates a G-buffer (middle) from a classical renderer into a shaded image. The results are temporally stable and feature view-dependent effects such as highlights and reflections.

abstract

We present a technique for adaptively partitioning neural scene representations. Our method disentangles lighting, material, and geometric information, yielding a scene representation that preserves the orthogonality of these components, improves the interpretability of the model, and allows compositing new scenes by mixing components of existing ones. The proposed adaptive partitioning respects the uneven entropy of individual components and permits compressing the scene representation to lower its memory footprint and potentially reduce the evaluation cost of the model. Furthermore, the partitioned representation enables an in-depth analysis of existing image generators. We compare the flow of information through individual partitions and, by contrasting it to the impact of additional inputs (G-buffer), identify the roots of undesired visual artifacts and propose one possible solution to remedy the poor performance. We also demonstrate the benefits of complementing traditional forward renderers with neural representations and synthesis, e.g., to infer expensive shading effects, and show how these could improve production rendering in the future if developed further.
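The compositing idea described above — mixing lighting, material, and geometry components from different scenes — can be sketched with a toy example. This is purely illustrative and does not reproduce the paper's architecture: the partition names are taken from the abstract, but the flat-vector layout, partition sizes, and helper functions are hypothetical assumptions.

```python
# Illustrative sketch only, not the paper's actual model. We assume a
# flat latent vector partitioned into contiguous (lighting, material,
# geometry) segments; the segment sizes below are made up.
import numpy as np

PARTITION_SIZES = {"lighting": 16, "material": 32, "geometry": 64}  # hypothetical

def split(z):
    """Split a flat scene representation into named partitions."""
    parts, offset = {}, 0
    for name, size in PARTITION_SIZES.items():
        parts[name] = z[offset:offset + size]
        offset += size
    return parts

def compose(lighting, material, geometry):
    """Composite a new scene representation from existing partitions."""
    return np.concatenate([lighting, material, geometry])

# Mix the lighting of scene A with the material and geometry of scene B,
# producing a representation for a scene that never existed.
total = sum(PARTITION_SIZES.values())
z_a, z_b = np.random.randn(total), np.random.randn(total)
a, b = split(z_a), split(z_b)
z_mixed = compose(a["lighting"], b["material"], b["geometry"])
```

Because the partitions are kept orthogonal by the training procedure, swapping one segment leaves the information carried by the others intact; in the paper, the adaptive part additionally adjusts segment sizes to match the entropy of each component rather than fixing them as done here.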

downloads

publication

supplementals

video

citation

bibtex

@article{granskog2020,
    author = {Granskog, Jonathan and Rousselle, Fabrice and Papas, Marios and Nov\'{a}k, Jan},
    title = {Compositional Neural Scene Representations for Shading Inference},
    journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH)},
    volume = {39},
    number = {4},
    year = {2020},
    month = jul,
    keywords = {rendering, neural networks, neural scene representations, disentanglement, attribution}
}