Neural Scene Graph Rendering

Jonathan Granskog, Till N. Schnabel, Fabrice Rousselle, Jan Novák

ACM Transactions on Graphics (Proceedings of SIGGRAPH 2021), vol. 40, no. 4

Neural Scene Graph Rendering - teaser

Our scene graphs consist of leaf nodes (drawn as rectangles) that contain learned vectors storing geometry (blue) and materials (pink), and interior nodes that hold transformations (hexagons) and combine them into object instances (blue-pink circles). Symbols and indices are explained in Sections 4 and 5. The geometry and material vectors, as well as the encodings of user-defined transformations and deformations, are learned from data.
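The structure described above can be sketched as a small data structure: a minimal, illustrative model of the graph (not the paper's code), where class names and vector sizes are assumptions.

```python
# Hedged sketch of the scene-graph structure from the figure: leaf nodes hold
# learned high-dimensional vectors (geometry or material), interior nodes hold
# a transformation and combine their children into object instances.
# Class names and field types are illustrative, not from the paper.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Leaf:
    vector: List[float]   # learned vector, e.g. geometry (blue) or material (pink)
    kind: str             # "geometry" or "material"

@dataclass
class Interior:
    transformation: str   # placeholder label, e.g. "translate", "bend", "hue-shift"
    children: List[Union["Interior", Leaf]] = field(default_factory=list)

# An object instance combines a geometry leaf and a material leaf
# under a transformation node:
instance = Interior("translate",
                    [Leaf([0.1] * 16, "geometry"),
                     Leaf([0.3] * 16, "material")])
print(len(instance.children))
```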

abstract

We present a neural scene graph—a modular and controllable representation of scenes with elements that are learned from data. We focus on the forward rendering problem, where the scene graph is provided by the user and references learned elements. The elements correspond to geometry and material definitions of scene objects and constitute the leaves of the graph; we store them as high-dimensional vectors. The position and appearance of scene objects can be adjusted in an artist-friendly manner via familiar transformations, e.g. translation, bending, or color hue shift, which are stored in the inner nodes of the graph. In order to apply a (non-linear) transformation to a learned vector, we adopt the concept of linearizing a problem by lifting it into higher dimensions: we first encode the transformation into a high-dimensional matrix and then apply it by standard matrix-vector multiplication. The transformations are encoded using neural networks. We render the scene graph using a streaming neural renderer, which can handle graphs with a varying number of objects, and thereby facilitates scalability. Our results demonstrate a precise control over the learned object representations in a number of animated 2D and 3D scenes. Despite the limited visual complexity, our work presents a step towards marrying traditional editing mechanisms with learned representations, and towards high-quality, controllable neural rendering.
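The lifting idea in the abstract can be illustrated with a toy sketch: a small network maps transformation parameters to a high-dimensional matrix, which is then applied to a learned vector by ordinary matrix-vector multiplication. This is a minimal, assumed implementation for illustration only; the network shape, dimensionality, and initialization are not from the paper, and the real encoder is trained from data.

```python
# Hedged sketch (not the paper's code): encode a transformation into a
# high-dimensional matrix with a tiny MLP, then apply it to a learned
# vector by standard matrix-vector multiplication.
import numpy as np

rng = np.random.default_rng(0)
D = 16  # dimensionality of a learned geometry/material vector (assumed)

# Untrained encoder weights; in the paper the encoder is learned from data.
W1 = rng.standard_normal((32, 2)) * 0.1   # input: 2 transformation parameters
b1 = np.zeros(32)
W2 = rng.standard_normal((D * D, 32)) * 0.1
b2 = np.eye(D).ravel()  # bias set to the identity so the untrained encoder
                        # leaves latent vectors roughly unchanged

def encode_transformation(params):
    """Map transformation parameters (e.g. a 2D translation) to a D x D matrix."""
    h = np.tanh(W1 @ params + b1)
    return (W2 @ h + b2).reshape(D, D)

def apply_transformation(params, latent):
    """Apply a (possibly non-linear) transformation to a learned vector
    via matrix-vector multiplication in the lifted space."""
    return encode_transformation(params) @ latent

geometry_vec = rng.standard_normal(D)  # stands in for a learned leaf vector
translated = apply_transformation(np.array([0.5, -0.2]), geometry_vec)
print(translated.shape)
```

Because the transformation acts as a plain matrix in the lifted space, transformations compose by matrix multiplication before being applied to the leaf vectors.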

downloads

publication

supplementals

video

citation

bibtex

@article{granskog2021,
    author = {Granskog, Jonathan and Schnabel, Till N. and Rousselle, Fabrice and Nov\'{a}k, Jan},
    title = {Neural Scene Graph Rendering},
    journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH)},
    volume = {40},
    number = {4},
    year = {2021},
    articleno = {164},
    month = aug,
    keywords = {rendering, neural networks, neural scene representations, modularity, generalization},
    doi = {10.1145/3450626.3459848}
}