Generative Detail Enhancement for Physically Based Materials
Saeed Hadadan, Benedikt Bitterli, Tizian Zeltner, Jan Novák, Fabrice Rousselle, Jacob Munkberg, Jon Hasselgren, Bartlomiej Wronski, Matthias Zwicker
ACM SIGGRAPH 2025 Conference Proceedings

We enhance material definitions of existing 3D assets (a) by applying effects specified by text prompts, such as aging and weathering. Conditioned on a set of renderings, we synthesize the corresponding visuals in 2D using a diffusion model that builds on multi-view visual prompting [Deng et al. 2024] (b). We improve the multi-view consistency of the generator with two key additions, view-correlated noise and attention biasing (c), which enable successful inverse rendering of the visual enhancements back to the original material textures (d).
abstract
We present a tool for enhancing the detail of physically based materials using an off-the-shelf diffusion model and inverse rendering. Our goal is to increase the visual fidelity of existing materials by adding, for instance, signs of wear, aging, and weathering that are tedious to author. To obtain realistic appearance with minimal user effort, we leverage a generative image model trained on a large dataset of natural images. Given the geometry, UV mapping, and basic appearance of an object, we proceed as follows: We render multiple views of the object and use them, together with an appearance-defining text prompt, to condition a diffusion model. The generated details are then backpropagated from the enhanced images to the material parameters via inverse rendering. For inverse rendering to be successful, the generated appearance has to be consistent across all the images. We propose two priors to address the multi-view consistency of the diffusion model. First, we ensure that the noise that seeds the diffusion process is itself consistent across views by integrating it from a view-independent UV space. Second, we enforce spatial consistency by biasing the attention mechanism via a projective constraint, so that pixels attend strongly to their corresponding pixel locations in other views. Our approach does not require any training or finetuning of the diffusion model, is agnostic to the material model used, and the enhanced material properties, i.e., 2D PBR textures, can be further edited by artists. We demonstrate prompt-based material edits exhibiting high levels of realism and detail. This project is available at https://generative-detail.github.io.
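To make the first prior concrete, the following is a minimal sketch of view-correlated noise seeding: Gaussian noise is sampled once in a view-independent UV texture and integrated into each view over the pixel's UV footprint. The function names, the `pixel_to_texels` correspondence structure, and the 1/sqrt(n) normalization are our assumptions for illustration, not the paper's code.

```python
# Sketch: seed noise for one view by integrating a shared UV-space noise texture.
# Assumptions: a precomputed rasterization gives, per image pixel, the UV texels
# covered by its footprint; background pixels fall back to i.i.d. noise.
import numpy as np

def view_correlated_noise(uv_noise, pixel_to_texels, image_shape, rng=None):
    """uv_noise:        (H_uv, W_uv) standard Gaussian noise shared by all views.
    pixel_to_texels: dict mapping (row, col) -> list of (u, v) texel indices
                     covered by that pixel (from rasterizing the object's UVs).
    image_shape:     (H, W) of the target view."""
    rng = np.random.default_rng() if rng is None else rng
    out = rng.standard_normal(image_shape)  # pixels off the object keep i.i.d. noise
    for (r, c), texels in pixel_to_texels.items():
        if not texels:
            continue
        vals = np.array([uv_noise[u, v] for (u, v) in texels])
        # Sum and rescale by 1/sqrt(n) so the integrated noise stays unit-variance,
        # which the diffusion sampler expects from its seed.
        out[r, c] = vals.sum() / np.sqrt(len(vals))
    return out

# Usage: one shared UV noise texture; pixel_to_texels would come from
# rasterizing the asset's UV map under each camera.
uv_noise = np.random.default_rng(0).standard_normal((512, 512))
```

Because every view draws from the same UV-space noise, surface points that are visible in several renderings receive correlated seeds, which is what makes the generated detail agree across views.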
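For the second prior, here is a hedged sketch of attention biasing under a projective constraint: a bias is added to the joint multi-view attention logits so that each pixel attends more strongly to its reprojected location in the other view. The Gaussian falloff, `sigma`, `weight`, and the `corr_xy` correspondence input are our illustrative choices, not necessarily the paper's exact formulation.

```python
# Sketch: cross-view attention with a correspondence-based bias on the logits.
import torch
import torch.nn.functional as F

def biased_cross_view_attention(q, k, v, query_xy, corr_xy, sigma=8.0, weight=2.0):
    """q, k, v:   (N, D) tokens of the joint (multi-view) attention layer.
    query_xy:  (N, 2) pixel coordinates of each token in its own view.
    corr_xy:   (N, 2) pixel coordinates of each token's correspondence in the
               other view (from rendered depth + known cameras); rows without
               a valid correspondence can be NaN and receive no bias."""
    d = q.shape[-1]
    logits = q @ k.t() / d**0.5                      # (N, N) attention logits
    # Distance between each query's correspondence and every key's pixel position.
    dist2 = ((corr_xy[:, None, :] - query_xy[None, :, :]) ** 2).sum(-1)
    bias = weight * torch.exp(-dist2 / (2 * sigma**2))
    bias = torch.nan_to_num(bias, nan=0.0)           # no bias where no correspondence
    attn = F.softmax(logits + bias, dim=-1)
    return attn @ v
```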
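Finally, a minimal sketch of the inverse-rendering step that backpropagates the enhanced images to the material textures. Here `render_views` stands in for any differentiable renderer of the asset, and the L1 loss, Adam optimizer, and clamping are our assumptions rather than the paper's exact setup.

```python
# Sketch: optimize PBR textures so re-rendered views match the diffusion output.
import torch

def recover_textures(render_views, textures, enhanced_images, steps=500, lr=1e-2):
    """render_views:    callable(textures) -> (V, H, W, 3) differentiable renders.
    textures:         dict of (H_t, W_t, C) tensors (albedo, roughness, ...).
    enhanced_images:  (V, H, W, 3) views produced by the diffusion model."""
    params = [t.requires_grad_(True) for t in textures.values()]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        renders = render_views(textures)
        loss = torch.nn.functional.l1_loss(renders, enhanced_images)
        loss.backward()
        opt.step()
        # Keep parameters in a valid range, e.g. albedo/roughness in [0, 1].
        with torch.no_grad():
            for t in textures.values():
                t.clamp_(0.0, 1.0)
    return textures
```

The multi-view consistency enforced by the two priors above is what allows this optimization to converge to textures that reproduce the generated detail instead of averaging away view-dependent disagreements.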
downloads
publication
supplementals
video
citation
bibtex
@article{Hadadan2025GenDetail,
  title     = {Generative Detail Enhancement for Physically Based Materials},
  author    = {Hadadan, Saeed and Bitterli, Benedikt and Zeltner, Tizian and Nov\'{a}k, Jan and Rousselle, Fabrice and Munkberg, Jacob and Hasselgren, Jon and Wronski, Bartlomiej and Zwicker, Matthias},
  journal   = {ACM SIGGRAPH 2025 Conference Proceedings},
  year      = {2025},
  volume    = {TBD},
  number    = {TBD},
  pages     = {TBD},
  publisher = {ACM},
  address   = {New York, NY, USA},
  doi       = {TBD},
}