Trepo
Visibility-Based Geometry Pruning of Neural Plenoptic Scene Representations

Freitas, Davi R.; Tabus, Ioan; Guillemot, Christine (2025-10-06)

 
Open file
Visibility-Based_Geometry_Pruning_of_Neural_Plenoptic_Scene_Representations.pdf (9.29 MB)




IEEE Transactions on Multimedia
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
doi:10.1109/TMM.2025.3618548
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202601302115

Description

Peer reviewed
Abstract
The need for more realistic 3D scene representations has spurred the development of models for a wide range of applications. In this context, solutions that model the light's behavior through the plenoptic function have achieved considerable advances using neural-based approaches, often presenting a trade-off between rendering time and model size. In this work, we propose a pruning framework that reduces the size of these models by computing visibility over the training data, and that is applicable to different 3D scene representations. In particular, we first implement a solution suitable for 3D Gaussian Splatting (3DGS), and then exemplify the solution for the Neural Radiance Fields (NeRF) style of rendering using PlenOctrees. We show that our pruning solution produces models with fewer elements, be they voxels, points, or Gaussians, with minimal losses when rendering novel views. We further assess our solution by combining it with state-of-the-art (SOTA) compression solutions for both rendering schemes. Results on the NeRF-Synthetic dataset show metrics comparable to the SOTA for PlenOctrees, with marginal gains at lower bitrates. For 3DGS, the combination of our pruning method and compression solutions achieves a compression ratio of up to 37.5 times over the uncompressed 3DGS models, with only a 0.5 dB decrease in rendering quality. Compared against other SOTA compression methods, our solution produces models 1.4 times smaller, with less than 0.1 dB loss on novel views for synthetic data, and models 1.9 times smaller, with less than 0.2 dB loss when synthesizing novel views on real-world, outdoor content.
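The abstract describes pruning scene elements by computing their visibility over the training data. As a minimal sketch of what such a criterion could look like, the snippet below accumulates a per-element visibility count across synthetic stand-in views and discards elements seen too rarely. All names, the threshold, and the random stand-in data are hypothetical illustrations, not the paper's actual method or code.

```python
import numpy as np

# Hypothetical illustration of visibility-based pruning: count, for each
# scene element (Gaussian, point, or voxel), the number of training views
# in which it is visible, then discard elements that are rarely visible.

rng = np.random.default_rng(0)
num_elements, num_views = 1000, 100

# Placeholder per-view visibility flags: True if an element contributed to
# rendering that view. A real pipeline would derive this from ray traversal
# or rasterization during training, not from random data.
visible = rng.random((num_views, num_elements)) < 0.05

# Aggregate visibility over the training data.
visibility_score = visible.sum(axis=0)  # views in which each element appears

# Prune elements seen in fewer than `min_views` training views
# (the threshold here is an arbitrary example value).
min_views = 2
keep_mask = visibility_score >= min_views
pruned_count = num_elements - int(keep_mask.sum())
print(f"kept {int(keep_mask.sum())} of {num_elements} elements "
      f"({pruned_count} pruned)")
```

In an actual 3DGS or PlenOctree pipeline, the kept mask would then be applied to the model's parameter arrays before any downstream compression step.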
Collections
  • TUNICRIS-julkaisut [23830]
Kalevantie 5
PL 617
33014 Tampereen yliopisto
oa[@]tuni.fi | Privacy | Accessibility statement
 

 
