Signed Distance Function
An Overview of SDF
Outline
- What is SDF?
- Mathematical Definition
- Characteristics
- Pros and Cons
- SDF in 3D reconstruction
- NeuS (NeurIPS 2021, Wang et al.)
- Neuralangelo (CVPR 2023, Li et al.)
- SDF in 3D generation
- One-2-3-45 (NeurIPS 2023, Liu et al.)
- BlockFusion (arXiv 2024, Wu et al.)
SDF
Original Definition
SDF: Signed Distance Function
The shortest distance from a point to the nearest surface, signed: negative inside the surface, positive outside (by the usual convention)
Definition in NeuS
SDF
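As a concrete example (my own illustration, not from the slides): the SDF of a sphere with center c and radius r is |x - c| - r.

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside, zero on the
    surface, positive outside (the usual convention)."""
    return np.linalg.norm(p - center, axis=-1) - radius

# Points inside, on, and outside the unit sphere.
assert sphere_sdf(np.array([0.0, 0.0, 0.0])) == -1.0   # center
assert sphere_sdf(np.array([1.0, 0.0, 0.0])) == 0.0    # on the surface
assert sphere_sdf(np.array([2.0, 0.0, 0.0])) == 1.0    # outside
```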
Characteristics
- Surface: the zero level set, f(x) = 0
- Surface normal: n = ∇f(x)
- Eikonal equation: |∇f(x)| = 1
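These properties are easy to check numerically with finite differences; a small sketch on a unit-sphere SDF:

```python
import numpy as np

def sdf(p):
    # SDF of a unit sphere at the origin.
    return np.linalg.norm(p) - 1.0

def grad(p, eps=1e-5):
    # Central finite differences along each axis.
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3); d[i] = eps
        g[i] = (sdf(p + d) - sdf(p - d)) / (2 * eps)
    return g

p = np.array([0.3, -0.8, 0.5])
g = grad(p)
# Eikonal property: the gradient has unit norm ...
assert abs(np.linalg.norm(g) - 1.0) < 1e-6
# ... and points along the outward normal p / |p|.
assert np.allclose(g, p / np.linalg.norm(p), atol=1e-6)
```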
Pros and Cons
Pros
Cons
- High-precision, continuous surface representation
- Better surface quality
- Easy ray casting and collision detection
- High memory consumption
- Hard to edit directly
- Computationally intensive
- How to perform volume rendering?
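The "easy ray casting" advantage comes from sphere tracing: the SDF value at a point bounds how far a ray can safely advance without crossing the surface. A minimal sketch against a unit-sphere SDF (illustration only):

```python
import numpy as np

def sdf(p):
    # Unit sphere at the origin (placeholder scene).
    return np.linalg.norm(p) - 1.0

def sphere_trace(origin, direction, max_steps=128, hit_eps=1e-4):
    """March along the ray, stepping by the SDF value each time;
    a true SDF guarantees the step never overshoots the surface."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < hit_eps:
            return t  # hit
        t += d
    return None  # miss

# Ray from z = -3 toward the origin hits the unit sphere at t = 2.
t = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
assert abs(t - 2.0) < 1e-3
```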
3D reconstruction
Directly use 2D images as supervision
3D reconstruction
- NeuS (NeurIPS 2021, Wang et al.)
- Neuralangelo (CVPR 2023, Li et al.)
NeuS (NeurIPS 2021, Wang et al.)
Author of F2-NeRF
NeuS (NeurIPS 2021, Wang et al.)
Recap: NeRF
MLP
Volume Rendering
NeuS (NeurIPS 2021, Wang et al.)
MLP
Volume Rendering
How to perform volume rendering?
NeuS (NeurIPS 2021, Wang et al.)
Volume Rendering
- Unbiased: the weight function attains a local maximum exactly at the surface intersection
- Occlusion-aware: of two points with equal SDF values, the one closer to the viewpoint should receive the larger weight
NeuS (NeurIPS 2021, Wang et al.)
Volume Rendering
First attempt: use the logistic density distribution of the SDF as the volume density. Biased!
Second attempt: use the (normalized) logistic density distribution directly as the weight. Unbiased, but not occlusion-aware
NeuS (NeurIPS 2021, Wang et al.)
Volume Rendering
Goal: to derive an unbiased and occlusion-aware weight function
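NeuS resolves this by deriving a discrete opacity from the sigmoid Φ_s of the scaled SDF: α_i = max((Φ_s(f(p_i)) - Φ_s(f(p_{i+1}))) / Φ_s(f(p_i)), 0), composited into weights as usual. A NumPy sketch (my own illustration, not the authors' code):

```python
import numpy as np

def neus_alpha(sdf_vals, s=64.0):
    """Discrete opacity from NeuS: alpha_i =
    max((Phi_s(f_i) - Phi_s(f_{i+1})) / Phi_s(f_i), 0),
    where Phi_s is the sigmoid with sharpness s."""
    phi = 1.0 / (1.0 + np.exp(-s * sdf_vals))
    return np.clip((phi[:-1] - phi[1:]) / phi[:-1], 0.0, 1.0)

def weights(alpha):
    # Standard alpha compositing: w_i = T_i * alpha_i.
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return T * alpha

# SDF samples along a ray crossing the surface (sign change).
sdf_vals = np.array([0.30, 0.15, 0.02, -0.10, -0.25, -0.40])
w = weights(neus_alpha(sdf_vals))
# The weight peaks at the segment containing the zero crossing.
assert w.argmax() == 2
```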
NeuS (NeurIPS 2021, Wang et al.)
Loss Terms
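NeuS is trained with a color reconstruction loss plus an eikonal regularizer that keeps the network close to a true SDF (and optionally a mask loss). The eikonal term can be sketched as:

```python
import numpy as np

def eikonal_loss(gradients):
    """Eikonal regularizer: penalize deviation of the SDF gradient
    norm from 1 at sampled points (mean squared error)."""
    norms = np.linalg.norm(gradients, axis=-1)
    return np.mean((norms - 1.0) ** 2)

# Gradients of a perfect SDF have unit norm, giving zero loss.
assert eikonal_loss(np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])) == 0.0
assert eikonal_loss(np.array([[2.0, 0.0, 0.0]])) == 1.0
```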
NeuS (NeurIPS 2021, Wang et al.)
Results
Chamfer Distance
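Chamfer Distance, the evaluation metric here, sums the mean nearest-neighbor distances between predicted and ground-truth point sets in both directions (some variants use squared distances or average the two terms). A brute-force sketch:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and
    b (M, 3): mean nearest-neighbor distance in both directions."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
assert chamfer_distance(a, a) == 0.0
assert chamfer_distance(np.array([[0.0, 0.0, 0.0]]),
                        np.array([[1.0, 0.0, 0.0]])) == 2.0
```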
Neuralangelo (CVPR 2023, Li et al.)
Author of InstantNGP
Neuralangelo (CVPR 2023, Li et al.)
Neuralangelo (CVPR 2023, Li et al.)
- SOTA for SDF-based 3D reconstruction
- Improves on NeuS
- MLP outputs SDF and RGB
- Integrates InstantNGP
- Multi-resolution hash grid
- Introduces numerical gradients
Neuralangelo (CVPR 2023, Li et al.)
Recap: InstantNGP
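InstantNGP's key component is a multi-resolution hash grid: each level hashes grid-vertex indices into a small feature table, and per-level features are concatenated for an MLP. A heavily simplified toy sketch (nearest-vertex lookup instead of trilinear interpolation; the hash constants follow the paper's spatial hash, everything else is illustrative):

```python
import numpy as np

def hash_encode(p, n_levels=4, base_res=2, table_size=2**14,
                feat_dim=2, seed=0):
    """Toy multi-resolution hash encoding: at each level, hash the
    nearest grid-vertex index into a small feature table and
    concatenate the per-level features."""
    rng = np.random.default_rng(seed)
    tables = [rng.normal(size=(table_size, feat_dim))
              for _ in range(n_levels)]
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    feats = []
    for lvl, table in enumerate(tables):
        res = base_res * 2**lvl                      # level resolution
        idx = np.floor(p * res).astype(np.uint64)    # grid vertex index
        h = int(np.bitwise_xor.reduce(idx * primes) % table_size)
        feats.append(table[h])
    return np.concatenate(feats)

f = hash_encode(np.array([0.3, 0.7, 0.1]))
assert f.shape == (8,)  # n_levels * feat_dim
```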
Neuralangelo (CVPR 2023, Li et al.)
Numerical Gradients
Neuralangelo (CVPR 2023, Li et al.)
Numerical Gradients
- Analytic hash-grid gradients update only the entries near the sample point
- Numerical gradients also reach neighboring cells
- Choosing the step size ε "smooths" the output
Neuralangelo (CVPR 2023, Li et al.)
Coarse-to-fine
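The coarse-to-fine strategy ties the finite-difference step size to the hash resolution: start coarse, then halve the step and enable finer levels as training proceeds. A sketch with illustrative step counts (not the paper's values):

```python
def schedule(step, base_eps=1.0 / 32, steps_per_stage=2500, n_levels=8):
    """Coarse-to-fine schedule sketch: halve the finite-difference
    step size and enable one finer hash level per stage."""
    stage = min(step // steps_per_stage, n_levels - 1)
    eps = base_eps / (2 ** stage)    # finite-difference step size
    active_levels = stage + 1        # enabled hash resolutions
    return eps, active_levels

assert schedule(0) == (1.0 / 32, 1)      # start: coarsest only
assert schedule(5000) == (1.0 / 128, 3)  # later: finer step, more levels
```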
Neuralangelo (CVPR 2023, Li et al.)
Loss Terms
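Beyond the color and eikonal terms inherited from NeuS, Neuralangelo adds a curvature regularizer built from the same numerical derivatives. A simplified sketch penalizing second finite differences (illustrative, not the exact paper formulation):

```python
import numpy as np

def curvature_loss(sdf, p, eps=1e-2):
    """Curvature regularizer sketch: mean absolute second finite
    difference of the SDF along each axis at point p."""
    total = 0.0
    for i in range(3):
        d = np.zeros(3); d[i] = eps
        second = (sdf(p + d) - 2.0 * sdf(p) + sdf(p - d)) / eps**2
        total += abs(second)
    return total / 3.0

plane = lambda p: p[2]  # SDF of a plane: zero curvature everywhere
assert curvature_loss(plane, np.array([0.1, 0.2, 0.3])) < 1e-8
```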
Neuralangelo (CVPR 2023, Li et al.)
Results
3D Generation
- One-2-3-45 (NeurIPS 2023, Liu et al.): 3D object generation
- BlockFusion (arXiv 2024, Wu et al.): 3D scene generation
One-2-3-45 (NeurIPS 2023, Liu et al.)
One-2-3-45 (NeurIPS 2023, Liu et al.)
Goal: generate a textured 3D mesh from a single image in ~45 seconds
One-2-3-45 (NeurIPS 2023, Liu et al.)
Methods
One-2-3-45 (NeurIPS 2023, Liu et al.)
Training
- Datasets
- Objaverse-LVIS
- 46k 3D models in 1156 categories
- RGB-D
- Training Time
- Trained on 2 A10 GPUs
- 6 days
One-2-3-45 (NeurIPS 2023, Liu et al.)
Results
BlockFusion (arXiv 2024, Wu et al.)
BlockFusion (arXiv 2024, Wu et al.)
Goal
To generate unbounded 3D scene geometry conditioned on a 2D layout
BlockFusion (arXiv 2024, Wu et al.)
Datasets
- Room
- 3DFront and 3D-FUTURE
- City and Village
- Designed by artists
BlockFusion (arXiv 2024, Wu et al.)
Methods
1. Raw tri-plane fitting
2. Latent tri-plane (auto-encoder + diffusion)
BlockFusion (arXiv 2024, Wu et al.)
Methods - Raw Tri-plane Fitting
A training block
corresponding tri-plane
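A tri-plane stores features on three axis-aligned planes; querying a 3D point projects it onto each plane, samples a feature, and aggregates the three (an MLP then decodes the SDF). A minimal sketch with nearest-neighbor sampling (bilinear in practice):

```python
import numpy as np

def triplane_query(planes, p):
    """Query a tri-plane at point p in [0, 1)^3: project p onto the
    XY, XZ, and YZ planes, fetch a feature from each, and sum them.
    Nearest-neighbor sampling for brevity (bilinear in practice)."""
    res = planes.shape[1]
    x, y, z = np.minimum((p * res).astype(int), res - 1)
    return planes[0, x, y] + planes[1, x, z] + planes[2, y, z]

res, feat_dim = 32, 8
planes = np.random.default_rng(0).normal(size=(3, res, res, feat_dim))
f = triplane_query(planes, np.array([0.2, 0.5, 0.9]))
assert f.shape == (feat_dim,)
```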
BlockFusion (arXiv 2024, Wu et al.)
Methods - Latent Tri-plane
BlockFusion (arXiv 2024, Wu et al.)
Methods
How to generate an unbounded scene?
BlockFusion (arXiv 2024, Wu et al.)
Methods - Latent Tri-plane Extrapolation
BlockFusion (arXiv 2024, Wu et al.)
Time
- NVIDIA V100
- Training
- Tri-plane fitting: 4750 GPU hr.
- Auto-encoder: 768 GPU hr.
- Diffusion: 384 GPU hr.
- Inference
- 6 minutes per block
- Large indoor scene: 3 hr.
BlockFusion (arXiv 2024, Wu et al.)
Results
Conclusion
- SDF yields high-quality surfaces and accurate surface normals
- A marching algorithm (e.g. Marching Cubes) can extract a mesh from an SDF
- Generative methods based on SDF are still scarce
- SDF represents only geometry, not texture
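Per grid edge, the mesh-extraction step reduces to linearly interpolating where the SDF crosses zero; Marching Cubes then triangulates each cube from its corner sign pattern. A sketch of the edge-interpolation step:

```python
def zero_crossing(x0, x1, f0, f1):
    """Linearly interpolate where the SDF crosses zero on a grid edge
    whose endpoint values f0, f1 have opposite signs."""
    t = f0 / (f0 - f1)
    return x0 + t * (x1 - x0)

# Unit-sphere SDF f = |x| - 1 sampled at x = 0.9 and x = 1.1 along
# the x-axis changes sign, so the surface vertex lands at x = 1.0.
x = zero_crossing(0.9, 1.1, -0.1, 0.1)
assert abs(x - 1.0) < 1e-12
```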
By jayinnn_nycu