Collect a large 3D dataset and build models on top of it.
The data can be:
- 3D alone
- 3D + text
- 3D + image
Examples:
- ~30k samples from Thingiverse (STL model files for 3D printing)
- Fusion 360 Gallery, a CAD dataset from Autodesk with parametric designs and construction sequences.
- Amazon Berkeley Objects (ABO), a dataset of Amazon products with metadata, catalog images, and 3D models.
- Large Geometric Models Archive, Georgia Tech's collection of large triangle meshes.
- FaceScape, a large-scale detailed 3D face dataset (application required).
- Redwood 3DScan, more than ten thousand 3D scans of real objects.
- Human3.6M, 3.6 million 3D human poses and corresponding images.
- Semantic3D, a large labelled 3D point cloud data set of natural scenes with over 4 billion points in total.
- SceneNN / ObjectNN, an RGB-D dataset with more than 100 indoor scenes along with RGB-D objects extracted and split into 20 categories.
- 3D-FRONT, a dataset of 3D furnished rooms with layouts and semantics.
- 3D-FUTURE, a dataset of 3D furniture shapes with textures.
- ABC, a collection of one million Computer-Aided Design (CAD) models.
- Structured3D, a large-scale photo-realistic dataset containing 3.5K house designs with a variety of ground truth 3D structure annotations.
- ShapeNet, a richly-annotated, large-scale dataset of 3D shapes.
- FixIt!, a dataset that contains about 5k poorly-designed 3D physical objects paired with choices to fix them.
- ModelNet, a comprehensive clean collection of 3D CAD models for objects.
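Whichever of these datasets is used, the first step into a model looks the same: load a mesh, normalize it, sample points. A minimal sketch, assuming the `trimesh` library and a locally downloaded file (the path is a placeholder):

```python
import trimesh  # pip install trimesh

# Load one sample; "model.stl" is a placeholder path for any downloaded file.
mesh = trimesh.load("model.stl", force="mesh")

# Normalize to a unit cube centered at the origin so samples from
# different sources are comparable.
lo, hi = mesh.bounds
mesh.apply_translation(-(lo + hi) / 2.0)
mesh.apply_scale(1.0 / (hi - lo).max())

# Sample a fixed-size surface point cloud for point-based models.
points, _ = trimesh.sample.sample_surface(mesh, count=2048)
print(points.shape)  # (2048, 3)
```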
- Boosting Monocular Depth Estimation Models to High-Resolution via Content-Adaptive Multi-Resolution Merging
- Paper: https://arxiv.org/abs/2105.14021
- Code: https://github.com/compphoto/BoostingMonocularDepth
- Follow-up paper: https://arxiv.org/abs/2012.09365
- Follow-up code: https://github.com/aim-uofa/AdelaiDepth (though the first repo already includes it)
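These repos build on MiDaS-style depth backbones, so getting a raw (unmerged) depth map is a few lines via `torch.hub`. A sketch using the model and transform names published on the intel-isl hub; the content-adaptive merging from the paper itself is not shown:

```python
import cv2
import torch

# Pull a pretrained MiDaS depth model from torch.hub (weights download on first use).
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image path
with torch.no_grad():
    depth = midas(transforms.dpt_transform(img))  # relative inverse depth, (1, H', W')
print(depth.shape)
```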
- Self-supervised Learning of Depth Inference for Multi-view Stereo
- Sketch2Model - View-Aware 3D Modeling from Single Free-Hand Sketches
- SceneFormer - Indoor Scene Generation with Transformers
- Image2Lego - Customized LEGO Set Generation from Images
- Paper: https://arxiv.org/abs/2108.08477
- Code: 🔥
- Neural RGB-D Surface Reconstruction
- Paper: https://arxiv.org/abs/2104.04532
- Code: 🔥
- SP-GAN - Sphere-Guided 3D Shape Generation and Manipulation
- Style-based Point Generator with Adversarial Rendering for Point Cloud Completion
- Learning to Stylize Novel Views
- RetrievalFuse - Neural 3D Scene Reconstruction with a Database
- Geometry-Free View Synthesis - Transformers and no 3D Priors
- ShapeFormer - Transformer-based Shape Completion via Sparse Representation
- Paper: https://arxiv.org/abs/2201.10326
- Code: 🔥
- Neural Parts - Learning Expressive 3D Shape Abstractions with Invertible Neural Representations
- DeepSDF - Learning Continuous Signed Distance Functions for Shape Representation
- Spline Positional Encoding for Learning 3D Implicit Signed Distance Fields
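The core trick shared by DeepSDF and the SDF entries above is small enough to sketch: an MLP maps a (latent code, query point) pair to a signed distance, trained with a clamped L1 loss. Sizes below are illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """Maps (latent code z, query point x) -> signed distance, DeepSDF-style."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, x):
        return self.net(torch.cat([z, x], dim=-1)).squeeze(-1)

model = SDFNet()
z = torch.randn(4, 128)        # one latent code per shape
x = torch.rand(4, 3) * 2 - 1   # query points in [-1, 1]^3
sdf_gt = torch.randn(4)        # placeholder ground-truth distances

# Clamped L1 loss, as in DeepSDF: the clamp focuses capacity near the surface.
delta = 0.1
loss = (model(z, x).clamp(-delta, delta) - sdf_gt.clamp(-delta, delta)).abs().mean()
loss.backward()
```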
- Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
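The hash encoding in Instant NGP is also sketchable: integer grid corners at each resolution are mapped into a fixed-size feature table with the XOR-of-primes spatial hash from the paper. A NumPy sketch of the indexing step only, one level, no interpolation or training:

```python
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)  # primes from the paper

def hash_grid_index(coords, table_size):
    """Spatial hash of integer grid coords (N, 3) into [0, table_size)."""
    coords = coords.astype(np.uint64)
    h = coords[:, 0] * PRIMES[0]
    h ^= coords[:, 1] * PRIMES[1]
    h ^= coords[:, 2] * PRIMES[2]
    return h % np.uint64(table_size)

# One level of the multiresolution pyramid: quantize points to this level's grid.
points = np.random.rand(5, 3)           # points in [0, 1]^3
resolution = 64                         # grid resolution at this level
table = np.random.randn(2**14, 2)       # learnable feature table (T=2^14, F=2)
corner = np.floor(points * resolution)  # lower grid corner of each point
features = table[hash_grid_index(corner, table.shape[0])]
print(features.shape)                   # (5, 2)
```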
- Differentiable Gradient Sampling for Learning Implicit 3D Scene Reconstructions from a Single Image
- Paper: https://openreview.net/forum?id=U8pbd00cCWB
- Code: masked URL in the OpenReview submission
- NeuS - Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
- From data to functa - Your data point is a function
- Paper: https://arxiv.org/abs/2201.12204
- Code: 🔥
- Previous work: https://arxiv.org/abs/2102.04776
- Code: https://github.com/EmilienDupont/neural-function-distributions
- Multiresolution Deep Implicit Functions for 3D Shape Representation
- Paper: https://arxiv.org/abs/2109.05591
- Code: 🔥
- Geometry-Consistent Neural Shape Representation with Implicit Displacement Fields
- Implicit Neural Representations with Periodic Activation Functions
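SIREN (the periodic-activation paper above) is basically one idea: sin activations with a frequency scale w0 and a matching init. A minimal sketch of the layer, with w0 = 30 as in the paper:

```python
import numpy as np
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """SIREN layer: sin(w0 * (Wx + b)), with the init scheme from the paper."""
    def __init__(self, in_f, out_f, w0=30.0, first=False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_f, out_f)
        bound = 1 / in_f if first else np.sqrt(6 / in_f) / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

# Tiny SIREN mapping 3D coords to a scalar field (e.g. an SDF or a density).
siren = nn.Sequential(SineLayer(3, 64, first=True), SineLayer(64, 64), nn.Linear(64, 1))
print(siren(torch.rand(8, 3)).shape)  # (8, 1)
```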
- Volume Rendering of Neural Implicit Surfaces
- HyperCube - Implicit Field Representations of Voxelized 3D Models
- Convolutional Occupancy Networks
- GANcraft - Unsupervised 3D Neural Rendering of Minecraft Worlds
- ADOP: Approximate Differentiable One-Pixel Point Rendering
- Paper: https://arxiv.org/abs/2110.06635
- Code: https://github.com/darglein/ADOP
- YouTube:
- Editing Conditional Radiance Fields
- GIRAFFE - Representing Scenes as Compositional Generative Neural Feature Fields
- NeX - Real-time View Synthesis with Neural Basis Expansion
- Putting NeRF on a Diet - Semantically Consistent Few-Shot View Synthesis
- Unconstrained Scene Generation with Locally Conditioned Radiance Fields
- Zero-Shot Text-Guided Object Generation with Dream Fields
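All the radiance-field entries above reduce to the same quadrature at render time: alpha-composite densities and colors along each ray. A NumPy sketch of just that compositing step, with placeholder inputs standing in for what a network would predict:

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """NeRF-style volume rendering along one ray.
    sigmas: (S,) densities, colors: (S, 3) RGB, deltas: (S,) sample spacings."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # opacity per segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]  # transmittance to each sample
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                  # final pixel color

S = 64
ray_color = composite(np.random.rand(S), np.random.rand(S, 3), np.full(S, 1.0 / S))
print(ray_color)  # (3,) RGB
```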
- Diffusion Probabilistic Models for 3D Point Cloud Generation
- 3D Shape Generation and Completion through Point-Voxel Diffusion
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion
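The point-cloud diffusion papers above share the same forward corruption: x_t is a closed-form noisy version of x_0, and a network is trained to undo it. A sketch of just that forward step, assuming a standard linear beta schedule (the papers vary here):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear noise schedule (an assumption)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product \bar{alpha}_t

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) for a clean point cloud x0 of shape (N, 3)."""
    noise = torch.randn_like(x0)
    a = alpha_bar[t]
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise, noise

x0 = torch.rand(2048, 3) * 2 - 1   # placeholder clean point cloud
x_t, eps = q_sample(x0, t=500)     # a denoiser would be trained to predict eps from x_t
print(x_t.shape)                   # (2048, 3)
```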
- MOGAN - Morphologic-structure-aware Generative Learning from a Single Image
- Indoor Scene Generation from a Collection of Semantic-Segmented Depth Images
- ATISS - Autoregressive Transformers for Indoor Scene Synthesis
- Paper: https://arxiv.org/abs/2110.03675
- Code: 🔥
- Computer-Aided Design as Language
- Paper: https://arxiv.org/abs/2105.02769
- Code: 🔥
- Patch2CAD - Patchwise Embedding Learning for In-the-Wild Shape Retrieval from a Single Image
- Paper: https://arxiv.org/abs/2108.09368
- Code: 🔥
- Modeling Artistic Workflows for Image Generation and Editing
(3D, text)
(3D, image)
- Kubric, a pipeline for generating synthetic 3D scene data: https://github.com/google-research/kubric (suggested by uwu1)
3D -> text
text -> 3D
3D -> image
image -> 3D