Deep Learning for Vision and Graphics
Seminar: Summer Semester 2021
University of Tuebingen
Neural Rendering | Modeling Human Appearance | Modeling Human Motion
Description
The fields of 3D computer vision and graphics have been revolutionized by deep learning. For example, it is now possible to obtain detailed 3D reconstructions of humans and objects from single images, generate photo-realistic renderings of 3D scenes with neural networks, or manipulate and edit videos and images. In this seminar, we will cover the most recent publications and advances in the fields of neural rendering, 3D computer vision, 3D shape reconstruction, and representation learning for 3D shapes.
This is a Master's level course. Since these topics are quite complex, the following prerequisites are expected:
- Programming skills; knowledge of linear algebra, calculus, numerical optimization, and probability theory
- Prior participation in at least one of the following courses is required: Deep Learning, Probabilistic ML, or Statistical ML
Tentative Schedule
- May 5, 14:00-18:00
- May 12, 10:00-14:00
- May 19, 10:00-14:00
- May 26, 10:00-14:00
- June 2, 10:00-14:00
- June 9, 10:00-14:00
- June 16, 10:00-14:00
Requirements
- Presentation / Attendance
- Active participation throughout the seminar. We have a 70% attendance policy for this seminar: you need to attend at least 5 of the 7 sessions.
- Short presentation on a selected topic/paper: 10-minute talk + 5 minutes of questions
- Long presentation on a selected topic/paper: 20-minute talk + 10 minutes of questions
- Grading scheme:
- Participation and presentations will be graded.
Topics to be covered
The seminar will cover the following topics. Interested students are encouraged to check out some of the recent advances in these directions; two minimal illustrative code sketches (an implicit shape network and NeRF-style volume rendering) follow the reading list below.
3D Implicit Shape Representation
- DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation (paper)
- Occupancy Networks: Learning 3D Reconstruction in Function Space (paper)
- Texture Fields: Learning Texture Representations in Function Space (paper)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion (paper)
- Neural Unsigned Distance Fields for Implicit Function Learning (paper)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (paper)
- Implicit Neural Representations with Periodic Activation Functions (paper)
- Stereo Radiance Fields (SRF): Learning View Synthesis from Sparse Views of Novel Scenes (paper)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes (paper)
- SMPLpix: Neural Avatars from 3D Human Models (paper)
- Multi-Garment Net: Learning to Dress 3D People from Images (paper)
- PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization (paper)
- TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style (paper)
- SMPLicit: Topology-aware Generative Model for Clothed People (paper)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction (paper)
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks (paper)
- On human motion prediction using recurrent neural networks (paper)
- Long-term Human Motion Prediction with Scene Context (paper)
- Neural state machine for character-scene interactions (paper)
- Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors (paper)
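For orientation, below is a minimal, illustrative PyTorch sketch of the common idea behind implicit shape representations such as DeepSDF and Occupancy Networks: a coordinate-based MLP that maps a 3D query point (optionally conditioned on a per-shape latent code) to a signed distance or an occupancy probability. The class name, layer sizes, and latent-code conditioning here are assumptions chosen for readability, not the architecture of any specific paper.

```python
# Toy implicit shape network: maps (3D point, shape code) -> SDF value or occupancy.
# Illustrative sketch only; not the architecture from DeepSDF or Occupancy Networks.
import torch
import torch.nn as nn


class ImplicitShapeMLP(nn.Module):
    def __init__(self, latent_dim=128, hidden_dim=256, mode="sdf"):
        super().__init__()
        self.mode = mode  # "sdf" -> signed distance, "occ" -> occupancy probability
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, points, latent):
        # points: (B, N, 3) query coordinates, latent: (B, latent_dim) shape codes
        latent = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        out = self.net(torch.cat([points, latent], dim=-1)).squeeze(-1)
        if self.mode == "occ":
            out = torch.sigmoid(out)  # occupancy probability in [0, 1]
        return out


# Toy usage: query 1024 random points in [-1, 1]^3 for a batch of 2 shapes.
model = ImplicitShapeMLP(mode="occ")
pts = torch.rand(2, 1024, 3) * 2 - 1
code = torch.randn(2, 128)
occ = model(pts, code)  # (2, 1024) occupancy values
```

The surface can then be extracted from such a network, e.g. as the zero level set of the signed distance or the 0.5 level set of the occupancy, typically via Marching Cubes on a grid of query points.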
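The neural rendering papers in the list (NeRF and its variants) additionally share a volume-rendering step that alpha-composites densities and colours predicted along each camera ray into a pixel colour. The sketch below shows only this compositing quadrature with assumed tensor shapes; ray sampling, positional encoding, and the radiance-field MLP itself are omitted.

```python
# NeRF-style alpha compositing along rays: alpha_i = 1 - exp(-sigma_i * delta_i),
# T_i = prod_{j<i}(1 - alpha_j), pixel colour = sum_i T_i * alpha_i * c_i.
# Illustrative sketch with assumed shapes, not a full NeRF implementation.
import torch


def composite_ray(sigmas, colors, deltas):
    """sigmas: (R, S) densities, colors: (R, S, 3) RGB, deltas: (R, S) sample spacings."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)                    # per-sample opacity
    # Exclusive cumulative product gives the transmittance up to (not including) each sample.
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alphas * trans                                      # contribution of each sample
    rgb = (weights.unsqueeze(-1) * colors).sum(dim=-2)            # (R, 3) composited colours
    return rgb, weights


# Toy usage: 4 rays with 64 samples each and uniform spacing.
rgb, w = composite_ray(torch.rand(4, 64), torch.rand(4, 64, 3), torch.full((4, 64), 0.02))
```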
Registration
Contact
20.04.2021 UPDATE: The email address was incorrect and has been corrected.
Prof. Dr. Gerard Pons-Moll (mail).