Angela Dai


Understanding the structure and appearance of shapes representing real-world observations is a fundamental challenge in machine perception, with many applications in content creation, mixed reality, and autonomous navigation and interaction. In this talk, we propose to develop learned neural parametric models that capture geometric and texture priors to represent realistic 3D shapes. We show that such neural parametric models can represent a space of shape parts, which can be fit to real-world observations from commodity sensors at test time; importantly, this enables scene-aware reasoning when modeling the objects in an environment. We further demonstrate the effectiveness of learned neural parametric models for capturing a realistic appearance manifold of 3D shapes, which is essential for visual consumption. Finally, we demonstrate a learned parametric texture space for texturing shapes from RGB image queries, without requiring any camera pose alignment or exact geometric matches. These learned parametric models will enable the construction of meaningful shape manifolds for editing and interaction.


Angela Dai is an Assistant Professor at the Technical University of Munich, where she leads the 3D AI group. Prof. Dai's research focuses on understanding how the 3D world around us can be modeled and semantically understood. Previously, she received her PhD in computer science from Stanford in 2018 and her BSE in computer science from Princeton in 2013. Her research has been recognized through a Eurographics Young Researcher Award, a Google Research Scholar Award, a ZDB Junior Research Group Award, an ACM SIGGRAPH Outstanding Doctoral Dissertation Honorable Mention, as well as a Stanford Graduate Fellowship.