Structured Deep Visual Dynamics Models for Robot Manipulation (Talk)
- Arunkumar Byravan (PhD student)
- Robotics and State Estimation (RSE) Lab at the University of Washington
The ability to predict how an environment changes based on forces applied to it is fundamental for a robot to achieve specific goals. Traditionally in robotics, this problem is addressed through the use of pre-specified models or physics simulators, taking advantage of prior knowledge of the problem structure. While these models are general and have broad applicability, they depend on accurate estimation of model parameters such as object shape, mass, and friction. On the other hand, learning-based methods such as Predictive State Representations or more recent deep learning approaches learn these models directly from raw perceptual information in a model-free manner. These methods operate on raw data without any intermediate parameter estimation, but lack the structure and generality of model-based techniques. In this talk, I will present work that tries to bridge the gap between these two paradigms by proposing a specific class of deep visual dynamics models (SE3-Nets) that explicitly encode strong physical and 3D geometric priors (specifically, rigid body dynamics) in their structure. As opposed to traditional deep models that reason about dynamics/motion at the pixel level, we show that the physical priors implicit in our network architectures enable them to reason about dynamics at the object level: our network learns to identify objects in the scene and to predict a rigid body rotation and translation per object. I will present results on applying our deep architectures to two specific problems: 1) Modeling scene dynamics, where the task is to predict future depth observations given the current observation and an applied action, and 2) Real-time visuomotor control of a Baxter manipulator based only on raw depth data.
We show that: 1) Our proposed architectures significantly outperform baseline deep models on dynamics modeling, and 2) Our architectures perform comparably to or better than baseline models for visuomotor control while operating at camera rates (30Hz) and relying on far less information.
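The core object-level prediction step described above can be sketched numerically. The following is an illustrative toy example, not the actual SE3-Nets code: it assumes the network has already produced soft per-point object masks and one rigid-body (rotation + translation) motion per object, and shows how those predictions combine to produce the next point cloud. All names and shapes here are hypothetical.

```python
import numpy as np

def apply_se3_blend(points, masks, rotations, translations):
    """Blend K predicted rigid-body motions over a 3D point cloud.

    points:       (N, 3) input 3D points (e.g. back-projected from depth)
    masks:        (N, K) soft per-point object assignment weights
    rotations:    (K, 3, 3) predicted rotation matrix per object
    translations: (K, 3) predicted translation per object
    Returns the predicted next-frame point cloud, shape (N, 3).
    """
    # Transform every point under each of the K rigid motions: (K, N, 3)
    transformed = np.einsum('kij,nj->kni', rotations, points) \
        + translations[:, None, :]
    # Blend the K candidate motions per point using the mask weights
    return np.einsum('nk,kni->ni', masks, transformed)

# Toy example: two "objects", one pushed along z, one static
points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
masks = np.array([[1.0, 0.0], [0.0, 1.0]])  # hard assignment for clarity
rotations = np.stack([np.eye(3), np.eye(3)])
translations = np.array([[0.0, 0.0, 0.5], [0.0, 0.0, 0.0]])
pred = apply_se3_blend(points, masks, rotations, translations)
# pred -> [[0.0, 0.0, 0.5], [1.0, 0.0, 0.0]]
```

Because the masks are soft in the learned model, points near object boundaries can interpolate between motions, which keeps the whole prediction differentiable end-to-end.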
Biography: Arunkumar Byravan is a PhD student in the Robotics and State Estimation (RSE) Lab at the University of Washington, advised by Prof. Dieter Fox. His research focuses on applying machine learning techniques to robotics, mainly for manipulation, learning from demonstration, and motion planning. Currently, he is working on integrating physical priors with deep networks, applied to problems such as modeling scene/video dynamics, visuomotor control, and robot manipulation. Prior to joining UW, he received his Master's from the University of Pennsylvania in 2011.