Jeannette Bohg
Alumni
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at MPI until September 2017 and remains affiliated as a guest researcher. Her research focuses on perception for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, so that they can provide meaningful feedback for execution and learning.
For more details, check out the new group webpage!
Before joining the Autonomous Motion lab in January 2012, Jeannette Bohg was a PhD student at the Computer Vision and Active Perception lab (CVAP) at KTH in Stockholm. She wrote her thesis, Multi-Modal Scene Understanding for Robotic Grasping, under the supervision of Prof. Danica Kragic. She studied at Chalmers in Gothenburg and at the Technical University of Dresden, where she received her Master's in Art and Technology and her Diploma in Computer Science, respectively.
Computer Vision, Grasping and Manipulation, Machine Learning, Humanoid Robotics
Real-Time Perception meets Reactive Motion Generation
This video shows the performance of our fully integrated manipulation system, which consumes continuous visual feedback about the environment to adapt its motion plans online.
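In spirit, such a system runs a tight sense-replan-act loop. Below is a minimal sketch of this loop, assuming hypothetical perceive(), replan(), state(), at(), and send_command() interfaces; none of these names come from the actual system:

```python
import time

def reactive_manipulation_loop(robot, planner, camera, goal, rate_hz=30.0):
    """Sense-replan-act loop: re-plan from fresh visual feedback every cycle."""
    while not robot.at(goal):
        scene = camera.perceive()                          # continuous visual feedback
        plan = planner.replan(scene, robot.state(), goal)  # adapt the motion plan online
        robot.send_command(plan.next_command())            # execute the next step
        time.sleep(1.0 / rate_hz)                          # fixed control rate
```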
Dual Execution of Optimized Contact Interaction Trajectories
This video showcases a method that optimizes trajectories in contact with the environment, exploiting these contact constraints for more robust reaching of a given target. The trajectories are re-planned online using force feedback.
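The following toy sketch conveys the flavor of the idea: a trajectory is optimized to stay in contact with a (here, simply horizontal) surface while reaching a target, so the constraint funnels the motion toward the goal. The cost terms and contact model are illustrative assumptions, not the published formulation:

```python
import numpy as np

def optimize_contact_trajectory(start, target, surface_z, n_steps=20,
                                iters=500, lr=0.05, w_contact=0.5):
    """Gradient descent on a trajectory cost with smoothness and contact terms."""
    traj = np.linspace(start, target, n_steps)  # straight-line initialization
    for _ in range(iters):
        grad = np.zeros_like(traj)
        # Smoothness term: penalize large steps along the path.
        grad[1:-1] += 2 * traj[1:-1] - traj[:-2] - traj[2:]
        # Contact term: pull the height of intermediate waypoints onto the surface.
        grad[1:-1, 2] += w_contact * (traj[1:-1, 2] - surface_z)
        traj[1:-1] -= lr * grad[1:-1]  # endpoints stay fixed
    return traj

traj = optimize_contact_trajectory(np.array([0.0, 0.0, 0.3]),
                                   np.array([0.5, 0.2, 0.0]), surface_z=0.0)
```

Online re-planning with force feedback would wrap this optimizer in a loop that re-runs it whenever the measured contact forces deviate from the prediction.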
The Coordinate Particle Filter - A Novel Particle Filter for High-Dimensional Systems
This video showcases a novel variant of the particle filter in which resampling is performed not only at each time step but also for each dimension of the state vector. Compared to a standard particle filter, it yields more robust results as the dimensionality of the state grows.
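The core idea can be sketched in a few lines. This is a simplified illustration under assumed propagate and likelihood callables, not the published algorithm, which differs in how the intermediate weights are constructed:

```python
import numpy as np

def coordinate_pf_step(particles, z, propagate, likelihood, rng):
    """One filter step that resamples per state dimension, not just per time step.

    particles:  (n, d) array of state samples
    propagate:  samples the process model for a single coordinate
    likelihood: evaluates p(z | x) for every particle
    """
    n, d = particles.shape
    for dim in range(d):
        # Propagate only this coordinate through the process model.
        particles[:, dim] = propagate(particles[:, dim], dim, rng)
        # Weight by the observation likelihood of the partially updated state.
        w = likelihood(particles, z)
        w = w / w.sum()
        # Resample immediately, so good values of this coordinate survive
        # before the next coordinate is perturbed.
        particles = particles[rng.choice(n, size=n, p=w)]
    return particles
```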
Probabilistic Object Tracking using a Range Camera
We show a visual object tracking method that is particularly well suited to tracking during manipulation. Such scenarios are characterized by strong occlusions of the object and are therefore challenging for standard tracking methods. By explicitly modelling these occlusions and carefully factorizing the resulting system state, we developed a method that was successfully applied in Phase 2 of the DARPA ARM Challenge.
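A toy version of an occlusion-aware measurement model looks as follows: each depth pixel marginalizes over a binary "occluded" variable, so unexplained foreground depth does not unduly penalize the tracked pose. The specific distributions and numbers are illustrative assumptions, not the published model:

```python
import numpy as np

def depth_pixel_likelihood(z, z_pred, p_occ=0.3, sigma=0.01):
    """p(z | pose) for one pixel, marginalizing over occlusion.

    Visible:  measured depth should match the depth rendered from the pose.
    Occluded: another object in front; depth roughly uniform on [0, z_pred].
    """
    visible = np.exp(-0.5 * ((z - z_pred) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    occluded = np.where(z < z_pred, 1.0 / z_pred, 0.0)
    return (1.0 - p_occ) * visible + p_occ * occluded
```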
Robot Arm Pose Estimation through Pixel-Wise Part Classification
Hand-eye coordination is a notorious problem for many robot systems. Even offline calibration may not provide sufficient accuracy for fine manipulation tasks or simple precision grasps. Estimating the arm pose directly in the image addresses this problem by providing hand-eye coordination online. We present a frame-by-frame technique for arm pose estimation that requires neither an initialization nor any sensors other than an RGB-D camera.
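As a rough sketch of the pixel-wise classification idea, with synthetic stand-in features and labels (the actual method's features and classifier may differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in training data: per-pixel depth-difference features with
# ground-truth arm part labels (in practice obtained from rendered arm models).
X_train = rng.normal(size=(5000, 16))    # 16 depth-offset features per pixel
y_train = rng.integers(0, 5, size=5000)  # 5 arm part labels

clf = RandomForestClassifier(n_estimators=50, max_depth=12).fit(X_train, y_train)

def part_centers(pixel_features, points_3d):
    """Label every pixel with an arm part, then average the back-projected
    3D points per part; the centers can seed a kinematic pose fit."""
    labels = clf.predict(pixel_features)
    return {p: points_3d[labels == p].mean(axis=0) for p in np.unique(labels)}
```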
Task-based Grasp Adaptation
We present the integration of many different modules that allows a robot to infer a task-relevant grasp for a perceived object. The system relies on learning techniques to determine the category of an object and, given a task, the associated grasp. It is demonstrated on the humanoid robot ARMAR-IIIa at the Karlsruhe Institute of Technology.
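Conceptually, the pipeline boils down to classifying the object into a category and then selecting a grasp appropriate for that category-task pair. A hypothetical minimal sketch with made-up categories and grasp types; in the actual system both steps are learned, not hard-coded:

```python
# Hypothetical mapping from (object category, task) to a grasp type.
GRASP_FOR = {
    ("mug", "pour"):      "handle_grasp",
    ("mug", "hand_over"): "rim_grasp",
    ("bottle", "pour"):   "side_grasp",
}

def task_relevant_grasp(category_classifier, observation, task):
    """Categorize the perceived object, then pick the grasp suited to the task."""
    category = category_classifier.predict(observation)  # learned categorization
    return GRASP_FOR.get((category, task), "default_top_grasp")
```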