# Projects using robomimic
A list of projects and papers that use robomimic. If you would like to add your work to this list, please send the paper or project information to Ajay Mandlekar (amandlek@cs.stanford.edu).
## 2023
- **Imitating Task and Motion Planning with Visuomotor Transformers.** Murtaza Dalal, Ajay Mandlekar\*, Caelan Garrett\*, Ankur Handa, Ruslan Salakhutdinov, Dieter Fox
- **Data Quality in Imitation Learning.** Suneel Belkhale, Yuchen Cui, Dorsa Sadigh
- **Coherent Soft Imitation Learning.** Joe Watson, Sandy H. Huang, Nicolas Heess
- **Inverse Preference Learning: Preference-based RL without a Reward Function.** Joey Hejna, Dorsa Sadigh
- **Sequence Modeling is a Robust Contender for Offline Reinforcement Learning.** Prajjwal Bhargava, Rohan Chitnis, Alborz Geramifard, Shagun Sodhani, Amy Zhang
- **Diffusion Co-Policy for Synergistic Human-Robot Collaborative Tasks.** Eley Ng, Ziang Liu, Monroe Kennedy III
- **Zero-shot Preference Learning for Offline RL via Optimal Transport.** Runze Liu, Yali Du, Fengshuo Bai, Jiafei Lyu, Xiu Li
- **Seeing the Pose in the Pixels: Learning Pose-Aware Representations in Vision Transformers.** Dominick Reilly, Aman Chadha, Srijan Das
- **Get Back Here: Robust Imitation by Return-to-Distribution Planning.** Geoffrey Cideron, Baruch Tabanpour, Sebastian Curi, Sertan Girgin, Leonard Hussenot, Gabriel Dulac-Arnold, Matthieu Geist, Olivier Pietquin, Robert Dadashi
- **Preference Transformer: Modeling Human Preferences using Transformers for RL.** Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
- **MimicPlay: Long-Horizon Imitation Learning by Watching Human Play.** Chen Wang, Linxi Fan, Jiankai Sun, Ruohan Zhang, Li Fei-Fei, Danfei Xu, Yuke Zhu, Anima Anandkumar
- **Diffusion Policy: Visuomotor Policy Learning via Action Diffusion.** Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, Shuran Song
- **ORBIT: A Unified Simulation Framework for Interactive Robot Learning Environments.** Mayank Mittal, Calvin Yu, Qinxi Yu, Jingzhou Liu, Nikita Rudin, David Hoeller, Jia Lin Yuan, Pooria Poorsarvi Tehrani, Ritvik Singh, Yunrong Guo, Hammad Mazhar, Ajay Mandlekar, Buck Babich, Gavriel State, Marco Hutter, Animesh Garg
- **PLEX: Making the Most of the Available Data for Robotic Manipulation Pretraining.** Garrett Thomas, Ching-An Cheng, Ricky Loynd, Vibhav Vineet, Mihai Jalobeanu, Andrey Kolobov
- **Behavior Retrieval: Few-Shot Imitation Learning by Querying Unlabeled Datasets.** Maximilian Du, Suraj Nair, Dorsa Sadigh, Chelsea Finn
- **Mind the Gap: Offline Policy Optimization for Imperfect Rewards.** Jianxiong Li, Xiao Hu, Haoran Xu, Jingjing Liu, Xianyuan Zhan, Qing-Shan Jia, Ya-Qin Zhang
## 2022
- **Learning and Retrieval from Prior Data for Skill-based Imitation Learning.** Soroush Nasiriany, Tian Gao, Ajay Mandlekar, Yuke Zhu
- **VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors.** Yifeng Zhu, Abhishek Joshi, Peter Stone, Yuke Zhu
- **Robot Learning on the Job: Human-in-the-Loop Autonomy and Learning During Deployment.** Huihan Liu, Soroush Nasiriany, Lance Zhang, Zhiyao Bao, Yuke Zhu
- **Data-Efficient Pipeline for Offline Reinforcement Learning with Limited Data.** Allen Nie, Yannis Flet-Berliac, Deon R. Jordan, William Steenbergen, Emma Brunskill
- **Eliciting Compatible Demonstrations for Multi-Human Imitation Learning.** Kanishk Gandhi, Siddharth Karamcheti, Madeline Liao, Dorsa Sadigh
- **Masked Imitation Learning: Discovering Environment-Invariant Modalities in Multimodal Demonstrations.** Yilun Hao, Ruinan Wang, Zhangjie Cao, Zihan Wang, Yuchen Cui, Dorsa Sadigh
- **Know Your Boundaries: The Necessity of Explicit Behavioral Cloning in Offline RL.** Wonjoon Goo, Scott Niekum
- **HEETR: Pretraining for Robotic Manipulation on Heteromodal Data.** Garrett Thomas, Andrey Kolobov, Ching-An Cheng, Vibhav Vineet, Mihai Jalobeanu
- **Translating Robot Skills: Learning Unsupervised Skill Correspondences Across Robots.** Tanmay Shankar, Yixin Lin, Aravind Rajeswaran, Vikash Kumar, Stuart Anderson, Jean Oh
- **Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems.** Alexander Ororbia, Ankur Mali
- **Imitation Learning by Estimating Expertise of Demonstrators.** Mark Beliaev, Andy Shih, Stefano Ermon, Dorsa Sadigh, Ramtin Pedarsani
## 2021
- **RLDS: an Ecosystem to Generate, Share and Use Datasets in Reinforcement Learning.** Sabela Ramos, Sertan Girgin, Léonard Hussenot, Damien Vincent, Hanna Yakubovich, Daniel Toyama, Anita Gergely, Piotr Stanczyk, Raphael Marinier, Jeremiah Harmsen, Olivier Pietquin, Nikola Momchev
- **Error-Aware Imitation Learning from Teleoperation Data for Mobile Manipulation.** Josiah Wong, Albert Tung, Andrey Kurenkov, Ajay Mandlekar, Li Fei-Fei, Silvio Savarese, Roberto Martín-Martín
- **Generalization Through Hand-Eye Coordination: An Action Space for Learning Spatially-Invariant Visuomotor Control.** Chen Wang, Rui Wang, Danfei Xu, Ajay Mandlekar, Li Fei-Fei, Silvio Savarese
- **Deep Affordance Foresight: Planning Through What Can Be Done in the Future.** Danfei Xu, Ajay Mandlekar, Roberto Martín-Martín, Yuke Zhu, Silvio Savarese, Li Fei-Fei
- **Learning Multi-Arm Manipulation Through Collaborative Teleoperation.** Albert Tung, Josiah Wong, Ajay Mandlekar, Roberto Martín-Martín, Yuke Zhu, Li Fei-Fei, Silvio Savarese
- **Learning to Generalize Across Long-Horizon Tasks from Human Demonstrations.** Ajay Mandlekar\*, Danfei Xu\*, Roberto Martín-Martín, Silvio Savarese, Li Fei-Fei
## 2020
- **Human-in-the-Loop Imitation Learning using Remote Teleoperation.** Ajay Mandlekar, Danfei Xu, Roberto Martín-Martín, Yuke Zhu, Li Fei-Fei, Silvio Savarese
- **IRIS: Implicit Reinforcement without Interaction at Scale for Learning Control from Offline Robot Manipulation Data.** Ajay Mandlekar, Fabio Ramos, Byron Boots, Silvio Savarese, Li Fei-Fei, Animesh Garg, Dieter Fox