Ajay Sridhar

I am a Computer Science PhD student at Stanford University. My research is supported by the NSF Graduate Research Fellowship. I am interested in building generalizable and robust robot learning systems that continuously improve with experience.

Previously, I was an undergraduate at UC Berkeley, where I worked with Prof. Sergey Levine at the Robotic AI and Learning Lab. I have also worked with Prof. Thomas Dietterich on domain generalization techniques in computer vision.

Email  /  Google Scholar  /  Github  /  CV  /  Twitter

Publications

LeLaN: Learning A Language-Conditioned Navigation Policy from In-the-Wild Videos
Noriaki Hirose, Catherine Glossop, Ajay Sridhar, Dhruv Shah, Oier Mees, Sergey Levine
Conference on Robot Learning (CoRL), 2024
arXiv / Website / Code

LeLaN is a language-conditioned navigation policy that learns from in-the-wild videos to enable natural language navigation commands for mobile robots.

SELFI: Autonomous Self-improvement with Reinforcement Learning for Social Navigation
Noriaki Hirose, Dhruv Shah, Kyle Stachowicz, Ajay Sridhar, Sergey Levine
Conference on Robot Learning (CoRL), 2024 (Oral)
arXiv / Summary Video

SELFI is an online reinforcement learning approach for fine-tuning control policies trained with model-based learning. We combine the objective used during model-based learning with a Q-value function learned online.

NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration
Ajay Sridhar, Dhruv Shah, Catherine Glossop, Sergey Levine
ICRA, 2024 (Best Conference Paper Award)
CoRL 2023 Workshop on Pre-Training for Robot Learning, 2023 (Oral)
NeurIPS 2023 Workshop on Foundation Models for Decision Making, 2023 (Oral)
arXiv / Summary Video / Code

NoMaD is a novel architecture for robotic navigation in previously unseen environments that uses a unified diffusion policy to jointly represent exploratory task-agnostic behavior and goal-directed task-specific behavior.

ViNT: A Foundation Model for Visual Navigation
Dhruv Shah*, Ajay Sridhar*, Nitish Dashora*, Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine
Conference on Robot Learning (CoRL), 2023 (Oral & Live Demonstration)
Bay Area Machine Learning Symposium (BayLearn), 2023 (Oral)
arXiv / Summary Video / Code

ViNT is a flexible Transformer-based model for visual navigation that can be efficiently adapted to a variety of downstream navigational tasks.

SACSoN: Scalable Autonomous Control for Social Navigation
Noriaki Hirose, Dhruv Shah, Ajay Sridhar, Sergey Levine
IEEE Robotics and Automation Letters (RA-L), 2023
Conference on Robot Learning (CoRL), 2023 (Live Demonstration)
arXiv / Summary Video / Dataset

SACSoN is a vision-based navigation policy that learns socially unobtrusive behavior in human-occupied spaces through continual learning.

GNM: A General Navigation Model to Drive Any Robot
Dhruv Shah*, Ajay Sridhar*, Noriaki Hirose, Sergey Levine
International Conference on Robotics and Automation (ICRA), 2023
arXiv / Summary Video / Code / Media Coverage

GNM is a vision-based navigation policy trained with a simple goal-reaching objective on a cross-embodiment navigation dataset. It exhibits positive transfer, outperforming specialist models trained on single-embodiment datasets, and generalizes to new robots.

ExAug: Robot-Conditioned Navigation Policies via Geometric Experience Augmentation
Noriaki Hirose, Dhruv Shah, Ajay Sridhar, Sergey Levine
International Conference on Robotics and Automation (ICRA), 2023
arXiv / Summary Video

ExAug is a vision-based navigation policy that learns to control robots with varying camera types, camera placements, robot sizes, and velocity constraints by applying a novel geometry-aware objective to view-augmented data.

Teaching
Undergraduate Student Instructor, CS188 Spring 2024
Undergraduate Student Instructor, CS188 Fall 2023
Undergraduate Student Instructor, CS188 Spring 2023
Undergraduate Student Instructor, CS188 Fall 2022
Undergraduate Student Instructor, CS188 Spring 2022
Tutor, EECS16B Fall 2021

Source code from Jon Barron's website.