Ajay Sridhar

I am an undergraduate at UC Berkeley, majoring in Electrical Engineering and Computer Science (EECS) and minoring in Logic. My current research is with Prof. Sergey Levine at the Robotic AI and Learning Lab. I am interested in building generalizable and robust robot learning systems that continuously improve with experience. Previously, I worked with Prof. Thomas Dietterich on domain generalization techniques in computer vision.

I have been a teaching assistant for CS 188: Introduction to Artificial Intelligence for the past five semesters under Prof. Stuart Russell, Prof. Dawn Song, Dr. Igor Mordatch, and Peyrin Kao.

Email  /  Google Scholar  /  Github  /  CV  /  Twitter

profile photo
Preprints

SELFI: Autonomous Self-improvement with Reinforcement Learning for Social Navigation
Noriaki Hirose, Dhruv Shah, Kyle Stachowicz, Ajay Sridhar, Sergey Levine
arXiv, 2024
arXiv / Summary Video

SELFI is an online reinforcement learning approach for fine-tuning control policies trained with model-based learning. We combine the objective used during model-based learning with a Q-value function learned online.

Publications

NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration
Ajay Sridhar, Dhruv Shah, Catherine Glossop, Sergey Levine
ICRA, 2024 (Best Conference Paper Award)
CoRL 2023 Workshop on Pre-Training for Robot Learning (Oral Presentation)
NeurIPS 2023 Workshop on Foundation Models for Decision Making (Oral Presentation)
arXiv / Summary Video / Code

NoMaD is a novel architecture for robotic navigation in previously unseen environments that uses a unified diffusion policy to jointly represent exploratory task-agnostic behavior and goal-directed task-specific behavior.

ViNT: A Foundation Model for Visual Navigation
Dhruv Shah*, Ajay Sridhar*, Nitish Dashora*, Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine
Conference on Robot Learning (CoRL), 2023 (Oral Presentation & Live Demonstration)
Bay Area Machine Learning Symposium (BayLearn), 2023 (Oral Presentation)
arXiv / Summary Video / Code

ViNT is a flexible Transformer-based model for visual navigation that can be efficiently adapted to a variety of downstream navigational tasks.

SACSoN: Scalable Autonomous Control for Social Navigation
Noriaki Hirose, Dhruv Shah, Ajay Sridhar, Sergey Levine
IEEE Robotics and Automation Letters (RA-L), 2023
Conference on Robot Learning (CoRL), 2023 (Live Demonstration)
arXiv / Summary Video / Dataset

SACSoN is a vision-based navigation policy that learns socially unobtrusive behavior in human-occupied spaces through continual learning.

GNM: A General Navigation Model to Drive Any Robot
Dhruv Shah*, Ajay Sridhar*, Noriaki Hirose, Sergey Levine
International Conference on Robotics and Automation (ICRA), 2023
arXiv / Summary Video / Code / Media Coverage

GNM is a vision-based navigation policy trained with a simple goal-reaching objective on a cross-embodiment navigation dataset. It exhibits positive transfer, outperforming specialist models trained on single-embodiment datasets, and generalizes to new robots.

ExAug: Robot-Conditioned Navigation Policies via Geometric Experience Augmentation
Noriaki Hirose, Dhruv Shah, Ajay Sridhar, Sergey Levine
International Conference on Robotics and Automation (ICRA), 2023
arXiv / Summary Video

ExAug is a vision-based navigation policy that learns to control robots with varying camera types, camera placements, robot sizes, and velocity constraints by applying a novel geometry-aware objective to view-augmented data.

Teaching
Undergraduate Student Instructor, CS 188 Spring 2024
Undergraduate Student Instructor, CS 188 Fall 2023
Undergraduate Student Instructor, CS 188 Spring 2023
Undergraduate Student Instructor, CS 188 Fall 2022
Undergraduate Student Instructor, CS 188 Spring 2022
Tutor, EECS 16B Fall 2021

Source code from Jon Barron's website.