Research
I'm interested in enabling agile robotics control and planning via novel vision+learning techniques, including the use of event-based vision.
Anish Bhattacharya* Nishanth Rao* Dhruv Parikh* Pratik Kunapuli Yuwei Wu Yuezhan Tao Nikolai Matni Vijay Kumar
We study the capabilities of a vision transformer (ViT) architecture that maps depth images to velocity commands, trained in simulation and deployed in the real world. When combined with LSTM layers, the ViT outperforms other state-of-the-art architectures (UNet, LSTM-only) as forward velocity increases. We also find that this model zero-shot transfers to high-speed, multi-obstacle dodging in real-world indoor scenes.
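As a rough illustration of this kind of depth-to-velocity policy, below is a minimal PyTorch sketch of a ViT encoder over depth-image patches followed by an LSTM over time. The patch size, embedding width, and output dimensionality are assumptions for illustration and do not reflect the paper's actual architecture or training setup.

```python
# Minimal sketch (not the authors' implementation): ViT over depth-image patches,
# then an LSTM over time, regressing a per-timestep velocity command.
import torch
import torch.nn as nn

class ViTLSTMPolicy(nn.Module):
    def __init__(self, img_size=64, patch=8, dim=128, depth=4, heads=4, lstm_hidden=128):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # Tokenize the single-channel depth image into non-overlapping patches
        self.patchify = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=depth)
        self.lstm = nn.LSTM(dim, lstm_hidden, batch_first=True)  # temporal memory across frames
        self.head = nn.Linear(lstm_hidden, 3)                    # velocity command (vx, vy, vz)

    def forward(self, depth_seq, state=None):
        # depth_seq: (B, T, 1, H, W) sequence of depth images
        B, T = depth_seq.shape[:2]
        x = depth_seq.flatten(0, 1)                              # (B*T, 1, H, W)
        tokens = self.patchify(x).flatten(2).transpose(1, 2) + self.pos
        feat = self.encoder(tokens).mean(dim=1)                  # pooled per-frame feature
        out, state = self.lstm(feat.view(B, T, -1), state)
        return self.head(out), state                             # per-timestep velocity commands

# Example: two sequences of five 64x64 depth frames
cmds, _ = ViTLSTMPolicy()(torch.rand(2, 5, 1, 64, 64))
print(cmds.shape)  # torch.Size([2, 5, 3])
```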
Anish Bhattacharya Ratnesh Madaan Fernando Cladera Sai Vemprala Rogerio Bonatti
Kostas Daniilidis Ashish Kapoor Vijay Kumar Nikolai Matni Jayesh Gupta
We present a pipeline to generate event-based datasets and train dynamic NeRF models in a self-supervised fashion from event data; by reconstructing events from novel viewpoints and times, EvDNeRF can then act as an event camera simulator for a given scene.
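For intuition, here is a minimal sketch of the standard event-camera contrast-threshold model that such a simulator relies on: an event fires at a pixel whenever the log-intensity change since that pixel's last event exceeds a threshold C. The function name, threshold value, and interface below are illustrative assumptions, not EvDNeRF's actual API.

```python
import numpy as np

def events_from_frame(I_curr, ref_log, C=0.2, eps=1e-6):
    """Per-pixel event polarities from a rendered intensity frame.

    I_curr  : (H, W) rendered brightness at the query time
    ref_log : (H, W) log-intensity at each pixel's last event
    Returns a polarity map in {-1, 0, +1} and the updated reference log-intensity.
    """
    log_i = np.log(I_curr + eps)
    diff = log_i - ref_log
    pol = np.zeros_like(diff, dtype=np.int8)
    pol[diff >= C] = 1            # brightness rose past the contrast threshold
    pol[diff <= -C] = -1          # brightness fell past the contrast threshold
    fired = pol != 0
    ref_log = np.where(fired, log_i, ref_log)  # reset reference where events fired
    return pol, ref_log

# Example: compare two renders of the same scene a short time apart
frame0, frame1 = np.full((4, 4), 0.5), np.full((4, 4), 0.7)
pol, ref = events_from_frame(frame1, np.log(frame0 + 1e-6))
print(pol)  # +1 everywhere: brightness increased by more than the threshold
```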
I was the student lead for CMU Team Tartans in the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020, a competition centered on teams of autonomous robots completing real-world tasks. Our team focused on 100% autonomous operation and placed 8th in Challenge 1, 4th in Challenge 2, and 7th in the Grand Challenge. Some key achievements of our team include:
Running seven missions in under 20 minutes with our rapid deployment pipeline;
Popping all balloon targets;
One of four teams to pick and place a block with an autonomous UAV;
Most water dispensed onto an outdoor fire with an autonomous UAV.
Our work is published in Field Robotics, Special Issue on MBZIRC 2020. Please see the project website and the AirLab research page for details.
Cooperative Block Stacking with a UAV-UGV Team
Anish Bhattacharya Kevin Zhang
Unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) are often used to perform complementary tasks in team missions, e.g., a UGV opening a door for a UAV, or a UAV scouting an area for a UGV. However, to maximize the benefits of such heterogeneous teams, the robots must also be capable of performing tasks together, with physical interaction between the UAVs and UGVs.
In this work, we present a collaborative framework for performing robust manipulation with teams of UAVs and UGVs. In particular, we propose a method for jointly placing and stacking objects that exploits the UAV's mobility and the UGV's precise position and force control. The UAV moves the block, while the UGV's manipulator provides a surface for aligning and adjusting it; the two robots thus act as the two hands in a bimanual placing task when mating or aligning two surfaces. Sensors on both robots monitor the overall process and detect errors. We evaluated the framework on an accurate block stacking task, in which we succeeded in 3 of 4 block placement attempts and were able to accurately align the block in instances when it was within the UGV manipulator's alignment workspace.
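Purely as an illustration of the kind of coordination such a framework requires (not the actual implementation), a simple state machine for the joint placing task might look like the sketch below; all robot interfaces (move_to, contact_force, and so on) are hypothetical placeholders.

```python
# Illustrative sketch only: UAV carries the block, the UGV arm provides an
# alignment surface, and both robots' sensors monitor for errors.
from enum import Enum, auto

class Phase(Enum):
    APPROACH = auto()   # UAV carries block toward the stack
    ALIGN = auto()      # UGV arm nudges the block against its alignment surface
    PLACE = auto()      # UAV lowers and releases the block
    RECOVER = auto()    # either robot detected an error; retry from approach
    DONE = auto()

def step(phase, uav, ugv, force_limit=5.0):
    """Advance the cooperative placing state machine by one tick (hypothetical interfaces)."""
    if phase is Phase.APPROACH:
        uav.move_to(ugv.alignment_pose())
        return Phase.ALIGN if uav.at_target() else Phase.APPROACH
    if phase is Phase.ALIGN:
        if abs(ugv.contact_force()) > force_limit:   # excessive contact -> error
            return Phase.RECOVER
        return Phase.PLACE if ugv.block_aligned() else Phase.ALIGN
    if phase is Phase.PLACE:
        uav.release()
        return Phase.DONE if ugv.placement_verified() else Phase.RECOVER
    if phase is Phase.RECOVER:
        uav.regrasp()
        return Phase.APPROACH
    return Phase.DONE
```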
Contact Inspection with Fully-Actuated UAVs
Infrastructure inspection is a dangerous job that typically requires heavy machinery and a crew of operators and engineers. There is an opportunity to aid such projects with unmanned aerial vehicles (UAVs), which benefit from high mobility and portability; a semi-autonomous system could be used by workers to take measurements from bridges, dams, and even large aircraft. While UAVs are traditionally used for sensing from a distance (cameras, lidar), this project focused on taking in-situ measurements with a sonic depth gauge that required about a second of contact. My main contributions include visual servoing to user-selected targets via a live-feed GUI, and building and working with the tilt-hex aerial robot, a fully-actuated hexarotor with tilted rotors.
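As a rough sketch of the visual-servoing idea (a simplified, assumed form, not the project's actual controller): a proportional law on the pixel error between the operator-selected target and the image center produces a body-frame velocity command. The gains, sign conventions, and fixed forward speed below are illustrative assumptions.

```python
import numpy as np

def servo_command(target_px, center_px, gain=0.002, v_forward=0.3):
    """Map pixel error (selected target vs. image center) to a velocity command.

    target_px : (u, v) pixel selected by the operator in the live-feed GUI
    center_px : (u0, v0) image center (approximate principal point)
    Returns (vx, vy, vz): forward, lateral, and vertical body-frame velocities.
    """
    err_u = target_px[0] - center_px[0]   # horizontal pixel error
    err_v = target_px[1] - center_px[1]   # vertical pixel error
    vy = -gain * err_u                    # translate laterally to center the target
    vz = -gain * err_v                    # climb/descend to center the target
    return np.array([v_forward, vy, vz])

# Example: operator clicks a point right of and below the image center
print(servo_command((400, 300), (320, 240)))  # [ 0.3  -0.16 -0.12]
```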