end-to-end-autonomous-driving
[CVPR 2023 Best Paper Award] Planning-oriented Autonomous Driving
[IEEE T-PAMI 2024] All you need for End-to-end Autonomous Driving
[NeurIPS 2022] Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline.
OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model
[ECCV 2022] ST-P3, an end-to-end vision-based autonomous driving framework via spatial-temporal feature learning.
Our insights into Openpilot: a deep-dive project on it
A comprehensive survey on forging vision foundation models for autonomous driving, covering challenges, methodologies, and opportunities.
[CVPR 2023] PyTorch implementation of ThinkTwice, a SOTA decoder for end-to-end autonomous driving under BEV.
A collection of recent resources on End-to-End Autonomous Driving [survey accepted in IEEE TIV]
[ICLR 2023] PyTorch implementation of PPGeo, a fully self-supervised driving policy pre-training framework that learns from unlabeled driving videos.
VADv2: End-to-End Vectorized Autonomous Driving via Probabilistic Planning
Official code release of DiffSemanticFusion [including Mapless QCNet], which achieves SOTA on both nuScenes and NAVSIM
[Official] [IROS 2024] A goal-oriented planning approach to lift VLN performance for Closed-Loop Navigation: Simple, Yet Effective
Closed-loop evaluation for an end-to-end VLM autonomous driving agent
A data converter for nuPlan and VAD (VADv2)
End-to-End Learning of a Self-Driving Vehicle in Urban Environments using CARLA
This repository contains the compiled GUI and backend codebase to enable the full functionality of Project Varuna.
RecyclingRush ♻️: Towards Continuous Floating Invasive Plant Removal Using Unmanned Surface Vehicles and Computer Vision, IEEE Access 2024.
Development and validation of an end-to-end controller based on deep reinforcement learning, trained to reach a desired destination within a ROS Gazebo simulation environment (a minimal sketch of such a control loop appears after this list).
🤖 Explore open-source robotics models and packages, including advanced vision-language-action systems for versatile applications and fine-tuning options.
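The DRL controller entry above follows a familiar closed-loop pattern: the agent observes the vehicle state from the simulator, emits a control action, and is rewarded for approaching the goal. Below is a minimal, hedged sketch of that structure in Python; the `GazeboEnvStub` class, the observation layout, and the linear policy are all hypothetical placeholders standing in for the repository's actual ROS Gazebo interfaces and trained network.

```python
import numpy as np

class GazeboEnvStub:
    """Hypothetical gym-style stand-in for a ROS Gazebo wrapper.
    The real environment would publish commands and read odometry
    over ROS topics instead of using toy dynamics."""

    def reset(self):
        self.dist = 5.0  # assumed initial distance to the goal (meters)
        # Assumed observation layout: [x, y, heading, distance_to_goal]
        return np.array([0.0, 0.0, 0.0, self.dist])

    def step(self, action):
        steer, throttle = action
        # Toy dynamics: positive throttle shrinks the distance to the goal
        self.dist = max(0.0, self.dist - max(throttle, 0.0) * 0.1)
        obs = np.array([0.0, 0.0, steer, self.dist])
        reward = -self.dist        # penalize remaining distance
        done = self.dist < 0.1     # episode ends near the goal
        return obs, reward, done

def policy(obs, weights):
    """Placeholder linear policy mapping the observation to [steer, throttle];
    a trained deep RL network would take its place."""
    return np.tanh(weights @ obs)

env = GazeboEnvStub()
weights = np.random.default_rng(0).normal(size=(2, 4))
obs = env.reset()
for _ in range(100):
    obs, reward, done = env.step(policy(obs, weights))
    if done:
        obs = env.reset()
```

In the actual project, `reset` and `step` would interact with the Gazebo simulation through ROS, and the policy weights would come from a trained agent rather than a random initialization; only the observe-act-reward loop is meant to carry over.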