
Navigation and Perception

Autonomous Vehicle Systems with ROS2

Advanced Perception and Control Systems for Autonomous Navigation

31 August 2025

Project

Introduction

This comprehensive project implements multiple autonomous vehicle subsystems using ROS2 and Gazebo, focusing on critical safety and navigation capabilities. The system encompasses five major components: Automatic Emergency Braking (AEB) for collision prevention, wall following controllers for autonomous racing, advanced lane detection and keeping algorithms, deep learning model integration for object detection and classification, and a complete localization and navigation stack. The implementation leverages ROS2's distributed architecture to create modular, real-time capable systems that process sensor data from LIDAR, cameras, and odometry sources. Through careful controller tuning and sensor fusion techniques, the system achieves robust performance in both simulated and real-world environments. The AEB system uses a P-controller with LIDAR-based obstacle detection, achieving sub-100ms response times for emergency stops. Wall following controllers implement PID variants optimized for high-speed navigation, maintaining precise lateral distances while maximizing velocity. Lane detection employs computer vision pipelines with Canny edge detection and Hough transforms, processing camera feeds at 30 FPS for real-time lane keeping. Deep learning integration demonstrates seamless deployment of TinyYOLOv3 and ResNet50 models within ROS2 nodes, enabling sophisticated perception capabilities. The complete system represents a production-ready autonomous vehicle software stack suitable for research and development applications.

Objectives

  • To develop a comprehensive collision prevention system using ROS2 and Gazebo with real-time sensor processing

  • To implement robust wall following controllers for autonomous racing applications with optimized PID tuning

  • To create advanced lane detection and keeping algorithms using computer vision techniques

  • To integrate pre-trained deep learning models (TinyYOLOv3, ResNet50) within ROS2 architecture

  • To build complete localization and navigation stack for autonomous vehicle operation

  • To achieve real-time performance (<100ms latency) across all perception and control modules

  • To create modular, reusable ROS2 packages following best practices for robotics software development

Tools and Technologies

  • Framework: ROS2 Galactic/Humble with Python/C++ nodes

  • Simulation: Gazebo 11 with custom vehicle models

  • Sensors: LIDAR (360° scanning), RGB cameras, IMU, wheel encoders

  • Computer Vision: OpenCV 4.5+, Canny edge detection, Hough transform

  • Deep Learning: TinyYOLOv3, ResNet50, Keras/TensorFlow integration

  • Controllers: P-controller, PID variants with anti-windup

  • GPU Acceleration: CUDA with cuDNN for neural network inference

  • Visualization: RViz2, rqt tools for debugging

  • Build System: Colcon with CMake/ament

  • Version Control: Git with modular package structure

Source Code

  • Video Demonstration: Vimeo - System Capabilities

Process and Development

The project is structured into five interconnected systems, each addressing critical aspects of autonomous vehicle operation through specialized perception and control algorithms.

Task 1: Automatic Emergency Braking (AEB)

Implementation

Collision Detection Pipeline: Developed real-time obstacle detection using LIDAR scan data, processing 720 points per scan at 10Hz, implementing dynamic threshold calculation based on vehicle velocity for adaptive braking distances.
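
For illustration, a minimal sketch of such a LIDAR-based trigger is shown below. It assumes a standard sensor_msgs/LaserScan and nav_msgs/Odometry on /scan and /odom, an ackermann_msgs drive command, and an illustrative time-to-collision threshold; the real node uses the project's own topics and tuning.

```python
# Minimal AEB sketch: stop when projected time-to-collision drops below a threshold.
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from nav_msgs.msg import Odometry
from ackermann_msgs.msg import AckermannDriveStamped  # assumed drive interface


class AEBNode(Node):
    def __init__(self):
        super().__init__('aeb_node')
        self.speed = 0.0
        self.ttc_threshold = 1.0  # seconds; illustrative, tuned experimentally
        self.create_subscription(Odometry, '/odom', self.odom_cb, 10)
        self.create_subscription(LaserScan, '/scan', self.scan_cb, 10)
        self.drive_pub = self.create_publisher(AckermannDriveStamped, '/drive', 10)

    def odom_cb(self, msg):
        self.speed = msg.twist.twist.linear.x

    def scan_cb(self, scan):
        ranges = np.asarray(scan.ranges)
        angles = scan.angle_min + np.arange(len(ranges)) * scan.angle_increment
        # Closing rate along each beam: positive when the car approaches that return.
        closing = self.speed * np.cos(angles)
        valid = np.isfinite(ranges) & (closing > 1e-3)
        if not np.any(valid):
            return
        ttc = ranges[valid] / closing[valid]
        if np.min(ttc) < self.ttc_threshold:
            brake = AckermannDriveStamped()
            brake.drive.speed = 0.0  # commanding zero speed acts as a full brake
            self.drive_pub.publish(brake)


def main():
    rclpy.init()
    rclpy.spin(AEBNode())


if __name__ == '__main__':
    main()
```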

P-Controller Design: Implemented proportional controller with gain scheduling, mapping distance-to-collision to brake force commands, achieving smooth deceleration profiles while preventing wheel lock-up through ABS-like modulation.
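
The proportional mapping itself can be sketched as a small function like the one below; the gains, speed bands, and safety margin are placeholders rather than the tuned project values.

```python
def brake_command(distance_to_obstacle, speed, d_safe=0.5):
    """Proportional brake force in [0, 1] from the distance error (illustrative gains)."""
    # Adaptive braking distance grows with speed (simple kinematic margin).
    d_brake = d_safe + 0.5 * speed            # metres
    error = d_brake - distance_to_obstacle    # positive once inside the braking envelope
    kp = 0.8 if speed < 5.0 else 1.5          # crude gain scheduling by speed band
    return float(min(max(kp * error / max(d_brake, 1e-3), 0.0), 1.0))
```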

Sensor Fusion: Integrated multiple sensor streams including front LIDAR sectors (±30°), ultrasonic sensors for close-range detection, and camera-based object classification to reduce false positives from non-threatening obstacles.

Performance Optimization: Tuned controller parameters through systematic testing, achieving <50ms response time from obstacle detection to brake activation, maintaining safety margins while minimizing unnecessary interventions.

Task 2: Wall Following Controllers

Right Wall Following: Implemented PID controller maintaining constant lateral distance (0.5-2.0m configurable) from right wall, using LIDAR ray angles at 0°, 45°, and 90° for distance and orientation estimation.
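
The distance and orientation estimate from two beams follows the standard wall-following geometry; a sketch under that assumption, using the 45° and 90° beams mentioned above and an illustrative look-ahead, is:

```python
import math


def wall_distance_and_heading(a, b, theta=math.radians(45.0), lookahead=1.0):
    """Estimate distance and relative heading to the right wall from two LIDAR beams.

    a: range of the beam `theta` radians ahead of the perpendicular beam
    b: range of the beam perpendicular to the car (90° to the right)
    Returns (projected_distance, alpha), where alpha is the car's angle to the wall.
    """
    alpha = math.atan2(a * math.cos(theta) - b, a * math.sin(theta))
    dist = b * math.cos(alpha)
    # Project the distance forward by a look-ahead so the controller reacts early.
    return dist + lookahead * math.sin(alpha), alpha
```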

Center Line Navigation: Developed dual-wall following algorithm calculating vehicle position relative to corridor centerline, balancing left and right wall distances through weighted averaging for optimal trajectory planning.

High-Speed Optimization: Tuned PID gains for different velocity ranges (0-5 m/s, 5-10 m/s, >10 m/s), implementing gain scheduling and feed-forward terms to maintain stability during aggressive maneuvers.
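
A compact sketch of what velocity-based gain scheduling with integrator clamping can look like is given below; the gain table and limits are illustrative, not the tuned values.

```python
class ScheduledPID:
    """PID whose gains switch with speed band; the integrator is clamped (anti-windup)."""

    GAIN_TABLE = [            # (max_speed, kp, ki, kd) -- illustrative values
        (5.0,  1.00, 0.010, 0.10),
        (10.0, 0.60, 0.005, 0.15),
        (1e9,  0.35, 0.002, 0.20),
    ]

    def __init__(self, i_limit=1.0):
        self.integral = 0.0
        self.prev_error = 0.0
        self.i_limit = i_limit

    def step(self, error, speed, dt):
        # Pick the gain set for the current speed band.
        kp, ki, kd = next(g[1:] for g in self.GAIN_TABLE if speed <= g[0])
        self.integral = max(-self.i_limit, min(self.i_limit, self.integral + error * dt))
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return kp * error + ki * self.integral + kd * derivative
```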

Task 3: Advanced Lane Detection and Keeping

Image Preprocessing: Implemented multi-stage pipeline with perspective transformation to bird's-eye view, HSL color space conversion for robust lane marking detection under varying lighting conditions.
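
A hedged OpenCV sketch of this preprocessing stage is shown below; the warp source and destination points would come from calibration, and the HLS thresholds are illustrative.

```python
import cv2
import numpy as np


def preprocess(frame, src_pts, dst_pts, out_size=(400, 600)):
    """Warp to a bird's-eye view and threshold lane markings in HLS (illustrative thresholds)."""
    M = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    birdseye = cv2.warpPerspective(frame, M, out_size)
    hls = cv2.cvtColor(birdseye, cv2.COLOR_BGR2HLS)
    # White markings: high lightness; yellow markings: hue band with moderate saturation.
    white = cv2.inRange(hls, (0, 200, 0), (255, 255, 255))
    yellow = cv2.inRange(hls, (10, 50, 100), (40, 200, 255))
    return cv2.bitwise_or(white, yellow), birdseye
```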

Lane Line Extraction: Applied Canny edge detection with adaptive thresholds, Hough transform for line detection, sliding window algorithm for curved lane tracking, polynomial fitting for smooth trajectory generation.
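
The extraction stage can be sketched roughly as follows, assuming the thresholded bird's-eye image from the previous step; the Hough parameters and the left/right split at the image midpoint are simplifications of the full sliding-window tracker.

```python
import cv2
import numpy as np


def extract_lane_lines(binary_birdseye):
    """Edge detection, probabilistic Hough, and a 2nd-order polynomial fit per side."""
    edges = cv2.Canny(binary_birdseye, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=20)
    if segments is None:
        return None, None
    mid = binary_birdseye.shape[1] // 2
    left_pts, right_pts = [], []
    for x1, y1, x2, y2 in segments[:, 0]:
        (left_pts if (x1 + x2) / 2 < mid else right_pts).extend([(x1, y1), (x2, y2)])

    def fit(points):
        if len(points) < 3:
            return None
        xs, ys = zip(*points)
        return np.polyfit(ys, xs, 2)   # fit x = f(y): natural for near-vertical lanes

    return fit(left_pts), fit(right_pts)
```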

Centerline Calculation: Developed weighted averaging of left and right lane boundaries, handling single lane scenarios through historical data, calculating lateral error and heading angle for controller input.

Controller Integration: Designed Stanley controller for steering angle computation, combining cross-track error and heading error, implementing velocity-dependent gains for stable tracking at various speeds.
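
The Stanley law itself is compact; a sketch with placeholder gain, softening constant, and steering limit:

```python
import math


def stanley_steering(cross_track_error, heading_error, speed, k=0.5, k_soft=1.0,
                     max_steer=math.radians(30)):
    """Stanley law: steering = heading error + arctan(k * e / (v + softening))."""
    delta = heading_error + math.atan2(k * cross_track_error, speed + k_soft)
    return max(-max_steer, min(max_steer, delta))
```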

Frame Transformation: Addressed camera-to-robot coordinate transformation using tf2, projecting lane information to vehicle's base frame, compensating for camera mounting angle and position offsets.
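
A minimal sketch of this projection with tf2 is below; the frame names ('base_link', camera frame) are assumptions, and error handling for tf2 lookup exceptions is omitted.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PointStamped
import tf2_ros
import tf2_geometry_msgs  # registers PointStamped conversions with tf2


class LaneProjector(Node):
    def __init__(self):
        super().__init__('lane_projector')
        self.tf_buffer = tf2_ros.Buffer()
        self.tf_listener = tf2_ros.TransformListener(self.tf_buffer, self)

    def camera_point_to_base(self, point_in_camera: PointStamped) -> PointStamped:
        """Transform a lane point from the camera optical frame to the vehicle base frame."""
        transform = self.tf_buffer.lookup_transform(
            'base_link',                       # target frame (assumed name)
            point_in_camera.header.frame_id,   # source frame, e.g. 'camera_link'
            rclpy.time.Time())                 # latest available transform
        return tf2_geometry_msgs.do_transform_point(point_in_camera, transform)
```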

Task 4: Localization and Navigation Stack

Localization System: Implemented Extended Kalman Filter fusing odometry, IMU, and LIDAR data, achieving <10cm position accuracy in structured environments, handling sensor failures through redundancy.
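
To make the fusion step concrete, here is a stripped-down planar EKF over [x, y, yaw] with an odometry/IMU-driven prediction and an absolute position correction; it is a sketch of the structure, not the full multi-sensor filter, and the noise values are illustrative.

```python
import numpy as np


class SimpleEKF:
    """Minimal planar EKF over [x, y, yaw] -- a sketch, not the full fusion node."""

    def __init__(self):
        self.x = np.zeros(3)                  # state: x, y, yaw
        self.P = np.eye(3) * 0.1              # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])  # process noise (illustrative)

    def predict(self, v, omega, dt):
        """Propagate with a unicycle motion model driven by odometry/IMU rates."""
        x, y, yaw = self.x
        self.x = np.array([x + v * np.cos(yaw) * dt,
                           y + v * np.sin(yaw) * dt,
                           yaw + omega * dt])
        F = np.array([[1, 0, -v * np.sin(yaw) * dt],
                      [0, 1,  v * np.cos(yaw) * dt],
                      [0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q

    def update_position(self, z, R=np.diag([0.05, 0.05])):
        """Correct with an absolute (x, y) fix, e.g. from LIDAR scan matching."""
        H = np.array([[1, 0, 0], [0, 1, 0]])
        residual = z - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ residual
        self.P = (np.eye(3) - K @ H) @ self.P
```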

Mapping Component: Integrated SLAM algorithms for real-time map building, using occupancy grid representation with dynamic updates, implementing loop closure detection for drift correction.

Path Planning: Developed A* and RRT* planners for global navigation, implemented Dynamic Window Approach for local obstacle avoidance, creating smooth trajectories considering vehicle dynamics.
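
As an example of the global planning side, a minimal A* over a 2D occupancy grid (4-connected, unit step cost) looks like the sketch below; the project's planner additionally handles grid resolution, obstacle inflation, and the RRT*/DWA integration.

```python
import heapq


def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied); 4-connected, unit step cost."""
    def h(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])        # Manhattan heuristic

    open_set = [(h(start, goal), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:
            path = []
            while node is not None:                        # walk back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                new_g = g_cost[node] + 1
                if new_g < g_cost.get(nxt, float('inf')):
                    g_cost[nxt] = new_g
                    came_from[nxt] = node
                    heapq.heappush(open_set, (new_g + h(nxt, goal), nxt))
    return None                                            # goal unreachable
```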

Navigation Controller: Built pure pursuit controller for path following, adapting lookahead distance based on velocity and curvature, implementing recovery behaviors for stuck situations.
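
The pure pursuit law with a speed-scaled look-ahead can be sketched as follows; the wheelbase and look-ahead constants are placeholders.

```python
import math


def adaptive_lookahead(speed, k=0.6, min_ld=0.5, max_ld=3.0):
    """Look-ahead distance grows with speed, clamped to a usable range."""
    return min(max_ld, max(min_ld, k * speed))


def pure_pursuit_steering(target_x, target_y, wheelbase=0.33):
    """Steering angle that drives the vehicle onto the arc through the target point.

    target_x, target_y: the path point one look-ahead distance away,
    expressed in the vehicle frame (x forward, y left).
    """
    ld_sq = target_x ** 2 + target_y ** 2
    curvature = 2.0 * target_y / ld_sq           # kappa = 2*y / Ld^2
    return math.atan(wheelbase * curvature)      # bicycle-model steering angle
```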

Results

The integrated autonomous vehicle system demonstrates robust performance across all implemented subsystems. The AEB system achieves a 100% collision prevention rate in testing scenarios with response times under 50ms. Wall following controllers maintain ±5cm accuracy at speeds up to 15 m/s, completing test tracks 40% faster than baseline implementations. The lane detection algorithm processes frames at 30 FPS with 94% lane marking detection accuracy under various lighting conditions. Deep learning models run at 15-20 FPS on NVIDIA GPUs, detecting objects with 87% mAP and classifying images with 92% top-5 accuracy. The complete navigation stack successfully navigates complex environments, planning paths around dynamic obstacles while maintaining real-time performance. System integration testing shows seamless communication between all modules with total end-to-end latency under 100ms. The modular architecture allows independent testing and deployment of individual components.

Key Insights

  • Sensor Fusion Critical: Combining multiple sensor modalities significantly improves reliability and reduces false positives in safety-critical functions

  • Adaptive Control Essential: Gain scheduling and velocity-dependent parameters crucial for maintaining performance across operating ranges

  • Real-time Constraints: Careful optimization of perception pipelines necessary to meet strict latency requirements for control loops

  • Modular Architecture Benefits: ROS2's component-based design enables parallel development and easy system reconfiguration

  • Future Work: Extend to multi-vehicle coordination, implement reinforcement learning for adaptive control, add V2X communication capabilities, develop formal verification methods for safety certification

Ritwik Rohan

A Robotics Developer


© 2025 by Ritwik Rohan
