
Software Subsystem

Frontier-Based Goal Selection

Our robot uses a frontier exploration algorithm to autonomously determine where to navigate next when the map is incomplete. The purpose of this module is to select the nearest unexplored region and issue its coordinates as a goal to Nav2 for execution.

Unlike traditional frontier algorithms, our custom version introduces several optimizations:

This method is lightweight, fast, and effective in dynamic SLAM-generated maps.
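As a minimal sketch of the core idea, assuming the standard nav_msgs/OccupancyGrid encoding (0 = free, -1 = unknown); the function name and the 8-connected neighborhood test are illustrative, not our exact implementation:

```python
import numpy as np

FREE, UNKNOWN = 0, -1  # standard nav_msgs/OccupancyGrid conventions

def find_nearest_frontier(grid: np.ndarray, robot_rc: tuple):
    """Return the frontier cell (a free cell adjacent to unknown space) closest to the robot."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if grid[r, c] != FREE:
                continue
            # A free cell touching at least one unknown neighbor is a frontier.
            if np.any(grid[r - 1:r + 2, c - 1:c + 2] == UNKNOWN):
                frontiers.append((r, c))
    if not frontiers:
        return None  # mapping is complete: no frontiers left
    # Pick the frontier nearest the robot (squared Euclidean distance in cells).
    return min(frontiers, key=lambda rc: (rc[0] - robot_rc[0]) ** 2 + (rc[1] - robot_rc[1]) ** 2)
```

Because only cell distances are compared, no graph search over the costmap is needed at this stage.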


Why Dijkstra or A* Are Not Used in Frontier Selection

Dijkstra and A* are not used for frontier goal selection because frontier exploration is concerned only with finding the nearest unexplored region, not with computing an optimal path through a known costmap.

| Algorithm | Use Case | Weighted Cost Consideration | Suitability for Frontier Selection |
|---|---|---|---|
| Frontier Search | Locate nearest unknown region | No | Fast; avoids unnecessary path-cost analysis |
| Dijkstra | Global pathfinding | Yes | Overhead too high for real-time exploration |
| A* | Optimized global planning | Yes | Not needed; the goal is not yet reachable |

By separating goal discovery (frontier) from path execution (planner + controller), we maintain a clean division of concerns and improve overall system efficiency.

Custom Frontier Algorithm and Grid Filtering

Overview

Our custom frontier exploration module issues exploration goals to Nav2 and prevents repeated exploration. It is designed to work efficiently in real-time with minimal computational overhead.


Key Behavior

1. Frontier Detection Logic

2. Occupancy Grid Filtering


These filtering steps are applied before frontier detection and significantly enhance both performance and navigation safety.
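As an illustration of the padding step (controlled by the use_padding and padding parameters listed at the end of this section), obstacle inflation can be implemented as a binary dilation; the occupancy threshold and helper name here are assumptions:

```python
import numpy as np
from scipy.ndimage import binary_dilation

OCCUPIED_THRESHOLD = 50  # occupancy values at or above this are treated as obstacles

def pad_obstacles(grid: np.ndarray, padding: int = 1) -> np.ndarray:
    """Inflate occupied cells by `padding` cells so frontiers hugging walls are rejected."""
    occupied = grid >= OCCUPIED_THRESHOLD
    # Each dilation iteration grows the obstacle mask by one cell in every direction.
    inflated = binary_dilation(occupied, iterations=padding)
    filtered = grid.copy()
    filtered[inflated] = 100  # mark padded cells as fully occupied
    return filtered
```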

Path Planning and Control Strategy

To navigate to assigned destinations within the explored map, our system uses the Navigation2 (Nav2) stack, which separates global path planning and local trajectory control into two distinct components. This modularity ensures scalability and enables precise control in cluttered and dynamic environments.
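For context, goals are handed to Nav2 through the NavigateToPose action; the sketch below uses the standard Nav2 action name and message types, and is a simplified client rather than our exact goal-sending code:

```python
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose
from geometry_msgs.msg import PoseStamped

class GoalSender(Node):
    """Minimal client that hands a goal pose to Nav2's planner/controller pipeline."""

    def __init__(self):
        super().__init__('goal_sender')
        self._client = ActionClient(self, NavigateToPose, 'navigate_to_pose')

    def send_goal(self, x: float, y: float):
        pose = PoseStamped()
        pose.header.frame_id = 'map'
        pose.header.stamp = self.get_clock().now().to_msg()
        pose.pose.position.x = x
        pose.pose.position.y = y
        pose.pose.orientation.w = 1.0  # heading left to the planner
        goal = NavigateToPose.Goal()
        goal.pose = pose
        self._client.wait_for_server()
        return self._client.send_goal_async(goal)
```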


The table below summarizes the roles and tradeoffs of each navigation component:

| Component | Role | Pros | Cons |
|---|---|---|---|
| NavFn Planner (A*) | Global path planning | Computes the shortest path using A* search with cost heuristics; robust and well-integrated with Nav2 | Requires a known map; may produce sharp, unsmoothed paths |
| MPPI Controller | Local trajectory execution | Produces smooth motion through trajectory optimization; performs well in tight spaces due to predictive sampling | Requires fine-tuning; sensitive to physical robot parameters |

We chose the MPPI controller over more common options like DWB and Pure Pursuit because of its superior handling in narrow and complex environments. Based on observed behavior and prior benchmarking, MPPI offers the highest tolerance in tight spaces, followed by Pure Pursuit, and then DWB:

MPPI > Pure Pursuit > DWB (in terms of tight space navigation performance)

The MPPI controller continuously samples multiple control trajectories and evaluates them using a cost function. This allows it to produce smoother and more adaptable commands compared to rule-based controllers.
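To make that concrete, the simplified sketch below captures the MPPI idea of sampling perturbed command sequences and softmax-weighting them by cost. The sample count, noise levels, and cost interface are illustrative assumptions; Nav2's actual MPPI controller is a C++ plugin with a richer critic set:

```python
import numpy as np

def mppi_step(current_v, current_w, cost_fn, n_samples=200, horizon=20,
              noise_std=(0.05, 0.2), temperature=1.0):
    """One MPPI-style update: sample noisy (v, w) sequences, score each rollout
    with cost_fn, and blend them with softmax weights favoring low cost."""
    base = np.tile([current_v, current_w], (n_samples, horizon, 1))
    noise = np.random.normal(0.0, noise_std, size=base.shape)
    candidates = base + noise                          # (n_samples, horizon, 2)
    costs = np.array([cost_fn(seq) for seq in candidates])
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()
    # Execute the softmax-weighted average of each rollout's first command.
    return (weights[:, None] * candidates[:, 0, :]).sum(axis=0)  # (v, w)
```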

System-Level Navigation Logic

Control Logic

High-Level State Machine Overview

The robot’s navigation and mission execution are managed entirely by the GlobalController node, which employs a finite state machine to define clear phases of operation. Each state manages distinct tasks, utilizing sensor inputs and real-time feedback to transition between states. The primary states implemented in the system include:

  1. Initializing
  2. Exploratory Mapping
  3. IMU Interrupt
  4. Goal Navigation
  5. Launching Balls
  6. Attempting Ramp

The GlobalController constantly monitors sensor inputs—IMU (pitch), LIDAR, thermal sensors, and SLAM-generated occupancy grid updates—to dynamically handle state transitions and maintain safe operations.
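The states map naturally onto an enum, with every transition funneled through one logged helper; a minimal sketch (state names mirror the list above, the helper is illustrative):

```python
from enum import Enum, auto

class RobotState(Enum):
    INITIALIZING = auto()
    EXPLORATORY_MAPPING = auto()
    IMU_INTERRUPT = auto()
    GOAL_NAVIGATION = auto()
    LAUNCHING_BALLS = auto()
    ATTEMPTING_RAMP = auto()

def transition(self, new_state: RobotState):
    """Log and apply a state change so every transition is traceable."""
    self.get_logger().info(f"State: {self.state.name} -> {new_state.name}")
    self.state = new_state
```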


Initialization Phase (Initializing)

```python
def wait_for_map(self):
    """Block until the first usable occupancy grid arrives from SLAM."""
    self.get_logger().info("Waiting for initial map update...")
    # Spin until the grid is non-empty and contains at least one known cell.
    while self.occdata.size == 0 or not np.any(self.occdata != 0):
        self.get_logger().warn("No occupancy grid data yet, waiting...")
        rclpy.spin_once(self)
        time.sleep(1)
    self.get_logger().info("Map received. Starting exploration.")
```

Upon startup, the robot waits for two critical elements before proceeding:

Once these criteria are met, the robot logs its initial yaw orientation (self.initial_yaw) and transitions to the Exploratory_Mapping state. No motion occurs during initialization; the robot is strictly confirming readiness.


Exploratory Mapping (Exploratory_Mapping)

In this state, the robot systematically explores the environment using frontier-based exploration:

Mapping continues until no valid frontiers remain (self.finished_mapping = True). At this stage, collected heat source coordinates are clustered using KMeans (find_centers()) to define precise goal locations (self.max_heat_locations). The state then transitions to Goal_Navigation.


IMU Interrupt Handling (Imu_Interrupt)

Triggered by exceeding IMU pitch thresholds during exploration or navigation:

This logic ensures the robot never accidentally ascends ramps prematurely, enforcing a critical safety mechanism.
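A sketch of the two-condition trigger, using the imu_threshold (5.0) and imu_abs_threshold (0.16 rad) values from the parameter table at the end of this section; the running-average bookkeeping is an assumption about the implementation:

```python
from collections import deque

def check_imu_interrupt(self, pitch: float) -> bool:
    """Trigger on a hard absolute pitch limit, or on a pitch spike well above
    the recent average (the signature of driving onto a ramp edge)."""
    self.pitch_history.append(abs(pitch))            # e.g. deque(maxlen=50)
    avg_pitch = sum(self.pitch_history) / len(self.pitch_history)
    if abs(pitch) > self.imu_abs_threshold:          # 0.16 rad: immediate interrupt
        return True
    if avg_pitch > 0.0 and abs(pitch) > self.imu_threshold * avg_pitch:  # 5x the average
        return True
    return False
```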


Heat Source Localization and Navigation (Goal_Navigation)

Once exploration concludes:

The robot repeats this process until all heat source locations in self.max_heat_locations have been visited and addressed.


Heat Detection and Localization Mechanism

The robot detects heat sources using two onboard thermal sensors (temperature_sensor_1 and temperature_sensor_2), each of which publishes a 64-element Float32MultiArray representing an 8×8 grid of temperature readings. These sensors are mounted to face slightly forward-left and forward-right, allowing the robot to detect heat in its frontal field of view.

The GlobalController subscribes to both thermal topics and processes them through sensor1_callback() and sensor2_callback(), respectively. Within each callback:

  1. A subset of 16 central indices from the 8×8 grid is extracted to represent the most relevant forward-facing readings.
  2. These readings are passed to the valid_heat() function, which checks if any temperature exceeds a preset threshold (self.temp_threshold, e.g. 27°C).
  3. If the threshold is exceeded, the system initiates a localization process to determine the global position of the heat source.
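A sketch of the two helper steps above; the exact central indices taken from the 8×8 frame are an assumption (a 4×4 center window yields the 16 values):

```python
import numpy as np

def central_window(flat_grid):
    """Pull the 16 most forward-facing readings out of the flattened 8x8 thermal frame."""
    grid = np.asarray(flat_grid, dtype=float).reshape(8, 8)
    return grid[2:6, 2:6].flatten()  # assumed 4x4 center window

def valid_heat(self, readings) -> bool:
    """True if any reading exceeds the heat threshold (self.temp_threshold, e.g. 27 °C)."""
    return bool(np.any(np.asarray(readings) > self.temp_threshold))
```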

Laser-Based Angle and Distance Estimation

```python
def laser_avg_angle_and_distance_in_mode_bin(self, angle_min_deg=-30, angle_max_deg=30, bin_width=0.1):
```

To spatially localize the heat source, the robot pairs thermal data with LIDAR measurements. This is handled using the laser_avg_angle_and_distance_in_mode_bin() function, which extracts a reliable angle and distance estimate from the current LIDAR scan in the region that corresponds to the thermal sensor’s field of view.

The function performs the following steps:

The output is a clean, noise-robust angle and distance estimate of the object associated with the heat source, in polar coordinates relative to the robot.
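The following is an illustrative reconstruction of that mode-bin logic rather than the verbatim implementation: returns are restricted to the sensor's angular window, histogrammed into 0.1 m distance bins, and the densest bin is averaged:

```python
import numpy as np

def avg_angle_and_distance_in_mode_bin(angles_deg, ranges,
                                       angle_min_deg=-30, angle_max_deg=30,
                                       bin_width=0.1):
    """Return (mean angle in deg, mean distance in m) of the most populated distance bin."""
    angles_deg = np.asarray(angles_deg)
    ranges = np.asarray(ranges)
    # Keep only finite returns inside the thermal sensor's field of view.
    mask = (angles_deg >= angle_min_deg) & (angles_deg <= angle_max_deg) & np.isfinite(ranges)
    a, r = angles_deg[mask], ranges[mask]
    if r.size == 0:
        return None
    # Mode bin: the 0.1 m bin holding the most returns, i.e. the dominant surface.
    bins = np.floor(r / bin_width).astype(int)
    mode_bin = np.bincount(bins - bins.min()).argmax() + bins.min()
    in_mode = bins == mode_bin
    return float(a[in_mode].mean()), float(r[in_mode].mean())
```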


World Coordinate Transformation

The polar estimate (angle and distance) is passed to the calculate_heat_world() function, which uses the robot’s current global pose (from TF) to convert the detection into absolute map coordinates. The transformation uses trigonometry to project the LIDAR-localized point into the global frame, accounting for the robot’s position and yaw orientation.

The final (x, y) coordinate is stored in either self.heat_left_world_x_y or self.heat_right_world_x_y, depending on which sensor triggered the detection. These lists accumulate over time as the robot explores and observes new thermal points.
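The projection itself is a single trigonometric step; a sketch matching the described behavior (the exact signature is an assumption):

```python
import math

def calculate_heat_world(robot_x, robot_y, robot_yaw, angle_rad, distance):
    """Project a robot-frame (angle, distance) detection into map coordinates.
    angle_rad is measured from the robot's heading, counter-clockwise positive."""
    world_x = robot_x + distance * math.cos(robot_yaw + angle_rad)
    world_y = robot_y + distance * math.sin(robot_yaw + angle_rad)
    return world_x, world_y
```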


Clustering Heat Sources with KMeans

Once exploration is complete, the robot clusters the recorded heat points using the find_centers() method, which applies KMeans clustering to group spatially close heat detections. This helps reduce sensor noise and identify distinct heat-emitting objects in the environment.

These centroids are then stored in self.max_heat_locations and treated as navigation goals in the Goal_Navigation state. Each centroid represents the spatial center of a heat source the robot must visit and interact with (e.g., launching a ball).
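A minimal sketch using scikit-learn, with n_clusters mirroring the clusters parameter (3):

```python
import numpy as np
from sklearn.cluster import KMeans

def find_centers(heat_points, n_clusters=3):
    """Cluster accumulated (x, y) heat detections; each centroid becomes one goal."""
    points = np.asarray(heat_points)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(points)
    return km.cluster_centers_  # shape (n_clusters, 2): one (x, y) per heat source
```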

This layered approach—combining temperature thresholds, filtered LIDAR processing, geometric transformation, and clustering—ensures robust, accurate heat source detection and targeting even in complex or cluttered indoor environments.


Coordinating Ball Launching Mechanism (Launching_Balls)

This state manages payload delivery at heat source locations:

After all payloads have been successfully delivered, the state transitions to the Attempting_Ramp phase.


Ramp Ascension Control Strategy (Attempting_Ramp)

To ascend the previously identified ramp safely:


Fast Loop vs. Control Loop Execution

The navigation logic is distributed across two execution loops:

Thread safety is rigorously enforced using a mutex lock (self.lock) to ensure consistent state transitions and data access between loops.
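Structurally, this amounts to two ROS2 timers sharing one lock; the loop periods below are illustrative, not our tuned values:

```python
import threading

from rclpy.node import Node

class GlobalController(Node):
    def __init__(self):
        super().__init__('global_controller')
        self.lock = threading.Lock()
        # Fast loop: high-rate safety monitoring (IMU pitch, obstacle proximity).
        self.fast_timer = self.create_timer(0.05, self.fast_loop)
        # Control loop: slower state-machine ticks and goal management.
        self.control_timer = self.create_timer(0.5, self.control_loop)

    def fast_loop(self):
        with self.lock:    # all shared state is accessed under self.lock
            pass           # e.g. check IMU pitch, raise Imu_Interrupt

    def control_loop(self):
        with self.lock:
            pass           # e.g. run the current state's handler
```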

Visualization and Debugging Aids

RViz Markers

Regularly published using publish_visualization_markers(), including:

Logging

Comprehensive ROS2 logging provides detailed event tracking, including:

These logs serve as valuable tools for debugging and post-test analysis.


Heat source positioning

🟢 Heat plotting: a green sphere marks each calculated heat source position in RViz.
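A sketch of how such a marker can be built with visualization_msgs (the sphere size is illustrative):

```python
from visualization_msgs.msg import Marker

def make_heat_marker(self, x: float, y: float, marker_id: int) -> Marker:
    """Green sphere at a computed heat source position, for display in RViz."""
    m = Marker()
    m.header.frame_id = 'map'
    m.header.stamp = self.get_clock().now().to_msg()
    m.id = marker_id
    m.type = Marker.SPHERE
    m.action = Marker.ADD
    m.pose.position.x = x
    m.pose.position.y = y
    m.scale.x = m.scale.y = m.scale.z = 0.15  # 15 cm sphere
    m.color.g = 1.0  # green
    m.color.a = 1.0  # fully opaque
    return m
```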


Safety Mechanisms and Fail-Safes


System Launch Architecture

The robot’s autonomous system is structured into three modular launch components:

This separation ensures clarity and modularity across:


Top-Level Coordination: global_bringup.py

This is the central entry point for launching the robot system. It:


Initializes the full ROS2 Navigation2 stack:

Also:


High-Level Controller: global_controller_bringup.py

Launches the global_controller node, which manages mission-level logic:

It receives real-time sensor input (IMU, thermal, LIDAR) and governs transitions between mission states.




Tuning

The Nav2 parameter file can be found under param/burger.yaml.

This YAML file contains settings for:

Developers can modify this file to adjust the robot’s behavior in response to different environments (e.g., tighter corridors, more aggressive goal pursuit). It is located under the same root directory level as the launch/ folder and is automatically selected during bring-up based on the environment variable TURTLEBOT3_MODEL.
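Inside the launch file, the model-based selection typically looks like the following sketch (package name taken from the build command below; the lookup is simplified):

```python
import os
from ament_index_python.packages import get_package_share_directory

# TURTLEBOT3_MODEL (e.g. "burger") selects the matching parameter file at bring-up.
model = os.environ.get('TURTLEBOT3_MODEL', 'burger')
param_file = os.path.join(get_package_share_directory('bot_bestie'), 'param', f'{model}.yaml')
```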

Note: After modifying the YAML file, remember to rebuild using:

```bash
colcon build --packages-select bot_bestie
```

Parameter Tuning for NavFn Planner

The NavFn global planner (A*-based) uses the global costmap to compute the shortest path to the goal. Proper tuning balances:

Tuning parameters should reflect:

📘 Docs: NavFn tuning guide


Parameter Tuning for MPPI Controller

The MPPI controller relies on predictive sampling, and its performance depends on:

Tuning is essential to achieve:

📘 Docs: MPPI tuning guide


Tuning Global and Local Costmaps

Both costmaps play a critical role in navigation:

Key tuning areas:

Proper tuning improves:

Two Key Parameters

📘 Docs: Costmap tuning guide


Parameters in global_controller.py

| Parameter | Description | Value |
|---|---|---|
| `angle_heat_scan` | Angular field (°) around the thermal sensor center used for heat estimation | 7 |
| `temp_threshold` | Minimum temperature (°C) to classify a thermal reading as a heat source | 27 |
| `heat_distance_max` | Maximum valid heat-source distance (m) | 2.5 |
| `imu_threshold` | Multiplier on the IMU pitch average that triggers ramp detection | 5.0 |
| `imu_abs_threshold` | Absolute pitch angle (rad) for an immediate ramp interrupt | 0.16 |
| `ramp_backtrack` | Number of past robot positions remembered for backtracking | 20 |
| `clusters` | Number of KMeans clusters for heat source grouping | 3 |
| `rate_of_placement` | Radius (grid cells) used to mark frontiers as explored | 4 |
| `use_padding` | Whether to pad obstacles in the occupancy grid | False |
| `padding` | Number of cells to pad around obstacles when enabled | 1 |
| `fast_explore` | If True, uses LIDAR-detected free space as valid exploration area | False |

Mechanical Subsystem