TurtleBot4 Autonomous Movement Logic
Using OM1, the TurtleBot4 (TB4) can autonomously explore spaces such as your home. Several parts work together to provide this capability. To get started, launch OM1:
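OM1 agents are started with `uv run src/run.py <config_name>`. The configuration name shown here (`turtlebot4`) is an assumption for this example; substitute the TB4 configuration name in your checkout:

```bash
uv run src/run.py turtlebot4
```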
OM1 uses the TB4's RPLIDAR to tell the core LLMs about nearby objects. This information flows to the core LLMs from /input/plugins/rplidar.py. The RPLIDAR data are also used in the action driver to check for viable paths right before motions are executed. See the RPLidar setup documentation for more information.
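As a rough illustration of that pre-motion check (the function name, scan format, and thresholds below are assumptions for this sketch, not the actual OM1 code):

```python
import math

def path_is_viable(scan, heading_rad, clearance_m=0.4, half_width_rad=math.radians(20)):
    """Return True if no RPLIDAR return blocks the intended heading.

    scan: iterable of (angle_rad, distance_m) pairs, 0 rad = straight ahead.
    heading_rad: intended direction of travel.
    """
    for angle, distance in scan:
        # only consider returns inside the cone the robot would drive through
        if abs(angle - heading_rad) <= half_width_rad and distance < clearance_m:
            return False
    return True
```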
Depending on the TB4's environment, the core LLMs can generate contextually appropriate motion commands. These commands are defined in actions/move_turtle/interface.py and are converted to TB4 zenoh/cycloneDDS cmd_vel motions in /actions/move_turtle/connector/zenoh.py.
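As a minimal sketch of that interface-to-connector pattern (the command names, velocities, key expression, and JSON payload here are illustrative assumptions; the real connector publishes for the zenoh/cycloneDDS bridge, which expects a serialized `geometry_msgs/Twist`):

```python
import json
import zenoh

# illustrative command set; the real definitions live in
# actions/move_turtle/interface.py
VELOCITIES = {
    "stand still":   (0.0, 0.0),   # (linear m/s, angular rad/s)
    "move forwards": (0.2, 0.0),
    "turn left":     (0.0, 0.8),
    "turn right":    (0.0, -0.8),
}

def send_command(session: zenoh.Session, command: str) -> None:
    linear, angular = VELOCITIES[command]
    # JSON keeps the sketch readable; the actual bridge expects a
    # CDR-serialized geometry_msgs/Twist on the cmd_vel topic
    session.put("cmd_vel", json.dumps({"linear": linear, "angular": angular}))

session = zenoh.open(zenoh.Config())
send_command(session, "move forwards")
```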
In addition to LIDAR data, the TB4 uses collision switches to detect hazards. When those switches are triggered, two things happen:
1. Immediately after a frontal (or side) collision, the TB4 backs off about 10 cm. That avoidance motion is handled within the Create3 and cannot be changed by the user.
2. Beyond the immediate 10 cm retreat, OM1 uses the TB4's collision switches to invoke an enhanced object-avoidance behavior: turning 100 deg left or right, depending on which of the several frontal or side collision switches was triggered. This "turning to face away" from the object is handled directly inside the action driver to ensure prompt responses to physical collisions:
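A minimal sketch of that handler, assuming hypothetical switch names and a `turn` helper that commands a signed rotation (the actual logic lives in the action driver):

```python
import math

AVOIDANCE_TURN_RAD = math.radians(100)

def on_collision(triggered_switches, turn):
    """Rotate ~100 deg away from whichever side registered the hit.

    triggered_switches: names of tripped switches, e.g. {"bump_left"}.
    turn: callable taking a signed angle in radians (positive = left).
    """
    # By the time this runs, the Create3 firmware has already backed the
    # robot off ~10 cm; this only adds the avoidance rotation.
    if any("left" in name for name in triggered_switches):
        turn(-AVOIDANCE_TURN_RAD)  # hit on the left side: rotate right
    else:
        turn(AVOIDANCE_TURN_RAD)   # frontal or right-side hit: rotate left
```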
Putting this together, there are three cases:

1. Nominal operation: the TB4 moves about the room controlled by the core LLMs.
2. The LIDAR reports a nearby object: the core LLMs should command the TB4 to turn away from the object.
3. A collision switch is triggered: the Create3 firmware commands an immediate 10 cm retreat, and then the action-level collision avoidance code commands a 100 deg avoidance rotation. Once this rotation is complete, the system reverts to responding to commands from the core LLMs.