Reinforcement Learning with Sim2Real on the Turtlebot3 platform (experiments)

While developing the concept for a new university module in which students do a project with the Turtlebot3 robots and Reinforcement Learning, a few ideas emerged. During the evaluation of different robot simulation tools for RL, one in particular caught my eye: the Godot plugin “godot_rl_agents” by edbeeching, which came out of a paper that bridges Godot to RL libraries such as StableBaselines3.

After trying the plugin, it was clear that good results can be achieved quickly. This led to the idea that students might be more motivated to learn the whole software stack when it involves a popular game engine instead of a “random” simulator with its own proprietary structure and configuration. What remained to be proven was that Sim2Real works in this setup.

A friend modelled a “digital twin” of a Turtlebot3, since the commonly used open-source models were very unoptimized and would have hurt training performance. It was purposefully minimal, but with enough accents to make it recognizable.

The first example was simply driving to a target point based on the map position. No sensors needed.
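For reference, training on such an environment follows the usual godot_rl_agents workflow: the task itself (observations such as the relative target position, actions, rewards) lives in the exported Godot scene, and a short Python script drives the learning. The sketch below follows the plugin’s StableBaselines3 examples; the path and hyperparameters are placeholders, not the exact values used here.

```python
# Minimal training sketch, assuming the godot_rl_agents SB3 wrapper.
# env_path and hyperparameters are illustrative placeholders.
from godot_rl.wrappers.stable_baselines_wrapper import StableBaselinesGodotEnv
from stable_baselines3 import PPO

# The exported Godot binary contains the scene with the robot and the target;
# observations (e.g. relative target position) and rewards are defined there.
env = StableBaselinesGodotEnv(env_path="builds/drive_to_target.x86_64", show_window=False)

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=500_000)
model.save("ppo_drive_to_target")
```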
Simulation:

This was the first result:

The robot visibly moves pretty badly in this clip. The reason, found later: when the sent velocity commands would result in a jerky movement, the controller more or less rejects them and sometimes executes only part of the movement, or sometimes none at all. To counteract this, the input has to be smoothed out beforehand so the controller does not reject it.
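A simple way to do this is to lerp from the last sent command toward the new one instead of forwarding the raw policy output directly. The following is a minimal sketch of that idea; the class, names, and the alpha value are hypothetical, not the exact code used here.

```python
# Hypothetical smoothing of velocity commands before they reach the
# Turtlebot3's controller. Names and the alpha value are illustrative.
def lerp(current: float, target: float, alpha: float) -> float:
    """Linear interpolation: move a fraction alpha from current toward target."""
    return current + alpha * (target - current)

class VelocitySmoother:
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha    # smaller alpha = smoother, but slower response
        self.linear = 0.0     # last linear velocity actually sent (m/s)
        self.angular = 0.0    # last angular velocity actually sent (rad/s)

    def step(self, target_linear: float, target_angular: float) -> tuple[float, float]:
        # Blend the new policy output with the previous command so that
        # consecutive commands never jump abruptly.
        self.linear = lerp(self.linear, target_linear, self.alpha)
        self.angular = lerp(self.angular, target_angular, self.alpha)
        return self.linear, self.angular
```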

Here is the next experiment with the lerp in mind:

This was the result:

The video shows that the robot’s performance can definitely still be improved regarding stability and sensor calculations. Another big problem is also very visible here: the small metal caster on the back of the Turtlebot does not get along with the lab’s carpet flooring. This will be mitigated in the future with wooden plates that will form the floor of the box.

Turtlebot3 Rebuild – Integrating a Raspberry Pi and a Jetson Nano in one robot

The university I studied at has a fleet of Turtlebot3 robots for student projects. They had been upgraded with a Jetson Nano in place of the stock Raspberry Pi. After one student project on the new platform, it was clear that the Jetson Nanos were good at AI workloads but terrible for reliability. So the plan was to bring back the Raspberry Pis while also keeping the Jetson Nanos.

This was the initial rebuild:

This setup now has:
– two motors
– an OpenCR 1.0 board for motor and battery control
– a Raspberry Pi 4 (4 GB) as the robot brain, running Ubuntu 22.04 and ROS 2 Humble
– a Jetson Nano for AI and image operations (see the sketch after this list)
– various sensors (ultrasonic, button, fisheye camera, 2D LiDAR, IMU)
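To illustrate how the two computers divide the work, here is a hedged sketch of the Jetson side as a ROS 2 node: the Raspberry Pi runs the robot bringup and publishes sensor data, while the Jetson only subscribes to the camera stream and publishes its inference results back. Topic names, message types, and the node itself are illustrative assumptions, not the project’s actual code.

```python
# Illustrative ROS 2 node for the Jetson Nano side of the split.
# Topic names and message types are assumptions, not the project's real code.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import String

class JetsonVisionNode(Node):
    def __init__(self):
        super().__init__("jetson_vision")
        # Camera images published by the driver on the Raspberry Pi (topic assumed)
        self.sub = self.create_subscription(Image, "/camera/image_raw", self.on_image, 10)
        # Results consumed by the control nodes on the Raspberry Pi
        self.pub = self.create_publisher(String, "/vision/detections", 10)

    def on_image(self, msg: Image) -> None:
        # Placeholder for the actual GPU inference running on the Jetson
        result = String()
        result.data = f"frame {msg.header.stamp.sec}.{msg.header.stamp.nanosec}"
        self.pub.publish(result)

def main():
    rclpy.init()
    rclpy.spin(JetsonVisionNode())

if __name__ == "__main__":
    main()
```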

This prototype later got specific instructions on what to screw in where, and cables short enough to be managed inside the robot. Another caveat that popped up later was that the power cables for the Jetson Nano were too thin, so brown-outs occurred during power spikes. This was remedied by cutting open the original barrel-jack cables and soldering them in place of the thin ones.

Here is an example of the build instructions: