For Godot Wild Jam #80 we had a team of 6 people, two of whom had never used the Godot engine before, so it was a learning jam for them. This was a 9-day jam.
In this weekend-long game jam, “PULS GAME JAM 2025”, we (a team of 3) created a game in Unity, an engine I had not used in a while and was kind of rusty with. Nonetheless, we created a small, cozy fishing game in which you are on a frozen lake that breaks apart over time.
We also had time problems: two people each lost a full day of development.
While drafting the concept for a new university module in which students do a project with Turtlebot3 robots and reinforcement learning, a few ideas emerged. Evaluating different robot simulation tools for reinforcement learning, one in particular caught my eye: the Godot plugin “godot_rl_agents” by edbeeching. It came out of a paper and provides a bridge from Godot to RL libraries like StableBaselines3.
After trying the plugin, it was clear that good results can be achieved quickly, and the idea emerged that students might be more motivated to learn the whole software stack when it involves a popular game engine instead of a “random” simulator with its own proprietary structure and configuration. Now it had to be proven that Sim2Real works in this setup.
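To give a feel for how little glue code this bridge needs, here is a minimal sketch of the Python training side. It is based on the plugin’s example scripts, so treat the wrapper name, its parameters, and the export path as assumptions that may differ between versions:

```python
# Minimal training sketch (wrapper name and parameters taken from the plugin's
# example scripts; the exported build path is hypothetical).
from stable_baselines3 import PPO
from godot_rl.wrappers.stable_baselines_wrapper import StableBaselinesGodotEnv

# Connect to an exported Godot build of the simulation; "speedup" lets the
# physics run faster than real time during training.
env = StableBaselinesGodotEnv(env_path="builds/turtlebot_sim.x86_64", speedup=8)

# Observations arrive as a dict, so the example uses a multi-input policy.
model = PPO("MultiInputPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("ppo_turtlebot_sim")
```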
A friend modelled a “digital twin” of a Turtlebot3, as the commonly used open-source models are very unoptimized and would hurt the performance of actual training. It was purposefully minimal, but with enough accents to make it recognizable.
The first example was simply driving to a target point based on the map, with no sensors needed. Simulation:
This was the first result:
The robot visibly moves pretty badly in this clip. The reason, found later: when the sent velocity commands would result in a jerky movement, the motor controller essentially rejects them and sometimes executes only part of the movement, or none at all. To counteract this, the input has to be smoothed out beforehand so the controller accepts it.
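Here is a minimal sketch of what that smoothing can look like, assuming ROS 2 with rclpy and a standard Twist command topic; the project’s actual node is likely structured differently:

```python
# Illustrative sketch: lerp the commanded velocity toward the target each tick
# instead of jumping, so the motor controller never sees a sudden change.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


def lerp(current: float, target: float, t: float) -> float:
    """Move a fraction t of the way from current to target."""
    return current + (target - current) * t


class SmoothedCmdVel(Node):
    def __init__(self):
        super().__init__("smoothed_cmd_vel")
        self.pub = self.create_publisher(Twist, "/cmd_vel", 10)
        self.current = Twist()
        self.target = Twist()               # set by the policy each step
        self.create_timer(0.05, self.tick)  # publish at 20 Hz

    def set_target(self, linear_x: float, angular_z: float):
        self.target.linear.x = linear_x
        self.target.angular.z = angular_z

    def tick(self):
        # Ease toward the requested velocity; a smaller factor means smoother
        # (but slower-reacting) motion.
        self.current.linear.x = lerp(self.current.linear.x, self.target.linear.x, 0.2)
        self.current.angular.z = lerp(self.current.angular.z, self.target.angular.z, 0.2)
        self.pub.publish(self.current)


if __name__ == "__main__":
    rclpy.init()
    rclpy.spin(SmoothedCmdVel())
```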
Here is the next experiment with the lerp in mind:
This was the result:
The video shows that the robot’s performance can definitely be improved regarding stability and the sensor calculations. Another big problem is also very visible here: the small metal caster on the back of the Turtlebot copes badly with the lab’s carpet flooring. This will be mitigated in the future with wooden plates that will form the box’s floor.
In this 9-day game jam, our team of three aimed to swap our main roles: the programmers were to create the art, and the artist was supposed to code the game. We did not pull this through completely and ended up with more of a mix towards the end, but we still stepped outside our usual roles more than before. We chose “Godot Wild Jam #74” for this.
We did not explicitly use any of the possible “wild cards”.
This game is kind of a walking sim/narrative type.
My master’s thesis topic was “Verbal training of a robot dog”. For this thesis I created a software stack that tries to simulate real dog training. There are a few pre-programmed actions the dog can perform, and these actions can either start out “anonymized” (with no command attached yet) or come with a few preloaded commands. The usual training was done from scratch, without any prior knowledge.
A training step goes like this:
– (Optional) Hotword recognition (“Hey Techie!”) to await actual speaking intent
– Speech recognition with “Whisper” (OpenAI’s open-source speech-to-text)
– Check whether the command has a confirmed match
– Check whether the command has a small Levenshtein distance to a confirmed command (like “sit” and “sat”)
– Query the local LLM for which command could be meant
– If the LLM fails or picks a confirmed negative, a random action is rolled from the remaining actions
– The dog executes the picked action
– The dog awaits feedback: it listens for “Yes” & “Correct” as positive and “No” & “Wrong” as negative feedback
– The picked command + action combo is memorized
– The loop repeats
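To make the matching part of that loop concrete, here is a condensed, hypothetical sketch of the decision logic; the helper names and the LLM call are placeholders, not the actual thesis code:

```python
# Condensed sketch of one matching step (names and structure are illustrative).
import random
import Levenshtein  # pip install python-Levenshtein

ACTIONS = ["sit", "lie_down", "paw", "spin", "bark"]
confirmed = {}           # spoken command -> action, learned so far
confirmed_negative = {}  # spoken command -> set of actions confirmed wrong


def query_llm(command: str, actions: list[str]) -> str | None:
    """Hypothetical helper: ask the local LLM which action best fits the command."""
    return None  # the real project queries a small local model here


def pick_action(command: str) -> str:
    # 1. Exact match against already confirmed commands.
    if command in confirmed:
        return confirmed[command]

    # 2. Fuzzy match: tolerate small recognition errors like "sat" for "sit".
    for known, action in confirmed.items():
        if Levenshtein.distance(command, known) <= 2:
            return action

    # 3. Ask the local LLM for a guess, skipping confirmed negatives.
    excluded = confirmed_negative.get(command, set())
    guess = query_llm(command, ACTIONS)
    if guess in ACTIONS and guess not in excluded:
        return guess

    # 4. Fall back to a random action that has not been ruled out yet.
    return random.choice([a for a in ACTIONS if a not in excluded])
```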
The end result was a soft success. The training itself had to rely on quite a bit of randomness, since only a very weak, small LLM was used; a stronger one could have accelerated the process immensely. The same goes for the speech recognition, which failed a lot of the time and produced bogus transcriptions. With the stronger models it worked far better, but the computation time grew from practically real time to up to 30 seconds, which was unacceptable in this case.
In this 9-day game jam, we wanted to create a more polished entry with our usual team of 4. We chose “Godot Wild Jam #71” mainly because of its length, and because the wild cards had the potential to give us more ideas for a game.
This game is actually 4 games at once, and it plays with the idea of you being the “bad apple” that wants to punish the snake for eating all of the other apples.
For the university module “individual profiling”, I created a reinforcement learning agent that works purely on visual input and could, in principle, control anything on the computer. It was mainly built to play a certain video game, but can (in theory) generalize to anything with visual input; it just needs an interface class to be written that converts the network outputs into the desired actions.
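As a rough illustration of that interface idea (the names here are hypothetical, not the classes from the repo), such an adapter might look like this:

```python
# Hypothetical sketch of the output-interface idea; the real project defines
# its own classes, so treat names and shapes as placeholders.
from abc import ABC, abstractmethod

import numpy as np


class ActionInterface(ABC):
    """Converts the raw network output into concrete inputs for one target."""

    @abstractmethod
    def apply(self, network_output: np.ndarray) -> None:
        ...


class KeyboardInterface(ActionInterface):
    """Example target: press keys on the host machine."""

    KEYS = ["w", "a", "s", "d", "space"]

    def apply(self, network_output: np.ndarray) -> None:
        # Pick the action with the highest activation and press the matching key.
        key = self.KEYS[int(np.argmax(network_output))]
        print(f"pressing {key}")  # real code would call an input library here
```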
For more information, visit the GitHub repo of the project (there is also in-depth documentation of the creation of the project).
Here are the two main examples used to show the best progress in two different games:
1. Driving nightmare (a game jam game created by a team of three people including me)
This diagram shows the learning progress over 1600 iterations: the green line shows how long each run lasted (higher is better), the yellow line shows the average, and the blue “loss” shows how far off the model thinks it is from the expected result.
2. A simple Flappy Bird-like program built specifically for the AI. The game can be advanced by code in discrete steps, so it can wait for a slower network without dropping any inputs.
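The key property is the lock-step interface: the world only advances when the agent calls it. A minimal, simplified sketch of that pattern (hypothetical names; the real game tracks far more state and renders frames for the visual agent):

```python
# Minimal lock-step environment sketch: the game state changes only inside
# step(), so a slow network never misses a frame.
from dataclasses import dataclass


@dataclass
class StepResult:
    observation: tuple
    reward: float
    done: bool


class SteppableFlappy:
    GRAVITY = 0.5
    FLAP_IMPULSE = -6.0

    def __init__(self):
        self.y = 50.0
        self.velocity = 0.0
        self.score = 0

    def step(self, flap: bool) -> StepResult:
        # Apply the agent's action, then advance the simulation one tick.
        self.velocity = self.FLAP_IMPULSE if flap else self.velocity + self.GRAVITY
        self.y += self.velocity
        done = not (0.0 <= self.y <= 100.0)   # hit floor or ceiling
        reward = -1.0 if done else 1.0
        if not done:
            self.score += 1
        return StepResult((self.y, self.velocity), reward, done)
```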
In our second game jam with Godot, we formed a team of 4. The jam was “Mini Jame Gam #30”, and for it we created a tower defence game.
The gimmick is that you can’t stray too far from your towers, or they lose signal and stop firing. Relay towers are available to extend that range locally, but they’re expensive.
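In essence the rule boils down to a simple range check; here is a rough Python sketch of that idea (values are made up, and the actual Godot project handles relay chaining and costs differently):

```python
# Rough sketch of the signal rule; numbers and names are hypothetical.
import math

PLAYER_SIGNAL_RANGE = 6.0
RELAY_SIGNAL_RANGE = 4.0


def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])


def tower_has_signal(tower, player, relays) -> bool:
    """A tower keeps firing only while the player or a relay is close enough."""
    if distance(tower, player) <= PLAYER_SIGNAL_RANGE:
        return True
    return any(distance(tower, relay) <= RELAY_SIGNAL_RANGE for relay in relays)
```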
The university I was a student at has a bunch of Turtlebot3 robots for student projects, and they had been upgraded with Jetson Nanos in place of the stock Raspberry Pis. After one student project on the new platform, it was clear that the Jetson Nanos were good for AI workloads, but terrible for reliability. So there was a plan to bring back the Raspberry Pis while keeping the Jetson Nanos.
This was the initial rebuild:
This setup now has:
– two motors
– the OpenCR 1.0 board for motor and battery control
– a Raspberry Pi 4 (4 GB) as the robot brain, running Ubuntu 22.04 and ROS Humble
– a Jetson Nano for AI and image operations
– various sensors (ultrasonic, button, fisheye camera, 2D lidar, IMU)
This prototype later got specific assembly instructions for what to screw in where, and cables cut short enough to be managed inside the robot. Another caveat that popped up later was that the power cables for the Jetson Nano were too thin, so brownouts occurred during power spikes. This was remedied by cutting open the original barrel-jack cable and soldering it in place of the thin wires.
For this jam, we, a team of 4, wanted to try our first jam with Godot instead of Unity. This was in the aftermath of the Unity pricing changes, so we wanted to finally give Godot a try.
In Life Support, you have to survive in your fragile submarine until you manage to surface. During your ascent, your submarine is attacked by something, which you can temporarily scare off with the underwater siren.