That's what I've been doing over at github.com/markusheimerl. I built the rasterizer for visual inspection of the robotic simulation, then built the simulation on top of it from first principles. In C. The entire codebase fits neatly into the context windows of LLMs. No dependency and installation hell.

The geometric controller, with access to perfect real-time position data inside the simulation, serves as the ground truth for the machine learning model that will control the real robot. That one won't have access to perfect positional data; it will have to infer motor speeds from noisy gyroscope and accelerometer data, both of which are simulated and logged during simulation. With enough domain randomization, the machine learning controller should be robust enough to be simply loaded onto the real robot and work out of the box.

Being free of dependencies is absolute bliss. All you need is proper DevOps, which GitHub offers at a bargain.