Category Archives: V-REP

Self-driving car based on deep learning

Generalization: autonomous driving on a previously unseen track that is more complex than the training tracks.
Note: the “jumpy” steering reflects a limitation of the toy RC car: it can only turn 45° to the left or right, or drive straight ahead.
(Music: GoNotGently)

After struggling to build a neural net that would reliably predict steering commands for an autonomous toy RC car based only on the current camera view (no history), I approached the problem systematically in a robot simulator. The simulator allowed much faster experimentation and finally led to success.

Training examples: manual driving with arrow keys to create a perfect left/right turn.
The purple “Trail” shows the driven path (geometrically clean after several tries).

With only two simple training tracks, one with a 90° left curve and the other with a 90° right curve, I was able to teach reliable driving behavior. The neural net generalizes better than expected: the self-driving car stays on the “road” even on tracks that differ significantly from the training data.

Given more varied examples of successful steering, the driving behavior could become much smoother than the video shows. Interestingly, though, the convolutional neural network (CNN) seems to interpolate nicely between the provided training examples and handles degrees of road bend it never saw during training.
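
For illustration only (this is not the original code), a minimal Keras sketch of such a CNN could look like the following, assuming downscaled 64×64 grayscale camera frames as input and three output classes matching the car’s discrete steering commands (left / straight / right):

```python
# Minimal sketch, NOT the original network: a small CNN classifying one
# downscaled 64x64 grayscale camera frame into left / straight / right.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 5, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),  # 0 = left, 1 = straight, 2 = right
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# frames: (N, 64, 64, 1) float32 in [0, 1], labels: integer class per frame
# model.fit(frames, labels, epochs=10, validation_split=0.1)
# command = int(np.argmax(model.predict(frame[None, ...])[0]))  # drive decision
```

The three-way softmax mirrors the 45°-left / straight / 45°-right commands mentioned in the video note above; the commented lines indicate where training on the recorded drives and per-frame prediction would plug in.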

It even manages to drive through road crossings (see after the break), if a little awkwardly, since crossings “look confusing” and were never part of the training data. When placed outside the track, facing it at a slight angle, the car also steers in the “hinted” direction and aligns properly with the track!


3D scan with a hobby LIDAR on a pan-tilt kit

Ever since I learned about point clouds a while ago, I have wanted to gain some practical experience with them. The goal was to scan a terrain- or landscape-like scene using the most basic hardware setup possible, rather than a depth camera.

I used a LIDAR-Lite (a laser-based distance sensor) mounted on a pan-tilt kit, whose servos position the sensor. All of these devices were controlled by an Arduino, which streamed the sensor readings back to a PC. To avoid problems with outdoor light, I added some wavy, terrain-like features to a room and scanned that instead of an outdoor scene.
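
Purely as an illustration of the PC side of that streaming, here is a minimal sketch; the line format (one “pan,tilt,distance” reading per line), the port, and the baud rate are assumptions, not details from the actual setup:

```python
# Sketch of the PC side of the scan (assumption: the Arduino prints one
# reading per line as "pan_deg,tilt_deg,distance_cm"; port/baud may differ).
import serial  # pyserial

def read_scan(port="/dev/ttyUSB0", baud=115200, n_readings=10000):
    readings = []
    with serial.Serial(port, baud, timeout=2) as ser:
        while len(readings) < n_readings:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            parts = line.split(",")
            if len(parts) != 3:
                continue  # skip empty or incomplete lines
            try:
                pan_deg, tilt_deg, dist_cm = map(float, parts)
            except ValueError:
                continue  # skip garbled lines
            readings.append((pan_deg, tilt_deg, dist_cm))
    return readings
```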

Finally, I wrote my own software to compute each voxel (a 3D point / pixel in space) from the distance readings and servo angles, so that I could render the result as a depth map and as a point cloud using various methods.
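
The core of that computation is a spherical-to-Cartesian conversion. Here is a minimal sketch of the idea, assuming pan rotates about the vertical axis and tilt about the horizontal axis, and ignoring the small mechanical offsets of the pan-tilt head:

```python
import numpy as np

def to_point(pan_deg, tilt_deg, dist_cm):
    """Convert one reading (servo angles + measured distance) to an (x, y, z) point.

    Assumptions: pan rotates around the vertical axis, tilt around the
    horizontal axis (both in degrees); offsets of the pan-tilt head are ignored.
    """
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    x = dist_cm * np.cos(tilt) * np.cos(pan)
    y = dist_cm * np.cos(tilt) * np.sin(pan)
    z = dist_cm * np.sin(tilt)
    return np.array([x, y, z])

# Example readings as (pan deg, tilt deg, distance cm); real data comes from the scan.
readings = [(0.0, -10.0, 250.0), (5.0, -10.0, 248.0), (10.0, -10.0, 251.0)]
points = np.array([to_point(p, t, d) for p, t, d in readings])  # N x 3 point cloud

# A depth map falls out almost for free: the raw distances arranged on the
# regular pan/tilt grid, i.e. depth[i, j] = distance at tilt angle i, pan angle j.
```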
