3D scan with a hobby LIDAR on a pan-tilt kit

Ever since I learned about point clouds a while ago, I have wanted to gain some practical experience with them. The goal was to scan a terrain- / landscape-like scene using as basic a hardware setup as possible, rather than a depth camera.

I used a LIDAR-Lite (a laser-based distance sensor) mounted on a pan-tilt kit, whose servos position the sensor. All of these devices were controlled by an Arduino, which streamed the sensor data back to a PC. To avoid problems with outdoor light, I added some wavy, terrain-like features to a room and scanned it instead of an outdoor scene.

Finally, I wrote my own software to compute each voxel (3D point / pixel in space) from the sensor data and servo angles, so that I could render the result as a depth map and as a point cloud, using various methods.

Generation of point clouds and depth maps

Based on the distance measurements returned from the LIDAR and the pan and tilt angles, I computed the 3D points in space using spherical coordinates (3D polar coordinates). Rendering the point cloud in a self-made program (viewed from above), and then rotating it continuously around itself to obtain several frames, resulted in the following animation:

Point cloud of a room set up to have terrain-like features, viewed from above. (Notice the scanning artifacts: some points are “floating in the air” due to reflections.)
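The conversion itself is just the standard spherical-to-Cartesian mapping. Below is a minimal sketch in Python of how it can be done; this is not the original code, and the names and angle conventions (pan sweeping the horizontal plane, tilt measuring elevation above it) are assumptions:

import math

def to_cartesian(distance, pan_deg, tilt_deg):
    """Convert one LIDAR measurement into a 3D point.
    Assumed conventions (not necessarily those of the original scanner):
    - pan_deg: rotation in the horizontal plane, 0 = straight ahead
    - tilt_deg: elevation above the horizontal plane, 0 = level
    - distance: measured along the laser beam, in meters
    The scanner's rotation center is taken as the origin (0, 0, 0)."""
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = distance * math.cos(tilt) * math.cos(pan)
    y = distance * math.cos(tilt) * math.sin(pan)
    z = distance * math.sin(tilt)
    return (x, y, z)

# Made-up example readings: (distance in m, pan angle, tilt angle)
samples = [(2.41, -30.0, 5.0), (2.38, -29.5, 5.0)]
points = [to_cartesian(d, p, t) for d, p, t in samples]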

While the animation does not look as pretty as the point cloud rendering from Meshlab in the introduction, it does highlight a few scanning artifacts: some points float freely in the air as a result of reflections, and there are empty regions and areas of varying point density due to the scanning location and angles. The right part of the room in particular is rendered inaccurately because of this. Scanning from several locations and perspectives would increase the accuracy.

It proved difficult to make sense of the scan, since it was more irregular and further from the “ground truth” than I expected. Unfortunately, there is no photo of the room setup to compare the scan against (it has been a few years since I made this scan), but the depth map rendering below (the same one as in the introduction) might give an idea.

Depth map of the room (close = bright gray, far = dark gray)

To create the depth map, the pan angle serves as the x coordinate, the tilt angle as the y coordinate, and the distance measurement (from the LIDAR to the closest obstacle) as the gray intensity. The bright, rectangle-like shape at the front-right is the table on which the robot with the LIDAR rested.
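In code terms, building such a depth map amounts to filling a 2D array indexed by the two servo angles and mapping distance to brightness. A rough sketch with NumPy follows; the angle ranges, step size, and variable names are assumptions, not the original implementation:

import numpy as np

# Assumed scan grid: one cell per (tilt step, pan step)
pan_steps, tilt_steps = 121, 61
depth = np.full((tilt_steps, pan_steps), np.nan)   # rows = tilt (y), cols = pan (x)

# Made-up readings: (distance in m, pan index, tilt index)
readings = [(2.41, 10, 30), (1.05, 60, 45), (3.80, 100, 5)]
for distance, pan_idx, tilt_idx in readings:
    depth[tilt_idx, pan_idx] = distance

# Map close = bright gray, far = dark gray, as in the picture above
valid = ~np.isnan(depth)
gray = np.zeros_like(depth, dtype=np.uint8)
d_min, d_max = depth[valid].min(), depth[valid].max()
gray[valid] = (255 * (1.0 - (depth[valid] - d_min) / (d_max - d_min))).astype(np.uint8)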

As you may have noticed, there is a bright horizontal line over the door in the back, caused by unwanted reflections that produce wrong distance measurements (equivalent to the “floating in the air” artifacts in the point cloud above). It is also noticeable that the picture has a regular, interlaced-like pattern: every other line is offset a little compared to the previous one. This is very likely due to the mechanical setup, since I had no such issues when simulating a similar scanner in V-REP.

Scene and room setup

The room setup is as follows: at the back, in the middle, there is a slightly open door. To its left is a tall cupboard almost reaching the ceiling; to the right are smaller cupboards with some boxes and paper sheets on top of them, making a structured surface. Due to the low point cloud resolution in this region, it looks more irregular than it should, and not much can be recognized.

On the left, thick bed sheets are piled and squashed together to form an ascending, wavy, terrain-like structure, which seems to have been scanned well.

The floor, between the bed sheets, the table, and the smaller cupboards, is covered by only a few scan lines (close to the front, there are none at all). So despite being completely flat, it does not appear as a prominent flat surface in the point cloud.

Finally, at the front of the scene, the LIDAR is placed on the top surface of a two-level table. The dark triangular patch, at about one third of the room’s height, is close to the LIDAR’s position, and it is the origin of the point cloud. This dark voxel patch is probably generated by the laser hitting the corner of the lower table surface.

The rest consists of the walls and the ceiling, which are both flat in reality, even though they appear slightly wavy in the scan (especially the walls).

Evaluation

Looking back, the scan is certainly too complex to make sense of, let alone to properly identify any distortion or to perform any calibration. A scan of the inside of a cube-shaped 3D box would have been clearer and easier to interpret, but that would have been less fun as well 😉

Some geometric distortions become visible when rotating the point cloud to other angles. Later, I made a simulation in V-REP with a perfect 3D box and a perfectly aligned scanner, to make sure the distortions were not caused by calculation errors. I tested out various methods until I could validate that they reflected the ground truth properly (i.e., the generated point cloud matched the 3D cube, without distortions or angle errors).

Point cloud generation of the inside of a 3D cube in V-REP, simulating my LIDAR pan-tilt setup, with two revolute joints (one for pan, one for tilt) that position a distance sensor.
Video of the 3D scan simulation in V-REP.
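The check itself can be expressed in a few lines: with the simulated scanner at the center of an axis-aligned cube of known size, every correctly computed point must lie on one of the six faces, so its largest absolute coordinate must equal half the side length. This is my own reconstruction of the idea in Python, not code from the original project:

def max_face_error(points, half_side):
    """Worst deviation (in meters) of any point from the nearest cube face,
    assuming the scanner sits at the center of an axis-aligned cube."""
    return max(abs(max(abs(x), abs(y), abs(z)) - half_side) for x, y, z in points)

# Made-up points lying exactly on the faces of a 2 m cube; a distorted scan
# would make this check fail.
points = [(1.0, 0.3, -0.2), (0.7, -1.0, 0.1), (-0.4, 0.2, 1.0)]
assert max_face_error(points, half_side=1.0) < 0.01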

Experimenting with the room’s point cloud in V-REP allowed me to fix some of the perspective distortions, but not all of them, so the remaining causes are probably mechanical: the pan-tilt kit does not scan precisely enough along the imaginary sphere surface, and my measurements of the origin of this sphere were probably too imprecise as well. The origin (or rather the x and y distance of the LIDAR’s front from (0, 0)) is not easy to measure, since a pan-tilt kit is not a spherical joint with an obvious center.
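One way to deal with this is to stop pretending the measurement originates at a single point and instead model the pan-tilt kit as a short kinematic chain: rotate by the pan angle, translate to the tilt axis, rotate by the tilt angle, translate to the LIDAR’s optical front, and finally go the measured distance along the beam. The sketch below illustrates the idea; the offset values are placeholders that would have to be measured on the real kit:

import numpy as np

# Placeholder offsets in meters (to be measured on the actual pan-tilt kit)
PAN_TO_TILT = np.array([0.02, 0.0, 0.03])    # from the pan axis to the tilt axis
TILT_TO_LIDAR = np.array([0.04, 0.0, 0.01])  # from the tilt axis to the LIDAR's front

def rot_pan(a):   # rotation around the vertical (z) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_tilt(a):  # rotation around the sideways (y) axis; positive tilt raises the beam
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def to_cartesian_with_offsets(distance, pan, tilt):
    """Chain: pan rotation -> offset to tilt axis -> tilt rotation ->
    offset to LIDAR front -> measured distance along the beam (local x axis)."""
    beam = np.array([distance, 0.0, 0.0])
    return rot_pan(pan) @ (PAN_TO_TILT + rot_tilt(tilt) @ (TILT_TO_LIDAR + beam))

point = to_cartesian_with_offsets(2.41, np.radians(-30.0), np.radians(5.0))

With both offsets set to zero, this collapses to the plain spherical-coordinate formula from earlier, which makes it easy to check how much the offsets actually matter for a room-sized scene.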

Rendering as a mesh

I used Meshlab to render the point cloud in the introduction, but it can also derive a mesh from a point cloud, generating shaded surfaces. If there is any interest, I can add a mesh rendering as well. Another interesting software package for processing meshes and point clouds is CloudCompare.

Conclusion

In conclusion, I would have to build a more controlled setup and continue searching for the causes of this distortion, ideally by scanning a 3D cube. Then again, the actual goal was to obtain a point cloud of a real-world scene.

Since the computations are correct given a proper sensor setup, I will probably use an Intel Realsense or another type of depth camera in the future, instead of trying to optimize the scan results further.

This makes sense, since current depth cameras have shrunk enough to fit on a typical small mobile robot, and they are much faster than a mechanical scan. My current solution, while educational, can easily take a minute per scan with the slow distance sensor and servos. Other mechanical scanners are faster, but they are either 2D only or quite expensive.

It would also be interesting to make a point cloud rendering that shades the voxels (3D spheres) similarly to a depth map, for a more natural appearance (and less overemphasized contrast).
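A simple way to do this would be to compute, per voxel, its distance from the chosen viewpoint and reuse the same close-is-bright mapping as in the depth map. A minimal sketch, with the viewpoint and distance range as assumptions:

import numpy as np

def shade_by_distance(points, viewpoint, d_near, d_far):
    """Return one gray value (0-255) per point: close = bright, far = dark."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - viewpoint, axis=1)
    t = np.clip((d - d_near) / (d_far - d_near), 0.0, 1.0)
    return ((1.0 - t) * 255).astype(np.uint8)

# Made-up example: shade three points as seen from the scan origin
grays = shade_by_distance([(1.0, 0.2, 0.1), (3.5, -1.0, 0.4), (5.8, 2.0, 0.0)],
                          viewpoint=np.array([0.0, 0.0, 0.0]), d_near=0.5, d_far=6.0)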

However, the major future goal is navigation and action planning supported by a point cloud, possibly using the Point Cloud Library (PCL) and ROS.
