This project is about an autonomous vehicle, based on a modified toy RC car, that can drive along a “road” without any manual intervention.
To this end, the car’s remote control is modified so it can be attached to a microcontroller, which receives commands from a Python program running on a laptop. The camera, mounted on top of the car, streams its view wirelessly to a neural net on the laptop, which decides what steering command is most appropriate at each time step/frame.
In this post, I will present how to modify the remote control (soldering and mechanical changes), how to extend the car, and how to stream live video, with low latency, from the Raspberry Pi to a laptop using GStreamer and OpenCV. An upcoming post will show a reliable neural net model for automated steering.
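To give an idea of the streaming setup, here is a minimal sketch of the kind of GStreamer pipeline strings involved: the Pi encodes H.264 and sends it as RTP over UDP, and the laptop hands a matching receive pipeline to OpenCV. The IP address, port, source element, and encoder settings below are assumptions for illustration, not the exact pipelines from the post.

```python
# Hypothetical addresses/ports -- adjust for your network.
LAPTOP_IP = "192.168.0.10"
PORT = 5000

# Sender side (run on the Raspberry Pi, e.g. via gst-launch-1.0).
# The source element depends on your camera driver (v4l2src assumed here).
sender = (
    "v4l2src device=/dev/video0 "
    "! video/x-raw,width=640,height=480,framerate=30/1 "
    "! x264enc tune=zerolatency bitrate=2000 speed-preset=ultrafast "
    "! rtph264pay config-interval=1 pt=96 "
    f"! udpsink host={LAPTOP_IP} port={PORT}"
)

# Receiver side: pass this string to
# cv2.VideoCapture(receiver, cv2.CAP_GSTREAMER) on the laptop.
receiver = (
    f'udpsrc port={PORT} caps="application/x-rtp,media=video,'
    'encoding-name=H264,payload=96" '
    "! rtph264depay ! avdec_h264 ! videoconvert ! appsink sync=false"
)

if __name__ == "__main__":
    print("Pi side:     gst-launch-1.0", sender)
    print("Laptop side:", receiver)
```

The `tune=zerolatency` and `sync=false` settings are what keep the end-to-end latency low enough for closed-loop steering.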
Formal proofs and verification in robotics are a difficult subject to tackle, due to the unclear nature of the environment and the question of what constitutes a model sufficient to make valid proofs at all.
Since I learned about point clouds a while ago, I have wanted to gain some practical experience with them. The goal was to scan a terrain/landscape-like scene using a hardware setup as basic as possible, rather than a depth camera.
I used a LIDAR-Lite (a laser-based distance sensor) mounted on a pan-tilt kit, whose servos position the sensor. All of these devices were controlled by an Arduino, which streamed the sensor data back to a PC. To avoid problems with outdoor light, I added some wavy, terrain-like features to a room and scanned that instead of an outdoor scene.
Finally, I wrote my own software to compute each voxel (a 3D point/pixel in space) from the sensor data and servo angles, so that I could render the result as a depth map and as a point cloud, using various methods.
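The core of that computation is a spherical-to-Cartesian conversion: one distance reading plus the two servo angles give one 3D point. The sketch below assumes pan rotates around the vertical axis and tilt measures elevation above the horizon; the axis conventions and zero positions are my assumptions, not necessarily the ones used in the post.

```python
import math

def to_cartesian(distance, pan_deg, tilt_deg):
    """Convert one LIDAR distance reading plus the pan/tilt servo
    angles (degrees) into a 3D point (x, y, z).

    Assumed conventions: pan rotates around the vertical z-axis,
    tilt is the elevation above the horizontal plane."""
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = distance * math.cos(tilt) * math.cos(pan)
    y = distance * math.cos(tilt) * math.sin(pan)
    z = distance * math.sin(tilt)
    return (x, y, z)
```

Sweeping the servos over a grid of pan/tilt angles and applying this per reading yields the raw point cloud; rendering it as a depth map is then just a projection of these points.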
Core memory is the simplest kind of main memory you can build from really basic components (no chips!), and simple enough to understand well.
I have always wanted to understand computers to their core (no pun intended), and to build one myself from the ground up. The following kit seems to make it quite fun to do that for working memory (RAM), while being physically large enough to allow inspecting and measuring what is going on.
In the picture above, the core memory itself is just the wire net on the left with the small rings (that look like beads/dots). Enjoyably simple.
The chips on the board are there to easily write and read the hand-built memory and to provide an interface to an Arduino (or other microcontroller).
The kit comes with all the necessary components, including the PCB, magnetic cores, wires, etc., while requiring some soldering to assemble it. Head over to Jussi’s blog to read the complete documentation for the shield and how to get one.
Parsing the structured text files provided by the Unicode Consortium at every startup is too inefficient, and merely storing the parsed data in a plain integer array wastes too much memory.
A more efficient storage scheme uses a dictionary-like approach to compress the required data through a few layers of indirection, while still providing array-like performance with constant (and negligible) overhead.
In the following, I’ll briefly present the solution I found.
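The idea is essentially the classic multi-stage lookup table: split the flat per-codepoint array into fixed-size blocks, store each distinct block once, and keep a small index table pointing at blocks. The sketch below is illustrative (block size and function names are mine, not the exact implementation from the post); Unicode property data is highly repetitive, so most blocks collapse into a few shared ones.

```python
def build_two_stage(values, block_size=256):
    """Compress a flat per-codepoint array into a stage-1 index table
    plus deduplicated stage-2 blocks."""
    stage1, stage2, seen = [], [], {}
    for i in range(0, len(values), block_size):
        block = tuple(values[i:i + block_size])
        if block not in seen:
            # Remember where this distinct block lives in stage2.
            seen[block] = len(stage2) // block_size
            stage2.extend(block)
        stage1.append(seen[block])
    return stage1, stage2

def lookup(cp, stage1, stage2, block_size=256):
    """Two array reads, constant time: block index, then offset."""
    return stage2[stage1[cp // block_size] * block_size + cp % block_size]
```

With real Unicode data the savings compound further by adding another indirection layer on top of stage 1, at the cost of one more constant-time array read per lookup.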
Before starting to learn Prolog, I had used various logic-based systems, such as the SPIN model checker, or reasoners that work on ontologies encoded in OWL; the latter to reason about (visual) objects in RoboCup.
Prolog, however, seems to encode many problems in a more natural and fluent way, so I set out to write a few toy examples to test how well I could make it work and to get a feel for its advantages and limitations.
Many concepts in AI are implicitly based on the specific formulations and terminology used in Prolog and its derivatives. Vague-sounding words and expressions, often borrowed from everyday contexts, actually mean something rather specific, and learning Prolog sharpens one's understanding of these terms.