Ever since I learned about point clouds a while ago, I have wanted to gain some practical experience with them. The goal was to scan a terrain- or landscape-like scene using a hardware setup as basic as possible, rather than a depth camera.
I used a LIDAR-Lite (a laser-based distance sensor) mounted on a pan-tilt kit, driven by servos to position the sensor. All those devices were controlled by an Arduino, which streamed the sensor data back to a PC. To avoid problems with outdoor light, I added some wavy, terrain-like features to a room and scanned that instead of an outdoor scene.
Finally, I wrote my own software to compute each voxel (a 3D point / pixel in space) from the sensor data and servo angles, so that I could render the result as a depth map and as a point cloud, using various methods.
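At its core, that computation is a spherical-to-Cartesian conversion. A minimal sketch (the function name and angle conventions are my assumptions here: sensor at the origin, tilt measured from the horizontal plane, angles taken directly from the servos):

```python
import math

def to_cartesian(pan_deg, tilt_deg, distance):
    """Convert pan/tilt servo angles (degrees) and a range reading
    into a 3D point, assuming the sensor sits at the origin."""
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    # Spherical-to-Cartesian, with tilt measured up from the horizontal plane.
    x = distance * math.cos(tilt) * math.cos(pan)
    y = distance * math.cos(tilt) * math.sin(pan)
    z = distance * math.sin(tilt)
    return (x, y, z)
```

Collecting one such point per (pan, tilt) sample yields the point cloud; projecting the distances onto the pan/tilt grid yields the depth map.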
Core memory is the simplest kind of main memory you can build from really basic components (no chips!), and it is simple enough to understand thoroughly.
I have always wanted to understand computers to their core (no pun intended), and to build one myself from the ground up. The following kit makes it quite fun to do that for working memory (RAM), while being physically large enough to allow inspecting and measuring what is going on.
In the picture above, the core memory itself is just the wire net on the left with the small rings (that look like beads/dots). Enjoyably simple.
The chips on the board are there to easily write and read the hand-built memory and to provide an interface to an Arduino (or other microcontroller).
The kit comes with all the necessary components (the PCB, magnetic cores, wires, etc.), though some soldering is required to assemble it. Head over to Jussi’s blog to read the complete documentation for the shield and to find out how to get one.
Parsing the structured text files provided by the Unicode Consortium at each startup is too inefficient, and merely storing the parsed data in a simple integer array wastes too much memory.
A more efficient storage scheme uses a dictionary-like approach that compresses the data through a few layers of indirection, while still giving array-like performance with constant (and negligible) overhead.
In the following, I’ll briefly present the solution I found.
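To illustrate the general idea (this is a sketch, not the actual layout I used; the block size and names are assumptions), a two-stage table splits each code point into a high part that indexes a small table of block numbers and a low part that indexes into one of a few deduplicated blocks of property values:

```python
BLOCK_BITS = 7  # assumption: 128 code points per block

def build_tables(props):
    """props: dict mapping code point -> small integer property value.
    Returns (index, blocks); identical blocks are stored only once."""
    block_size = 1 << BLOCK_BITS
    index, blocks, seen = [], [], {}
    for start in range(0, max(props) + 1, block_size):
        block = tuple(props.get(cp, 0) for cp in range(start, start + block_size))
        if block not in seen:          # deduplicate identical blocks
            seen[block] = len(blocks)
            blocks.append(block)
        index.append(seen[block])
    return index, blocks

def lookup(index, blocks, cp):
    # Constant overhead: exactly two array accesses per query.
    return blocks[index[cp >> BLOCK_BITS]][cp & ((1 << BLOCK_BITS) - 1)]
```

Since most blocks of the code space share the same property values, deduplication shrinks the data dramatically while `lookup` stays as cheap as a plain array access.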
Before starting to learn Prolog, I had used various logic-based systems, such as the SPIN model checker, and reasoners that work on ontologies encoded in OWL; I used the latter to reason about (visual) objects in RoboCup.
Prolog, however, seems to encode many problems in a more natural and fluent way, so I set out to write a few toy examples to test how well I could make it work and to get a feel for its advantages and limitations.
Many concepts in AI are implicitly based on specific formulations and terminology as used in Prolog or its derivatives. Vague-sounding words and expressions, often taken from everyday language, actually mean something rather specific, and learning Prolog sharpens the understanding of these terms.
Analyzing these heatmaps can reveal undesired correlations between samples and labels in the training data. For example, an image classifier might rely on objects that are present in every picture of a class (horses, say) while being absent from the counter-examples. Such an artifact in the collected data set may be subtle and not noticeable to a human, but it would be visible in a heatmap that highlights the critical features in each image that drove the classification.
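One simple way to produce such a heatmap (a stand-in for the actual attribution method; the toy classifier and names below are made up for this sketch) is occlusion sensitivity: hide each region of the input in turn and record how much the classifier's score drops.

```python
import numpy as np

def occlusion_heatmap(image, score_fn, patch=4):
    """Slide an occluding patch over the image and record how much the
    classifier's score drops when each region is hidden; large drops
    mark the regions the classifier relies on."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = image.mean()  # hide this region
            heat[i // patch, j // patch] = base - score_fn(masked)
    return heat

# Toy "classifier": its score is just the brightness of the top-left corner,
# mimicking a model that latched onto a spurious watermark placed there.
score = lambda img: img[:4, :4].mean()

img = np.zeros((16, 16))
img[:4, :4] = 1.0  # the spurious "watermark"
heat = occlusion_heatmap(img, score)
# heat peaks at the top-left cell, exposing the spurious feature.
```

The heatmap concentrates entirely on the watermark region, which is exactly the kind of evidence that would expose an artifact like the one described above.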