Simultaneous Localization and Mapping (SLAM)

SLAM is the process of mapping the area around a robot while simultaneously determining and updating the robot's location within that map.

In this project, we were given data from THOR, a human-sized bipedal robot. This data included IMU readings, as well as Lidar and RGB-D data from a Kinect, both of which were mounted on the robot's head. Our task was to create a 2D map of the environment around THOR as it walked around our engineering buildings.

To actively map the environment, I wrote my own SLAM algorithm that used the Lidar scans, in conjunction with joint-angle and accelerometer readings, to pinpoint the locations of walls and objects. To keep the map consistent with itself, I wrote a particle filter that estimates the robot's pose within the map based on how well what the robot currently sees matches its existing conception of the world around it.
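Roughly, each iteration of that particle filter looks like the sketch below. This is a simplified illustration rather than the exact implementation; the noise levels, map-correlation score, and resampling threshold are placeholder choices.

```python
import numpy as np

def update_particles(particles, weights, odom_delta, scan_xy, log_odds_map, resolution, origin):
    """One particle-filter iteration: propagate with noisy odometry, reweight by map correlation.

    particles    : (N, 3) array of [x, y, yaw] pose hypotheses
    odom_delta   : [dx, dy, dyaw] motion estimate from joint angles / IMU
    scan_xy      : (M, 2) lidar endpoints in the robot's body frame
    log_odds_map : 2D occupancy grid in log-odds form
    """
    N = len(particles)

    # Motion update: apply the odometry delta in each particle's frame, plus noise.
    noise = np.random.randn(N, 3) * np.array([0.02, 0.02, 0.01])
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * odom_delta[0] - s * odom_delta[1] + noise[:, 0]
    particles[:, 1] += s * odom_delta[0] + c * odom_delta[1] + noise[:, 1]
    particles[:, 2] += odom_delta[2] + noise[:, 2]

    # Measurement update: score each particle by how well the scan lands on occupied cells.
    scores = np.zeros(N)
    for i, (x, y, yaw) in enumerate(particles):
        R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
        world = scan_xy @ R.T + np.array([x, y])
        cells = ((world - origin) / resolution).astype(int)
        valid = ((cells >= 0) & (cells < np.array(log_odds_map.shape))).all(axis=1)
        scores[i] = log_odds_map[cells[valid, 0], cells[valid, 1]].clip(min=0).sum()

    # Re-normalize weights and resample if the effective sample size collapses.
    weights *= np.exp(scores - scores.max())
    weights /= weights.sum()
    if 1.0 / (weights ** 2).sum() < N / 2:
        idx = np.random.choice(N, N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights
```

The best-weighted particle is then used as the robot's pose when the current scan is written into the map.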

My algorithm ran in real time.

Directly below are images of the 2D maps created by my SLAM algorithm, where white pixels represent walls or objects and black pixels represent traversable space. The first of these images demonstrates loop closure:

[Map images: lidar1_skip5, lidar0_skip5, lidar2_skip5, lidar3_skip5_rt]
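The maps themselves are occupancy grids stored in log-odds form. The sketch below illustrates how a single Lidar scan can be folded into such a grid; the log-odds increments, clamp value, and ray-tracing helper are illustrative choices rather than the exact ones used.

```python
import numpy as np
from skimage.draw import line  # Bresenham-style ray tracing; any integer line routine works

def integrate_scan(log_odds_map, robot_cell, hit_cells, l_occ=0.9, l_free=-0.4, clamp=30.0):
    """Fold one lidar scan into a log-odds occupancy grid.

    robot_cell : (row, col) of the robot in the grid
    hit_cells  : (M, 2) integer grid cells where lidar beams terminated
    """
    for r, c in hit_cells:
        # Cells along the beam are evidence of free space...
        rr, cc = line(int(robot_cell[0]), int(robot_cell[1]), int(r), int(c))
        log_odds_map[rr[:-1], cc[:-1]] += l_free
        # ...and the endpoint is evidence of an obstacle.
        log_odds_map[r, c] += l_occ
    # Clamp so the map can still change its mind after spurious measurements.
    np.clip(log_odds_map, -clamp, clamp, out=log_odds_map)
    return log_odds_map
```

Thresholding the log-odds at zero then yields the white (occupied) and black (traversable) pixels shown above.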


Below are images where the visible “floor” of the current map is re-projected onto the image:

[Re-projection images: image_dots_lidar3, image_dots_lidar3_2, image_dots_lidar3_3]
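The re-projection itself is a standard pinhole-camera computation. The sketch below shows the general idea, assuming a world-to-camera transform derived from the head's joint angles and the Kinect's intrinsic matrix; the function and parameter names here are illustrative.

```python
import numpy as np

def project_floor_to_image(floor_xy, T_world_to_cam, K, image_shape):
    """Project traversable floor points (world frame, z = 0) into pixel coordinates.

    floor_xy        : (M, 2) x/y positions of free-space map cells
    T_world_to_cam  : 4x4 transform from world frame to camera frame (from head pose)
    K               : 3x3 camera intrinsic matrix
    """
    # Lift the 2D floor cells to homogeneous 3D world points on the ground plane.
    pts = np.c_[floor_xy, np.zeros(len(floor_xy)), np.ones(len(floor_xy))]

    # Transform into the camera frame and keep only points in front of the camera.
    cam = (T_world_to_cam @ pts.T)[:3]
    in_front = cam[2] > 0.1
    cam = cam[:, in_front]

    # Pinhole projection: divide by depth, then apply the intrinsics.
    uv = (K @ (cam / cam[2]))[:2].T

    # Keep only pixels that actually land inside the image.
    h, w = image_shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[ok].astype(int)
```

The returned pixel coordinates are then drawn onto the RGB frame, producing the dotted "floor" overlays shown above.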