DNRacing v0.2 — Simultaneous Localization and Mapping

Written by hackernoon-archives | Published 2017/10/27
Tech Story Tags: autonomous-cars | mapping | dnracing | robotics | robot-operating-system

How I got started with the project + update on ROS SLAM packages.

My initial inspiration for building an autonomous race vehicle came from my younger brother, who built a SLAM vehicle out of his Android phone and personal laptop.

My Brother’s Website: https://sungjik.wordpress.com/2015/09/28/my_personal_robotic_companion/

I decided to make one myself. To make things interesting, I decided to build two identical vehicles fast enough to run at race speeds, much like Roborace. I looked up different projects that were already working on a similar idea and started my build. As of today, I can think of four online sources off the top of my head that describe work on scaled autonomous race vehicles. I list them below.

MIT RACECAR: http://fast.scripts.mit.edu/racecar/hardware/

UPENN F1 TENTH: http://f1tenth.org/

DJTobias Cherry Car: https://github.com/DJTobias/Cherry-Autonomous-Racecar

Jetsonhacks Racecar: http://www.jetsonhacks.com/category/robotics/jetson-racecar/

There are others that should be mentioned, like the BARC Lab vehicle at UC Berkeley and the Ghost vehicle that's on Hackernoon as well.

One key similarity between most of these race platforms is that they use the Nvidia Jetson TX1 or TK1 embedded developer board. I also started out with this developer kit, but looking back, I spent more time debugging the board than focusing on actual software development. Rebooting Ubuntu more than 20 times was not very fun. Despite the challenges of using this bleeding-edge embedded processor, I was still able to create a 2D SLAM navigation stack in ROS using the ZED stereo camera, the Sparkfun Razor 9DoF IMU and the Hokuyo UST-10LX LIDAR.
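
Before wiring all of this into the navigation stack, it helps to confirm that each sensor driver is actually publishing. Below is a minimal sketch of such a check; the topic names (/zed/odom, /imu, /scan) are assumptions on my part and depend entirely on how the drivers are launched.

```python
#!/usr/bin/env python
# Quick check that the ZED, the Razor IMU and the Hokuyo are all publishing
# before the SLAM/navigation stack is brought up. Topic names are assumed.
import rospy
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Imu, LaserScan

def report(name):
    def callback(_msg):
        # Log at most once every 5 seconds per sensor.
        rospy.loginfo_throttle(5.0, name + " is publishing")
    return callback

if __name__ == "__main__":
    rospy.init_node("sensor_check")
    rospy.Subscriber("/zed/odom", Odometry, report("ZED visual odometry"))
    rospy.Subscriber("/imu", Imu, report("Razor IMU"))
    rospy.Subscriber("/scan", LaserScan, report("Hokuyo LIDAR"))
    rospy.spin()
```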

Most of the time was spent figuring out how exactly the ROS tf tree works. There is a lot of information on how to set up a navigation stack in ROS, but most of it is disorganized and scattered in bits and pieces, so it took a fair amount of patience and extensive googling to piece together what was going on.
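
To give a concrete picture of what the tf setup ends up looking like: the SLAM node publishes map → odom, the odometry/fusion node publishes odom → base_link, and the fixed sensor mounts hang off base_link. Here is a minimal sketch of broadcasting those static sensor frames from Python; the frame names and mounting offsets below are placeholders, not the actual values from my car.

```python
#!/usr/bin/env python
# Broadcast the fixed sensor frames relative to base_link. The map -> odom
# and odom -> base_link transforms come from the SLAM and odometry nodes.
import rospy
import tf

if __name__ == "__main__":
    rospy.init_node("sensor_frames")
    broadcaster = tf.TransformBroadcaster()
    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        now = rospy.Time.now()
        # sendTransform(translation, rotation_quaternion, time, child, parent)
        broadcaster.sendTransform((0.20, 0.0, 0.15), (0, 0, 0, 1), now, "laser", "base_link")
        broadcaster.sendTransform((0.00, 0.0, 0.05), (0, 0, 0, 1), now, "imu_link", "base_link")
        broadcaster.sendTransform((0.15, 0.0, 0.10), (0, 0, 0, 1), now, "zed_camera", "base_link")
        rate.sleep()
```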

Another portion of my time was spent calibrating the ZED camera and the Sparkfun IMU. The ZED camera provides visual odometry, which gives position in the x and y axes, and this is fused with the IMU, which provides yaw and acceleration along x and y. I tried both an extended Kalman filter and an unscented Kalman filter with different settings; I left the covariance matrix weights at their defaults, and trial and error with the remaining settings gave me better results.
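
To make the covariance weighting concrete, here is a small, self-contained sketch of the measurement update at the heart of this kind of fusion: the state is [x, y, yaw], the visual odometry observes x and y, the IMU observes yaw, and the measurement covariance R controls how much each sensor is trusted. All of the numbers below are made up for illustration; on the real vehicle they come from the sensor covariances and the filter parameters.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update: small R means a trusted sensor."""
    innovation = z - H @ x
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ innovation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x = np.zeros(3)                          # state: [x, y, yaw]
P = np.eye(3)                            # initial uncertainty
Q = np.eye(3) * 0.01                     # process noise

# One fusion cycle: predict (state held constant), then update per sensor.
P = P + Q

zed_xy = np.array([1.02, 0.48])          # visual odometry position (m)
H_zed = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
R_zed = np.eye(2) * 0.05                 # ZED position noise (illustrative)
x, P = kf_update(x, P, zed_xy, H_zed, R_zed)

imu_yaw = np.array([0.12])               # IMU yaw (rad)
H_imu = np.array([[0.0, 0.0, 1.0]])
R_imu = np.array([[0.02]])               # IMU yaw noise (illustrative)
x, P = kf_update(x, P, imu_yaw, H_imu, R_imu)

print("fused state [x, y, yaw]:", x)
```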

In the end, I got a working 2D SLAM navigation stack running. I mapped the building my classroom is in with the LIDAR while driving the vehicle around with an RC controller. No autonomous control yet, but the mapping seemed to work quite nicely.
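
While driving around, a quick way to see whether the map is actually filling in is to watch the occupancy grid coming out of the SLAM node. The sketch below assumes the grid is published on /map, which is the usual default but still an assumption here.

```python
#!/usr/bin/env python
# Report the size and explored fraction of the occupancy grid while mapping.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(grid):
    known = sum(1 for cell in grid.data if cell >= 0)   # -1 means unknown
    rospy.loginfo("map %dx%d at %.3f m/cell, %.1f%% explored",
                  grid.info.width, grid.info.height, grid.info.resolution,
                  100.0 * known / len(grid.data))

if __name__ == "__main__":
    rospy.init_node("map_progress")
    rospy.Subscriber("/map", OccupancyGrid, on_map)
    rospy.spin()
```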

As mentioned previously, the TX1 carried a lot of debugging issues as a developer board, so I sold my TX1 kit and replaced the processor with a Zotac Zbox EN970. I did this because the EN970 carries a GTX 970M with 3 GB of GPU RAM. There is also the added benefit of an x86-64 architecture with an Intel i5, which lets me run programs like MATLAB and Simulink. I can focus my time on actually coding the control algorithms for my vehicle, instead of wasting it on dependency issues and other debugging problems.

Now the key problem with choosing a standard gaming nettop over an embedded processor like the TX1 is power consumption. The maximum peak power consumption of the EN970 is 120 W at 19 V DC. This is a lot compared to the TX1, which peaks at around 15 W. It was difficult to find a portable battery that could handle this sort of output. Thankfully, I found one battery bank from Goal Zero called the Sherpa 100. Problem solved!
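
For a rough sense of scale: 120 W at 19 V DC works out to a little over 6 A of draw, and a pack in the roughly 100 Wh class (which is about what the Sherpa 100 holds) would last under an hour at sustained peak load; in practice the nettop spends most of its time well below that peak, so real runtimes are comfortably longer.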

