Ghost — My Plan To Race An Autonomous RC Car

Written by stevendaniluk | Published 2017/01/11
Tech Story Tags: self-driving-cars | autonomous-cars | robotics | racing | projects


I’ve decided to make an RC car race autonomously.

The general idea is that I’m going to take an RC car (e.g. below left), attach a small computer and some sensors to it, and enable it to drive around a track (e.g. below right) without any human control. Should be pretty straightforward…

The goal here isn’t just to make the car move around the track slowly either. The challenge that I am more interested in is to push the limits of my autonomy knowledge to see how fast I can make it drive, and eventually start working on racing lines and improving lap times. When an autonomous car, or robot, is moving quickly, the task of determining where you are, planning where to go, and controlling the car in real time is quite challenging. But that’s what makes this so interesting!

I suppose an important question would be: why am I doing this? Firstly, most of my life has revolved around RC cars, racing, and robotics, which makes this project the perfect combination of all my hobbies. Secondly, projects are one of the best ways to learn new skills and expand your knowledge. You can spend all the time in the world studying how to do something, but it will never be as rewarding as actually bringing it to life.

I should mention that this project is very much inspired by the activity from RoboRace, the Self Racing Cars group in the U.S., the Self Driving Track Days group in Europe, and the F1/10 Autonomous Racing Competition. The racing and robotics worlds are both full of talented and passionate people, and it’s very exciting that they are finally mixing! I’m currently doing my master’s in aerospace engineering, and afterwards my goal is to work on something like what RoboRace is doing. So, I’m essentially creating my own scaled-down version of that, which I hope will act as a stepping stone towards working on the real thing.

Also, I’ve named this little project Ghost. When I initially thought of making one of my RCs autonomous, I basically wanted to make something that I could follow on the track, just like the “ghost” feature in most racing video games. The name stuck, so that’s what it will be called.

The stories that I will be posting here are intended for tech-savvy people who would like a behind-the-scenes look at how autonomy works, as well as hobbyists who would like to follow a somewhat step-by-step guide. I will divide the stories into sub-tasks of the project, such as determining the car’s position on the track, and always start off with a bit of high-level theory before diving into the details. I have actually already been working on some portions of this project, but am only now starting to write about it. As a result, some posts will come together quicker than others, but I will aim for a new post every ~2 weeks.

Now, onto the plan.

Project Outline

Before beginning, there were a few assumptions and restrictions I made. This project is ambitious enough to begin with, so I need to choose my battles carefully.

The first assumption is that there will be no other cars or obstacles in the way. Attempting to understand other cars and their intentions in order to overtake them, while minimizing time lost, could easily turn into a PhD thesis (or a few). It is best to go one step at a time. The second assumption is that a map of the track is available beforehand. There are cool techniques such as Simultaneous Localization and Mapping (SLAM) for creating a map while you move around. However, I can’t justify spending too much time on something that will only need to be done once per track, especially something that can easily be replicated by a human (in potentially less time, when you factor in any post-processing necessary).

The one restriction that I have placed on the design is that vision (i.e. cameras) must be used for external perception of the environment, as opposed to lidar (or any other means). The tracks this car would drive on may not always have their boundaries defined by walls of some form; they may simply be markings on the ground, in which case lidar would be of little use. Plus, the lidar sensors that I would need are simply too expensive for me (over $1,000). There are inexpensive units such as the RPLidar (which I have used and quite liked), but it only generates scans of the surroundings at 5–10 Hz. A 10 Hz update rate when moving at 10 m/s (36 km/h) results in only one reading per meter of movement, which is quite coarse considering the track is only 2–3 meters wide. Cameras, on the other hand, can easily provide 30–100 frames per second for a few hundred dollars.
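To put concrete numbers on that sensor-rate trade-off, here is a quick back-of-the-envelope check in Python; the speeds and update rates are the figures quoted above:

```python
def distance_per_update(speed_mps: float, rate_hz: float) -> float:
    """Metres the car travels between consecutive sensor readings."""
    return speed_mps / rate_hz

speed = 10.0  # m/s, i.e. 36 km/h

print(distance_per_update(speed, 10.0))   # lidar at 10 Hz: 1.0 m per scan
print(distance_per_update(speed, 30.0))   # camera, low end: ~0.33 m per frame
print(distance_per_update(speed, 100.0))  # camera, high end: 0.1 m per frame
```

One reading per meter on a track only 2–3 meters wide leaves very little margin, while even a modest 30 fps camera cuts that gap to a third of a meter.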

With any autonomous system, it is good to start with an architecture in mind. An architecture is essentially how you decompose the system into smaller subsystems, and how those subsystems relate to each other. Below is a fairly standard architecture for an autonomous robot, but it helps illustrate how the task will be divided.

Architecture for how the RC car will drive itself

The Trajectory Generator is responsible for creating a racing line around the track given the layout of the track. External Perception determines some geometry about the world from a sensor, such as a camera or lidar unit, while Odometry Estimation predicts how the car is moving (e.g. by counting wheel rotations). Data from External Perception and Odometry Estimation will then get fused together (a future post will discuss what data fusion is and how it is done) and compared against the track map in the Localization module to estimate where the car is on the track. Finally, the Path Tracking Controller will look at the car’s position and issue the appropriate throttle and steering commands to make the car follow the desired line.
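As a rough sketch of how those modules might fit together in code, here is a minimal skeleton. Every class, method, and signature below is an illustrative placeholder of my own, not the actual project API:

```python
# Skeleton of the architecture described above. All names and
# signatures are illustrative placeholders, not real project code.

class TrajectoryGenerator:
    def racing_line(self, track_map):
        """Compute a desired line around the track from its layout."""
        ...

class ExternalPerception:
    def observe(self, camera_frame):
        """Extract geometry about the world (e.g. track markings)."""
        ...

class OdometryEstimation:
    def predict(self, wheel_ticks, imu_sample):
        """Predict how the car has moved since the last update."""
        ...

class Localization:
    def estimate_pose(self, geometry, motion, track_map):
        """Fuse perception and odometry against the map into a pose."""
        ...

class PathTrackingController:
    def command(self, pose, racing_line):
        """Return (throttle, steering) to follow the desired line."""
        ...

def drive_step(modules, sensors, track_map, racing_line):
    """One pass through the pipeline: sense, localize, act."""
    geometry = modules["perception"].observe(sensors["camera"])
    motion = modules["odometry"].predict(sensors["ticks"], sensors["imu"])
    pose = modules["localization"].estimate_pose(geometry, motion, track_map)
    return modules["controller"].command(pose, racing_line)
```

The important part is the data flow in `drive_step`: perception and odometry feed localization, and only the localized pose reaches the controller.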

There you have it, that’s the basic idea for how the RC car will drive itself. Next, a few words about implementation.

Hardware

The car that I’ll be using is a Kyosho TF-5 Stallion 1/10 scale on-road car. I spent many years racing RCs, including a few during which I got to drive for Kyosho America. This is a car I used for practice during the winter.

In terms of sensory and computational hardware, I’ll be using the following:

  • Intel NUC Computer
  • Arduino Nano Microcontroller
  • Phidgets Spatial 3/3/3 IMU
  • PointGrey Blackfly Camera

Clockwise from top left: Intel NUC, Arduino microcontroller, IMU, and the PointGrey camera

Alternative hardware can of course be used, but I already had most of these items from previous projects. I may even end up changing components, or adding more as I go. All of the sensors, as well as the car’s servo and motor ESC (Electronic Speed Control), will be connected to the NUC, where all the processing will be done. In future posts, as each piece of hardware is introduced, I’ll go into more detail about it, the setup, etc.

Software

I will be developing this entirely in ROS, as well as using OpenCV, and potentially even TensorFlow later on (I’ve been itching to play with that some more). Those of you who would like to look at the code or follow along can do so at my GitHub account: stevendaniluk/ghost.

This wraps up the introduction. Stay posted for the next update, where I will get the computer talking to the car to send controls. Then we’ll be moving onto the really interesting stuff…

You can find the rest of the stories in this series here:

Ghost II — Controlling An RC Car With A Computer

Ghost III — Dead Reckoning Navigation

Ghost IV — Sensor Fusion: Encoders + IMU


Published by HackerNoon on 2017/01/11