Ghost III — Dead Reckoning Navigation

Written by stevendaniluk | Published 2017/02/23
Tech Story Tags: self-driving-cars | autonomous-cars | robotics | racing | arduino


This is the third story in a series documenting the development of an autonomous RC race car. You can find the earlier stories here:

Ghost — My Plan To Race An Autonomous RC Car

Ghost II — Controlling An RC Car With A Computer

With an autonomous race car, there are two main problems you need to solve: “Where am I?” and “What should I do?”. The “What should I do?” part deals with the control of the car, and can only be answered once you know something about where the car is, so that will come later. The “Where am I?” part is what I’ll be diving into today.

You might think this is a trivial problem. Your smartphone knows where you are all the time by using GPS, so how is this any different? Well, I’ll expand a bit on what this question entails. We typically describe a system, such as an autonomous car or even an airplane, by something called a state, comprised of state variables. For this car, the state variables would be the position coordinates in some reference frame, X and Y, the orientation in that frame (yaw angle), the velocities in each direction, and the rate of change of the orientation (yaw rate). You could of course include other information such as roll, pitch, or acceleration if it is relevant to the application, but for this project those six variables will suffice to describe the car.
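As a concrete picture, here is a minimal sketch of that state as a data structure. This is purely illustrative; the names are my own, not taken from the project’s code.

```cpp
// The car's state, expressed in some fixed reference frame.
// Illustrative only; field names are not from the Ghost codebase.
struct State {
  double x;        // Position along the frame's X axis [m]
  double y;        // Position along the frame's Y axis [m]
  double yaw;      // Orientation in the frame [rad]
  double vx;       // Velocity along X [m/s]
  double vy;       // Velocity along Y [m/s]
  double yaw_rate; // Rate of change of orientation [rad/s]
};
```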

Generally, no single sensor provides all the information required to completely define your state (at least not very well). To overcome this, we do something called sensor fusion, which fuses together sensory data from multiple sources to form a single estimate.

The way we perceive and control our bodies is a great example of sensor fusion at work, and provides the perfect analogy for how I’ll be solving the “Where am I?” question. There is of course a lot happening when we perceive and control our bodies, so this will be a simplification, but there are three central components that I’ll bring up. The first is sight: through our eyes we perceive our surroundings, both their appearance and their spatial layout. The second is our vestibular system: the sensory system located in our inner ears that can detect linear acceleration and rotational movements. The third is what we commonly call muscle memory: the ability to consistently perform bodily motions with a high level of accuracy.

If you were to try to answer the “Where am I?” question using only one of these, it likely wouldn’t work very well. Your sight would perform the best, but even it can be deceived at times. Think of looking out the window of a stationary car and seeing the car beside you pull away in the opposite direction. I’m sure we’ve all had momentary lapses where we thought we were the ones moving when we really weren’t. The vestibular system, detecting linear and angular acceleration, wouldn’t do very well on its own either. Imagine closing your eyes while someone carries you away, and trying to keep track of your position. You may be accurate in the short term, but over time the error will continuously accumulate, and pretty soon your estimate will be way off. Finally, our muscle memory would also perform poorly. Consider closing your eyes and trying to walk to a specific spot 100 meters away. Although you can perform walking motions with great accuracy from a lifetime of practice, you will still make small deviations from your predicted motion, and those errors accumulate over time.

Yet when we combine our sight, vestibular system, and muscle memory, we can coordinate our bodily motion very well. Our muscle memory predicts how a motion will move our body, and that prediction gets updated with our vestibular system and sight to form a more accurate estimate. That can then be used to more accurately update our motion, which will again get corrected by our vestibular system and sight, and so on.

A car can operate in a similar fashion. Sight can be emulated with a camera, detecting features and geometry in the surroundings to be compared with a map or previous observations. The vestibular system is essentially an Inertial Measurement Unit (IMU), which is a combination of accelerometers for measuring linear acceleration and gyroscopes for measuring angular velocity. The muscle memory can be emulated with a mathematical model of how the vehicle moves for a given steering angle and number of wheel rotations.

How you combine information from each of these sources is actually very interesting, but I’ll cover that in a future post once I’ve presented how I’m actually getting at least two of these sources of information.

The first one I’ll talk about is the muscle memory case, which is called odometry estimation in the robotics world. The general idea is that if you can count how many times each wheel rotates, and if you know some geometry about your vehicle, you can calculate how the vehicle will move.

You start by making a few assumptions: that the vehicle moves along an arc of constant radius at constant speed, and that there is no slip at the wheels (i.e. no spinning). If you measure wheel rotation at a high enough frequency, then the constant speed and radius assumptions will be valid for the brief period between measurements when you perform this calculation. The no-slip assumption, however, does not hold as well: the faster the car goes, the more invalid it becomes. But remember, this is only one estimate that is going to be fused with others. You can quickly go down a rabbit hole trying to account for everything, so it is best to keep things simple until additional complexity is necessary.

With a bit of geometry, you can figure out the vehicle velocity and yaw rate from the wheel rotations and steering angle. From there you simply integrate that change over time between updates to determine the change in position and orientation.
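The exact equations depend on the vehicle model you choose. As an illustration (my own formulation, not necessarily the one used in the project), a kinematic bicycle model with effective wheel diameter $d$, wheelbase $L$, $N$ encoder ticks per wheel revolution, $\Delta n_k$ ticks since the last update, and steering angle $\delta_k$ gives:

```latex
\begin{align}
  v_k &= \frac{\pi d \,\Delta n_k}{N \,\Delta t} && \text{(forward speed)} \\
  \omega_k &= \frac{v_k \tan\delta_k}{L} && \text{(yaw rate)} \\
  x_{k+1} &= x_k + v_k \cos\theta_k \,\Delta t \\
  y_{k+1} &= y_k + v_k \sin\theta_k \,\Delta t \\
  \theta_{k+1} &= \theta_k + \omega_k \,\Delta t
\end{align}
```

The last three lines are the integration step: speed and yaw rate are held constant over the short interval $\Delta t$, which is exactly the constant-arc assumption made above.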

An important point to make is that the error with this method is unbounded. Each prediction made will contain some error, which gets added to the error of the next prediction, and the next one after that. This is called drift, and it’s exactly what happens when you close your eyes and walk. Unless you have some global reference to update your estimate with, that error continuously accumulates. That is why odometry is often referred to as dead reckoning, and needs to be fused with other estimates to be accurate.
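A toy model (my own illustration, not from the project) makes the unbounded growth concrete: suppose each of $n$ updates adds an independent error $\epsilon_k \sim \mathcal{N}(0, \sigma^2)$ to the position estimate. Then:

```latex
e_n = \sum_{k=1}^{n} \epsilon_k,
\qquad
\operatorname{Var}(e_n) = n\sigma^2,
\qquad
\operatorname{std}(e_n) = \sigma\sqrt{n}
```

The spread of the estimate grows like $\sqrt{n}$, without bound, and in practice a systematic bias (like a miscalibrated wheel diameter) makes the drift grow even faster, roughly linearly with distance travelled.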

Wheel rotations can be measured with a sensor called an encoder. There are a variety of types available, but I’ll be using one of the simpler types: an optical encoder. Basically, you have an infrared emitter and detector facing each other, and a disc with a pattern of holes rotating between them. The path between the emitter and detector gets interrupted as the disc rotates, so you can track the disc rotations by monitoring the output signal of the detector.
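On an Arduino, that detector signal can be counted with a hardware interrupt. Here is a minimal sketch of the idea; the pin number, update rate, and names are assumptions of mine, not taken from the project:

```cpp
// Interrupt-driven encoder tick counting (Arduino-style C++).
// Pin number and names are illustrative, not from the Ghost project.
const byte ENCODER_PIN = 2;            // Detector output, on an interrupt-capable pin
volatile unsigned long tick_count = 0; // Written in the ISR, so it must be volatile

void onEncoderTick() {
  tick_count++;  // One hole in the disc has passed the emitter/detector pair
}

void setup() {
  pinMode(ENCODER_PIN, INPUT);
  // Fire on every rising edge, i.e. each time the light path is restored
  attachInterrupt(digitalPinToInterrupt(ENCODER_PIN), onEncoderTick, RISING);
  Serial.begin(115200);
}

void loop() {
  // Periodically sample and reset the count to get ticks per interval
  noInterrupts();
  unsigned long ticks = tick_count;
  tick_count = 0;
  interrupts();
  Serial.println(ticks);  // Downstream, ticks become distance via the wheel geometry
  delay(20);              // ~50 Hz update rate
}
```

Note that a single emitter/detector pair can only count ticks, not sense direction; for a car that is mostly rolling forward, that is a reasonable simplification.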

I’ve 3D printed my own encoder discs to mount on the drive cups of the front and rear differentials, as well as mounts to hold the encoders in place. With this setup I’m able to detect 1/24th of a wheel rotation, which corresponds to ~8mm of movement.

Steering angle will come from mapping the executed control published by the onboard Arduino to the physical angle of the wheels. Cars have something called Ackermann steering geometry, which positions the wheels at slightly different angles to account for the fact that the inside wheel travels a shorter distance than the outside wheel. Instead of accounting for these slight angle differences, I simply assume the steering angle is the average of the inner and outer wheel angles. There is some error in this assumption, but it is negligible unless you are making a very tight turn.
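In code, that mapping can be as simple as a calibrated linear fit from the steering command to an effective angle. This is a hypothetical sketch; the maximum angle and the linearity are assumptions, and the real mapping would be measured on the car:

```cpp
#include <algorithm>

// Map a normalized steering command in [-1, 1] to an effective steering
// angle in radians, taken as the average of the inner and outer wheel
// angles. Hypothetical values; a real car would be calibrated.
double steeringAngleRad(double cmd) {
  const double kMaxAngleRad = 0.35;  // ~20 degrees at full lock (assumed)
  cmd = std::clamp(cmd, -1.0, 1.0);
  return cmd * kMaxAngleRad;  // Linear fit between the calibrated endpoints
}
```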

The equations that govern this process rely on two measurements from the vehicle: the wheel diameter and the track width. I can of course measure these, but they won’t be accurate. Tires compress by a small amount, and the wheel camber angle (when the wheel leans inward or outward from the car) will alter the track width.

(Left) The effective wheel radius as a result of tire compression, (Right) A depiction of Ackermann steering geometry [Wikipedia Commons]

This means that those measurements need to be calibrated from actual data. Every car or mobile robot has to do this for odometry estimation. The simplest way to calibrate them is to perform two maneuvers: a purely straight motion, and a purely rotational motion. The math works out so that the wheel diameter only influences the purely straight motion, allowing you to solve for the effective wheel diameter by counting the number of wheel rotations over a measured distance. The rotational motion depends on both the wheel diameter and the track width, but once you’ve calibrated the wheel diameter, you can calibrate the track width by driving in circles a fixed number of times and counting the wheel revolutions.
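Here is one way the arithmetic can work out, as a sketch. It assumes revolution counts are available for the inner and outer wheels separately; the function names and that assumption are mine, not the project’s exact procedure:

```cpp
#include <cmath>

// Step 1: straight-line test. Only the wheel diameter affects distance
// travelled, so the effective diameter follows from distance = pi * d * revs.
double calibrateWheelDiameter(double measured_distance_m, double wheel_revolutions) {
  return measured_distance_m / (M_PI * wheel_revolutions);
}

// Step 2: rotation test. Drive a whole number of circles. Over n circles the
// outer wheel travels 2*pi*n*track_width further than the inner wheel, so the
// effective track width follows from the difference in rolled distance.
double calibrateTrackWidth(double outer_revs, double inner_revs,
                           double wheel_diameter_m, double num_circles) {
  const double dist_diff_m = M_PI * wheel_diameter_m * (outer_revs - inner_revs);
  return dist_diff_m / (2.0 * M_PI * num_circles);
}
```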

After repeating a 7.62 m (25 ft) straight line test 5 times, and driving in three CW and three CCW circles, my measured wheel diameter had to be increased by 3.2%, while the track width had to be decreased by 8.3%. Over a long distance, it is quite clear that a small error in those measurements could result in very different predictions.

To test everything I drove the car around a building at my university, going through a few hallways to make a big “T”-shaped lap ending at the same position it started. The path from odometry estimation is shown below. The total distance was ~100 meters and involved ~720 degrees of rotation. To my pleasant surprise, the error between the start and end points was only 0.27 meters, with an orientation error of 9 degrees.

Estimated path from dead reckoning while driving slowly

That’s not too bad for just dead reckoning. However, this was all done with the car moving quite slowly. Remember how earlier I mentioned that the assumption of no wheel slip becomes less valid as speed increases? Well, below is what happens when you drive the same route, but much more aggressively. I was averaging about 3–5 m/s (11–18 km/h) on smooth tile floors, and as you can see, the odometry estimate is way off. This is basically like trying to run on ice with your eyes closed.

Dead reckoning while driving quickly

How can that error be corrected? Well, that will be the subject of the next story. I’ll add an IMU to the car, and discuss a method of sensor fusion to produce a single, more accurate estimate that should (hopefully) make the high-speed estimation as accurate as the low-speed estimation.

You can find the GitHub repo for this project here, and the rest of the stories in this series below.

Ghost — My Plan To Race An Autonomous RC Car

Ghost II — Controlling An RC Car With A Computer

Ghost IV — Sensor Fusion: Encoders + IMU

