Follow the {yellow} brick road

Written by avrahamraskin | Published 2016/06/15

The case for AR Navigation

When I first sat down to write this article a couple of years ago, it was all about the idea of Augmented Reality navigation and the real-world safety features it comes with. It was just an idea back then, but since then a number of companies have started working on some of these ideas. So in this article I want to showcase some of these solutions, bring the concepts to your attention, and even ideate on some of the future directions we could take with AR Nav.

Paper

The venerable paper map has been with us for centuries, helping us discover, explore and find our way home. It was once convenient and portable, but as the digital world took hold, two issues became clear. The first was that it was static: although we could carry it around, it was still just a pile of paper, thick, heavy and local. It only catered to one city, or gave a general overview of a country.

The second problem was contextual: once you had pinpointed your location, you first had to understand the context of the map; then you had to look around and understand the context around you, i.e. the landmarks and streets; and then you had to build a model in your head bringing these two contexts together before you could make the calculations needed to get to the next place.

Digital

Now you have one small device that can fit in your pocket. You can have all the maps in the world and it won’t get any heavier! But we haven’t completely solved the problem of context. The cognitive load is a lot smaller than it used to be: where back in the day you would need to pull over to the side of the road to reorient yourself and figure out the next part of your journey, now you just peer down at your phone, get the next instruction, and you’re golden.

However, we’re still taking our eyes off the road! Although it’s only a couple of seconds, those seconds can be fatal, especially when going 100 km/h (60 mph) down the highway. We used to pull over to the side of the road because these cognitive calculations would take a minute or two; now that they only take a couple of seconds, we’ve decided we can afford to do them while driving. That decision can kill.

Imagine this: you’re on your way to a doctor’s appointment. This being a new doctor, you don’t know his address, so you enter his details into Google Maps and start driving. You know your way out of the city, so you ignore the first few directions. This means you probably miss the first direction you actually need, so you frantically press some buttons to get back to where you were supposed to start paying attention. You correct your course and then hear, “Left turn onto Highway 4.” You have no clue what it’s talking about, but a glance at the screen shows you that it really means “Princess Highway.” Who calls it Highway 4? But it’s a big road and you know where it is, so you’re fine. Then, as you come closer to your destination, it ventures, “Slight right onto Hop St.” Slight right? How slight are we talking? Is it the same street curving, or does it branch off? A quick look back at the screen, then another one as you get closer, just to confirm. Now you’re nearly there. “Left on Hilsborough Ln.” Okay, where is this street? And the street signs are so small and hard to read. You think you’re gonna have to get your glasses out for this one. You fumble with your bag, get your glasses out, put them on, and now you find yourself slowing down at every street just to read the sign so you can turn into the correct one. Slowing down… speeding up… you look very unsure and unpredictable to the other drivers on the road.

Does any of that sound familiar? Because the map in our GPS system lives in a different context and requires us to take our eyes off the road, we use voice to help us out. VUI (Voice User Interface) is great; it is the primary way we communicate with each other. But in our GPS we use it as a band-aid, a small patch to stop us looking down so much. We need to make the roads safer for everyone, but is this the best we’ve got?

What if we could get rid of most of these extra features and create a mapping system that is simple to follow, doesn’t require the constant mapping of the two worlds, reduces the cognitive load and allows us to effectively follow the {yellow} brick road?

Enter AR. The king of context.

HUDs and Glasses

There are two main approaches to AR navigation here. The first is a HUD, or Head-Up Display. This comes in many flavours, from a Google Glass type of device to the very cool half-million-dollar F-35 HMD. Here we will talk about the more traditional windshield HUD. The second approach is a pair of AR glasses that would typically be worn by the driver, a future iteration of something like the HoloLens or Meta glasses.

There is a first wave of HUD displays coming out, like the Navdy and the Continental Combiner HUD. These are mostly small transparent strips of plastic in front of the windshield that display your information. They let you see your navigation in front of you without having to take your eyes off the road; they’re kind of like Glass for our windshields. However, you still carry the cognitive load of mapping the directions onto the actual road you are driving on.

Navdy’s and Continental’s products: windshield HUDs inside their own context

Then there’s the second wave. This is where glasses also jump in; things start getting interesting, and we begin to tackle the road in its real-world context. Here we have companies like Ford, Mini, BMW and WayRay starting to release some interesting-looking concepts. We start to see the directions actually “painted” on the road. This completely gets rid of the mapping process that needs to go on in our heads, because the directions are mapped straight onto the world. We still have different icons and arrows plastered all over the place, and it doesn’t feel completely fleshed out, but at least we are headed in the right direction.

follow the {yellow} brick road
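
Under the hood, “painting” a direction on the road is at heart a projection problem: the system knows where the manoeuvre sits in the world, and it has to work out where that lands on the display in front of the driver’s eyes. Here is a minimal, hypothetical sketch of that one step, assuming a simple pinhole camera model; the focal length, display resolution and function names are all illustrative, not taken from any shipping HUD.

```python
# A minimal sketch (not any vendor's actual pipeline) of the core
# "paint the route on the road" step: projecting a route point, already
# expressed in the car's local 3D frame, onto a 2D display using a
# pinhole camera model. Focal length and display size are assumptions.

def project_to_display(point_car_frame, focal_px=1200.0,
                       cx=960.0, cy=540.0):
    """Project a 3D point (x right, y down, z forward, in metres)
    onto pixel coordinates of a 1920x1080 display surface.
    Returns None for points behind or too close to the viewer."""
    x, y, z = point_car_frame
    if z <= 0.1:                  # can't project what's behind us
        return None
    u = focal_px * x / z + cx     # perspective divide: far points
    v = focal_px * y / z + cy     # converge toward the view centre
    return (u, v)

# A right turn ~30 m ahead: sample points along the manoeuvre on the
# road plane (~1.4 m below the camera) and connect them on screen to
# draw the "painted" arrow.
route_points = [(0.0, 1.4, 10.0), (0.0, 1.4, 30.0), (4.0, 1.4, 34.0)]
polyline = [p for p in map(project_to_display, route_points) if p]
print(polyline)  # [(960.0, 708.0), (960.0, 596.0), (1101.2, 589.4)]
```

In a real system the hard part is everything before this call: fusing GPS, IMU and camera pose well enough that the route points land on the actual asphalt rather than floating beside it.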

There is some speculation that Tesla will replace the conventional dashboard with a HUD in the upcoming Model 3. This would put them firmly in the second wave of HUDs: your directions wouldn’t be crammed onto a tiny plastic screen in front of the car, but could take up the entire real estate of the windshield. That would plant the tech in the Model 3 firmly in AR territory, since you can’t project information in front of someone’s eyes without being aware of the surrounding landscape. Tesla would need cameras on the outside of the car for this computer vision task; good thing they have those kinds of sensors sitting around for their Autopilot features.

Pretty empty-looking dashboard on the Tesla Model 3

Future directions

Understanding the world around the car

When we start working in the context of the world, and offer more than just navigation, we can start thinking in new directions:

  • We could overlay your personal directions, detours and instructions onto the billboards that dot the highway today
  • Flash your speed onto the speed-limit signs
  • Highlight landmarks and areas of interest to you
  • Project a ghost car along the path an unpredictable car looks likely to take, so you can see whether it will drift into your lane or whether a turning car will sideswipe you
  • Follow a virtual tour from the safety of your car
  • Display your petrol status when you pass a petrol station
  • Silhouette hazards or cars on the side of the road, a sort of Augmented Waze where you could see the hazard from afar, and then maybe give a thumbs up to confirm that the hazard is still there
  • X-ray a mountain: using sensors in the car, you would literally be able to see a car coming at you through the mountain or around a corner (a rough sketch of this follows the list)
  • Night mode: street lamps and headlights just flood places with light. You could silhouette specific animals, cars or hazards without unnecessarily bothering others
  • Show the weather on the road ahead
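
To make one of these concrete, here is a rough, speculative sketch of the mountain X-ray idea: given another vehicle’s position shared over something like V2V (an assumption; no particular standard is implied), we can work out the bearing at which its silhouette should appear relative to our own heading, even when the terrain hides the car itself. All names, coordinates and thresholds are illustrative.

```python
# A hedged sketch of the "X-ray a mountain" idea: given another
# vehicle's shared position (assumed available via V2V or similar),
# compute where its silhouette should appear relative to our heading,
# even when terrain hides it. Thresholds here are made up.

import math

def silhouette_bearing(own_lat, own_lon, own_heading_deg,
                       other_lat, other_lon):
    """Return the other car's bearing relative to our heading, in
    degrees (-180..180), plus a rough distance in metres using a
    flat-earth approximation, fine at silhouette-rendering ranges."""
    # metres per radian of latitude; longitude shrinks with cos(lat)
    north = math.radians(other_lat - own_lat) * 6_371_000
    east = (math.radians(other_lon - own_lon) * 6_371_000
            * math.cos(math.radians(own_lat)))
    bearing = math.degrees(math.atan2(east, north)) - own_heading_deg
    bearing = (bearing + 180) % 360 - 180  # normalise to -180..180
    return bearing, math.hypot(north, east)

# A car behind the ridge, ahead and slightly to our left:
bearing, dist = silhouette_bearing(-37.8100, 144.9600, 45.0,
                                   -37.8090, 144.9605)
if abs(bearing) < 60 and dist < 500:  # within view and close enough
    print(f"render silhouette at {bearing:+.1f} deg, {dist:.0f} m")
```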

Once we move back into the real world’s context, the possibilities for navigation in the car really become endless. But I am not just looking forward to when the GUI (Graphical User Interface) moves into the real world; I’m also very much looking forward to when VUI joins as well. I have written a couple of articles on the future of VUI, and I think it’s going to be a huge area of potential in the coming years. Without diving too deeply into VUI here, these are some places I think voice can really set itself apart and move beyond its job today of only voicing the next instruction:

  • Voice should become more conversational and less instructional
  • Signalling that there is a car in your blind spot
  • Conversation with the nav about the weather and the routes
  • 3D sound replacing dashboard controls (a quick sketch of this follows the list)
  • Alerting you to something that would either be blocked or easily missed on the screen, e.g. someone about to step out from behind a car, or an animal crossing the street
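
As a flavour of what 3D sound could look like in code, here is a small speculative sketch that writes a stereo alert tone panned toward the side of a blind-spot threat, using equal-power panning and nothing but Python’s standard library. The angle convention, file name and parameters are assumptions for illustration, not anyone’s shipping audio stack.

```python
# A speculative sketch of the "3D sound" idea: pan a short alert tone
# toward the side of the threat so the driver hears *where* the hazard
# is without glancing at a screen. Angle convention (negative = left)
# is an assumption for illustration.

import math
import struct
import wave

def spatial_alert(path, threat_angle_deg, freq=880.0,
                  dur=0.4, rate=44100):
    """Write a stereo beep whose left/right balance encodes the
    threat direction (-90 = hard left, +90 = hard right)."""
    pan = (threat_angle_deg + 90) / 180  # 0 = full left, 1 = full right
    left = math.cos(pan * math.pi / 2)   # equal-power panning keeps
    right = math.sin(pan * math.pi / 2)  # perceived loudness constant
    frames = bytearray()
    for i in range(int(dur * rate)):
        s = math.sin(2 * math.pi * freq * i / rate)
        frames += struct.pack('<hh', int(s * left * 32767 * 0.5),
                              int(s * right * 32767 * 0.5))
    with wave.open(path, 'wb') as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# A car lurking in the left blind spot: the alert comes from the left.
spatial_alert('blindspot.wav', threat_angle_deg=-70)
```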

If I had to give one reason why I am so optimistic about AR, and why I believe without a doubt that this is the next wave, it would be because AR will solve context. It’s really that simple. With autonomous cars and smart AR, where we are not just shoving visuals in front of our faces but designing placement and timing with care, we will end up not just with safer roads, but with a much safer world.

