New Waveguides for AR Glasses Are Coming to a Face Near You

Lasers, fashion, occlusion, and other news from SPIE Photonics West

The HoloLens 2, an AR headset designed by Microsoft, exhibited during Mobile World Congress on February 28, 2019, in Barcelona, Spain. Photo: NurPhoto/Getty Images

I spent two days at SPIE Photonics West this week catching up on the latest innovations in augmented reality (AR) and virtual reality (VR) displays. This particular conference may have been a bit too dry for even tech journalists to cover, so I’ll extract the most interesting bits for a less technical audience here.

Disclaimer: I am an adviser to several of the companies that presented. To keep this article balanced, I try to avoid endorsing or bashing companies by name. I will talk mostly about the tech: pros and cons. I’m also not an optics expert, though I’ve learned a bit by working with experts on early HoloLens and undisclosed projects at other companies. Some of my work was in scouting and evaluating technologies like these. However, I spent much more time considering the questions of what we should build and why.

What’s new with waveguides?

Waveguides have been the top technology option for most AR glasses thus far, as used in HoloLens, Magic Leap, and more. A waveguide is a mostly clear, thin piece of glass or plastic inside AR glasses that (almost) magically helps bend and combine light into your eye. This added light represents the virtual 3D objects you’re meant to see on top of the real world.

Companies designing AR glasses face a thousand challenges even after they solve the display issues. Most of the deep AR research and development companies tend to work with the more established players to build their glasses in original equipment manufacturer (OEM) style. Few try to go to market alone. But either way, there are some impressive new waveguides coming soon to a face near you. Having more display options is generally good news for the industry at this point.

Humans can typically see out to about 220 degrees horizontally using both eyes, but our peripheral perception (outside of the central 10 degrees) is very limited. Waveguides generally suffer from a limited field of view (FOV) and related issues due to fundamental physics constraints — mainly that the range of angles at which light can exit the guide is limited by the material used (technically its index of refraction, the same property that governs how prisms split white light into rainbows).

Refraction of light through a prism. Image: Wikimedia

For reference, the HoloLens 1 and Magic Leap waveguides have FOVs of around 30–50 degrees (measured diagonally), while the former Meta glasses (not a waveguide) offered around 90 degrees or more but were quite bulky as a result. HoloLens 2 significantly improved its vertical FOV but less so the horizontal.

The widest FOV waveguides I see nearing production today offer about 55 degrees (see WaveOptics and Lumus), and the visuals are impressive. It’s starting to feel like, “Yes, this is big enough to use.” Even 60–70 degrees may become possible in the next few years. The biggest breakthroughs are coming in new materials and more clever encoding schemes, shrinking what used to take six stacked waveguides down to one or maybe two.
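To make the physics constraint concrete, here’s a toy calculation of my own (a simplification, not from any presenter): light stays trapped in a waveguide only when it bounces more obliquely than the critical angle for total internal reflection, arcsin(1/n), so a higher-index material widens the range of angles the guide can carry, and with it the achievable FOV. The 75-degree grazing limit below is an assumed practical bound, not a hard figure.

```python
import math

def critical_angle_deg(n: float) -> float:
    """TIR critical angle at a glass/air boundary: theta_c = arcsin(1/n)."""
    return math.degrees(math.asin(1.0 / n))

# Usable in-guide bounce angles sit between the critical angle and a
# practical grazing limit (assumed ~75 degrees here). A higher-index
# material lowers the critical angle, widening that range.
GRAZING_LIMIT_DEG = 75.0

for n in (1.5, 1.8, 2.0):  # ordinary glass up to high-index wafers
    theta_c = critical_angle_deg(n)
    print(f"n = {n}: critical angle {theta_c:.1f} deg, "
          f"usable in-guide range ~{GRAZING_LIMIT_DEG - theta_c:.1f} deg")
```

Real designs involve much more (pupil expansion, separate color layers), but the trend matches the numbers above: more index, more angular headroom.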

A common trade-off with most if not all AR glasses comes in providing a large enough “eye-box” (the area near your eyes where you can correctly see the virtual imagery, about 15–20 millimeters wide) to accommodate the many different shapes and sizes of users, and to let the glasses slide around a bit as you move without losing the image. But a bigger eye-box traditionally comes at the expense of FOV because the two numbers are intricately linked.
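One rough way to see the link (a simplified conservation argument I’m assuming for illustration; real systems are more subtle) is to treat eye-box width times the sine of the half-FOV as a fixed budget for a given optic:

```python
import math

# Simplified conservation model (an assumption, for illustration only):
# eye-box width x sin(FOV/2) is roughly fixed for a given optical system.
def eyebox_mm(fov_deg: float, budget: float) -> float:
    return budget / math.sin(math.radians(fov_deg / 2.0))

# Calibrate the budget from the numbers above: ~15 mm eye-box at 50 deg FOV.
budget = 15.0 * math.sin(math.radians(25.0))

for fov in (30, 50, 70):
    print(f"{fov} deg FOV -> ~{eyebox_mm(fov, budget):.1f} mm eye-box")
```

Under that toy model, pushing from 50 to 70 degrees shrinks the eye-box from about 15 mm to about 11 mm, which is why wider-FOV designs have to claw back eye-box some other way.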

Image from Avegant’s presentation

One of the killer applications for AR that’s easy to imagine is “holographic telepresence” — think Star Wars holograms someday replacing phone calls.

However, if the FOV of your glasses is too small, then the person you’re talking to will be randomly cut off as you or they move, or virtually vivisected, which is quite distracting in practice.

If you want to deduce the ideal FOV for this use case, imagine standing a comfortable distance from a real person and measure the angles needed to fully contain them, head to toe and with their arms outstretched. I’d wager that 100–150 degrees FOV is going to be needed for this application. Some people may try this with a decent 55 degrees or less, and it might work as long as the virtual participants are far enough away to fit within the FOV.
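That thought experiment is easy to run with a little trigonometry. Assuming a person about 1.8 meters tall with a similar fingertip-to-fingertip span (my own rough numbers, purely illustrative):

```python
import math

def fov_needed_deg(extent_m: float, distance_m: float) -> float:
    """Angle subtended by an object of the given extent at the given distance."""
    return math.degrees(2.0 * math.atan(extent_m / (2.0 * distance_m)))

# ~1.8 m tall, ~1.8 m outstretched arm span (rough assumptions)
for d in (1.0, 1.5, 2.5):
    print(f"at {d} m: ~{fov_needed_deg(1.8, d):.0f} deg to frame the whole person")
```

At a conversational distance of about a meter you already need 80-plus degrees, and the person has to back off to roughly 2.5 meters before a 55-degree display can frame them head to toe.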

However, if you take the AR scenario of a technician virtually X-raying a car or building, 30–40 degrees might be enough today. Even if you had a much wider FOV to use, app designers might want to keep the virtual X-ray action limited to the center of vision so that we have the real world to “ground us” everywhere else.

Something that all waveguides (that I’m aware of) have in common is that the glass or plastic sheet sitting in front of the eye needs to be perfectly flat, which makes it difficult to create very thin and fashionable glasses. Eyewear is almost always curved in at least one or two dimensions even before prescriptions are added. On top of that, almost all waveguides need an “air gap” between layers to preserve the total internal reflection that keeps light bouncing inside the guide until it is meant to exit.

An Israeli company, Oorym, uses a clever method to glue a bit of film on top of a simple waveguide, which makes for easy and cheap high-res displays and eliminates the air gap between any layers. This makes it easier to embed these inside prescription lenses, for example.

Several companies, like Kura Tech and LetinAR, promise to erase the old FOV physics constraints by using something akin to reflective dots in the glasses in place of traditional waveguides. Both approaches may reach 100+ degrees, but neither is much beyond the proof-of-concept stage yet.

These and all of the waveguides I’m aware of require tiny projectors on the sides or top of the glasses, adding bulk and heat wherever they are housed. The biggest breakthrough will come in the form of micro-LED displays, which can be much smaller and lower-power than alternatives like digital micromirror devices (DMD), liquid crystal on silicon (LCOS), and organic light-emitting diodes (OLED), allowing these projectors to shrink in both size and heat.

The same features that magically bend light into your eyes also tend to bend light for other people observing you, causing the optical structures (mirrors, etc.) to appear to anyone else as colors or distortions. As Kelly Peng from Kura Tech pointed out, consumers will most likely prefer at least 90% clear and transparent waveguides so they can make natural eye contact throughout the day. Few people want to wear sunglasses indoors.

Fashion, lasers, and AR glasses

Besides the existing North Focals, which reflect low-power scanning lasers off custom-built prescription lenses, several other companies are approaching this from a fashion-first perspective. While Intel’s Project Vaunt never made it to launch, its patents live on. Bosch has its own, apparently different, way to scan a laser into your eye.

The fashion-first approach assumes we’ll wear these glasses all day long, incorporating our normal eyewear prescriptions and offsets. So these companies generally focus on everyday apps like weather, news, messages, and navigation. Telepresence and deeper immersion are still a bit down the road for these devices because their effective FOVs are all quite small.

The other notable constraint of the fashion-first approach is power. Batteries must fit in very small spaces. More importantly, these glasses must never feel hot to the wearer, meaning engineers and designers must think about power consumption down to low milliwatts. Devices like the HoloLens are still big enough to hide lots of heat-dissipation techniques (fans, heatsinks, or moving hot stuff off the face). Fashionable glasses have no room to hide this extra tech.
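A back-of-envelope budget shows why. All the numbers below are my own illustrative assumptions, not any vendor’s spec, but the conclusion is hard to escape: a cell that fits a temple arm stores well under a watt-hour, so all-day wear means average draws measured in tens of milliwatts.

```python
# Back-of-envelope power budget for fashion-first glasses.
# All numbers are illustrative assumptions, not any vendor's spec.
battery_mah, battery_v = 120, 3.7             # a tiny cell that fits a temple arm
budget_wh = battery_mah / 1000.0 * battery_v  # ~0.44 Wh total

for draw_mw in (30, 100, 300):
    hours = budget_wh * 1000.0 / draw_mw
    print(f"{draw_mw} mW average draw -> ~{hours:.1f} h of wear")
```

At 300 mW the glasses die before lunch; at 30 mW they last a waking day, which is the regime these teams have to design for.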

Darran Milne of VividQ did a good job of covering this area in his talk, building a pitch for his company’s holographic optical approach. VividQ’s approach also addresses how to make 3D imagery always appear naturally focused, as in the real world. The company’s big limitation, though, is that it prefers to make fairly static holograms, like logos or simple clock displays; fully dynamic holograms are still computationally expensive for it to run.

On the other hand, one company, C-Real, showed very dynamic holograms with natural focus cues and all, but its image quality is still what we’d call grainy. Interesting as this is, it will take a bit more time to perfect.

One thing missing in all the talk of lasers is the scientific documentation on laser safety for direct, all-day consumer use. By the numbers, their power output is low enough that you should not be harmed. They already adhere to laser safety regulations in place for the tiny lasers in optical mice, DVD players, and more. In fact, you would receive much less energy from these AR devices than from standing outside and looking away from the sun.

The underlying concern is that natural light ordinarily floods the cones and rods of our eyes fairly evenly. So can anyone yet tell us what happens when a normally safe amount of light is focused down to a fixed point for an extended period of time? It seems like a reasonable concern to address upfront. Given that anti-vaxxers are incensed by the minuscule amounts of aluminum in some vaccines (far less than they’d ingest from canned foods or drinks), how will we communicate the safety in the clearest and most persuasive way?

Foveation and rendering

While this was not a 3D graphics conference, presenters have talked about the need for “foveated rendering” for some years now. Several companies, such as Avegant and Varjo, are making good on that talk.

Foveation matches our eyes’ ability to see clearly only in the center of vision (the fovea); everywhere else, our vision is very limited. Image courtesy of the author.

Your foveal vision is so small that you can only see one or two words on this page clearly at any given time. Your brain is imagining that this whole page is just as clear. Read this story I wrote for Motherboard for much more detail on foveal vision and privacy issues around eye-tracking.

Most companies understand that usable AR will need to render text at high resolution — the equivalent of 6K–8K resolution over an entire FOV — though most also realize that this is only critical in the foveal center of vision.

Avegant’s eye-tracking includes very fast “optical flow” and camera-based features.

So future AR displays may legitimately claim to be 8K, while in reality that will only be true for the central 5–10% of the view. That’s fine, as long as we perceive it to be 8K everywhere.

Eye-tracking can help determine which 5–10% should be rendered at the highest resolution at any given moment, while allowing the low-res portion to cover as wide an FOV as possible. However, as Avegant pointed out, this eye-tracking must be super fast, or the foveation will seem wrong to the wearer.
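In code, the idea reduces to something like this toy sketch (my illustration, not any presenter’s pipeline): keep full resolution only in a window around the tracked gaze point and let the periphery stay cheap.

```python
import numpy as np

def foveated_composite(scene_hi: np.ndarray, gaze_yx: tuple,
                       downscale: int = 4, inset: int = 100) -> np.ndarray:
    """Toy foveated frame: full resolution only near the gaze point, a
    block-downsampled periphery everywhere else. Real systems re-render
    at two resolutions rather than post-processing one finished frame."""
    h, w, _ = scene_hi.shape
    low = scene_hi[::downscale, ::downscale]  # cheap peripheral image
    out = np.repeat(np.repeat(low, downscale, axis=0), downscale, axis=1)[:h, :w]
    y, x = gaze_yx
    y0, y1 = max(0, y - inset), min(h, y + inset)
    x0, x1 = max(0, x - inset), min(w, x + inset)
    out[y0:y1, x0:x1] = scene_hi[y0:y1, x0:x1]  # sharp fovea
    return out

frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
composited = foveated_composite(frame, gaze_yx=(540, 960))
```

The gaze point moves with every saccade, which is exactly why the eye-tracking latency Avegant highlighted matters so much.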

The secrets of occlusion

Ron Azuma, a legendary AR researcher, did a great job covering the latest in FOV advances as well as the question of occlusion, which is something I’ve spent a lot of time working on myself (see several MS patents) but rarely talk about. Occlusion is less about what to draw and more about what to hide.

Most approaches (other than video see-through) can only add light to your vision. If objects in the real world are brightly lit, you can still see those real objects behind the virtual objects, making the virtual bits seem ghost-like. Occluding virtual objects behind real ones is easy — just don’t draw those hidden virtual bits. But occluding real objects behind virtual ones will require a new way to block or cancel light bouncing off the real object to our eyes, per pixel and ideally at the same focal depth as the objects.
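The easy half really is a per-pixel depth test. Here’s a minimal sketch, assuming the headset supplies a depth map of the real world (the array names are mine, for illustration):

```python
import numpy as np

def mask_occluded_virtual(virtual_rgba: np.ndarray, virtual_depth: np.ndarray,
                          real_depth: np.ndarray) -> np.ndarray:
    """Hide virtual pixels that fall behind real geometry, given a per-pixel
    depth map of the real world from the headset's sensors. The reverse
    problem -- dimming real light behind virtual pixels -- has no equivalent
    trick on an additive display."""
    out = virtual_rgba.copy()
    out[real_depth < virtual_depth] = 0  # real surface in front: don't draw
    return out
```

The hard half, blocking real light per pixel at the right focal depth, is the part nobody has shipped in consumer glasses.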

The most common workaround today is to block some percentage of natural light everywhere in the FOV (e.g., HoloLens 1 was 85% opaque). This, unfortunately, diminishes natural human eye contact and our nice clean view of the world, again making it like wearing sunglasses everywhere.

Another common approach to overcome this limit is to blast more light from the AR display into your eyes, overpowering any natural light (except perhaps the sun). This works, but it effectively does the same thing as the global dimming described above, since our pupils naturally constrict to limit the total incoming light.

Favoring video see-through AR, a startup called Lynx demoed a headset that uses cameras and displays instead of allowing natural light to reach your eyes directly. It’s not the first company to try this, but it claims to be the first to be fully untethered. I didn’t get a chance to try it myself, but I have tried similar tech, like Varjo and others, which are still tethered to a beefy PC that does the rendering for them.

Two of the biggest challenges with any video see-through approach are 1) how much latency is added, and how much quality (dynamic range for color and shadows, natural focus, and depth of field) is lost by digitizing the natural light before re-displaying it; and 2) how to deal with the fact that the cameras need to be perfectly aligned with the eyes. Ideally they would sit inside the eyeballs themselves, which is physically impossible, so the light has to be bent toward the right viewpoint, either with bulky mirrors or via software reprojection.

Any remaining misalignment between cameras and eyes means we have to re-train our brains’ natural sense of proprioception (our sense of our own body’s position) to do simple things like eating popcorn or drinking coffee without missing our mouths. And when we remove the headset, we have to re-train ourselves again. So the question comes down to what trade-offs are made to make a product “good enough.”
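For a feel of what software reprojection involves, here’s a deliberately naive sketch of my own: back-project each camera pixel into 3D using its measured depth, translate by the assumed camera-to-eye offset, and re-project. Even this toy version exposes the catch: forward warping leaves holes that a real system must fill, and everything hinges on accurate per-pixel depth.

```python
import numpy as np

def reproject_to_eye(img: np.ndarray, depth: np.ndarray,
                     fx: float, fy: float, cx: float, cy: float,
                     offset: tuple) -> np.ndarray:
    """Naive forward warp from the camera's viewpoint toward the eye's.
    `offset` is the camera-to-eye translation in meters (assumed known);
    a pure translation with no rotation is assumed for simplicity."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    Z = np.maximum(depth, 1e-6)          # guard against zero depth
    # Back-project each pixel to a 3D point in the camera frame
    X = (xs - cx) * Z / fx
    Y = (ys - cy) * Z / fy
    # Shift into the eye frame
    Xe, Ye, Ze = X - offset[0], Y - offset[1], Z - offset[2]
    # Re-project and scatter; holes and depth collisions ignored for brevity
    u = np.round(Xe * fx / Ze + cx).astype(int)
    v = np.round(Ye * fy / Ze + cy).astype(int)
    out = np.zeros_like(img)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (Ze > 0)
    out[v[ok], u[ok]] = img[ys[ok], xs[ok]]
    return out
```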

No single company or research group presented a comprehensive solution for all-day consumer AR. That’s expected and totally okay, in my opinion. Displays are one important piece of the puzzle, as are spatial audio, input, and A.I. It’s important that companies take the time needed to get the technology safe, effective, and useful.
