How Google Earth (Really) Works

One of the original creators of Google Earth explains how it works

Avi Bar-Zeev · Published in OneZero · Jan 30, 2020


[Photo: Timothy A. Clary/Getty Images]

I originally posted a version of this story in 2007 and have added a few updates for 2020. If you're technically inclined, you may want to read the patents — Asynchronous Multilevel Texture Pipeline, Server for geospatially organized flat file data — that protect these ideas. [Note: Michael Jones, Chris Tanner, Phil Keslin, David Kornmann, John Hanke, and others contributed to Google Earth in different ways and currently work at Niantic (the makers of Pokémon Go).]

WWe’re going to proceed in reverse, strange as it may seem, from the instant the 3D Earth is drawn on your screen and later trace back to the time the data is served. I believe this will help explain why things are done as they are and why some other approaches don’t work nearly as well.

Part 1, The Result: Drawing a 3D Virtual Globe

There are two principal differences between Google Maps and Google Earth that inform how things should ideally work under the hood. The first is the difference between fixed-view (often top-down) 2D and free-perspective 3D rendering. The second is between real-time and prerendered graphics. These two distinctions are fading away as the products improve and converge. As of today, you can jump between 2D and 3D in the same webpage with just a click.

What both have in common is that they begin with traditional digital photography — lots of it: basically one giant high-resolution (or multiresolution) picture of the Earth. How they differ is largely in how they render that data.
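Before the numbers, it may help to see what a "multiresolution" picture looks like in practice. Here's a small sketch in Python (my own illustration; the 256-pixel tile size and power-of-two levels are assumptions for the example, not necessarily what Keyhole used) of an image pyramid, where level 0 is one coarse tile and each deeper level doubles the resolution:

```python
# Sketch of a multiresolution image pyramid: each level doubles the
# resolution of the one above it, quadrupling the tile count.
# The 256-pixel tile size is an illustrative assumption.

TILE_PX = 256

def pyramid_level(level: int) -> tuple[int, int, int]:
    """Return (tiles_across, total_tiles, width_px) at a pyramid level."""
    tiles_across = 2 ** level
    return tiles_across, tiles_across ** 2, tiles_across * TILE_PX

for level in (0, 5, 10, 15):
    across, total, width = pyramid_level(level)
    print(f"level {level:2}: {across:,} tiles across, "
          f"{total:,} tiles, {width:,} px wide")
```

The viewer can then draw from whichever level matches how far away the camera is, instead of ever touching one full-resolution image.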

Consider: The Earth is approximately 40,000 km around the equator. Whoever says it's a small world is being cute. If you stored only one pixel of color data for every square kilometer of surface, a whole-earth image (flattened out in, say, a Mercator projection) would be about 40,000 pixels wide and roughly half as tall. That's far more than most 3D graphics hardware can handle today: an image of 800 megapixels and at least 2.4 gigabytes uncompressed. PCs in 2000 had 56k modems and were only beginning to ship with GPUs for 3D rendering. Even in 2020, only high-end gaming PCs could handle this amount of information at once without some…
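To make the arithmetic concrete, here's a quick back-of-envelope sketch in Python (my own illustration, not code from Keyhole or Google Earth) that computes the size of a single flat earth image at a few resolutions, assuming uncompressed 24-bit RGB:

```python
# Back-of-envelope size of one flattened whole-earth image.
# Assumes ~40,000 km equatorial circumference, a layout twice as
# wide as it is tall, and 3 bytes per pixel (uncompressed RGB).

EQUATOR_KM = 40_000
BYTES_PER_PIXEL = 3

def whole_earth_image_size(meters_per_pixel: float) -> tuple[int, int, float]:
    """Return (width_px, height_px, gigabytes) for one flat image."""
    width = round(EQUATOR_KM * 1000 / meters_per_pixel)
    height = width // 2  # roughly half as tall as it is wide
    gigabytes = width * height * BYTES_PER_PIXEL / 1e9
    return width, height, gigabytes

for mpp in (1000, 100, 10, 1):  # 1 km per pixel down to 1 m per pixel
    w, h, gb = whole_earth_image_size(mpp)
    print(f"{mpp:>5} m/px: {w:,} x {h:,} px, ~{gb:,.1f} GB uncompressed")
```

At 1,000 meters per pixel you get the 40,000 × 20,000 image (2.4 GB) described above; at one meter per pixel, roughly the detail you'd want over a city, the same flat image balloons to about 2.4 petabytes. Numbers like that hint at why a single picture is the wrong mental model.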
