Right now, almost all of your “wireless” life is riding a glass thread thinner than a hair, stretched across the ocean floor. In this episode, we’ll dive under the waves, into orbit, and through the air to trace the very real hardware behind the so‑called cloud.
Ninety‑five percent of the world’s intercontinental data doesn’t fly through space; it crawls along the seafloor inside fiber‑optic cables that ships sometimes snag, earthquakes occasionally snap, and governments quietly map and monitor. Above that lattice of glass, a second internet rides the sky: dense constellations of low‑Earth‑orbit satellites like Starlink, line‑of‑sight microwave links hopping between towers, and the cellular and Wi‑Fi networks you actually touch. Each layer has a personality: fiber offers huge capacity and stability, satellites trade capacity and consistency for reach, and radio links navigate terrain and politics. In practice, your messages may start on home Wi‑Fi, jump to a nearby cell tower, dive into a terrestrial fiber backbone, plunge into a transoceanic cable, then surface through a satellite link in a remote region—an intricate route shaped by physics, cost, and control.
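To put rough numbers on that journey, here’s a toy latency budget for the hypothetical route just described; every figure is an illustrative ballpark, not a measurement of any real network:

```python
# Illustrative one-way latency budget for the hypothetical route above.
# Every figure is a rough ballpark, not a measurement of any real network.
legs_ms = {
    "home Wi-Fi to router":              2,
    "cellular hop to a nearby tower":   25,
    "terrestrial fiber backbone":       10,
    "transoceanic cable crossing":      65,
    "satellite link in a remote region": 30,
}

for leg, ms in legs_ms.items():
    print(f"{leg:<36}{ms:>4} ms")
print(f"{'total (one way)':<36}{sum(legs_ms.values()):>4} ms")
```

Notice how the ocean crossing dominates: the physics of distance, not your local gear, usually sets the floor.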
Together these layers behave less like a neat stack and more like a constantly shifting traffic system. Routing software is always recalculating the “shortest path,” but shortest can mean different things: the fewest milliseconds, the lowest cost, or the route most likely to survive a failure. Undersea repairs can take weeks, so operators may steer flows across entirely different oceans or through space. And because links are owned by competing companies and countries, business deals and treaties can matter as much as physics in deciding how your packets actually travel.
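To see how one network can have several “shortest” paths at once, here’s a toy sketch: a standard shortest‑path search (Dijkstra’s algorithm) over a made‑up four‑node topology, where swapping the metric from milliseconds to dollars flips the winning route. All link names and numbers are invented for illustration:

```python
import heapq

def shortest_path(graph, src, dst, metric):
    """Dijkstra's algorithm; `metric` selects which edge attribute to minimize."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, attrs in graph[node].items():
            nd = d + attrs[metric]
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    # Walk predecessors back from the destination to rebuild the path.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]

# Invented topology: "ms" = one-way latency, "cost" = relative transit price.
graph = {
    "Singapore": {"Marseille": {"ms": 85, "cost": 4}, "LEO": {"ms": 30, "cost": 9}},
    "Marseille": {"London": {"ms": 12, "cost": 1}},
    "LEO":       {"London": {"ms": 25, "cost": 8}},
    "London":    {},
}

print(shortest_path(graph, "Singapore", "London", "ms"))    # fastest: via satellite
print(shortest_path(graph, "Singapore", "London", "cost"))  # cheapest: via fiber
```

Interior routing protocols like OSPF really do run a shortest‑path search much like this one; between networks, BGP layers business policy on top, which is why deals and treaties shape routes as much as physics.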
In practice, the “where” of the physical internet matters as much as the “what.” Those glass strands don’t just wander randomly across the seafloor; they converge on a surprisingly small set of landing points and terrestrial hubs. Cities like London, Marseille, Singapore, and New York act as chokepoints where dozens of cable systems terminate and splice into massive data centers and internet exchange points (IXPs). These are the rooms—sometimes just a few racks in an office building—where hundreds of networks meet and swap traffic. If your message is crossing a border, there’s a good chance it’s passing through one of a few such buildings.
On land, long‑haul connections often track familiar infrastructure: railroad rights‑of‑way, highway medians, power‑line corridors. It’s cheaper and easier to get permits where construction already exists, and easier to maintain gear when you can drive a truck to it. That also means natural disasters and conflicts tend to hit clusters of connectivity at once: a single earthquake can break multiple buried fibers and disturb undersea routes near a coastline.
Because of this fragility, operators design layers of redundancy. A streaming company might pay for capacity on several distinct cable systems crossing different oceans, plus backup satellite or microwave contracts for emergencies. Financial trading firms, obsessed with shaving milliseconds, sometimes fund their own ultra‑direct terrestrial links or even specialized transoceanic routes that sacrifice capacity for slightly shorter paths.
Ownership adds another twist. Many subsea systems are consortia: telecoms, cloud giants, and sometimes governments share the cost of construction and rights to bandwidth. Others are fully controlled by a single company, giving that player leverage over pricing and, indirectly, over which regions become digital hubs. Some states now insist that at least one landing station per cable be under local control, partly for security, partly for bargaining power.
If this sounds a bit like a medical system where blood must pass through certain arteries and organs before reaching limbs, that’s not far off: block or constrict a major “artery” and the whole “body” of the internet has to compensate, rerouting flow through whatever alternative vessels exist, often at reduced performance.
A quick exercise: pick a city you care about—maybe your own—and look up where its nearest major cable landing station or internet exchange point is located, then check which companies operate it. You’ll start to see how physical geography and corporate decisions quietly shape what “fast” and “connected” mean in that place.
A concrete example: when an undersea cable near Egypt was damaged in 2023, video calls between Asia and Europe didn’t just “slow down”—traffic spilled onto alternate cables around the Cape of Good Hope and, for some routes, onto satellite links. Distance is delay: light in fiber covers roughly 200 kilometers per millisecond, so a detour of several thousand kilometers around Africa adds tens of milliseconds of round‑trip time. Like a hospital diverting ambulances when its ER is full, neighboring “organs” took the strain, but patients—in this case, your streams and Zooms—waited longer.
At the other end of the spectrum, Starlink ground stations are often tucked into ordinary‑looking facilities near strong terrestrial backbones. Your dish in a rural field may bounce traffic into space, but milliseconds later that same flow is intermingling with big‑city fiber streams and hopping across continents through carriers you’ve never heard of. In some remote Pacific islands, local ISPs blend a thin undersea cable with satellite capacity the way a clinic blends scarce specialist time with general practitioners—reserving the rarest, priciest “appointments” for critical uses like clinics, schools, and government links.
Future implications: As more finance, surgery, and city infrastructure depend on low‑lag links, routing choices start to look less like web traffic and more like air‑traffic control—priorities, queues, and no‑fail backstops. New space‑based backbones and denser edge hubs could cut delays enough for remote robots or AR contact lenses to feel local, while quantum‑safe encryption protects the links themselves against tomorrow’s codebreakers. But the same tools that dodge chokepoints can also harden digital borders, carving today’s shared network into rival “internets.”
As more layers stack on top of this physical mesh—edge data centers in small towns, sensor‑laden streets, even smart farm fields—the “distance” between you and your data can shrink to a few city blocks. The tradeoff: like adding new subway lines, every extension raises fresh questions about who funds it, who rides first, and who gets left waiting on the platform.
Here’s your challenge this week: run a “trace your packet” experiment from your own device. Use a traceroute tool (like `traceroute` on Mac/Linux or `tracert` on Windows, or an online traceroute site) to see the actual path your data takes to reach a site hosted on another continent, then copy the hop IPs into an online IP geolocation tool and map the rough physical route. Next, do a speed test on your home internet, look up the advertised bandwidth from your ISP, and calculate how close your real-world performance is to what they promise. Finally, write a short, plain-language explanation (2–3 sentences) you could say to a friend that describes where your data physically traveled and how long it realistically took compared to the “speed of light” in fiber.
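If you want help with that last calculation, here’s a minimal sketch using the same ~200 km‑per‑millisecond rule of thumb from the Egypt example (the route length and ping time below are placeholders; swap in your own numbers):

```python
# Light in a vacuum travels ~300,000 km/s; inside glass fiber it slows to
# roughly two-thirds of that, or about 200 km per millisecond (a rule of thumb).
FIBER_KM_PER_MS = 200.0

def fiber_floor_rtt_ms(route_km: float) -> float:
    """Theoretical minimum round-trip time over a fiber route of this length."""
    return 2 * route_km / FIBER_KM_PER_MS

# Placeholder values: replace with your own mapped route length and measured ping.
route_km = 9_000        # e.g., a rough New York -> Marseille path from your geolocated hops
measured_rtt_ms = 145   # e.g., the average RTT your ping or traceroute reported

floor = fiber_floor_rtt_ms(route_km)
print(f"Physics floor:  {floor:.0f} ms round trip")
print(f"You measured:   {measured_rtt_ms} ms "
      f"({measured_rtt_ms / floor:.1f}x the theoretical minimum)")
```

Real paths always come in above the floor: cables don’t run in straight lines, and every router hop adds queuing and processing time.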

