I just saw my plane cross the mid-Atlantic, not by looking out the window, but by watching routing updates cascade across the Internet. I'm writing from a Lufthansa jet right now, travelling from Munich to Boston. This plane offers the (relatively) new Connexion by Boeing wifi + satellite Internet service. It's seriously cool stuff - high latency, but absolutely functional. I've been aware of it for a while since the Boeing folks did a NANOG presentation about it last year. But this is the first time I've been able to use it.
Renesys has been tracking Internet routing updates for a very long time. We set up realtime routing alerts that tell us when changes in the Internet's structure violate someone's routing or security policy. We have known that, thanks to satellite connectivity, the Internet routing tables could be used for tracking aircraft and the like. But this is the first time I've been on an Internet-connected vehicle, travelling at 950km/h, that changed its connection to the Internet mid-journey. If this interconnection architecture is adopted by others, it could signal the rise of all kinds of interesting uses of the global Internet for monitoring.
I was able to see the mid-Atlantic shift because the plane I'm on withdrew its routes from the European communications satellites and re-announced them in North America. The Boeing engineers faced some interesting challenges in designing this system. They wanted a wifi-delivered platform that was easy to use. They also wanted fully-functional connectivity. They were targeting business customers, so simple web connectivity was not enough: customers would want VPNs, ssh and all manner of connections to corporate applications. And finally, if this service was going to work properly, it would have to be as low-latency as possible, not just high bandwidth.
Most Internet users have heard about latency (almost always in the context of gaming) but don't really understand much about it. Latency is the delay in a single bit (or packet - on fast circuits there's not much queuing delay so there's not much difference) getting from one place to another and back. Latency is almost always the result of path selection and limitations of the speed of light. For example, if my best path to your server goes from NYC to London and back, then I will have a *minimum* latency of around 60ms and likely more like 80. Not terrible but not great.
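You can sanity-check that 60ms floor with some back-of-the-envelope physics. The numbers below are my assumptions, not from the post: a great-circle distance of roughly 5,570 km between NYC and London, and light travelling through fiber at about two-thirds of c.

```python
# Back-of-the-envelope check of the NYC<->London latency floor.
# Assumptions (mine): ~5,570 km great-circle distance, and light
# in optical fiber at roughly 2/3 of c (~200,000 km/s).
distance_km = 5_570          # NYC to London, great circle
fiber_speed_km_s = 200_000   # ~0.66c, typical for fiber

one_way_ms = distance_km / fiber_speed_km_s * 1000
rtt_ms = 2 * one_way_ms
print(f"one-way ≈ {one_way_ms:.0f} ms, round trip ≈ {rtt_ms:.0f} ms")
```

That comes out to roughly 56ms round trip for a perfectly straight fiber path, which is why ~60ms is a hard floor and real paths (which are never straight) land closer to 80.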
Large latencies impact all kinds of uses of the Net, including connection set-up, interactive typing or screen-refresh, and throughput. The main problem the Boeing engineers faced is that geostationary satellites (which maintain their position above a particular spot on the earth - almost all communications satellites fit this description) are really high up. In fact, they add at least 300ms of unidirectional latency all by themselves (that's aircraft->satellite->Europe).
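The altitude alone sets that floor. A quick calculation, assuming a geostationary altitude of 35,786 km and radio propagation at c, with the satellite directly overhead (real slant paths are longer, and processing adds more):

```python
# Why geostationary links are slow: the altitude alone sets a floor.
# Assumptions: satellite at 35,786 km, radio waves at c (~300,000 km/s),
# path straight up and down (real slant paths are longer).
altitude_km = 35_786
c_km_s = 300_000

leg_ms = altitude_km / c_km_s * 1000      # one leg: ground <-> satellite
one_way_ms = 2 * leg_ms                   # aircraft -> satellite -> ground
print(f"per leg ≈ {leg_ms:.0f} ms, aircraft->ground ≈ {one_way_ms:.0f} ms")
```

That's about 240ms one-way in the ideal geometry; slant angles and ground-station processing push it toward the ~300ms figure above.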
One simple architecture for the Connexion service would have been to put a single network operations center in one place and drag all the traffic from all the planes back to it. The problem is that this would add unacceptable latency. For example, if they located the network operations center in California (an obvious place to put it), trans-Pacific customers talking to a server in Europe would see a total of almost 600ms of unidirectional latency (300ms for the satellite hop, 130ms East Asia -> North America, 70ms across North America, 80ms North America -> Europe). That means a simple TCP connection (and every web session involves hundreds of them) would take 2 seconds to set up. This is muy malo.
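To make the arithmetic concrete, here's the sum of those legs, counting the full TCP three-way handshake (SYN, SYN-ACK, ACK) as 1.5 round trips:

```python
# Adding up the legs of the "everything through California" design,
# using the one-way figures from the paragraph above (in ms).
legs = {
    "aircraft -> satellite -> ground": 300,
    "East Asia -> North America": 130,
    "across North America": 70,
    "North America -> Europe": 80,
}
one_way_ms = sum(legs.values())     # ~580 ms one-way
rtt_ms = 2 * one_way_ms             # ~1160 ms round trip
handshake_ms = 1.5 * rtt_ms         # SYN, SYN-ACK, ACK = 1.5 RTTs
print(f"one-way {one_way_ms} ms, RTT {rtt_ms} ms, "
      f"handshake ≈ {handshake_ms:.0f} ms")
```

That works out to roughly 1.7-1.8 seconds just to open each connection, before a single byte of payload moves: the "2 seconds" above.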
So how did they solve it? They assigned a /24 (256 globally visible IP addresses) to each plane. They announce that network from the origin site (in my case, Europe, since I took off from Germany). When the plane is between the two satellites and in view of each, it is programmed to re-connect to the North American satellite. So traffic always gets to the ground as fast as it can, minimizing latency. In the example above, this strategy cuts the latency roughly in half: each connection set-up now takes about 1 second instead of 2. Granted, a 1-second set-up time is not fantastic, but it is perfectly usable.
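The flip side is that every re-announcement is globally visible, which is exactly what made the tracking possible. A toy sketch of the kind of alert involved: scan a stream of BGP updates and flag whenever a watched prefix changes origin. The update format, prefixes, and AS numbers here are all hypothetical, for illustration only (this is not Renesys's actual feed format).

```python
# Toy origin-change detector: given (prefix, origin_as) updates, yield a
# pair (old_origin, new_origin) each time the watched prefix moves.
# Prefixes and AS numbers below are made up (documentation/private ranges).

def origin_changes(updates, watched_prefix):
    """Yield (old_origin, new_origin) whenever the watched prefix re-homes."""
    last_origin = None
    for prefix, origin_as in updates:
        if prefix != watched_prefix:
            continue
        if last_origin is not None and origin_as != last_origin:
            yield (last_origin, origin_as)
        last_origin = origin_as

# Hypothetical feed: the plane's /24 re-homes from a "European" origin AS
# to a "North American" one mid-flight.
feed = [
    ("203.0.113.0/24", 64512),   # announced from the European ground station
    ("198.51.100.0/24", 64999),  # unrelated prefix, ignored
    ("203.0.113.0/24", 64512),   # routine re-announcement, same origin
    ("203.0.113.0/24", 64513),   # withdrawn in Europe, announced in N. America
]
print(list(origin_changes(feed, "203.0.113.0/24")))  # [(64512, 64513)]
```

A real system watches updates from many peering sessions at once and correlates them, but the core signal is just this: same prefix, new origin.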
When I found out I was going to be on a plane with the Connexion service, I was excited that I could finally try it out. It only costs $27 for the whole flight, which is clearly a good deal if you have some work (or blogging) to do. :-) As soon as I got connected, I woke up some of my colleagues at Renesys and got them to set BGP alarms on the network prefix for my plane. The addresses on the plane are all NATted (Network Address Translation: the customer laptops get private addresses from the ranges described in RFC1918), but the plane itself still has a globally visible /24 of address space. In my case this was 184.108.40.206/24.
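The private/public split is easy to check for yourself. Python's standard ipaddress module knows the RFC1918 ranges (the addresses below are examples of mine, not the plane's actual addressing):

```python
# Distinguish RFC1918 private addresses (what the laptops on board get,
# behind NAT) from globally routable ones. Example addresses only.
import ipaddress

for addr in ["10.0.0.5", "192.168.1.20", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    kind = "private (RFC1918)" if ip.is_private else "globally routable"
    print(f"{addr}: {kind}")
```

Only the globally routable /24 shows up in the worldwide BGP tables; the NATted addresses behind it are invisible to the rest of the Internet, which is why the alarms were set on the plane's prefix rather than on my laptop.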
About 2 hours west of Ireland, my connectivity froze for about two minutes. I had a ping running in the background and it just hung. I waited until connectivity was restored, reconnected to my screen session, and sure enough, colleagues at home reported a massive routing change associated with that network: Boeing had withdrawn the prefix from their European ground station and advertised it from the North American one. This showed up as a change-of-origin alert as well as a series of announcement and path-change alerts.
Here are screenshots from the Renesys Routing Intelligence application showing the routing update:
Fun stuff. Of course, I know the capabilities of Renesys's platform. I explain it to customers (and prospective customers) all the time. But there's a world of difference between that and seeing it detect my plane crossing the Atlantic Ocean. That is visceral and seriously cool.
We'll be landing in about an hour, so I'll proof and post this blog from the air. Because that's pretty cool, too. I seriously do wonder what else can be tracked via the global routing tables using this kind of approach. And what value that might have to people. Suggestions?