The company, however, has always been reticent about sharing the details of its architecture, a not-unreasonable precaution given that its entire business rests on these resources. According to Vahdat, Google began pioneering the use of software-defined networking (SDN), which decouples the control plane (which makes decisions about where traffic is sent) from the data plane (which actually forwards the packets), ten years ago. The company uses arrays of small, cheap switches to provide some of the same capabilities as much more expensive hardware, then manages workload distribution in software.
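The control/data-plane split described above can be sketched in a few lines. This is an illustrative toy, not Google's actual software: the class names and the table layout are assumptions chosen for clarity. The point is that the switch never computes a route; it only consults a table that a logically centralized controller installed.

```python
class Switch:
    """Data plane: forwards by table lookup only; contains no routing logic."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination address -> output port

    def forward(self, dst):
        # Pure lookup; unknown destinations are dropped (or, in a real
        # SDN switch, punted back to the controller).
        return self.flow_table.get(dst, "drop")


class Controller:
    """Control plane: holds the network-wide view and programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def install_route(self, switch_name, dst, port):
        # Routing decisions are made here, in software, then pushed down.
        self.switches[switch_name].flow_table[dst] = port


switches = {"s1": Switch("s1"), "s2": Switch("s2")}
ctrl = Controller(switches)
ctrl.install_route("s1", "10.0.0.2", "port3")

print(switches["s1"].forward("10.0.0.2"))  # port3
print(switches["s1"].forward("10.0.0.9"))  # drop (no rule installed)
```

Because the cheap switches only do lookups, all the intelligence (and all the upgrades) can live in the controller software, which is what makes arrays of commodity hardware viable.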
Right now, Google’s planetary network relies on a platform, codenamed Jupiter, that can provide more than 1 Pb/s of bisection bandwidth. That’s an important distinction, as the term refers to the bandwidth available between two equal halves of the network, not the total bandwidth of every link. As Google notes, that’s sufficient for 100,000 servers to exchange data at 10 Gb/s each, or to read the scanned contents of the Library of Congress in about 1/10 of a second.
Many of the “open” shifts in datacenter management and provisioning have been described as cost-saving measures, often with baleful glances cast at the likes of Cisco or Intel. That’s undoubtedly part of the underlying trend, but Google, Apple, or Amazon can easily afford to pay top-tier prices for premium hardware. Vahdat’s remarks reflect a different reality: one in which software provisioning and flexible design were required to compensate for the difficulty of scaling conventional network infrastructure to match the needs of data centers.
While they serve different purposes, much of the research into efficient datacenter communication, SDN, and faster, lower-power interconnects has ramifications for scientific computing and the HPC ecosystem. Moving bits for minimal power at maximum speed is as important to scientific clusters as it is to datacenters. Bringing down interconnect power is a high priority for DARPA, as well as firms like Intel and IBM.