by Larry Chisvin, PLX VP of strategic initiatives
PCI Express, or PCIe, has been around since its introduction in 2003, but until recently it wasn't regarded as a viable general-purpose fabric. That's now changing, as we're seeing designers opt for PCIe as the main interconnect inside data center and cloud racks, with either Ethernet or InfiniBand connecting those racks together. The fact that these technologies coexist and complement one another gives rise to a brand-new application for PCIe, the fabric, which is bringing the technology to the forefront of data center architectures while refining the role of traditional interconnect solutions.
To give this some context, Ethernet has historically been the go-to scheme for most networking and connectivity needs, since it has momentum on its side and is well understood. Yet Ethernet's performance lags in many applications, primarily because of its high latency. InfiniBand, on the other hand, provides a low-latency, high-performance alternative to Ethernet, and its software base enables powerful systems to be deployed quickly and easily. However, InfiniBand is expensive and therefore tends to be used in a limited number of environments, those in which the better performance justifies the cost premium.
Another way of looking at it: if the application's main objective is simple connectivity, 40G or 100G Ethernet is a reasonable choice. If the highest performance is desired and cost isn't an issue, then InfiniBand makes more sense, especially at its enhanced data rates.
But while Ethernet and InfiniBand each have desirable attributes and an established track record connecting racks together, neither is well suited for interconnect within the rack. That distinction belongs to PCIe. The ideal data center has PCIe inside the rack, and either Ethernet or InfiniBand (or something else, such as the less commonly used Fibre Channel) connecting the racks together.
While PCIe may be less established as a rack-level fabric, it has been gaining interest as a powerful interconnect solution for data center and cloud equipment. Its emergence is fueled by a number of factors: 1) PCIe combines high performance with low latency; 2) almost every processor, storage device, and I/O device has a native PCIe connection, so component requirements, and their associated costs and power needs, are kept to a minimum; and 3) PCIe-based fabrics, such as PLX's award-winning ExpressFabric technology, now extend PCIe beyond the box.
ExpressFabric, which presently is being evaluated by top system makers around the world, extends the reach of PCIe by creating a powerful fabric for data center and cloud computing. It leverages existing hardware and software to create converged, multi-host, shared-I/O systems. With PCIe already native and dominant inside systems, ExpressFabric enables a PCIe-based box-to-box fabric within a rack, eliminating the need for expensive, power-hungry bridging devices (e.g., adapter cards) to translate PCIe to InfiniBand or Ethernet. ExpressFabric technology is presently targeted at small- to medium-sized cloud clusters, which is where the majority of today's high-volume innovation is taking place. SSD-based tiered storage, micro-servers, and high-end general-purpose GPU computing, for example, are ideal applications for ExpressFabric.
Looking forward, we envision PCIe-based fabrics in general, and ExpressFabric in particular, becoming more integral (front and center, you could say) to data center architectures, while coexisting rather nicely with Ethernet and InfiniBand.
