Intel’s Romley Server Platform Launch: A Tipping Point for 10-Gigabit Ethernet

By Bruce Tolley


On March 6, 2012, Intel® launched the Xeon® E5-2600 family of processors, formerly code-named Romley. This is big news for data centers and networking. Server manufacturers big and small, from A to Z, will be shipping servers based on Romley. In other words, this platform will be the basis for server computing for the next two to three years.

Highlights

New integrated PCI Express 3 bus
The current PCIe® 2 bus for server I/O uses 8b/10b encoding and operates at 5 GT/s. On a PCIe x8 card, PCIe 2 can deliver a maximum bidirectional throughput of 20 Gbps per port. PCIe 3 doubles that bandwidth by raising the transfer rate 60%, from 5 to 8 GT/s, and raising the coding efficiency 23%, from 80% to 98.5%. PCIe 3 therefore enables a theoretical maximum bidirectional throughput of 40 Gbps per port on a dual-port PCIe x8 card. In short, Romley promises a significant increase in server I/O by enabling full-bandwidth, four-port 10GbE server adapters as well as dual-port 40GbE server adapters. For more detail on how PCIe 3 doubles the PCIe 2 bandwidth, see the table below and http://www.pcisig.com/news_room/faqs/pcie3.0_faq

                          PCIe 2    PCIe 3
Signaling rate per lane   5 GT/s    8 GT/s
Encoding                  8b/10b    128b/130b
Coding efficiency         80%       98.5%
Max bandwidth per lane*   4 Gb/s    ~8 Gb/s

*Per direction: signaling rate multiplied by coding efficiency.
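The arithmetic behind the table is simple enough to sketch in a few lines of Python (the helper function name is mine; the signaling rates and encoding figures are those from the table and the PCI-SIG specifications):

```python
def effective_bandwidth_gbps(transfer_rate_gt, payload_bits, total_bits, lanes=1):
    """Usable bandwidth per direction, in Gb/s: raw signaling rate times
    the line-code efficiency (payload bits / transmitted bits)."""
    return transfer_rate_gt * (payload_bits / total_bits) * lanes

# PCIe 2: 5 GT/s with 8b/10b encoding (80% efficient) -> 4.0 Gb/s per lane
pcie2_lane = effective_bandwidth_gbps(5, 8, 10)

# PCIe 3: 8 GT/s with 128b/130b encoding (~98.5% efficient) -> ~7.9 Gb/s per lane
pcie3_lane = effective_bandwidth_gbps(8, 128, 130)

# Scaled to an x8 slot, per direction:
print(f"PCIe 2 x8: {effective_bandwidth_gbps(5, 8, 10, lanes=8):.1f} Gb/s")
print(f"PCIe 3 x8: {effective_bandwidth_gbps(8, 128, 130, lanes=8):.1f} Gb/s")
```

Note how the jump from 8b/10b to 128b/130b encoding contributes almost as much to the doubling as the faster signaling rate does: an x8 slot goes from 32 Gb/s to roughly 63 Gb/s per direction.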

Performance
Romley offers IT managers building clusters and data centers a very big speed increase, arguably bigger than the standard Moore's law improvement. Intel states that Romley will deliver 80% more performance than the previous Westmere generation. Why is this good for networking? If IT managers and network architects can find the right applications (many pundits point to cloud, virtualization, and big data), those applications running on Romley servers can easily fill 10GbE pipes and, after the time delay noted below, 40GbE pipes. According to Crehan Research, in conjunction with the ramp of Romley-based servers, 10GbE port shipments are expected to become a majority of server ports by 2014, and to continue to increase as a portion of total ports through 2016. See http://www.crehanresearch.com/

Latency reductions
Unlike previous generations where I/O operations were managed by a different chip, the newly supported PCIe 3 functionality has been brought directly into the main processor.  This integration of I/O significantly reduces latency and improves data throughput. Latency matters to the early adopters who will pay real money for these servers, such as the high-frequency traders (HFT) in Chicago, London, and New York and high-performance compute (HPC) customers around the world.

Intel has also streamlined its caching architecture with a functionality called Intel Data Direct I/O (DDIO) that further reduces latency and increases energy efficiency and throughput. With DDIO, a PCIe server adapter (or LAN-on-motherboard controller) communicates directly with the processor's last-level cache rather than making a detour to system memory on ingress or egress.

Looking forward to the next 12 months
Romley will be a catalyst for a broad industry shift from 1 Gigabit to 10 Gigabit connections at the server access edge. But as with the transition from PCIe 1.x to 2.0, it takes time and focus to develop the PCIe 3 ecosystem. Behind the scenes, Intel and the PCIe community (OEMs, adapter vendors, chipset vendors, IP vendors, and test equipment companies) are doing a lot of heavy lifting to promote interoperability testing and plugfests. Based on vendor press releases, initial applications appear to be mostly storage, graphics, and InfiniBand PCIe cards. PCIe 3 support for 10GbE and 40GbE server adapter cards is expected to arrive in late 2012.

Bottom line
On the plus side, customers will see some amazing performance when plugging very fast PCIe 2 x8 10GbE server cards into Romley. With first deployments, we should see 10GbE switch-to-server attach rates increase dramatically and, over time, a move to 40GbE at the server access edge.

