Ethernet Alliance

Ethernet Alliance Blog

Terabits in a Rack and Petabits in the Data Center

By Scott Kipp

As the American scholar Albert Bartlett said, “The greatest shortcoming of the human race is our inability to understand the exponential function.”  Bartlett was mainly referring to population growth, but the IT industry is seeing unbelievable, exponential bandwidth growth today.  While human populations are growing at about 1%/year, bandwidths are growing at tens of percent per year. 

For example, server output bandwidth is growing on average at 41%/year [1].  That means servers are pushing twice as much data into the network roughly every two years.  This exponential growth is what is driving servers to move from Gigabit Ethernet (GbE) to 10GbE.  These 10GbE server connections are having an avalanche effect on the rest of the network in the data center.  Within the data center, core network bandwidth is expected to grow at 58%/year [1] – doubling every 18 months.  Exponential growth at more than 10% per year quickly adds up to astonishing numbers.

We’ve all heard about how the Internet is growing exponentially, but it is only growing at 32%/year [2].  OK, that was a joke because 32%/year is crazy fast growth, but it is still considerably slower than 58% growth.  The bandwidth within the data center is expected to grow faster than the bandwidth out to the Internet because there is much more server-to-server traffic within the data center than server-to-Internet traffic.  Switched bandwidth within the data center can also be orders of magnitude less expensive than routed bandwidth to the Internet.

Exponential growth is hard to imagine, as Bartlett alludes to.  To get a better understanding of the growth, I picked a few examples and show how bandwidth grows over a decade in Table 1.  The first row in the table shows how a server that pushes 1 Gbps in 2010 and grows at 41%/year would drive about 32 Gbps of bandwidth in 2020.  This shows how, by the end of the decade, 10GbE won’t be enough bandwidth for servers.  Ethernet networks have reached gigabit scale, so they now grow by gigabits per second each year.

Table 1: Examples of Exponential Bandwidth Growth

Traffic Type               Bandwidth in 2010   Bandwidth Growth/Year   Months to Double Bandwidth   Bandwidth in 2020
Server to Switch           1 Gbps              41%                     24                           32 Gbps
Switch to Switch           5 Gbps              58%                     18                           506 Gbps
Data Center to Internet    40 Gbps             32%                     30                           642 Gbps
Total IP Traffic           61 Tbps             32%                     30                           979 Tbps

The network is growing even faster as these server bandwidths are aggregated.  If a switch-to-switch link was pushing 5Gbps in 2010 and growing at 58%/year, then that link will be pushing 506 Gbps in 2020.  This type of growth makes 100GbE look slow. This growth is hard to imagine, but that is what the numbers tell us.  We can listen or not.

I show a couple of other examples in the table for traffic growth to the Internet.  If an Internet data center is delivering 40 Gbps in 2010 and growing at 32% per year, then the data center will need 642 Gbps by 2020.  Likewise, the aggregate bandwidth of the Internet was 61 Tbps in 2010.  If IP traffic continues to grow at 32%/year, then the Internet will push 979 Tbps of traffic in 2020.  These astronomical numbers are hard to imagine, but data center operators are getting used to them.
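
The math behind these figures is plain compound growth: bandwidth in 2020 equals the 2010 bandwidth multiplied by (1 + growth rate) raised to the number of years.  Here is a minimal Python sketch of that calculation (the function names and ten-year horizon are my own illustration); small differences from Table 1 come from rounding the growth rates and doubling times:

```python
# Compound-growth projections behind Table 1.
# Starting values and growth rates are taken from the table above.
import math

def project(start, rate_per_year, years=10):
    """Bandwidth after `years` of compound growth at `rate_per_year`."""
    return start * (1 + rate_per_year) ** years

def months_to_double(rate_per_year):
    """Months needed for bandwidth to double at a given annual growth rate."""
    return 12 * math.log(2) / math.log(1 + rate_per_year)

rows = [
    ("Server to Switch",        1,  "Gbps", 0.41),
    ("Switch to Switch",        5,  "Gbps", 0.58),
    ("Data Center to Internet", 40, "Gbps", 0.32),
    ("Total IP Traffic",        61, "Tbps", 0.32),
]

for name, start, unit, rate in rows:
    print(f"{name}: {project(start, rate):.0f} {unit} in 2020, "
          f"doubles every {months_to_double(rate):.0f} months")
```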

Many data center planners have gotten used to these giga-scale increases.  If they are deploying 100 servers with 10 Gbps connections, then they already have a terabit per second (Tbps) of aggregate bandwidth.  If each server delivers 25 Gbps of output later in the decade, then a rack with 40 servers can produce a Tbps of data.  If you have a thousand of these racks in a mega data center, then you’ll get a petabit per second (Pbps, or 1,000 Tbps) of bandwidth.  While few mega data centers are being deployed at this scale, lots of data centers can deploy hundreds of 10GbE servers and produce a Tbps of bandwidth within a few racks.  These astronomical numbers are the result of exponential growth.
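
To make the rack and data center totals concrete, here is the same arithmetic as a few lines of Python (the variable names are mine; the 40-server rack, 25 Gbps per server, and 1,000-rack figures are the scenarios described above):

```python
# Aggregate bandwidth for the rack and mega data center scenarios above.
servers_per_rack = 40
gbps_per_server = 25     # assumed per-server output later in the decade
racks = 1000             # a hypothetical mega data center

rack_tbps = servers_per_rack * gbps_per_server / 1000   # Gbps -> Tbps
data_center_pbps = racks * rack_tbps / 1000             # Tbps -> Pbps

print(f"Per rack:        {rack_tbps:.0f} Tbps")         # 40 x 25 Gbps = 1 Tbps
print(f"Per data center: {data_center_pbps:.0f} Pbps")  # 1,000 x 1 Tbps = 1 Pbps
```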

I’ll be giving a keynote at the “Terabit Optical and Data Networking Conference” on Thursday April 19th at 9:45am in Cannes, France.  The speech is titled “Terabits in a Rack and Petabits in a Data Center” and I’ll discuss how 10GbE servers can drive terabits of data from a rack and petabits (1,000 terabits) of data through the data center every second.  Imagine moving a petabit/s (Pbps) of bandwidth within the data center to scale applications to unprecedented levels.

I’ve also written a story for Network World titled “Exponential Growth in Bandwidth and Cost Declines”.  This article shows how the costs of networks will need to decline exponentially to maintain the growth rates that are being forecast.  Read the article online here.

Enjoy,

Scott Kipp

President of the Ethernet Alliance

PS.  Tom Palkert will also be representing the Ethernet Alliance and giving a presentation on 100GbE standards on Wednesday, April 18th at 2:05pm. 

  1. http://www.ieee802.org/3/100GCU/public/nov10/CFI_01_1110.pdf
  2. http://www.cisco.com/web/solutions/sp/vni/vni_forecast_highlights/index.html


The views and opinions expressed in this blog are solely that of the individual(s) and should not be considered the views or positions of the Ethernet Alliance.


Intel’s Romley Server Platform Launch: A Tipping Point for 10-Gigabit Ethernet

By Bruce Tolley

On March 6 Intel® launched the Xeon® E5-2600 family of processors, the heart of the server platform formerly code-named Romley. This is big news for data centers and networking. All the server manufacturers, big and small, from A to Z, will be shipping servers based on Romley. In other words, this platform will be the basis for computing for the next 2 to 3 years.

Highlights

New integrated PCI Express 3 bus
The current PCIe® 2 bus for server I/O uses 8b/10b encoding, operating at 5 GT/s.  On a PCIe x8 card, PCIe 2 can deliver a maximum bidirectional throughput of 20 Gbps per port.  PCIe 3 doubles the PCIe bandwidth by increasing the transfer rate 60%, from 5 to 8 GT/s, and increasing the coding efficiency 23%, from 80% to 98.5%. PCIe 3 therefore enables, on a dual-port PCIe x8 card, a theoretical maximum bidirectional throughput of 40 Gbps. In short, Romley promises a significant increase in server I/O by enabling full-bandwidth, four-port 10GbE server adapters as well as dual-port 40GbE server adapters. For more detail on how PCIe 3 doubles the PCIe 2 bandwidth, see the table below and http://www.pcisig.com/news_room/faqs/pcie3.0_faq
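
As a rough sanity check, the per-lane figures fall out of a one-line calculation: signaling rate multiplied by coding efficiency. Here is a minimal Python sketch (the x8 aggregation is my own illustration, not a claim about any particular adapter):

```python
# Effective PCIe throughput per lane = signaling rate (GT/s) x coding efficiency.
# Rates and encodings are the PCIe 2 / PCIe 3 figures discussed above.
generations = {
    "PCIe 2": (5.0, 8 / 10),     # 8b/10b encoding    -> 80% efficiency
    "PCIe 3": (8.0, 128 / 130),  # 128b/130b encoding -> ~98.5% efficiency
}

lanes = 8  # a PCIe x8 slot

for gen, (gt_per_s, efficiency) in generations.items():
    per_lane = gt_per_s * efficiency   # Gb/s per lane, per direction
    per_slot = per_lane * lanes        # Gb/s per direction across x8 lanes
    print(f"{gen}: {per_lane:.1f} Gb/s per lane, {per_slot:.0f} Gb/s per direction (x8)")
```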

 

                           PCIe 2     PCIe 3
Signaling rate per lane    5 GT/s     8 GT/s
Encoding                   8b/10b     128b/130b
Coding efficiency          80%        98.5%
Max bandwidth per lane*    4 Gb/s     ~8 Gb/s

Performance
Romley offers IT managers building clusters and data centers a very big speed increase, arguably bigger than the standard Moore’s law improvement. Intel states that Romley will deliver 80% more performance than the previous Westmere generation.  Why is this good for networking? If IT managers and network architects can find the right applications (many pundits point to cloud, virtualization, and big data), those applications running on Romley servers can easily fill 10GbE pipes and, after the time delay pointed out below, 40GbE pipes. According to Crehan Research, in conjunction with the ramp of Romley-based servers, 10GbE port shipments are expected to become a majority of server ports by 2014, and to continue to increase as a portion of total ports through 2016. See http://www.crehanresearch.com/

Latency reductions
Unlike previous generations, where I/O operations were managed by a separate chip, PCIe 3 functionality has been brought directly into the main processor.  This integration of I/O significantly reduces latency and improves data throughput. Latency matters to the early adopters who will pay real money for these servers, such as high-frequency traders (HFT) in Chicago, London, and New York and high-performance computing (HPC) customers around the world.

Intel has also streamlined its caching architecture with a feature called Intel Data Direct I/O (DDIO) that further reduces latency and increases energy efficiency and throughput.  With DDIO, a PCIe server adapter (or LOM controller) communicates directly with the processor’s last-level cache rather than making a detour through system memory on ingress or egress.

Looking forward to the next 12 months
Romley will be a catalyst for a broad industry shift from 1 Gigabit to 10 Gigabit connections at the server access edge.  But analogous to the transition from PCIe 1.x to 2.0, it takes time and focus to develop the PCIe 3 ecosystem. Behind the scenes, Intel and the PCIe community (OEMs, adapter vendors, chipset vendors, IP vendors, and test equipment companies) are doing a lot of heavy lifting to promote interoperability testing and plugfests. Based on vendor press releases, initial applications appear to be mostly storage, graphics, and InfiniBand PCIe cards. PCIe 3 support for 10GbE and 40GbE server adapter cards is expected to arrive in late 2012.

Bottom line
On the plus side, customers will see some amazing performance when plugging very fast PCIe 2 x8 10GbE server cards into Romley.  With first deployments, we should see 10GbE switch-to-server attach rates increase dramatically and, over time, a move to 40GbE at the server access edge.

Bruce Tolley
Vice President, Solutions Marketing
Solarflare
btolley@solarflare.com
iPhone 650.862.1074
www.solarflare.com


The views and opinions expressed in this blog are solely that of the individual(s) and should not be considered the views or positions of the Ethernet Alliance.


Ethernet on the Road

By John D'Ambrosia

Congratulations to the group of individuals who were successful at the IEEE 802 March Plenary in getting the IEEE 802.3 Reduced Twisted Pair Gigabit Ethernet PHY Study Group formed.  This effort seeks to define a new Gigabit Ethernet standard for automotive networks that would operate on fewer than the 4 pairs of UTP cabling currently defined.  This new standard would enable a paradigm shift to a centralized Ethernet-based backbone architecture for automobiles, which will help enable new high-bandwidth applications, such as on-board camera systems, sensors, and infotainment.  By 2019, this new standard could be the basis for anywhere from 200 million to 350 million ports per year.  Further port deployment could be enabled by applications targeting industrial and avionics networking.  To view Steve Carlson discuss the new activity in IEEE 802.3 taking Ethernet into the realm of automobiles, click here!

So what happens to the Ethernet ecosystem with this new port type?  Will this have an impact on the ever-growing, exponential bandwidth demand?  What services will be offered?  Are automobiles the next “cubicle” to worry about in developing enterprise networks?  All reasonable questions, and all possible development efforts that could drive another surge in industry activity.

From its inception, the Ethernet Alliance has espoused the philosophy that Ethernet goes from carriers to consumers.  Its very charter is to support the market expansion and continuing development of IEEE 802 Ethernet standards.  This latest effort represents another step in Ethernet’s continuing saga, and the Ethernet Alliance stands ready to support Ethernet, its members, and the industry.

John D’Ambrosia
Chair, Ethernet Alliance Board of Directors


The views and opinions expressed in this blog are solely that of the individual(s) and should not be considered the views or positions of the Ethernet Alliance.
