Many AWS customers create high-performance systems that run across multiple EC2 instances and make good use of all available network bandwidth. Over the years, we have been working to make EC2 an ever-better host for this use case. For example, the CC1 instances were the first to feature 10 Gbps networking. Later, the Enhanced Networking feature reduced latency, increased the packet rate, and reduced variability. Through the years, the goal has remained constant: ensure that the network is not the bottleneck for the vast majority of customer workloads.
We've been talking to lots of customers and watching technology trends closely. Along with a short-term goal of enabling higher throughput by providing more bandwidth, we established some longer-term goals for the next generation of EC2 networking. We wanted to be able to take advantage of the increased concurrency (more vCPUs) found in today's processors, and we wanted to lay a solid foundation for driver support in order to allow our customers to take advantage of new developments as easily as possible.
I'm happy to be able to report that we are making great progress toward these goals with today's launch of the new Elastic Network Adapter (ENA) to provide even better support for high performance networking. Available now for the new X1 instance type, ENA provides up to 20 Gbps of consistent, low-latency performance when used within a Placement Group, at no extra charge!
Per our longer-term goals, ENA will scale as network bandwidth grows and the vCPU count increases; this will allow you to take advantage of higher bandwidth options in the future without the need to install newer drivers or to make other changes to your configuration, as was required by earlier network interfaces.
ENA Advantages
We designed ENA to work well in conjunction with modern processors, such as those found on X1 instances. Because these processors feature a large number of virtual CPUs (128 in the case of X1), efficient use of shared resources such as the network adapter is important. While delivering high throughput and great packets per second (PPS) performance, ENA minimizes the load on the host processor in a number of ways, and also does a better job of distributing the packet processing workload across multiple vCPUs. Here are some of the features that enable this improved performance:
- Checksum Generation – ENA handles IPv4 header checksum generation and TCP / UDP partial checksum generation in hardware.
- Multi-Queue Device Interface – ENA makes use of multiple transmit and receive queues to reduce internal overhead and to increase scalability. The presence of multiple queues simplifies and accelerates the process of mapping incoming and outgoing packets to a particular vCPU.
- Receive-Side Steering – ENA is able to direct incoming packets to the proper vCPU for processing. This reduces bottlenecks and increases cache efficiency.
All of these features are designed to keep as much of the workload off of the processor as possible and to create a short, efficient path between the network packets and the vCPU that is generating or processing them.
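On a running instance, the multi-queue setup is visible from the guest via `ethtool -l`. Here's a small sketch that extracts the queue count from a hypothetical `ethtool -l` listing; the interface name, the queue count, and the sample output are assumptions for illustration, and on an actual ENA instance you would pipe the live `ethtool` output instead:

```shell
# On an ENA instance you would run:  ethtool -l eth0
# Below, a hypothetical listing stands in for the live output; the
# actual queue count varies by instance size and driver version.
sample='Channel parameters for eth0:
Pre-set maximums:
Combined:	8
Current hardware settings:
Combined:	8'

# The last "Combined:" line reports the queues currently in use.
queues=$(printf '%s\n' "$sample" | awk '/Combined:/ {q=$2} END {print q}')
echo "queues in use: $queues"
```

With one queue per vCPU (up to the device maximum), interrupts and packet processing for each queue can land on a different vCPU, which is what spreads the workload as described above.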
Using ENA
To make use of ENA, you need to install our new driver and tag the AMI as having ENA support.
The new driver is available in the latest Amazon Linux AMIs and will soon be available in the Windows AMIs. The open source Linux driver is available in source form on GitHub for use in your own AMIs. Also, a driver for the Intel® Data Plane Development Kit (Intel® DPDK) is available for developers who are building network processing applications such as load balancers or virtual routers.
If you are creating your own AMI, you also need to set the enaSupport attribute when you register it. Here's how you do that from the command line (see the register-image documentation for a full list of parameters):
$ aws ec2 register-image --ena-support ...
You can still use the AMI on instances that do not support ENA.
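If you want to confirm the attribute after registration, describe-images returns an EnaSupport field. Here's a sketch that parses a sample response; the AMI ID and the JSON below are hypothetical stand-ins, and on a configured system you would pipe the output of the live `aws ec2 describe-images` call shown in the comment:

```shell
# On a configured system you would run:
#   aws ec2 describe-images --image-ids ami-12345678 \
#       --query 'Images[0].EnaSupport'
# Below, a hypothetical response stands in for the live call.
response='{"Images": [{"ImageId": "ami-12345678", "EnaSupport": true}]}'

# Pull out the EnaSupport flag; "true" means instances launched from
# this AMI will use the ENA driver on supported instance types.
printf '%s' "$response" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["Images"][0]["EnaSupport"])'
```
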
Going Forward
As noted earlier, ENA is available today for X1 instances. We are also planning to make it available for future EC2 instance types.
Jeff;