
Data Center GPU coupled with Ice Lake Xeon


Machine learning and data analytics are two examples of data-hungry workloads. Enterprises require accelerated servers that are optimized for high performance to handle these compute-intensive tasks.

Intel’s new 3rd Gen Intel Xeon Scalable processors (code-named “Ice Lake”) are based on a new architecture that delivers a significant increase in performance and scalability. Enhanced with NVIDIA GPUs and networking, these systems are an ideal platform for enterprise accelerated computing, with features well suited to GPU-accelerated applications.

PCIe Gen 4 doubles the data transfer rate of the previous generation, matching the native speed of NVIDIA Ampere architecture-based GPUs such as the NVIDIA A100 Tensor Core GPU. This improves throughput to and from the GPU, which is critical for machine learning workloads that move large amounts of training data. It also speeds up data-intensive tasks such as 3D design on NVIDIA RTX Virtual Workstations accelerated by the powerful NVIDIA A40 data center GPU.
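
To get a rough sense of what that doubling means, the sketch below is a back-of-envelope calculation (not a benchmark) comparing the theoretical per-direction bandwidth of a x16 link at PCIe Gen 3 and Gen 4 signaling rates.

```python
# Back-of-envelope comparison of PCIe Gen 3 vs Gen 4 x16 bandwidth.
# These are theoretical per-direction maximums; real transfers incur
# additional protocol overhead, so measured numbers are lower.

LANES = 16

def x16_bandwidth_gbs(transfer_rate_gt_s: float) -> float:
    """Per-direction bandwidth in GB/s for a x16 link.

    PCIe Gen 3 and Gen 4 both use 128b/130b encoding, so each lane
    carries transfer_rate * 128/130 bits of payload per second.
    """
    bits_per_second = transfer_rate_gt_s * 1e9 * 128 / 130
    return bits_per_second / 8 / 1e9 * LANES

print(f"PCIe Gen 3 x16: {x16_bandwidth_gbs(8.0):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe Gen 4 x16: {x16_bandwidth_gbs(16.0):.1f} GB/s")  # ~31.5 GB/s

# Optional check of what a GPU actually negotiated (requires nvidia-smi):
#   nvidia-smi --query-gpu=pcie.link.gen.current,pcie.link.width.current --format=csv
```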

Faster PCIe performance also accelerates GPU direct memory access (DMA) transfers. Faster I/O of video data between the GPU and GPUDirect for Video-enabled devices makes for a powerful solution for live broadcast.
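
One simple way to see the effect of the faster link on plain host-to-device transfers is to time a copy. The sketch below is a minimal example assuming CuPy and an NVIDIA GPU are available; it uses pageable host memory for simplicity, so approaching the PCIe peak would additionally require pinned (page-locked) allocations.

```python
# Minimal host-to-device copy timing with CuPy (assumes an NVIDIA GPU
# and the CuPy package are installed).
import time
import numpy as np
import cupy as cp

size = 1 << 28                       # 256 MiB payload
host = np.ones(size, dtype=np.uint8)
dev = cp.empty(size, dtype=cp.uint8)

dev.set(host)                        # warm-up copy
cp.cuda.Stream.null.synchronize()

start = time.perf_counter()
dev.set(host)                        # host-to-device copy over PCIe
cp.cuda.Stream.null.synchronize()
elapsed = time.perf_counter() - start

print(f"H2D throughput: {size / elapsed / 1e9:.1f} GB/s")
```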

The NVIDIA ConnectX family of HDR 200Gb/s InfiniBand adapters and 200Gb/s Ethernet NICs, as well as the upcoming NDR 400Gb/s InfiniBand adapters, take full advantage of this higher data rate to sustain 200Gb/s networking speeds.
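
A rough way to see why Gen 4 matters for these adapters: a 200Gb/s port corresponds to about 25 GB/s of traffic per direction, which exceeds what a PCIe Gen 3 x16 slot can carry but fits within Gen 4 x16. The sketch below is the same back-of-envelope arithmetic, not a statement about any specific adapter’s host interface.

```python
# Does 200 Gb/s of network traffic fit in a x16 slot? Back-of-envelope only.
nic_line_rate_gbs = 200 / 8          # 200 Gb/s = 25 GB/s per direction
pcie_gen3_x16_gbs = 15.75            # theoretical per-direction maximum
pcie_gen4_x16_gbs = 31.5

for name, bw in [("Gen 3 x16", pcie_gen3_x16_gbs),
                 ("Gen 4 x16", pcie_gen4_x16_gbs)]:
    verdict = "sufficient" if bw >= nic_line_rate_gbs else "bottleneck"
    print(f"{name}: {bw:.2f} GB/s -> {verdict} for 200 Gb/s")
```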

The Ice Lake platform provides 64 PCIe Gen 4 lanes per socket, allowing more hardware accelerators – such as GPUs and networking cards – to be installed in the same server for a higher density of acceleration per host. It also means that, with the latest NVIDIA GPUs and NVIDIA Virtual PC software, higher user density can be achieved for multimedia-rich VDI environments.
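
As an illustration of that density, the sketch below tallies the lane budget of a hypothetical dual-socket Ice Lake server; the device mix is made up for the example and is not a reference configuration.

```python
# Hypothetical lane budget for a dual-socket Ice Lake server.
# The device counts below are illustrative, not a reference design.
lanes_per_socket = 64
sockets = 2
available = lanes_per_socket * sockets          # 128 PCIe Gen 4 lanes

devices = {
    "GPU (x16)": (4, 16),       # four A100/A40-class GPUs
    "NIC (x16)": (2, 16),       # two ConnectX adapters
    "NVMe (x4)": (4, 4),        # local NVMe storage
}

used = sum(count * lanes for count, lanes in devices.values())
print(f"Lanes used: {used} of {available} available")   # 112 of 128
```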

These improvements enable GPU acceleration at a scale not possible before. By using more GPUs within a host and connecting GPUs across multiple hosts more effectively, enterprises can take on their most demanding workloads.

Intel has also improved Ice Lake’s memory subsystem. The number of DDR4 memory channels has grown from six to eight, with memory speeds of up to 3,200 MT/s (DDR4-3200). This increases the bandwidth available for moving data from main memory to the GPU and the network, potentially raising throughput for data-intensive workloads.
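
The impact of the extra channels is easy to quantify: theoretical peak memory bandwidth per socket is channels × transfer rate × 8 bytes. The sketch below compares Ice Lake’s eight channels of DDR4-3200 with the previous generation’s six channels of DDR4-2933; these are architectural ceilings, and sustained bandwidth is lower.

```python
# Theoretical peak DRAM bandwidth per socket: channels * MT/s * 8 bytes.
# Sustained bandwidth is lower; this is the architectural ceiling only.

def peak_bandwidth_gbs(channels: int, transfer_rate_mts: int) -> float:
    return channels * transfer_rate_mts * 1e6 * 8 / 1e9   # GB/s

ice_lake = peak_bandwidth_gbs(channels=8, transfer_rate_mts=3200)
prev_gen = peak_bandwidth_gbs(channels=6, transfer_rate_mts=2933)

print(f"Ice Lake (8 x DDR4-3200): {ice_lake:.1f} GB/s")   # 204.8 GB/s
print(f"Prev gen (6 x DDR4-2933): {prev_gen:.1f} GB/s")   # 140.8 GB/s
print(f"Improvement: {ice_lake / prev_gen - 1:.0%}")      # ~45%
```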

Finally, the processor itself has improved in ways that help high-performance computing workloads. The 10-15% increase in instructions per clock can translate into as much as a 40% increase in performance for the CPU portion of accelerated workloads. The Platinum 8xxx-series parts also offer more cores, up to 40 per socket. This allows a higher density of virtual desktop sessions per host, so server GPU investments go further.
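
How much a faster CPU helps an already GPU-accelerated job depends on how much of the runtime the CPU portion occupies. The sketch below applies the standard Amdahl's law relation; the 30% CPU-side share and the 1.4x CPU-portion speedup are hypothetical numbers chosen only to illustrate the calculation.

```python
# Amdahl's law: end-to-end speedup when only the CPU portion gets faster.
# The workload split and the 1.4x CPU gain below are hypothetical.

def end_to_end_speedup(cpu_fraction: float, cpu_speedup: float) -> float:
    """Overall speedup when only the CPU part of the runtime speeds up."""
    return 1.0 / ((1.0 - cpu_fraction) + cpu_fraction / cpu_speedup)

cpu_fraction = 0.30   # hypothetical: 30% of wall time on CPU-side work
cpu_speedup = 1.40    # the up-to-40% CPU-portion gain cited above

print(f"End-to-end speedup: {end_to_end_speedup(cpu_fraction, cpu_speedup):.2f}x")
# -> 1.09x here, and larger as the CPU-bound share of the workload grows
```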

We’re excited to see partners already announcing new Ice Lake systems accelerated by NVIDIA GPUs, such as Dell Technologies’ Dell EMC PowerEdge R750xa, which is purpose-built for GPU acceleration, and new Lenovo ThinkSystem Servers, which are based on 3rd Gen Intel Xeon Scalable processors and PCIe Gen 4 and include several models with NVIDIA GPUs.

For enterprise customers looking to upgrade their data centers, Intel’s new Ice Lake platform paired with accelerator hardware is a great option. Our mutual customers will be able to quickly benefit from its architectural enhancements, running accelerated applications with better performance and at data center scale.
