Intel E1G44ET2 E1G44ET2BLK User Manual
Product codes
E1G44ET2BLK
Optimized for Virtualization
The Intel® Gigabit ET, ET2, and EF Multi-Port Server Adapters
showcase the latest virtualization technology called Intel®
Virtualization Technology for Connectivity (Intel® VT for
Connectivity). Intel VT for Connectivity is a suite of hardware
assists that improve overall system performance by lowering
the I/O overhead in a virtualized environment. This optimizes
CPU usage, reduces system latency, and improves I/O throughput.
Intel VT for Connectivity includes:
• Virtual Machine Device Queues (VMDq)
• Intel® I/O Acceleration Technology (Intel® I/OAT)
Use of multi-port adapters in a virtualized environment is very
important because of the need to provide redundancy and
data connectivity for the applications/workloads in the virtual
machines. Due to slot limitations and the need for redundancy
and data connectivity, it is recommended that a virtualized
physical server have at least six GbE ports to satisfy its
I/O demands.
Virtual Machine Device Queues (VMDq)
VMDq reduces I/O overhead created by the hypervisor in a
virtualized server by performing data sorting and coalescing in
the network silicon.
VMDq technology makes use of multiple
queues in the network controller. As data packets enter the
network adapter, they are sorted, and packets traveling to the
same destination (or virtual machine) get grouped together in a
single queue. The packets are then sent to the hypervisor, which
directs them to their respective virtual machines. Relieving the
hypervisor of packet filtering and sorting improves overall CPU
usage and throughput levels.
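The sorting behavior described above can be sketched conceptually in Python. This is an illustration of the idea, not an Intel API: packets are grouped by destination MAC address into per-VM queues, so the hypervisor dispatches whole queues instead of filtering every packet itself.

```python
# Conceptual sketch of VMDq-style packet sorting (illustrative only):
# group incoming packets by destination MAC into per-VM queues.
from collections import defaultdict

def sort_into_queues(packets):
    """Group packets by destination so each VM's traffic lands in one queue."""
    queues = defaultdict(list)
    for pkt in packets:
        queues[pkt["dst_mac"]].append(pkt)
    return dict(queues)

packets = [
    {"dst_mac": "aa:aa:aa:aa:aa:01", "payload": b"vm1-a"},
    {"dst_mac": "aa:aa:aa:aa:aa:02", "payload": b"vm2-a"},
    {"dst_mac": "aa:aa:aa:aa:aa:01", "payload": b"vm1-b"},
]
queues = sort_into_queues(packets)
for mac, q in queues.items():
    print(mac, [p["payload"] for p in q])
```

In hardware this grouping happens in the network silicon; the hypervisor then only has to hand each queue to its virtual machine.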
This generation of PCIe Intel® Gigabit adapters provides improved
performance with the next-generation VMDq technology, which
includes features such as loopback functionality for inter-VM
communication, priority-weighted bandwidth management, and
doubling the number of data queues per port from four to eight.
It now also supports multicast and broadcast data on a virtual-
ized server.
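Priority-weighted bandwidth management can be sketched as a weighted round-robin over queues. This is a conceptual illustration, not Intel's actual scheduler: queues with higher weights get proportionally more dequeue opportunities per scheduling round.

```python
# Conceptual sketch of priority-weighted bandwidth management
# (illustrative only): each queue may send up to `weight` packets
# per scheduling round, so higher-weight VMs get a larger share.
def weighted_round(queues, weights):
    """Dequeue up to `weight` packets from each named queue for one round."""
    sent = []
    for name, weight in weights.items():
        q = queues[name]
        for _ in range(min(weight, len(q))):
            sent.append((name, q.pop(0)))
    return sent

queues = {"vm1": ["p1", "p2", "p3"], "vm2": ["q1", "q2", "q3"]}
weights = {"vm1": 2, "vm2": 1}  # vm1 gets twice vm2's share per round
print(weighted_round(queues, weights))
# → [('vm1', 'p1'), ('vm1', 'p2'), ('vm2', 'q1')]
```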
Intel® I/O Acceleration Technology
Intel I/O Acceleration Technology (Intel I/OAT) is a suite of
features that accelerates data across the platform, from
networking devices to the chipset and processors, helping
to improve system performance and application response times.
The features include multiple queues, Direct Cache Access
(DCA), MSI-X, Low Latency Interrupts, Receive Side Scaling
(RSS), and others. Using multiple queues and receive-side
scaling, a DMA engine moves data using the chipset instead
of the CPU. DCA enables the adapter to pre-fetch data into
the processor cache, thereby avoiding cache misses and
improving application response times. MSI-X helps to
load-balance I/O interrupts across multiple processor cores,
and Low Latency Interrupts can provide certain data streams a
non-modulated path directly to the application. RSS directs the
interrupts to a specific processor core based on the application's
address.
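The RSS steering described above can be sketched conceptually: a hash of the flow's 4-tuple indexes an indirection table that maps the flow to a core, so all packets (and interrupts) of one flow land on the same processor core. Real hardware uses a Toeplitz hash; a CRC32 stands in here purely for illustration.

```python
# Conceptual sketch of Receive Side Scaling (illustrative only):
# hash the flow 4-tuple, then look up a core in an indirection table.
import zlib

NUM_CORES = 4
# A small indirection table spreading table entries across cores.
indirection_table = [i % NUM_CORES for i in range(128)]

def rss_core(src_ip, dst_ip, src_port, dst_port):
    """Map a flow's 4-tuple to a processor core via the indirection table."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    h = zlib.crc32(key)  # stand-in for the hardware Toeplitz hash
    return indirection_table[h % len(indirection_table)]

core = rss_core("10.0.0.1", "10.0.0.2", 12345, 80)
print("flow steered to core", core)
```

Because the hash is deterministic, every packet of the same flow is steered to the same core, preserving cache locality for the receiving application.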
Single-Root I/O Virtualization (SR-IOV)
For mission-critical applications, where dedicated I/O is required
for maximum network performance, users can assign a dedicated
virtual function port to a VM. The controller provides direct
VM connectivity and data protection across VMs using SR-IOV.
SR-IOV technology enables the data to bypass the software
virtual switch and provides near-native performance. It assigns
either physical or virtual I/O ports to individual VMs directly.
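On Linux, virtual functions are typically created through the PCI sysfs interface. The sketch below shows the idea; the PCI address is a placeholder, and actually applying the write requires an SR-IOV-capable adapter, a supporting driver, and root privileges, so the function defaults to a dry run.

```python
# Illustrative sketch of enabling SR-IOV virtual functions via the
# Linux sysfs interface. The PCI address below is a placeholder;
# dry_run=True only builds the equivalent shell command.
from pathlib import Path

def sriov_numvfs_path(pci_addr: str) -> Path:
    # Each SR-IOV physical function exposes a writable sriov_numvfs file.
    return Path(f"/sys/bus/pci/devices/{pci_addr}/sriov_numvfs")

def enable_vfs(pci_addr: str, num_vfs: int, dry_run: bool = True) -> str:
    path = sriov_numvfs_path(pci_addr)
    if dry_run:
        # On a real system, the write below performs this command's effect.
        return f"echo {num_vfs} > {path}"
    path.write_text(str(num_vfs))
    return f"wrote {num_vfs} to {path}"

print(enable_vfs("0000:01:00.0", 4))
# → echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs
```

Each virtual function created this way appears as a separate PCI device that can be assigned directly to a VM, bypassing the software virtual switch as described above.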