IBM REDP-4285-00 User Manual
Chapter 4. Tuning the operating system 
change the configuration, you can use ethtool if the device driver supports it. For some 
device drivers, you might also have to change /etc/modules.conf.
4.7.3  MTU size
Especially in Gigabit networks, large maximum transmission units (MTU) sizes (also known 
as JumboFrames) may provide better network performance. The challenge with large MTU 
sizes is the fact that most networks do not support them and that there are a number of 
network cards that also do not support large MTU sizes. If your objective is transferring large 
amounts of data at gigabit speeds (as in HPC environments, for example), increasing the 
default MTU size can provide significant performance gains. In order to change the MTU size, 
use /sbin/ifconfig as shown in Example 4-18.
Example 4-18   Changing the MTU size with ifconfig
[root@linux ~]# ifconfig eth0 mtu 9000 up
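A change made with ifconfig is lost at the next reboot. To make it persistent, the MTU can also be set in the interface configuration file; the exact location is distribution-specific, and the path and contents below are an illustrative sketch for Red Hat-style systems, not taken from this chapter:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (Red Hat-style systems; path varies by distribution)
DEVICE=eth0
MTU=9000
```

On SUSE-style systems the equivalent setting typically lives under /etc/sysconfig/network/; consult your distribution's documentation for the exact file.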
4.7.4  Increasing network buffers
The Linux network stack is rather cautious when it comes to assigning memory resources to 
network buffers. In modern high-speed networks that connect server systems, these values 
should be increased to enable the system to handle more network packets.
- Initial overall TCP memory is calculated automatically based on system memory; you can 
  find the actual values in:
  /proc/sys/net/ipv4/tcp_mem
- Set the default and maximum amount for the receive socket memory to a higher value:
  /proc/sys/net/core/rmem_default
  /proc/sys/net/core/rmem_max
- Set the default and maximum amount for the send socket memory to a higher value:
  /proc/sys/net/core/wmem_default
  /proc/sys/net/core/wmem_max
- Adjust the maximum amount of option memory buffers to a higher value:
  /proc/sys/net/core/optmem_max
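The parameters above can be set persistently in /etc/sysctl.conf. The values in the following sketch are illustrative starting points only, not recommendations from this chapter; appropriate values depend on your workload and available memory:

```
# /etc/sysctl.conf -- example network buffer settings (illustrative values, tune for your workload)
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
net.core.optmem_max = 40960
```

Apply the settings without a reboot by running sysctl -p, or set an individual value at run time with, for example, sysctl -w net.core.rmem_max=16777216.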
Tuning window sizes
Maximum window sizes can be tuned by the network buffer size parameters described above. 
Theoretical optimal window sizes can be obtained by using BDP (bandwidth delay product). 
BDP is the total amount of data that resides on the wire in transit. BDP is calculated with this 
simple formula:
BDP = Bandwidth (bytes/sec) * Delay (or round trip time) (sec)
To keep the network pipe full and fully utilize the line, network nodes should have buffers 
large enough to store a BDP's worth of data. Otherwise, the sender has to stop sending data 
and wait for an acknowledgement from the receiver (refer to “Traffic control” on page 32).
For example, in a Gigabit Ethernet LAN with a 1 ms delay, the BDP comes to:
BDP = 125,000,000 bytes/sec * 0.001 sec = 125,000 bytes (approximately 122 KB)

Attention: For large MTU sizes to work, they must be supported by both the network 
interface card and the network components.