The failure detection rate varies with the characteristics of each network. For 
example, on an Ethernet network, the normal failure detection rate is two 
keepalives per second; fast is about four per second; slow is about one per 
second. For an HPS network, because no network traffic is allowed while a 
node is joining the cluster, the normal failure detection time is 30 seconds; 
fast is 10 seconds; slow is 60 seconds.
The Change / Show Topology and Group Services Configuration screen 
includes the settings for the length of the Topology Services and Group 
Services logs. The default settings are highly recommended. The screen also 
contains entries for heartbeat settings, but these are not operable (see the 
HACMP/ES Installation and Administration Guide, SC23-4284, Chapter 18). 
The heartbeat rate is now set for each network module in the corresponding 
screen (see above).
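If you prefer the command line, you can also list the configured network 
modules and their current settings. The following is a minimal sketch, 
assuming the HACMP/ES 4.x utility path:

   # List the configured network modules and their settings,
   # including the failure detection (heartbeat) parameters
   /usr/sbin/cluster/utilities/cllsnim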
To learn more about Topology and Group Services, see Chapter 32 of the 
HACMP/ES Installation and Administration Guide, SC23-4284. 
5.4  NFS considerations
For NFS to work correctly in an HACMP cluster environment, you must take 
some special NFS characteristics into account. 
The HACMP scripts provide only minimal NFS support. You may need to 
modify them to handle your particular configuration. The following sections 
contain some suggestions for handling a variety of issues.
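For example, a common modification is to re-export a shared file system 
from a post-event script after a takeover. The following is a minimal sketch; 
the mount point /sharedfs is a hypothetical example:

   # Re-export the shared file system after takeover, ignoring
   # /etc/exports (/sharedfs is an example mount point)
   exportfs -i /sharedfs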
5.4.1  Creating Shared Volume Groups
When creating shared volume groups, you can normally leave the Major 
Number field blank and let the system provide a default for you. However, 
unless all nodes in your cluster are identically configured, this default will 
cause problems with NFS in an HACMP environment, because the system 
uses the major number as part of the file handle that uniquely identifies a 
Network File System. 
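To avoid this, pick a major number that is free on every node and specify it 
explicitly. The following is a minimal sketch, assuming a hypothetical volume 
group named sharedvg on disk hdisk2 and a major number of 60 that is free 
on all nodes:

   # On each node, list the major numbers that are still free;
   # choose one (here, 60) that is available everywhere
   lvlstmajor

   # On the node that owns the disk, create the volume group
   # with the explicit major number
   mkvg -y sharedvg -V 60 hdisk2

   # On every other cluster node, import the volume group with
   # the same major number so the NFS file handles match
   importvg -y sharedvg -V 60 hdisk2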
In the event of a node failure, NFS clients attached to an HACMP cluster 
behave exactly as they do when a standard NFS server fails and reboots. If 
the major numbers are not the same, then when another cluster node takes 
over the file system and re-exports it, the client application will not recover, 
because the file system exported by the takeover node appears to be 
different from the one exported by the failed node.
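You can verify that the major numbers match by examining the volume 
group device entry on each node (sharedvg is the hypothetical name from 
the sketch above):

   # The major number appears in the ls -l output for the volume
   # group device; it must be identical on every cluster node
   ls -l /dev/sharedvg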