2.7.2  Cluster Passwords
While user and group management is very much facilitated with C-SPOC, the 
password information still has to be distributed by some other means. If the 
system is not configured to use NIS or DCE, the system administrator still has 
to distribute the password information, that is, the contents of the 
/etc/security/passwd file, to all cluster nodes. 
As before, this can be done through rdist or rcp. On RS/6000 SP systems, 
tools such as pcp or supper are available to distribute information or, 
better, whole files.
2.7.3  User Home Directory Planning
As with user IDs, the system administrator has to ensure that users’ home 
directories are available in the same location at all times. Users do not 
care whether a takeover has taken place or everything is running normally. 
They simply want to access their files, wherever they may physically reside, 
under the same directory path and with the same permissions as they would 
on a single machine.
There are two approaches to this: you can either put the home directories 
on a shared volume and handle them within a resource group, or you can use 
NFS mounts.
2.7.3.1  Home Directories on Shared Volumes
Within an HACMP cluster, this approach is quite obvious; however, it 
restricts you to only one machine where a home directory can be active at 
any given time. If users need to access only one application, or all of the 
applications run on one machine with the second node serving as a standby 
machine only, this is sufficient.
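As an illustration, the home directory file system could be defined in 
/etc/filesystems on each cluster node with mount = false, so that it is 
mounted by HACMP as part of the resource group rather than at system boot. 
The logical volume and log names below are hypothetical:

   /home:
           dev   = /dev/homelv
           vfs   = jfs
           log   = /dev/homeloglv
           mount = false
           check = false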
2.7.3.2  NFS-Mounted Home Directories
The NFS-mounted home directory approach is much more flexible. Because the 
directory can be mounted on several machines simultaneously, a user can 
work with it in several applications on several nodes at the same time.
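A minimal sketch of such a mount in /etc/filesystems, assuming a 
hypothetical server homesrv that exports /export/home:

   /home:
           dev      = /export/home
           vfs      = nfs
           nodename = homesrv
           mount    = true
           options  = bg,hard,intr

The bg and hard options keep the client retrying in the background if the 
server is temporarily unreachable, rather than failing the mount outright.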
However, if one cluster node provides NFS service of the home directories 
to the other nodes, access to the home directories is lost when the NFS 
server node fails. Placing the home directories on a machine outside the 
cluster does not help either, since this again introduces a single point of 
failure, and machines outside the cluster are no less likely to fail than 
machines within it.