If the configuration node fails, the system chooses a new canister as the configuration node, and the new canister takes over the system IP addresses.

The system can be configured using either the IBM Flex System V7000 Storage Node management software (GUI) or the command-line interface (CLI).
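As an illustrative sketch, the CLI is typically reached over SSH using the system management IP; the user and address below are placeholders, and the exact syntax can vary by release:

   ssh superuser@<management_ip>
   lssystem

Here lssystem displays system-wide properties; the GUI exposes the same configuration tasks through a browser.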
2.5.5  RAID
The IBM Flex System V7000 Storage Node setup contains a number of internal disk drive objects known as candidate drives, but these drives cannot be directly added to storage pools. The drives must first be included in a Redundant Array of Independent Disks (RAID) grouping, which improves performance and provides protection against the failure of individual drives.
These drives are referred to as members of the array. Each array has a RAID level. Different 
RAID levels provide different degrees of redundancy and performance, and have different 
restrictions regarding the number of members in the array. 
IBM Flex System V7000 Storage Node supports hot spare drives. When an array member 
drive fails, the system automatically replaces the failed member with a hot spare drive and 
rebuilds the array to restore its redundancy (the exception being RAID 0). Candidate and 
spare drives can be manually exchanged with array members.
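As a sketch of how a hot spare might be designated from the CLI (the drive ID is a placeholder; exact syntax can vary by release):

   lsdrive
   chdrive -use spare 7

lsdrive lists each drive and its current use (member, candidate, or spare); chdrive changes candidate drive 7 into a hot spare.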
Each array has a set of goals that describe the desired location and performance of each array member. A sequence of drive failures and hot spare takeovers can leave an array unbalanced, that is, with members that do not match these goals. The system automatically rebalances such arrays when the appropriate drives are available.
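Assuming the lsarraymembergoals view is available in your release (a Storwize-family CLI command; the array name below is a placeholder), the member goals can be inspected directly:

   lsarraymembergoals mdisk3

This lists, for each member of array mdisk3, the drive location and performance goals that the rebalancing logic works toward.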
The available RAID levels are:
- RAID 0 (striping, no redundancy)
- RAID 1 (mirroring between two drives)
- RAID 5 (striping, can survive one drive fault)
- RAID 6 (striping, can survive two drive faults)
- RAID 10 (RAID 0 on top of RAID 1)
RAID 0 arrays stripe data across the drives. The system supports RAID 0 arrays with just one 
member, which is similar to traditional JBOD attach. RAID 0 arrays have no redundancy, so 
they do not support hot spare takeover or immediate exchange. A RAID 0 array can be 
formed by one to eight drives.
RAID 1 arrays stripe data over mirrored pairs of drives. A RAID 1 array mirrored pair is rebuilt 
independently. A RAID 1 array can be formed by two drives only.
RAID 5 arrays stripe data over the member drives with one parity strip on every stripe. RAID 5 
arrays have single redundancy. The parity algorithm means that an array can tolerate no more 
than one member drive failure. A RAID 5 array can be formed by 3 to 16 drives.
RAID 6 arrays stripe data over the member drives with two parity strips (known as the P-parity and the Q-parity) on every stripe. The two parity strips are calculated using different algorithms, which gives the array double redundancy. A RAID 6 array can be formed by 5 to 16 drives.
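For illustration (drive counts and sizes chosen arbitrarily): with N equal-capacity members, one drive's worth of space holds parity in RAID 5 and two drives' worth in RAID 6. Eight 900 GB drives therefore yield roughly (8 - 1) x 900 GB = 6.3 TB usable as RAID 5, and roughly (8 - 2) x 900 GB = 5.4 TB usable as RAID 6.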
RAID 10 arrays have single redundancy. Although they can tolerate one failure from every mirrored pair, they cannot tolerate two failures within the same mirrored pair. One member out of every pair can be rebuilding or missing at the same time. A RAID 10 array can be formed by 2 to 16 drives.
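As a hedged sketch of creating an array from the CLI (the pool name, drive IDs, and RAID level are placeholders; check your release's command reference for the exact syntax and options):

   mkarray -level raid5 -drive 0:1:2:3 mdiskgrp0
   lsarray

mkarray builds a four-drive RAID 5 array and places it in storage pool mdiskgrp0; lsarray then lists the system's arrays with their RAID levels and states.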