server power consumption, checks it against the power cap goal, and, if necessary, adjusts server 
performance to maintain an average power consumption that is less than or equal to the power cap 
goal. This functionality is available on all Intel-based ProLiant server blades. 
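The brief describes power capping as a measure, compare, and adjust cycle but does not spell out the control algorithm. The Python fragment below is only a minimal sketch of such a loop, assuming hypothetical read_power_watts() and set_performance_state() helpers that stand in for the server's management firmware; it is not HP's actual implementation.

import time

def enforce_power_cap(cap_watts, read_power_watts, set_performance_state,
                      interval_s=1.0, p_states=range(0, 8)):
    # Start in the highest-performance state (p-state 0).
    state = min(p_states)
    while True:
        power = read_power_watts()            # averaged server power draw, in watts
        if power > cap_watts and state < max(p_states):
            state += 1                        # over the cap: throttle one step
        elif power < cap_watts * 0.9 and state > min(p_states):
            state -= 1                        # comfortable headroom: restore performance
        set_performance_state(state)
        time.sleep(interval_s)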
Using the Insight Power Manager (IPM) v1.10 plug-in to Systems Insight Manager v5.1, customers 
can set power caps on groups of supported servers. The IPM software statically allocates the group 
power cap among the servers in the group. The group cap is allocated equitably among all servers in 
the group based on a calculation using the idle and maximum measured power consumption of each 
server. In addition, IPM can track and graph over time the actual power usage of groups of servers 
and enclosures. This provides data center facilities managers with measured power consumption for 
various time periods, reducing the need to install monitored PDUs to measure actual power usage in 
data centers.  
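HP does not publish the allocation formula itself. The sketch below assumes one plausible reading of an "equitable" static allocation: each server receives its idle power plus a share of the remaining group budget in proportion to its dynamic range (maximum minus idle measured power). The function and the wattage figures are illustrative only.

def allocate_group_cap(group_cap_watts, servers):
    # servers maps a blade name to (idle_watts, max_watts) measured values.
    total_idle = sum(idle for idle, _ in servers.values())
    total_range = sum(mx - idle for idle, mx in servers.values())
    budget = group_cap_watts - total_idle     # watts available above everyone's idle draw
    caps = {}
    for name, (idle, mx) in servers.items():
        share = (mx - idle) / total_range if total_range else 0.0
        caps[name] = idle + budget * share
    return caps

caps = allocate_group_cap(
    900,
    {"blade1": (150, 450), "blade2": (120, 300), "blade3": (180, 500)},
)
# Result: each blade's cap falls between its idle and maximum measured power,
# and the per-blade caps sum to the 900 W group cap (assuming the group cap
# itself lies between the group's total idle and total maximum power).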
Interconnect options and infrastructure  
The BladeSystem enclosures make it easy to connect the ports of embedded devices to the 
interconnect bays. The c3000 Enclosure signal midplane (Figure 9) acts as a PCI Express (PCIe) bus 
connecting interconnect ports on blade devices to interconnect modules. It has eight device bay signal 
connectors (one for each half-height server blade and two for each full-height server blade) and four 
interconnect module connectors (one for each interconnect bay). The device connections are in groups 
of lanes. Each lane is a group of four pins (two transmit traces and two receive traces), providing 
full-duplex communication. A single lane provides a 1x (500 Mb/s) transfer rate; two lanes provide a 
2x (1 Gb/s) transfer rate. 
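As a rough worked example of the lane arithmetic above (using the 500 Mb/s per-lane figure quoted in this brief; actual link rates depend on the protocol in use), aggregate bandwidth and pin count simply scale with the number of lanes:

PINS_PER_LANE = 4      # two transmit traces plus two receive traces
MBPS_PER_LANE = 500    # 1x transfer rate cited above

def link_width(lanes):
    return {"width": "%dx" % lanes,
            "pins": lanes * PINS_PER_LANE,
            "mbps": lanes * MBPS_PER_LANE}

print(link_width(1))   # {'width': '1x', 'pins': 4, 'mbps': 500}
print(link_width(2))   # {'width': '2x', 'pins': 8, 'mbps': 1000}  -- i.e. 1 Gb/s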
 
Figure 9. Diagram of the HP BladeSystem c3000 signal midplane
 
 
 
Because these protocols use a similar four-wire differential transmit and receive mechanism, the signal 
midplane can support either network-semantic protocols (for example, Ethernet, Fibre Channel, and 
InfiniBand) or memory-semantic protocols (PCIe) over the same signal traces. Figure 10 illustrates 
how the physical lanes can be logically “overlaid” onto sets of four traces. Interfaces such as Gigabit 
Ethernet (1000BASE-KX) or Fibre Channel need only a 1x lane, that is, a single set of four traces. 
Higher-bandwidth interfaces, such as InfiniBand DDR, need up to four lanes.  
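To make the lane-to-trace mapping concrete, the short sketch below tabulates the lane widths mentioned above; the per-interface lane counts are those stated in this brief, and the trace counts follow from four traces per lane. It is illustrative only.

TRACES_PER_LANE = 4    # two transmit traces plus two receive traces per lane

interface_lanes = {
    "Gigabit Ethernet (1000BASE-KX)": 1,   # 1x, per the text above
    "Fibre Channel": 1,                    # 1x
    "InfiniBand DDR": 4,                   # up to 4x
}

for name, lanes in interface_lanes.items():
    print("%s: %dx link, %d midplane traces" % (name, lanes, lanes * TRACES_PER_LANE))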