metadata operations throughput. This is easily observed in the hourly File System Manager (FSM) statistics reports in the cvlog file. For example, here is a message line from the cvlog file:
PIO HiPriWr SUMMARY SnmsMetaDisk0 sysavg/350 sysmin/333 sysmax/367
This statistics message reports average, minimum, and maximum write 
latency (in microseconds) for the reporting period. If the observed 
average latency exceeds 500 microseconds, peak metadata operation 
throughput will be degraded. For example, create operations may be 
around 2000 per second when metadata disk latency is below 500 
microseconds. However, if metadata disk latency is around 5 
milliseconds, create operations per second may be degraded to 200 or 
worse.
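As a rough illustration only (this script is not part of the StorNext tools), the following Python sketch scans a cvlog file for these summary lines and flags reporting periods whose average write latency exceeds the 500 microsecond guideline. The log file path and the exact line layout are assumptions based on the example message above.

import re
import sys

# Matches summary lines like the example above:
#   PIO HiPriWr SUMMARY SnmsMetaDisk0 sysavg/350 sysmin/333 sysmax/367
# The exact layout is an assumption based on that single example.
PATTERN = re.compile(
    r"PIO HiPriWr SUMMARY\s+(\S+)\s+sysavg/(\d+)\s+sysmin/(\d+)\s+sysmax/(\d+)"
)

THRESHOLD_US = 500  # guideline from this section, in microseconds

def check_cvlog(path):
    """Print metadata disks whose average write latency exceeds the guideline."""
    with open(path) as log:
        for line in log:
            match = PATTERN.search(line)
            if not match:
                continue
            disk = match.group(1)
            avg, low, high = (int(g) for g in match.group(2, 3, 4))
            if avg > THRESHOLD_US:
                print(f"{disk}: sysavg {avg} us exceeds {THRESHOLD_US} us "
                      f"(sysmin {low}, sysmax {high}); metadata throughput may degrade")

if __name__ == "__main__":
    # Usage: python check_cvlog.py <path to cvlog>
    check_cvlog(sys.argv[1])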
Another typical write caching approach is a “write-through.” This 
approach involves synchronous writes to the physical disk before 
returning a successful reply for the I/O operation. The write-through 
approach exhibits much worse latency than write-back caching; therefore, 
small I/O performance (such as metadata operations) is severely 
impacted. It is important to determine which write caching approach is 
employed, because the performance observed will differ greatly for small 
write I/O operations.
In some cases, large write I/O operations can also benefit from caching. 
However, some SNFS customers observe maximum large I/O 
throughput by disabling caching. While this may be beneficial for special 
large I/O scenarios, it severely degrades small I/O performance; 
therefore, it is suboptimal for general-purpose file system performance.
RAID Read-Ahead Caching
RAID read-ahead caching is a very effective way to improve sequential 
read performance for both small (buffered) and large (DMA) I/O 
operations. When this setting is utilized, the RAID controller pre-fetches 
disk blocks for sequential read operations. Therefore, subsequent 
application read operations benefit from cache speed throughput, which 
is faster than the physical disk throughput.
This is particularly important for concurrent file streams and mixed I/O 
streams, because read-ahead significantly reduces disk head movement 
that otherwise severely impacts performance. 
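To quantify the effect of read-ahead on a given configuration, one simple approach is to time a large sequential read with controller read-ahead enabled and again with it disabled, then compare the results. The sketch below is a minimal, portable illustration; the test file path and transfer sizes are assumptions, and a dedicated benchmark tool may give more representative results for the DMA path.

import time

def sequential_read_mb_per_s(path, block_size=4 * 1024 * 1024, total_bytes=1 << 30):
    """Time a large sequential read and return throughput in MB/s.

    Run once with RAID read-ahead enabled and once with it disabled, then
    compare. buffering=0 disables Python-level buffering only; the operating
    system page cache can still affect results unless the test file is much
    larger than host memory or direct I/O is used.
    """
    bytes_read = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while bytes_read < total_bytes:
            chunk = f.read(block_size)
            if not chunk:
                break
            bytes_read += len(chunk)
    elapsed = time.monotonic() - start
    return (bytes_read / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    # Hypothetical test file on the SNFS mount point; substitute a real path.
    print(f"{sequential_read_mb_per_s('/stornext/snfs1/readahead_test'):.1f} MB/s")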
While read-ahead caching improves sequential read performance, it does 
not help random performance. Furthermore, some SNFS customers 
actually observe maximum large sequential read throughput by disabling 
caching. While disabling read-ahead is beneficial in these unusual cases,