Cisco UCS C220 M4 Rack Server Data Sheet

TECHNICAL SPECIFICATIONS

Specification | Tesla K10 (a) | Tesla K20 | Tesla K20X
Peak double precision floating point performance (board) | 0.19 teraflops | 1.17 teraflops | 1.31 teraflops
Peak single precision floating point performance (board) | 4.58 teraflops | 3.52 teraflops | 3.95 teraflops
Number of GPUs | 2 x GK104s | 1 x GK110 | 1 x GK110
Number of CUDA cores | 2 x 1536 | 2496 | 2688
Memory size per board (GDDR5) | 8 GB | 5 GB | 6 GB
Memory bandwidth for board (ECC off) (b) | 320 GBytes/sec | 208 GBytes/sec | 250 GBytes/sec
GPU computing applications | Seismic, image, signal processing, video analytics | CFD, CAE, financial computing, computational chemistry and physics, data analytics, satellite imaging, weather modeling | Same as Tesla K20
Architecture features | SMX | SMX, Dynamic Parallelism, Hyper-Q | SMX, Dynamic Parallelism, Hyper-Q
System | Servers only | Servers and workstations | Servers only

(a) Tesla K10 specifications are shown as an aggregate of two GPUs.
(b) With ECC on, 12.5% of the GPU memory is used for ECC bits. So, for example, 6 GB total memory yields 5.25 GB of user-available memory with ECC on.
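
To make footnote (b) concrete, here is a minimal host-side sketch (not from the original datasheet) that reads the board's properties through the CUDA runtime API and applies the 12.5% overhead to a nominal 6 GB board; only cudaGetDeviceProperties and documented cudaDeviceProp fields are used, and the 6 GB figure is just the K20X value from the table above.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {   // device 0
        std::printf("No CUDA device found\n");
        return 1;
    }

    // Footnote (b): with ECC on, 12.5% of GPU memory holds ECC bits.
    const double eccOverhead = 0.125;
    const double nominalGB   = 6.0;          // e.g. Tesla K20X board memory

    std::printf("%s reports %.2f GB, ECC currently %s\n",
                prop.name,
                prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                prop.ECCEnabled ? "on" : "off");
    std::printf("Nominal %.1f GB board -> %.2f GB user-available with ECC on\n",
                nominalGB, nominalGB * (1.0 - eccOverhead));   // 6 GB -> 5.25 GB
    return 0;
}
```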

FEATURES AND BENEFITS

ECC memory error protection
Meets a critical requirement for computing accuracy and reliability in data centers and supercomputing centers. External DRAM is ECC protected in Tesla K10. Both external and internal memories are ECC protected in Tesla K20 and K20X.

System monitoring features
Integrates the GPU subsystem with the host system's monitoring and management capabilities such as IPMI or OEM-proprietary tools. IT staff can now manage the GPU processors in the computing system using widely used cluster/grid management solutions.
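
As an illustration of the kind of telemetry such tools pull from the GPU subsystem, the sketch below (not part of the datasheet) uses NVML, the management library that also backs nvidia-smi, to read temperature, power draw, and utilization for the first GPU; only public NVML calls are used.

```cuda
#include <stdio.h>
#include <nvml.h>   // link with -lnvidia-ml

int main(void)
{
    unsigned int temp, powerMilliwatts;
    nvmlDevice_t dev;
    nvmlUtilization_t util;

    if (nvmlInit() != NVML_SUCCESS) return 1;          // attach to the driver
    nvmlDeviceGetHandleByIndex(0, &dev);               // first GPU in the system

    nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
    nvmlDeviceGetPowerUsage(dev, &powerMilliwatts);    // reported in milliwatts
    nvmlDeviceGetUtilizationRates(dev, &util);         // percent busy, last sample

    printf("GPU 0: %u C, %.1f W, %u%% GPU / %u%% memory utilization\n",
           temp, powerMilliwatts / 1000.0, util.gpu, util.memory);

    nvmlShutdown();
    return 0;
}
```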

L1 and L2 caches
Accelerates algorithms such as physics solvers, ray-tracing, and sparse matrix multiplication where data addresses are not known beforehand.
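
A minimal sketch of the data-dependent addressing this refers to: a sparse matrix-vector product in CSR format, where which entries of x get read is only known once the column indices are loaded. The kernel below is an illustrative example, not NVIDIA code.

```cuda
// One thread per row of a CSR-format sparse matrix: y = A * x.
// The column indices are only known at run time, so the reads of x[col]
// are irregular and data-dependent -- the access pattern the on-chip
// L1/L2 caches help with.
__global__ void spmv_csr(int numRows,
                         const int *rowPtr, const int *colIdx,
                         const float *val, const float *x, float *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < numRows) {
        float sum = 0.0f;
        for (int j = rowPtr[row]; j < rowPtr[row + 1]; ++j)
            sum += val[j] * x[colIdx[j]];   // indirect, cache-friendly reuse of x
        y[row] = sum;
    }
}
```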

Asynchronous transfer with dual DMA engines
Turbocharges system performance by transferring data over the PCIe bus while the computing cores are crunching other data.
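
A sketch of the overlap pattern this enables: with two CUDA streams and pinned host buffers, copies issued with cudaMemcpyAsync run on the copy engines while kernels from the other stream execute. The h_in/h_out buffers and the process kernel are illustrative placeholders, not part of the datasheet.

```cuda
#include <cuda_runtime.h>

// Placeholder kernel standing in for the real per-chunk computation.
__global__ void process(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

// Ping-pong two streams so PCIe copies (handled by the DMA engines) overlap
// with kernel execution. h_in and h_out must be pinned host memory
// (cudaMallocHost) for cudaMemcpyAsync to actually run asynchronously.
void run_chunks(float *h_in, float *h_out, int chunks, int n)
{
    cudaStream_t s[2];
    float *d_buf[2];
    for (int i = 0; i < 2; ++i) {
        cudaStreamCreate(&s[i]);
        cudaMalloc(&d_buf[i], n * sizeof(float));
    }

    for (int c = 0; c < chunks; ++c) {
        int i = c % 2;   // alternate streams and device buffers
        cudaMemcpyAsync(d_buf[i], h_in + (size_t)c * n, n * sizeof(float),
                        cudaMemcpyHostToDevice, s[i]);
        process<<<(n + 255) / 256, 256, 0, s[i]>>>(d_buf[i], n);
        cudaMemcpyAsync(h_out + (size_t)c * n, d_buf[i], n * sizeof(float),
                        cudaMemcpyDeviceToHost, s[i]);
    }
    cudaDeviceSynchronize();   // wait for all chunks before using h_out

    for (int i = 0; i < 2; ++i) {
        cudaFree(d_buf[i]);
        cudaStreamDestroy(s[i]);
    }
}
```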

Flexible programming environment with broad support of programming languages and APIs
Choose OpenACC or the CUDA toolkits for C, C++, or Fortran to express application parallelism and take advantage of the innovative Kepler architecture.
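
For example, a complete SAXPY written with the CUDA C toolkit looks like the sketch below; the same loop could instead be left in plain C and annotated with an OpenACC directive such as #pragma acc parallel loop. This is an illustrative example rather than datasheet content.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// SAXPY (y = a*x + y) as a CUDA C kernel: one thread per element.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    std::printf("y[0] = %.1f (expected 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```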

SOFTWARE AND DRIVERS

> Software applications page: www.nvidia.com/teslaapps

> Tesla GPU computing accelerators are supported for both Linux and Windows. Server modules are only supported on 64-bit OSes, and workstation/desktop modules are supported for 32-bit as well.

> Drivers: NVIDIA recommends that users get drivers for Tesla server products from their system OEM to ensure that the driver is qualified by the OEM on their system. The latest drivers can be downloaded from www.nvidia.com/drivers

> Learn more about Tesla data center management tools at www.nvidia.com/object/softwarefor-tesla-products.html

> Software development tools are available at http://developer.nvidia.com/getting-started-parallelcomputing

To learn more about NVIDIA Tesla, go to www.nvidia.com/tesla

© 2012 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, Tesla, Kepler, and CUDA are trademarks and/or registered trademarks of NVIDIA Corporation. All company and product names are trademarks or registered trademarks of the respective owners with which they are associated. Features, pricing, availability, and specifications are all subject to change without notice. OCT12