Cisco UCS C22 M3 Rack Server White Paper
© 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.
Scalability concerns arise in traditional client-server file systems as a result of inherent centralization. Ceph
decouples data and metadata operations by eliminating file allocation tables and replacing them with its
CRUSH (Controlled Replication Under Scalable Hashing) algorithm. Ceph uses object storage devices not just
for data access, but also for serialization, replication, and failure detection. Together, these features make
Ceph a compelling alternative to traditional storage.
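Because placement is computed by an algorithm rather than looked up in an allocation table, every client can independently derive where an object lives. The toy function below illustrates that idea with a plain hash; it is a simplified sketch, not the real CRUSH algorithm, which additionally weighs devices and respects failure-domain hierarchies:

```python
import hashlib

def place_object(obj_name: str, pool: str, osds: list, replicas: int = 3) -> list:
    """Toy stand-in for CRUSH: deterministically map an object to a set of
    OSDs by hashing its name, so any client can compute placement with no
    central lookup table. (Illustrative only; real CRUSH is more elaborate.)"""
    digest = hashlib.sha256(f"{pool}/{obj_name}".encode()).digest()
    start = int.from_bytes(digest[:8], "big") % len(osds)
    # Pick `replicas` distinct OSDs, walking the device list from the hashed start.
    return [osds[(start + i) % len(osds)] for i in range(replicas)]

osds = [f"osd.{i}" for i in range(6)]
placement = place_object("img-001", "rbd", osds)
# Every client computes the same answer from the same inputs.
assert placement == place_object("img-001", "rbd", osds)
```

The key property is determinism: given the same cluster map, all clients agree on an object's location without consulting any central metadata service.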
Figure 2 provides an overview of the Ceph architecture.
Figure 2. Ceph Architecture
Ceph Components
Ceph includes the following components:
● Data storage: The Ceph storage cluster receives data from Ceph clients. This data may come through a
Ceph block device, Ceph object storage, CephFS, or a custom implementation you create using librados.
The cluster then stores the data as objects. Each object corresponds to a file in a file system, which is
stored on an object storage device (OSD). Ceph OSD daemons handle the read and write operations on
the storage disks.
● Pools: The Ceph storage cluster stores data objects in logical partitions called pools. You can create pools
for particular data repositories, such as for block devices and object gateways, or simply to separate user
groups. From the perspective of a Ceph client, the storage cluster presents a simple interface: when a
client reads or writes data, it always connects to a storage pool in the Ceph storage cluster through an
I/O context.
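The pool-scoped I/O context pattern can be sketched with a small in-memory model. The method names below loosely mirror the shape of the librados Python bindings (`open_ioctx`, `write_full`, `read`), but this is an illustrative stand-in, not the real API:

```python
class ToyCluster:
    """In-memory stand-in for a Ceph cluster: data lives in named pools,
    and all client I/O goes through a pool-scoped I/O context."""
    def __init__(self):
        self._pools = {}

    def create_pool(self, name):
        self._pools[name] = {}

    def open_ioctx(self, pool_name):
        # Every read or write is bound to exactly one pool via its context.
        return ToyIoCtx(self._pools[pool_name])

class ToyIoCtx:
    """Pool-scoped I/O context: operations apply only to objects in its pool."""
    def __init__(self, pool):
        self._pool = pool

    def write_full(self, obj_name, data):
        self._pool[obj_name] = data

    def read(self, obj_name):
        return self._pool[obj_name]

cluster = ToyCluster()
cluster.create_pool("rbd")
ioctx = cluster.open_ioctx("rbd")      # client binds its I/O to one pool
ioctx.write_full("greeting", b"hello ceph")
assert ioctx.read("greeting") == b"hello ceph"
```

The design point is that clients never address the cluster as a whole; the pool named when opening the context scopes every subsequent operation.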
In a replicated storage pool, Ceph defaults to keeping three copies of each object (size = 3), with a minimum
of two clean copies (min_size = 2) required to acknowledge write operations. If the drives holding two of the
three copies fail, the data is preserved on the remaining copy, but write operations are blocked until recovery
restores at least the minimum number of clean copies.
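This size/min_size behavior can be simulated with a small toy model. It is a sketch of the policy described above, assuming the default values of three copies and a two-copy write minimum, not an implementation of Ceph's actual recovery machinery:

```python
class ReplicatedPool:
    """Toy model of a replicated pool with size=3, min_size=2 (Ceph defaults):
    writes are acknowledged only while at least min_size replicas are up;
    data survives as long as one replica remains readable."""
    def __init__(self, size=3, min_size=2):
        self.min_size = min_size
        self.replicas = {f"osd.{i}": {} for i in range(size)}
        self.failed = set()

    def fail_osd(self, osd):
        self.failed.add(osd)

    def up(self):
        return [o for o in self.replicas if o not in self.failed]

    def write(self, name, data):
        # Refuse the write when fewer than min_size clean copies are available.
        if len(self.up()) < self.min_size:
            raise RuntimeError("write blocked: fewer than min_size clean copies")
        for osd in self.up():
            self.replicas[osd][name] = data

    def read(self, name):
        for osd in self.up():
            if name in self.replicas[osd]:
                return self.replicas[osd][name]
        raise KeyError(name)

pool = ReplicatedPool()
pool.write("obj", b"payload")
pool.fail_osd("osd.0")
pool.fail_osd("osd.1")                    # two of the three copies are lost
assert pool.read("obj") == b"payload"     # data is still readable
blocked = False
try:
    pool.write("obj2", b"x")              # but new writes are refused
except RuntimeError:
    blocked = True
assert blocked
```

With one copy left, reads still succeed from the surviving replica, while writes stay blocked until recovery brings the clean-copy count back to min_size.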