Is there a "known" **full** hardware configuration that someone can share
where they are "happy" with the Ceph performance? By "full", I mean the
full specs of the server node (including the SSDs purchased, hard disks bought, RAID
controller used, Ethernet card purchased, file-system type used, and OS used)
and
Is there a guide/blog/document describing how one can identify bottlenecks
in a Ceph cluster? For example, what if one node in my cluster has a slow
hard disk, CPU, or network -- is there an easy and reliable way to
trace what is causing Ceph's performance to be poor?
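For context, this is the sort of manual checking I have in mind so far; just a rough sketch, with the pool name "rbd" and "osd.12" as placeholders:

```
ceph health detail          # any slow requests or down OSDs?
ceph osd tree               # which OSDs live on which host
ceph osd perf               # per-OSD commit/apply latency; outliers stand out

# Benchmark a single suspect OSD directly:
ceph tell osd.12 bench

# Benchmark a whole pool from a client:
rados bench -p rbd 30 write

# On the suspect host, watch the disks themselves:
iostat -x 1
```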
Also, are there so
If a device name such as "/dev/sdd" changes to "/dev/sde" and Ceph was
already mapped to the old "/dev/sdd", how will Ceph react? For example,
would it get corrupted, notice a problem and remove that one OSD
from the cluster, or somehow automatically re-adapt?
FYI: I ask because we added a n
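My (possibly wrong) understanding is that this only matters if the OSD is mounted by the kernel device name rather than a persistent identifier. This is roughly how I have been checking; device names, mount points, and the UUID in the fstab comment are placeholders:

```
# Which device is each OSD data dir actually mounted from?
mount | grep /var/lib/ceph/osd

# Persistent identifiers that survive /dev/sdX renames:
blkid /dev/sdd1                 # filesystem UUID
ls -l /dev/disk/by-uuid/        # stable by-uuid symlinks
ls -l /dev/disk/by-id/          # stable by-id symlinks

# My assumption: if /etc/fstab references UUID=... instead of /dev/sdd1,
# the OSD mount keeps working after a rename, e.g.:
# UUID=3f8c...  /var/lib/ceph/osd/ceph-3  xfs  defaults,noatime  0 0
```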
When I use ceph-deploy to add a bunch of new OSDs (from a new machine), the
ceph cluster starts rebalancing immediately; as a result, the first couple of
OSDs start properly, but the last few can't start because I keep
getting a timeout problem, as shown here:
[root@ia6 ia_scripts]# service c
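Is the right workaround something like the following? This is only a sketch of what I'm considering, and I'm not certain these flags behave the way I think they do:

```
# Keep the cluster from moving data while the new OSDs come up:
ceph osd set noin        # new OSDs join the map but are not marked "in"
ceph osd set nobackfill  # pause backfill
ceph osd set norecover   # pause recovery

# ... add the OSDs with ceph-deploy and wait for them all to start ...

ceph osd unset noin
ceph osd unset nobackfill
ceph osd unset norecover
```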
I'd like to take daily snapshots (rbd snap create) of a large VM (~10 TB;
image format 2). The VM grows by about 50 to 100 GB per day. These
snapshots would serve as my daily backups. So far, the snapshots for small
VMs (e.g. 500 GB) take only a second or two and, from what I understand,
don't take
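For context, the daily cycle I have in mind looks roughly like this; the pool ("rbd"), image ("bigvm"), snapshot names, and backup path are all placeholders:

```
TODAY=$(date +%Y%m%d)
YESTERDAY=$(date -d yesterday +%Y%m%d)

# Take today's snapshot:
rbd snap create rbd/bigvm@$TODAY

# Export only the blocks that changed since yesterday's snapshot
# (the very first run would omit --from-snap to get a full export):
rbd export-diff --from-snap $YESTERDAY rbd/bigvm@$TODAY /backup/bigvm-$TODAY.diff

# Trim old snapshots once they are no longer needed:
# rbd snap rm rbd/bigvm@$YESTERDAY
```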