Dear all,
I am deploying ceph with ext4-formatted OSDs, and I use an SSD partition for
the OSDs' journals.
I want to set:
filestore flusher = true
filestore sync flush = true
filestore fsync flushes journal data = true
(filestore max sync interval = 5
filestore min sync interval = .01 )
The parameters of
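For reference, here is a minimal sketch of how I would place these in
ceph.conf, assuming the [osd] section is the right place for the filestore
settings:

[osd]
    filestore flusher = true
    filestore sync flush = true
    filestore fsync flushes journal data = true
    filestore max sync interval = 5
    filestore min sync interval = .01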
Hi folks,
We're setting up a ceph testing environment and having an issue where
ceph-deploy hangs on preparing drives. This is the output from ceph-deploy
-v osd prepare ceph_host:xvde1:/dev/xvdb2:
Preparing cluster ceph disks ceph_host:/dev/xvde1:/dev/xvdb2
Deploying osd to ceph_host
Host ceph_h
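For comparison, the form of the command that spells out both devices with full
paths would be something like this (same host and device names as above, just
a sketch):

    ceph-deploy -v osd prepare ceph_host:/dev/xvde1:/dev/xvdb2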
On Tuesday 07 May 2013 at 15:51 +0300, Dzianis Kahanovich wrote:
> I have 4 scrub errors (3 PGs with "found clone without head") on one OSD, and
> they are not repairing. How can I repair this without re-creating the OSD?
>
> Now it "easy" to clean+create OSD, but in theory - in case there are multiple
> OSDs - it may
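For context, a minimal sketch of the standard sequence for locating and
repairing inconsistent PGs, which in this case apparently did not clear the
errors (the PG id below is only a placeholder):

    ceph health detail      # lists the PGs flagged inconsistent
    ceph pg repair 2.1f     # ask the primary OSD to repair that PG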
From what I read, one solution could be "ceph pg force_create_pg", but
if I understand correctly it will recreate the whole PG as an empty one.
In my case I would like to create only the missing objects (empty, of
course, since the data is lost), so that I/O is no longer blocked "waiting
for missing object".
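For completeness, the syntax as I understand it is simply the following, with
the caveat above that the PG comes back empty (the PG id is just a placeholder):

    ceph pg force_create_pg 2.1f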
Hi,
Am I correct that placement groups without data have no impact on the
performance of a ceph cluster? In my case this concerns the pools data and rbd.
Thanks for the clarification.
http://ceph.com/docs/master/rados/operations/pools/#create-a-pool
When you create a pool, set the number of placement grou
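From that page, the basic form is the following; the pool name and PG counts
below are only example values:

    ceph osd pool create testpool 128 128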