Hi All,
I have a situation here.
I have an EC pool that has a cache tier pool in front of it (the
cache tier is replicated with size 2).
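
(For reference, this is the usual tiering arrangement, created
roughly like this -- pool names and PG counts below are placeholders,
not my real values:)

> ceph osd pool create ecpool 128 128 erasure
> ceph osd pool create cachepool 128 128
> ceph osd pool set cachepool size 2
> ceph osd tier add ecpool cachepool
> ceph osd tier cache-mode cachepool writeback
> ceph osd tier set-overlay ecpool cachepool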
Had an issue on the pool: the crush map got changed after rebooting
some OSDs, and in any case I lost 4 cache tier OSDs.
Those lost OSDs are not really lost, they look fine.
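
(To find which PGs are affected and where their surviving copies
live, something like this helps -- 1.24 below is just an example PG:)

> ceph health detail
> ceph pg dump_stuck inactive
> ceph pg 1.24 query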
7. stop the osd
> service ceph-osd@18 stop
8. import the file using ceph-objectstore-tool
> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-18 \
>     --op import --file /tmp/recover.1.24
9. start the osd
> service ceph-osd@18 start
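
(For completeness: the /tmp/recover.1.24 file is the output of the
same tool's --op export, run earlier against an OSD that still held a
copy of PG 1.24, with that OSD stopped as well. Roughly like this --
osd 20 here is just a made-up example:)

> service ceph-osd@20 stop
> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-20 \
>     --op export --pgid 1.24 --file /tmp/recover.1.24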
This worked for me -- not sure if this is the best way or if I took
any extra steps that weren't needed.
I didn't test this with Luminous; I am still using Kraken.
But for a normal RBD workload, using EC with a cache tier does not
give good results at all.
This pool has been running for a few months with real users, and
while in testing it seemed somewhat OK and usable (still slow), with
a real working set it is very slow.
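
(If anyone wants to reproduce the effect: a small random-write rbd
bench against the tiered pool shows it clearly, since cache misses
cause whole-object promotions into the cache tier. Image name and
sizes below are just examples:)

> rbd create ecpool/benchimage --size 10240
> rbd bench-write ecpool/benchimage --io-size 4096 --io-threads 16 \
>     --io-pattern rand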