I am trying to install a fresh Ceph cluster on CentOS 8.
Using the latest Ceph repo for el8, the installation still fails because of
missing dependencies:
libleveldb.so.1 needed by ceph-osd.
Even after manually downloading and installing the
leveldb-1.20-1.el8.x86_64.rpm package, there are still dependency problems.
Quick & dirty solution if only one OSD is full (likely as it looks
very unbalanced): take down the full OSD, delete data, take it back
online
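A rough sketch of what that looks like with ceph-volume, assuming a BlueStore
OSD with a hypothetical id 12 on /dev/sdX (double-check the id and device
before running anything like this):

  ceph osd out 12                              # stop mapping new data to it
  systemctl stop ceph-osd@12                   # on the OSD host
  ceph osd purge 12 --yes-i-really-mean-it     # remove it from the CRUSH map
  ceph-volume lvm zap /dev/sdX --destroy       # wipe the old LVs and data
  ceph-volume lvm create --data /dev/sdX       # recreate an empty OSD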
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Leveldb is currently in epel-testing and should be moved to epel next
week. You can get the rest of the dependencies from
https://copr.fedorainfracloud.org/coprs/ktdreyer/ceph-el8/ and it works
fine. Hopefully everything will make it into epel eventually, but for
now this is good enough for me.
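For anyone following along, a minimal sketch of what that looks like on
CentOS 8, assuming dnf with the copr plugin (package names beyond leveldb and
ceph-osd may differ in your setup):

  dnf install -y epel-release dnf-plugins-core
  dnf --enablerepo=epel-testing install -y leveldb   # until it lands in epel proper
  dnf copr enable -y ktdreyer/ceph-el8               # remaining el8 dependencies
  dnf install -y ceph-osd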
Going to resurrect this thread to provide another option:
LVM cache, i.e. putting a cache device in front of the BlueStore LVM LV.
I only mention this because I noticed it in the SUSE documentation for SES6
(based on Nautilus) here:
https://documentation.suse.com/ses/6/html/ses-all/lvmcache.html
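For context, attaching an LVM cache to an existing OSD LV boils down to
something like the following; the VG/LV names (ceph-block-0, osd-block-0) and
the NVMe device are placeholders, and the SUSE page above has the
authoritative procedure:

  pvcreate /dev/nvme0n1
  vgextend ceph-block-0 /dev/nvme0n1        # add the fast device to the OSD's VG
  lvcreate --type cache-pool -L 100G -n cache0 ceph-block-0 /dev/nvme0n1
  lvconvert --type cache --cachepool ceph-block-0/cache0 ceph-block-0/osd-block-0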
My main problem with LVM cache was always the unpredictable
performance. It's *very* hard to benchmark properly even in a
synthetic setup, even harder to guess anything about a real-world
workload.
And testing out both configurations against a real-world setup is often
not feasible, especially as usage patterns change over time.
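To illustrate the benchmarking problem: with a cached LV the numbers depend
almost entirely on whether the working set fits into the cache. A hypothetical
fio run like the one below (against a scratch cached LV, names made up) looks
great while --size is smaller than the cache LV and drops to spinner speed
once it isn't:

  fio --name=randwrite --filename=/dev/vg_test/lv_cached \
      --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
      --iodepth=32 --time_based --runtime=60 --size=50G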
Hopefully someone can sanity check me here, but I'm getting the feeling that
the MAX AVAIL in ceph df isn't reporting the correct value in 14.2.8
(mon/mgr/mds are .8, most OSDs are .7)
> RAW STORAGE:
>     CLASS     SIZE      AVAIL     USED     RAW USED     %RAW USED
>     hdd       530 T
On Sat, Apr 11, 2020 at 12:43 AM Reed Dier wrote:
> That said, as a straw man argument, ~380GiB free, times 60 OSDs, should be
> ~22.8TiB free, if all OSDs grew evenly, which they won't
Yes, that's the problem. They won't grow evenly. The fullest one will
grow faster than the others. Also, your
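(For anyone sanity-checking: MAX AVAIL is computed from the fullest OSD
reachable by the pool's CRUSH rule, not from the sum of free space, so one hot
OSD drags the whole pool's value down. A rough way to see which OSD that is,
and to let the balancer even things out:)

  ceph osd df tree          # the worst %USE / VAR OSD is what bounds MAX AVAIL
  ceph balancer mode upmap
  ceph balancer on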
That definitely makes sense.
However, shouldn't a hybrid pool have 3x the available space of an all-SSD pool?
There is plenty of rust behind it that won't impede it being able to satisfy
all 3 replicas.
Example: let's say I write 5.6TiB (the current max avail)
to a hybrid pool; that's 5.6TiB to the ssd osds
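For reference, the kind of hybrid rule being discussed (primary copy on SSD,
remaining replicas on HDD) usually looks something like this in a decompiled
CRUSH map; the rule name and id here are made up:

  rule hybrid {
      id 5
      type replicated
      min_size 1
      max_size 10
      step take default class ssd
      step chooseleaf firstn 1 type host
      step emit
      step take default class hdd
      step chooseleaf firstn -1 type host
      step emit
  }

With a rule like that, every object still needs one copy on the ssd class, so
the pool's MAX AVAIL stays bounded by the SSD capacity even though the HDDs
have plenty of room for the other two replicas.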