Looks like the journal SSD is broken. If it's still readable but not
writable, then you can run
ceph-osd --id ... --flush-journal
and replace the disk after doing so.
You can then just point the symlinks in
/var/lib/ceph/osd/ceph-*/journal to the new journal and run
ceph-osd --id ... --mkjournal
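For reference, the complete replacement sequence would look roughly like
this (a sketch; the OSD id 12 and the by-partuuid placeholder are
hypothetical, substitute your own):

# hypothetical example for OSD 12; stop the daemon before touching the journal
systemctl stop ceph-osd@12
ceph-osd --id 12 --flush-journal    # drain the journal to the data store
# repoint the journal symlink at the replacement partition
ln -sf /dev/disk/by-partuuid/<new-journal-uuid> /var/lib/ceph/osd/ceph-12/journal
ceph-osd --id 12 --mkjournal        # initialize the new journal
systemctl start ceph-osd@12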
I currently have two roots in my crush map, one for HDD devices and one for SSD
devices, and have had it that way since Jewel.
I am currently on Nautilus, and have had my crush device classes for my OSDs
set since Luminous.
> ID   CLASS   WEIGHT      TYPE NAME
> -13          105.37599   root ssd
> -11
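With device classes set, separate roots are no longer strictly necessary; a
rule can select a class directly from a single root. A minimal example (the
rule name replicated-ssd is made up):

ceph osd crush rule create-replicated replicated-ssd default host ssd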
I need to move a 6+2 EC pool from HDDs to SSDs while the storage remains
accessible. All SSDs and HDDs are within the same failure domains. The crush
rule in question is:
rule sr-rbd-data-one {
        id 5
        type erasure
        min_size 3
        max_size 8
        step set_chooseleaf_tries
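(The rule is truncated above; the full version can be dumped from the live
cluster with, e.g.:

ceph osd crush rule dump sr-rbd-data-one)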
Hello!
We have a small Proxmox farm with Ceph consisting of three nodes.
Each node has 6 disks, each with a capacity of 4 TB.
Only one pool has been created on these disks, with size 2/1 (size=2,
min_size=1).
In theory, this pool should have a capacity of 32.74 TiB (18 disks × 4 TB =
65.5 TiB raw, divided by 2 replicas).
But the ceph df command returns only 22.4 TiB (USED + MAX AVAIL).
On Mon, Sep 30, 2019 at 7:42 PM Frank Schilder wrote:
>
> and I would be inclined just to change the entry "step take ServerRoom class
> hdd" to "step take ServerRoom class ssd" and wait for the dust to settle.
yes
> However, this will almost certainly lead to all PGs being undersized and
>
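For reference, that change would be made by round-tripping the crush map with
the standard tools (file names here are arbitrary):

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# edit crush.txt: "step take ServerRoom class hdd" -> "step take ServerRoom class ssd"
crushtool -c crush.txt -o crush-new.bin
ceph osd setcrushmap -i crush-new.bin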
"ceph df" shows a worst-case estimate based on the current data
distribution; check "rados df" for more "raw" counts.
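To see the per-OSD imbalance behind that estimate, it can help to compare:

ceph osd df       # per-OSD utilization and variance
ceph df detail    # per-pool USED / MAX AVAIL breakdown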
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
Paul, thank you!
Do you mean this value?
total_space 75.3TiB
Could you tell me where I can read about the algorithm for calculating the
worst-case scenario?
rados df output:

POOL_NAME  USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND
DEGRADED  RD_OPS  RD  WR_OPS  WR
ala01vf01