Are you using KRBD or librbd? I remember seeing a similar issue when we
were using KRBD; switching to librbd fixed it. Could be something else
though.
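For what it's worth, a quick way to check whether any images are kernel-mapped on a given host is to look for /dev/rbdN block devices (a sketch; the `rbd showmapped` alternative assumes the rbd CLI and cluster access on that host):

```shell
# Kernel (krbd) mappings show up as /dev/rbdN block devices; librbd
# access is userspace-only (e.g. qemu linked against librbd) and
# leaves no /dev/rbd device behind.
lsblk -o NAME,SIZE,TYPE 2>/dev/null | grep -i '^rbd' \
    || echo "no krbd-mapped devices on this host"

# With the rbd CLI and cluster access, this lists kernel mappings
# directly (pool, image, snapshot, device):
# rbd showmapped
```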
Regards,
Bailey Allison
Service Team Lead
45Drives, Ltd.
866-594-7199 x868
On 2025-11-03 08:55, Roland Giesler wrote:
On 2025/11/03 14:26, Roland Giesler wrote:
The next step would be to check for an MTU/jumbo-frame mismatch.
After checking all interfaces, I can confirm no jumbo frames are in
use. Everything runs at the default MTU of 1500.
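In case it helps anyone checking the same thing, the 1500 MTU can be verified end-to-end with a don't-fragment ping (a sketch; the interface and peer address below are placeholders, substitute your cluster-network interface and an OSD host):

```shell
IFACE=lo            # placeholder: your cluster-network interface
PEER=127.0.0.1      # placeholder: an OSD host on the cluster network

# Configured MTU on the local interface:
ip link show "$IFACE" | grep -o 'mtu [0-9]*'

# Largest ICMP payload that fits in a 1500-byte MTU:
# 1500 - 20 (IP header) - 8 (ICMP header) = 1472 bytes.
PAYLOAD=$((1500 - 20 - 8))

# -M do sets the don't-fragment bit, so a path-MTU mismatch shows up
# as "Message too long" instead of being hidden by fragmentation.
ping -c 1 -M do -s "$PAYLOAD" "$PEER" \
    || echo "path MTU to $PEER is smaller than expected"
```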
Some details: ceph version 17.2.7
# ceph osd tree
ID   CLASS  WEIGHT    TYPE NAME          STATUS  REWEIGHT  PRI-AFF
 -1         62.29883  root default
 -3         15.17824      host FT1-NodeA
  2    hdd   1.86029          osd.2      up       1.00000  1.00000
  3    hdd   1.86029          osd.3      up       1.00000  1.00000
  4    hdd   1.86029          osd.4      up       1.00000  1.00000
  5    hdd   1.86589          osd.5      up       1.00000  1.00000
  0    ssd   0.93149          osd.0      up       1.00000  1.00000
 28    ssd   3.30690          osd.28     up       1.00000  1.00000
 29    ssd   3.49309          osd.29     up       1.00000  1.00000
 -7         16.10413      host FT1-NodeB
 10    hdd   1.86029          osd.10     up       1.00000  1.00000
 11    hdd   1.86029          osd.11     up       1.00000  1.00000
 26    hdd   1.86029          osd.26     up       1.00000  1.00000
 27    hdd   1.86029          osd.27     up       1.00000  1.00000
  6    ssd   0.93149          osd.6      up       1.00000  1.00000
  7    ssd   0.93149          osd.7      up       1.00000  1.00000
 25    ssd   3.49309          osd.25     up       1.00000  1.00000
 41    ssd   3.30690          osd.41     up       1.00000  1.00000
-10         15.84383      host FT1-NodeC
 14    hdd   1.59999          osd.14     up       1.00000  1.00000
 15    hdd   1.86029          osd.15     up       1.00000  1.00000
 16    hdd   1.86029          osd.16     up       1.00000  1.00000
 17    hdd   1.86029          osd.17     up       1.00000  1.00000
  8    ssd   0.93149          osd.8      up       1.00000  1.00000
  9    ssd   0.93149          osd.9      up       1.00000  1.00000
 24    ssd   3.49309          osd.24     up       1.00000  1.00000
 43    ssd   3.30690          osd.43     up       1.00000  1.00000
-13         15.17264      host FT1-NodeD
 20    hdd   1.86029          osd.20     up       1.00000  1.00000
 21    hdd   1.86029          osd.21     up       1.00000  1.00000
 22    hdd   1.86029          osd.22     up       1.00000  1.00000
 23    hdd   1.86029          osd.23     up       1.00000  1.00000
 12    ssd   3.30690          osd.12     up       1.00000  1.00000
 13    ssd   0.93149          osd.13     up       1.00000  1.00000
 19    ssd   3.49309          osd.19     up       1.00000  1.00000
The spinners have their RocksDB on the NVMes for extra performance,
but I had not seen any CRC errors prior to installing the 4 TB
Samsungs in each node.
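A sketch of where I'd look for the CRC messages (the paths assume the default log locations for a package-based Quincy install, and "bad crc" is the usual messenger-level wording; adjust both as needed):

```shell
# OSD-side checksum complaints land in the ceph-osd logs:
grep -ri "bad crc" /var/log/ceph/ 2>/dev/null \
    || echo "no crc errors in /var/log/ceph"

# krbd-side failures show up in the kernel ring buffer instead:
dmesg 2>/dev/null | grep -i "crc" \
    || echo "no crc messages in dmesg"
```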
Suggestions are welcome.
thanks
Roland
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]