The hosts that now have no OSDs used to have OSDs, but those servers don't exist any more. I guess they could be removed from the crushmap, but I assume hosts with no OSDs don't hurt anything?
The OSDs that used to be on those servers were deleted, and new OSDs were then created on new servers. The OSD IDs were reused automatically, so an OSD's ID doesn't actually reflect the order in which the disks were added to the cluster.
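(If I ever do want to clean these up: a rough, untested sketch with the standard CLI, using the empty hostnames from the tree below. "ceph osd crush rm" refuses to remove a bucket that still contains items, so it can only delete host buckets that are truly empty.)

    # untested sketch: drop the empty host buckets from the CRUSH map
    for h in ceph-osd11 ceph-osd12 ceph-osd13 ceph-osd41 ceph-osd42 ceph-osd43; do
        ceph osd crush rm "$h"
    done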
Extract of the crushmap:
host ceph-osd13 {
        id -5           # do not change unnecessarily
        id -6 class hdd # do not change unnecessarily
        id -2 class ssd # do not change unnecessarily
        # weight 0.00000
        alg straw2
        hash 0  # rjenkins1
}
host ceph-osd11 {
        id -7           # do not change unnecessarily
        id -8 class hdd # do not change unnecessarily
        id -3 class ssd # do not change unnecessarily
        # weight 0.00000
        alg straw2
        hash 0  # rjenkins1
}
host ceph-osd12 {
        id -9           # do not change unnecessarily
        id -10 class hdd        # do not change unnecessarily
        id -11 class ssd        # do not change unnecessarily
        # weight 0.00000
        alg straw2
        hash 0  # rjenkins1
}
etc...
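(For reference, the extract above comes from decompiling the binary CRUSH map; the same round trip is the usual way to edit it offline:

    ceph osd getcrushmap -o crush.bin     # dump the binary map
    crushtool -d crush.bin -o crush.txt   # decompile to the text shown above
    # edit crush.txt, then:
    crushtool -c crush.txt -o crush.new   # recompile
    ceph osd setcrushmap -i crush.new     # inject the edited map
)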
On 2022-10-14 13:27, Frank Schilder wrote:
You have hosts in the crush map with no OSDs. They are out+down and get counted while other hosts are also down. Things will go back to normal when you start the host with the disks again. If you delete the hosts with no disks, you will probably see misplaced objects. Why are they there in the first place? Are you planning to add hosts, or are these replaced ones?
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Matthew Darwin <b...@mdarwin.ca>
Sent: 14 October 2022 18:57:37
To: c...@elchaka.de; ceph-users@ceph.io
Subject: [ceph-users] Re: strange OSD status when rebooting one server
https://gist.githubusercontent.com/matthewdarwin/aec3c2b16ba5e74beb4af1d49e8cfb1a/raw/d8d8f34d989823b9f708608bb2773c7d4093c648/ceph-osd-tree.txt
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 1331.88013 - 1.3 PiB 889 TiB 887 TiB 48 GiB 1.9 TiB 443 TiB 66.76 1.00 - root default
-7 0 - 0 B 0 B 0 B 0 B 0 B 0 B 0 0 - host ceph-osd11
-9 0 - 0 B 0 B 0 B 0 B 0 B 0 B 0 0 - host ceph-osd12
-5 0 - 0 B 0 B 0 B 0 B 0 B 0 B 0 0 - host ceph-osd13
-16 180.80972 - 181 TiB 122 TiB 122 TiB 6.5 GiB 262 GiB 59 TiB 67.55 1.01 - host ceph-osd16
0 hdd 16.37109 1.00000 16 TiB 10 TiB 10 TiB 1 KiB 23 GiB 6.2 TiB 61.91 0.93 70 up osd.0
1 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 23 GiB 5.2 TiB 68.08 1.02 77 up osd.1
8 hdd 16.37109 1.00000 16 TiB 13 TiB 13 TiB 16 KiB 27 GiB 3.3 TiB 79.58 1.19 90 up osd.8
15 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 0 B 24 GiB 5.1 TiB 68.95 1.03 78 up osd.15
17 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 23 GiB 5.4 TiB 67.14 1.01 76 up osd.17
18 hdd 16.37109 1.00000 16 TiB 13 TiB 13 TiB 0 B 28 GiB 3.2 TiB 80.43 1.20 91 up osd.18
22 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 0 B 24 GiB 5.2 TiB 68.05 1.02 77 up osd.22
27 hdd 16.37109 1.00000 16 TiB 8.5 TiB 8.5 TiB 0 B 18 GiB 7.8 TiB 52.10 0.78 59 up osd.27
42 hdd 16.37109 1.00000 16 TiB 10 TiB 10 TiB 1 KiB 23 GiB 6.2 TiB 61.85 0.93 70 up osd.42
46 hdd 16.37109 1.00000 16 TiB 9.6 TiB 9.5 TiB 1 KiB 21 GiB 6.8 TiB 58.35 0.87 66 up osd.46
49 hdd 16.37109 1.00000 16 TiB 13 TiB 13 TiB 1 KiB 28 GiB 3.4 TiB 79.49 1.19 90 up osd.49
53 ssd 0.72769 1.00000 745 GiB 22 GiB 14 GiB 6.5 GiB 1.1 GiB 723 GiB 2.94 0.04 102 up osd.53
-19 164.43863 - 164 TiB 101 TiB 101 TiB 4.3 GiB 219 GiB 64 TiB 61.32 0.92 - host ceph-osd17
2 hdd 16.37109 1.00000 16 TiB 9.5 TiB 9.5 TiB 1 KiB 20 GiB 6.8 TiB 58.26 0.87 66 up osd.2
5 hdd 16.37109 1.00000 16 TiB 8.7 TiB 8.7 TiB 1 KiB 21 GiB 7.7 TiB 53.07 0.79 60 up osd.5
14 hdd 16.37109 0.85004 16 TiB 8.7 TiB 8.7 TiB 0 B 19 GiB 7.7 TiB 53.02 0.79 60 up osd.14
16 hdd 16.37109 1.00000 16 TiB 9.7 TiB 9.7 TiB 0 B 20 GiB 6.7 TiB 59.14 0.89 67 up osd.16
21 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 0 B 24 GiB 5.2 TiB 68.06 1.02 77 up osd.21
24 hdd 16.37109 1.00000 16 TiB 13 TiB 13 TiB 2 KiB 26 GiB 3.8 TiB 76.89 1.15 87 up osd.24
28 hdd 16.37109 1.00000 16 TiB 9.7 TiB 9.7 TiB 0 B 21 GiB 6.7 TiB 59.25 0.89 67 up osd.28
34 hdd 16.37109 0.85004 16 TiB 7.7 TiB 7.7 TiB 0 B 16 GiB 8.7 TiB 46.86 0.70 53 up osd.34
44 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 25 GiB 5.7 TiB 65.40 0.98 74 up osd.44
45 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 1 KiB 26 GiB 3.9 TiB 75.90 1.14 86 up osd.45
54 ssd 0.72769 1.00000 745 GiB 14 GiB 9.3 GiB 4.3 GiB 701 MiB 731 GiB 1.92 0.03 100 up osd.54
-22 164.43863 - 164 TiB 108 TiB 108 TiB 6.3 GiB 235 GiB 56 TiB 65.76 0.99 - host ceph-osd18
3 hdd 16.37109 1.00000 16 TiB 13 TiB 13 TiB 1 KiB 26 GiB 3.8 TiB 76.87 1.15 87 up osd.3
19 hdd 16.37109 1.00000 16 TiB 10 TiB 10 TiB 0 B 22 GiB 6.2 TiB 61.89 0.93 70 up osd.19
23 hdd 16.37109 1.00000 16 TiB 9.4 TiB 9.4 TiB 0 B 21 GiB 7.0 TiB 57.45 0.86 65 up osd.23
26 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 0 B 24 GiB 5.4 TiB 67.19 1.01 76 up osd.26
29 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 0 B 22 GiB 5.7 TiB 65.41 0.98 74 up osd.29
32 hdd 16.37109 1.00000 16 TiB 8.8 TiB 8.8 TiB 17 KiB 19 GiB 7.5 TiB 53.97 0.81 61 up osd.32
35 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 0 B 25 GiB 4.8 TiB 70.72 1.06 80 up osd.35
37 hdd 16.37109 1.00000 16 TiB 10 TiB 10 TiB 1 KiB 24 GiB 5.9 TiB 63.70 0.95 72 up osd.37
47 hdd 16.37109 1.00000 16 TiB 10 TiB 10 TiB 1 KiB 23 GiB 6.4 TiB 60.98 0.91 69 up osd.47
50 hdd 16.37109 1.00000 16 TiB 13 TiB 13 TiB 1 KiB 29 GiB 2.9 TiB 82.21 1.23 93 up osd.50
55 ssd 0.72769 1.00000 745 GiB 20 GiB 13 GiB 6.3 GiB 1.0 GiB 725 GiB 2.70 0.04 116 up osd.55
-13 164.43863 - 164 TiB 110 TiB 110 TiB 3.9 GiB 239 GiB 54 TiB 66.97 1.00 - host ceph-osd19
4 hdd 16.37109 1.00000 16 TiB 10 TiB 10 TiB 1 KiB 22 GiB 6.0 TiB 63.62 0.95 72 up osd.4
20 hdd 16.37109 1.00000 16 TiB 13 TiB 13 TiB 0 B 27 GiB 3.3 TiB 79.55 1.19 90 up osd.20
25 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 0 B 22 GiB 5.7 TiB 65.37 0.98 74 up osd.25
30 hdd 16.37109 1.00000 16 TiB 7.8 TiB 7.8 TiB 0 B 17 GiB 8.6 TiB 47.76 0.72 54 up osd.30
31 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 0 B 25 GiB 4.5 TiB 72.45 1.09 82 up osd.31
33 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 0 B 25 GiB 4.5 TiB 72.48 1.09 82 up osd.33
36 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 0 B 25 GiB 4.8 TiB 70.68 1.06 80 up osd.36
38 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 1 KiB 26 GiB 4.7 TiB 71.55 1.07 81 up osd.38
48 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 25 GiB 4.9 TiB 69.88 1.05 79 up osd.48
51 hdd 16.37109 1.00000 16 TiB 9.7 TiB 9.7 TiB 1 KiB 23 GiB 6.7 TiB 59.24 0.89 67 up osd.51
56 ssd 0.72769 1.00000 745 GiB 14 GiB 9.3 GiB 3.9 GiB 1.1 GiB 731 GiB 1.92 0.03 97 up osd.56
-25 0 - 0 B 0 B 0 B 0 B 0 B 0 B 0 0 - host ceph-osd41
-28 0 - 0 B 0 B 0 B 0 B 0 B 0 B 0 0 - host ceph-osd42
-31 0 - 0 B 0 B 0 B 0 B 0 B 0 B 0 0 - host ceph-osd43
-34 164.43863 - 164 TiB 109 TiB 109 TiB 6.7 GiB 234 GiB 55 TiB 66.46 1.00 - host ceph-osd57
6 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 22 GiB 5.5 TiB 66.29 0.99 75 up osd.6
7 hdd 16.37109 1.00000 16 TiB 13 TiB 13 TiB 1 KiB 28 GiB 2.9 TiB 82.24 1.23 93 up osd.7
9 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 24 GiB 5.7 TiB 65.41 0.98 74 up osd.9
12 hdd 16.37109 1.00000 16 TiB 9.4 TiB 9.4 TiB 1 KiB 19 GiB 7.0 TiB 57.45 0.86 65 up osd.12
62 hdd 16.37109 1.00000 16 TiB 9.8 TiB 9.8 TiB 1 KiB 23 GiB 6.5 TiB 60.10 0.90 68 up osd.62
66 hdd 16.37109 1.00000 16 TiB 9.3 TiB 9.2 TiB 1 KiB 20 GiB 7.1 TiB 56.56 0.85 64 up osd.66
70 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 24 GiB 5.4 TiB 67.16 1.01 76 up osd.70
74 hdd 16.37109 1.00000 16 TiB 13 TiB 13 TiB 1 KiB 26 GiB 3.6 TiB 77.80 1.17 88 up osd.74
78 hdd 16.37109 1.00000 16 TiB 10 TiB 10 TiB 1 KiB 22 GiB 6.1 TiB 62.80 0.94 71 up osd.78
82 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 1 KiB 24 GiB 4.7 TiB 71.59 1.07 81 up osd.82
57 ssd 0.72769 1.00000 745 GiB 18 GiB 10 GiB 6.7 GiB 934 MiB 728 GiB 2.36 0.04 84 up osd.57
-37 164.43863 - 164 TiB 114 TiB 114 TiB 2.2 GiB 245 GiB 50 TiB 69.54 1.04 - host ceph-osd58
10 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 25 GiB 5.5 TiB 66.34 0.99 75 up osd.10
13 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 23 GiB 5.4 TiB 67.24 1.01 76 up osd.13
39 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 1 KiB 25 GiB 4.4 TiB 73.33 1.10 83 up osd.39
40 hdd 16.37109 1.00000 16 TiB 13 TiB 13 TiB 1 KiB 26 GiB 3.6 TiB 77.78 1.17 88 up osd.40
63 hdd 16.37109 1.00000 16 TiB 10 TiB 10 TiB 1 KiB 24 GiB 5.9 TiB 63.68 0.95 72 up osd.63
67 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 25 GiB 5.5 TiB 66.35 0.99 75 up osd.67
71 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 1 KiB 26 GiB 3.9 TiB 76.00 1.14 86 up osd.71
75 hdd 16.37109 1.00000 16 TiB 10 TiB 10 TiB 1 KiB 22 GiB 6.0 TiB 63.60 0.95 72 up osd.75
79 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 1 KiB 26 GiB 3.9 TiB 76.01 1.14 86 up osd.79
83 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 23 GiB 5.2 TiB 68.07 1.02 77 up osd.83
58 ssd 0.72769 1.00000 745 GiB 14 GiB 11 GiB 2.2 GiB 166 MiB 731 GiB 1.85 0.03 87 up osd.58
-40 164.43863 - 164 TiB 112 TiB 112 TiB 4.5 GiB 240 GiB 52 TiB 68.12 1.02 - host ceph-osd59
11 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 26 GiB 5.1 TiB 68.93 1.03 78 up osd.11
41 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 1 KiB 25 GiB 4.4 TiB 73.33 1.10 83 up osd.41
52 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 1 KiB 25 GiB 4.2 TiB 74.15 1.11 84 up osd.52
64 hdd 16.37109 1.00000 16 TiB 14 TiB 14 TiB 1 KiB 30 GiB 2.5 TiB 84.86 1.27 96 up osd.64
68 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 26 GiB 5.1 TiB 69.01 1.03 78 up osd.68
72 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 23 GiB 5.8 TiB 64.49 0.97 73 up osd.72
76 hdd 16.37109 1.00000 16 TiB 9.7 TiB 9.7 TiB 1 KiB 21 GiB 6.7 TiB 59.23 0.89 67 up osd.76
80 hdd 16.37109 1.00000 16 TiB 9.7 TiB 9.7 TiB 1 KiB 21 GiB 6.7 TiB 59.23 0.89 67 up osd.80
84 hdd 16.37109 1.00000 16 TiB 10 TiB 10 TiB 1 KiB 21 GiB 6.4 TiB 60.98 0.91 69 up osd.84
87 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 23 GiB 4.9 TiB 69.86 1.05 79 up osd.87
59 ssd 0.72769 1.00000 745 GiB 17 GiB 12 GiB 4.5 GiB 557 MiB 728 GiB 2.28 0.03 106 up osd.59
-43 164.43863 - 164 TiB 112 TiB 112 TiB 13 GiB 235 GiB 52 TiB 68.29 1.02 - host ceph-osd60
43 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 22 GiB 5.4 TiB 67.06 1.00 76 up osd.43
61 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 25 GiB 5.1 TiB 68.95 1.03 78 up osd.61
65 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 23 GiB 5.5 TiB 66.27 0.99 75 up osd.65
69 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 25 GiB 4.9 TiB 69.85 1.05 79 up osd.69
73 hdd 16.37109 1.00000 16 TiB 12 TiB 12 TiB 1 KiB 24 GiB 4.6 TiB 71.63 1.07 81 up osd.73
77 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 24 GiB 4.9 TiB 69.77 1.05 79 up osd.77
81 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 23 GiB 5.4 TiB 67.19 1.01 76 up osd.81
85 hdd 16.37109 1.00000 16 TiB 11 TiB 11 TiB 1 KiB 23 GiB 5.1 TiB 69.00 1.03 78 up osd.85
86 hdd 16.37109 1.00000 16 TiB 14 TiB 14 TiB 1 KiB 29 GiB 2.8 TiB 83.04 1.24 94 up osd.86
88 hdd 16.37109 1.00000 16 TiB 8.7 TiB 8.7 TiB 1 KiB 17 GiB 7.7 TiB 52.98 0.79 60 up osd.88
60 ssd 0.72769 1.00000 745 GiB 27 GiB 13 GiB 13 GiB 1.0 GiB 718 GiB 3.61 0.05 103 up osd.60
TOTAL 1.3 PiB 889 TiB 887 TiB 48 GiB 1.9 TiB 443 TiB 66.76
MIN/MAX VAR: 0.03/1.27 STDDEV: 20.83
On 2022-10-14 12:52, c...@elchaka.de wrote:
Could you please share the output of
ceph osd df tree
There could be a hint...
HTH
On 14 October 2022 18:45:40 MESZ, Matthew Darwin <b...@mdarwin.ca> wrote:
Hi,
I am hoping someone can help explain this strange message. I took 1 physical server offline which contains 11 OSDs. "ceph -s" reports 11 OSDs down. Great.
But on the next line it says 4 hosts are impacted. Shouldn't it be only 1 host? When I look at the manager dashboard, all the OSDs that are down belong to a single host.
Why does it say 4 hosts here?
$ ceph -s
  cluster:
    id:     xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    health: HEALTH_WARN
            11 osds down
            4 hosts (11 osds) down
            Reduced data availability: 2 pgs inactive, 3 pgs peering
            Degraded data redundancy: 44341491/351041478 objects degraded (12.631%), 834 pgs degraded, 782 pgs undersized
            2 pgs not deep-scrubbed in time
            1 pgs not scrubbed in time
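(A quick way to check which hosts the down OSDs actually belong to is to filter the tree by state:

    ceph osd tree down    # list only down OSDs, grouped by their host buckets

If they all sit under one host bucket, the other "hosts down" are presumably empty host buckets being counted as well.)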
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io