Need to work out why the 4 aren’t starting then.
First I would check they are showing at the OS layer via dmesg or fdisk etc.
If you can see the correct number of disks on each node, then check the service
status / Ceph logs for each OSD.
Depending on how you set up the cluster/OSDs, the l
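
A minimal sketch of those checks on one OSD node, assuming a systemd-managed
(non-containerised) deployment and using osd.2 from the tree below as the
example ID:

# confirm the disks are visible at the OS level
lsblk
dmesg | grep -i -e sd -e nvme

# check the daemon state and logs for one OSD
systemctl status ceph-osd@2
journalctl -u ceph-osd@2 --since "1 hour ago"
less /var/log/ceph/ceph-osd.2.log
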
I should have 10 OSDs, below is the output:
root@ceph-mon1:~# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME           STATUS  REWEIGHT  PRI-AFF
-1         1.95297  root default
-5         0.78119      host ceph-mon1
 2    hdd  0.19530          osd.2         down         0      1.0
 4    hdd  0.19530
What does ‘ceph osd tree’ show?
How many OSDs should you have, 7 or 10?
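
A quick way to cross-check the totals, assuming the mons are reachable, is the
summary line from either of the following; both report the "N osds: X up, Y in"
count:

ceph osd stat
ceph -s
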
> On 22 Feb 2022, at 14:40, Michel Niyoyita wrote:
>
> Actually, one of my colleagues rebooted all the nodes without preparing them
> first (setting noout, norecover, etc.). Once all the nodes were back up, the
> cluster was no longer accessible and we are getting the messages above.
Actually, one of my colleagues rebooted all the nodes without preparing them
first (setting noout, norecover, etc.). Once all the nodes were back up, the
cluster was no longer accessible and we are getting the messages above. I did
not remove any OSDs; some are just marked down.
below is my ceph.conf:
m
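
For reference, a typical planned-reboot sequence (a sketch only; which flags
you set is a site decision) sets the flags first and unsets them once
everything is back up:

ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
# ... reboot the nodes ...
ceph osd unset norebalance
ceph osd unset norecover
ceph osd unset noout
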
You have 1 OSD offline; has this disk failed, or are you aware of what caused
it to go offline?
It shows you have 10 OSDs but only 7 in; have you removed the other 3? Was the
data fully drained off these first?
I see you have 11 pools. What are these set up as, i.e. what type and min/max size?
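
For reference, the pool type and sizes being asked about can be read with
something like the following ("volumes" is just a placeholder pool name):

ceph osd pool ls detail
# or per pool:
ceph osd pool get volumes size
ceph osd pool get volumes min_size
ceph osd pool get volumes crush_rule
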
> On 22 Feb 2
Hello team,
Below are the details when I run ceph osd dump:
pg_temp 11.11 [7,8]
blocklist 10.10.29.157:6825/1153 expires 2022-02-23T04:55:01.060277+
blocklist 10.10.29.157:0/176361525 expires 2022-02-23T04:55:01.060277+
blocklist 10.10.29.156:0/815007610 expires 2022-02-23T04:54:56.05665
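
As an aside, the blocklist entries shown above can be listed (and, if they are
known to be stale, removed) with the blocklist commands, for example:

ceph osd blocklist ls
# remove one entry, using the address exactly as printed by the dump
ceph osd blocklist rm 10.10.29.157:0/176361525
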