Hi,
If the mounted device is not coming up, you can replace it with a new disk and
Ceph will handle rebalancing the data.
Here are the steps if you would like to replace the failed disk with a new
one:
1. ceph osd out osd.110
2. Now remove this failed OSD from the CRUSH map; as soon as it is removed from the CRUSH…
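(For reference, the rest of the usual removal sequence is roughly the following; this is only a sketch of the standard Ceph CLI, reusing osd.110 from the example above, and the daemon should be stopped on its host first:

  ceph osd crush remove osd.110   # drop it from the CRUSH map, data starts rebalancing
  ceph auth del osd.110           # delete its authentication key
  ceph osd rm osd.110             # remove it from the OSD map

Once the old entry is gone, the replacement disk can be prepared and activated and will pick up a free OSD ID.)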
There was a blackout and one of my OSDs remains down. I have noticed that the
journal partition and data partition are not shown anymore, so the device cannot
be mounted…
8  114    5241856 sdh2
8  128 3906249728 sdi
8  129 3901005807 sdi1
8  130    5241856 sdi2
8  14
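(A few generic checks, not specific to this box, that show whether the kernel still sees the journal and data partitions after the blackout:

  cat /proc/partitions       # the kernel's view of the partition table
  lsblk                      # the same information as a tree
  dmesg | grep -i sdh        # any I/O or controller errors for that disk
)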
I was thinking about taking all of those OSDs out of the CRUSH map and removing
them, but when I try to bring them back they get the same OSD IDs, so maybe they
were not completely removed from a cache or from some other place. That is why I
also edited the crushmap and injected it again. I don't want to mak…
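(For anyone following along, the decompile/edit/recompile cycle mentioned here is the standard crushtool round trip; the file names below are arbitrary:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt, then recompile and inject it
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new

Getting the same IDs back is expected behaviour: ceph osd create hands out the lowest free ID, so a reused ID does not by itself mean that leftover state is still around.)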
Yes, actually I did it before the Ceph deployment. Also, the filesystem is mounted =(
[root@geminis ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
Have you checked the firewall or made sure you created the mount for the OSD
correctly?
Did you check the osd.43 log?
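(For example, assuming the default data and log paths and the stock ports, checks along these lines, with osd.43 taken from the question above:

  mount | grep ceph-43                      # is the OSD data directory actually mounted?
  df -h /var/lib/ceph/osd/ceph-43
  iptables -L -n                            # mons listen on 6789, OSDs on 6800-7300 by default
  tail -n 50 /var/log/ceph/ceph-osd.43.log  # the log usually says why the OSD stays down
)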
On Mar 8, 2015 3:58 PM, "Jesus Chavez (jeschave)" wrote:
> Hi all, I had issues with stuck PGs so I decided to start over and
> delete the one OSD host that was causing the issues. I re…
Hi all, I had issues with stuck PGs so I decided to start over and delete the one
OSD host that was causing the issues. I removed everything and also purged the data… Now
I tried to activate the OSD but it still remains down in the ceph osd tree output. I
also tried to remove it from the crushmap (compiling and decompiling, etc.).
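(As a rough outline only, assuming ceph-disk on that release and with /dev/sdX standing in for the real device, re-activating a purged OSD and checking that the old state is gone looks something like:

  ceph osd tree                  # confirm the old entry is really gone
  ceph auth list | grep osd      # and that its key was deleted
  ceph-disk prepare /dev/sdX     # re-prepare the data disk
  ceph-disk activate /dev/sdX1   # activate it; it should register and come up

If it still shows down in ceph osd tree after that, the OSD log is the place to look.)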