Hi,
If the mounted device is not coming up, you can replace it with a new disk and
Ceph will handle rebalancing the data.
Here are the steps if you would like to replace the failed disk with a new
one:
1. ceph osd out osd.110
2. Now remove this failed OSD from the CRUSH map; as soon as it's removed from
the CRUSH map...
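(For reference, the rest of that removal sequence typically looks like the
sketch below; this assumes a standard deployment and reuses the osd.110 id
from step 1, so adjust it to your failed OSD.)

  # stop the OSD daemon on its host (sysvinit-style init, as in 2015-era Ceph)
  service ceph stop osd.110
  # remove it from the CRUSH map so Ceph starts rebalancing its data
  ceph osd crush remove osd.110
  # delete the OSD's cephx key
  ceph auth del osd.110
  # remove the OSD from the cluster map
  ceph osd rm osd.110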
I was thinking about all of that: taking the OSDs out of the CRUSH map and removing
them. When I try to bring them back again they take the same OSD ids, maybe from a cache
or some place where the removal didn't complete, or something. That's why I
also edit the CRUSH map and inject it again; I don't want to mak...
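(The decompile/edit/recompile/inject cycle mentioned here is roughly the
following sketch; the file names are just placeholders. As far as I know,
getting the same ids back is expected: Ceph hands out the lowest free OSD id,
so a freshly removed id gets reused.)

  # dump the current CRUSH map and decompile it to text
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt by hand, then recompile and inject it back
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new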
Yes, actually I did that before the Ceph deployment. Also, the filesystem is mounted =(
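(A quick check of what is actually mounted behind the OSD data dir, assuming
the default path and the osd.43 mentioned below; adjust if your data dir
differs:)

  # show the device and filesystem backing the OSD data directory
  df -h /var/lib/ceph/osd/ceph-43
  mount | grep ceph-43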
[root@geminis ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Have you checked the firewall, or made sure you created the mount for the OSD
correctly?
Did you check the osd.43 log?
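(Something like this on the OSD host would cover both checks; the log path
assumes the default cluster name "ceph", and 6789 / 6800-7300 are Ceph's
default mon and OSD ports.)

  # look at the OSD log for the reason it is not coming up
  less /var/log/ceph/ceph-osd.43.log
  # confirm the daemons are actually listening on the expected ports
  netstat -tlnp | grep ceph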
On Mar 8, 2015 3:58 PM, "Jesus Chavez (jeschave)" wrote:
> Hi all, I had issues with stuck PGs so I decided to start over and
> delete 1 OSD host that was causing the issues. I re...