Hi, if the device that was mounted is not coming up, you can replace it with a new disk and Ceph will handle rebalancing the data.
Here are the steps if you would like to replace the failed disk with a new one:

1. Mark the OSD out:
   ceph osd out osd.110
2. Remove the failed OSD from the CRUSH map. As soon as it is removed from the CRUSH map, the recovery process will start:
   ceph osd crush remove osd.110
3. Delete the keyring for that OSD, then remove the OSD itself:
   ceph auth del osd.110
   ceph osd rm 110
4. Once recovery is done and ceph status shows active+clean, remove the old drive and insert the new drive, say /dev/sdb.
5. Create the OSD using ceph-deploy (or however you added the OSDs at first):
   ceph-deploy osd create <node>:/dev/sdb --zap-disk

Thanks
Sahana

On Fri, Mar 20, 2015 at 12:10 PM, Jesus Chavez (jeschave) <jesch...@cisco.com> wrote:

> There was a blackout and one of my OSDs remains down. I have noticed that
> the journal partition and data partition are no longer shown, so the
> device cannot be mounted:
>
>     8 114    5241856 sdh2
>     8 128 3906249728 sdi
>     8 129 3901005807 sdi1
>     8 130    5241856 sdi2
>     8 144 3906249728 sdj
>     8 145 3901005807 sdj1
>     8 146    5241856 sdj2
>     8 192 3906249728 sdm
>     8 176 3906249728 sdl
>     8 177 3901005807 sdl1
>     8 178    5241856 sdl2
>     8 160 3906249728 sdk
>     8 161 3901005807 sdk1
>     8 162    5241856 sdk2
>   253   0   52428800 dm-0
>   253   1    4194304 dm-1
>   253   2   37588992 dm-2
>
> The device is /dev/sdm and the OSD is number 110, so what does that mean?
> That I have lost everything in OSD 110?
> Thanks
>
>   /dev/mapper/rhel-root   50G  4.4G   46G   9% /
>   devtmpfs               126G     0  126G   0% /dev
>   tmpfs                  126G   92K  126G   1% /dev/shm
>   tmpfs                  126G   11M  126G   1% /run
>   tmpfs                  126G     0  126G   0% /sys/fs/cgroup
>   /dev/sda1              494M  165M  330M  34% /boot
>   /dev/sdj1              3.7T  220M  3.7T   1% /var/lib/ceph/osd/ceph-80
>   /dev/mapper/rhel-home   36G   49M   36G   1% /home
>   /dev/sdg1              3.7T  256M  3.7T   1% /var/lib/ceph/osd/ceph-50
>   /dev/sdd1              3.7T  320M  3.7T   1% /var/lib/ceph/osd/ceph-20
>   /dev/sdc1              3.7T  257M  3.7T   1% /var/lib/ceph/osd/ceph-10
>   /dev/sdi1              3.7T  252M  3.7T   1% /var/lib/ceph/osd/ceph-70
>   /dev/sdl1              3.7T  216M  3.7T   1% /var/lib/ceph/osd/ceph-100
>   /dev/sdh1              3.7T  301M  3.7T   1% /var/lib/ceph/osd/ceph-60
>   /dev/sde1              3.7T  268M  3.7T   1% /var/lib/ceph/osd/ceph-30
>   /dev/sdf1              3.7T  299M  3.7T   1% /var/lib/ceph/osd/ceph-40
>   /dev/sdb1              3.7T  244M  3.7T   1% /var/lib/ceph/osd/ceph-0
>   /dev/sdk1              3.7T  240M  3.7T   1% /var/lib/ceph/osd/ceph-90
>   [root@capricornio ~]#
>
>     0 3.63 osd.0    up   1
>    10 3.63 osd.10   up   1
>    20 3.63 osd.20   up   1
>    30 3.63 osd.30   up   1
>    40 3.63 osd.40   up   1
>    50 3.63 osd.50   up   1
>    60 3.63 osd.60   up   1
>    70 3.63 osd.70   up   1
>    80 3.63 osd.80   up   1
>    90 3.63 osd.90   up   1
>   100 3.63 osd.100  up   1
>   110 3.63 osd.110  down 0
>
> Jesus Chavez
> SYSTEMS ENGINEER - C.SALES
> jesch...@cisco.com
> Phone: +52 55 5267 3146
> Mobile: +51 1 5538883255
> CCIE - 44433
> Cisco.com <http://www.cisco.com/>
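For anyone who wants to script the five replacement steps above, here is a minimal sketch. The values are assumptions for illustration only: osd.110 as in this thread, /dev/sdb as the new drive, and a placeholder node name; with DRY_RUN=1 (the default) the commands are printed rather than executed.

```shell
#!/bin/sh
# Sketch of the OSD disk-replacement steps described above.
# Assumed (hypothetical) values: failed OSD is osd.110, new drive
# appears as /dev/sdb, and NODE is a placeholder hostname.
set -e

OSD_ID=110
NEW_DEV=/dev/sdb
NODE=capricornio          # placeholder node name
DRY_RUN=${DRY_RUN:-1}
PLAN=""                   # record of planned commands, for review

run() {
    PLAN="$PLAN$* ; "
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"       # dry run: print the command only
    else
        "$@"              # real run: execute it
    fi
}

# 1. Mark the OSD out so no new data is assigned to it.
run ceph osd out "osd.$OSD_ID"

# 2. Remove it from the CRUSH map; recovery starts as soon as this runs.
run ceph osd crush remove "osd.$OSD_ID"

# 3. Delete its keyring, then remove the OSD entry itself.
run ceph auth del "osd.$OSD_ID"
run ceph osd rm "$OSD_ID"

# 4. Wait until "ceph status" reports active+clean, then physically swap
#    the old drive for the new one (a manual step, not scriptable here).

# 5. Re-create the OSD on the new drive, zapping any stale partitions.
run ceph-deploy osd create "$NODE:$NEW_DEV" --zap-disk
```

Run it once with DRY_RUN=1 to review the commands, then rerun with DRY_RUN=0 after the drive swap is complete.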
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com