You can use ceph-volume to get the LV ID:

> # ceph-volume lvm list
> 
> ====== osd.24 ======
> 
>   [block]    /dev/ceph-edeb727e-c6d3-4347-bfbb-b9ce7f60514b/osd-block-1da5910e-136a-48a7-8cf1-1c265b7b612a
> 
>       type                      block
>       osd id                    24
>       osd fsid                  1da5910e-136a-48a7-8cf1-1c265b7b612a
>       db device                 /dev/nvme0n1p4
>       db uuid                   c4939e17-c787-4630-9ec7-b44565ecf845
>       block uuid                n8mCnv-PW4n-43R6-I4uN-P1E0-7qDh-I5dslh
>       block device              /dev/ceph-edeb727e-c6d3-4347-bfbb-b9ce7f60514b/osd-block-1da5910e-136a-48a7-8cf1-1c265b7b612a
>       devices                   /dev/sda
> 
>   [  db]    /dev/nvme0n1p4
> 
>       PARTUUID                  c4939e17-c787-4630-9ec7-b44565ecf845
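
The same information is also stored as LVM tags on the OSD's logical volume, so you can query LVM directly if that is more convenient. A rough sketch, assuming osd.24 as above and that your ceph-volume release writes the usual ceph.osd_id LV tag:

> # lvs -o lv_name,vg_name,lv_tags | grep ceph.osd_id=24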

And you can then match this against lsblk, which should give you the LV:

> $ lsblk -a
> NAME                                                                                                MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda                                                                                                     8:0    0   1.8T  0 disk
> └─ceph--edeb727e--c6d3--4347--bfbb--b9ce7f60514b-osd--block--1da5910e--136a--48a7--8cf1--1c265b7b612a  253:6    0   1.8T  0 lvm
> nvme0n1                                                                                               259:0    0 372.6G  0 disk
> ├─nvme0n1p4                                                                                           259:4    0  14.9G  0 part
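
The kernel logs the device-mapper name (dm-N) rather than the LV path, so it can also help to print the kernel name of the mapped device; for example (just a sketch, the KNAME and numbers on your host will differ):

> $ lsblk -o NAME,KNAME,MAJ:MIN,SIZE,TYPE /dev/sda

The symlinks under /dev/mapper/ point at the same dm-N nodes if you prefer to look there.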

And if the device has just dropped off, which I have seen before, you should be able to see that in dmesg:

> [Sat May 11 22:56:27 2019] sd 1:0:17:0: attempting task abort! scmd(000000002d043ad6)
> [Sat May 11 22:56:27 2019] sd 1:0:17:0: [sdr] tag#0 CDB: Inquiry 12 00 00 00 24 00
> [Sat May 11 22:56:27 2019] scsi target1:0:17: handle(0x001b), sas_address(0x500304801f12eca1), phy(33)
> [Sat May 11 22:56:27 2019] scsi target1:0:17: enclosure logical id(0x500304801f12ecbf), slot(17)
> [Sat May 11 22:56:27 2019] scsi target1:0:17: enclosure level(0x0000), connector name(     )
> [Sat May 11 22:56:28 2019] sd 1:0:17:0: device_block, handle(0x001b)
> [Sat May 11 22:56:30 2019] sd 1:0:17:0: device_unblock and setting to running, handle(0x001b)
> [Sat May 11 22:56:30 2019] sd 1:0:17:0: [sdr] Synchronizing SCSI cache
> [Sat May 11 22:56:30 2019] sd 1:0:17:0: [sdr] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
> [Sat May 11 22:56:31 2019] scsi 1:0:17:0: task abort: SUCCESS scmd(000000002d043ad6)
> [Sat May 11 22:56:31 2019] mpt3sas_cm0: removing handle(0x001b), sas_addr(0x500304801f12eca1)
> [Sat May 11 22:56:31 2019] mpt3sas_cm0: enclosure logical id(0x500304801f12ecbf), slot(17)
> [Sat May 11 22:56:31 2019] mpt3sas_cm0: enclosure level(0x0000), connector name(     )
> [Sat May 11 23:00:57 2019] Buffer I/O error on dev dm-20, logical block 488378352, async page read
> [Sat May 11 23:00:57 2019] Buffer I/O error on dev dm-20, logical block 1, async page read
> [Sat May 11 23:00:58 2019] Buffer I/O error on dev dm-20, logical block 488378352, async page read
> [Sat May 11 23:00:58 2019] Buffer I/O error on dev dm-20, logical block 1, async page read
> 
> # smartctl -a /dev/sdr
> smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-46-generic] (local build)
> Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
> 
> Smartctl open device: /dev/sdr failed: No such device
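
If you want to narrow that down, filtering the log for the SCSI device name and the dm device is usually enough; a rough example, assuming sdr and dm-20 as in the output above:

> # dmesg -T | grep -iE 'sdr|dm-20'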

Hopefully that helps.

Reed

> On Jul 18, 2019, at 1:11 PM, Paul Emmerich <paul.emmer...@croit.io> wrote:
> 
> On Thu, Jul 18, 2019 at 8:10 PM John Petrini <jpetr...@coredial.com> wrote:
> Try ceph-disk list
> 
> No, this system is running ceph-volume, not ceph-disk, because the mountpoints are in tmpfs.
> 
> ceph-volume lvm list
> 
> But it looks like the disk is just completely broken and disappeared from the 
> system.
> 
> 
> -- 
> Paul Emmerich
> 
> Looking for help with your Ceph cluster? Contact us at https://croit.io
> 
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
