This admittedly is the case throughout the docs.
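
For anyone mapping the docs onto a containerized deployment, the difference Joachim points out below looks roughly like this (osd id is a placeholder):

    # package (non-containerized) install, as the docs assume:
    /var/lib/ceph/osd/ceph-<id>/

    # cephadm / container install, as seen from the host:
    /var/lib/ceph/<fsid>/osd.<id>/

    # the <fsid> is the cluster uuid; the directory name is
    # visible directly on the host:
    ls /var/lib/ceph/

Inside "cephadm shell" the traditional package-style paths apply again, as far as I recall, since the host directory is bind-mounted into the container.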

> On Nov 2, 2023, at 07:27, Joachim Kraftmayer - ceph ambassador 
> <joachim.kraftma...@clyso.com> wrote:
> 
> Hi,
> 
> Another short note regarding the documentation: the paths are written for a
> package installation.
> 
> The paths for a container installation look a bit different, e.g.:
> /var/lib/ceph/<fsid>/osd.y/
> 
> Joachim
> 
> ___________________________________
> ceph ambassador DACH
> ceph consultant since 2012
> 
> Clyso GmbH - Premier Ceph Foundation Member
> 
> https://www.clyso.com/
> 
> Am 02.11.23 um 12:02 schrieb Robert Sander:
>> Hi,
>> 
>> On 11/2/23 11:28, Mohamed LAMDAOUAR wrote:
>> 
>>> I have 7 machines in a Ceph cluster; the Ceph services run in Docker
>>> containers. Each machine has 4 HDDs for data (available) and 2 NVMe SSDs
>>> (bricked). During a reboot, the SSDs bricked on 4 machines; the data are
>>> available on the HDD disks, but the NVMe drives are bricked and the system
>>> is not available. Is it possible to recover the data of the cluster (the
>>> data disks are all available)?
>> 
>> You can try to recover the MON db from the OSDs, as they keep a copy of it:
>> 
>> https://docs.ceph.com/en/reef/rados/troubleshooting/troubleshooting-mon/#monitor-store-failures
>> 
>> Regards
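
For reference, the procedure behind that link collects the cluster map from each stopped OSD and then rebuilds the mon store. A rough single-host sketch of the loop from that page (keyring path is a placeholder; with containers, adjust the data paths as above and run the tools from an environment that has them, e.g. cephadm shell):

    ms=/root/mon-store
    mkdir -p $ms

    # collect the cluster map from every stopped OSD on this host
    for osd in /var/lib/ceph/osd/ceph-*; do
        ceph-objectstore-tool --data-path $osd --no-mon-config \
            --op update-mon-db --mon-store-path $ms
    done

    # rebuild the monitor store; the keyring needs the mon. and
    # client.admin keys if cephx is enabled
    ceph-monstore-tool $ms rebuild -- --keyring /path/to/admin.keyring

With several hosts you repeat the collection step on each one, syncing $ms between them, and the rebuilt store then replaces store.db on one of the monitors. Details and caveats are in the page Robert linked.
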
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
