[ceph-users] Quincy osd bench in order to define osd_mclock_max_capacity_iops_[hdd|ssd]

2023-07-03 Thread Rafael Diaz Maurin
Hello, I've just upgraded a Pacific cluster to Quincy, and all my OSDs have the low value osd_mclock_max_capacity_iops_hdd: 315.00. The manual does not explain how to benchmark the OSDs with fio or ceph bench using sensible options. Can someone share good ceph bench or fio options
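
For reference, a minimal sketch of how the per-OSD capacity can be measured and pinned with the built-in OSD bench, assuming osd.0 as a placeholder and a 4 KiB write size; the resulting IOPS figure is only an approximation, not a definitive value:

    # Built-in OSD bench: 12 MB total, 4 KiB writes (same shape as the
    # startup benchmark that fills osd_mclock_max_capacity_iops_*)
    ceph tell osd.0 bench 12288000 4096 4194304 100

    # Pin the measured IOPS for this OSD (450 is a placeholder value)
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 450

    # Verify the applied value
    ceph config show osd.0 osd_mclock_max_capacity_iops_hdd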

[ceph-users] Re: Quincy osd bench in order to define osd_mclock_max_capacity_iops_[hdd|ssd]

2023-06-30 Thread Rafael Diaz Maurin
sts were a lot more consistent than ceph bench or fio. Hope this will help you. Luis Domingues Proton AG --- Original Message --- On Friday, June 30th, 2023 at 12:15, Rafael Diaz Maurin wrote: Hello, I've just upgraded a Pacific cluster to Quincy, and all
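
As an illustration only, a raw-device fio run of the kind often used for this comparison might look like the sketch below; the device path, block size and runtime are placeholders, and the test is destructive to the target device:

    # 4 KiB random writes, direct I/O, short run; /dev/sdX is a placeholder
    # and will be overwritten -- only use a device not yet in the cluster
    fio --name=osd-bench --filename=/dev/sdX --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=16 --numjobs=1 \
        --runtime=60 --time_based --group_reporting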

[ceph-users] Quincy osd bench in order to define osd_mclock_max_capacity_iops_[hdd|ssd]

2023-06-30 Thread Rafael Diaz Maurin
Hello, I've just upgraded a Pacific cluster to Quincy, and all my OSDs have the low value osd_mclock_max_capacity_iops_hdd: 315.00. The manual does not explain how to benchmark the OSDs with fio or ceph bench using sensible options. Can someone share good ceph bench or fio options

[ceph-users] Re: 16.2.10 Cephfs with CTDB, Samba running on Ubuntu

2022-09-14 Thread Rafael Diaz Maurin
Many thanks. Marco -- Rafael Diaz Maurin DSI de l'Université de Rennes 1 Pôle Gestion des Infrastructures, Équip

[ceph-users] Re: Pacific : ceph -s Data: Volumes: 1/1 healthy

2022-03-22 Thread Rafael Diaz Maurin
On 22/03/2022 at 11:26, Eugen Block wrote: How about this one? https://docs.ceph.com/en/latest/cephfs/fs-volumes/ Great :) It's exactly the information I need. Thank you Eugen !! Rafael Quoting Rafael Diaz Maurin: Hi cephers, Under Pacific, I just noticed a new info
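
For completeness, the volume information summarized in the "Data: Volumes" line of ceph -s can also be inspected directly; a minimal sketch, assuming a single CephFS volume:

    ceph fs volume ls   # list the CephFS volumes behind "Volumes: 1/1 healthy"
    ceph fs status      # per-filesystem MDS, pool and usage summary
    ceph fs ls          # filesystems with their metadata and data pools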

[ceph-users] Pacific : ceph -s Data: Volumes: 1/1 healthy

2022-03-22 Thread Rafael Diaz Maurin
k you, Rafael -- Rafael Diaz Maurin DSI de l'Université de Rennes 1 Pôle Gestion des Infrastructures, Équipe Systèmes 02 23 23 71 57

[ceph-users] Re: ceph osd purge => 4 PGs stale+undersized+degraded+peered

2022-01-18 Thread Rafael Diaz Maurin
} Thank you for your answers. Rafael On 17/01/2022 at 15:24, Rafael Diaz Maurin wrote: Hello, all my pools on the cluster are replicated (x3). I purged some OSDs (after stopping them) and removed the disks from the servers, and now I have 4 PGs in stale+undersized+degraded+peered. Reduced

[ceph-users] ceph osd purge => 4 PGs stale+undersized+degraded+peered

2022-01-17 Thread Rafael Diaz Maurin
p e355789 pg 1.af2 (1.af2) -> up [189,74,184] acting [189,74,184] How can I repair my 4 PGs? This affects the cephfs-metadata pool, and the filesystem is degraded because the rank 0 MDS is stuck in the rejoin state. Thank you. Rafael -- Rafael Diaz Maurin DSI de l'Unive
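
A minimal sketch of the usual diagnostic steps for PGs in this state, with pg 1.af2 taken from the output above and everything else generic:

    ceph pg dump_stuck stale   # list PGs currently stuck stale
    ceph pg map 1.af2          # current up/acting set for one PG
    ceph pg 1.af2 query        # detailed peering state and blocking OSDs
    ceph health detail         # which PGs/OSDs the cluster itself flags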

[ceph-users] Re: librbd::DiffIterate: diff_object_map: failed to load object map rbd_object_map.

2021-01-15 Thread Rafael Diaz Maurin
On 15/01/2021 at 16:29, Jason Dillaman wrote: On Fri, Jan 15, 2021 at 10:12 AM Rafael Diaz Maurin wrote: On 15/01/2021 at 15:39, Jason Dillaman wrote: 4. But the error is still there: 2021-01-15 09:33:58.775 7fa088e350c0 -1 librbd::DiffIterate: diff_object_map: failed to load object map
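
When an object map fails to load like this, the usual recovery path is to rebuild it; a minimal sketch, with pool, image and snapshot names as placeholders:

    rbd object-map check pool/image              # verify the stored object map
    rbd object-map rebuild pool/image            # rebuild the head object map
    rbd object-map rebuild pool/image@snapname   # rebuild the map of one snapshot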

[ceph-users] Re: librbd::DiffIterate: diff_object_map: failed to load object map rbd_object_map.

2021-01-15 Thread Rafael Diaz Maurin
hots of all the images in a loop, where I get all the SNAPIDs. But none of these IDs is *46912*! Please, can you tell me how you got this number *46912*, or give a link that explains it? Many thanks, Rafael -- Rafael Diaz Maurin DSI de l'Université de Rennes 1 Pôle Infrastructure
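
The loop described above amounts to something like the following sketch, with the pool name as a placeholder; the "id" field of each JSON entry is the SNAPID:

    for img in $(rbd ls rbd_pool); do
        echo "=== ${img} ==="
        rbd snap ls --format json --pretty-format "rbd_pool/${img}"
    done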

[ceph-users] librbd::DiffIterate: diff_object_map: failed to load object map rbd_object_map.

2021-01-15 Thread Rafael Diaz Maurin
Hello cephers, I run Nautilus (14.2.15). Here is my context: each night a script takes a snapshot of each RBD volume in a pool (all the disks of the hosted VMs) on my ceph production cluster. Then each snapshot is exported (rbd export-diff | rbd import-diff over SSH) towards my ceph backup clus
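
For context, a nightly incremental export of this kind typically follows a pattern like the sketch below, with pool, image and snapshot names as placeholders and the backup cluster reached over SSH:

    # Take today's snapshot on the production cluster
    rbd snap create prod-pool/vm-disk@backup-2021-01-15

    # Ship only the delta since yesterday's snapshot to the backup cluster
    rbd export-diff --from-snap backup-2021-01-14 \
        prod-pool/vm-disk@backup-2021-01-15 - \
      | ssh backup-host rbd import-diff - backup-pool/vm-disk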