Hello,
I've just upgraded a Pacific cluster to Quincy, and all my OSDs have
the low value osd_mclock_max_capacity_iops_hdd: 315.00.
The manual does not explain how to benchmark the OSDs with fio or ceph
bench using good options.
Could someone share good ceph bench or fio options?
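In case it is useful, a minimal sketch of how such a disk is often
benchmarked (the device path, OSD id and all job parameters below are
only examples): a short fio run of 4k random writes directly against
the block device, which is destructive and therefore only safe on a
disk that is not (yet) holding an OSD:

  # 4k random writes for 60s against the raw (empty!) device
  fio --name=osd-iops --filename=/dev/sdX --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=16 --numjobs=1 \
      --time_based --runtime=60 --group_reporting

If I remember the Quincy mClock documentation correctly, an existing
OSD can also be benchmarked in place with the built-in bench command,
and the measured IOPS then set explicitly (osd.0 and the value 350 are
placeholders):

  # ~12 MB of 4 KiB writes through osd.0 itself
  ceph tell osd.0 bench 12288000 4096 4194304 100
  ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 350
  ceph config show osd.0 osd_mclock_max_capacity_iops_hdd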
...sts were a lot more consistent than ceph bench or fio.
Hope this will help you.
Luis Domingues
Proton AG
--- Original Message ---
On Friday, June 30th, 2023 at 12:15, Rafael Diaz Maurin wrote:
Hello,
I've just upgraded a Pacific cluster to Quincy, and all my OSDs have
the low value osd_mclock_max_capacity_iops_hdd: 315.00.
The manual does not explain how to benchmark the OSDs with fio or ceph
bench using good options.
Could someone share good ceph bench or fio options?
Many thanks.
Marco
--
Rafael Diaz Maurin
DSI de l'Université de Rennes 1
Pôle Gestion des Infrastructures, Équipe Systèmes
On 22/03/2022 at 11:26, Eugen Block wrote:
How about this one?
https://docs.ceph.com/en/latest/cephfs/fs-volumes/
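For anyone finding this later, that page documents the fs volume and
subvolume management CLI. A minimal sketch of the typical calls (the
volume, group and subvolume names here are made up):

  ceph fs volume ls
  ceph fs subvolumegroup create cephfs mygroup
  ceph fs subvolume create cephfs mysubvol --group_name mygroup --size 10737418240
  ceph fs subvolume getpath cephfs mysubvol --group_name mygroup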
Great :)
It's exactly the information I need.
Thank you Eugen !!
Rafael
Quoting Rafael Diaz Maurin:
Hi cephers,
Under Pacific, I just noticed a new info
Thank you,
Rafael
--
Rafael Diaz Maurin
DSI de l'Université de Rennes 1
Pôle Gestion des Infrastructures, Équipe Systèmes
02 23 23 71 57
Thank you for your answers.
Rafael
On 17/01/2022 at 15:24, Rafael Diaz Maurin wrote:
Hello,
All my pools on the cluster are replicated (x3).
I purged some OSDs (after stopping them) and removed the disks from
the servers, and now I have 4 PGs in stale+undersized+degraded+peered.
Reduced
osdmap e355789 pg 1.af2 (1.af2) -> up [189,74,184] acting [189,74,184]
How can I repair my 4 PGs?
This affects the cephfs-metadata pool, and the filesystem is degraded
because the rank 0 MDS is stuck in the rejoin state.
Thank you.
Rafael
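As a general starting point, the commands below are the usual way to
see which PGs are stuck and where they are (still) mapped; the pg id
is simply the one quoted above, and pg query may not answer for a
stale pg whose acting OSDs are all gone:

  ceph health detail
  ceph pg dump_stuck stale
  ceph pg map 1.af2
  ceph pg 1.af2 query
  ceph fs status

If all three replicas of a PG really were on the removed disks, the
only non-destructive fix is to plug the original OSDs back in and let
them peer; recreating the PG empty (ceph osd force-create-pg) discards
whatever data it held and is only a last resort.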
--
Rafael Diaz Maurin
DSI de l'Université de Rennes 1
On 15/01/2021 at 16:29, Jason Dillaman wrote:
On Fri, Jan 15, 2021 at 10:12 AM Rafael Diaz Maurin wrote:
On 15/01/2021 at 15:39, Jason Dillaman wrote:
4. But the error is still here:
2021-01-15 09:33:58.775 7fa088e350c0 -1 librbd::DiffIterate: diff_object_map: failed to load object map
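In case it helps someone hitting the same message, a "failed to load
object map" error can often be checked and repaired per image and per
snapshot with the object-map tooling (pool, image and snapshot names
below are placeholders):

  rbd object-map check mypool/myimage
  rbd object-map rebuild mypool/myimage
  rbd object-map rebuild mypool/myimage@mysnap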
I listed the snapshots of all the images in a loop, where I get all
the SNAPIDs.
But none of these IDs is *46912*!
Please, can you tell me how you got this number *46912*, or point me
to a link that explains it?
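For the record, a sketch of how the snapshot ids of an image can be
listed, including (as far as I know, since Nautilus) snapshots that
were moved to the trash namespace and no longer show up in a plain
listing; pool and image names are placeholders:

  rbd snap ls mypool/myimage
  rbd snap ls --all mypool/myimage

The SNAPID column of the second form is where an id such as 46912
should appear even if it is no longer a user-visible snapshot.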
Many thanks,
Rafael
--
Rafael Diaz Maurin
DSI de l'Université de Rennes 1
Pôle Infrastructure
Hello cephers,
I run Nautilus (14.2.15)
Here is my context: each night, a script takes a snapshot of each RBD
volume in a pool (all the disks of the hosted VMs) on my Ceph
production cluster. Then each snapshot is exported (rbd export-diff |
rbd import-diff over SSH) to my Ceph backup cluster.
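A minimal sketch of that kind of nightly incremental export (snapshot
names, pool/image names and the backup host are placeholders):

  # new snapshot on the production cluster
  rbd snap create rbdpool/vm-disk@backup-2021-01-15
  # send only the delta since the previous night's snapshot
  rbd export-diff --from-snap backup-2021-01-14 \
      rbdpool/vm-disk@backup-2021-01-15 - \
    | ssh backup-host rbd import-diff - rbdpool/vm-disk

Once the new snapshot exists on both clusters, the older one can be
removed on both sides, so each night only the difference between the
two snapshots travels over SSH.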