[ceph-users] Re: ceph orch issue: lsblk: /dev/vg_osd/lvm_osd: not a block device

2024-05-27 Thread duluxoz
@Eugen, @Cedric DOH! Sorry lads, my bad! I had a typo in my lv name - that was the cause of my issues. My apologies for being so stupid - and *thank you* for the help; having a couple of fresh brains on things helps to eliminate possibilities and narrow down the cause of the issue.

[ceph-users] Re: ceph orch issue: lsblk: /dev/vg_osd/lvm_osd: not a block device

2024-05-27 Thread P Wagner-Beccard
Hey Dulux-Oz, Care to share how you did it now? vg/lv syntax or :/dev/vg_osd/lvm_osd ? On Mon, 27 May 2024 at 09:49, duluxoz wrote: > @Eugen, @Cedric > > DOH! > > Sorry lads, my bad! I had a typo in my lv name - that was the cause of > my issues. > > My apologies for being so stupid - and *tha

[ceph-users] Re: Lousy recovery for mclock and reef

2024-05-27 Thread Sridhar Seshasayee
With mClock, osd_max_backfills and osd_recovery_max_active can be modified at runtime after setting osd_mclock_override_recovery_settings to true. See the docs for more info
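A minimal sketch of those runtime overrides, with example values that are placeholders rather than recommendations:
  ceph config set osd osd_mclock_override_recovery_settings true
  ceph config set osd osd_max_backfills 3
  ceph config set osd osd_recovery_max_active 5
Switching the mClock profile (e.g. osd_mclock_profile = high_recovery_ops) is the other documented way to prioritise recovery traffic.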

[ceph-users] Re: ceph orch issue: lsblk: /dev/vg_osd/lvm_osd: not a block device

2024-05-27 Thread duluxoz
Hi PWB, Both ways (just to see if both ways would work) - remember, this is a brand new box, so I had the luxury of "blowing away" the first iteration to test the second: * ceph orch daemon add osd ceph1:/dev/vg_osd/lv_osd * ceph orch daemon add osd ceph1:vg_osd/lv_osd Cheers Dulux-Oz
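For anyone who hits the same lsblk error, a quick sanity check of the VG/LV name before running the orch command (assuming the vg_osd/lv_osd names above):
  lvs vg_osd
  lsblk /dev/vg_osd/lv_osd
  ceph orch daemon add osd ceph1:vg_osd/lv_osd
If lvs or lsblk complains, the typo is in the VG/LV name rather than in the orch syntax.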

[ceph-users] Safe method to perform failback for RBD on one way mirroring.

2024-05-27 Thread Saif Mohammad
Hello Everyone We have Clusters in production with the following configuration: Cluster-A : quincy v17.2.5 Cluster-B : quincy v17.2.5 All images in a pool have the snapshot feature enabled and are mirrored. Each site has 3 daemons. We're testing disaster recovery with one-way mirroring in our b
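For reference, the promote/demote/resync sequence we are planning to test looks roughly like this (pool/image names are placeholders, and we understand that with one-way mirroring the reverse direction has to be configured before failback):
  rbd mirror image promote --force pool-a/image-1   (on Cluster-B, after Cluster-A fails)
  rbd mirror image demote pool-a/image-1            (on Cluster-A, once it is back)
  rbd mirror image resync pool-a/image-1            (on Cluster-A)
Is this sequence safe for the one-way case, or are extra steps needed?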

[ceph-users] Re: tuning for backup target cluster

2024-05-27 Thread Lukasz Borek
Anthony, Darren Thanks for the response. Answering your questions: > What is the network you have for this cluster? 25GB/s > Is this a chassis with universal slots, or is that NVMe device maybe M.2 > or rear-cage? 12 * HDD via LSI jbod + 1 PCI NVME. Now it's 1.6TB, for production the plan is to use

[ceph-users] Re: Ceph ARM providing storage for x86

2024-05-27 Thread Mark Nelson
Once upon a time there were some endian issues running clusters with mixed hardware, though I don't think it affected clients.  As far as I know those were all resolved many years ago. Mark On 5/25/24 08:46, Anthony D'Atri wrote: Why not? The hwarch doesn't matter. On May 25, 2024, at 07

[ceph-users] Re: tuning for backup target cluster

2024-05-27 Thread Anthony D'Atri
> >> Is this a chassis with universal slots, or is that NVMe device maybe M.2 >> or rear-cage? > > 12 * HDD via LSI jbod + 1 PCI NVME. All NVMe devices are PCI ;). > Now it's 1.6TB, for the production plan > is to use 3.2TB. > > > `ceph df` >> `ceph osd dump | grep pool` >> So we can see wh

[ceph-users] Re: Lousy recovery for mclock and reef

2024-05-27 Thread Mazzystr
I suspect my initial spike in performance was PGs balancing between the three OSDs of the one host. Host load is very low, under 1. HDD iops on the three discs hover around 80 +/- 5. atop shows about 20% busy. Gigabit Ethernet shows about 20% utilized according to atop. I find it extremely har
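For reference, I'm watching the discs and OSDs with roughly the following (device names are placeholders for the three OSD data disks):
  iostat -x 5 sda sdb sdc
  ceph osd perf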

[ceph-users] Re: Lousy recovery for mclock and reef

2024-05-27 Thread Anthony D'Atri
> > hdd iops on the three discs hover around 80 +/- 5. Each or total? I wouldn’t expect much more than 80 per drive.

[ceph-users] [rbd mirror] integrity of journal-based image mirror

2024-05-27 Thread Tony Liu
Hi, Say the source image is being updated and data is mirrored to the destination continuously. Suddenly, networking at the source goes down and the destination is promoted and used to restore the VM. Is that going to cause any FS issues - for example, would fsck need to be invoked to check and repair the FS?
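For what it's worth, I would check how far behind the mirror is before promoting (pool/image names are placeholders):
  rbd mirror image status pool-a/image-1
  rbd mirror pool status --verbose pool-a
but the main question is whether a forced promotion of a journal-based mirror is crash-consistent, or whether fsck should be assumed.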

[ceph-users] Re: does the RBD client block write when the Watcher times out?

2024-05-27 Thread Yuma Ogami
Hi all, I understood that the watcher cannot prevent multiple mounts. Based on the feedback I received, I will consider countermeasures. Thank you for your valuable insights. Yuma. On Thu, 23 May 2024 at 21:15, Frank Schilder wrote: > > Hi, we run into the same issue and there is actually another use case: >
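For anyone following along, listing the current watchers is still a useful check even though it cannot act as a lock (pool/image/object names are placeholders):
  rbd status pool-a/image-1
  rados -p pool-a listwatchers rbd_header.<image-id>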