[ceph-users] Stretch Cluster with rgw and cephfs?

2021-08-19 Thread Sean Matheny
…words of wisdom. :) Sean Matheny, New Zealand eScience Infrastructure (NeSI)

[ceph-users] Any concerns using EC with CLAY in Quincy (or Pacific)?

2022-10-20 Thread Sean Matheny
…ity. :) Anyone hear of any bad experiences, or any reason not to use CLAY over jerasure? Any reason to use cauchy-good instead of reed-solomon for the use case above? Ngā mihi, Sean Matheny, HPC Cloud Platform DevOps Lead, New Zealand eScience Infrastructure (NeSI) e: sean…
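For reference, a CLAY profile is created the same way as a jerasure one, just with plugin=clay and an optional d (number of OSDs contacted during repair). A minimal sketch; the profile name, k/m/d values, failure domain, and pool name are illustrative, not taken from the thread:

    ceph osd erasure-code-profile set clay-k4m2 \
        plugin=clay k=4 m=2 d=5 \
        crush-failure-domain=host
    ceph osd pool create ecpool-clay 64 64 erasure clay-k4m2

The cauchy-good vs reed-solomon question above maps to the underlying scalar MDS technique (e.g. technique=cauchy_good vs reed_sol_van when the scalar code is jerasure).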

[ceph-users] Cephadm - db and osd partitions on same disk

2022-11-07 Thread Sean Matheny
We have a new cluster being deployed using cephadm. We have 24x 18TB HDDs and 4x 2.9TB NVMes per storage node, and want to use the flash drives both for RocksDB/WAL for the 24 spinners and as flash OSDs. From first inspection it seems like cephadm only supports using a device for a single purpose…
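For context, the RocksDB/WAL placement half of this is normally expressed as an OSD service spec (drive group). A minimal sketch, assuming the rotational flag is enough to separate the two device classes; the service_id and host_pattern are placeholders, and note this spec does not by itself carve the same NVMe into both DB partitions and standalone OSDs, which is the open question above:

    cat > osd-spec.yaml <<EOF
    service_type: osd
    service_id: hdd_with_nvme_db
    placement:
      host_pattern: 'storage*'
    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0
    EOF
    ceph orch apply -i osd-spec.yaml --dry-run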

[ceph-users] Re: Any concerns using EC with CLAY in Quincy (or Pacific)?

2022-11-16 Thread Sean Matheny
…erasure (either in normal write and read, or in recovery scenarios)? Ngā mihi, Sean Matheny, HPC Cloud Platform DevOps Lead, New Zealand eScience Infrastructure (NeSI) e: sean.math...@nesi.org.nz > On 12/11/2022, at 9:43 AM, Jeremy Austin wrote: > I'm running 16.2.9 and have b…

[ceph-users] Odd 10-minute delay before recovery IO begins

2022-12-05 Thread Sean Matheny
[ceph: root@ /]# ceph config get osd osd_recovery_sleep_hdd
0.10
[ceph: root@ /]# ceph config get osd osd_recovery_sleep_ssd
0.00
[ceph: root@ /]# ceph config get osd osd_recovery_sleep_hybrid
0.025000
Thanks in advance. Ngā mihi, Sean Matheny, HPC Cloud Platform DevOps Lead, New Zealan…
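The excerpt doesn't say what caused the delay, but two settings that commonly introduce a fixed pause before recovery/backfill traffic appears are worth ruling out; an illustrative check, not the confirmed cause here:

    # per-PG delay before recovery ops start after peering
    ceph config get osd osd_recovery_delay_start
    # how long a down OSD stays "in" before being marked out (default 600s = 10 min)
    ceph config get mon mon_osd_down_out_interval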

[ceph-users] Re: Odd 10-minute delay before recovery IO begins

2022-12-05 Thread Sean Matheny
…been the case in nautilus as well. > Respectfully, > Wes Dillingham > w...@wesdillingham.com > LinkedIn <http://www.linkedin.com/in/wesleydillingham> > On Mon, Dec 5, 2022 at 5:20 PM Sean Matheny <sean.math…

[ceph-users] Demystify EC CLAY and LRC helper chunks?

2022-12-12 Thread Sean Matheny
…) doesn't need to communicate with the other, assuming the matching CRUSH hierarchy is in place). Anyone have any good resources on this beyond the documentation, or at a minimum can anyone explain or confirm the slightly spooky nature of the "helper chunks" mentioned above? With…
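Not an answer from the thread, but the locality behaviour described is what the lrc plugin's crush-locality parameter controls: each group of l chunks gets an extra local parity chunk, so a single-chunk repair only reads within its own group. A minimal sketch with illustrative values (k=4, m=2, l=3 gives 8 chunks total, one local parity per group of 3):

    ceph osd erasure-code-profile set lrc-k4m2l3 \
        plugin=lrc k=4 m=2 l=3 \
        crush-failure-domain=host \
        crush-locality=rack
    ceph osd erasure-code-profile get lrc-k4m2l3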

[ceph-users] Lost all Monitors in Nautilus Upgrade, best way forward?

2020-02-18 Thread Sean Matheny
Hi folks, Our entire cluster is down at the moment. We started upgrading from 12.2.13 to 14.2.7 with the monitors. The first monitor we upgraded crashed. We reverted to luminous on this one and tried another, and it was fine. We upgraded the rest, and they all worked. Then we upgraded the first…

[ceph-users] Re: [FORGED] Lost all Monitors in Nautilus Upgrade, best way forward?

2020-02-18 Thread Sean Matheny
…degraded (41.236%)
    10667 unknown
     2869 active+undersized+degraded
      885 down
      673 peering
      126 active+undersized
On 19/02/2020, at 10:18 AM, Sean Matheny <s.math...@auckland.ac.nz> wrote: Hi folks, Our entire clus…

[ceph-users] Re: [FORGED] Lost all Monitors in Nautilus Upgrade, best way forward?

2020-02-18 Thread Sean Matheny
Thanks, > If the OSDs have a newer epoch of the OSDMap than the MON it won't work. How can I verify this? (i.e. the epoch of the monitor vs the epoch of the OSD(s)) Cheers, Sean On 19/02/2020, at 7:25 PM, Wido den Hollander <w...@42on.com> wrote: On 2/19/20 5:45 AM, Sean Ma…
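Not from the thread, but a sketch of how one could compare the two epochs, assuming the OSD admin sockets are reachable and the mon is stopped; the daemon id and store path below are placeholders:

    # On an OSD host: the newest OSDMap epoch this OSD has seen
    ceph daemon osd.0 status                   # look at "newest_map"

    # Against the stopped monitor's store: extract its latest osdmap offline
    ceph-monstore-tool /var/lib/ceph/mon/ceph-mon1 get osdmap -- --out /tmp/osdmap
    osdmaptool --print /tmp/osdmap | head -1   # prints "epoch N"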

[ceph-users] Re: [FORGED] Lost all Monitors in Nautilus Upgrade, best way forward?

2020-02-19 Thread Sean Matheny
…$host < … On 19/02/2020, at 11:42 PM, Wido den Hollander wrote: > On 2/19/20 10:11 AM, Paul Emmerich wrote: >> On Wed, Feb 19, 2020 at 10:03 AM Wido den Hollander wrote: >>> On 2/19/20 8:49 AM, Sean Matheny wrote: …
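The "$host <" fragment above looks like the start of the mon-store rebuild loop from the Ceph troubleshooting documentation ("recovery using OSDs"). A condensed sketch of that documented procedure, with $hosts, the store path, and the SSH user as placeholders to adapt, not a verbatim copy of what was run here:

    ms=/root/mon-store
    mkdir -p $ms
    # collect cluster maps from every (stopped) OSD, host by host
    for host in $hosts; do
      rsync -avz $ms/. root@$host:$ms.remote
      rm -rf $ms
      ssh root@$host <<EOF
    for osd in /var/lib/ceph/osd/ceph-*; do
      ceph-objectstore-tool --data-path \$osd --no-mon-config \
          --op update-mon-db --mon-store-path $ms.remote
    done
    EOF
      rsync -avz root@$host:$ms.remote/. $ms
    done

The accumulated store can then be used to rebuild a monitor's DB (ceph-monstore-tool rebuild) before bringing the mons back up.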