Thanks in advance for any words of wisdom. :)
Sean Matheny
New Zealand eScience Infrastructure (NeSI)
___
Anyone hear of any bad experiences, or any reason not to use the isa plugin over jerasure? Any reason to use cauchy-good instead of reed-solomon for the use case above?
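For concreteness, the two candidates could be set up side by side and benchmarked; the profile names and pool parameters (k=4, m=2, host failure domain) below are placeholders, not values from this thread:

ceph osd erasure-code-profile set ec-isa \
    plugin=isa technique=reed_sol_van k=4 m=2 crush-failure-domain=host
ceph osd erasure-code-profile set ec-jerasure \
    plugin=jerasure technique=reed_sol_van k=4 m=2 crush-failure-domain=host
ceph osd erasure-code-profile get ec-isa

Since ISA-L's advantage over jerasure is CPU-dependent, benchmarking pools built from each profile on the actual hardware is probably the only definitive answer.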
Ngā mihi,
Sean Matheny
HPC Cloud Platform DevOps Lead
New Zealand eScience Infrastructure (NeSI)
e: sean.math...@nesi.org.nz
We have a new cluster being deployed using cephadm. We have 24x 18TB HDDs and 4x 2.9TB NVMes per storage node, and we want to use the flash drives both as rocksdb/WAL for the 24 spinners and as flash OSDs. From first inspection it seems like cephadm only supports using a device for a single purpose.
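One workaround that looks expressible as cephadm OSD service specs is to dedicate a subset of the NVMes to DB/WAL and leave the rest as flash OSDs. A minimal sketch, assuming illustrative service ids, a wildcard host pattern, and a 2-of-4 NVMe split (none of which are from this thread):

cat > /tmp/osd_specs.yml <<'EOF'
service_type: osd
service_id: hdd_osds
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    limit: 2        # reserve 2 of the 4 NVMes for rocksdb/WAL
---
service_type: osd
service_id: nvme_osds
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0
EOF
ceph orch apply -i /tmp/osd_specs.yml

The second spec should only pick up NVMes the first one hasn't claimed, since devices that already carry LVs are treated as unavailable.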
erasure (either in normal write
and read, or in recovery scenarios)?
Ngā mihi,
Sean Matheny
HPC Cloud Platform DevOps Lead
New Zealand eScience Infrastructure (NeSI)
e: sean.math...@nesi.org.nz
> On 12/11/2022, at 9:43 AM, Jeremy Austin wrote:
>
> I'm running 16.2.9 and have b
[ceph: root@ /]# ceph config get osd osd_recovery_sleep_hdd
0.100000
[ceph: root@ /]# ceph config get osd osd_recovery_sleep_ssd
0.000000
[ceph: root@ /]# ceph config get osd osd_recovery_sleep_hybrid
0.025000
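Those are the defaults, for reference. If recovery needs different throttling, the values can be overridden at runtime and verified per daemon; the value and osd id below are illustrative only:

ceph config set osd osd_recovery_sleep_hybrid 0.05
ceph daemon osd.0 config get osd_recovery_sleep_hybrid

osd_recovery_sleep_hybrid is the knob that applies to OSDs with data on HDD and DB/WAL on flash.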
Thanks in advance.
Ngā mihi,
Sean Matheny
HPC Cloud Platform DevOps Lead
New Zealand eScience Infrastructure (NeSI)
> This has been the case in Nautilus as well.
>
> Respectfully,
>
> Wes Dillingham
> w...@wesdillingham.com
> LinkedIn <http://www.linkedin.com/in/wesleydillingham>
>
>
> On Mon, Dec 5, 2022 at 5:20 PM Sean Matheny wrote:
doesn't need to communicate with the other (assuming the matching CRUSH hierarchy is in place).
Anyone have any good resources on this beyond the documentation, or can anyone at a minimum explain or confirm the slightly spooky nature of the "helper chunks" mentioned above?
With thanks,
Hi folks,
Our entire cluster is down at the moment.
We started upgrading from 12.2.13 (Luminous) to 14.2.7 (Nautilus) with the monitors. The first monitor we upgraded crashed. We reverted to Luminous on this one and tried another, and it was fine. We upgraded the rest, and they all worked.
Then we upgraded the first OSDs, and now:
degraded (41.236%)
10667 unknown
2869 active+undersized+degraded
885 down
673 peering
126 active+undersized
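For anyone triaging a similar state, the stuck PGs can be enumerated by state with the usual commands (a general sketch, not output from this cluster):

ceph health detail
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean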
On 19/02/2020, at 10:18 AM, Sean Matheny <s.math...@auckland.ac.nz> wrote:
Hi folks,
Our entire cluster is down at the moment.
Thanks,
If the OSDs have a newer epoch of the OSDMap than the MON, it won't work.
How can I verify this? (i.e. the epoch of the monitor vs the epoch of the OSDs)
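A sketch of how the epochs could be compared even with the cluster down; the osd id and mon path are illustrative:

# On an OSD host, the admin socket reports the osdmap range the OSD holds:
ceph daemon osd.0 status    # look at "oldest_map" / "newest_map"

# On a mon host (with the mon stopped), extract and print its newest osdmap:
ceph-monstore-tool /var/lib/ceph/mon/ceph-$(hostname -s) get osdmap -- --out /tmp/osdmap
osdmaptool --print /tmp/osdmap | grep ^epoch

If the OSDs' newest_map is ahead of the mon's epoch, that matches the failure mode described above.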
Cheers,
Sean
On 19/02/2020, at 7:25 PM, Wido den Hollander <w...@42on.com> wrote:
On 2/19/20 5:45 AM, Sean Matheny wrote:
On 19/02/2020, at 11:42 PM, Wido den Hollander wrote:
>
>
>
> On 2/19/20 10:11 AM, Paul Emmerich wrote:
>> On Wed, Feb 19, 2020 at 10:03 AM Wido den Hollander wrote:
>>>
>>>
>>>
>>> On 2/19/20 8:49 AM, Sean Matheny wrote: