> osd_op_queue = wpq
> > osd_op_queue_cut_off = high
>
>
> Afaik, the default for osd_op_queue_cut_off was set to low by mistake
> prior to Octopus.
>
>
> Peter
>
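For reference, a minimal sketch of how the two quoted settings could be applied on a pre-Octopus cluster, assuming the centralized config database (`ceph config`, available since Mimic) is in use; the queue settings are read at OSD start-up, so OSDs need a restart for the change to take effect:

```shell
# Sketch only -- verify option names against your release's documentation.
# Persist the queue settings cluster-wide for all OSDs:
ceph config set osd osd_op_queue wpq
ceph config set osd osd_op_queue_cut_off high

# Then restart the OSDs (e.g. one host at a time) so the new
# queue settings take effect.
```

The same values can instead be placed in the `[osd]` section of ceph.conf on each host.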
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> assure maximum resilience.
>
> -Dave
>
> --
> Dave Hall
> Binghamton University
> kdh...@binghamton.edu
> 607-760-2328 (Cell)
> 607-777-4641 (Office)
>
>
> On Thu, Mar 11, 2021 at 2:20 PM Steven Pine
> wrote:
>
>> Setting the failure domain on a pe
llocated.
>
> In terms of making this easier, we're looking to automate rolling format
> changes across a cluster with cephadm in the future.
>
> Josh
>
> On 2/16/21 9:58 AM, Steven Pine wrote:
> > Will there be a well documented strategy / method for changing block
>
> >> [1] https://docs.ceph.com/en/latest/radosgw/layout/
> >> [2] https://github.com/ceph/ceph/pull/32809
> >> [3] https://www.spinics.net/lists/ceph-users/msg45755.html
> >
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
>
>
S3 gateways have much more compute power and internet bandwidth than
> is currently being used.
>
> Thank you
>
> Regards
> Michal Strnad
similar number of NVMe drives
> will bottleneck. Unless perhaps you have the misfortune of a chassis
> manufacturer who for some reason runs NVMe PCI lanes *through* an HBA.
>
>
>
>
older Ceph versions are no longer accessible on docs.ceph.com
> >>
> >> The UI has changed because we're hosting them on readthedocs.com now. See
> >> the dropdown in the lower right corner.
> >>
> >
>
> > ...
> >
> > The USED = 3 * STORED in 3-replica mode is completely right, but for the EC
> > 4+2 pool (default-fs-data0)
> > the USED is not equal to 1.5 * STORED. Why? :(
> >
> >
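As a quick sanity check on the ratios being discussed, here is a small Python sketch (the helper names are mine, not any Ceph API): a replicated pool's nominal raw usage is size full copies of the data, while an EC k+m pool stripes each object into k data chunks plus m parity chunks, giving a nominal raw/logical ratio of (k+m)/k, which for 4+2 is exactly 1.5. Real `ceph df` numbers can sit above this nominal figure, for example when small objects are padded up to the allocation unit.

```python
# Hypothetical helper names -- not part of any Ceph API.

def nominal_used(stored_bytes: float, k: int, m: int) -> float:
    """Nominal raw usage for an erasure-coded k+m pool.

    Each object is striped into k data chunks plus m parity chunks,
    so the raw/logical overhead is (k+m)/k.
    """
    return stored_bytes * (k + m) / k

def nominal_used_replicated(stored_bytes: float, size: int) -> float:
    """Nominal raw usage for a replicated pool: `size` full copies."""
    return stored_bytes * size

if __name__ == "__main__":
    stored = 100 * 1024**3  # 100 GiB of logical (STORED) data
    print(nominal_used_replicated(stored, 3) / stored)  # 3.0
    print(nominal_used(stored, 4, 2) / stored)          # 1.5
```

If the observed USED/STORED ratio for an EC 4+2 pool is well above 1.5, the gap is overhead beyond the pure coding scheme rather than an error in the 1.5 expectation.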
pg repair attempt?
Thank you for any suggestions or advice,
--
Steven Pine
webair.com
*P* 516.938.4100 x
*E * steven.p...@webair.com
> I posted the same message in the issue tracker,
> https://tracker.ceph.com/issues/44731
>
> --
> Vitaliy Filippov