Hi Xiubo,
Did you get a chance to work on this? I am curious to test out the
improvements.
Thanks and Regards,
Ashu Pachauri
On Fri, Mar 17, 2023 at 3:33 PM Frank Schilder wrote:
> Hi Ashu,
>
> thanks for the clarification. That's not an option that is easy to change.
>
Setting rasize alone is not sufficient; one also needs to change the
corresponding configurations that control the maximum/minimum readahead for
ceph clients.
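For anyone following along, these are the settings I believe are relevant
(please double-check the names and defaults against your Ceph release; the
values below are only examples):

  # kernel client: the readahead window is fixed at mount time (bytes)
  mount -t ceph <mon-host>:/ /mnt/cephfs -o name=cephfs,rasize=4194304

  # ceph-fuse / libcephfs: readahead is governed by client config options
  ceph config set client client_readahead_min 131072
  ceph config set client client_readahead_max_bytes 4194304
  ceph config set client client_readahead_max_periods 4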
Thanks and Regards,
Ashu Pachauri
On Fri, Mar 17, 2023 at 2:14 PM Xiubo Li wrote:
>
> On 15/03/2023 17:20, Frank Schilder wrote:
> > Hi Ashu,
> >
The client still ends up reading in large chunks (and then discarding most of
the pulled data) even if you set readahead to zero. So, the solution for us
was to set a lower stripe size, which aligns better with our workloads.
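In case it saves someone time, this is roughly how a smaller stripe unit can
be applied via a directory layout (the directory name and values are just
illustrative, and the layout only affects files created after it is set):

  mkdir -p /mnt/cephfs/dbdata
  setfattr -n ceph.dir.layout.stripe_unit -v 65536 /mnt/cephfs/dbdata
  setfattr -n ceph.dir.layout.stripe_count -v 1 /mnt/cephfs/dbdata
  setfattr -n ceph.dir.layout.object_size -v 4194304 /mnt/cephfs/dbdata
  getfattr -n ceph.dir.layout /mnt/cephfs/dbdata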
Thanks and Regards,
Ashu Pachauri
On Fri, Mar 10, 2023 at 9:41 PM Ashu Pachauri wrote:
> Also, I am able to repro
data.
Thanks and Regards,
Ashu Pachauri
On Fri, Mar 10, 2023 at 9:22 PM Ashu Pachauri wrote:
> We have an internal use case where we back the storage of a proprietary
> database by a shared file system. We noticed something very odd when
> testing some workloads with a local block device versus the shared file
> system. This is the CephFS mount (readahead disabled for this test):
/mnt/cephfs type ceph (rw,relatime,name=cephfs,secret=,acl,rasize=0)
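If someone wants to reproduce the gap, a hypothetical fio run along these
lines (the file path and parameters are illustrative, not our exact test)
exercises the kind of small random reads where excessive readahead hurts most:

  fio --name=randread --filename=/mnt/cephfs/testfile \
      --rw=randread --bs=8k --size=4g --direct=1 \
      --ioengine=libaio --iodepth=16 --runtime=60 --time_based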
Any help or pointers are appreciated; this is a major performance issue for
us.
Thanks and Regards,
Ashu Pachauri