I've been running a Ceph cluster on 5 servers, each with a single OSD and
also acting as a (kernel) client, for nearly half a year now, and I haven't
encountered a lockup yet. Total storage is 3.25 TB with about 600 GB of raw
storage used, if that matters.
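
For reference, these are plain kernel CephFS mounts on the OSD nodes
themselves; a minimal sketch of such a mount (monitor address, client name,
and paths here are placeholders, not my real values):

    # kernel CephFS mount on a node that also runs an OSD
    # (mon address, client name and secretfile path are example values)
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret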

Dan van der Ster <d...@vanderster.com> wrote on Tue., 23 Apr 2019, 09:33:

> On Mon, 22 Apr 2019, 22:20 Gregory Farnum, <gfar...@redhat.com> wrote:
>
>> On Sat, Apr 20, 2019 at 9:29 AM Igor Podlesny <ceph-u...@poige.ru> wrote:
>> >
>> > I remember seeing reports about this, but it's been a while now.
>> > Can anyone tell?
>>
>> No, this hasn't changed. It's unlikely it ever will; I think NFS
>> resolved the issue but it took a lot of ridiculous workarounds and
>> imposes a permanent memory cost on the client.
>>
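
For anyone running converged anyway, the usual partial mitigation (it narrows
the window, it doesn't remove the deadlock) is to keep the dirty page cache
small, so that memory reclaim rarely has to write back through the co-located
OSD. A sketch, with illustrative values only:

    # keep the dirty page cache small so reclaim seldom has to flush
    # through the local OSD (values are illustrative, tune per node)
    sysctl -w vm.dirty_background_ratio=1
    sysctl -w vm.dirty_ratio=3
    # keep a memory reserve for atomic allocations (example value)
    sysctl -w vm.min_free_kbytes=262144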
>
> On the other hand, we've been running OSDs and local kernel mounts through
> some IOR stress testing and managed to lock up only one node, only once
> (and that was with a 2 TB shared output file).
>
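
For context, a shared-file IOR run like the one described would look roughly
like this (rank count, sizes, and mount path are illustrative, not the exact
parameters of that test):

    # IOR shared-file write: 16 ranks x 128 GiB blocks = one 2 TiB file
    # (without -F all ranks write to a single shared file; -e fsyncs)
    mpirun -np 16 ior -a POSIX -w -e -t 4m -b 128g -o /mnt/cephfs/ior.out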
> Maybe the necessary memory-pressure conditions get less likely as the
> number of clients and OSDs grows? (I.e., it's probably easy to trigger
> with a single node/OSD, because all IO is local, but for large clusters
> most IO is remote.)
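
If you want to see how close a node gets during such a run, watching the
dirty/writeback page-cache counters is a cheap check (a sketch):

    # watch how much dirty data is waiting to be flushed to the OSDs
    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'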
>
> .. Dan
>
>
>> -Greg
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
