Re: [ceph-users] How To Scale Ceph for Large Numbers of Clients?

2019-03-14 Thread Zack Brenton
On Tue, Mar 12, 2019 at 6:10 AM Stefan Kooman wrote:
> Hmm, 6 GiB of RAM is not a whole lot. Especially if you are going to
> increase the amount of OSDs (partitions) like Patrick suggested. By
> default it will take 4 GiB per OSD ... Make sure you set the
> "osd_memory_target" parameter accordingly
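For reference, a minimal sketch of how that tuning can be applied — the ~1.5 GiB value below is purely illustrative, sized for a node with around 6 GiB of RAM and a handful of OSDs:

```
# Illustrative: cap each OSD's memory target well below the 4 GiB default.
# Via the centralized config database (Mimic and later):
ceph config set osd osd_memory_target 1610612736   # ~1.5 GiB

# Equivalent ceph.conf setting:
# [osd]
# osd_memory_target = 1610612736
```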

Re: [ceph-users] How To Scale Ceph for Large Numbers of Clients?

2019-03-07 Thread Zack Brenton
On Thu, Mar 7, 2019 at 2:38 PM Patrick Donnelly wrote:
> Is this with one active MDS and one standby-replay? The graph is odd
> to me because the session count shows sessions on fs-b and fs-d but
> not fs-c. Or maybe max_mds=2 and fs-d has no activity and fs-c is
> standby-replay?

The graphs w
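As a rough sketch of how the MDS layout Patrick is asking about can be inspected and changed (the filesystem name `myfs` is taken from the pool listing further down; the `allow_standby_replay` flag applies to Nautilus and later releases):

```
ceph fs status                              # active ranks, standbys, per-rank activity
ceph fs get myfs | grep max_mds             # how many active MDS ranks are allowed
ceph fs set myfs max_mds 2                  # run two active MDS ranks
ceph fs set myfs allow_standby_replay true  # standbys tail an active rank's journal (Nautilus+)
```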

Re: [ceph-users] How To Scale Ceph for Large Numbers of Clients?

2019-03-07 Thread Zack Brenton
```
1 myfs-metadata
2 myfs-data0
```

Let me know if there's any other information I can provide that would be helpful.

Thanks,
Zack

On Wed, Mar 6, 2019 at 9:49 PM Patrick Donnelly wrote:
> Hello Zack,
>
> On Wed, Mar 6, 2019 at 1:18 PM Zack Brenton wrote:
> >
> > H
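For context, listings like the `myfs-metadata` / `myfs-data0` one at the top of this message come from the standard pool and filesystem inspection commands (`myfs` is the filesystem name used in this cluster):

```
ceph osd lspools   # pool id and name, e.g. "1 myfs-metadata"
ceph fs ls         # which metadata/data pools back each CephFS filesystem
ceph df            # per-pool usage, handy when watching metadata pool growth
```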

[ceph-users] How To Scale Ceph for Large Numbers of Clients?

2019-03-06 Thread Zack Brenton
Hello,

We're running Ceph on Kubernetes 1.12 using the Rook operator (https://rook.io), but we've been struggling to scale applications mounting CephFS volumes above 600 pods / 300 nodes. All our instances use the kernel client and run kernel `4.19.23-coreos-r1`. We've tried increasing the MDS m
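As a sketch of the knobs typically involved when many kernel clients mount a single CephFS filesystem through Rook (values are illustrative, and the `rook-ceph` namespace is the conventional default, not necessarily this cluster's):

```
# MDS side: give the metadata server a larger cache than the 1 GiB default of that era.
ceph config set mds mds_cache_memory_limit 17179869184   # 16 GiB, illustrative

# Rook side: the CephFilesystem CR controls MDS count and pod resources.
kubectl -n rook-ceph get cephfilesystem myfs -o yaml      # check spec.metadataServer (activeCount, resources)
```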