Re: [ceph-users] Help understanding EC object reads

2019-09-09 Thread Gregory Farnum
On Thu, Aug 29, 2019 at 4:57 AM Thomas Byrne - UKRI STFC wrote: Hi all, I’m investigating an issue with our (non-Ceph) caching layers on our large EC cluster. It seems to be turning users’ requests for whole objects into lots of small byte-range requests reaching the OSDs, but I’m not
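[Editor's note: a quick way to confirm what is actually arriving at the OSDs is to pull recent ops off an OSD's admin socket. A minimal sketch, assuming osd.12 is one of the EC pool's OSDs and the commands are run on the host that owns it:

    ceph daemon osd.12 dump_historic_ops     (recently completed ops, including the offset/length of each read)
    ceph daemon osd.12 dump_ops_in_flight    (ops currently queued, useful while a client is streaming a large object)

Comparing the read sizes in that output against the full object size shows whether the caching layer is splitting requests before they reach RADOS.]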

Re: [ceph-users] iostat and dashboard freezing

2019-09-09 Thread Konstantin Shalygin
On 8/29/19 9:56 PM, Reed Dier wrote: "config/mgr/mgr/balancer/active", "config/mgr/mgr/balancer/max_misplaced", "config/mgr/mgr/balancer/mode", "config/mgr/mgr/balancer/pool_ids", These keys are unused; you may remove them. https://pastebin.com/bXPs28h1 The issues you have: 1. Multi-root.
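[Editor's note: a sketch of how those stale entries could be dropped with config-key, using the keys quoted above:

    ceph config-key rm config/mgr/mgr/balancer/active
    ceph config-key rm config/mgr/mgr/balancer/max_misplaced
    ceph config-key rm config/mgr/mgr/balancer/mode
    ceph config-key rm config/mgr/mgr/balancer/pool_ids
]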

Re: [ceph-users] iostat and dashboard freezing

2019-09-09 Thread Konstantin Shalygin
On 9/2/19 5:47 PM, Jake Grimmett wrote: Hi Konstantin, To confirm, disabling the balancer allows the mgr to work properly. I tried re-enabling the balancer; it briefly worked, then locked up the mgr again. Here it's working OK... [root@ceph-s1 ~]# time ceph balancer optimize new real 0m1.6
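[Editor's note: for reference, toggling the balancer module and checking its state uses the standard commands, e.g.:

    ceph balancer off       (stop automatic optimization)
    ceph balancer status    (confirm the module is idle)
    ceph balancer on        (re-enable once the mgr is healthy again)
]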

[ceph-users] AutoScale PG Questions - EC Pool

2019-09-09 Thread Ashley Merrick
I have an EC pool (8+2) which has 30 OSDs (3 nodes), grown from the original 10 OSDs (1 node). I originally set the pool with a pg_num of 300, however the PG autoscaler is showing a warning saying I should set this to 2048. I am not sure if this is a good suggestion, or if the autoscaler currently is n
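[Editor's note: the autoscaler's per-pool recommendation, along with the pool size and target ratios it based it on, can be inspected with:

    ceph osd pool autoscale-status
]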

Re: [ceph-users] AutoScale PG Questions - EC Pool

2019-09-09 Thread Konstantin Shalygin
I have an EC pool (8+2) which has 30 OSDs (3 nodes), grown from the original 10 OSDs (1 node). I originally set the pool with a pg_num of 300, however the PG autoscaler is showing a warning saying I should set this to 2048. I am not sure if this is a good suggestion, or if the autoscaler currently is

Re: [ceph-users] AutoScale PG Questions - EC Pool

2019-09-09 Thread Ashley Merrick
So am I correct that 2048 is a very high number, and I should go for either 256 or 512 as you said, for a cluster of my size with the EC pool of 8+2? Thanks On Tue, 10 Sep 2019 14:12:58 +0800 Konstantin Shalygin wrote: You should not use 300 PGs because

Re: [ceph-users] perf dump and osd perf will cause the performance of ceph if I run it for each service?

2019-09-09 Thread Romit Misra
Hi Lin, My 2 cents: 1. The perf dump stats for the Ceph daemons can be collected via the admin socket itself. 2. Since all the daemons run in a distributed fashion, you would need to collect them at a per-host/daemon level. 3. There is no performance impact collecting every
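[Editor's note: a sketch of admin-socket collection, assuming osd.0 and the default socket path:

    ceph daemon osd.0 perf dump
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump    (same thing, addressing the socket directly)

The same pattern works for mon and mds daemons by pointing at their sockets.]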

Re: [ceph-users] AutoScale PG Questions - EC Pool

2019-09-09 Thread Konstantin Shalygin
On 9/10/19 1:17 PM, Ashley Merrick wrote: So am I correct that 2048 is a very high number, and I should go for either 256 or 512 as you said, for a cluster of my size with the EC pool of 8+2? Indeed. I suggest staying at 256. k
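[Editor's note: the usual rule of thumb behind the 256 figure: target roughly 100 PGs per OSD, divided by the pool's total size (k+m for EC), then rounded to a power of two. A rough worked example for this cluster:

    30 OSDs * 100 / 10 (EC 8+2) = 300  ->  nearest power of two is 256
]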