[ceph-users] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-24 Thread Amudhan P
No, ping with an MTU size of 9000 didn't work.

On Sun, May 24, 2020 at 12:26 PM Khodayar Doustar wrote:
> Does your ping work or not?
>
> On Sun, May 24, 2020 at 6:53 AM Amudhan P wrote:
>> Yes, I have applied the setting on the switch side also.
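A plain ping with a 9000-byte payload will fail even on a healthy jumbo-frame link, because the payload size excludes the IP and ICMP headers. A minimal sketch of a proper test on Linux, assuming iputils ping and a placeholder peer address:

    # 9000 MTU - 20 (IPv4 header) - 8 (ICMP header) = 8972-byte payload
    # -M do sets the don't-fragment bit so an undersized path fails loudly
    ping -M do -s 8972 10.0.0.2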

[ceph-users] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-24 Thread Amudhan P
It's a Dell S4048T-ON switch using 10G Ethernet.

On Sat, May 23, 2020 at 11:05 PM apely agamakou wrote:
> Hi,
>
> Please check your MTU limit at the switch level, and check other
> resources with an ICMP ping.
> Try adding 14 bytes for the Ethernet header at your switch level, meaning an MTU of 9014?

[ceph-users] Re: Cephfs IO halt on Node failure

2020-05-24 Thread Amudhan P
Sorry for the late reply. I have pasted the crush map at the URL below: https://pastebin.com/ASPpY2VB and this is my osd tree output; this issue occurs only when I use it with a file layout.

ID CLASS WEIGHT    TYPE NAME    STATUS REWEIGHT PRI-AFF
-1       327.48047 root default
-3       109.16016     hos
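Since the hang only appears with file layouts, it may help to see which pool the layout points at; CephFS exposes layouts as virtual xattrs. A sketch, with paths and pool name as placeholders:

    # show the layout applied to an existing file
    getfattr -n ceph.file.layout /mnt/cephfs/somefile
    # direct new files in a directory to a specific data pool
    setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/somedir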

[ceph-users] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-24 Thread Khodayar Doustar
So this is your problem; it has nothing to do with Ceph. Just fix the network or roll back all changes.

[ceph-users] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-24 Thread Amudhan P
I didn't make any changes, but it has started working now with jumbo frames.

[ceph-users] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-24 Thread Suresh Rama
A ping with a 9000-byte payload won't get a response, as I said; it should be 8972. Glad it is working, but you should understand what happened so you can avoid this issue later.
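For the record, the arithmetic behind the 8972 figure (assuming IPv4 and ICMP):

    8972 bytes ICMP payload
    +  8 bytes ICMP header
    + 20 bytes IPv4 header
    = 9000 bytes IP MTU   (the 14-byte Ethernet header rides on top, hence the 9014-byte frame mentioned earlier)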

[ceph-users] Re: question on ceph node count

2020-05-24 Thread tim taler
Yep, my fault, I meant replication = 3.

>> But aren't PGs checksummed, so from the remaining PG (given the
>> checksum were right) two new copies could be created?
>
> Assuming again 3R on 5 nodes, failure domain of host, if 2 nodes go down,
> there will be 1/3 copies available. Normally
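For context, whether that single surviving copy keeps serving I/O depends on the pool's min_size; with the default size=3/min_size=2, a PG with one replica left pauses I/O until recovery. A quick check (a sketch; "mypool" is a placeholder):

    # replication factor and the minimum replicas required to accept I/O
    ceph osd pool get mypool size
    ceph osd pool get mypool min_size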

[ceph-users] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-24 Thread Martin Verges
Just save yourself the trouble. You won't see any real benefit from an MTU of 9000. It brings some small gains, but for most environments it is not worth the effort, the problems, and the loss of reliability. Try it yourself and do some benchmarks, especially with your regular workload on the cluster (not the maximum
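One way to run such a benchmark is the stock rados bench tool; a sketch, assuming a throwaway pool named "testbench", run once per MTU setting for comparison:

    # 60-second write test, keeping the objects for the read pass
    rados bench -p testbench 60 write --no-cleanup
    # sequential read of what was just written
    rados bench -p testbench 60 seq
    # remove the benchmark objects afterwards
    rados -p testbench cleanup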

[ceph-users] RGW REST API failed request with status code 403

2020-05-24 Thread apely agamakou
Hi, since my upgrade from 15.2.1 to 15.2.2 I've got this error message in the "Object Gateway" section of the dashboard:

RGW REST API failed request with status code 403 (b'{"Code":"InvalidAccessKeyId","RequestId":"tx00017-005ecac06c' b'-e349-eu-west-1","HostId":"e349-eu-west-1-def
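A 403 InvalidAccessKeyId from the dashboard usually means the credentials stored in the dashboard module no longer match any RGW user. A hedged sketch of re-syncing them (the user ID "dashboard" is a placeholder, and recent releases expect the key via -i <file> rather than inline):

    # confirm which keys the RGW user actually has
    radosgw-admin user info --uid=dashboard
    # store matching keys in the dashboard module
    ceph dashboard set-rgw-api-access-key <access_key>
    ceph dashboard set-rgw-api-secret-key <secret_key>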

[ceph-users] RGW Garbage Collector

2020-05-24 Thread EDH - Manuel Rios
Hi, I'm looking for any experience optimizing the garbage collector with the following configs:

global advanced rgw_gc_obj_min_wait
global advanced rgw_gc_processor_max_time
global advanced rgw_gc_processor_

[ceph-users] Re: RGW Garbage Collector

2020-05-24 Thread Matt Benjamin
Hi Manuel,

rgw_gc_obj_min_wait -- yes, this is how you control how long rgw waits before removing the stripes of deleted objects.

The following are more about gc performance and the proportion of available iops:

rgw_gc_processor_max_time -- controls how long gc runs once scheduled; a large value might be 3
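A minimal sketch of adjusting these options through the config subsystem, matching the "global" scope shown above (the values are illustrative only, not recommendations):

    # seconds to wait before GC may reclaim a deleted object's stripes
    ceph config set global rgw_gc_obj_min_wait 3600
    # seconds a scheduled GC pass is allowed to run
    ceph config set global rgw_gc_processor_max_time 3600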

[ceph-users] Re: RGW Garbage Collector

2020-05-24 Thread EDH - Manuel Rios
Thanks Matt for the fast response. Tonight at the datacenter we are adding more OSDs for S3. I will change the params and come back to share the experience.

Regards,
Manuel

[ceph-users] Re: [External Email] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-24 Thread Dave Hall
Amudhan,

Here is a trick I've used to test and evaluate jumbo frames without breaking production traffic (see the sketch after this list):

* Open a couple of root ssh sessions on each of the two systems you want to test with.
  o In one window, start a continuous ping to the other system.
* On both test systems:
  o
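A hedged reconstruction of that procedure on Linux (interface name and peer address are placeholders; the continuous ping shows immediately whether a change broke connectivity):

    # window 1 on each host: a heartbeat that keeps running throughout
    ping 10.0.0.2
    # window 2: raise the MTU, then verify unfragmented jumbo pings pass
    ip link set dev eth0 mtu 9000
    ping -M do -s 8972 10.0.0.2
    # if anything breaks, revert immediately
    ip link set dev eth0 mtu 1500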

[ceph-users] Re: [External Email] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-24 Thread Dave Hall
All,

Regarding Martin's observations about jumbo frames: I have recently been gathering notes from various internet sources on Linux network performance, and Linux performance in general, to be applied to a Ceph cluster I manage but also to the rest of the Linux server farm I'm

[ceph-users] Re: RGW resharding

2020-05-24 Thread lin yunfan
Can you store your data in different buckets?

linyunfan

On Tue, May 19, 2020 at 3:32 PM Adrian Nicolae wrote:
> Hi,
>
> I have the following Ceph Mimic setup:
>
> - a bunch of old servers with 3-4 SATA drives each (74 OSDs in total)
> - index/leveldb is stored on each OSD (so no SSD drives, just SATA)
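For reference, a large bucket's index can also be spread over more shards instead of splitting the data across buckets; a sketch, with bucket name and shard count as placeholders (Mimic additionally supports dynamic resharding via rgw_dynamic_resharding):

    # inspect the current shard count and object count
    radosgw-admin bucket stats --bucket=mybucket
    # queue a manual reshard and run it
    radosgw-admin reshard add --bucket=mybucket --num-shards=64
    radosgw-admin reshard process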

[ceph-users] Re: remove secondary zone from multisite

2020-05-24 Thread Zhenshi Zhou
Did anyone deal with this? Can I just remove the secondary zone from the cluster? I'm not sure whether this action has any effect on the master zone. Thanks.

On Fri, May 22, 2020 at 11:22 AM Zhenshi Zhou wrote:
> Hi all,
>
> I'm going to take my secondary zone offline.
> How do I remove the secondary zone from a multisite?
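A hedged outline of the usual removal steps, run against the master zone's cluster (zonegroup and zone names are placeholders; this detaches the zone from the period but does not delete its data):

    # remove the secondary zone from its zonegroup
    radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=secondary
    # commit the updated period so all gateways see the change
    radosgw-admin period update --commit
    # optionally drop the zone definition itself
    radosgw-admin zone delete --rgw-zone=secondary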