I had a similar issue when migrating from SSD to NVMe on Ubuntu. Read
performance tanked on NVMe: iostat showed each NVMe device doing about 30x
more physical reads than the SSDs, yet at roughly 1/6 the MB/s. I set
"blockdev --setra 128 /dev/nvmeX" and now NVMe performance is much better
than it was with SSD. With our SSDs and PCIe flash cards we used --setra 0,
since those devices handle read look-ahead internally; our NVMe devices
benefit from setting --setra.
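
For reference, a minimal sketch of the check/set (the /dev/nvme?n1 glob and
the persistence note are assumptions; pick the value that suits your
workload):

  # Show the current read-ahead (in 512-byte sectors) for each NVMe namespace
  for dev in /dev/nvme?n1; do
      echo -n "$dev: "; blockdev --getra "$dev"
  done

  # 128 sectors = 64 KB; the setting does not survive a reboot, so persist
  # it via a udev rule or your init scripts if it helps
  for dev in /dev/nvme?n1; do
      blockdev --setra 128 "$dev"
  done
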
Rick
> On Jul 26, 2016, at 8:09 PM, Somnath Roy <somnath....@sandisk.com> wrote:
> 
> << Ceph performance in general (without read_ahead_kb) will be lower,
> especially in all flash, as the requests will be serialized within a PG
>  
> I meant to say Ceph sequential performance. Sorry for the spam.
>  
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Somnath Roy
> Sent: Tuesday, July 26, 2016 5:08 PM
> To: EP Komarla; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph performance pattern
>  
> Not exactly, but we are seeing some drop at 256K compared to 64K. That was
> with random reads on Ubuntu, though. We had to bump read_ahead_kb up from
> the default 128KB to 512KB to work around it.
> But in RHEL we saw all sorts of issues with read_ahead_kb for small-block
> random reads, and I think it already defaults to 4MB or so there. If so,
> try reducing it to 512KB and see.
> Generally, for sequential reads, you need to play with read_ahead_kb to
> achieve better performance. Ceph performance in general (without
> read_ahead_kb) will be lower, especially in all flash, as the requests will
> be serialized within a PG.
> Our testing is all flash, though, so take my comments with a grain of salt
> for Ceph + HDD.
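
For anyone following along, read_ahead_kb is a per-block-device sysfs knob.
A minimal sketch below; the rbd0/sdb device names are just examples, and
whether you tune the client-side krbd device or the OSD data disks depends
on where the sequential reads are actually issued:

  # Check the current value (in KB) on a client-side krbd device
  cat /sys/block/rbd0/queue/read_ahead_kb

  # Raise it to 512 KB for sequential workloads (not persistent across
  # reboots; a udev rule can make it stick)
  echo 512 > /sys/block/rbd0/queue/read_ahead_kb

  # The same knob exists on an OSD data disk, e.g. /dev/sdb
  cat /sys/block/sdb/queue/read_ahead_kb
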
>  
> Thanks & Regards
> Somnath
>  
>  
> From: EP Komarla [mailto:ep.koma...@flextronics.com]
> Sent: Tuesday, July 26, 2016 4:50 PM
> To: Somnath Roy; ceph-users@lists.ceph.com
> Subject: RE: Ceph performance pattern
>  
> Thanks Somnath.  
>  
> I am running CentOS 7.2.  Have you seen this pattern before?
>  
> - epk
> 
> From: Somnath Roy [mailto:somnath....@sandisk.com]
> Sent: Tuesday, July 26, 2016 4:44 PM
> To: EP Komarla <ep.koma...@flextronics.com>; ceph-users@lists.ceph.com
> Subject: RE: Ceph performance pattern
>  
> Which OS/kernel are you running?
> Try setting bigger read_ahead_kb for sequential runs.
>  
> Thanks & Regards
> Somnath
>  
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of EP Komarla
> Sent: Tuesday, July 26, 2016 4:38 PM
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Ceph performance pattern
>  
> Hi,
>  
> I am showing below the fio results for sequential reads on my Ceph
> cluster.  I am trying to understand this pattern:
>  
> - Why is there a dip in performance for block sizes 32K-256K?
> - Is this an expected performance graph?
> - Have you seen this kind of pattern before?
>  
> <image001.png: fio sequential-read results by block size>
>  
> My cluster details:
> Ceph: Hammer release
> Cluster: 6 nodes (dual-socket Intel), each with 20 OSDs and 4 SSDs (5 OSD
> journals per SSD)
> Client network: 10Gbps
> Cluster network: 10Gbps
> FIO test:
> - 2 Client servers
> - Sequential Read
> - Run time of 600 seconds
> - Filesize = 1TB
> - 10 rbd images per client
> - Queue depth=16
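
For anyone reproducing this, a rough single-image equivalent of the workload
above as one fio invocation; the pool, image, and client names are
placeholders, it assumes fio was built with RBD support, and you would
repeat it per image and sweep bs across runs to recreate the graph:

  fio --name=seqread --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=testimg01 --rw=read --bs=64k --iodepth=16 \
      --runtime=600 --time_based
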
>  
> Any ideas on tuning this cluster?  Where should I look first?
>  
> Thanks,
>  
> - epk
>  
> 


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
