>> We are seeing some significant I/O delays on the disks causing a “SCSI Task 
>> Abort” from the OS. This seems to be triggered by the drive receiving a 
>> “Synchronize cache command”.
How exactly do you know this is the cause? A Synchronize Cache command is usually just an effect of something going wrong, issued as part of the error-recovery process.
The real error/root cause should be found in whatever precedes this event...

It is _supposedly_ safe to disable barriers in this scenario, but IMO the 
assumptions behind that are deeply flawed, and in my experience it is not 
necessary with fast drives (such as the S3700).
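
For anyone who does decide to go that route anyway: barriers are controlled per mount. A minimal sketch, where the device and OSD mount point are made-up placeholders (not from this thread), not a recommendation:

```shell
# Placeholder device and mount point -- adjust for your own OSDs.
# One-off change on a live system:
#   mount -o remount,nobarrier /var/lib/ceph/osd/ceph-0
# Persistent version, as an /etc/fstab entry:
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  noatime,nobarrier  0  2
```

Note the XFS mount option is `nobarrier` (singular); the default is barriers on.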

Take a look in the mailing list archives; I elaborated on this quite a bit in 
the past, including my experience with Kingston drives + XFS + LSI. (The effect 
is present even on Intels, but because they are much faster it shouldn't cause 
any real problems.)
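
Either way, it is worth checking what a given OSD mount is actually doing today before changing anything. A hedged sketch (the mount point is a hypothetical example, not from this thread):

```shell
# Check whether an XFS mount currently has write barriers disabled.

# Return 0 if a comma-separated mount-option string contains "nobarrier".
has_nobarrier() {
    case ",$1," in
        *,nobarrier,*) return 0 ;;
        *)             return 1 ;;
    esac
}

mp=/var/lib/ceph/osd/ceph-0   # placeholder OSD data mount point
opts=$(awk -v mp="$mp" '$2 == mp { print $4 }' /proc/mounts)

if has_nobarrier "$opts"; then
    echo "barriers disabled on $mp"
else
    echo "barriers enabled on $mp (XFS default)"
fi
```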

Jan


> On 04 Sep 2015, at 21:55, Richard Bade <hitr...@gmail.com> wrote:
> 
> Hi Everyone,
> 
> We have a Ceph pool that is entirely made up of Intel S3700/S3710 enterprise 
> SSDs.
> 
> We are seeing some significant I/O delays on the disks causing a “SCSI Task 
> Abort” from the OS. This seems to be triggered by the drive receiving a 
> “Synchronize cache command”.
> 
> My current thinking is that setting nobarrier in XFS will stop the drive 
> receiving sync commands and therefore eliminate the associated I/O delay.
> 
> In the XFS FAQ it looks like the recommendation is that if you have a 
> battery-backed RAID controller you should set nobarrier for performance reasons.
> 
> Our LSI card doesn't have battery-backed cache as it's configured in HBA mode 
> (IT) rather than RAID (IR). Our Intel S37xx SSDs do have a capacitor-backed 
> cache though.
> 
> So is it recommended that barriers are turned off, given that the drive has a 
> safe cache (I am confident that the cache will write out to disk on power failure)?
> 
> Has anyone else encountered this issue?
> 
> Any info or suggestions about this would be appreciated. 
> 
> Regards,
> 
> Richard
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
