> -----Original Message-----
> From: Anand Bhat [mailto:anand.b...@gmail.com]
> Sent: 07 July 2016 13:46
> To: n...@fisk.me.uk
> Cc: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] RBD - Deletion / Discard - IO Impact
> 
> This is a known problem.
> 
> Are you doing mkfs.xfs on an SSD? If so, please check the SSD data sheet to see whether
> UNMAP is supported. To avoid issuing UNMAP during mkfs, use
> mkfs.xfs -K


Thanks for your reply

The RBDs are on normal spinners (+ SSD journals).
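
For reference, the -K flag tells mkfs.xfs not to discard blocks at format time, which should avoid turning the format itself into a burst of object deletes on the OSDs. A minimal sketch, assuming a krbd-mapped device at /dev/rbd0 (hypothetical path):

    mkfs.xfs -K /dev/rbd0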

> 
> Regards,
> Anand
> 
> On Thu, Jul 7, 2016 at 5:23 PM, Nick Fisk <n...@fisk.me.uk> wrote:
> Hi All,
> 
> Does anybody else see a massive (i.e. 10x) performance impact when either
> deleting an RBD or running something like mkfs.xfs against
> an existing RBD, which would zero/discard all blocks?
> 
> In the case of deleting a 4TB RBD, I’m seeing latency in some cases rise up 
> to 10s.
> 
> It looks like the XFS deletions on the OSDs are potentially
> responsible for the massive drop in performance, as I see random
> OSDs in turn peak at 100% utilisation.
> 
> I’m not aware of any throttling that can be done to reduce this impact, but
> would be interested to hear from anyone else that may
> experience this.
> 
> Nick
> 
> --
> ----------------------------------------------------------------------------
> Never say never.
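
On the throttling question in the quoted message above: one knob that may help for image deletion is rbd_concurrent_management_ops, which limits how many object delete requests "rbd rm" keeps in flight (default 10); lowering it should spread the delete over a longer period. A minimal sketch, assuming an image called rbd/myimage (hypothetical name) and that your rbd client accepts config overrides on the command line:

    rbd rm rbd/myimage --rbd-concurrent-management-ops 1

Note this only throttles deletes driven by the rbd tool itself; discards issued by mkfs or from inside a VM are not affected by it.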

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
