There have been talks on this subject on the mailing list before [1], which concur with Nick's experience, as long as you use AES-XTS.
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-March/008444.html

On Tue, Jan 3, 2017 at 2:30 PM, Nick Fisk <n...@fisk.me.uk> wrote:
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Kent Borg
> > Sent: 03 January 2017 12:47
> > To: M Ranga Swami Reddy <swamire...@gmail.com>
> > Cc: ceph-users <ceph-users@lists.ceph.com>
> > Subject: Re: [ceph-users] performance with/without dmcrypt OSD
> >
> > On 01/03/2017 06:42 AM, M Ranga Swami Reddy wrote:
> > > On Tue, Jan 3, 2017 at 6:17 AM, Kent Borg <kentb...@borg.org> wrote:
> > > > Assuming I am understanding the question...
> > > > If there isn't too big a performance hit, it makes disk disposal (we
> > > > expect disks to die, right?) much simpler.
> > >
> > > OK. Thanks. But if I have big volumes, TB in size (a 10 TB volume, say),
> > > and am writing to and reading from them - will there be an impact on
> > > performance, i.e. on write and read speed?
> >
> > I'd like to know, too.
> >
> > -kb
>
> Not specifically related to Ceph, but I built a 14-disk RAID 6 array
> (mdadm) for a recent "secure high-performance seeding device in a
> briefcase" project and used dm-crypt on it. I could easily obtain over
> 1 GB/s reads and writes. From tests there was no noticeable performance
> impact, and CPU usage on a Xeon E3 was nothing to be concerned about.
> All modern CPUs will hardware-accelerate the process if you use the
> AES-XTS cipher; I suspect there might be a severe performance impact
> without it.
>
> Also, as Ceph plus the network itself brings a fair amount of overhead,
> I wouldn't expect dm-crypt to introduce any noticeable overhead of its own.
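If you want to verify Nick's point on your own hardware before committing to dmcrypt OSDs, a quick sketch (assumes a Linux box with cryptsetup installed; the serpent-xts cipher is just an arbitrary non-accelerated comparison point):

```shell
#!/bin/sh
# 1. Does the CPU expose the AES-NI instruction set? dm-crypt's AES-XTS
#    path is only "free" when the "aes" flag is present.
grep -q '\baes\b' /proc/cpuinfo && echo "AES-NI present" || echo "no AES-NI"

# 2. cryptsetup's built-in in-memory cipher benchmark. Compare the
#    hardware-accelerated aes-xts against a cipher with no acceleration
#    (serpent-xts here) to see the gap Nick alludes to.
if command -v cryptsetup >/dev/null 2>&1; then
    cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512
    cryptsetup benchmark --cipher serpent-xts-plain64 --key-size 512
fi
```

On AES-NI hardware the aes-xts numbers typically land well above what a single spinning OSD can deliver, which matches the "no noticeable overhead" observation above.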
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com