On Sat, 16 Feb 2013, Linda Walsh wrote:
> I wondered about that as well -- it could speed things up by 30%
> over going through the slow Linux buffers. One thing the 'dd' people
> found out, though, is that if you do direct I/O, your memory and your
> I/O sizes really do have to line up. It may be that only 512-byte
> alignment is necessary (or 4096 on some newer disks), but ideally you
> look at the I/O block size that stat reports, since that field isn't
> the allocation size but the smallest optimal write size -- i.e. the
> "stripe size" if you have a RAID. There you want to write whole
> stripes at once, otherwise you get into a read/modify/write cycle
> that slows down your disk I/O with 200% overhead -- *ouch!*...
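
For illustration, a minimal sketch of that alignment handling might
look like the following. This is hypothetical code, not taken from the
patch: it assumes Linux's O_DIRECT, aligns the buffer to 4096 bytes,
and uses st_blksize as the transfer unit.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }

    struct stat st;
    if (stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }

    /* st_blksize is the preferred I/O size (often the stripe size on
     * RAID); round it up to a multiple of the 4096-byte alignment that
     * O_DIRECT needs on newer disks. */
    size_t align = 4096;
    size_t chunk = ((size_t)st.st_blksize + align - 1) / align * align;

    void *buf;
    int rc = posix_memalign(&buf, align, chunk);
    if (rc != 0) {
        fprintf(stderr, "posix_memalign: %s\n", strerror(rc));
        return 1;
    }

    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open(O_DIRECT)");
        free(buf);
        return 1;
    }

    /* With O_DIRECT the length must be block-aligned; a short read is
     * only expected at end of file. */
    ssize_t n = read(fd, buf, chunk);
    printf("preferred I/O size %zu, read %zd bytes\n", chunk, n);

    close(fd);
    free(buf);
    return 0;
}
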
True, the patch can be improved. But even without alignment it avoids
excessive buffering when transferring huge files on systems with a lot
of free memory. The behavior I noticed (and this patch fixes) is that
rsync only reads until the buffers are filled, and then only writes
until the buffers have been drained. With direct I/O, reads and writes
happen at the same time.
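
As a rough sketch (again hypothetical, not the patch itself), a
direct-I/O copy loop that streams each chunk straight through a small
aligned buffer could look like this; it assumes blksize is a multiple
of the device's block size, as above:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Copy in_fd to out_fd, both opened with O_DIRECT. Each chunk is read
 * and written immediately, so nothing accumulates in the page cache. */
int copy_direct(int in_fd, int out_fd, size_t blksize)
{
    void *buf;
    if (posix_memalign(&buf, 4096, blksize) != 0)
        return -1;

    ssize_t n;
    while ((n = read(in_fd, buf, blksize)) > 0) {
        if ((size_t)n < blksize) {
            /* Final partial chunk: clear O_DIRECT so an unaligned
             * length can still be written. */
            int fl = fcntl(out_fd, F_GETFL);
            fcntl(out_fd, F_SETFL, fl & ~O_DIRECT);
        }
        if (write(out_fd, buf, (size_t)n) != n) {
            free(buf);
            return -1;
        }
    }

    free(buf);
    return n == 0 ? 0 : -1;
}
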
Since it seems you know what needs improving, can you propose a patch?
(I got some hints from iozone wrt. alignment and portability.)
Another option is fadvise(), although I still saw the behavior described
above when using --drop-cache, so it didn't fix my use-case, which is
why I wrote this patch.
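
For comparison, the fadvise() approach boils down to something like the
sketch below. This only illustrates the idea and is not the code behind
--drop-cache; the 8 MB threshold and the helper name are made up.

#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

/* After writing up to offset 'written', ask the kernel to drop the
 * already-written pages so they don't pile up in the page cache. */
void drop_written_pages(int fd, off_t written, off_t *flushed)
{
    /* Only bother once a reasonable amount has accumulated. */
    if (written - *flushed < 8 * 1024 * 1024)
        return;

    /* POSIX_FADV_DONTNEED only drops clean pages, so push the dirty
     * ones to disk first. */
    if (fdatasync(fd) != 0)
        return;

    posix_fadvise(fd, *flushed, written - *flushed, POSIX_FADV_DONTNEED);
    *flushed = written;
}
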
Kind regards,
--
-- dag wieers, d...@wieers.com, http://dag.wieers.com/
-- dagit linux solutions, i...@dagit.net, http://dagit.net/
[Any errors in spelling, tact or fact are transmission errors]