On Wednesday 30 Mar 2016 19:36:57 meino.cra...@gmx.de wrote:
> Neil Bothwick <n...@digimed.co.uk> [16-03-30 17:12]:
> > On Wed, 30 Mar 2016 06:36:15 +0100, Mick wrote:
> > > Also worth mentioning is dcfldd which unlike dd can show progress of
> > > the bit stream and also produce hashes of the transferred output.  It
> > > has the same performance as the dd command though.
> > 
> > I can't find the reference right now, but I did read that dcfldd
> > determines the best block size on the fly if none is given. It's
> > certainly faster than dd when copying images to USB drives (my main use
> > for it) when given no block size.

This is good to know!  I'll try to remember to use it more often.


> Sounds like it will be the tool of choice for that purpose, Neil! :)
> 
> Best regards,
> Meino

dd defaults to bs=512, so it will be slow when transferring anything more 
than a few megabytes.  To find the optimum block size with dd you could run 
something like this:

$ dd if=/dev/zero bs=512 count=2000000 of=~/1GB.file
2000000+0 records in
2000000+0 records out
1024000000 bytes (1.0 GB) copied, 13.8359 s, 74.0 MB/s

$ dd if=/dev/zero bs=1024 count=1000000 of=~/1GB.file 
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 10.4439 s, 98.0 MB/s

$ dd if=/dev/zero bs=2048 count=500000 of=~/1GB.file
500000+0 records in
500000+0 records out
1024000000 bytes (1.0 GB) copied, 9.57416 s, 107 MB/s

$ dd if=/dev/zero bs=4096 count=250000 of=~/1GB.file
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 9.0178 s, 114 MB/s

$ dd if=/dev/zero bs=8192 count=125000 of=~/1GB.file
125000+0 records in
125000+0 records out
1024000000 bytes (1.0 GB) copied, 9.47107 s, 108 MB/s

$ rm 1GB.file

On an old spinning disk of mine, bs=4096 seems to be a good block size to 
select for writing data to it.
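The sweep above can be scripted as a loop.  This is a minimal sketch; the 
test file path and the 100 MB total are my own choices to keep the run 
quick (the thread used 1 GB), so scale TOTAL up for a more representative 
result on real hardware:

```shell
#!/bin/sh
# Sweep dd block sizes writing a test file, keeping only dd's summary line.
# TESTFILE and TOTAL are assumptions; adjust for your own setup.
TESTFILE="$HOME/ddtest.file"
TOTAL=104857600   # 100 MB; divisible by every bs below

for bs in 512 1024 2048 4096 8192; do
    count=$((TOTAL / bs))
    # dd prints the "bytes copied" summary on stderr; keep just that line
    dd if=/dev/zero bs="$bs" count="$count" of="$TESTFILE" 2>&1 | tail -n 1
    rm -f "$TESTFILE"   # remove between runs so each test starts clean
done
```

The `2>&1 | tail -n 1` trims dd's records-in/records-out lines, leaving one 
throughput line per block size to compare.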


NOTES:
======

1. If you rm the 1GB.file between tests you'll get cleaner numbers; I've 
been lazy here, but other than the first test with bs=512, the comparative 
results between the remaining tests remain consistent.

2. In the above test the dcfldd command gives transfer times similar to 
dd's if you use the same block size.  For larger files some difference may 
become apparent.

3. In your case you can read from and write to the intended input/output 
devices, rather than /dev/zero, in order to get representative cumulative 
read and write times.
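As a rough sketch of the read side of Note 3, you can time reads at a few 
block sizes by discarding the output to /dev/null.  The sample file here is 
a stand-in I made up; on real hardware read from the actual device instead, 
and note that the page cache will flatter repeated reads of the same file:

```shell
#!/bin/sh
# Hypothetical read-side check: create a 100 MB sample file, then time
# reads of it at a few block sizes, discarding the data via /dev/null.
SAMPLE="$HOME/readtest.file"
dd if=/dev/zero bs=1048576 count=100 of="$SAMPLE" 2>/dev/null

for bs in 512 4096 65536; do
    # keep only dd's summary line for each block size
    dd if="$SAMPLE" bs="$bs" of=/dev/null 2>&1 | tail -n 1
done
rm -f "$SAMPLE"
```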

-- 
Regards,
Mick
