Joe Koberg wrote:

[EMAIL PROTECTED] wrote:

On Saturday 25 March 2006 04:42, Mike Meyer wrote:
One thing: 1m is a bit small for modern systems, or even for
not-so-modern systems. Since nothing else is running, you might as well
use all the memory you've got, or at least as large a buffer as a
process can hold. 128m or more is perfectly reasonable.
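For concreteness, that suggestion comes down to something like this,
with ad0/ad1 standing in for whatever your source and target disks are:

   # dd if=/dev/ad0 of=/dev/ad1 bs=128m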

It won't go any faster.

In a modern system the CPU is so much faster than the disk that anything above about 16k should be enough.


I found 64k to be optimal (i.e., max performance) on most machines.
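If you want to measure it on a particular box, a rough sketch (ad0 is a
placeholder; each line reads the same 1 GB, so the timings are
comparable):

   # time dd if=/dev/ad0 of=/dev/null bs=16k count=65536
   # time dd if=/dev/ad0 of=/dev/null bs=64k count=16384
   # time dd if=/dev/ad0 of=/dev/null bs=1m count=1024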


I heard it's faster if you use two dd's, i.e.:

   # dd if=/dev/ad0 bs=64k | dd of=/dev/ad1 bs=64k

allowing read and write to proceed in parallel.
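Whether the overlap actually buys anything is easy to check with
time(1); a rough sketch, with ad0/ad1 as placeholder devices and sh -c
wrapping the pipeline so the whole thing gets timed:

   # time dd if=/dev/ad0 of=/dev/ad1 bs=64k count=16384
   # time sh -c 'dd if=/dev/ad0 bs=64k count=16384 | dd of=/dev/ad1 bs=64k'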

That's what ddd and 'team' are for.
I don't know if ddd is in the ports, as it may clash in name with the debugger ddd.
They internally fork and use several processes synchronised in some manner.
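(You can approximate the same idea with stock tools by chaining extra
dd stages, so each pipe adds another buffer between the reader and the
writer; a sketch, devices again being placeholders:

   # dd if=/dev/ad0 bs=64k | dd bs=64k | dd of=/dev/ad1 bs=64k

Each dd runs as its own process, so the reads, the intermediate copy,
and the writes all overlap, synchronised only by the pipes filling and
draining.)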


Joe Koberg
joe at osoft dot us


_______________________________________________
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
