Well, in this case, the amount of data rsync sends is about the size of the
"USED" column in "zfs list -t snapshot", while the zfs stream is four times
bigger. Also, with rsync, if it fails in the middle, I don't have to start over.
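To make the comparison concrete, this is roughly how the two numbers can be
measured (the dataset and snapshot names here are made up):

  # the USED column being compared against
  zfs list -t snapshot -o name,used tank/data@today

  # actual size of the replication stream
  # (add -i tank/data@yesterday for an incremental stream)
  zfs send tank/data@today | wc -c

The resumability point comes from rsync's --partial flag, which keeps
partially transferred files so an interrupted run picks up roughly where it
left off:

  rsync -a --partial /tank/data/ backuphost:/backup/data/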
Richard Elling writes:
> In my experience, this looks like a set of devices sitting behind an
> expander. I have seen one bad disk take out all disks sitting behind
> an expander. I have also seen bad disk firmware take out all disks
> behind an expander. I once saw a bad cable take out everyth
Ed,
Thank you for sharing the calculations. In lay terms, for SHA-256, how many
blocks of data would be needed to have one collision?
Assuming each block is 4K in size, we can probably calculate the total data
size beyond which a collision may occur. This would enable us to make the
following
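For what it's worth, the standard birthday-bound estimate, treating SHA-256 as
an ideal 256-bit hash and using the 4K block size assumed above, works out to:

  \[ k \approx \sqrt{2\ln 2}\;2^{n/2} \approx 2^{128} \approx 3.4\times10^{38} \ \text{blocks for a 50\% collision chance} \]
  \[ 2^{128} \times 4\,\mathrm{KiB} = 2^{140} \ \text{bytes} \approx 1.4\times10^{42} \ \text{bytes} \]

That is roughly 10^21 zettabytes, so for deduplication purposes a random
SHA-256 collision is not a practical concern.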
On Thu, Jan 13, 2011 at 8:09 AM, Stephan Budach wrote:
> Actually mbuffer does a great job for that, too. Whenever I am using mbuffer
> I am achieving much higher throughput than using ssh.
Agreed, mbuffer seems to be required to get decent throughput. Using
it on both ends of an SSH pipe (or at
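For the archives, a typical way to put mbuffer on both ends of an SSH pipe
looks like this (the hostnames and dataset names are made up, and the mbuffer
buffer sizes are just plausible values):

  zfs send tank/data@today \
    | mbuffer -s 128k -m 1G \
    | ssh backuphost 'mbuffer -s 128k -m 1G | zfs receive backup/data'

Cutting ssh out entirely and running mbuffer in network mode (-O host:port on
the sender, -I port on the receiver) usually buys even more throughput, at the
cost of sending the stream unencrypted.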
On Jan 14, 2011, at 14:32, Peter Taps wrote:
> Also, another related question: why was 256 bits chosen and not 128 bits or
> 512 bits? I guess SHA-512 may be overkill. In your formula, how many blocks
> of data would be needed to have one collision using a 128-bit hash?
There are two ways to get 128
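For comparison, the 50% birthday bound at the three widths mentioned, again
treating each as an ideal hash over 4K blocks:

  \[ n = 128:\quad 2^{64} \approx 1.8\times10^{19} \ \text{blocks} \approx 2^{76} \ \text{bytes} \approx 7.6\times10^{22} \ \text{bytes} \]
  \[ n = 256:\quad 2^{128} \approx 3.4\times10^{38} \ \text{blocks} \approx 2^{140} \ \text{bytes} \approx 1.4\times10^{42} \ \text{bytes} \]
  \[ n = 512:\quad 2^{256} \approx 1.2\times10^{77} \ \text{blocks} \approx 2^{268} \ \text{bytes} \approx 4.7\times10^{80} \ \text{bytes} \]

A 128-bit hash puts the bound around 76 zettabytes, uncomfortably close to
plausible lifetime pool sizes, while 512 bits buys nothing practical over 256
and doubles the stored checksum size, which is presumably why 256 bits was the
middle ground chosen.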