> You never know when end-to-end data consistency will start to really
> matter. Just the other day I attended a cloud conference where
> some Amazon EC2 customers were swapping stories of Amazon's networking
> "stack" malfunctioning and silently corrupting data that was written
> to EBS. All of a sudden, something like ZFS started to sound like
> a really good idea to them.

i know we need to bow down before zfs's greatness, but i still have
some questions. ☺

does ec2 corrupt all one's data en masse?  how do you do meaningful
redundancy in a cloud where one controls none of the failure-prone
pieces?

finally, if p is the probability of a lost block, when does p become too
large for zfs's redundancy to overcome failures?  does this depend on
the amount of i/o one does on the data, or does zfs scrub at a minimum
rate anyway?  if it does, that would be expensive.
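
here's the back-of-envelope i have in mind, assuming independent block
failures and a plain n-way mirror (the numbers and the model are made
up, so treat this as a sketch, not zfs's actual math):

    # n-way mirror, independent failures: a block is unrecoverable only
    # if every copy goes bad before a scrub repairs any of them
    p = 1e-5        # hypothetical probability one copy of a block is bad
    n = 2           # copies in a 2-way mirror
    print("per-block loss: %.2e" % p**n)

    # errors accumulate between scrubs, so the effective p grows with
    # the scrub interval: if copies fail at rate r per hour and scrubs
    # run every t hours, then roughly p ~ r * t per copy
    r = 1e-7        # hypothetical per-copy failure rate per hour
    t = 24 * 7      # scrub once a week
    print("per-block loss with weekly scrub: %.2e" % (r * t)**n)

so p becomes "too large" once p**n times the number of blocks
approaches 1, and scrubbing less often pushes the effective p up.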

maybe ec2 is heads amazon wins, tails you lose?

- erik
