Matt Beebe wrote:

> But what happens to the secondary server?  Specifically to its bit-for-bit 
> copy of Drive #2... presumably it is still good, but ZFS will offline that 
> disk on the primary server, replicate the metadata, and when/if I "promote" 
> the secondary server, it will also be running in a degraded state (i.e. 3 out 
> of 4 drives).  Correct?



Correct.

> In this scenario, my replication hasn't really bought me any increased 
> availability... or am I missing something?  



No. You gain availability when the entire primary node goes down, but 
you're not particularly safer when it comes to degraded zpools.


> Also, if I do choose to fail over to the secondary, can I just do a scrub of 
> the "broken" drive (which isn't really broken, but the zpool would be 
> inconsistent at some level with the other "online" drives) and get back to 
> "full speed" quickly? Or will I always have to wait until one of the servers 
> resilvers itself (from scratch?) and re-replicates itself?


I have not tested this scenario, so I can't say anything about this.
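For what it's worth, on a single node the usual way to explore this is with the standard zpool commands. A minimal sketch, assuming a pool named "tank" and a device "c1t2d0" (both hypothetical names here); how any of this interacts with the replication layer is exactly the untested part:

```shell
# Show only unhealthy pools; an out-of-sync disk appears as DEGRADED/OFFLINE.
zpool status -x tank

# If the disk is physically fine, bring it back online; ZFS resilvers
# only the data written while the disk was offline, not the whole disk.
zpool online tank c1t2d0

# A scrub verifies checksums of all data, but it is not a substitute
# for the resilver above.
zpool scrub tank

# Watch resilver/scrub progress.
zpool status -v tank
```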

-- 

Ralf Ramge
Senior Solaris Administrator, SCNA, SCSA

Tel. +49-721-91374-3963
[EMAIL PROTECTED] - http://web.de/

1&1 Internet AG
Brauerstraße 48
76135 Karlsruhe

Amtsgericht Montabaur HRB 6484

Vorstand: Henning Ahlert, Ralph Dommermuth, Matthias Ehrlich, Thomas 
Gottschlich, Matthias Greve, Robert Hoffmann, Markus Huhn, Oliver Mauss, 
Achim Weiss
Aufsichtsratsvorsitzender: Michael Scheeren
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss