>>>>> "m" == mike  <[EMAIL PROTECTED]> writes:

     m> that could only be accomplished through combinations of pools?
     m> i don't really want to have to even think about managing two
     m> separate "partitions" - i'd like to group everything together
     m> into one large 13tb instance

You're not misreading the web pages.  You can do that.  I suggested
two pools because of problems like Erik's, and other known bugs.  Two
pools will also protect you from operator mistakes: cut-and-pasting
the argument to 'zfs destroy' and ending up with an embedded newline
in the cut buffer at exactly the wrong spot, mistakenly adding an
unredundant vdev, getting confused about how the snapshot tree works,
upgrading your on-disk format and then wanting to downgrade Solaris,
and so on.  This mailing list is a catalog of reasons you need an
offline backup pool, and you have enough disks to do it.  The
datacenter crowd on the list doesn't need this because they have tape,
or they have derived datasets that can be recreated.  You do need it.
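A minimal sketch of the two-pool layout I mean (pool names "main" and
"backup" and the cXtYd0 device names are placeholders, not your
actual hardware):

```shell
# Working pool and a separate backup pool, each its own raidz2 vdev.
zpool create main   raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
zpool create backup raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Periodically snapshot the working pool, replicate it to the backup
# pool, then export the backup so it sits safely offline:
zfs snapshot -r main@weekly
zfs send -R main@weekly | zfs receive -Fd backup
zpool export backup
```

The point is that a fat-fingered 'zfs destroy' on main can't touch an
exported backup pool.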

I think you've gotten to the ``try it and see'' stage.  Why not try
making pools in a bunch of different combinations and loading them
with throwaway data?  You can test performance.  Try scrubbing.  Test
the redundancy by pulling drives, and see how the hot sparing works.
Try rebooting during a hot-spare resilver, because during the actual
rebuild this will probably happen a few times while you track down the
driver's poor error handling of a marginal drive.  Deliberately
include marginal drives in the pool if you have some.  You can get
much better information this way, especially if the emails are too
long to read.
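A sketch of the test loop, in case it helps (pool name "testpool" and
the device names are placeholders):

```shell
# Kick off a scrub and watch it progress.
zpool scrub testpool
zpool status -v testpool

# Simulate a drive failure without physically pulling it,
# then bring it back and watch the resilver:
zpool offline testpool c1t2d0
zpool status testpool        # spare should kick in if configured
zpool online testpool c1t2d0
```

Pulling the drive physically is the more honest test, since it
exercises the driver's error path rather than a clean administrative
offline.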

if you want a list of things to test.... :)

Seriously though, if you have sixteen empty drives, that's a fantastic
situation.  I never had that---I had to move my sixteen drives into
ZFS two to four drives at a time.  I think you ought to use your array
for testing for at least a month.  You need to burn in the drives for
a month anyway because of infant mortality.

     m> you cannot add more vdevs to a zpool once the zpool is
     m> created. Is that right?  That's what it sounded like someone
     m> said earlier.

I didn't mean to say that.  If you have empty devices, of course you
can add them to a pool as a new vdev (though, as Darren said, once a
vdev is added, you're stuck with a vdev of that type---and, if it's
raidz{,2}, of that stripe width---for the life of the pool.  You can
never remove a vdev.).  You asked:
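Growing a pool with a new vdev looks like this (pool name "tank" and
device names are placeholders):

```shell
# Add a second raidz2 vdev to an existing pool.  ZFS stripes new
# writes across both vdevs; the new vdev can never be removed.
zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0
zpool status tank
```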

     m> can you combine two zpools together?

You can't combine two existing pools and keep the data intact.  You
have to destroy one pool and add its devices to the other.  I'm
repeating myself:

     m> can you combine two zpools together?

     c> no.  You can have many vdevs in one pool.  for example you can
     c> have a mirror vdev and a raidz2 vdev in the same pool.  You
     c> can also destroy pool B, and add its (now empty) devices to
     c> pool A.  but once two separate pools are created you can't
     c> later smush them together.

So, I am not sure how this sounds.  I usually rely on you to read the
whole paragraph, not just the first word, but I guess the messages are
just too long.  How about ``yes, you can combine two pools.  But
before combining them you have to destroy one of the pools and all the
data inside it.  Then you can add the empty space to the other
pool.''
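The merge described above can be sketched like this (pool names
"poolA" and "poolB" are placeholders; this destroys everything in
poolB, so copy off anything you need first):

```shell
# Irreversible: poolB and all its data go away.
zpool destroy poolB

# poolB's devices are now free; attach them to poolA as a new vdev.
zpool add poolA raidz c4t0d0 c4t1d0 c4t2d0
```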


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
