On Aug 6, 2009, at 7:59 AM, Ross wrote:
> But why do you have to attach to a pool? Surely you're just attaching
> to the root filesystem anyway? And as Richard says, since filesystems
> can be shrunk easily and it's just as easy to detach a filesystem from
> one machine and attach to it from another, why the emphasis on pools?
>
> For once I'm beginning to side with Richard; I just don't understand
> why data has to be in separate pools to do this.
welcome to the dark side... bwahahahaa :-)
The way I've always done such migrations in the past is to get everything
ready in parallel, then restart the service pointing to the new data. The
cost is a brief outage for the restart, which isn't a big deal for most
modern system architectures. If you have a high-availability cluster, just
add it to the list of things to do when you do a weekly/monthly/quarterly
failover.
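
A minimal sketch of that kind of cutover, assuming the data lives in
tank/data, the new storage is a pool named newpool, and the service is an
SMF instance called myservice (all names hypothetical):

    # replicate the data to the new pool ahead of time (initial full copy)
    zfs snapshot tank/data@migrate1
    zfs send tank/data@migrate1 | zfs receive newpool/data

    # near cutover, send only what changed since the first snapshot
    zfs snapshot tank/data@migrate2
    zfs send -i tank/data@migrate1 tank/data@migrate2 | zfs receive newpool/data

    # stop the service, do a final incremental, repoint, restart
    svcadm disable myservice
    zfs snapshot tank/data@final
    zfs send -i tank/data@migrate2 tank/data@final | zfs receive newpool/data
    zfs set mountpoint=none tank/data            # retire the old copy's mountpoint
    zfs set mountpoint=/export/data newpool/data
    svcadm enable myservice

The outage window is just the final incremental send plus the restart.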
Now, if I were to work in a shrink, I would do the same, because shrinking
moves data and moving data is risky. Perhaps someone could explain how
they do a rollback from a shrink? Snapshots?
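
Snapshots cover the dataset-level case, at least. A sketch of that kind of
safety net before any risky data move (dataset name hypothetical):

    # checkpoint the dataset before moving anything
    zfs snapshot tank/data@before-move

    # ... perform the risky reorganization ...

    # if it goes wrong, roll the dataset back to the snapshot
    zfs rollback tank/data@before-move

But that only undoes changes to the data itself; it wouldn't undo a
pool-level shrink that has already migrated blocks off a device, which is
exactly the open question.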
I think the problem at the example company is that they make storage so
expensive that the (internal) customers spend way too much time and money
trying to figure out how to optimally use it. The storage market is working
against this model by reducing the capital cost of storage. ZFS is tackling
many of the costs related to managing storage. Clearly, there is still work
to be done, but the tide is going out and will leave expensive storage
solutions high and dry.
Consider how different the process would be as the total cost of storage
approaches zero. Would shrink need to exist? The answer is probably no. But
the way shrink is being solved in ZFS has another application: operators
can still make mistakes with "add" vs "attach", so the ability to remove a
top-level vdev is needed. Once this is solved, shrink is also solved.
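
For those following along, the mistake in question looks like this (pool
and device names hypothetical):

    # intended: mirror the new disk onto the existing one
    zpool attach tank c0t0d0 c0t1d0    # reversible with 'zpool detach'

    # typo'd: this instead adds the disk as a new top-level vdev,
    # striping data across it
    zpool add tank c0t1d0

    # with top-level vdev removal (where supported), the mistake
    # could be undone:
    zpool remove tank c0t1d0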
-- richard