ek> If you were able to send over your complete pool, destroy the
ek> existing one and re-create a new one using recv, then that should
ek> help with fragmentation. That said, that's a very poor man's
ek> defragger. The defragmentation should happen automatically or at
ek> least while the pool is online.
I was rather thinking of sending all file systems to another
server/pool (I'm in the middle of that process), then deleting the
source file systems and sending them back. Destroying the pool is no
problem, but I wonder why you think it's needed - wouldn't deleting
the file systems be enough?
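For the archives, the round trip I have in mind looks roughly like
this. The pool and filesystem names (tank/fs1, backup/fs1, backuphost)
are placeholders, and the run() wrapper only prints each command so
nothing is executed by accident - drop it to run the commands for real:

```shell
#!/bin/sh
# Sketch of the migrate-out / migrate-back cycle.
# run() just echoes; remove it to actually execute.
run() { echo "+ $*"; }

# 1. Snapshot and send the filesystem to the temporary server/pool
run zfs snapshot tank/fs1@migrate
run "zfs send tank/fs1@migrate | ssh backuphost zfs recv backup/fs1"

# 2. Destroy the (now fragmented) source filesystem
run zfs destroy -r tank/fs1

# 3. Send it back; recv writes the data out afresh, which is what
#    undoes the fragmentation
run "ssh backuphost zfs send backup/fs1@migrate | zfs recv tank/fs1"
```

Repeat per filesystem; note that anything written between the snapshot
and the final recv is not captured unless you do an incremental send
on top.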
Yeah, destroying the filesystems should be enough (I was equating
destroying all filesystems with a zpool destroy).
btw: I've already migrated three file systems that way to an x4500 and
so far they are working great - no CPU usage, much fewer read IOs
(both in # of IOs and in volume), and everything is committed
exactly every 5s. So I guess there's a high probability it will
stay that way once I migrate them back to the cluster.
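The 5s commit cadence and the drop in read IOs can be watched
directly. A sketch (pool name `tank` is a placeholder; the run()
wrapper just prints the commands instead of executing them, since
there's no pool here):

```shell
#!/bin/sh
# Print rather than execute; drop run() to use against a real pool.
run() { echo "+ $*"; }

# Per-vdev IOPS and bandwidth every 5 seconds; healthy txg commits
# show up as periodic write bursts on that interval
run zpool iostat -v tank 5

# How full the pool is - allocation pressure is what makes the
# metaslab allocator (bug 6495013) start to hurt
run zpool list tank
```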
ek> In the absence of a built-in defragger and without a fix for
ek> 6495013,
ek> i think the best thing you could do is either add more storage or
ek> remove some data (such as removing some old snapshots or move some
ek> unneeded storage to another system/backup). Not sure if either of
ek> those are applicable to you.
Some time ago removing snapshots helped. Then we stopped creating
them, and now there's nothing really left to remove, so I'm doing the
above.
btw: 6495013 - what is it exactly? (bugs.opensolaris.org doesn't show
it, and neither does sunsolve).
Hmm, should be working now:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6495013
6495013 Loops and recursion in metaslab_ff_alloc can kill
performance, even on a pool with lots of free data
eric
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss