On Fri, 2009-08-07 at 19:33, Richard Elling wrote:

> This is very unlikely to be a "fragmentation problem." It is a  
> scalability problem
> and there may be something you can do about it in the short term.

You could be right.

Our test mail server has the exact same design and hardware (SUN4V),
but in a smaller configuration (less memory and 4 x 25g SAN LUNs), and
it gets a backup/copy throughput of 30GB/hour. The data used for
testing was "copied" from our production mail server.

> > Adding another pool and copying all/some data over to it would only  
> > be a short term solution.
> 
> I'll have to disagree.

What is the point of a filesystem that can grow to such a huge size
without functionality built in to optimize data layout? Real-world
implementations of filesystems that are intended to live for
years/decades need this functionality, don't they?

Our mail system works well, only the backup doesn't perform well.
All the features of ZFS that make reads perform well (prefetch, ARC)
have little effect.
 
We think backup is quite important. We do quite a few restores of
months-old data. Snapshots help in the short term, but for longer-term
restores we need to go to tape.

Of course, as you can tell, I'm kinda stuck on this idea that "file and
directory fragmentation" is causing our issues with the backup. I don't
know how to analyze the pool to better understand the problem.
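One thing that might help (just a sketch, assuming a Solaris box; the pool name "mailpool" and the mailbox path are placeholders for your own) is to watch per-vdev I/O while the backup runs. If the average I/O size (bandwidth divided by ops) is small and the op count is high, the backup is doing lots of small scattered reads rather than large sequential ones, which would point toward layout/fragmentation rather than raw device limits:

```
# While the backup is running, watch per-vdev bandwidth and ops
# every 5 seconds; divide bandwidth by ops for average I/O size:
zpool iostat -v mailpool 5

# Per-device service times and queue depths from the OS side:
iostat -xn 5

# Rough single-threaded sequential read of one large mailbox file
# (path is hypothetical) to compare against the 30GB/hour figure:
time dd if=/path/to/some/mailbox of=/dev/null bs=128k
```

Comparing those numbers between the production pool and the test pool (where the data was freshly copied, hence laid out sequentially) might confirm or rule out the fragmentation theory.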

If we did chop the pool up into, let's say, 7 pools (one for each
current filesystem), then over time these 7 pools would grow and we
would end up with the same issues. That's why it seems to me to be a
short-term solution.

If our issue with ZFS is scalability, then you could say ZFS is not
scalable. Is that true?
(It certainly is if the solution is to create more pools!)

-- 
Ed 


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
