Adam> If you have enough money to play with, you might want to look
Adam> into NetApp's FlexCache setup -- a filer at home with the
Adam> primary data, and another head with some disk remotely to cache
Adam> the data from home and serve it locally.

Adam> It was designed for specifically this sort of application, but
Adam> of course it's NetApp so it's not cheap.

It also sucks rocks.  We tried it with just read-only (RO) tools
volumes for our remote offices and gave up on it.  We've also tried
SnapVault and the performance sucked.  SnapMirror is a much better
performing product and is working well for us.

But a note of caution: if you screw up and upgrade the source filer
to a higher version than the destination filer, you can get into all
kinds of problems, such as snapshots happening every minute on the
source filer.  Also make sure you don't delete a source snapshot that
SnapMirror still needs, or you're in for a world of pain and a full
re-replication.  I speak from experience.
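
If it helps, here's the sort of pre-flight check I wish we'd had.  A
rough Python sketch, nothing official: it ssh'es to both filers,
compares what "version" reports, and lists which source snapshots
SnapMirror still has its hooks in.  The filer names, volume name, and
output parsing are placeholders for our setup, so adjust for yours.

    #!/usr/bin/env python3
    # Rough pre-upgrade / pre-cleanup sanity check for a SnapMirror
    # pair.  Assumes passwordless ssh to both filers and 7-mode style
    # "version" and "snap list <vol>" output; adjust the parsing for
    # whatever your ONTAP release actually prints.
    import subprocess
    import sys

    SOURCE, DEST, VOLUME = "filer-src", "filer-dst", "tools"

    def filer_cmd(filer, cmd):
        """Run a command on a filer over ssh and return its output."""
        return subprocess.check_output(["ssh", filer, cmd], text=True)

    def ontap_version(filer):
        # "version" prints something like: NetApp Release 7.3.5.1 ...
        return filer_cmd(filer, "version").split()[2]

    src_ver, dst_ver = ontap_version(SOURCE), ontap_version(DEST)
    if src_ver != dst_ver:
        print("WARNING: source %s runs %s, destination %s runs %s"
              % (SOURCE, src_ver, DEST, dst_ver))
        print("Do NOT upgrade the source past the destination.")
        sys.exit(1)

    # Anything "snap list" marks as busy with snapmirror is a baseline
    # snapshot -- delete it and you're re-replicating from scratch.
    for line in filer_cmd(SOURCE, "snap list %s" % VOLUME).splitlines():
        if "snapmirror" in line:
            print("SnapMirror still needs: %s" % line.strip())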

We're also using SilverPeak WAN accelerators, and they do work well
for our NFS-heavy environment.  But they're still not perfect by any
means.  Latency will just kill you no matter what.
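
To put a number on "latency will kill you": here's a back-of-the-
envelope sketch (every number below is invented for illustration).
The point is that serialized NFS round trips swamp the actual byte
transfer, and the round trips are the part no accelerator can make
disappear entirely.

    # Why WAN latency dominates chatty NFS traffic.  All numbers are
    # made-up illustrations, not measurements from our environment.
    rtt_s = 0.080          # 80 ms round trip to the remote office
    files = 500            # files touched by, say, a tools build
    ops_per_file = 3       # roughly LOOKUP + GETATTR + READ per file
    bytes_total = 50e6     # 50 MB of actual data moved
    link_Bps = 100e6 / 8   # 100 Mb/s WAN link, in bytes per second

    latency_cost = files * ops_per_file * rtt_s    # 120.0 s of waiting
    transfer_cost = bytes_total / link_Bps         #   4.0 s of moving bytes
    print("waiting on round trips: %.1f s" % latency_cost)
    print("moving the bytes:       %.1f s" % transfer_cost)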

I would strongly suggest that you shard your data as much as
possible, so that the commonly used files are local to the users who
use them.  Then try to minimize the number of files that need to be
edited remotely.  Changing user processes (no more shared
spreadsheets, for example) could also be a big win.
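
If you want data to drive the sharding decision, something like this
toy sketch is enough: tally accesses per top-level directory by site
and flag the trees that one site dominates.  The CSV format is made
up; collect the access records however you can (packet capture, filer
audit logs, etc.).

    # Flag directory trees that are overwhelmingly used by one site --
    # those are the candidates to host locally at that site.
    # Input CSV columns (invented for this sketch): site,path
    import csv
    from collections import Counter, defaultdict

    hits = defaultdict(Counter)          # top-level dir -> site -> count

    with open("nfs_access.csv") as f:
        for row in csv.DictReader(f):
            top = "/" + row["path"].strip("/").split("/")[0]
            hits[top][row["site"]] += 1

    for top, sites in sorted(hits.items()):
        site, count = sites.most_common(1)[0]
        share = 100.0 * count / sum(sites.values())
        if share >= 90:
            print("%-30s %5.1f%% of accesses from %s -> host it there"
                  % (top, share, site))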

John