Jeremy Charles wrote:
> I'd like to hear from those who have had to manage IT resources for offices that are located on opposite sides of oceans.
> 
> Our primary challenge right now is this: Our original offices, and our CIFS file servers, are located in Wisconsin, USA.  We also have an office in the Netherlands.  The folks in the Netherlands interact heavily with files on the CIFS file servers in Wisconsin - as does the entire company.
> 
> Performance was understandably painful for the Netherlands folks until we added a Riverbed system to optimize the CIFS traffic.  It's better now, but the Netherlands folks are still pointing out productivity losses due to slowness working with the CIFS file servers.  Note that the link between the Netherlands and Wisconsin offices has never been strained for capacity - this purely seems to be a latency issue.
> 
> One of the obvious thoughts is to set up file servers (and backups) in the Netherlands office, and move "their" files over to "their" file servers.  Of course, there is no clear division between what the Wisconsin folks use and what the Netherlands folks use, so somebody will always be the winner and somebody will always be the loser when it comes to performance in accessing a particular piece of data.
> 
> It's also worth noting that we may have one or more other office(s) popping up in the eastern hemisphere at some point.  If we add a file server and backups to each office to store each office's most heavily utilized data, we'll end up with a lot of systems to manage.  (Yes, we've thought of renting space in a data center that's a reasonable compromise location among the eastern hemisphere offices - wherever they end up being.)
> 
> Is our best bet to simply try to locate data closest to the people who use it the most (and set expectations accordingly with those who are not near a given piece of data), or is there a better way to deal with this?

Yup, you're losing out to latency.  More bandwidth won't help.

CIFS (and NFS, for that matter) doesn't handle high-latency links very well.  Unless they've added some streaming to the protocol, you're stuck in lockstep between the client and the server.  Each file transaction (e.g. a data write) requires the client to stop and wait until the server acks the call.  For what you're looking at from WI to NL, I'd guess you're seeing 120+ ms of latency (each way).  I'm seeing 148 ms pretty consistently from San Diego to Amsterdam these days.

I don't do CIFS anymore, but I seem to recall that the default block size is 4 KB?  You basically can't write more than one block per 240 ms (RTT), so call it 4 blocks per second, which works out to about 16 KB/sec.  Of course, if you can use a larger block size, that can help, some.  I've heard the protocol allows writes of up to 64 KB?
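
Playing with the numbers, assuming a strict one-block-per-round-trip lockstep (pessimistic, since real SMB can pipeline somewhat, but it shows why bandwidth doesn't matter here):

    #!/usr/bin/env python3
    # Upper bound on throughput when every write blocks on one full round trip.
    # Ignores SMB pipelining/credits, so treat it as a worst-case estimate.

    def lockstep_throughput(block_bytes, rtt_ms):
        """Bytes/sec if the client sends one block, then waits one RTT."""
        return block_bytes / (rtt_ms / 1000.0)

    for block_kb in (4, 64):
        bps = lockstep_throughput(block_kb * 1024, rtt_ms=240)
        print("%2d KB blocks @ 240 ms RTT -> ~%.0f KB/sec" % (block_kb, bps / 1024))

Even at 64 KB writes you're only looking at a couple hundred KB/sec, which is why throwing bandwidth at it doesn't buy you anything.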

We're facing similar problems, and we've cached the data locally (web caches), used WAN accelerators, or just taken Einstein's name in vain ("curse you, speed of light!").  With locations in JP, EU and US, our data is *never* in the right location :-(

Perhaps a distributed repository system with local mirrors (caches)?  Give each region their own servers that they can talk to directly, locally and quickly, and then have the repos sync up in the background?
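
Very roughly, something like this (server names and paths are made up, and a real setup would be DFS-R, scheduled rsync/unison, or a proper replicated store rather than this toy loop):

    #!/usr/bin/env python3
    # Toy sketch: clients always read/write their local mirror, and the mirrors
    # reconcile in the background.  Hostnames and paths are hypothetical.
    import subprocess, time

    PEERS = ["fileserver-wi.example.com:/export/shared"]   # remote mirrors
    LOCAL = "/export/shared"                               # this region's copy

    while True:
        for remote in PEERS:
            # Pull whatever changed remotely; -a keeps metadata, -z compresses
            # over the WAN, --update skips files that are newer locally.
            subprocess.run(["rsync", "-az", "--update", remote + "/", LOCAL + "/"],
                           check=True)
        time.sleep(300)   # mirrors converge a few minutes behind each other

The ugly part is what happens when two offices touch the same file inside one sync window - rsync's --update is a very blunt instrument there, which is why the "real" answers (DFS-R, AFS, etc.) carry so much machinery.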

I bet AFS has already solved this problem (rolls eyes) :-)

--tep



