On Mon, Jan 11, 2010 at 3:46 PM, Jeremy Charles <jchar...@epic.com> wrote:
> I'd like to hear from those who have had to manage IT resources for offices 
> that are located on opposite sides of oceans.
>
> Performance was understandably painful for the Netherlands folks until we 
> added a Riverbed system to optimize the CIFS traffic.  It's better now, but 
> the Netherlands folks are still pointing out productivity losses due to 
> slowness working with the CIFS file servers.  Note that the link between the 
> Netherlands and Wisconsin offices has never been strained for capacity - this 
> purely seems to be a latency issue.

We are located on the West Coast, have a site in Singapore, and will
soon be opening another in Spain. This is a huge issue for us,
especially once the Spain office opens: there is no single location
that is reasonably central to all three sites.

Although most of our data is published through web services (https),
we are also experiencing performance issues due to latency. (200ms to
Singapore means up to 140 seconds just waiting for the packets to
travel back and forth on a 1MB transfer.)
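
That 140-second figure is roughly what you get if a chatty protocol
moves only one TCP segment per round trip; a back-of-the-envelope
sketch (the segment size and lack of windowing are my assumptions):

    # Worst case: one ~1460-byte TCP segment per 200ms round trip,
    # no pipelining and no useful window scaling.
    rtt_s = 0.200                      # round trip to Singapore
    segment_bytes = 1460               # typical TCP MSS
    transfer_bytes = 1 * 1024 * 1024   # 1MB

    round_trips = transfer_bytes / segment_bytes   # ~718 round trips
    wait_s = round_trips * rtt_s                   # ~144 seconds spent waiting
    print("%.0f round trips, ~%.0f seconds of waiting" % (round_trips, wait_s))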

> One of the obvious thoughts is to set up file servers (and backups) in the 
> Netherlands office, and move "their" files over to "their" file servers.  Of 
> course, there is no clear division between what the Wisconsin folks use and 
> what the Netherlands folks use, so somebody will always be the winner and 
> somebody will always be the loser when it comes to performance in accessing a 
> particular piece of data.

Audit your data use. You may find that there is a clear distinction
between which files are used by each office. That would certainly make
it easier to "shard" the data. Just make sure your management knows
what managing multiple data centers really means: additional staff,
facilities, etc.
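
For what it's worth, a rough sketch of what that audit might look
like, assuming you can export file-server access logs to CSV and map
client subnets to offices (the log format, file name, and subnets
below are made up; Windows file-access auditing or Samba's full_audit
module can feed something similar):

    # Tally file-server accesses per share, per office, from an access
    # log exported as CSV with columns: client_ip,share,path
    import csv
    import ipaddress
    from collections import Counter

    OFFICES = {
        "Wisconsin":   ipaddress.ip_network("10.1.0.0/16"),   # assumed subnets
        "Netherlands": ipaddress.ip_network("10.2.0.0/16"),
    }

    def office_for(ip):
        addr = ipaddress.ip_address(ip)
        for name, net in OFFICES.items():
            if addr in net:
                return name
        return "other"

    counts = Counter()
    with open("file_access.csv") as fh:
        for row in csv.DictReader(fh):
            counts[(row["share"], office_for(row["client_ip"]))] += 1

    for (share, office), n in counts.most_common():
        print("%-20s %-12s %d" % (share, office, n))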

Microsoft DFS, AFS, and other methods of synchronizing files across
locations all suffer from the same problems: locking a file at the
remote site to prevent conflicting edits, and resolving the conflicts
when they inevitably occur anyway. I have yet to see any of these
methods actually work the way they are supposed to over high-latency
links.
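
The core problem in miniature: once both sites have changed the same
file since the last successful sync, there is no safe automatic
winner; any "resolution" either throws away someone's edit or punts to
a human. A toy three-way comparison (paths are hypothetical):

    # Compare each site's copy against the last-synced base copy.
    import hashlib
    import pathlib

    def digest(path):
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    def classify(base, site_a, site_b):
        b, a, s = digest(base), digest(site_a), digest(site_b)
        if a == b and s == b:
            return "unchanged everywhere"
        if a == b:
            return "only site B changed it; replicate B's copy"
        if s == b:
            return "only site A changed it; replicate A's copy"
        if a == s:
            return "both sites made the identical change"
        return "CONFLICT: both sites edited it since the last sync"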

> Is our best bet to simply try to locate data closest to the people who use it 
> the most (and set expectations accordingly with those who are not near a 
> given piece of data), or is there a better way to deal with this?

Moving the data closer to the customer is really the only solution
that we have been able to come up with. For us, there are several
options that we are still investigating:
* using an https proxy to cache data closer to the clients (rough
  sketch after this list).
* moving data to servers closer to the clients, where appropriate.
* some kind of http accelerator (I'm skeptical.)
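
For the proxy/cache idea, a minimal sketch of the principle (a toy
read-through cache, not a real proxy; a real deployment would be
Squid, nginx, or similar, and cache expiry is hand-waved here):

    # Serve repeat requests from local disk; only cross the WAN on a miss.
    import hashlib
    import pathlib
    import urllib.request

    CACHE_DIR = pathlib.Path("/var/cache/wan-fetch")   # hypothetical local path

    def cached_get(url):
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        key = hashlib.sha256(url.encode()).hexdigest()
        path = CACHE_DIR / key
        if path.exists():                      # cache hit: no 200ms round trips
            return path.read_bytes()
        with urllib.request.urlopen(url) as resp:   # miss: pay the WAN cost once
            body = resp.read()
        path.write_bytes(body)                 # no expiry or invalidation here
        return body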

We noticed that performance improves when we run an IPsec VPN between
our two offices. I'm not exactly sure why, but it is noticeable.

Is this the same Riverbed that you implemented? How well does it work?
What does it actually do? (Some kind of packet-level proxy?)
http://whitepapers.theregister.co.uk/paper/view/835/

-- 
Perfection is just a word I use occasionally with mustard.
--Atom Powers--

_______________________________________________
Discuss mailing list
Discuss@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/
