On Sat, Oct 22, 2011 at 11:03:49PM +0200, Hetz Ben Hamo wrote:
> Hi,
>
> Here is a theoretical question:
>
> Let's say I have a Linux server in Israel, and I have a block of storage
> (let's say an iSCSI partition for this example) in the USA, and I want to
> mount it on my server in Israel.
> iSCSI over such a long distance and with big latency (thanks to our ISPs)
Not sure it's mainly the ISPs, BTW. You also depend on the physics of the
speed of light: light in fibre covers roughly 200,000 km/s, so a ~10,000 km
Israel-USA path already costs on the order of 100 ms round trip before any
ISP adds its own overhead.

> is a big no-no, it's too slow. NFS is also not a good idea (here's why
> <http://goo.gl/vn4GM>).
>
> I can take this storage, format it and export it from my server in the
> USA, but which protocol would give me:
>
> 1. All (or almost all) functionality of a locally mounted device

Do you need it read/write on both sides? If so, you are going to have big
problems if the link is cut.

> 2. Can work with long-distance latencies
> 3. Won't "kill" the machine if the remote directory is disconnected /
>    "disappeared"
> 4. If possible - supported (either directly or using a 3rd-party driver)
>    on Windows 2008 (Linux is the main concern, Windows is optional)

I used drbd on a LAN, and know that it can theoretically work rather well
over larger distances when used as read-write on one side only. They also
have a pay-for tool to do this asynchronously, called drbd proxy. This
implies using a local copy and having drbd sync it. You can choose between
three of what they call "Protocols" to affect the perceived local latency;
a rough sketch of such a setup is at the end of this mail.

--
Didi
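
P.S. A minimal, untested sketch of what such a drbd resource might look
like (8.3-era syntax; the resource name r0, the hostnames il-server and
us-server, the backing disk /dev/sdb1 and the addresses are all made up,
so adjust to your setup):

  # /etc/drbd.d/r0.res -- hypothetical one-way replication resource
  resource r0 {
    protocol A;            # A: a write is "complete" once it is on the local
                           #    disk and in the local TCP send buffer (async).
                           # B: complete once it has reached the peer's memory.
                           # C: complete once it is on the peer's disk (sync).

    on il-server {         # node in Israel: primary, mounted read-write
      device    /dev/drbd0;
      disk      /dev/sdb1; # local backing device
      address   192.0.2.10:7789;
      meta-disk internal;
    }

    on us-server {         # node in the USA: secondary, only receives updates
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   198.51.100.20:7789;
      meta-disk internal;
    }
  }

With protocol A the Israeli side pays (roughly) only local-disk latency on
writes, while C would make every write wait for the full Israel-USA round
trip. After "drbdadm create-md" / "drbdadm up" on both nodes and "drbdadm
primary" on the Israeli one, you format and mount /dev/drbd0 locally; drbd
proxy, if you buy it, sits between the two nodes and buffers larger bursts
over the WAN.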