If the link is stable, cfs(4), the Plan 9 cache file system, might be useful.

-Skip

On Wed, Aug 17, 2011 at 2:00 PM, Bakul Shah <ba...@bitblocks.com> wrote:
> On Wed, 17 Aug 2011 13:09:47 +0300 Aram Hăvărneanu
> <ara...@mgk.ro> wrote:
>> Hello,
>>
>> I'm looking for advice on how to build a small network of two file
>> servers. I'm hoping most servers will be Plan 9; the clients are
>> Windows and Mac OS X.
>>
>> I have 2 houses separated by about 40ms of network latency. I want to
>> set up some servers in each location and have all the data accessible
>> from anywhere. I'll have about 2TB of data at each location, and one
>> location will probably scale up.
>        ...
>> Is 9p suitable for this? How will the 40ms latency affect 9p
>> operation? (I have 100Mbit).
>
> With a strict request/response protocol (one outstanding request
> at a time) you will move at most 64KB per 80ms round trip, so your
> throughput at best will be 6.55Mbps, or about 15 times slower than
> using HTTP/FTP on a 100Mbps link for large files.  [John, what was
> the link speed for the tests in your thesis?]
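>
> A back-of-the-envelope check of that figure, as a small Go sketch
> (the 64KB message size, 80ms round trip, and 100Mbps link are the
> numbers above; 64KB is 9p's usual maximum message size):
>
>	package main
>
>	import "fmt"
>
>	func main() {
>		const (
>			msize = 64 * 1024 // 9p maximum message size, bytes
>			rtt   = 0.080     // 40ms each way, seconds
>			link  = 100e6     // nominal link speed, bits/sec
>		)
>		// One full message per round trip: bits moved divided by RTT.
>		bps := float64(msize*8) / rtt
>		fmt.Printf("throughput: %.2f Mbps\n", bps/1e6)    // 6.55 Mbps
>		fmt.Printf("slowdown vs link: %.1fx\n", link/bps) // 15.3x
>	}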
>
>> Right now (only one location) I am using a Solaris server with ZFS
>> that serves SMB and iSCSI.
>
> Using venti in place of (or over) ZFS on spinning disks would
> incur further performance degradation.
>
>> Any tips are welcome :-),
>
> Since you want everything accessible from both sites, how about
> temporarily caching remote files locally?  There was a Usenix
> paper about `nache', a caching proxy for NFSv4, that may be of
> interest. Or maybe ftpfs with a local cache, if remote access
> is readonly?
>
