On Wed, Apr 25, 2012 at 8:57 PM, Paul Kraus wrote:
> On Wed, Apr 25, 2012 at 9:07 PM, Nico Williams wrote:
>> Nothing's changed. Automounter + data migration -> rebooting clients
>> (or close enough to rebooting). I.e., outage.
>
> Uhhh, not if you design your automounter architecture correctly.
On Wed, Apr 25, 2012 at 9:07 PM, Nico Williams wrote:
> On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling wrote:
>> On Apr 25, 2012, at 3:36 PM, Nico Williams wrote:
>> > I disagree vehemently. automount is a disaster because you need to
>> synchronize changes with all those clients. That's not realistic.
On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling wrote:
> On Apr 25, 2012, at 3:36 PM, Nico Williams wrote:
> > I disagree vehemently. automount is a disaster because you need to
> > synchronize changes with all those clients. That's not realistic.
>
> Really? I did it with NIS automount maps an
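(A minimal sketch of the sort of indirect automounter map under discussion; the map, server, and path names here are invented, and the map itself could be distributed via NIS, LDAP, or plain files:)

    # /etc/auto_master entry: lookups under /data are driven by auto_data
    /data    auto_data

    # auto_data (indirect map): key -> server:path
    # relocating proj1 to another server means editing this one map entry,
    # not every client's /etc/fstab
    proj1    server1:/export/proj1
    proj2    server2:/export/proj2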
On Apr 25, 2012, at 3:36 PM, Nico Williams wrote:
> On Wed, Apr 25, 2012 at 5:22 PM, Richard Elling wrote:
>> Unified namespace doesn't relieve you of 240 cross-mounts (or equivalents).
>> FWIW, automounters were invented 20+ years ago to handle this in a nearly
>> seamless manner.
>> Today,
On Wed, Apr 25, 2012 at 5:22 PM, Richard Elling wrote:
> Unified namespace doesn't relieve you of 240 cross-mounts (or equivalents).
> FWIW, automounters were invented 20+ years ago to handle this in a nearly
> seamless manner. Today, we have DFS from Microsoft and NFS referrals that
> almost el
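(For context, a rough sketch of the NFS referral idea mentioned above, assuming a Linux NFSv4 server with referral support via the exports(5) refer= option; host and path names are invented:)

    # /etc/exports on the namespace server: clients walking into
    # /export/data are referred to the server that actually holds it
    /export/data  *(ro,refer=/export/data@server2)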
2:34pm, Rich Teer wrote:
On Wed, 25 Apr 2012, Paul Archer wrote:
Simple. With a distributed FS, all nodes mount from a single DFS. With NFS,
each node would have to mount from each other node. With 16 nodes, that's
what, 240 mounts? Not to mention your data is in 16 different
mounts/directory
On Apr 25, 2012, at 2:26 PM, Paul Archer wrote:
> 2:20pm, Richard Elling wrote:
>
>> On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
>>
>> Interesting, something more complex than NFS to avoid the
>> complexities of NFS? ;-)
>>
>> We have data coming in on multiple nodes (with
On Wed, 25 Apr 2012, Rich Teer wrote:
Perhaps I'm being overly simplistic, but in this scenario, what would prevent
one from having, on a single file server, /exports/nodes/node[0-15], and then
having each node NFS-mount /exports/nodes from the server? Much simpler than
your example, and all
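(A sketch of that single-server layout, assuming a Linux NFS server; the server name and export options are invented for illustration:)

    # /etc/exports on the file server: export the parent directory once
    /exports/nodes  *(rw,no_subtree_check)

    # on each of the 16 nodes: a single mount instead of 15 cross-mounts
    mount -t nfs server:/exports/nodes /exports/nodes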
On Wed, Apr 25, 2012 at 4:26 PM, Paul Archer wrote:
> 2:20pm, Richard Elling wrote:
>> Ignoring lame NFS clients, how is that architecture different than what
>> you would have with any other distributed file system? If all nodes share
>> data to all other nodes, then...?
>
> Simple. With a dis
On Wed, 25 Apr 2012, Paul Archer wrote:
Simple. With a distributed FS, all nodes mount from a single DFS. With NFS,
each node would have to mount from each other node. With 16 nodes, that's
what, 240 mounts? Not to mention your data is in 16 different mounts/directory
structures, instead of bein
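(The arithmetic: each of the 16 nodes mounts the other 15, so 16 * 15 = 240 client mounts cluster-wide. A hypothetical sketch of what full cross-mounting would mean per node, with invented host names and paths:)

    # run on each node: one fstab entry per peer, 15 per node,
    # 240 NFS mounts across the whole cluster
    self=$(hostname)
    for peer in node{0..15}; do
        [ "$peer" = "$self" ] && continue
        echo "$peer:/export /mnt/$peer nfs defaults 0 0"
    done >> /etc/fstab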
2:20pm, Richard Elling wrote:
On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
Interesting, something more complex than NFS to avoid the
complexities of NFS? ;-)
We have data coming in on multiple nodes (with local storage) that is
needed on other multiple nodes. The only w
On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
> 11:26am, Richard Elling wrote:
>
>> On Apr 25, 2012, at 10:59 AM, Paul Archer wrote:
>>
>> The point of a clustered filesystem was to be able to spread our data
>> out among all nodes and still have access from any node without having
>> to run NFS.
I agree, you need something like AFS, Lustre, or pNFS. And/or an NFS
proxy to those.
Nico
11:26am, Richard Elling wrote:
On Apr 25, 2012, at 10:59 AM, Paul Archer wrote:
The point of a clustered filesystem was to be able to spread our data out
among all nodes and still have access from any node without having to run
NFS. Size of the data set (once you get past the p