I'm jumping into this thread with a different alternative -- an IP-based block device.
I have seen a few successful cases with "HAST + UCARP + ZFS + FreeBSD".
If zfsonlinux is robust enough, "DRBD + Pacemaker + ZFS + Linux" is
definitely worth trying.
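For anyone curious about the FreeBSD route, the basic shape of the HAST + UCARP
setup is roughly the following. This is only a sketch of the stock recipe; the
host names, device paths, and addresses below are made up, not from a real
deployment:

  # /etc/hast.conf on both nodes
  resource shared0 {
          on nodea {
                  local /dev/da1
                  remote 192.168.10.2
          }
          on nodeb {
                  local /dev/da1
                  remote 192.168.10.1
          }
  }

  # on each node
  hastctl create shared0
  service hastd onestart

  # on the active node only
  hastctl role primary shared0
  zpool create tank /dev/hast/shared0

ucarp then floats a service address between the two nodes, and its up/down
scripts switch the HAST role and import/export the pool on failover.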
Thanks.
Fred
On Thu, Apr 26, 2012 at 12:10 AM, Richard Elling
wrote:
> On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote:
> Reboot requirement is a lame client implementation.
And lame protocol design. You could possibly migrate read-write NFSv3
on the fly by preserving FHs and somehow updating the clients to
On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote:
> On 4/25/12 6:57 PM, Paul Kraus wrote:
>> On Wed, Apr 25, 2012 at 9:07 PM, Nico Williams wrote:
>>> On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling
>>> wrote:
>
>>>
>>> Nothing's changed. Automounter + data migration -> rebooting clients
>>
On Wed, Apr 25, 2012 at 8:57 PM, Paul Kraus wrote:
> On Wed, Apr 25, 2012 at 9:07 PM, Nico Williams wrote:
>> Nothing's changed. Automounter + data migration -> rebooting clients
>> (or close enough to rebooting). I.e., outage.
>
> Uhhh, not if you design your automounter architecture correc
On 4/25/12 6:57 PM, Paul Kraus wrote:
On Wed, Apr 25, 2012 at 9:07 PM, Nico Williams wrote:
On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling
wrote:
Nothing's changed. Automounter + data migration -> rebooting clients
(or close enough to rebooting). I.e., outage.
Uhhh, not if you
On Wed, Apr 25, 2012 at 9:07 PM, Nico Williams wrote:
> On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling
> wrote:
>> On Apr 25, 2012, at 3:36 PM, Nico Williams wrote:
>> > I disagree vehemently. automount is a disaster because you need to
>> > synchronize changes with all those clients. That's
Tomorrow, Ian Collins wrote:
On 04/26/12 10:34 AM, Paul Archer wrote:
That assumes the data set will fit on one machine, and that machine won't be a
performance bottleneck.
Aren't those general considerations when specifying a file server?
I suppose. But I meant specifically that our data w
On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling
wrote:
> On Apr 25, 2012, at 3:36 PM, Nico Williams wrote:
> > I disagree vehemently. automount is a disaster because you need to
> > synchronize changes with all those clients. That's not realistic.
>
> Really? I did it with NIS automount maps an
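For context, an indirect automount map of the kind being described is just a
key / options / location table. A made-up example, with fictitious map, server,
and share names:

  # auto_data, distributed via NIS (or LDAP, or plain files)
  projects   -rw,hard,intr   serverA:/export/projects
  scratch    -rw,hard,intr   serverB:/export/scratch

Migrating "projects" then means copying the data and editing one map entry to
point at the new server; clients pick up the new location on their next mount,
with no per-client reconfiguration.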
On Wed, Apr 25, 2012 at 5:42 PM, Ian Collins wrote:
> Aren't those general considerations when specifying a file server?
There are Lustre clusters with thousands of nodes, hundreds of them
being servers, and high utilization rates. Whatever specs you might
have for one server head will not meet
On Apr 25, 2012, at 3:36 PM, Nico Williams wrote:
> On Wed, Apr 25, 2012 at 5:22 PM, Richard Elling
> wrote:
>> Unified namespace doesn't relieve you of 240 cross-mounts (or equivalents).
>> FWIW,
>> automounters were invented 20+ years ago to handle this in a nearly seamless
>> manner.
>> Today,
On 04/26/12 10:34 AM, Paul Archer wrote:
2:34pm, Rich Teer wrote:
On Wed, 25 Apr 2012, Paul Archer wrote:
Simple. With a distributed FS, all nodes mount from a single DFS. With NFS,
each node would have to mount from each other node. With 16 nodes, that's
what, 240 mounts? Not to mention your
On Wed, Apr 25, 2012 at 5:22 PM, Richard Elling
wrote:
> Unified namespace doesn't relieve you of 240 cross-mounts (or equivalents).
> FWIW,
> automounters were invented 20+ years ago to handle this in a nearly seamless
> manner.
> Today, we have DFS from Microsoft and NFS referrals that almost el
2:34pm, Rich Teer wrote:
On Wed, 25 Apr 2012, Paul Archer wrote:
Simple. With a distributed FS, all nodes mount from a single DFS. With NFS,
each node would have to mount from each other node. With 16 nodes, that's
what, 240 mounts? Not to mention your data is in 16 different
mounts/directory
On Apr 25, 2012, at 2:26 PM, Paul Archer wrote:
> 2:20pm, Richard Elling wrote:
>
>> On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
>>
>>Interesting, something more complex than NFS to avoid the
>> complexities of NFS? ;-)
>>
>> We have data coming in on multiple nodes (with
On 04/26/12 09:54 AM, Bob Friesenhahn wrote:
On Wed, 25 Apr 2012, Rich Teer wrote:
Perhaps I'm being overly simplistic, but in this scenario, what would prevent
one from having, on a single file server, /exports/nodes/node[0-15], and then
having each node NFS-mount /exports/nodes from the server
On Wed, 25 Apr 2012, Rich Teer wrote:
Perhaps I'm being overly simplistic, but in this scenario, what would prevent
one from having, on a single file server, /exports/nodes/node[0-15], and then
having each node NFS-mount /exports/nodes from the server? Much simpler than
your example, and all
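Concretely, that suggestion amounts to something like the following (the pool,
host, and directory names are invented):

  # on the file server: one filesystem, one NFS share
  zfs create -o mountpoint=/exports/nodes tank/nodes
  zfs set sharenfs=rw tank/nodes
  mkdir /exports/nodes/node0       # ... through node15, plain directories

  # on each compute node: a single mount instead of 15 cross-mounts
  mount -F nfs server:/exports/nodes /exports/nodes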
On Wed, Apr 25, 2012 at 4:26 PM, Paul Archer wrote:
> 2:20pm, Richard Elling wrote:
>> Ignoring lame NFS clients, how is that architecture different than what
>> you would have
>> with any other distributed file system? If all nodes share data to all
>> other nodes, then...?
>
> Simple. With a dis
On Wed, 25 Apr 2012, Paul Archer wrote:
Simple. With a distributed FS, all nodes mount from a single DFS. With NFS,
each node would have to mount from each other node. With 16 nodes, that's
what, 240 mounts? Not to mention your data is in 16 different mounts/directory
structures, instead of bein
2:20pm, Richard Elling wrote:
On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
Interesting, something more complex than NFS to avoid the
complexities of NFS? ;-)
We have data coming in on multiple nodes (with local storage) that is
needed on other multiple nodes. The only w
On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
> 11:26am, Richard Elling wrote:
>
>> On Apr 25, 2012, at 10:59 AM, Paul Archer wrote:
>>
>> The point of a clustered filesystem was to be able to spread our data
>> out among all nodes and still have access
>> from any node without hav
And he will still need an underlying filesystem like ZFS for them :)
I agree, you need something like AFS, Lustre, or pNFS. And/or an NFS
proxy to those.
Nico
As I understand it, LLNL has very large datasets on ZFS on Linux. You
could inquire with them, as well as
http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/topics?pli=1
. My guess is that it's quite stable for at least some use cases
(most likely: LLNL's!), but that may not be yours. Yo
9:08pm, Stefan Ring wrote:
Sorry for not being able to contribute any ZoL experience. I've been
pondering for a few months now whether it's worth trying myself.
Last time I checked, it didn't support the .zfs directory (for
snapshot access), which you really don't want to miss after getting
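That is the per-filesystem snapshot directory. A made-up example of the
convenience being referred to, with invented dataset and snapshot names:

  ls /tank/home/.zfs/snapshot/
  cp /tank/home/.zfs/snapshot/daily-2012-04-20/report.txt ~/   # self-service restore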
>To put it slightly differently, if I used ZoL in production, would I be likely
to experience performance or stability problems?
I saw one team revert from ZoL (CentOS 6) back to ext on some backup servers
for an application project, the killer was
stat times (find running slow etc.), perhaps
> I saw one team revert from ZoL (CentOS 6) back to ext on some backup servers
> for an application project, the killer was
> stat times (find running slow etc.), perhaps more layer 2 cache could have
> solved the problem, but it was easier to deploy ext/lvm2.
But stat times (think directory trav
11:26am, Richard Elling wrote:
On Apr 25, 2012, at 10:59 AM, Paul Archer wrote:
The point of a clustered filesystem was to be able to spread our data out
among all nodes and still have access
from any node without having to run NFS. Size of the data set (once you
get past the p
>To put it slightly differently, if I used ZoL in production, would I be
likely to experience performance or stability problems?
I saw one team revert from ZoL (CentOS 6) back to ext on some backup
servers for an application project, the killer was
stat times (find running slow etc.), perhaps mor
On Apr 25, 2012, at 10:59 AM, Paul Archer wrote:
> 9:59am, Richard Elling wrote:
>
>> On Apr 25, 2012, at 5:48 AM, Paul Archer wrote:
>>
>> This may fall into the realm of a religious war (I hope not!), but
>> recently several people on this list have
>> said/implied that ZFS was only
Hey again, I'm back with some news from my situation.
I tried taking out the faulty disk 5 and replacing it with a new disk, but
the pool showed up as FAULTED. So I plugged the faulty disk back in, keeping
the new disk in the machine, then ran a zpool replace.
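That is, something of the form below; the device names are placeholders, not
the actual ones:

  zpool replace tank <old-device> <new-device>
  zpool status tank        # watch the resilver progress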
After the new disk resilvered complete
9:59am, Richard Elling wrote:
On Apr 25, 2012, at 5:48 AM, Paul Archer wrote:
This may fall into the realm of a religious war (I hope not!), but
recently several people on this list have
said/implied that ZFS was only acceptable for production use on FreeBSD
(or Solaris, of course
On Apr 25, 2012, at 5:48 AM, Paul Archer wrote:
> This may fall into the realm of a religious war (I hope not!), but recently
> several people on this list have said/implied that ZFS was only acceptable
> for production use on FreeBSD (or Solaris, of course) rather than Linux with
> ZoL.
>
> I
On Apr 25, 2012, at 8:14 AM, Eric Schrock wrote:
> ZFS will always track per-user usage information even in the absence of
> quotas. See the zfs 'userused@' properties and the 'zfs userspace' command.
tip: zfs get -H -o value -p userused@username filesystem
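The per-user accounting can also be browsed with 'zfs userspace'; its output
looks roughly like this (dataset and user names are invented):

  zfs userspace tank/home
  TYPE         NAME    USED  QUOTA
  POSIX User   alice   2.1G   none
  POSIX User   bob      87M   none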
Yes, and this is the logical size, no
On Wed, Apr 25, 2012 at 05:48:57AM -0700, Paul Archer wrote:
> This may fall into the realm of a religious war (I hope not!), but
> recently several people on this list have said/implied that ZFS was
> only acceptable for production use on FreeBSD (or Solaris, of course)
> rather than Linux with Zo
This may fall into the realm of a religious war (I hope not!), but recently
several people on this list have said/implied that ZFS was only acceptable for
production use on FreeBSD (or Solaris, of course) rather than Linux with ZoL.
I'm working on a project at work involving a large(-ish) amoun
ZFS will always track per-user usage information even in the absence of
quotas. See the zfs 'userused@' properties and the 'zfs userspace' command.
- Eric
2012/4/25 Fred Liu
> Missing an important ‘NOT’:
>
> >OK. I see. And I agree such quotas will **NOT** scale well. From users'
> side, they a
Missing an important ‘NOT’:
>OK. I see. And I agree such quotas will **NOT** scale well. From users' side,
>they always
> ask for more space or even no quotas at all. One of the main purposes behind
> such quotas
> is that we can account usage and get the statistics. Is it possible to do it
>
On Apr 24, 2012, at 2:50 PM, Fred Liu wrote:
Yes.
Thanks.
I am not aware of anyone looking into this.
I don't think it is very hard, per se. But such quotas don't fit well with the
notion of many file systems. There might be some restricted use cases
where it makes good sense, but I'm not c