> vast majority)? Any clue how the problems could possibly
> be unique to the handling of file names/paths? Does it
> suggest anything else to look into for getting some more
> potentially useful evidence?
Well, all I can do is describe the most common TSO-related
failure:
- When a read RPC reply (including NFS/RPC/TCP/IP headers)
is slightly less than 64K bytes (many TSO implementations are
limited to 64K or 32 discontiguous segments
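A back-of-envelope check of the arithmetic behind this failure mode. The 2K mbuf-cluster size and the single prepended header mbuf are assumptions for illustration, not values taken from any particular driver:

```shell
# A read RPC reply slightly under 64K, split into assumed 2K mbuf clusters.
reply_bytes=65000
cluster=2048
data_segs=$(( (reply_bytes + cluster - 1) / cluster ))
# One extra mbuf assumed for a prepended header; this is what can push the
# chain past a 32-segment TSO limit.
total_segs=$(( data_segs + 1 ))
echo "data segments: $data_segs, with header: $total_segs (limit: 32)"
```

Under these assumed sizes the chain comes out one segment over the limit, which is consistent with the "slightly less than 64K" trigger described above.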
> 3.0.0 based? Is it okay to
> stick to the base version things are now based on, or do you
> want me to update to more recent? (That last only applies if
> main or stable/13 is to be put to use.)
Well, it sounds like you've isolated it to the genet interface.
Good sleuthing.
Unfortunately, NFS is only as good as the network fabric under it.
However, it's usually hangs or po
[Looks like the RPi4B genet0 handling is involved.]
On 2021-May-20, at 22:19, Rick Macklem wrote:
> Ok, so it isn't related to "soft".
> I am wondering if it is something specific to what
> "diff -r" does?
>
> Could you try:
> # cd /usr/ports
> # ls -R > /tmp/x
> # cd /mnt
> # ls -R > /tmp/y
> # cd /tmp
> # diff -u -p x y
> --> To see if "ls -
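The steps above can be wrapped into one small function; the paths are parameters here so the same check works on any local/NFS pair of trees (a sketch, not taken from the thread):

```shell
# Compare directory listings of two trees, e.g. a local tree and the same
# tree seen through an NFS mount, as suggested above.
compare_trees() {
    ( cd "$1" && ls -R ) > /tmp/x
    ( cd "$2" && ls -R ) > /tmp/y
    # Non-empty diff output means the NFS client saw different names
    # than the local filesystem does.
    diff -u /tmp/x /tmp/y
}
```

`compare_trees /usr/ports /mnt` reproduces the manual steps in one call.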
___
From: Mark Millard
Sent: Friday, May 21, 2021 12:40 AM
To: Rick Macklem
Cc: FreeBSD-STABLE Mailing List
Subject: Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in
a zfs file systems context)
allow eliminating some potential alternatives.]
On 2021-May-20, at 18:09, Rick Macklem wrote:
Oh, one additional thing that I'll dare to top post...
r367492 broke the TCP upcalls that the NFS server uses, such
that intermittent hangs of NFS mounts to FreeBSD13 servers can occur.
This has not yet been resolved in "main" etc and could explain
why an RPC could time out for a soft mount.
Mark Millard wrote:
[I warn that I'm a fairly minimal user of NFS
mounts, not knowing all that much. I'm mostly
reporting this in case it ends up as evidence
via eventually matching up with others observing
possibly related oddities.]
I got the following odd sequence (that I've
mixed notes into). It
On 11 Nov 2020, at 12:45, Ronald Klop wrote:
Hi,
I don't think NFS has the possibility to push notifications about changes in
the filesystem to the clients. NFSv3 is stateless so the server does not even
know about the clients. NFSv4 I don't know much about, but I have never heard
of notifications.
So for NFS kqueue would on
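With no notifications available, the usual fallback is to poll the file's attributes; a minimal sketch (the helper name and retry count are made up for illustration):

```shell
# Poll a file's modification time until it changes; a crude stand-in for
# kqueue, which (as noted above) is not available for files on NFS.
wait_for_change() {
    path=$1
    tries=${2:-10}
    start=$(date -r "$path" +%s)   # file mtime; works with BSD and GNU date
    i=0
    while [ "$i" -lt "$tries" ]; do
        sleep 1
        [ "$(date -r "$path" +%s)" != "$start" ] && return 0
        i=$((i + 1))
    done
    return 1   # no change seen within the polling window
}
```

Client-side attribute caching means changes may show up a few seconds late; that is inherent to polling over NFS.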
Hi,
I have a vague recollection that kqueue does not work for NFS files,
any chance that this will be made possible?
cheers,
danny
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> On 9 Jan 2020, at 05:24, Rick Macklem wrote:
>
> The attached patch changes the xid to be a global for all "connections" for
> the krpc UDP client.
>
> You could try it if you'd like. It passed a trivial test, but I don't know
> what that "misfeature" comment means, so I don't know i
uniqueness of this field.
rick
From: Daniel Braniss
Sent: Wednesday, January 8, 2020 12:08 PM
To: Rick Macklem
Cc: Richard P Mackerras; Adam McDougall; freebsd-stable@freebsd.org
Subject: Re: nfs lockd errors after NetApp software upgrade.
top posting NetAPP reply:
…
Here you can see transaction ID (0x5e15f77a) being used over port 886 and the
NFS server successfully responds.
4480695  2020-01-08 12:20:54  132.65.116.111  132.65.60.56  NLM  0x5e15f77a (1578497914)  886
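A toy illustration of the difference a global xid makes (the value reuses the xid from the capture above; the scheme shown is a simplification for illustration, not the actual krpc code):

```shell
# Per-connection xids: if every UDP "connection" seeds its sequence the
# same way, two mounts can emit the same xid, and a server matching
# replies by xid alone can confuse them.
conn_a_first=1578497914   # 0x5e15f77a, as seen in the capture
conn_b_first=1578497914   # second mount, same seed -> same first xid
# Global xid: one shared counter means no two requests share an xid.
xid_global=1578497914
next_xid() { xid_global=$((xid_global + 1)); }
next_xid; req_a=$xid_global
next_xid; req_b=$xid_global
echo "per-connection: $conn_a_first vs $conn_b_first; global: $req_a vs $req_b"
```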
>x ONTAP. If anyone says it’s not then they last
>looked on 8.x. So I suggest you QOS the IMAP workload.
>
>Nobody should be using UDP with NFS unless they have a very specific set
>of circumstances. TCP was a real step forward.
Well, I can't argue with this, considering I did the f
On 12/22/19 12:01 PM, Rick Macklem wrote:
> Well, I've noted the flawed protocol. Here's an example (from my limited
> understanding of these protocols, where there has never been a published
> spec) :
> - The NLM supports a "blocking lock request" that goes something like this...
>- client
I avoid
> fiddling with. As such, it won't have changed much since around FreeBSD 7.
> and we haven’t had any issues with it for years, so you must have done
> something good
>
> cheers,
> danny
>
> rick
>
> Cheers
>
> Richard
> (NetApp admin)
On Wed, 18 Dec 2019 at 15:46, Daniel Braniss <da...@cs.huji.ac.il> wrote:
On 18 Dec 2019, at 16:55, Rick Macklem <rmack...@uoguelph.ca> wrote:
Daniel Braniss wrote:
Hi,
The server with the problems is runni
ith it for years, so you must have done
>> something good
>>
>> cheers,
>> danny
>>
>>>
>>> rick
>>>
>>> cheers,
>>> danny
>>>
>>>> rick
>>>>
>>>> Cheers
>&
; (NetApp admin)
>>>
>>> On Wed, 18 Dec 2019 at 15:46, Daniel Braniss
>>> mailto:da...@cs.huji.ac.il>> wrote:
>>>
>>>
>>>> On 18 Dec 2019, at 16:55, Rick Macklem
>>>> mailto:rmack...@uoguelph.ca>> wrote:
>>>>
> (NetApp admin)
>>>
>>> On Wed, 18 Dec 2019 at 15:46, Daniel Braniss
>>> mailto:da...@cs.huji.ac.il>> wrote:
>>>
>>>
>>>> On 18 Dec 2019, at 16:55, Rick Macklem
>>>> mailto:rmack...@uoguelph.ca>> wrote:
>>&
Hi,
At ONTAP 9.3P6 there is a possible LACP group issue after upgrade. Have you
checked any LACP groups,
These should not be a problem but I assume network interfaces are at the
home ports, not on slower ports or something silly. It is marginally better
if the traffic goes direct to the node where
> On 19 Dec 2019, at 02:22, Rick Macklem wrote:
t running the web app - moodle.
> Is the vserver configured for 64bit identifiers
what is the issue here?
> ?
> If you enable NFS V4.0 or 4.1 other NFS clients using defaults might mount
> NFSv4.x unexpectedly after a reboot so you need to watch that.
>
> Cheers
>
> Richard
Hi,
What software version is the NetApp using?
Is the exported volume big?
Is the vserver configured for 64bit identifiers?
If you enable NFS V4.0 or 4.1 other NFS clients using defaults might mount
NFSv4.x unexpectedly after a reboot so you need to watch that.
Cheers
Richard
(NetApp admin)
Daniel Braniss wrote:
Hi,
The server with the problems is running FreeBSD 11.1 stable, it was working
fine for several months,
but after a software upgrade of our NetAPP server it’s reporting many lockd
errors and becomes catatonic,
...
Dec 18 13:11:02 moo-09 kernel: nfs server fr-06:/web/www: lockd not responding
Dec 18 13:11:45 moo-09 last message repeated 7 times
Dec 18 13:12:55 moo-09 last message repeated 8 times
Hi!
I've seen a strange effect: NFS via IPv6 between 11.2-REL amd64
boxes failed for directories with more than 45 files or directories.
Small directories worked. It seems to be an issue with
ipv6 fragmentation (?), as can be seen by tcpdump:
17:54:16.855978 IP6 nfs-serv > nfs-client
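Rough arithmetic for why a directory-size threshold could track packet size: once a READDIR reply no longer fits in one Ethernet-sized IPv6 datagram it must be fragmented. All the per-entry and overhead sizes below are assumptions for illustration, not values measured from this report:

```shell
mtu=1500          # typical Ethernet MTU
ip6_hdr=40        # fixed IPv6 header
udp_hdr=8         # assuming NFS over UDP, where IP-level fragmentation applies
rpc_overhead=100  # assumed RPC reply header + attributes
per_entry=28      # assumed average on-the-wire size of one directory entry
payload=$(( mtu - ip6_hdr - udp_hdr - rpc_overhead ))
entries=$(( payload / per_entry ))
echo "entries per unfragmented reply: $entries"
```

Under these assumed sizes the cutoff lands in the same neighborhood as the observed ~45-file threshold, which is consistent with (though far from proof of) a fragmentation problem.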
Hi,
Recent work has determined that ESXi 6.7 works much better with the FreeBSD NFS
server for NFSv4.1 than ESXi 6.5 does. (Once a patch not yet in head/current is
committed.)
I don't think support for ESXi 6.5 for NFSv4.1 is practical. ESXi 6.5 can use
NFSv3 or iSCSI to use a FreeBSD s
tting up/cloning 80 VMs that are stored on the NFS datastore
I can just report that the setup performs well and seems to be stable. Only
thing that happened twice while working with ZFS snapshots/clones was that the
ESXi host lost the connection to the NFS datastore. Don't know if it
Rick,
Thanks for the comments. I'm running a small "home lab" environment, so the
ESXi client is the only one I'm concerned with right now. I'll keep using the
ReclaimComplete patch as is. Definitely had problems with the NFS server
rebooting before I applied th
ns after a server reboot. The recovery code
seemed to be badly broken in the 6.5 client. (All sorts of fun stuff like the
client looping doing ExchangeID operations forever. VM crashes...)
>That completely fixed the connection instability, but the NFS share was still
>mounting read-only wit
Daniel Engel wrote:
Hi,
I am setting up an environment with FreeBSD 11.1 sharing a ZFS datastore to
vmware ESXI 6.7. There were a number of errors with NFS 4.1 sharing that I
didn't understand until I found the following thread.
<https://lists.freebsd.org/pipermail/freebsd-stable/2018-March/088
NAGY Andreas wrote:
>Thanks! Please keep me updated if you find out more or when an updated version
>is available.
Will try to remember to do so.
>As I now know it is working, I will start tomorrow to build up a testsystem
>with 3 NFS servers (two of them in a HA with CARP and HAST) and several ESXi
>hosts which will all access their NFS datastores
se grief.)
>These only appear several times after the NFS share is mounted or remounted
>after a connection loss.
>Everything works fine, but haven't seen them till I applied the last patch.
>
>andi
Ok. Thanks for testing all of these patches. I will probably get cleane
1 c 2
9f22ad6d 0 0 0 0 0]: Stale file handle
These only appear several times after the NFS share is mounted or remounted
after a connection loss.
Everything works fine, but haven't seen them till I applied the last patch.
andi
't seem to have any impact. It looks
>like they only appear when files are created or modified on the datastore
>from the datastore browser or from shell; have not seen these warnings when
>working in a VM on a virtual disk that is stored on the nfs datastore.
I've a
, but I think I
will still use ZFS, but only as a filesystem on a single hw raid disk. Must check
what the best settings are for this on the hw raid + zfs for nfs.
>This one is a mystery to me. It seemed to be upset that the directory is
>changing (I assume either the Change or ModifyTime a
NAGY Andreas wrote:
>Actually I have only made some quick benchmarks with ATTO in a Windows VM
>which has a vmdk on the NFS41 datastore which is mounted over two 1GB links
>in different subnets.
>Read is nearly double that of a single connection and write is just a bit
>faster. Don't know
NAGY Andreas wrote:
>- after a reboot of the FreeBSD machine the ESXi does not restore the NFS
>datastore again with the following warning (just disconnecting the links is fine)
>2018-03-08T12:39:44.602Z cpu23:66484)WARNING: NFS41:
>NFS41_Bug:2
ttributes are supposed to change.
I might try posting on nf...@ietf.org in case somebody involved with this
client reads
that list and can explain what this is?
>- after a reboot of the FreeBSD machine the ESXi does not restore the NFS
>datastore again >with following warning (just discon
, suggest retry
2018-03-08T11:34:00.352Z cpu1:67981 opID=f5159ce3)WARNING:
UserFile: 2155: hostd-worker: Directory changing too often to perform readdir
operation (11 retries), returning busy
- after a reboot of the FreeBSD machine the ESXi does not restore the NFS
datastore again with fo
eleg
>patches so far.
That's fine. I don't think that matters much.
>I think this is related to the BIND_CONN_TO_SESSION; after a disconnect the
>ESXi cannot connect to the NFS also with this warning:
>2018-03-07T16:55:11.227Z cpu21:66484)WARNING: NFS41: NFS41_Bug:2361: BUG -
>Invalid BIND_CONN_TO_SESSION error: NFS4ERR_NOTSUPP
Another thing I noticed today is that it is not possible to delete a folder
with the ESXi datastorebrowser on the NFS mount. Maybe
NAGY Andreas wrote:
>Okay, that was the main reason for using NFS 4.1.
>Is it planned to implement it, or is the focus on pNFS?
I took a quick look and implementing this for some cases will be pretty
easy. Binding a FORE channel is implied, so for that case all the server
does is reply OK
NAGY Andreas wrote:
>Compiling with the last patch also failed:
>
>error: use of undeclared identifier 'NFSV4OPEN_WDSUPPFTYPE'
If you apply the attached patch along with wantdeleg.patch, it should
build. At most, this will get rid of the warnings about invalid reason for
not issuing a delegation, s
NAGY Andreas wrote:
>Okay, that was the main reason for using NFS 4.1.
>Is it planned to implement it, or is the focus on pNFS?
Do the VMware people claim that this improves performance?
(I know nothing about the world of VMs, but for real hardware
I can't see any advantage of havin
Okay, that was the main reason for using NFS 4.1.
Is it planned to implement it, or is the focus on pNFS?
Thanks,
Andi
From: Rick Macklem
Sent: March 5, 2018, 11:49 PM
To: NAGY Andreas; 'freebsd-stable@freebsd.org'
Subject: Re: NFS 4.1 RECLAIM_C
Nope, that isn't supported, rick
(Hope no one is too upset by a top post.)
From: NAGY Andreas
Sent: Monday, March 5, 2018 8:22:10 AM
To: Rick Macklem; 'freebsd-stable@freebsd.org'
Subject: RE: NFS 4.1 RECLAIM_COMPLETE FS failed error in c
Compiling with the last patch also failed:
error: use of undeclared identifier 'NFSV4OPEN_WDSUPPFTYPE'
-----Original Message-----
From: NAGY Andreas
Sent: Monday, March 5, 2018 2:22 PM
To: Rick Macklem; 'freebsd-stable@freebsd.org'
Subject: RE: NFS 4.1 RECLAIM_COMPLET
Thanks, I am actually compiling with both patches.
I try now to get NFS 4.1 multipathing working. So I have now two connections on
different subnets between the ESXi host and the FreeBSD host, with exports for
the same mountpoint on both subnets.
Now I get the following errors in the vmkernel.log
NAGY Andreas wrote:
[stuff snipped]
>In the source I saw nfs_async = 0; is it right that NFS will work in async
>mode if I compile the kernel with nfs_async = 1?
>
>I know the risk of running it async, but is it not the same risk as having the
>datastore connected via iSCSI w
Okay, the slow write was not a NFS problem, it was the hw raid controller which
switched to write through because of a broken battery.
In the source I saw nfs_async = 0; is it right that NFS will work in async mode
if I compile the kernel with nfs_async = 1?
I know the risk of running it async
ESXi host and the FreeBSD host,
but as soon as I figure out what is the right way to configure multiple paths
for NFS I will do more testing.
I need also to check out what can be tuned. I expected that writes to the NFS
datastore will be slower than iSCSI but not as slow as it is now.
andi
NAGY Andreas wrote:
>Hi and thanks!
>
>First time using/needing a patch; could you give me short advice on how to use
>it and for which version?
The only difference with kernel versions will be the line#s.
>So far I have made a fresh FreeBSD 11.1 RELEASE install as a VM on an ESXi host
>updated th
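For anyone in the same spot, this is the general mechanism for applying such a patch; the files below are made up for the demo (the thread's actual patch and source paths are not shown here), and as noted above, only the line offsets differ between kernel versions:

```shell
# Create a tiny "source file" and a unified diff against an edited copy,
# then apply the diff with patch(1), the same tool used for kernel patches.
mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
printf 'old line\n' > file.c
printf 'new line\n' > file.c.new
diff -u file.c file.c.new > fix.patch || true   # diff exits 1 when files differ
patch file.c < fix.patch
cat file.c
```

For a real kernel patch the same `patch < file.patch` step is run from the source tree (e.g. /usr/src), followed by a kernel rebuild.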
v.c 4102
lines.
andi
-----Original Message-----
From: Rick Macklem [mailto:rmack...@uoguelph.ca]
Sent: Saturday, March 3, 2018 3:01 AM
To: NAGY Andreas; freebsd-stable@freebsd.org
Subject: Re: NFS 4.1 RECLAIM_COMPLETE FS failed error in combination with ESXi
client
NAGY Andreas wrote:
>I am trying to get a FreeBSD NFS 4.1 export working with VMware ESXi 6.5u1,
>but it is always mounted as read only.
>
>After some research, I found out that this is a known problem, and there are
>threads about this from 2015 also in the mailinglist ar
Hi,
I am trying to get a FreeBSD NFS 4.1 export working with VMware ESXi 6.5u1, but
it is always mounted as read only.
After some research, I found out that this is a known problem, and there are
threads about this from 2015 also in the mailinglist archive.
As it seems VMware will not change
On Thu, Mar 09, 2017 at 10:49:01PM +0000, Rick Macklem wrote:
Konstantin Belousov wrote:
> I did not touch unionfs, and have no plans to. It is equally broken in
> all relevant versions of FreeBSD.
Heh, heh. I chuckled when I read this. I think he's trying to say "it probably
won't ever be fixed". My understanding is that it would require a major redesign
Regarding Konstantin Belousov's message of 2017-03-08 00:55 (localtime):
> On Tue, Mar 07, 2017 at 10:49:01PM +0000, Rick Macklem wrote:
>> Hmm, this is going to sound dumb, but I don't recall generating any
>> unionfs patch;-)
>> I'll go look for it. Maybe it was Kostik's?
> I did not touch u
bugs, a partial patch and some comments [Was: Re:
> 1-BETA3 Panic: __lockmgr_args: downgrade a recursed lockmgr nfs @
> /usr/local/share/deploy-tools/RELENG_11/src/sys/fs/unionfs/union_vnops.c:1905]
>
> Regarding Harry Schmalzbauer's message of 2017-03-07 19:44 (localtime):
Regarding Harry Schmalzbauer's message of 2017-03-07 13:42 (localtime):
…
> Something ufs related seems to have tightened the unionfs locking
> problem in stable/11. Now the machine instantaneously panics during
> boot after mounting root with Rick's latest patch.
>
> Unfortunately I don't hav
With this patch, one reproducible panic can still be easily triggered:
>> I have directory A unionfs_mounted under directory B.
>> Then I mount_unionfs the same directory A below another directory C.
>> panic: __lockmgr_args: downgrade a recursed lockmgr nfs @
>> /usr/local/s
I inherited a lab that has a few hundred hosts running FreeBSD 7.2.
These hosts run test scripts that access files that are stored on a
FreeBSD 6.3 host. The 6.3 host exports a /data directory with NFS.
On the 7.2 hosts, I can see the exported directory:
$ showmount -e 6.3-host
Exports list on 6.3-host
/data Everyone
From: owner-freebsd-sta...@freebsd.org on behalf of Daniel Braniss
Sent: Friday, January 13, 2017 3:06:56 AM
To: Karl Young
Cc: FreeBSD Stable Mailing List
Subject: Re: NFS and amd on older FreeBSD
> On 12 Jan 2017, at 21:01, Karl Young wrote:
>
> Daniel Braniss(da...@cs.huji.ac.il)@2017
> These hosts run test scripts that access files that are stored on a
> FreeBSD 6.3 host. The 6.3 host exports a /data directory with NFS
>
> [...]
>
> $ showmount -e 9.3-host
> Exports list on 9.3-host:
> /data Everyone
On Wed, Jan 11, 2017 at 03:47:37PM -0800, Karl Young wrote:
> I inherited a lab that has a few hundred hosts running FreeBSD 7.2.
> These hosts run test scripts that access files that are stored on a
> FreeBSD 6.3 host. The 6.3 host exports a /data directory with NFS.
>
> [...]
On 29/12/2016 at 14:48, Marko Cupać wrote:
Hi,
a few years ago I opted to move our NFS server from FreeBSD to
CentOS, because I was getting sub-optimal speeds between the ESXi client
and the FreeBSD server.
I never tried the workaround described here:
[http://christopher-technicalmusings.blogspot.rs/2011/06/speeding-up-freebsds-nfs-on-zfs-for
> On 21 Dec 2016, at 07:24, Garrett Wollman wrote:
>
> I don't know the ZFS code well enough to understand what running out
> of quota has to do with this situation (you'd think it would just
> return immediately with [EDQUOT]) but perhaps it matters that the
> clients are not well-behaved and th
I've opened a bug about this before, which I can't cite by number
because bugzilla appears to be down at the moment. But I had this
problem recur tonight under otherwise idle conditions, so I was able
to get a set of kernel stacks without any confounding RPC activity
going on. This is on 10.2; we