When I try to start the NFS server on a freshly installed F40 system, it stops
immediately. What is wrong? Below I give the systemctl status and the relevant
part of the journalctl output.
root@s185:~# systemctl status nfs-server
● nfs-server.service - NFS server and services
Loaded: loaded (/usr
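For reference, the usual systemd way to see why a unit dies at start (assuming
the default nfs-server.service unit name shown above) is:

  systemctl status nfs-server.service
  journalctl -xeu nfs-server.service

The second command shows the full journal for just that unit, including the
failure context.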
…trying to impose ownership restrictions through how you log in. It seems like
their way of handling the conflicting mess of Windows/Mac/Linux usage
on the same device.
For old-school NFS, you need to be the same user on the NAS as on the
local system. This isn't the user name but your user ID number.
The user used to connect to the network drive has to have write permission.
Log in as that user and try to write in the directory at the mount point.
On Thu, Nov 30, 2023 at 8:02 AM Roger Heflin wrote:
You specified "nfs" as the mount type, and that should mount NFSv4 with
TCP, but it mounted with plain "nfs" and UDP, so whatever is on the other end
is old and/or has TCP/NFSv4 disabled.
That being said, I don't know that the users and/or owner options *WORK*
for network disks. Those options lik
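To test that theory, one can pin the version and transport in the fstab
options; a minimal sketch (the nfsvers value is an assumption, adjust to what
the server offers):

  192.168.1.12:/mnt/HD/HD_a2  /mnt/nfs  nfs  nfsvers=4.1,proto=tcp,rw,_netdev  0 0

If the server end really has NFSv4/TCP disabled, this mount fails outright
instead of silently falling back.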
I have a network drive being mounted from the following entry in /etc/fstab:
192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs nfs
users,nconnect=2,owner,rw,_netdev 0 0
This results in the following definition in /etc/mtab:
192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs
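To see what the client actually negotiated (version, transport, and the rest),
nfs-utils ships a quick check:

  nfsstat -m

This prints each NFS mount with its effective options.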
On Sat, 2022-12-24 at 04:46 -0800, Jonathan Ryshpan wrote:
> BTW: I have adopted the suggestion of Tom Horsley to put "amito" in
> /etc/hosts, which seems to work well. However this could fail if my
> dns server changes amito's IP address -- unlikely but possible.
It *would* be a problem. Most DH
> The problem seems to happen because nfs is started before amito can be
> resolved:
Do you have "NetworkManager-wait-online.service" enabled?
Not sure if this would fix it, but it *seems* like it should.
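For the record, enabling it is one command (this only helps if the NFS units
are ordered after network-online.target, which is an assumption worth
verifying on your system):

  systemctl enable --now NetworkManager-wait-online.service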
On Mon, 07 Nov 2022 09:08:12 -0800
Jonathan Ryshpan wrote:
> The problem seems to happen because nfs is started before amito can be
> resolved:
I usually solve problems like this by putting "amito" in /etc/hosts
(and making sure networking believes it should lo
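A minimal sketch of that /etc/hosts entry (the address below is a placeholder;
use amito's real one):

  # /etc/hosts
  192.168.1.100   amito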
# /etc/exportfs created by Jon : Fri 2021-11-05 02:33:51 PM PDT
# edited by Jon : Tue Nov 16 10:30:08 AM PST 2021
# / amito(rw,no_subtree_check,no_root_squash)
/ amito(rw)
/home amito(rw)
The problem seems to happen because nfs is started before amito can be
resolved:
Nov 0
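After editing the exports file, the usual way to apply and verify it without a
full service restart is:

  exportfs -ra    # re-export everything listed in /etc/exports
  exportfs -v     # show what is currently exported, with options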
On 5/28/22 10:18, Chris Adams wrote:
This is from Red Hat's RHEL 8 docs, but works the same on Fedora (at
least version 35). Set 'vers3=n' in the '[nfsd]' section of
/etc/nfs.conf, mask the RPC services, and restart NFS:
systemctl mask --now rpc-statd.service rpcbi
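Putting the pieces together, a sketch of the NFSv4-only setup described above
(the mask line is completed from the RHEL docs as I remember them, so verify
it before relying on it):

  # /etc/nfs.conf
  [nfsd]
  vers3=n

  systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket
  systemctl restart nfs-server.service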
I don't need rpcbind, as I only use NFSv4. Is there any way to set up
or configure the NFS server-related units (nfs-server.service, etc.) to
not start rpcbind?
Fresh install of F35. I have these lines in /etc/fstab:
192.168.1.2://home/rikiheck/files /home/rikiheck/files nfs
auto,nouser,rw,dev,nosuid,exec,_netdev 0 0
192.168.1.2://multi/ /mnt/mail/multi nfs
auto,user,noauto,rw,dev,nosuid,noexec,_netdev 0 0
192.168.1.2://home
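As a side note, with "user,noauto" in its options the second entry is mounted
on demand by an ordinary user; a quick test (assuming the fstab above is in
place):

  mount /mnt/mail/multi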
On Thu, 17 Feb 2022 10:19:34 +0000 Alex Gurenko via users wrote:
> Happens to the best of us :) I assume that worked for you?
Yes of course. Thanks again.
--Frank
Happens to the best of us :) I assume that worked for you?
---
Best regards, Alex
--- Original Message ---
On Thursday, February 17th, 2022 at 11:07, Frank Elsner
wrote:
On Thu, 17 Feb 2022 09:46:35 +0000 Alex Gurenko via users wrote:
> I would think that adding `sudo` to your command would fix your problem.
Oh shit, I'm getting old.
--Frank
I would think that adding `sudo` to your command would fix your problem.
---
Best regards, Alex
--- Original Message ---
On Thursday, February 17th, 2022 at 10:37, Frank Elsner via users
wrote:
On Thu, 17 Feb 2022 10:28:11 +0100 Frank Elsner via users wrote:
> Hello,
>
> on my Fedora 36 system I've the following (strange) mount error:
^^
Typo, Fedora 35 of course.
--Frank
Hello,
on my Fedora 36 system I have the following (strange) mount error:
$ mount -t nfs christo:/misc/Backups /mnt
mount.nfs: failed to apply fstab options
In /etc/fstab there is no entry relating to /misc/Backups.
What options?
Kind regards, Frank E
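As the follow-ups above note, the error came down to privileges; the fix was
simply to run the same mount as root:

  sudo mount -t nfs christo:/misc/Backups /mnt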
On 1/31/22 02:11, Tim via users wrote:
And at some stage people are going to stop making devices look for DHCP
and fallback on Avahi, they'll decide to simplify things and just
follow the latest fad. You'll end up with a gadget that only does
Avahi.
You have this quite confused. DHCP and mdns
On Tue, 2022-02-01 at 22:38 +, Barry wrote:
> I thought that mDNS that Avahi implements only uses multicast on the
> LAN. You could set up multicast across multiple LAN segments.
>
> How does that end up getting answers from the internet?
> Especially when all ISPs block multicast it seems.
> On 1 Feb 2022, at 18:59, Tim via users wrote:
>
> If it doesn't already know the IP, then your computer can end up trying
> to query public servers outside your LAN for the answers.
I thought that mDNS that Avahi implements only uses multicast on the LAN.
You could set up multicast across multiple LAN segments.
On Mon, 2022-01-31 at 21:52 +1030, Tim via users wrote:
> ".arpa" is owned, and they're able to set rules about its usage (so
> home.arpa was possible). Trying to set up a new top level domain,
> such as .home, would require getting a plethora of organisations to
> agree to something new, and requ
On 1/31/22 06:27, Ed Greshko wrote:
I needed no such change to my F35's hosts file for it to function
properly. Probably will never find out why yours did.
Probably because the server configuration was modified. Robert reported:
> On the server:
> [plugh-3g ~]# cat /sys/module/nfsd/paramete
Ed Greshko wrote:
> On 31/01/2022 14:39, Tim via users wrote:
>> Not long ago, 16 Nov 2021, I had one of their email press releases
>> stating that the latest version of 8 had just been released and that
>> its EOL would be 31 Dec 2021. I had to check that wasn't a typo.
>
> I do need to see wha
On Mon, 2022-01-31 at 20:41 +1030, Tim via users wrote:
> Linux had an interesting quirk of using ".localdomain" as its LAN
> domain (at least on the few distros I've played with). Microsoft may
> have used .mshome or .home (as my router uses, actually it also uses
> .router, not that it tells you
On Mon, 2022-01-31 at 16:59 +0800, Ed Greshko wrote:
> I don't know much about Avahi/Bonjour/mDNS/ZeroConf. I think it is/was
> a way to shoehorn Linux into some Windows environments. I hardly had
> to deal with that.
>
> I also didn't deal much with SMB as only ever had a couple of Windows
> syst
On Mon, 2022-01-31 at 13:24 +0800, Ed Greshko wrote:
> I installed a Centos7 system and during the install process called it
> fedora.local. By default this was placed in etc/hostname.
Wasn't ".local" and Avahi/Bonjour/mDNS/ZeroConf non-traditional
DHCP and DNS thing? Does it still require diffe
On 31/01/2022 00:13, Robert Nichols wrote:
FINALLY!!
I can get it all to work by putting "fedora.local" in /etc/hostname _and_ editing
/etc/hosts to have "fedora.local" as the _first_ name for 127.0.0.1 .
I installed a Centos7 system and during the install process called it
fedora.local. By
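A sketch of the /etc/hosts line that made it work (the alias list is an
assumption; the point is that "fedora.local" comes first):

  127.0.0.1   fedora.local localhost localhost.localdomain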
, but "hostname
--fqdn" responds with "hostname: Name or service not known", and
"mount -t nfs ..." causes the various "... does not map into domain
'localdomain'" messages to be logged.
I *think* the "name or service not known" message m
On 1/29/22 17:20, Ed Greshko wrote:
In the initial posting by Robert he wrote:
"I have no nfs-idmapd service running"
Right, but on recent kernels, the client doesn't use rpc.idmapd, it uses
"nfsidmap". The fact that rpc.idmapd isn't running doesn't real
On 1/28/22 23:32, Ed Greshko wrote:
But I do have nfs-idmapd.service with "Domain = localdomain" in its
configuration file.
If I change that to "Domain = local" and restart nfs-idmapd.service I
do get
[root@fedora ~]# nfsidmap -d
local
But everything works no mat
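The configuration file in question is /etc/idmapd.conf (the standard location
on Fedora); a minimal sketch of the change being discussed:

  # /etc/idmapd.conf
  [General]
  Domain = local

  systemctl restart nfs-idmapd.service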
…with
[root@fedora ~]# nfsidmap -d
localdomain
[root@fedora ~]# hostnamectl hostname
fedora.local
[fedora ~]# hostname --fqdn
hostname: Name or service not known
[fedora ~]# hostname
fedora.local
[fedora ~]# mount -t nfs plugh-3g:/srv/shared /Public
[fedora ~]# ll -d /Public
drwxrws--x. 11 nobody nobody 4096 2022-01-28 17:20:50 /Public
[fedora ~]# journalctl SYSLOG_IDENTIFIER=nfsidmap | tail -4
Jan 28 21:19:14 fedora.local nfsidmap
On 1/28/22 06:08, Robert Nichols wrote:
Where does Fedora get its domain name? When I type "hostname --fqdn"
I get "hostname: Name or service not known". The CentOS 8 VM
apparently gets its domain name from the /etc/hostname file, which
contains "cent9-vm.local". This does not appear to work in
On 28/01/2022 22:53, Ed Greshko wrote:
Example: For the host that I mentioned I used "hostnamectl f35ser.greshko.com"
Correction.
hostnamectl hostname f35ser.greshko.com
Too late in my day.
--
Did 황준호 die?
Hi.
On Thu, 27 Jan 2022 08:10:53 -0600 Robert Nichols wrote:
> On 1/26/22 7:15 PM, Gordon Messmer wrote:
>> What does the entry for that filesystem in /proc/mounts look like? It
>> should have negotiated mount options that shed some light.
>> Maybe add the "sec=sys" mount option to the clien
In a Fedora 35 VM, all users and groups in an NFS mounted filesystem are mapped to
"nobody" even though the names and numeric IDs are the same on the server and
client. The messages logged are of the form:
"name '@local' does not map into domain 'local
Since some Fedora33 update in the last couple of weeks the problem has
gone away. I haven't changed anything as far as I am aware.
One change is that the kernel moved from 5.13.x to 5.14.x ...
Terry
https://release-monitoring.org/project/2081/
Well it is a pre-release version. 2.5.5.rc3
Hi Roger,
Thanks for looking.
I will try NFS v3 with my latency tests running. I did try NFS v3 before
and I "think" there were still desktop lockups but for a much shorter
time. But this is just a feeling.
Current kernel on both systems is: 5.13.19-100.fc33.x86_64.
If I find the ti
…debugging would help (without having
really verbose kernel debugging).
What is the current kernel you are running? Trying a new one might
be worth it. Though I don't see NFS changes/fixes listed in the
5.14.* or 5.13.* kernel changelogs in the rpm file (rpm -q
--changelog) and there are only
[sar network-interface counters trimmed]
Also, NFSv4 uses TCP/IP I think by default,
and TCP/IP retries would be much quicker than 45 seconds. I do feel
there is an i
[all-zero sar -d columns trimmed]
So nothing obvious on the disks of the server. I am pretty sure this is
an NFS issue ...
My disklatencytest showed a longish (14 secs) NFS file system
directory/stat lookup again today on a desktop:
2021-10-04T05:26:19 0.069486 0.069486 0.000570 /home/...
2021-10-04T05:28:19 0.269743 0.538000 0.001019 /home/...
2021-10-04T09:48:00 1.492158 0.003314
On 04/10/2021 00:51, Roger Heflin wrote:
With 10 minute samples anything that happened gets averaged enough
that even the worst event is almost impossible to see.
Sar will report the same as date, i.e. local time. And a 12:51 event
would be in the 13:00 sample (started at about 12:50 and ended at 1
[sar socket counts trimmed]
sar -n NFS on the client:
11:10:02 19.82 0.00 0.28 0.34 0.71 15.13
11:20:03 16.53 0.00 0.27 0.15 0.34 13.80
11:30:04 17.20 0.00 0.13 0.08
45 second event happened at: 2021-10-02T11:51:02 UTC. Not sure what sar
time is based on (maybe local time BST rather than UTC, so it would be
2021-10-02T12:51:02 BST).
"sar -d" on the server:
11:50:02 dev8-0 4.67 0.01 46.62 0.00 9.99 0.12 14.03 5.75
11:50:0
…this because we expanded an nfs setup from 240 nodes (ran for months) to 270
or so and after that v3/tcp never worked right and tcpdumps and other info
shows that the server was harvesting the "unused" connections once it had
too many and the client was never handling it.
It could
…files in a dir. If you do ls -ld
you will see how big the dir is; if the dir is really big, under
some conditions that can be slow, but usually not 45 seconds.
On Sat, Oct 2, 2021 at 12:00 PM Terry Barnaby wrote:
> I am getting more sure this is an NFS/networking issue rather than an
> issu
I am getting more sure this is an NFS/networking issue rather than an
issue with disks in the server.
I created a small test program that given a directory finds a random
file in a random directory three levels below, opens it and reads up to
a block (512 Bytes) of data from it and times how
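A rough shell re-creation of the probe described above (the /home root, the
three-level depth, and the 512-byte read come from the description; everything
else is an assumption):

  #!/bin/bash
  # Pick a random file three directory levels below /home, read one
  # 512-byte block, and report how long the open+read took.
  f=$(find /home -mindepth 3 -maxdepth 3 -type f 2>/dev/null | shuf -n 1)
  start=$(date +%s.%N)
  dd if="$f" of=/dev/null bs=512 count=1 status=none
  end=$(date +%s.%N)
  echo "$(date -Is) $(echo "$end - $start" | bc) $f"

Run it from cron or a loop and long NFS stalls show up as outliers in the
elapsed-time column.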
On Fri, 1 Oct 2021 at 16:20, Terry Barnaby wrote:
>
> Thanks for the info, I am using MDraid. There are no "mddevice" messages
> in /var/log/messages and smartctl -a lists no errors on any of the
> disks. The disks are about 3 years old, I change them in servers between
> 3 and 4 years old.
>
Wh
> > So if nonzero note the number, and next pause look again and see if
> > the numbers changed.
I will create a program to measure the effective sar output and detect
any discrepancies, as this problem only occurs now and then, along with
measuring I/O latency on NFS accesses on the clients, to see if I can track
down whether it is a server disk issue or an NFS issue. T
It will show latency. await is the average I/O time in ms, and %util is
calculated based on await and iops/sec. So long as you turn sar down to
1 minute samples it should tell you which of the 2 disks had higher
await/%util. With a 10 minute sample the 40 sec pause may get spread
out across enough iops
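For an ad hoc run at that resolution, sar takes an interval and a count
directly:

  sar -d 60 10    # ten one-minute samples of per-disk activity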
…the drive self-tests.
Is the pause long enough for you to figure out what is hanging? On either
side? (I haven't used NFS for a couple of decades so I'm pretty rusty on
the tooling.)
timeout, or it hits the disk firmware timeout.
The sar data will show if one of the disks is being slow on the server end.
On the client end you are unlikely to get anything useful from any
samples as it seems pretty likely the server is not responding to nfs
and/or the disks are not responding.
It could be as simple as on login it tries to
On Thu, 30 Sep 2021 17:50:01 +0100
Terry Barnaby wrote:
> Yes, problems often occur due to you having done something, but I am
> pretty sure nothing has changed apart from Fedora updates.
But hardware is sneaky. It waits for you to install software updates,
then breaks itself to make you think th
…Ethernet network is fine including cables,
switches, Ethernet adapters etc. Pings are fine etc. It just appears
that the client programs get a huge (> 20 secs) delayed response to
accesses to /home every now and then, which points to NFS issues. Most
of the system stats counters just give the amount of access, not the
latency of an access, which…
On mine when I first access the NFS volume it takes 5-10 seconds for the
disks to spin up. Mine will spin down later in the day if little or
nothing is going on and I will get another delay.
I have also seen delays if a disk gets bad blocks and corrects them. About
1/2 of time that does have a
$ ping -M do -s 8972 ip.address
This example should be the highest value to work in the case of MTU size
9000; there is 28 bytes of overhead for IPv4 packets.
Second, are you sure no one is attaching to the network and duplicating the
MAC address of your NFS server or perhaps the system that is stalled? If
the switches are manageable you would
On Sun, 26 Sep 2021 10:26:19 -0300
George N. White III wrote:
> If you have cron jobs that use a lot of network bandwidth it may work
> fine until some network issue causing lots of retransmits bogs it down.
Which is why you should check the dumb stuff first! Has a critter
chewed on the ethernet
On Sat, 2021-09-25 at 06:04 +0100, Terry Barnaby wrote:
> in the last month or so all of the client computers are getting KDE
> GUI lockups every few hours that last for around 40 secs.
Might one of them have a cron job that's scouring the network?
e.g. locate databasing
--
uname -rsvp
Linux
On 25/09/2021 14:07, Terry Barnaby wrote:
A few questions.
1. Are you saying your NFS server HW is the same for the past 25 years?
Couldn't have been all Fedora, right?
No ( :) ), was using previous Linux and Unix systems before then. Certainly OS
versions and hardware have changed
Hi,
I use NFS mount (defaults so V4) /home directories with a simple server
over Gigabit Ethernet all running Fedora33. This has been working fine
for 25+ years through various Fedora versions. However in the last month
or so all of the client computers are getting KDE GUI lockups every few
Roger Heflin wrote:
> Fedora 33 shows this on recent kernels:
> # CONFIG_NFS_V2 is not set
> So disabled in the kernel seems likely for 34 also.
>
> You would have to rebuild a kernel with that set to =m and boot that
> for v2 to work.
Thanks. You are absolutely correct.
My first thought was that the
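On Fedora you can confirm this from the installed kernel's config without
rebuilding anything:

  grep CONFIG_NFS_V2 /boot/config-$(uname -r)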