On 1/12/23 23:09, Stephen Morris wrote:
On 1/12/23 22:54, Stephen Morris wrote:
On 30/11/23 22:42, Stephen Morris wrote:
I have a network drive being mounted from the following entry in
/etc/fstab:
192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs nfs
users,nconnect=2,owner,rw,_netdev 0 0
On 1/12/23 22:54, Stephen Morris wrote:
On 30/11/23 22:42, Stephen Morris wrote:
I have a network drive being mounted from the following entry in
/etc/fstab:
192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs nfs
users,nconnect=2,owner,rw,_netdev 0 0
This results in the following definition in /etc/m
On 30/11/23 22:42, Stephen Morris wrote:
I have a network drive being mounted from the following entry in
/etc/fstab:
192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs nfs
users,nconnect=2,owner,rw,_netdev 0 0
This results in the following definition in /etc/m
On Thu, 2023-11-30 at 07:01 -0600, Roger Heflin wrote:
> That being said, I don't know that the users and/or owner options *WORK*
> for network disks. Those options likely also do not work on any disk
> that has actual ownership info stored on it. They are usually used
> with dos/fat/fat32-type filesystems.
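A minimal sketch of that fstab line with the "owner" option dropped (it only applies to local device nodes), keeping "users" so a non-root user can still mount; paths as in the original post:

192.168.1.12:/mnt/HD/HD_a2  /mnt/nfs  nfs  users,nconnect=2,rw,_netdev  0  0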
On 11/30/23 03:42, Stephen Morris wrote:
I have a network drive being mounted from the following entry in /etc/fstab:
192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs nfs
users,nconnect=2,owner,rw,_netdev 0 0
This results in the following definition in /etc/m
The user used to connect to the network drive has to have write capability.
Log in as that user and try to write in the directory of the mount point.
On Thu, Nov 30, 2023 at 8:02 AM Roger Heflin wrote:
> you specified "nfs" as the mount. And that should mount nfs4 with
> tcp, but mounted with nfs and udp so whatever is on the other end is
> old and/or has tcp/nfsv4 disabled.
you specified "nfs" as the mount. And that should mount nfs4 with
tcp, but mounted with nfs and udp so whatever is on the other end is
old and/or has tcp/nfsv4 disabled.
That being said, I don't know that the users and/or owner options *WORK*
for network disks. Those options likely also do not work on any disk
that has actual ownership info stored on it.
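A quick way to see what the server actually supports is to force the version and protocol by hand (paths as in the original post):

mount -t nfs -o vers=4.2,proto=tcp 192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs   # fails if the server has no NFSv4
mount -t nfs -o vers=3,proto=tcp 192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs     # v3 over TCP instead
grep /mnt/nfs /proc/mounts   # shows the options that were actually negotiated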
On 3/16/22 22:16, Richard Kimberly Heck wrote:
Fresh install of F35. I have these lines in /etc/fstab:
192.168.1.2://home/rikiheck/files /home/rikiheck/files nfs
auto,nouser,rw,dev,nosuid,exec,_netdev 0 0
192.168.1.2://multi/ /mnt/mail/multi nfs
auto,user,noauto,rw,de
On Thu, 17 Feb 2022 10:19:34 +0000 Alex Gurenko via users wrote:
> Happens to the best of us :) I assume that worked for you?
Yes of course. Thanks again.
--Frank
Happens to the best of us :) I assume that worked for you?
---
Best regards, Alex
--- Original Message ---
On Thursday, February 17th, 2022 at 11:07, Frank Elsner
wrote:
> On Thu, 17 Feb 2022 09:46:35 +0000 Alex Gurenko via users wrote:
>
> > I would think that adding `sudo` to your command would fix your problem.
On Thu, 17 Feb 2022 09:46:35 +0000 Alex Gurenko via users wrote:
> I would think that adding `sudo` to your command would fix your problem.
Oh shit, I'm getting old.
--Frank
I would think that adding `sudo` to your command would fix your problem.
---
Best regards, Alex
--- Original Message ---
On Thursday, February 17th, 2022 at 10:37, Frank Elsner via users
wrote:
> On Thu, 17 Feb 2022 10:28:11 +0100 Frank Elsner via users wrote:
>
> > Hello,
> >
> > on my Fedora 36 system I've the following (strange) mount error:
On Thu, 17 Feb 2022 10:28:11 +0100 Frank Elsner via users wrote:
> Hello,
>
> on my Fedora 36 system I've the following (strange) mount error:
^^
Typo, Fedora 35 of course.
--Frank
On 1/31/22 02:11, Tim via users wrote:
And at some stage people are going to stop making devices look for DHCP
and fallback on Avahi, they'll decide to simplify things and just
follow the latest fad. You'll end up with a gadget that only does
Avahi.
You have this quite confused. DHCP and mdns
On Tue, 2022-02-01 at 22:38 +, Barry wrote:
> I thought that mDNS that Avahi implements only uses multicast on the
> LAN. You could set up multicast across multiple LAN segments.
>
> How does that end up getting answers from the internet?
> Especially when all ISPs block multicast it seems.
> On 1 Feb 2022, at 18:59, Tim via users wrote:
>
> If it doesn't already know the IP, then your computer can end up trying
> to query public servers outside your LAN for the answers.
I thought that mDNS that Avahi implements only uses multicast on the LAN.
You could set up multicast across mu
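For reference, the LAN-only behaviour is usually enforced on the resolver side: with nss-mdns installed, a typical hosts line in /etc/nsswitch.conf answers ".local" names by multicast only, and the [NOTFOUND=return] action stops them from leaking to unicast DNS:

hosts: files mdns4_minimal [NOTFOUND=return] dns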
On Mon, 2022-01-31 at 21:52 +1030, Tim via users wrote:
> ".arpa" is owned, and they're able to set rules about its usage (so
> home.arpa was possible). Trying to set up a new top level domain,
> such as .home, would require getting a plethora of organisations to
> agree to something new, and requ
On 1/31/22 06:27, Ed Greshko wrote:
I needed no such change to my F35's hosts file for it to function
properly. Probably will never find out why
yours did.
Probably because the server configuration was modified. Robert reported:
> On the server:
> [plugh-3g ~]# cat /sys/module/nfsd/paramete
Ed Greshko wrote:
> On 31/01/2022 14:39, Tim via users wrote:
>> Not long ago, 16 Nov 2021, I had one of their email press releases
>> stating that the latest version of 8 had just been released and that
>> its EOL would be 31 Dec 2021. I had to check that wasn't a typo.
>
> I do need to see wha
On 31/01/2022 22:19, Robert Nichols wrote:
On 1/30/22 11:24 PM, Ed Greshko wrote:
On 31/01/2022 00:13, Robert Nichols wrote:
FINALLY!!
I can get it all to work by putting "fedora.local" in /etc/hostname _and_ editing
/etc/hosts to have "fedora.local" as the _first_ name for 127.0.0.1 .
I ins
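A sketch of the layout described above, with the FQDN as the first name on the loopback entry (hostnames as in the thread):

# /etc/hostname
fedora.local

# /etc/hosts
127.0.0.1   fedora.local fedora localhost localhost.localdomain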
On 1/30/22 11:24 PM, Ed Greshko wrote:
On 31/01/2022 00:13, Robert Nichols wrote:
FINALLY!!
I can get it all to work by putting "fedora.local" in /etc/hostname _and_ editing
/etc/hosts to have "fedora.local" as the _first_ name for 127.0.0.1 .
I installed a Centos7 system and during the insta
On Mon, 2022-01-31 at 20:41 +1030, Tim via users wrote:
> Linux had an interesting quirk of using ".localdomain" as its LAN
> domain (at least on the few distros I've played with). Microsoft may
> have used .mshome or .home (as my router uses, actually it also uses
> .router, not that it tells you
On Mon, 2022-01-31 at 16:59 +0800, Ed Greshko wrote:
> I don't know much about Avahi/Bonjour/mDNS/ZeroConf. I think it is/was
> a way to shoehorn Linux into some Windows environments. I hardly had
> to deal with that.
>
> I also didn't deal much with SMB as only ever had a couple of Windows
> syst
On 31/01/2022 14:39, Tim via users wrote:
On Mon, 2022-01-31 at 13:24 +0800, Ed Greshko wrote:
I installed a Centos7 system and during the install process called it
fedora.local. By default this was placed in /etc/hostname.
Wasn't ".local" and Avahi/Bonjour/mDNS/ZeroConf non-traditional
DHCP an
On Mon, 2022-01-31 at 13:24 +0800, Ed Greshko wrote:
> I installed a Centos7 system and during the install process called it
> fedora.local. By default this was placed in /etc/hostname.
Wasn't ".local" and Avahi/Bonjour/mDNS/ZeroConf non-traditional
DHCP and DNS thing? Does it still require diffe
On 31/01/2022 00:13, Robert Nichols wrote:
FINALLY!!
I can get it all to work by putting "fedora.local" in /etc/hostname _and_ editing
/etc/hosts to have "fedora.local" as the _first_ name for 127.0.0.1 .
I installed a Centos7 system and during the install process called it
fedora.local. By
On 1/30/22 1:34 AM, Ed Greshko wrote:
On 30/01/2022 12:36, Robert Nichols wrote:
On 1/29/22 8:25 PM, Gordon Messmer wrote:
On 1/29/22 17:20, Ed Greshko wrote:
In the initial posting by Robert he wrote:
"I have no nfs-idmapd service running"
Right, but on recent kernels, the client doesn't
On 1/29/22 20:36, Robert Nichols wrote:
If I could find any way to set the client's domain name, I would.
Nothing I try has any effect on the domain name.
When I try to set a FQDN with hostnamectl, then "hostnamectl" (with no
arguments) shows that FQDN as the static hostname, but "hostname
--fq
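"hostname --fqdn" resolves the hostname through NSS rather than reading it from systemd, so it only succeeds when /etc/hosts (or DNS) can map the name; a hedged sketch using the hostname from this thread (the thread's "hostnamectl hostname NAME" form is equivalent on newer systemd):

hostnamectl set-hostname fedora.local
echo '127.0.1.1 fedora.local fedora' >> /etc/hosts   # hypothetical entry, if none exists
hostname --fqdn   # should now print fedora.local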
On 30/01/2022 12:36, Robert Nichols wrote:
On 1/29/22 8:25 PM, Gordon Messmer wrote:
On 1/29/22 17:20, Ed Greshko wrote:
In the initial posting by Robert he wrote:
"I have no nfs-idmapd service running"
Right, but on recent kernels, the client doesn't use rpc.idmapd, it uses
"nfsidmap". T
On 1/29/22 8:25 PM, Gordon Messmer wrote:
On 1/29/22 17:20, Ed Greshko wrote:
In the initial posting by Robert he wrote:
"I have no nfs-idmapd service running"
Right, but on recent kernels, the client doesn't use rpc.idmapd, it uses
"nfsidmap". The fact that rpc.idmapd isn't running doesn'
On 1/29/22 17:20, Ed Greshko wrote:
In the initial posting by Robert he wrote:
"I have no nfs-idmapd service running"
Right, but on recent kernels, the client doesn't use rpc.idmapd, it uses
"nfsidmap". The fact that rpc.idmapd isn't running doesn't really tell
us anything.
"all users a
On 30/01/2022 07:44, Gordon Messmer wrote:
On 1/28/22 23:32, Ed Greshko wrote:
But I do have nfs-idmapd.service with "Domain = localdomain" in its
configuration file.
If I change that to "Domain = local" and restart nfs-idmapd.service I do get
[root@fedora ~]# nfsidmap -d
local
But everything
On 1/28/22 23:32, Ed Greshko wrote:
But I do have nfs-idmapd.service with "Domain = localdomain" in its
configuration file.
If I change that to "Domain = local" and restart nfs-idmapd.service I
do get
[root@fedora ~]# nfsidmap -d
local
But everything works no matter what the setting
https
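Restating the knobs involved, as a sketch: the client's default idmap domain comes from /etc/idmapd.conf, and the kernel caches mappings that have to be flushed after a change:

# /etc/idmapd.conf
[General]
Domain = local

nfsidmap -c   # clear the kernel's cached id mappings
nfsidmap -d   # show the default domain now in effect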
On 29/01/2022 13:44, Gordon Messmer wrote:
On 1/28/22 06:08, Robert Nichols wrote:
Where does Fedora get its domain name? When I type "hostname --fqdn" I get "hostname: Name or
service not known". The CentOS 8 VM apparently gets its domain name from the /etc/hostname file, which
contains "cent
On 29/01/2022 11:44, Robert Nichols wrote:
No change:
[fedora ~]# hostnamectl hostname fedora.local
[fedora ~]# hostnamectl
Static hostname: fedora.local
Icon name: computer-vm
Chassis: vm
Machine ID: 6e701e7fa0dc4996984b6509b40eb940
Boot ID: 256c0a3fa9dd4b88bd611
On 1/28/22 06:08, Robert Nichols wrote:
Where does Fedora get its domain name? When I type "hostname --fqdn"
I get "hostname: Name or service not known". The CentOS 8 VM
apparently gets its domain name from the /etc/hostname file, which
contains "cent9-vm.local". This does not appear to work in
On 1/27/22 9:13 AM, francis.montag...@inria.fr wrote:
Hi.
On Thu, 27 Jan 2022 08:10:53 -0600 Robert Nichols wrote:
On 1/26/22 7:15 PM, Gordon Messmer wrote:
What does the entry for that filesystem in /proc/mounts look like? It should
have negotiated mount options that shed some light.
On 1/28/22 8:53 AM, Ed Greshko wrote:
On 28/01/2022 22:08, Robert Nichols wrote:
On 1/28/22 1:03 AM, Ed Greshko wrote:
On 26/01/2022 00:35, Robert Nichols wrote:
In a Fedora 35 VM, all users and groups in an NFS mounted filesystem are mapped to
"nobody" even though the names and numeric IDs a
On 28/01/2022 22:53, Ed Greshko wrote:
Example: For the host that I mentioned I used "hostnamectl f35ser.greshko.com"
Correction.
hostnamectl hostname f35ser.greshko.com
Too late in my day.
--
Did 황준호 die?
On 28/01/2022 22:08, Robert Nichols wrote:
On 1/28/22 1:03 AM, Ed Greshko wrote:
On 26/01/2022 00:35, Robert Nichols wrote:
In a Fedora 35 VM, all users and groups in an NFS mounted filesystem are mapped to
"nobody" even though the names and numeric IDs are the same on the server and
client.
On 1/28/22 1:03 AM, Ed Greshko wrote:
On 26/01/2022 00:35, Robert Nichols wrote:
In a Fedora 35 VM, all users and groups in an NFS mounted filesystem are mapped to
"nobody" even though the names and numeric IDs are the same on the server and
client. The messages logged are of the form:
"
On 26/01/2022 00:35, Robert Nichols wrote:
In a Fedora 35 VM, all users and groups in an NFS mounted filesystem are mapped to
"nobody" even though the names and numeric IDs are the same on the server and
client. The messages logged are of the form:
"name '@local' does not map into dom
Hi.
On Thu, 27 Jan 2022 08:10:53 -0600 Robert Nichols wrote:
> On 1/26/22 7:15 PM, Gordon Messmer wrote:
>> What does the entry for that filesystem in /proc/mounts look like? It
>> should have negotiated mount options that shed some light.
>> Maybe add the "sec=sys" mount option to the clien
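Following that suggestion, a hedged test mount with the security flavor pinned (server name and paths hypothetical):

mount -t nfs4 -o sec=sys server:/export /mnt/test
grep /mnt/test /proc/mounts   # check the sec= option that was negotiated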
On 1/26/22 7:15 PM, Gordon Messmer wrote:
On 1/25/22 08:35, Robert Nichols wrote:
In a Fedora 35 VM, all users and groups in an NFS mounted filesystem are mapped to
"nobody" even though the names and numeric IDs are the same on the server and
client. The messages logged are of the form:
On 1/25/22 08:35, Robert Nichols wrote:
In a Fedora 35 VM, all users and groups in an NFS mounted filesystem
are mapped to "nobody" even though the names and numeric IDs are the
same on the server and client. The messages logged are of the form:
"name '@local' does not map into domain
On Tue, 25 Jan 2022 10:35:27 -0600
Robert Nichols wrote:
> In a Fedora 35 VM, all users and groups in an NFS mounted filesystem
> are mapped to "nobody" even though the names and numeric IDs are the
> same on the server and client. The messages logged are of the form:
>
> "name '@local
Since some Fedora33 update in the last couple of weeks the problem has
gone away. I haven't changed anything as far as I am aware.
One change is that the kernel moved from 5.13.x to 5.14.x ...
Terry
On 21/10/2021 23:36, Reon Beon via users wrote:
https://release-monitoring.org/project/2081/
We
https://release-monitoring.org/project/2081/
Well it is a pre-release version. 2.5.5.rc3
Hi Roger,
Thanks for looking.
I will try NFS v3 with my latency tests running. I did try NFS v3 before
and I "think" there were still desktop lockups but for a much shorter
time. But this is just a feeling.
Current kernel on both systems is: 5.13.19-100.fc33.x86_64.
If I find the time, I will
That network looks fine to me.
I would try v3. I have had bad luck many times with v4 on a variety
of different kernels. If the code is recovering from something
related to a bug, 45 seconds might be right to decide something that
was working is no longer working.
I am not sure any amount of debu
sar -n EDEV reports all 0's all around then. There are some rxdrop/s of 0.02 occasionally on eno1 through the day (about 20 of these
with minute based sampling). Today ifconfig lists 39 dropped RX packets
out of 2357593. Not sure why there are some dropped packets. "ethtool -S
eno1" doesn't seem
Since it is recovering from it, maybe it is losing packets inside the
network, what does "sar -n DEV" and "sar -n EDEV" look like during
that time on both client seeing the pause and the server.
EDEV is typically all zeros unless something is lost. if something is
being lost and it matches the ti
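A sketch of pulling those counters for the window around an event, run on both client and server (times hypothetical, matching the ~12:51 local-time event mentioned elsewhere in the thread):

sar -n DEV -s 12:45:00 -e 13:00:00    # interface throughput around the event
sar -n EDEV -s 12:45:00 -e 13:00:00   # error/drop counters for the same window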
and iostats:
04/10/21 10:51:14
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           2.09   0.00     1.56     0.02    0.00  96.33
Device  r/s  rkB/s  rrqm/s  %rrqm  r_await  rareq-sz  w/s  wkB/s  wrqm/s  %wrqm  w_await  wareq-sz  d/s  dkB/s  drqm/s  %drqm  d_await  dareq-sz  aqu-sz  %util
My disklatencytest showed a longish (14 secs) NFS file system
directoty/stat lookup again today on a desktop:
2021-10-04T05:26:19 0.069486 0.069486 0.000570 /home/...
2021-10-04T05:28:19 0.269743 0.538000 0.001019 /home/...
2021-10-04T09:48:00 1.492158 0.003314
On 04/10/2021 00:51, Roger Heflin wrote:
With 10 minute samples anything that happened gets averaged enough
that even the worst event is almost impossible to see.
Sar will report the same as date ie local time. And a 12:51 event
would be in the 13:00 sample (started at about 12:50 and ended a
With 10 minute samples anything that happened gets averaged enough that
even the worst event is almost impossible to see.
Sar will report the same as date ie local time. And a 12:51 event would be
in the 13:00 sample (started at about 12:50 and ended at 1300).
What I do see is that during that w
45 second event happened at: 2021-10-02T11:51:02 UTC. Not sure what sar
time is based on (maybe local time BST rather than UTC, so it would be
2021-10-02T12:51:02 BST).
Continuing info ...
sar -n NFSD on the server
11:00:01 24.16 0.00 24.16 0.00 24.16 0.00 0.00
45 second event happened at: 2021-10-02T11:51:02 UTC. Not sure what sar
time is based on (maybe local time BST rather than UTC, so it would be
2021-10-02T12:51:02 BST).
"sar -d" on the server:
11:50:02 dev8-0 4.67 0.01 46.62 0.00 9.99 0.12 14.03 5.75
11:50:0
You might retest with nfsv3, the code handling v3 should be significantly
different since v3 is stateless and does not maintain long-term connections.
And if the long-term connection had some sort of issue then 45 seconds may
be how long it takes to figure that out and re-initiate the connection.
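A hedged sketch of pinning v3 in fstab for the test period (server and export hypothetical):

server:/home  /home  nfs  vers=3,proto=tcp,_netdev  0  0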
What did the sar -d look like for the 2 minutes before and 2 minutes
afterward?
If it is slow or not may depend on if the directory/file fell out of cache
and had to be reread from the disk.
I have also seen really large dirs take a really long time to find, but
typically that takes thousands of
I am getting more sure this is an NFS/networking issue rather than an
issue with disks in the server.
I created a small test program that given a directory finds a random
file in a random directory three levels below, opens it and reads up to
a block (512 Bytes) of data from it and times how l
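A rough shell re-creation of such a probe (all paths hypothetical; the original program may differ):

#!/bin/bash
# pick a random file three directory levels below $1, read one 512-byte
# block from it, and report how long the open+read took
base=${1:-/home}
f=$(find "$base" -mindepth 3 -maxdepth 3 -type f 2>/dev/null | shuf -n 1)
t0=$(date +%s.%N)
dd if="$f" of=/dev/null bs=512 count=1 2>/dev/null
t1=$(date +%s.%N)
printf '%s %.6f %s\n' "$(date -Is)" "$(echo "$t1 - $t0" | bc)" "$f"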
On Fri, 1 Oct 2021 at 16:20, Terry Barnaby wrote:
>
> Thanks for the info, I am using MDraid. There are no "mddevice" messages
> in /var/log/messages and smartctl -a lists no errors on any of the
> disks. The disks are about 3 years old, I change them in servers between
> 3 and 4 years old.
>
Wh
You need to replace mddevice with the name of your mddevice.
probably md0.
3-5 years is about when they start to go. I have 2-3TB wd-reds
sitting on the floor because their correctable/offline uncorr kept
happening and blipping my storage (a few second pause). I even
removed the disks from the
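A sketch of the checks being discussed (device names hypothetical):

cat /proc/mdstat                # array state and any rebuild in progress
mdadm --detail /dev/md0         # per-member status for the md device
smartctl -a /dev/sda | grep -iE 'realloc|pending|uncorrect'   # disk error counters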
On 01/10/2021 19:05, Roger Heflin wrote:
it will show latency. await is average iotime in ms, and %util is
calced based on await and iops/sec. So long as you turn sar down to
1 minute samples it should tell you which of the 2 disks had higher
await/util%. With a 10 minute sample the 40sec p
it will show latency. await is average iotime in ms, and %util is
calced based on await and iops/sec. So long as you turn sar down to
1 minute samples it should tell you which of the 2 disks had higher
await/util%. With a 10 minute sample the 40sec pause may get spread
out across enough iops
On 01/10/2021 13:31, D. Hugh Redelmeier wrote:
Trivial thoughts from reading this thread. Please don't take the
triviality as an insult.
Perhaps the best way to determine if the problem is from a software update
is to downgrade likely packages. In the case of the kernel, you can just
boot an o
On 30/09/2021 19:27, Roger Heflin wrote:
Raid0, so there is no redundancy on the data?
And what kind of underlying hard disks? The desktop drives will try
for a long time (ie a minute or more) to read any bad blocks. Those
disks will not report an error unless it gets to the default os
timeou
Trivial thoughts from reading this thread. Please don't take the
triviality as an insult.
Perhaps the best way to determine if the problem is from a software update
is to downgrade likely packages. In the case of the kernel, you can just
boot an older one (assuming that an old enough one is s
Raid0, so there is no redundancy on the data?
And what kind of underlying hard disks? The desktop drives will try
for a long time (ie a minute or more) to read any bad blocks. Those
disks will not report an error unless it gets to the default os
timeout, or it hits the disk firmware timeout.
T
On Thu, 30 Sep 2021 17:50:01 +0100
Terry Barnaby wrote:
> Yes, problems often occur due to you having done something, but I am
> pretty sure nothing has changed apart from Fedora updates.
But hardware is sneaky. It waits for you to install software updates,
then breaks itself to make you think the update is to blame.
On 30/09/2021 11:42, Roger Heflin wrote:
On mine when I first access the NFS volume it takes 5-10 seconds for
the disks to spin up. Mine will spin down later in the day if little
or nothing is going on and I will get another delay.
I have also seen delays if a disk gets bad blocks and correct
On 30/09/2021 11:32, Ed Greshko wrote:
On 30/09/2021 16:35, Terry Barnaby wrote:
This is a very lightly loaded system with just 3 users ATM and very
little going on across the network (just editing code files etc). The
problem occurred again yesterday. For about 10 minutes my KDE desktop
locke
On mine when I first access the NFS volume it takes 5-10 seconds for the
disks to spin up. Mine will spin down later in the day if little or
nothing is going on and I will get another delay.
I have also seen delays if a disk gets bad blocks and corrects them. About
1/2 of time that does have a m
On 30/09/2021 16:35, Terry Barnaby wrote:
This is a very lightly loaded system with just 3 users ATM and very little going on
across the network (just editing code files etc). The problem occurred again yesterday.
For about 10 minutes my KDE desktop locked up in 20 second bursts and then the pr
Thanks for the feedback everyone.
This is a very lightly loaded system with just 3 users ATM and very
little going on across the network (just editing code files etc). The
problem occurred again yesterday. For about 10 minutes my KDE desktop
locked up in 20 second bursts and then the problem w
Make sure you have sar/sysstat enabled and changed to do 1 minute samples.
sar -d will show disk perf. If one of the disks "blips" at the
firmware level (working on a hard to read block maybe), the util% on
that device will be significantly higher than all other disks so will
stand out. Then you
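On Fedora the collection interval lives in the sysstat-collect.timer unit; a hedged sketch of dropping it from 10 minutes to 1:

systemctl edit sysstat-collect.timer
# add in the override:
#   [Timer]
#   OnCalendar=
#   OnCalendar=*:00/1
systemctl restart sysstat-collect.timer
sar -d -s 12:45:00 -e 13:00:00   # later, read back disk stats around an event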
Are there network switches under your control? It sounds similar to what
happens when the MTU settings on the systems do not match, or one system's
MTU is set above the value on the switch ports.
Next time the issue occurs use ping with the do-not-fragment flag,
ex $ ping -M do -s 8972 ip.address
This example tests a 9000-byte MTU path: 8972 bytes of payload plus 28
bytes of ICMP/IP headers.
On Sun, 26 Sep 2021 10:26:19 -0300
George N. White III wrote:
> If you have cron jobs that use a lot of network bandwidth it may work
> fine until some network issue causing lots of retransmits bogs it down.
Which is why you should check the dumb stuff first! Has a critter
chewed on the ethernet
On Sun, 26 Sept 2021 at 01:44, Tim via users
wrote:
> On Sat, 2021-09-25 at 06:04 +0100, Terry Barnaby wrote:
> > in the last month or so all of the client computers are getting KDE
> > GUI lockups every few hours that last for around 40 secs.
>
> Might one of them have a cron job that's scouring
On Sat, 2021-09-25 at 06:04 +0100, Terry Barnaby wrote:
> in the last month or so all of the client computers are getting KDE
> GUI lockups every few hours that last for around 40 secs.
Might one of them have a cron job that's scouring the network?
e.g. locate databasing
--
uname -rsvp
Linux
On Sat, 25 Sept 2021 at 02:04, Terry Barnaby wrote:
> Hi,
>
> I use NFS mount (defaults so V4) /home directories with a simple server
> over Gigabit Ethernet all running Fedora33. This has been working fine
> for 25+ years through various Fedora versions. However in the last month
> or so all of
On 25/09/2021 09:00, Ed Greshko wrote:
On 25/09/2021 14:07, Terry Barnaby wrote:
A few questions.
1. Are you saying your NFS server HW is the same for the past 25
years? Couldn't have been all Fedora, right?
No ( :) ) was using previous Linux and Unix systems before then.
Certainly OS v
On 25/09/2021 14:07, Terry Barnaby wrote:
A few questions.
1. Are you saying your NFS server HW is the same for the past 25 years?
Couldn't have been all Fedora, right?
No ( :) ) was using previous Linux and Unix systems before then. Certainly OS
versions and hardware have changed over th
On 25/09/2021 06:42, Ed Greshko wrote:
On 25/09/2021 13:04, Terry Barnaby wrote:
Hi,
I use NFS mount (defaults so V4) /home directories with a simple
server over Gigabit Ethernet all running Fedora33. This has been
working fine for 25+ years through various Fedora versions. However
in the la
On 25/09/2021 13:04, Terry Barnaby wrote:
Hi,
I use NFS mount (defaults so V4) /home directories with a simple server over Gigabit
Ethernet all running Fedora33. This has been working fine for 25+ years through various
Fedora versions. However in the last month or so all of the client computer
Roger Heflin wrote:
>Fedora 33 shows this on recent kernels:
># CONFIG_NFS_V2 is not set
>So disabled in the kernel seems likely for 34 also.
>
>You would have to rebuild a kernel with that set to =m and boot that
>for v2 to work.
Thanks. You are absolutely correct.
My first thought was that the
grep -i nfs /boot/config-$(uname -r) should tell you what is
configured in the kernel.
Fedora 33 shows this on recent kernels:
grep -i nfs /boot/config-5.11.22-100.fc32.x86_64 | grep -i v2
# CONFIG_NFS_V2 is not set
CONFIG_NFSD_V2_ACL=y
So disabled in the kernel seems likely for 34 also.
You would hav
Historical background below.
I have now confirmed that the NAS device NFS server is working properly.
I am able to mount it from a Raspberry Pi running kernel 5.4.72-v7+ #1356
and nfs-common/oldstable,now 1:1.3.4-2.5+deb10u1. I still cannot mount
it from Fedora 34 running nfs-utils-2.5.4-0. The er
On 31Aug2021 18:43, Stephen Morris wrote:
>I have an old nas that is nfs version 1, and in order to mount it in
>Fedora 34 I have to specify vers=3. Vers=1 or vers=2 wouldn't work for
>me, the same may apply to your device.
We've a PVR here and need to use "-o nolock" to mount our media server
On 30/8/21 13:45, Dave Close wrote:
Tom Horsley wrote:
The NAS only works with NFS v2.
If it is that old, you may need "proto=udp" as well as "vers=2"
in the mount options.
I've tried it both ways. The -v output shows which protocol gets used.
Joe Zeff wrote:
I've even tried with my firew
Tom Horsley wrote:
>> The NAS only works with NFS v2.
>
>If it is that old, you may need "proto=udp" as well as "vers=2"
>in the mount options.
I've tried it both ways. The -v output shows which protocol gets used.
Joe Zeff wrote:
>> I've even tried with my firewall disabled but no luck.
>
>Two
On 8/29/21 7:07 PM, Dave Close wrote:
I've even tried with my firewall disabled but no luck.
Two questions: first, why did you suspect the firewall and second, have
you re-enabled it?
On Sun, 29 Aug 2021 18:07:06 -0700
Dave Close wrote:
> The NAS only works with NFS v2.
If it is that old, you may need "proto=udp" as well as "vers=2"
in the mount options.
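Combining the suggestions in this thread into one hedged sketch for a very old NAS (server name and export hypothetical; note that recent Fedora kernels ship with CONFIG_NFS_V2 disabled, as noted above):

mount -t nfs -o vers=2,proto=udp,nolock nas:/share /mnt/nas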
On Sat, Nov 28, 2020 at 4:20 AM Richard Kimberly Heck
wrote:
> On 11/26/20 3:26 PM, Tom H wrote:
>>
>> You have to be careful when using "fsid=0".
>>
>> 1) If you don't set it for any of the shares:
>>
>> a) "/" is the "fsid=0" filesystem by default (this wasn't the
>> case in early nfsv4 implemen
On 11/26/20 3:26 PM, Tom H wrote:
On Wed, Nov 25, 2020 at 11:41 PM Richard Kimberly Heck
wrote:
This problem seems to have been solved. I believe that the issue
was a misconfiguration of the NFS server. I had:
/home/rikiheck/files 192.168.1.0/24(rw,sync,no_subtree_check,fsid=0)
/home/nancy/fil
On Wed, Nov 25, 2020 at 11:41 PM Richard Kimberly Heck
wrote:
>
> This problem seems to have been solved. I believe that the issue
> was a misconfiguration of the NFS server. I had:
>
> /home/rikiheck/files 192.168.1.0/24(rw,sync,no_subtree_check,fsid=0)
> /home/nancy/files 192.168.1.0/24(rw,sy
On Nov 25, 2020, at 17:41, Richard Kimberly Heck wrote:
>
> But /home/rikiheck/files was an ordinary directory that I want to export, not
> the root for NFSv4. It was being mounted as NFSv3 (trying to mount with nfs4
> would fail). But I'm guessing that it was being treated inconsistently
> be
On 26/11/2020 06:40, Richard Kimberly Heck wrote:
This problem seems to have been solved. I believe that the issue was a
misconfiguration of the NFS server. I had:
/home/rikiheck/files 192.168.1.0/24(rw,sync,no_subtree_check,fsid=0)
/home/nancy/files 192.168.1.0/24(rw,sync,no_subtree_check)
/h
This problem seems to have been solved. I believe that the issue was a
misconfiguration of the NFS server. I had:
/home/rikiheck/files 192.168.1.0/24(rw,sync,no_subtree_check,fsid=0)
/home/nancy/files 192.168.1.0/24(rw,sync,no_subtree_check)
/home/photos 192.168.1.0/24(rw,sync,no_subtree_check
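A sketch of the corrected /etc/exports implied by that fix, with fsid=0 removed from the ordinary shares (paths from the original post):

/home/rikiheck/files 192.168.1.0/24(rw,sync,no_subtree_check)
/home/nancy/files    192.168.1.0/24(rw,sync,no_subtree_check)
/home/photos         192.168.1.0/24(rw,sync,no_subtree_check)

exportfs -ra   # re-export after editing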
On 11/25/20 4:28 PM, Richard Kimberly Heck wrote:
On 11/25/20 12:56 PM, Ed Greshko wrote:
On 25/11/2020 22:51, Richard Kimberly Heck wrote:
I can mount fine. The problem is that various programs (LyX,
LibreOffice) freeze when attempting to open files on those
filesystems. In LyX, I was able