On 1/12/23 23:09, Stephen Morris wrote:
On 1/12/23 22:54, Stephen Morris wrote:
On 30/11/23 22:42, Stephen Morris wrote:
I have a network drive being mounted from the following entry in
/etc/fstab:
192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs nfs
users,nconnect=2,owner,rw,_netdev 0 0
This results in the following definition in /etc/mtab:
On Thu, 2023-11-30 at 07:01 -0600, Roger Heflin wrote:
> That being said, I don't know that the users and/or owner options *WORK*
> for network disks. Those options likely also do not work on any disk
> that has actual owner info stored on it. They are usually used
> with dos/fat/fat32 type fses.
It worked fine in F38 before I upgraded to F39.
regards,
Steve
NFS mount issues can be confusing. A lot depends on what the server
supports. Verify that the server is exporting the file system R/W.
Try mounting the file system from the command line with the -v option to
see if this gives more information.
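For example, with the export from the fstab entry above (an untested
sketch), something like:
$ sudo mount -v -t nfs 192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs
The -v output shows the NFS version and transport that were actually
negotiated with the server.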
The user used to connect to the network drive has to have write capability.
Log in as that user and try to write in the directory of the mount point.
you specified "nfs" as the mount. And that should mount nfs4 with
tcp, but mounted with nfs and udp so whatever is on the other end is
old and/or has tcp/nfsv4 disabled.
That being said, I don't know that users and/or owner options *WORK*
for network disks. Those options likely do not also work
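One way to confirm what the server can do (a sketch using the standard
mount options from nfs(5) and the paths from the thread): forcing the
newer protocol will fail loudly if the server does not support it:
$ sudo mount -t nfs -o vers=4,proto=tcp 192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs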
I have a network drive being mounted from the following entry in /etc/fstab:
192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs nfs
users,nconnect=2,owner,rw,_netdev 0 0
This results in the following definition in /etc/mtab:
192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs n
On 3/16/22 22:16, Richard Kimberly Heck wrote:
Fresh install of F35. I have these lines in /etc/fstab:
192.168.1.2://home/rikiheck/files /home/rikiheck/files nfs
auto,nouser,rw,dev,nosuid,exec,_netdev 0 0
192.168.1.2://multi/ /mnt/mail/multi nfs
auto,user,noauto,rw,dev,nosuid,noexec,_netdev 0 0
192.168.1.2://home/p
On Thu, 17 Feb 2022 10:19:34 +0000 Alex Gurenko via users wrote:
> Happens to the best of us :) I assume that worked for you?
Yes of course. Thanks again.
--Frank
Happens to the best of us :) I assume that worked for you?
---
Best regards, Alex
On Thu, 17 Feb 2022 09:46:35 +0000 Alex Gurenko via users wrote:
> I would think that adding `sudo` to your command would fix your problem.
Oh shit, I'm getting old.
--Frank
I would think that adding `sudo` to your command would fix your problem.
---
Best regards, Alex
On Thu, 17 Feb 2022 10:28:11 +0100 Frank Elsner via users wrote:
> Hello,
>
> on my Fedora 36 system I've the following (strange) mount error:
^^
Typo, Fedora 35 of course.
--Frank
Hello,
on my Fedora 36 system I've the following (strange) mount error:
$ mount -t nfs christo:/misc/Backups /mnt
mount.nfs: failed to apply fstab options
In /etc/fstab there is no entry relating to /misc/Backups.
What options?
Kind regards, Frank Elsner
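The replies above show the fix was simply running the command as root:
without privileges, mount(8) falls back to looking for a matching
/etc/fstab entry carrying the "user" option, and complains when there
is none. Hence:
$ sudo mount -t nfs christo:/misc/Backups /mnt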
Since some Fedora33 update in the last couple of weeks the problem has
gone away. I haven't changed anything as far as I am aware.
One change is that the kernel moved from 5.13.x to 5.14.x ...
Terry
https://release-monitoring.org/project/2081/
Well it is a pre-release version. 2.5.5.rc3
Hi Roger,
Thanks for looking.
I will try NFS v3 with my latency tests running. I did try NFS v3 before
and I "think" there were still desktop lockups but for a much shorter
time. But this is just a feeling.
Current kernel on both systems is: 5.13.19-100.fc33.x86_64.
If I find the time, I will
That network looks fine to me.
I would try v3. I have had bad luck many times with v4 on a variety
of different kernels. If the code is recovering from something
related to a bug 45 seconds might be right to decide something that
was working is no longer working.
I am not sure any amount of debu
sar -n EDEV reports all 0's all around then. There are some rxdrop/s of
0.02 occasionally on eno1 through the day (about 20 of these
with minute based sampling). Today ifconfig lists 39 dropped RX packets
out of 2357593. Not sure why there are some dropped packets. "ethtool -S
eno1" doesn't seem
Since it is recovering from it, maybe it is losing packets inside the
network. What do "sar -n DEV" and "sar -n EDEV" look like during
that time, on both the client seeing the pause and the server?
EDEV is typically all zeros unless something is lost. If something is
being lost and it matches the ti
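For a specific window you can point sar at the day's data file (the day
number and times here are assumed, for illustration only):
$ sar -n EDEV -f /var/log/sa/sa02 -s 11:45:00 -e 12:00:00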
and iostats:
04/10/21 10:51:14
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           2.09   0.00     1.56     0.02    0.00  96.33
Device  r/s  rkB/s  rrqm/s  %rrqm  r_await  rareq-sz  w/s  wkB/s  wrqm/s  %wrqm  w_await  wareq-sz  d/s  dkB/s  drqm/s  %drqm  d
My disk latency test showed a longish (14 secs) NFS file system
directory/stat lookup again today on a desktop:
2021-10-04T05:26:19 0.069486 0.069486 0.000570 /home/...
2021-10-04T05:28:19 0.269743 0.538000 0.001019 /home/...
2021-10-04T09:48:00 1.492158 0.003314
On 04/10/2021 00:51, Roger Heflin wrote:
With 10 minute samples anything that happened gets averaged enough that
even the worst event is almost impossible to see.
Sar will report the same as date, i.e. local time. And a 12:51 event would
be in the 13:00 sample (started at about 12:50 and ended at 13:00).
What I do see is that during that w
45 second event happened at: 2021-10-02T11:51:02 UTC. Not sure what sar
time is based on (maybe local time BST rather than UTC, so it would be
2021-10-02T12:51:02 BST).
Continuing info ...
sar -n NFSD on the server
11:00:01 24.16 0.00 24.16 0.00 24.16 0.00
0.00
45 second event happened at: 2021-10-02T11:51:02 UTC. Not sure what sar
time is based on (maybe local time BST rather than UTC, so it would be
2021-10-02T12:51:02 BST).
"sar -d" on the server:
11:50:02 dev8-0 4.67 0.01 46.62 0.00 9.99
0.12 14.03 5.75
11:50:0
You might retest with nfsv3, the code handling v3 should be significantly
different since v3 is stateless and does not maintain long-term connections.
And if the long-term connection had some sort of issue then 45 seconds may
be how long it takes to figure that out and re-initiate the connection.
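To pin the test mount to v3, something like this should work (the server
name and paths here are placeholders, not from the thread):
$ sudo mount -t nfs -o vers=3 server:/export/home /mnt/test
or, in fstab, add vers=3 to the existing option list.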
What did the sar -d look like for the 2 minutes before and 2 minutes
afterward?
If it is slow or not may depend on if the directory/file fell out of cache
and had to be reread from the disk.
I have also seen really large dirs take a really long time to find, but
typically that takes thousands of
I am getting more sure this is an NFS/networking issue rather than an
issue with disks in the server.
I created a small test program that, given a directory, finds a random
file in a random directory three levels below, opens it, reads up to
a block (512 bytes) of data from it, and times how long that takes.
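A rough shell equivalent of that probe (an illustrative sketch, not
Terry's actual program):
#!/bin/bash
# Probe NFS latency: read 512 bytes from a random file three levels
# below the tree given as $1 and report how long the read took.
d=$(find "$1" -mindepth 3 -maxdepth 3 -type d 2>/dev/null | shuf -n 1)
f=$(find "$d" -maxdepth 1 -type f 2>/dev/null | shuf -n 1)
[ -n "$f" ] || exit 1
t0=$(date +%s.%N)
dd if="$f" of=/dev/null bs=512 count=1 2>/dev/null
t1=$(date +%s.%N)
echo "$(date -Is) $(echo "$t1 - $t0" | bc) $f"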
On Fri, 1 Oct 2021 at 16:20, Terry Barnaby wrote:
>
> Thanks for the info, I am using MDraid. There are no "mddevice" messages
> in /var/log/messages and smartctl -a lists no errors on any of the
> disks. The disks are about 3 years old; I replace them in servers at
> between 3 and 4 years old.
>
Wh
You need to replace "mddevice" with the name of your md device,
probably md0.
3-5 years is about when they start to go. I have 2-3TB wd-reds
sitting on the floor because their correctable/offline uncorr kept
happening and blipping my storage (a few second pause). I even
removed the disks from the
On 01/10/2021 19:05, Roger Heflin wrote:
it will show latency. await is average iotime in ms, and %util is
calced based on await and iops/sec. So long as you turn sar down to
1 minute samples it should tell you which of the 2 disks had higher
await/util%. With a 10 minute sample the 40sec pause may get spread
out across enough iops
Trivial thoughts from reading this thread. Please don't take the
triviality as an insult.
Perhaps the best way to determine if the problem is from a software update
is to downgrade likely packages. In the case of the kernel, you can just
boot an older one (assuming that an old enough one is s
Raid0, so there is no redundancy on the data?
And what kind of underlying hard disks? The desktop drives will try
for a long time (i.e. a minute or more) to read any bad blocks. Those
disks will not report an error unless it gets to the default os
timeout, or it hits the disk firmware timeout.
On Thu, 30 Sep 2021 17:50:01 +0100
Terry Barnaby wrote:
> Yes, problems often occur due to you having done something, but I am
> pretty sure nothing has changed apart from Fedora updates.
But hardware is sneaky. It waits for you to install software updates,
then breaks itself to make you think the update did it.
On mine when I first access the NFS volume it takes 5-10 seconds for the
disks to spin up. Mine will spin down later in the day if little or
nothing is going on and I will get another delay.
I have also seen delays if a disk gets bad blocks and corrects them. About
1/2 of the time that does have a m
Thanks for the feedback everyone.
This is a very lightly loaded system with just 3 users ATM and very
little going on across the network (just editing code files etc). The
problem occurred again yesterday. For about 10 minutes my KDE desktop
locked up in 20 second bursts and then the problem went away.
Make sure you have sar/sysstat enabled and changed to do 1 minute samples.
sar -d will show disk perf. If one of the disks "blips" at the
firmware level (working on a hard-to-read block, maybe), the util% on
that device will be significantly higher than on all the other disks, so
it will stand out. Then you
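On recent Fedora releases the collection interval is a systemd timer
rather than a cron entry, so 1 minute samples mean overriding the timer
(from memory; verify the unit name on your system):
$ sudo systemctl edit sysstat-collect.timer
[Timer]
OnCalendar=
OnCalendar=*:00/1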
Are there network switches under your control? It sounds similar to what
happens when the MTUs on the systems do not match, or one system's MTU is
set above the value on the switch ports.
Next time the issue occurs use ping with the do-not-fragment flag,
e.g. $ ping -M do -s 8972 ip.address
This example tests a 9000-byte jumbo-frame path (8972 bytes of payload
plus 28 bytes of headers).
On Sun, 26 Sep 2021 10:26:19 -0300
George N. White III wrote:
> If you have cron jobs that use a lot of network bandwidth it may work
> fine until some network issue causing lots of retransmits bogs it down.
Which is why you should check the dumb stuff first! Has a critter
chewed on the ethernet cable?
On Sat, 2021-09-25 at 06:04 +0100, Terry Barnaby wrote:
> in the last month or so all of the client computers are getting KDE
> GUI lockups every few hours that last for around 40 secs.
Might one of them have a cron job that's scouring the network?
e.g. locate databasing
On 25/09/2021 14:07, Terry Barnaby wrote:
A few questions.
1. Are you saying your NFS server HW is the same for the past 25 years?
Couldn't have been all Fedora, right?
No ( :) ), I was using previous Linux and Unix systems before then. Certainly OS
versions and hardware have changed over th
Hi,
I use NFS mount (defaults so V4) /home directories with a simple server
over Gigabit Ethernet all running Fedora33. This has been working fine
for 25+ years through various Fedora versions. However in the last month
or so all of the client computers are getting KDE GUI lockups every few
hours that last for around 40 secs.
Roger Heflin wrote:
>Fedora 33 shows this on recent kernels:
># CONFIG_NFS_V2 is not set
>So disabled in the kernel seems likely for 34 also.
>
>You would have to rebuild a kernel with that set to =m and boot that
>for v2 to work.
Thanks. You are absolutely correct.
My first thought was that the
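A quick way to check this on a running Fedora system (the kernel config
is shipped in /boot):
$ grep CONFIG_NFS_V2 /boot/config-$(uname -r)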
error remains as shown
below, "mount(2): Protocol not supported". Even though I have explicitly
asked for mount v2, it appears that Fedora's mount doesn't work with v2.
Thanks to those who replied earlier. However, your ideas didn't help.
I wrote:
>I'm trying to s
On 31Aug2021 18:43, Stephen Morris wrote:
>I have an old nas that is nfs version 1, and in order to mount it in
>Fedora 34 I have to specify vers=3. Vers=1 or vers=2 wouldn't work for
>me, the same may apply to your device.
We've a PVR here and need to use "-o nolock" to mount our media server
Tom Horsley wrote:
>> The NAS only works with NFS v2.
>
>If it is that old, you may need "proto=udp" as well as "vers=2"
>in the mount options.
I've tried it both ways. The -v output shows which protocol gets used.
Joe Zeff wrote:
>> I've even tried with my firewall disabled but no luck.
>
>Two questions: first, why did you suspect the firewall and second, have
>you re-enabled it?
On 8/29/21 7:07 PM, Dave Close wrote:
I've even tried with my firewall disabled but no luck.
Two questions: first, why did you suspect the firewall and second, have
you re-enabled it?
On Sun, 29 Aug 2021 18:07:06 -0700
Dave Close wrote:
> The NAS only works with NFS v2.
If it is that old, you may need "proto=udp" as well as "vers=2"
in the mount options.
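For a NAS that old the whole incantation would look something like this
(the share name and mount point are placeholders):
$ sudo mount -t nfs -o vers=2,proto=udp linkstation:/share /mnt/nas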
I'm trying to set up an NFS mount to an older NAS device. The client
is Fedora 34, the NAS is a Buffalo Linkstation. I have access to the
NAS via SSH and I can successfully mount it using CIFS and SSHFS. Of
course, CIFS loses some file attributes and SSHFS seems slow and
doesn't see the
On Thu, Jun 25, 2020 George N. White III wrote:
>
> Glad you are making progress. NFS configuration is pretty much the
> same across linux distros, so the NFS section in Debian Handbook
> should apply to Fedora (and contains links to Ubuntu docs). Arch
> Linux docs for NFS are also helpful.
Caref
On 2020-06-24 17:38, Ed Greshko wrote:
As for the address of the SMB server. Are we talking about the ASUS router? I
thought you already knew
that to be 192.168.50.1? It would be the same as the router it is running on.
°
You cover several points here that I will need to consider but
On 2020-06-24 16:07, Andy Paterson via users wrote:
exportfs -a
°
Yes that looks like the command I needed. Rebooting fixed it. I will
make certain it is in my notes, I keep everything in Notecase Pro, have
years of notes there but they are all in the server that I could not
connect to. T
On 2020-06-24 16:05, Tom Horsley wrote:
systemctl list-unit-files | fgrep nfs
probably shows the name you want. "nfs-server" is
probably the right name (some other distro must call
it just "nfs" - I have so many virtual machines for testing
I lose track of what things are called in different
distros).
On 2020-06-24 15:45, Tom Horsley wrote:
systemctl restart nfs
°
No that doesn't work -
[root@localhost bobg]# systemctl restart nfs
Failed to restart nfs.service: Unit nfs.service not found.
[root@localhost bobg]# systemctl restart nfsd
Failed to restart nfsd.service: Unit nfsd.service not found.
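As Tom's reply above notes, the unit on Fedora is named nfs-server, so
the command that should work here is:
$ sudo systemctl restart nfs-server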
On Wed, 24 Jun 2020 15:38:41 -0400
Bob Goodwin wrote:
> I think there is a command to run after making
> a change but I forget that one, do you know of it?
I think "systemctl restart nfs" does all that is required
(unless the name isn't plain "nfs", but I bet it will have
nfs as part of the name
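The command Bob was trying to remember is exportfs (see Andy's reply
above); -a exports everything in /etc/exports, and -ra also
re-synchronizes entries that were already exported:
$ sudo exportfs -ra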
On 2020-06-24 15:37, Tom Horsley wrote:
/nfs4exports/home 192.168.2.0/24
Check the /etc/exports file, it may have the .2 hard coded
from the previous setup.
°
Yes, I need the command to re-run exports ...
--
Bob Goodwin - Zuni, Virginia, USA
http://www.qrz.com/db/W2BOD
FEDORA-32/64bit LINUX X
On Wed, 24 Jun 2020 14:59:17 -0400
Bob Goodwin wrote:
> Export list for nfs:
> /nfs4exports/home 192.168.2.0/24
Check the /etc/exports file, it may have the .2 hard coded
from the previous setup.
Hopefully someplace in the interface there is a way to change that.
From the command line it may be in /etc/exports and may look like this:
/usr/local 192.168.0.0/24(rw,sync,no_root_squash)
I don't know if from the ssh into the router you can edit that, or if
they put it in that file
From t
On 2020-06-24 14:00, Robert G (Doc) Savage via users wrote:
Speaking from personal experience, I always forget to set the
firewalls to allow NFS traffic on both sides of the link.
--Doc Savage
Fairview Heights, IL
°
I thought about firewall problems but I've changed nothing that I know
of.
On an nfs server you also need to define someplace what is allowed to
mount it and what permissions they have (readonly or rw).
Do this on linux with the ip address of the router:
showmount -a ipaddr
showmount -e ipaddr
-e shows what is "exported" on the nfs server, -a shows who it thinks
has it mounted.
Speaking from personal experience, I always forget to set the firewalls to
allow NFS traffic on both sides of the link.
--Doc Savage
Fairview Heights, IL
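With firewalld on Fedora that is something like the following (for NFSv3
the mountd and rpc-bind services are usually needed as well):
$ sudo firewall-cmd --permanent --add-service=nfs
$ sudo firewall-cmd --reload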
-Original Message-
From: Bob Goodwin
Reply-To: Community support for Fedora users
To: Fedora List
Subject: nfs mount problem
I have been chipping away at incorporating a new ASUS RT-ACFH13 router
in my system for a few days, but it is beginning to seem like an
eternity! I have used a number of routers and they usually work after
some configuration; it's not a difficult thing to do. But I am beginning
to wonder if this
> It is possible I'm not remembering it correctly. Some time back I had an
> issue where, when I specified the mount point in fstab and manually
> issued the mount, the mount would fail (I've forgotten the exact syntax
> of the error), and when I raised a query on this list I thought I was
> told to t
Sorry, you are confusing the option not causing an issue with it
actually selecting version=1.
The option may not have caused an issue on f28, but as wikipedia says
and someone else says there never was NFS version 1 (on linux). It
would seem the behavior when you give it an incorrect mount now c
On Tue, 2020-06-09 at 23:01 +1000, Stephen Morris wrote:
> There is also one other thing I don't understand. If I run dolphin,
> under the Remote Section in the left hand panel it shows the entry
> /mnt/HD/HD_a2:/mnt/nfs on 192.168.1.12 which matches the fstab entry,
> and when I click on that entr
I used linux in 2.0.X kernels (vintage 1998), and we used version=2,
so I am going to guess there was never a version 1 on linux.
And wikipedia says this interesting bit I did not know:
Sun used version 1 only for in-house experimental purposes.
So I guess that says no one outside of Sun used version 1.
On Tue, 9 Jun 2020 21:54:30 +1000
Stephen Morris wrote:
> nfsvers=1
I seriously doubt there is any support for nfs 1 left
in the code (could be wrong).
Depending on random variations every time the nfs
utilities get updates, I've had to sometimes
specify "proto=udp" as well as or instead of
the
Hi,
I have the following statement in fstab:
192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs nfs
nfsvers=1,x-systemd.automount,defaults 0 0
When I issue the command 'mount /mnt/nfs' it fails with the
following messages shown in dmesg, which indicate that the mount seems
to be trying
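Once a mount does succeed, a quick way to see which NFS version and
options were actually negotiated is:
$ nfsstat -m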