On Thu, 2022-02-03 at 15:26 +0100, William Edwards wrote:
> Hi,
>
> Jeff Layton schreef op 2022-02-03 14:45:
> > On Thu, 2022-02-03 at 12:01 +0100, William Edwards wrote:
> > > Hi,
> > >
> > > I need to set options from
> > > https://docs.ceph.c
> Ceph 14.2.22. The clients are running Ceph
> 12.2.11. All clients use the kernel client.
>
The in-kernel client itself does not pay any attention to ceph.conf. The
mount helper program (mount.ceph) will look at the ceph config and
keyrings to find mon addresses and secrets for mounting if you
don't provide them in the device string and mount options.
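For instance (addresses, client name and secret file path here are just
made-up placeholders):

  # everything spelled out -- ceph.conf is never consulted:
  mount -t ceph 192.0.2.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

  # with a reasonably recent mount.ceph you can leave them out and
  # let it dig the mon addresses and key out of /etc/ceph:
  mount -t ceph :/ /mnt/cephfs -o name=admin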
--
Jeff Layton
> 2021-11-25 22:12:40 [ 3322.625099] rwsem_down_read_slowpath+0x2f6/0x4a0
> 2021-11-25 22:12:40 [ 3322.629806] ? lookup_fast+0xae/0x150
> 2021-11-25 22:12:40 [ 3322.633472] walk_component+0x129/0x1b0
> 2021-11-25 22:12:40 [ 3322.637315] ? path_init+0x2ef/0x360
> 2021-11-25 22:12:40 [ 3322.640902] path_lookupat.isra.42+0x67/0x140
> 2021-11-25 22:12:40 [ 3322.645258] filename_lookup.part.56+0xa0/0x170
> 2021-11-25 22:12:40 [ 3322.649793] ? __check_object_size+0x162/0x180
> 2021-11-25 22:12:40 [ 3322.654238] ? strncpy_from_user+0x46/0x1e0
> 2021-11-25 22:12:40 [ 3322.658422] vfs_statx+0x72/0x110
> 2021-11-25 22:12:40 [ 3322.661740] __do_sys_newstat+0x39/0x70
> 2021-11-25 22:12:40 [ 3322.665584] ? syscall_trace_enter.isra.19+0x123/0x190
> 2021-11-25 22:12:40 [ 3322.670722] do_syscall_64+0x33/0x40
> 2021-11-25 22:12:40 [ 3322.674304] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> 2021-11-25 22:12:40 [ 3322.679375] RIP: 0033:0x7f8e8026ba79
> 2021-11-25 22:12:40 [ 3322.682949] RSP: 002b:7f8a05d9d048 EFLAGS: 0246 ORIG_RAX: 0004
> 2021-11-25 22:12:40 [ 3322.690517] RAX: ffda RBX: 7f8a05d9d050 RCX: 7f8e8026ba79
> 2021-11-25 22:12:40 [ 3322.697650] RDX: 7f8a05d9d050 RSI: 7f8a05d9d050 RDI: 7f8900018220
> 2021-11-25 22:12:40 [ 3322.704783] RBP: 7f8a05d9d100 R08: R09: 000459723280
> 2021-11-25 22:12:40 [ 3322.711917] R10: 7f8e687103a5 R11: 0246 R12: 7f8900018220
> 2021-11-25 22:12:40 [ 3322.719045] R13: 7f8c540c7b48 R14: 7f8a05d9d118 R15: 7f8c540c7800
> 2021-11-25 22:13:46 [ 3388.045080] ceph: mds0 hung
>
> --
> ________________________________________
> prof. dr. Andrej Filipcic, E-mail: andrej.filip...@ijs.si
> Department of Experimental High Energy Physics - F9
> Jozef Stefan Institute, Jamova 39, P.o.Box 3000
> SI-1001 Ljubljana, Slovenia
> Tel.: +386-1-477-3674   Fax: +386-1-477-3166
> ----------------------------------------
--
Jeff Layton
It's really hard to do any sort
of immediate notification on certain types of changes.
--
Jeff Layton
With a network filesystem like cephfs on the backend, you can't really
get around the extra hops that reads and writes have to take on the
network. The only real way to fix that would be to support pNFS, but
there is not currently a consensus on how to do that.
--
Jeff Layton
I don't see that it
provides anything of value, and it'll make it harder to use NFSv4
migration later, if we decide to add that support.
My recommendation is still to have a different address for each server
and just use round-robin DNS to distribute the clients.
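Roughly like this, in BIND zone-file terms (names and addresses are
made up):

  ; one name, one A record per ganesha server; clients pick up the
  ; records in rotated order and spread across the hosts
  nfs.example.com.  60  IN  A  192.0.2.11
  nfs.example.com.  60  IN  A  192.0.2.12
  nfs.example.com.  60  IN  A  192.0.2.13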
> Hope someone has a few insights
--
Jeff Layton
On Tue, 2021-03-02 at 17:44 +0100, Stefan Kooman wrote:
> On 3/2/21 5:16 PM, Jeff Layton wrote:
> > On Tue, 2021-03-02 at 09:25 +0100, Stefan Kooman wrote:
> > > Hi,
> > >
> > > On a CentOS 7 VM with mainline kernel (5.11.2-1.el7.elrepo.x86_64 #1 SMP
>
v2 support in the kernel is keyed on the ms_mode= mount option, so that
has to be passed in if you're connecting to a v2 port. Until the mount
helpers get support for that option you'll need to specify the address
and port manually if you want to use v2.
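Something along these lines (address and secret file path are
placeholders):

  # v1: default mon port 6789, no ms_mode needed
  mount -t ceph 192.0.2.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

  # v2: point at the v2 port (3300 by default) and pass ms_mode
  mount -t ceph 192.0.2.1:3300:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,ms_mode=crc

ms_mode can be legacy, crc, secure, prefer-crc or prefer-secure,
depending on which v2 mode you want.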
--
Jeff Layton
> Jeff, we should probably push that patch to stable kernels.
>
Sure, sounds fine. I'll send a note to the stable@vger list.
Thanks!
--
Jeff Layton
> > I got curious and after
> > some digging I managed to reproduce the issue with kernel 5.3. The
> > culprit was commit e09580b343aa ("ceph: don't list vxattrs in
> > listxattr()"), in 5.4.
> >
> > Getting a bit more into the whole rabbit hole, it look
the nfs-ganesha log during this touch[5].
> >
> >
> >
> > nfs-ganesha-2.8.1.2-0.1.el7.x86_64
> > nfs-ganesha-ceph-2.8.1.2-0.1.el7.x86_64
> >
> > ceph version 14.2.9 (581f22da52345dba46ee232b73b990f06029a2a0) nautilus (stable)
> >
> >
--
Jeff Layton
On Sun, 2020-06-14 at 15:17 +0200, Marc Roos wrote:
> When rsyncing to a nfs-ganesha exported cephfs the process hangs, and
> escalates into "cache pressure" of other cephfs clients[1].
>
> When testing the rsync with more debugging on, I noticed that rsync
> stalled at the 'set modtime of . '[2]
> > Access_Type = RW;
> > Attr_Expiration_Time = 0;
> > Squash = no_root_squash;
> >
> > FSAL {
> >     Name = CEPH;
> >     User_Id = "ganesha";
> >     Secret_Access_Key = CEPHXKEY
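(For reference, a minimal complete FSAL_CEPH export in that style looks
roughly like the sketch below; Export_ID, paths and the key are
placeholders:)

  EXPORT
  {
      Export_ID = 100;
      Path = "/";
      Pseudo = "/cephfs";
      Access_Type = RW;
      Attr_Expiration_Time = 0;
      Squash = no_root_squash;

      FSAL {
          Name = CEPH;
          User_Id = "ganesha";
          Secret_Access_Key = "<key for client.ganesha>";
      }
  }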
It'd be interesting to see what that
looks like, particularly when the MDS is complaining about client
resource utilization.
--
Jeff Layton
listxattr() doesn't do any filtering. You just get a dump of
names and have to filter them in userspace yourself. It also doesn't
help that xattrs aren't governed by any sort of standard so the rules
for all of this are quite vague.
Personally, I find xattrs to be a garbage interface.
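In practice that means asking for cephfs vxattrs by name instead of
expecting to find them in the list; a quick sketch (the path is a
placeholder):

  # listxattr() just dumps names and tools filter client-side; and
  # since 5.4 the kernel doesn't list ceph vxattrs at all, so fetch
  # them explicitly:
  getfattr -n ceph.dir.rbytes /mnt/cephfs/somedir
  getfattr -n ceph.dir.rentries /mnt/cephfs/somedir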
12
Chunks in use: 0
8<---
That should tell us something about the cached inodes that ganesha is
holding onto.
--
Jeff Layton
"items": 4121,
> "bytes": 75912
> },
> "mds_co": {
> "items": 221728924,
> "bytes": 16320868177
> },
> },
> "t
On Fri, 2020-03-27 at 07:36 -0400, Jeff Layton wrote:
> On Thu, 2020-03-26 at 10:32 -0700, Gregory Farnum wrote:
> > On Thu, Mar 26, 2020 at 9:13 AM Frank Schilder wrote:
> > > Dear all,
> > >
> > > yes, this is it, quotas. In the structure A/B/ there was a
> process adds approximately 0.5% of new data of the cluster’s total
> storage capacity.
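For anyone following along: cephfs quotas are just vxattrs on the
directory, e.g. (mount point is a placeholder):

  # set a 10 GiB quota on the subtree and read it back:
  setfattr -n ceph.quota.max_bytes -v 10737418240 /mnt/cephfs/A/B
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/A/B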
--
Jeff Layton
since it does everything under the BCM (Big Client Mutex), but won't
be in the kernel client.
Opening a bug for this won't hurt, but it may not be simple to
implement.
--
Jeff Layton
I suspect this will be fixed by the attached patch that's already
slated for RHEL7.9.
If you're able to build and test a kernel with that patch, then please
let us know if it fixes this problem for you as well.
Thanks,
--
Jeff Layton
mount /mnt/cephfs ; ls -l /mnt/cephfs
Sent 0 KiB over sendfile(3EXT) of 0 KiB requested
total 1
drwxr-xr-x 1 root    root     1 Mar 25 08:53 foo
drwxrwxrwx 1 root    root     0 Mar 25 08:37 scratch
drwxr-xr-x 1 root    root    57 Mar 25 08:44 test
-rw-r--r-- 1 jlayton jlayton 27 Mar
recalled for any competing access. What you really want to avoid is any
sort of caching at the ganesha daemon layer.
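In ganesha.conf terms that usually means something like the sketch
below (recent releases call the block MDCACHE, older ones CACHEINODE):

  # let libcephfs, which holds real CephFS caps, do the caching and
  # keep ganesha's own attribute/dirent caching to a minimum:
  MDCACHE {
      Dir_Chunk = 0;
  }

together with Attr_Expiration_Time = 0; in the EXPORT block.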
--
Jeff Layton
layer. For example:
https://lwn.net/Articles/687354/
https://lwn.net/Articles/806176/
Currently, there is no real solution for this in stock kernels, but you
could look at shiftfs as a possible solution for now and we may
eventually get uid/gid shifting as part of the generic VFS
Again, this is non-trivial to fix.
In summary I don't see a real future for this feature unless someone
wants to step up to own it and commit to fixing up these problems.
> On 16/08/2019 13.15, Jeff Layton wrote:
> > A couple of weeks ago, I sent a request to the mailing list asking
release cycles, once we're past the point where someone
can upgrade directly from Nautilus (release Q or R?) we'd rip out
support for this feature entirely.
Thoughts, comments, questions welcome.
--
Jeff Layton