se??
Kind regards
Arnaud
- Original Message -
From: "Venky Shankar"
To: "arnaud martel"
Cc: "ceph-users"
Sent: Friday, 11 February 2022 15:03:04
Subject: Re: [ceph-users] MDS crash when unlink file
Hi Arnaud,
On Fri, Feb 11, 2022 at 2:42 PM Arnaud MARTEL wrote:
Hi,
MDSs are crashing on my production cluster when trying to unlink some files and
I need help :-).
When looking into the log files, I have identified some associated files and I
ran a scrub on the parent directory with force,repair,recursive options. No
errors were detected but the problem p
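For reference, a minimal sketch of that scrub invocation, assuming a single
active MDS; the filesystem name "cephfs" and the path are placeholders, not
values from the original message:

# start a recursive scrub with repair on the parent directory
ceph tell mds.cephfs:0 scrub start /path/to/parent recursive,repair,force
# check progress and results
ceph tell mds.cephfs:0 scrub status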
Hi all,
We have had a cephfs cluster in production for about 2 months and, for the past
2-3 weeks, we have regularly been experiencing MDS crash loops (every 3-4 hours
when there is some user activity).
A temporary fix is to remove the MDSs in error (or unknown) state, stop samba &
nfs-ganesha gateways
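A rough sketch of that kind of workaround, assuming a cephadm deployment and
systemd-managed gateways (daemon, host and service names are placeholders):

# stop the gateways so clients stop hitting the filesystem
systemctl stop nfs-ganesha    # on each NFS gateway host
systemctl stop smbd           # on each Samba gateway host
# identify the MDS daemons stuck in error or unknown state
ceph fs status
ceph orch ps
# restart the affected MDS daemons so that standbys can take over
ceph orch daemon restart mds.cephfs.host1.abcdef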
Hi Samuel,
You have to use pool affinity.
For example, with 3 pools in ceph pacific:
- pool_fs_meta -> replicated pool, SSD only
- pool_fs_data -> replicated pool, HDD only
- pool_fs_data_ec -> EC pool, HDD
#ceph fs new cephfsvol pool_fs_meta pool_fs_data
#ceph osd pool set pool_fs_data_ec allo
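A fuller sketch of the sequence those two (truncated) commands belong to,
using device-class CRUSH rules for the affinity; rule names, PG counts, the
EC profile and the mount path are assumptions, not values from the original
message:

# CRUSH rules pinning the pools to a device class
ceph osd crush rule create-replicated rule_ssd default host ssd
ceph osd crush rule create-replicated rule_hdd default host hdd
ceph osd pool create pool_fs_meta 64 replicated rule_ssd
ceph osd pool create pool_fs_data 64 replicated rule_hdd
# EC pool restricted to HDDs; overwrites must be allowed for cephfs use
ceph osd erasure-code-profile set ec_hdd k=4 m=2 crush-device-class=hdd
ceph osd pool create pool_fs_data_ec 64 erasure ec_hdd
ceph osd pool set pool_fs_data_ec allow_ec_overwrites true
# create the filesystem and attach the EC pool as an additional data pool
ceph fs new cephfsvol pool_fs_meta pool_fs_data
ceph fs add_data_pool cephfsvol pool_fs_data_ec
# place a directory on the EC pool via its layout attribute
setfattr -n ceph.dir.layout.pool -v pool_fs_data_ec /mnt/cephfsvol/some_dir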
Hi Dan,
I probably have a very specific use of Ceph + nfs-ganesha, but I don't use
prebuilt nfs-ganesha binaries (from any location).
The reason is that I had to patch nfs-ganesha 3.5 so that it plays nicely with
CephFS and POSIX ACLs (using the VFS FSAL)...
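For context, a minimal ganesha.conf export of the kind implied above (VFS FSAL
over a locally mounted cephfs path); the export id and paths are assumptions:

EXPORT {
    Export_Id = 1;              # arbitrary example id
    Path = /mnt/cephfs/share;   # cephfs mounted locally on the gateway (assumed path)
    Pseudo = /share;
    Access_Type = RW;
    Protocols = 4;
    Squash = No_Root_Squash;
    FSAL {
        Name = VFS;             # VFS FSAL, as mentioned above, instead of the CEPH FSAL
    }
}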
Arnaud
Hi Matthew,
I don't know if it will be helpful, but I had the same problem using Debian 10
and the solution was to install docker from docker.io and not from the Debian
package (too old).
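A sketch of what that typically amounts to on Debian 10: installing docker-ce
from Docker's own apt repository (this is the standard upstream procedure, not
commands taken from the original message):

# remove the distro packages first
apt-get remove docker docker.io containerd runc
# add Docker's repository and install docker-ce from it
apt-get update && apt-get install -y ca-certificates curl gnupg
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/debian buster stable" > /etc/apt/sources.list.d/docker.list
apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io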
Arnaud
- Original Message -
From: "Matthew Pounsett"
To: "ceph-users"
Sent: Monday, 30 August 2021 17:34
eading files in cephfs (which may be related to the dropped packets), so I
downgraded our 2 clusters to 16.2.4.
I will try to resolve my problems with 16.2.5 to benefit from the upcoming
enhancements to snapshot mirroring...
Kind regards,
Arnaud
- Original Message -
From: "Venky Shankar"
Hi all,
I'm not sure I really understand how cephfs snapshot mirroring is supposed
to work.
I have 2 ceph clusters (pacific 16.2.4) and snapshot mirroring is set up for
only one directory, /ec42/test, in our cephfs filesystem (it's for test purposes
but we plan to use it with about 50-60
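For reference, the rough setup sequence for that kind of configuration on
pacific; the filesystem name "cephfs", the client/site names and the token
are placeholders:

# on both clusters: enable the mirroring mgr module
ceph mgr module enable mirroring
# on the secondary (target) cluster: create a peer bootstrap token
ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote site-b
# on the primary (source) cluster: deploy the mirror daemon, enable mirroring
# for the filesystem, import the token, then add the directory to mirror
ceph orch apply cephfs-mirror
ceph fs snapshot mirror enable cephfs
ceph fs snapshot mirror peer_bootstrap import cephfs <token>
ceph fs snapshot mirror add cephfs /ec42/test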
Peter,
I had the same error and my workaround was to manually create the
/usr/lib/sysctl.d directory on all nodes, then resume the upgrade.
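A sketch of that workaround, assuming a cephadm-managed upgrade (host names
are placeholders):

# create the missing directory on every node, e.g. with a quick loop
for h in host1 host2 host3; do ssh "$h" mkdir -p /usr/lib/sysctl.d; done
# then let cephadm continue where it stopped
ceph orch upgrade resume
ceph orch upgrade status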
Arnaud Martel
- Original Message -
From: "Peter Childs"
To: "ceph-users"
Sent: Friday, 6 August 2021 15:03:20
Subject: [ceph-users]
OK. I found the answer (based on a previous discussion) and I was able to clear
this warning using the following command:
ceph orch restart mgr
Arnaud
- Original Message -
From: "arnaud martel"
To: "ceph-users"
Sent: Thursday, 22 July 2021 16:20:43
Subject: [cep
Hi,
I just upgraded my cluster from 16.2.4 to 16.2.5 and I had an error during the
upgrade of the first OSD daemon (see below). I fixed the error (I just created
the missing directory on all hosts), then resumed the upgrade. Now everything
is OK, but I still have a warning: "[WRN] UPGRADE_REDEPL
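A short sketch of the follow-up steps, tying this to the reply further up in
this listing (restarting the mgr daemons is what cleared the warning there):

# inspect the leftover warning and the upgrade state
ceph health detail
ceph orch upgrade status
# per the follow-up message above: restart the mgr daemons to clear it
ceph orch restart mgr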