[ceph-users] Re: MDS crash when unlink file

2022-02-14 Thread Arnaud MARTEL
Kind regards Arnaud - Original message - From: "Venky Shankar" To: "arnaud martel" Cc: "ceph-users" Sent: Friday, February 11, 2022 15:03:04 Subject: Re: [ceph-users] MDS crash when unlink file Hi Arnaud, On Fri, Feb 11, 2022 at 2:42 PM Arnaud MARTEL wrote:

[ceph-users] MDS crash when unlink file

2022-02-11 Thread Arnaud MARTEL
Hi, MDSs are crashing on my production cluster when trying to unlink some files and I need help :-). When looking into the log files, I identified some associated files and I ran a scrub on the parent directory with the force,repair,recursive options. No errors were detected but the problem p
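For reference, a directory scrub with those options can be launched from the admin node roughly as follows (a minimal sketch; the filesystem name, rank, and parent path are placeholders, not taken from the original message):

  # start a recursive, repairing scrub on the suspect directory (rank 0 assumed)
  ceph tell mds.cephfsvol:0 scrub start /path/to/parent recursive,repair,force
  # check progress of the running scrub
  ceph tell mds.cephfsvol:0 scrub status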

[ceph-users] mds crash loop - Server.cc: 7503: FAILED ceph_assert(in->first <= straydn->first)

2022-02-08 Thread Arnaud MARTEL
Hi all, We have had a cephfs cluster in production for about 2 months and, for the past 2-3 weeks, we have regularly been experiencing MDS crash loops (every 3-4 hours when there is some user activity). A temporary fix is to remove the MDSs in error (or unknown) state, stop the samba & nfs-ganesha gateways
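One plausible reading of that temporary workaround, sketched with systemd and cephadm commands (the unit names, filesystem name, and daemon name are assumptions, and "remove" may equally mean restarting or redeploying the affected MDS):

  # quiesce client traffic by stopping the gateway services (unit names assumed)
  systemctl stop smbd nfs-ganesha
  # mark the stuck MDS rank as failed so a standby can take over (fs name assumed)
  ceph mds fail cephfsvol:0
  # restart the affected MDS daemon under cephadm (daemon name is hypothetical)
  ceph orch daemon restart mds.cephfsvol.host1.xyzabc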

[ceph-users] Re: Use of an EC pool for the default data pool is discouraged

2022-01-21 Thread Arnaud MARTEL
Hi Samuel, You have to use pool affinity. For example, with 3 pools in Ceph Pacific: - pool_fs_meta -> replicated pool, SSD only - pool_fs_data -> replicated pool, HDD only - pool_fs_data_ec -> EC pool, HDD #ceph fs new cephfsvol pool_fs_meta pool_fs_data #ceph osd pool set pool_fs_data_ec allo
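Filled out a little, the recipe above might look like the sketch below (the allow_ec_overwrites flag, the add_data_pool step, and the setfattr layout on a mounted directory are assumptions about where the truncated message was heading, not quotes from it):

  # create the filesystem with replicated metadata and default data pools
  ceph fs new cephfsvol pool_fs_meta pool_fs_data
  # allow the EC pool to be used by CephFS, then attach it as an extra data pool
  ceph osd pool set pool_fs_data_ec allow_ec_overwrites true
  ceph fs add_data_pool cephfsvol pool_fs_data_ec
  # pin a directory (and new files below it) to the EC pool via a file layout
  setfattr -n ceph.dir.layout.pool -v pool_fs_data_ec /mnt/cephfsvol/bulk_data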

[ceph-users] Re: are you using nfs-ganesha builds from download.ceph.com

2022-01-12 Thread Arnaud MARTEL
Hi Dan, I probably have a very specific use of CEPH + nfs-ganesha but I don't use nfs-ganesha builds (from any location). The reason is that I had to patch nfs-ganesha 3.5 in order to play nicely with CEPHFS and POSIX ACLs (using VFS FSAL)... Arnaud

[ceph-users] Re: cephadm Pacific bootstrap hangs waiting for mon

2021-08-31 Thread Arnaud MARTEL
Hi Matthew, I don't know if it will be helpful but I had the same problem using Debian 10 and the solution was to install docker from docker.io and not from the Debian package (too old). Arnaud - Original message - From: "Matthew Pounsett" To: "ceph-users" Sent: Monday, August 30, 2021 17:34
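One common way to do that on Debian 10, assuming "docker from docker.io" refers to Docker's upstream docker-ce packages rather than Debian's own packaging (the repository URL and package names follow Docker's standard install procedure, not the thread):

  # remove the outdated distribution packages, if present
  apt-get remove -y docker docker.io containerd runc
  # add Docker's upstream apt repository for Debian 10 (buster) and install docker-ce
  curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker.gpg
  echo "deb [signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/debian buster stable" > /etc/apt/sources.list.d/docker.list
  apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io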

[ceph-users] Re: cephfs snapshots mirroring

2021-08-23 Thread Arnaud MARTEL
eading files in cephfs (may be related to the dropped packets), so I downgraded our 2 clusters to 16.2.4. I will try to resolve my problems with 16.2.5 to benefit from the next enhancements of snapshots mirroring... Kind regards, Arnaud - Original message - From: "Venky Shankar"

[ceph-users] cephfs snapshots mirroring

2021-08-23 Thread Arnaud MARTEL
Hi all, I'm not sure I really understand how cephfs snapshots mirroring is supposed to work. I have 2 ceph clusters (pacific 16.2.4) and snapshots mirroring is set up for only one directory, /ec42/test, in our cephfs filesystem (it's for test purposes but we plan to use it with about 50-60
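For context, the Pacific-era setup for mirroring a single directory between two clusters looks roughly like this (a sketch assuming the mirroring mgr module and a cephfs-mirror daemon are already deployed; the filesystem, client, and site names are placeholders, while /ec42/test is the path from the message above):

  # on both clusters: enable the mirroring mgr module
  ceph mgr module enable mirroring
  # on the target cluster: create a bootstrap token for the peer
  ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote site-remote
  # on the source cluster: enable mirroring, import the token, add the directory
  ceph fs snapshot mirror enable cephfs
  ceph fs snapshot mirror peer_bootstrap import cephfs <bootstrap-token>
  ceph fs snapshot mirror add cephfs /ec42/test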

[ceph-users] Re: Cephadm Upgrade from Octopus to Pacific

2021-08-06 Thread Arnaud MARTEL
Peter, I had the same error and my workaround was to manually create the /usr/lib/sysctl.d directory on all nodes, then resume the upgrade. Arnaud Martel - Original message - From: "Peter Childs" To: "ceph-users" Sent: Friday, August 6, 2021 15:03:20 Subject: [ceph-users]
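Concretely, that workaround boils down to something like the following (the host names and the ssh loop are illustrative assumptions):

  # create the missing directory on every node (host list is hypothetical)
  for h in host1 host2 host3; do ssh "$h" mkdir -p /usr/lib/sysctl.d; done
  # then resume the paused cephadm upgrade
  ceph orch upgrade resume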

[ceph-users] Re: Can't clear UPGRADE_REDEPLOY_DAEMON after fix

2021-07-23 Thread Arnaud MARTEL
OK. I found the answer (based on a previous discussion) and I was able to clear this warning using the following command: ceph orch restart mgr Arnaud - Original message - From: "arnaud martel" To: "ceph-users" Sent: Thursday, July 22, 2021 16:20:43 Subject: [cep

[ceph-users] Can't clear UPGRADE_REDEPLOY_DAEMON after fix

2021-07-22 Thread Arnaud MARTEL
Hi, I just upgraded my cluster from 16.2.4 to 16.2.5 and I had an error during the upgrade of the first OSD daemon (see below). I fixed the error (I just created the missing directory on all hosts), then resumed the upgrade. Now, everything is OK but I still have a warning: "[WRN] UPGRADE_REDEPL
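As the follow-up reply above notes, the leftover warning was eventually cleared by restarting the mgr daemons; a short sketch of that sequence (the health check is an added step, not from the thread):

  # inspect the stale UPGRADE_REDEPLOY_DAEMON warning
  ceph health detail
  # restart the mgr daemons so the stale upgrade state is cleaned up
  ceph orch restart mgr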