[ceph-users] Re: Converting to cephadm : Error EINVAL: Failed to connect

2023-06-02 Thread David Barton
Thanks, Michel. ceph -s reports it as a stray host (since I haven't been able to add it). ceph health detail reiterates that it is a stray host. # ceph cephadm check-host cephstorage-rs01 check-host failed: Host 'cephstorage-rs01' not found. Use 'ceph orch host ls' to see all managed hosts. I
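
For reference, a minimal sketch of the checks mentioned in this message (host name taken from the thread; standard cephadm/orchestrator commands, not a verbatim transcript):

    ceph orch host ls                           # list hosts currently managed by the orchestrator
    ceph cephadm check-host cephstorage-rs01    # per-host check; fails if the host is not yet managed
    ceph health detail                          # stray-host (CEPHADM_STRAY_HOST) warnings are reported here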

[ceph-users] Re: Encryption per user Howto

2023-06-02 Thread Alexander E. Patrakov
Hello Stefan, On Fri, Jun 2, 2023 at 11:12 PM Stefan Kooman wrote: > On 6/2/23 16:33, Anthony D'Atri wrote: > > Stefan, how do you have this implemented? Earlier this year I submitted https://tracker.ceph.com/issues/58569 asking to enable just this

[ceph-users] Re: [EXTERNAL] Re: Converting to cephadm : Error EINVAL: Failed to connect

2023-06-02 Thread Beaman, Joshua
I usually find the most useful errors for troubleshooting orch/cephadm connection issues in: ceph log last 50 cephadm Thank you, Josh Beaman From: Michel Jouvin Date: Friday, June 2, 2023 at 1:19 PM To: ceph-users@ceph.io Subject: [EXTERNAL] [ceph-users] Re: Converting to cephadm : Error EINVA
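
For reference, the troubleshooting command quoted above, plus the related watch mode (assumed here as a convenience, not part of the original mail):

    ceph log last 50 cephadm    # last 50 cephadm entries from the cluster log
    ceph -W cephadm             # follow new cephadm log messages as they arrive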

[ceph-users] Re: Converting to cephadm : Error EINVAL: Failed to connect

2023-06-02 Thread Michel Jouvin
Hi David, Normally cephadm connection issues are not that difficult to solve. It is just a matter of having the appropriate SSH configuration in the root account: mainly, the public key used by cephadm (extracted with the command you used in a shell) added to the root account's .ssh/authorized_k
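
A minimal sketch of the SSH setup Michel describes, using the documented cephadm key commands (host name taken from the thread; run from a node with an admin keyring):

    ceph cephadm get-pub-key > ~/ceph.pub               # the public key cephadm uses for SSH
    ssh-copy-id -f -i ~/ceph.pub root@cephstorage-rs01  # append it to root's .ssh/authorized_keys on the target host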

[ceph-users] Converting to cephadm : Error EINVAL: Failed to connect

2023-06-02 Thread David Barton
I am trying to debug an issue with ceph orch host add. Is there a way to debug the specific ssh commands being issued or add debugging code to a python script? There is nothing useful in my syslog or /var/log/ceph/cephadm.log. Is there a way to get the command to log, or can someone point me in
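
Not from the original mail, but a sketch of the documented cephadm troubleshooting steps for exactly this: raising the log level and reproducing the orchestrator's SSH connection by hand (host name taken from the thread):

    ceph config set mgr mgr/cephadm/log_to_cluster_level debug  # log cephadm at debug level to the cluster log
    ceph -W cephadm --watch-debug                               # follow the debug messages live
    ceph cephadm get-ssh-config > ssh_config                    # dump the SSH config cephadm uses
    ceph config-key get mgr/cephadm/ssh_identity_key > key      # dump the private key cephadm uses
    chmod 0600 key                                              # restrict permissions so ssh accepts the key
    ssh -F ssh_config -i key root@cephstorage-rs01              # reproduce the orchestrator's SSH connection manually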

[ceph-users] Re: Encryption per user Howto

2023-06-02 Thread Stefan Kooman
On 6/2/23 16:33, Anthony D'Atri wrote: Stefan, how do you have this implemented? Earlier this year I submitted https://tracker.ceph.com/issues/58569 asking to enable just this. Lol, I have never seen that tracker, otherwise I would have informed you abou

[ceph-users] Re: Encryption per user Howto

2023-06-02 Thread Anthony D'Atri
Stefan, how do you have this implemented? Earlier this year I submitted https://tracker.ceph.com/issues/58569 asking to enable just this. > On Jun 2, 2023, at 10:09, Stefan Kooman wrote: > > On 5/26/23 23:09, Alexander E. Patrakov wrote: >> Hello Frank, >> On Fri, May 26, 2023 at 6:27 PM Fran

[ceph-users] Re: Encryption per user Howto

2023-06-02 Thread Stefan Kooman
On 5/26/23 23:09, Alexander E. Patrakov wrote: Hello Frank, On Fri, May 26, 2023 at 6:27 PM Frank Schilder wrote: Hi all, jumping on this thread as we have requests for which per-client fs mount encryption makes a lot of sense: What kind of security do you want to achieve with encryption

[ceph-users] Re: CEPH Version choice

2023-06-02 Thread Frank Schilder
Hi Marc, > > We actually kept the benchmark running through an upgrade from mimic to octopus. Was quite interesting to see how certain performance properties change with that. > > So you have stats that show the current performance of a host having mimic and from another host that has

[ceph-users] Re: NFS export of 2 disjoint sub-dir mounts

2023-06-02 Thread Frank Schilder
To answer my own question, assigning an explicit fsid to only one of the exports seems to overwrite the default for all file systems with the same default fsid. Hence, the export definitions /mnt/S1 -options NET /mnt/S2 -options IP and /mnt/S1 -options NET /mnt/S2 -options,fsid=100 IP are equ
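
For readability, the two export definitions compared above, laid out one export per line (NET, IP, and -options are the placeholders used in the message; only the second variant pins an explicit fsid on /mnt/S2):

    # variant 1: both exports rely on the default fsid
    /mnt/S1  -options            NET
    /mnt/S2  -options            IP

    # variant 2: an explicit fsid on one export
    /mnt/S1  -options            NET
    /mnt/S2  -options,fsid=100   IP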

[ceph-users] Metadata pool space usage decreases

2023-06-02 Thread Nathan MALO
Hello all, I am seeing weird behavior on my CephFS. On May 29th I noticed a drop of 50 TB in my data pool. It has been followed by a decrease of space usage in the metadata pool since then. From May 29th, still happening as I write, the metadata pool has lost 1 TB of the initial 1.8 TB. Regarding the
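
Not part of the original mail, but a minimal sketch of commands that report the figures being described (data and metadata pool usage for a CephFS file system):

    ceph df detail    # per-pool STORED/USED figures, including the CephFS data and metadata pools
    ceph fs status    # per-filesystem view: MDS state plus data/metadata pool usage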