[ceph-users] Re: Is there a command to update a client with a new generated key?

2020-12-21 Thread Eugen Block
I played with ceph-authtool and this seems to work: host1:/etc/ceph # ceph-authtool ceph.client.user1.keyring -g -n client.user1 --cap mon "allow r" --cap mds "allow rw path=/dir1" --cap osd "allow rw tag cephfs data=cephfs" where "ceph.client.user1.keyring" is obviously the client's keyring …
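
For reference, a minimal sketch of the full round trip this suggests, assuming the caps shown above are the ones the client should keep: regenerate the key locally with ceph-authtool, then load the updated keyring back into the cluster with ceph auth import.

    host1:/etc/ceph # ceph-authtool ceph.client.user1.keyring -g -n client.user1 \
        --cap mon "allow r" \
        --cap mds "allow rw path=/dir1" \
        --cap osd "allow rw tag cephfs data=cephfs"      # regenerate the key in the local keyring file
    host1:/etc/ceph # ceph auth import -i ceph.client.user1.keyring   # push the new key and caps to the cluster
    host1:/etc/ceph # ceph auth get client.user1                      # verify the key changed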

[ceph-users] Re: Setting up NFS with Octopus

2020-12-21 Thread Jens Hyllegaard (Soft Design A/S)
This is the output from ceph status: cluster: id: 9d7bc71a-3f88-11eb-bc58-b9cfbaed27d3 health: HEALTH_WARN 1 pool(s) do not have an application enabled services: mon: 3 daemons, quorum ceph-storage-1.softdesign.dk,ceph-storage-2,ceph-storage-3 (age 4d) mgr: …
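
The HEALTH_WARN shown here ("1 pool(s) do not have an application enabled") is normally cleared by tagging the pool; a sketch, assuming the offending pool is the ganesha config pool (the pool name below is only an example):

    ceph osd pool ls detail                            # find the pool without an application tag
    ceph osd pool application enable nfs-ganesha nfs   # tag it (pool name is a placeholder)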

[ceph-users] Re: Setting up NFS with Octopus

2020-12-21 Thread Eugen Block
Hi, "I am still not sure if I need to create two different pools, one for the NFS daemon and one for the export?" The pool (and/or namespace) you specify in your nfs.yaml is for the ganesha config only (and should be created for you); it doesn't store NFS data, since that is covered via CephFS …
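
A sketch of what such an nfs.yaml might look like under cephadm in Octopus (service id, hosts, pool and namespace names are placeholders), applied with ceph orch apply -i nfs.yaml; the CephFS data itself stays in the cephfs data pool:

    service_type: nfs
    service_id: mynfs
    placement:
      hosts:
        - ceph-storage-1
        - ceph-storage-2
    spec:
      pool: nfs-ganesha
      namespace: mynfs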

[ceph-users] Re: Is there a command to update a client with a new generated key?

2020-12-21 Thread Marc Roos
P, I guess it is time to create an issue/feature request for 'ceph auth new-key ' -Original Message- From: Eugen Block [mailto:ebl...@nde.ag] Sent: 21 December 2020 10:20 To: ceph-users@ceph.io Subject: [ceph-users] Re: Is there a command to update a client with a new generated …

[ceph-users] Re: Setting up NFS with Octopus

2020-12-21 Thread Jens Hyllegaard (Soft Design A/S)
Hi. You are correct. There must be some hardcoded occurrences of nfs-ganesha. I tried creating a new cluster using the ceph nfs cluster create command. I was still unable to create an export using the management interface; I still got permission errors. But I created the folder manually and did a …
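
For reference, the CLI route mentioned here looks roughly like the following in Octopus; the argument order of the export command changed between releases, so treat this as a sketch and check ceph nfs export create cephfs -h on the installed version:

    ceph nfs cluster create cephfs mynfs "ceph-storage-1,ceph-storage-2"   # type, cluster id, placement
    ceph nfs export create cephfs cephfs mynfs /cephfs --path=/            # fsname, cluster id, pseudo path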

[ceph-users] Re: Setting up NFS with Octopus

2020-12-21 Thread Eugen Block
Yep, there's a note at the bottom of [1]: Note: Only NFS v4.0+ is supported. Quoting "Jens Hyllegaard (Soft Design A/S)": Hi. You are correct. There must be some hardcoded occurrences of nfs-ganesha. I tried creating a new cluster using the ceph nfs cluster create command. I was still unable …
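
On the client side that means mounting with an explicit NFSv4 version, e.g. (server name, pseudo path and mount point are placeholders):

    mount -t nfs -o nfsvers=4.1,proto=tcp ceph-storage-1:/cephfs /mnt/nfs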

[ceph-users] Re: PGs down

2020-12-21 Thread Jeremy Austin
On Sun, Dec 20, 2020 at 6:56 PM Alexander E. Patrakov wrote: > On Mon, Dec 21, 2020 at 4:57 AM Jeremy Austin wrote: > > On Sun, Dec 20, 2020 at 2:25 PM Jeremy Austin wrote: > > > Will attempt to disable compaction and report. > > Not sure I'm doing this right. In [osd] section …

[ceph-users] Re: mgr's stop responding, dropping out of cluster with _check_auth_rotating

2020-12-21 Thread David Orman
We've got a PR in to fix this; we validated it resolves the issue in our larger clusters. We could use some help getting this moved forward since it seems to impact a number of users: https://github.com/ceph/ceph/pull/38677 On Fri, Dec 11, 2020 at 9:10 AM David Orman wrote: > No, as the number

[ceph-users] Ceph with Firewall

2020-12-21 Thread Seena Fallah
Hi, I want to enable the firewall on my Ceph nodes with ufw. Does anyone have experience with any performance regression from it? Is there any way to block the exporter ports (e.g. node exporter and the Ceph exporter) in a Ceph cluster without a firewall? Thanks.
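
Not an answer to the exporter question, but for reference, a ufw sketch covering the well-known Ceph ports, assuming the defaults and with the monitoring host's address as a placeholder:

    ufw allow 3300/tcp                 # mon (msgr2)
    ufw allow 6789/tcp                 # mon (legacy msgr1)
    ufw allow 6800:7300/tcp            # osd/mgr/mds daemon port range
    ufw allow 9283/tcp                 # mgr prometheus module
    ufw allow from 192.168.1.10 to any port 9100 proto tcp   # node_exporter, reachable from the monitoring host only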

[ceph-users] Re: PGs down

2020-12-21 Thread Igor Fedotov
Hi Jeremy, you might want to try RocksDB's disable_auto_compactions option for that. To adjust RocksDB's options one should edit/insert bluestore_rocksdb_options in ceph.conf, e.g. bluestore_rocksdb_options = "disable_auto_compactions=true,compression=kNoCompression,max_write_buffer_number…
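
A sketch of how that might look in ceph.conf; everything after the first option should simply be the value of bluestore_rocksdb_options currently in effect on that OSD (query it first so the existing defaults are kept), with disable_auto_compactions=true prepended:

    # check the value currently in effect
    ceph daemon osd.11 config get bluestore_rocksdb_options

    # ceph.conf, [osd] section: prepend the flag to the value reported above
    [osd]
    bluestore_rocksdb_options = disable_auto_compactions=true,compression=kNoCompression,...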

[ceph-users] Re: PGs down

2020-12-21 Thread Igor Fedotov
Hi Alexander, the option you provided controls BlueFS log compaction, not the RocksDB compactions. Hence it doesn't make sense in Jeremy's case. Thanks, Igor On 12/21/2020 6:55 AM, Alexander E. Patrakov wrote: On Mon, Dec 21, 2020 at 4:57 AM Jeremy Austin wrote: On Sun, Dec 20, 2020 at 2:25 PM Jeremy …

[ceph-users] guide to multi-homed hosts, for Octopus?

2020-12-21 Thread Philip Brown
In my experiments with Ceph so far, setting up a new cluster goes fairly well... so long as I only use a single network. But when I try to use separate networks, things stop functioning in various ways. (For example, I can …) So I thought I'd ask for pointers to any multi-network setup guide. My …
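
For reference, the two settings usually involved in a split-network setup are public_network and cluster_network; a minimal sketch with placeholder subnets, set either in ceph.conf or via the config database:

    # ceph.conf
    [global]
    public_network  = 192.168.1.0/24
    cluster_network = 10.10.10.0/24

    # or centrally
    ceph config set global public_network 192.168.1.0/24
    ceph config set global cluster_network 10.10.10.0/24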

[ceph-users] Removing secondary data pool from mds

2020-12-21 Thread Michael Thomas
I have a cephfs secondary (non-root) data pool with unfound and degraded objects that I have not been able to recover [1]. I created an additional data pool and used 'setfattr -n ceph.dir.layout.pool' and a very long rsync to move the files off of the degraded pool and onto the new pool. This …
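
For reference, a sketch of that workflow with placeholder pool, filesystem and mount-point names; the layout change only affects newly written files, hence the rsync to rewrite the existing ones:

    ceph osd pool create cephfs_data_new 64
    ceph osd pool application enable cephfs_data_new cephfs
    ceph fs add_data_pool cephfs cephfs_data_new                           # attach the new pool to the filesystem
    setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/dir1   # new files under dir1 go to the new pool
    rsync -a /mnt/cephfs/dir1/ /mnt/cephfs/dir1.migrated/                  # rewrite existing files so they land in the new pool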

[ceph-users] Cephfs mount hangs

2020-12-21 Thread Marc Roos
How to recover from this? Is it possible to have a VM with a cephfs mount on an OSD server?

[ceph-users] friendly warning about death by container versions

2020-12-21 Thread Philip Brown
I was having horrible problems getting my test Ceph cluster reinitialized. All kinds of annoying things were happening, including getting differing output from ceph orch device ls vs ceph device ls. Being new-ish to Ceph, I was going nuts, wondering what kind of init options I was …
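
One way to spot that kind of container/CLI version skew, for reference (nothing specific to this cluster):

    ceph versions            # versions reported by the running daemons
    ceph orch ps             # cephadm's view, including the container image each daemon runs
    cephadm version          # version of the locally installed cephadm binary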

[ceph-users] Re: PGs down

2020-12-21 Thread Jeremy Austin
Igor, You're a bloomin' genius, as they say. Disabling auto compaction allowed OSDs 11 and 12 to spin up/out. The 7 down PGs recovered; there were a few unfound items previously which I went ahead and deleted, given that this is EC, revert not being an option. HEALTH OK :) I'm now intending to
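
For reference, the unfound-object cleanup mentioned here is done per PG (the PG id below is a placeholder); delete rather than revert is the only option on an EC pool without a usable previous copy:

    ceph health detail | grep unfound        # list PGs still reporting unfound objects
    ceph pg 2.1f mark_unfound_lost delete    # give up on the unfound objects in that PG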