I played with ceph-authtool and this seems to work:
host1:/etc/ceph # ceph-authtool ceph.client.user1.keyring -g -n client.user1 \
    --cap mon "allow r" --cap mds "allow rw path=/dir1" \
    --cap osd "allow rw tag cephfs data=cephfs"
where "ceph.client.user1.keyring" is obviously the client's keyrin
This is the output from ceph status:
  cluster:
    id:     9d7bc71a-3f88-11eb-bc58-b9cfbaed27d3
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled

  services:
    mon: 3 daemons, quorum ceph-storage-1.softdesign.dk,ceph-storage-2,ceph-storage-3 (age 4d)
    mgr: cep
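Side note: the HEALTH_WARN above is usually cleared by tagging the offending pool with the application that uses it; a minimal sketch, where the pool name and application are placeholders:

ceph osd pool application enable my-pool cephfs    # use cephfs/rbd/rgw/nfs as appropriate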
Hi,
I am still not sure whether I need to create two different pools, one for the
NFS daemon and one for the export?
the pool (and/or namespace) you specify in your nfs.yaml is for the
ganesha config only (and should be created for you); it doesn't store
NFS data, since that is covered via CephFS.
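For illustration, a minimal nfs.yaml sketch from that era of cephadm; the service_id, host, pool and namespace names below are placeholders, not values from this thread, and the exact spec fields can differ between releases:

service_type: nfs
service_id: mynfs
placement:
  hosts:
    - host1
spec:
  pool: nfs-ganesha      # only holds the ganesha/export config objects
  namespace: mynfs       # RADOS namespace inside that pool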
P, I guess it is time to create a feature request issue for 'ceph auth
new-key'.
-----Original Message-----
From: Eugen Block [mailto:ebl...@nde.ag]
Sent: 21 December 2020 10:20
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Is there a command to update a client with a
new generated
Hi.
You are correct. There must be some hardcoded occurrences of nfs-ganesha.
I tried creating a new cluster using the ceph nfs cluster create command.
I was still unable to create an export using the management interface, still
got permission errors.
But I created the folder manually and did a ch
Yep, there's a note at the bottom of [1]:
Note: Only NFS v4.0+ is supported.
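In practice that means mounting with an explicit v4.x version; a minimal sketch, where the gateway host and pseudo path are placeholders:

mount -t nfs -o nfsvers=4.1,proto=tcp nfs-gw1:/cephfs /mnt/nfs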
Quoting "Jens Hyllegaard (Soft Design A/S)":
Hi.
You are correct. There must be some hardcoded occurrences of nfs-ganesha.
I tried creating a new cluster using the ceph nfs cluster create command.
I was still unable to create an export using the management interface, still
got permission errors.
On Sun, Dec 20, 2020 at 6:56 PM Alexander E. Patrakov wrote:
> On Mon, Dec 21, 2020 at 4:57 AM Jeremy Austin wrote:
> >
> > On Sun, Dec 20, 2020 at 2:25 PM Jeremy Austin wrote:
> >
> > > Will attempt to disable compaction and report.
> > >
> >
> > Not sure I'm doing this right. In [osd] secti
We've got a PR in to fix this; we validated that it resolves the issue in our
larger clusters. We could use some help getting it moved forward, since it
seems to impact a number of users:
https://github.com/ceph/ceph/pull/38677
On Fri, Dec 11, 2020 at 9:10 AM David Orman wrote:
> No, as the number
Hi,
I want to enable the firewall on my Ceph nodes with ufw. Does anyone have
experience with performance regressions caused by it?
Also, is there any way to block the exporter ports (e.g. node exporter and the
Ceph exporter) in a Ceph cluster without a firewall?
Thanks.
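For reference, a minimal ufw sketch for the usual Ceph ports; the 192.168.0.0/24 monitoring subnet is a placeholder, adjust to your environment:

ufw allow 3300/tcp          # ceph-mon (msgr2)
ufw allow 6789/tcp          # ceph-mon (msgr1)
ufw allow 6800:7300/tcp     # OSD/MGR daemon port range
# restrict the exporters to the monitoring host(s) instead of opening them wide
ufw allow from 192.168.0.0/24 to any port 9100 proto tcp   # node exporter
ufw allow from 192.168.0.0/24 to any port 9283 proto tcp   # ceph-mgr prometheus module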
Hi Jeremy,
you might want to try RocksDB's disable_auto_compactions option for that.
To adjust RocksDB options, one should edit/insert
bluestore_rocksdb_options in ceph.conf, e.g.:
bluestore_rocksdb_options =
"disable_auto_compactions=true,compression=kNoCompression,max_write_buffer_number
Hi Alexander,
the option you provided controls BlueFS log compaction, not RocksDB
compaction, so it doesn't apply in Jeremy's case.
Thanks,
Igor
On 12/21/2020 6:55 AM, Alexander E. Patrakov wrote:
On Mon, Dec 21, 2020 at 4:57 AM Jeremy Austin wrote:
On Sun, Dec 20, 2020 at 2:25 PM Jeremy
In my experiments with Ceph so far, setting up a new cluster goes fairly
well... so long as I only use a single network.
But when I try to use separate networks, things stop functioning in various
ways.
(For example, I can "
So I thought I'd ask for pointers to any multi-network setup guide.
My
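For what it's worth, a minimal sketch of the two relevant ceph.conf options (the subnets are placeholders):

[global]
public_network  = 10.0.1.0/24    # mon/client-facing traffic
cluster_network = 10.0.2.0/24    # OSD replication and backfill traffic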
I have a cephfs secondary (non-root) data pool with unfound and degraded
objects that I have not been able to recover[1]. I created an
additional data pool and used "setfattr -n ceph.dir.layout.pool" and a
very long rsync to move the files off of the degraded pool and onto the
new pool. This
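As an illustration of that approach (filesystem, pool and path names are placeholders): add the new pool to the filesystem, point the directory layout at it, then copy the data so it gets rewritten into the new pool:

ceph osd pool create cephfs_data_new
ceph fs add_data_pool cephfs cephfs_data_new
# new files created under this directory land in the new pool;
# existing files keep their old layout until rewritten (hence the long rsync)
setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/dir1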
How do I recover from this? Is it possible to have a VM with a CephFS
mount on an OSD server?
I was having horrible problems getting my test Ceph cluster reinitialized.
All kinds of annoying things were happening,
including getting differing output from
ceph orch device ls
vs
ceph device ls
Being new-ish to Ceph, I was going nuts, wondering what kind of init options I
was
Igor,
You're a bloomin' genius, as they say.
Disabling auto compaction allowed OSDs 11 and 12 to spin up/out. The 7 down
PGs recovered; there were a few unfound objects previously, which I went ahead
and deleted, given that this is EC and revert was not an option.
HEALTH OK :)
I'm now intending to
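For anyone following along, deleting unfound objects is done per PG; a minimal sketch, where the PG id is a placeholder taken from ceph health detail:

ceph health detail                       # lists the PGs with unfound objects
ceph pg 2.1f mark_unfound_lost delete    # 'revert' is not available on EC pools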