Hi,
your whole OSD deployment is wrong. Ceph has not used a filesystem-based OSD
backend by default for at least two major releases, and the old FileStore
backend is deprecated. Dunno where you got those steps from...
Just use ceph-volume, and preferably the LVM-based deployment. If you
really want to u
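For reference, a minimal sketch of the LVM-based path (the device name /dev/sdb is just an example, adjust to your disks):
# create an OSD on a raw device in one step (ceph-volume creates the LVs)
ceph-volume lvm create --data /dev/sdb
# or split into the two underlying steps
ceph-volume lvm prepare --data /dev/sdb
ceph-volume lvm activate --all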
Or you can get this from the per-user bucket usage; have a look at the code of
radosgw_usage_exporter [1]. Maybe it is enough to just start the exporter and
work with the data in Grafana.
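If a plain count per user is enough, a rough sketch with radosgw-admin and jq (assuming both are available on an admin node; untested):
# iterate over all RGW users and count each user's buckets
for u in $(radosgw-admin metadata list user | jq -r '.[]'); do
    echo "$u: $(radosgw-admin bucket list --uid="$u" | jq 'length') buckets"
done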
Cheers,
k
[1] https://github.com/blemmenes/radosgw_usage_exporter
> On 24 Feb 2021, at 16:08, Marcelo wrote:
>
> I'm trying to list the number of buckets that users have for monitoring purposes
Yes, the OSD key is in the correct folder (or at least I think it is).
The line from the steps I followed is:
sudo -u ceph ceph auth get-or-create osd.0 osd 'allow *' mon 'allow
profile osd' mgr 'allow profile osd' -o /var/lib/ceph/osd/ceph-0/keyring
This places the osd.0 key in the file 'keyring' in that directory.
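If you want to double-check that the key on disk matches what the cluster has registered, you can compare the two (paths as in the command above):
ceph auth get osd.0
cat /var/lib/ceph/osd/ceph-0/keyring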
On 2/6/20 6:04 PM, Stefan Kooman wrote:
Hi,
After setting:
ceph config set mds mds_recall_max_caps 1
(5000 before change)
and
ceph config set mds mds_recall_max_decay_rate 1.0
(2.5 before change)
And then:
ceph tell 'mds.*' injectargs '--mds_recall_max_caps 1'
ceph tell 'mds.*' injectargs '--mds_recall_max_decay_rate 1.0'
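To confirm that the running daemons actually picked the new values up, something like this should work (mds.<name> is a placeholder; the second command has to be run on the host where that MDS runs):
ceph config dump | grep mds_recall
ceph daemon mds.<name> config show | grep mds_recall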
I'm not running Octopus and I don't use the hard-core bare-metal deployment
method. I use ceph-volume and things work smoothly. Hence, my input might be
useless.
Now looking at your text, you should always include the start-up and shut-down
log of the OSD. As a wild guess, did you copy the OSD
Hello Simon,
On Wed, Feb 24, 2021 at 7:43 AM Simon Oosthoek wrote:
>
> On 24/02/2021 12:40, Simon Oosthoek wrote:
> > Hi
> >
> > we've been running our Ceph cluster for nearly 2 years now (Nautilus)
> > and recently, due to a temporary situation the cluster is at 80% full.
> >
> > We are only using CephFS on the cluster.
I'm not sure what to add. NFSv3 uses a random port for the server, and
uses a service named portmapper so that clients can find the port of the
server. Connecting to the portmapper requires a privileged container.
With Docker, this is done with the --privileged option. I don't know
how to do that with cephadm.
I've never used cephadm, sorry. When I last ran containerized Ganesha,
I was using docker directly.
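For what it's worth, it was roughly something like the sketch below (image name, paths and ports are placeholders from memory, not anything cephadm-specific):
docker run -d --name ganesha --privileged \
    -v /etc/ganesha:/etc/ganesha:ro \
    -v /etc/ceph:/etc/ceph:ro \
    -p 2049:2049 -p 111:111 -p 111:111/udp \
    <nfs-ganesha-image>
The --privileged flag is the relevant bit for the portmapper issue described above.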
Daniel
On 9/23/20 1:58 PM, Gabriel Medve wrote:
Hi
Thanks for the reply.
cephadm runs the Ceph containers automatically. How do I set privileged mode
in a Ceph container?
--
On 23/9/20 at 13:
We're happy to announce the 9th backport release in the Octopus series.
We recommend that all users update to this release. For detailed release
notes with links and a changelog, please refer to the official blog entry at
https://ceph.io/releases/v15-2-9-Octopus-released
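For clusters already managed by cephadm, the upgrade can be started with something like the following (a sketch, not taken from the blog post):
ceph orch upgrade start --ceph-version 15.2.9
ceph orch upgrade status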
Notable Changes
---
*
On Wed, 2021-02-24 at 16:47 +0100, Ilya Dryomov wrote:
> On Wed, Feb 24, 2021 at 4:09 PM Frank Schilder wrote:
> >
> > Hi all,
> >
> > I get these log messages all the time, sometimes also directly to the
> > terminal:
> >
> > kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay)
Hi.
I'm a newbie in CephFS and I have some questions about how per-MDS journals
work.
In Sage's paper (OSDI '06), I read that each MDS has its own journal and
lazily flushes metadata modifications to the OSD cluster.
What I'm wondering is that some directory operations like rename work with
mul
Hi Daniel,
Can you give me more details? I have the same issue using an NFSv3 mount.
I was able to disable 'x pool(s) have no replicas configured' by running the
following:
$ ceph health mute POOL_NO_REDUNDANCY
$ ceph status
  cluster:
    id:     9692970f-b717-4050-bf58-05a603b497c1
    health: HEALTH_OK
            (muted: POOL_NO_REDUNDANCY)
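For what it's worth, the mute can also be given a TTL (e.g. a week) or lifted again later:
ceph health mute POOL_NO_REDUNDANCY 1w
ceph health unmute POOL_NO_REDUNDANCY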
Dear Ilya,
thanks for the quick response. Good to know that it is harmless.
It is very irritating, though. I did some maintenance some time ago, adding lots of
new disks and restarting the MDSes, and the system got a bit stressed. These
messages started to run over the root console with such a frequen
Hi Everyone,
Let me apologise upfront:
If this isn't the correct list to post to
If this has been answered already (& I've missed it in my searching)
If this has ended up double-posted
If I've in any way given (or am about to give) offence to anyone
I really need some help.
I'm try
Hi all,
We've recently run into an issue where our single Ceph RBD pool is throwing
errors for nearfull OSDs. The OSDs themselves vary in PGs/%full, with a low of
64/78% and a high of 73/86%. Are there any suggestions on how to get this to
balance a little more cleanly? Currently we have 360 driv
Hi all,
I get these log messages all the time, sometimes also directly to the terminal:
kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay)
The cluster is healthy, and the MDS that is complaining is in fact both configured
and running as a standby-replay daemon. These messages show
On Tue, Feb 23, 2021 at 03:07, Janne Johansson wrote:
> >>> Hello,
> >>> We have a functional Ceph swarm with a pair of S3 RGWs in front that is
> >>> accessed via the A.B.C.D domain.
> >>>
> >>> Now a new client asks to have access using the domain E.C.D, but to
> >>> already existing bucke
On Mon, Feb 22, 2021 at 14:50, Chris Palmer wrote:
> I'm not sure that the tenant solution is what the OP wants - my reading
> is that running under a different tenant allows you to have different
> tenants use the same bucket and user names but still be distinct, which
> wasn't what I thought
On Wed, Feb 24, 2021 at 9:10 AM 조규진 wrote:
>
> Hi.
>
> I'm a newbie in CephFS and I have some questions about how per-MDS journals
> work.
> In Sage's paper (OSDI '06), I read that each MDS has its own journal and
> it lazily flushes metadata modifications to the OSD cluster.
> What I'm wondering is
On Wed, Feb 24, 2021 at 4:09 PM Frank Schilder wrote:
>
> Hi all,
>
> I get these log messages all the time, sometimes also directly to the
> terminal:
>
> kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay)
>
> The cluster is healthy, and the MDS that is complaining is in fact both configured
> and running as a standby-replay daemon.
On 24/02/2021 12:40, Simon Oosthoek wrote:
> Hi
>
> we've been running our Ceph cluster for nearly 2 years now (Nautilus)
> and recently, due to a temporary situation the cluster is at 80% full.
>
> We are only using CephFS on the cluster.
>
> Normally, I realize we should be adding OSD nodes, but this is a
> temporary situation, and I expect the cluster
Hello.
I'm trying to list the number of buckets that users have for monitoring
purposes, but I need to list and count the number of buckets per user. Is
it possible to get this information somewhere else?
Thanks, Marcelo
Hi
we've been running our Ceph cluster for nearly 2 years now (Nautilus)
and recently, due to a temporary situation, the cluster is at 80% full.
We are only using CephFS on the cluster.
Normally, I realize we should be adding OSD nodes, but this is a
temporary situation, and I expect the cluster
Hi.
I'm a newbie in CephFS and I have some questions about how per-MDS journals
work.
In Sage's paper (OSDI '06), I read that each MDS has its own journal and
it lazily flushes metadata modifications to the OSD cluster.
What I'm wondering is that some directory operations like rename work with
multip