[ceph-users] Re: Newbie Requesting Help - Please, This Is Driving Me Mad/Crazy!

2021-02-24 Thread Burkhard Linke
Hi, your whole OSD deployment is wrong. Ceph has not used a filesystem for OSDs for at least two major releases, and the existing filestore backend is deprecated. Dunno where you got those steps from... Just use ceph-volume, and preferably the LVM-based deployment. If you really want to u
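For reference, a minimal sketch of the ceph-volume LVM deployment mentioned above, assuming a spare block device (the device name /dev/sdb is a placeholder) and a bootstrap-osd keyring already in place:
$ sudo ceph-volume lvm create --data /dev/sdb
$ # or as two explicit steps:
$ sudo ceph-volume lvm prepare --data /dev/sdb
$ sudo ceph-volume lvm activate --all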

[ceph-users] Re: List number of buckets owned per user

2021-02-24 Thread Konstantin Shalygin
Or you can derive the users from bucket usage; consult the code of radosgw_usage_exporter [1]. Maybe it is enough to just start the exporter and work with the data in Grafana. Cheers, k [1] https://github.com/blemmenes/radosgw_usage_exporter > On 24 Feb 2021, at 16:08, Marcelo wrote: > > I'm trying to lis
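As a hedged illustration of pulling the same usage data directly with radosgw-admin before wiring up the exporter (the uid is a placeholder):
$ radosgw-admin usage show --uid=someuser --show-log-entries=false
$ radosgw-admin bucket stats --uid=someuser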

[ceph-users] Re: Newbie Requesting Help - Please, This Is Driving Me Mad/Crazy!

2021-02-24 Thread duluxoz
Yes, the OSD Key is in the correct folder (or, at least, I think it is). The line in the steps I did is: sudo -u ceph ceph auth get-or-create osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' -o /var/lib/ceph/osd/ceph-0/keyring This places the osd-0 key in the file 'k
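A quick way to double-check that the key on disk matches what the monitors have, assuming the default paths from the thread (a sketch, not from the original post):
$ sudo cat /var/lib/ceph/osd/ceph-0/keyring
$ sudo ceph auth get osd.0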

[ceph-users] Re: mds lost very frequently

2021-02-24 Thread Stefan Kooman
On 2/6/20 6:04 PM, Stefan Kooman wrote: Hi, After setting: ceph config set mds mds_recall_max_caps 1 (5000 before change) and ceph config set mds mds_recall_max_decay_rate 1.0 (2.5 before change) And the: ceph tell 'mds.*' injectargs '--mds_recall_max_caps 1' ceph tell 'mds.*' inj
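To verify which values actually took effect after such changes, something like the following should work (a sketch, not taken from the original post):
$ ceph config get mds mds_recall_max_caps
$ ceph config get mds mds_recall_max_decay_rate
$ ceph tell mds.* config get mds_recall_max_caps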

[ceph-users] Re: Newbie Requesting Help - Please, This Is Driving Me Mad/Crazy!

2021-02-24 Thread Frank Schilder
I'm not running octopus and I don't use the hard-core bare metal deployment method. I use ceph-volume and things work smoothly. Hence, my input might be useless. Now looking at your text, you should always include the start-up and shut-down log of the OSD. As a wild guess, did you copy the OSD
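For the start-up and shut-down logs asked for here, on a systemd-based host something along these lines is usually enough (OSD id 0 and the log path are the conventional defaults, adjust for your deployment):
$ journalctl -u ceph-osd@0 --since "1 hour ago"
$ tail -n 200 /var/log/ceph/ceph-osd.0.log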

[ceph-users] Re: ceph slow at 80% full, mds nodes lots of unused memory

2021-02-24 Thread Patrick Donnelly
Hello Simon, On Wed, Feb 24, 2021 at 7:43 AM Simon Oosthoek wrote: > > On 24/02/2021 12:40, Simon Oosthoek wrote: > > Hi > > > > we've been running our Ceph cluster for nearly 2 years now (Nautilus) > > and recently, due to a temporary situation the cluster is at 80% full. > > > > We are only usi

[ceph-users] Re: NFS Ganesha NFSv3

2021-02-24 Thread Daniel Gryniewicz
I'm not sure what to add. NFSv3 uses a random port for the server, and uses a service named portmapper so that clients can find the port of the server. Connecting to the portmapper requires a privileged container. With docker, this is done with the --privileged option. I don't know how to do
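A hedged example of what a privileged Ganesha container start might look like with plain docker (the image name and bind mount are illustrative, not a recommendation):
$ docker run -d --privileged --net=host --name ganesha \
      -v /etc/ganesha:/etc/ganesha \
      <your-ganesha-image>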

[ceph-users] Re: NFS Ganesha NFSv3

2021-02-24 Thread Daniel Gryniewicz
I've never used cephadm, sorry. When I last ran containerized Ganesha, I was using docker directly. Daniel On 9/23/20 1:58 PM, Gabriel Medve wrote: Hi Thanks for the reply. cephadm runs ceph containers automatically. How to set privileged mode in ceph container? -- El 23/9/20 a las 13:

[ceph-users] v15.2.9 Octopus released

2021-02-24 Thread David Galloway
We're happy to announce the 9th backport release in the Octopus series. We recommend that users update to this release. For detailed release notes with links and a changelog, please refer to the official blog entry at https://ceph.io/releases/v15-2-9-Octopus-released Notable Changes --- *
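For cephadm-managed clusters, checking the running versions and starting the upgrade could look roughly like this (a sketch; follow the official upgrade notes for your deployment method):
$ ceph versions
$ ceph orch upgrade start --ceph-version 15.2.9
$ ceph orch upgrade status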

[ceph-users] Re: kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay)

2021-02-24 Thread Jeff Layton
On Wed, 2021-02-24 at 16:47 +0100, Ilya Dryomov wrote: > On Wed, Feb 24, 2021 at 4:09 PM Frank Schilder wrote: > > > > Hi all, > > > > I get these log messages all the time, sometimes also directly to the > > terminal: > > > > kernel: ceph: mdsmap_decode got incorrect state(up:standby-repl

[ceph-users] Question about per-MDS journals

2021-02-24 Thread bori19960
Hi. I'm a newbie in CephFS and I have some questions about how per-MDS journals work. In Sage's paper (OSDI '06), I read that each MDS has its own journal and it lazily flushes metadata modifications to the OSD cluster. What I'm wondering is that some directory operations like rename work with mul
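For poking at a specific MDS rank's journal read-only, cephfs-journal-tool can be used; a sketch assuming a filesystem named cephfs and rank 0:
$ cephfs-journal-tool --rank=cephfs:0 journal inspect
$ cephfs-journal-tool --rank=cephfs:0 event get summary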

[ceph-users] Re: NFS Ganesha NFSv3

2021-02-24 Thread louis_zhu
Hi Daniel, can you give me more details? I have the same issue using an NFSv3 mount.
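For completeness, an NFSv3 mount against Ganesha typically looks like this on the client side (server name and export path are placeholders):
$ sudo mount -t nfs -o vers=3 ganesha-host:/export /mnt/nfs
$ rpcinfo -p ganesha-host   # check that portmapper and mountd are reachable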

[ceph-users] Re: Possible to disable check: x pool(s) have no replicas configured

2021-02-24 Thread paulmin925
I was able to disable 'x pool(s) have no replicas configured' by running the following: $ ceph health mute POOL_NO_REDUNDANCY $ ceph status cluster: id: 9692970f-b717-4050-bf58-05a603b497c1 health: HEALTH_OK (muted: POOL_NO_REDUNDANCY)
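Note that the mute can also be given a TTL and lifted again; a small sketch of the related commands:
$ ceph health mute POOL_NO_REDUNDANCY 1w   # auto-expire after one week
$ ceph health unmute POOL_NO_REDUNDANCY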

[ceph-users] Re: kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay)

2021-02-24 Thread Frank Schilder
Dear Ilya, thanks for the quick response. Good to know that it is harmless. It is very irritating, though. I did maintenance some time ago, adding lots of new disks and restarting the MDSes, and the system got a bit stressed. These messages started to run over the root console with such a frequen

[ceph-users] Newbie Requesting Help - Please, This Is Driving Me Mad/Crazy!

2021-02-24 Thread matthew
Hi Everyone, Let me apologise upfront: if this isn't the correct list to post to; if this has been answered already (and I've missed it in my searching); if this has ended up double posted; if I've in any way given (or am about to give) offence to anyone. I really need some help. I'm try

[ceph-users] Ceph 14.2.8 OSD/Pool Nearfull

2021-02-24 Thread Matt Dunavant
Hi all, We've recently run into an issue where our single Ceph RBD pool is throwing errors for nearfull OSDs. The OSDs themselves vary in PGs/%full, with a low of 64/78% and a high of 73/86%. Are there any suggestions on how to get this to balance a little more cleanly? Currently we have 360 driv
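One common way to even out PG counts is the built-in balancer in upmap mode; a hedged sketch (check the balancer status output before relying on it):
$ ceph osd df tree          # per-OSD utilisation and PG counts
$ ceph balancer mode upmap
$ ceph balancer on
$ ceph balancer status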

[ceph-users] kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay)

2021-02-24 Thread Frank Schilder
Hi all, I get these log messages all the time, sometimes also directly on the terminal: kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay) The cluster is healthy, and the MDS being complained about is actually both configured and running as a standby-replay daemon. These messages show
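To confirm how the daemons are actually running, the MDS states can be listed like this (a sketch):
$ ceph fs status
$ ceph mds stat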

[ceph-users] Re: multiple-domain for S3 on rgws with same ceph backend on one zone

2021-02-24 Thread Simon Pierre DESROSIERS
On Tue, Feb 23, 2021 at 03:07, Janne Johansson wrote: > >>> Hello, > >>> We have a functional ceph swarm with a pair of S3 rgw in front that uses the > >>> A.B.C.D domain to be accessed. > >>> > >>> Now a new client asks to have access using the domain E.C.D, but to > >>> already existing bucke
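If the requirement is only that the same RGWs answer on an additional domain, one approach that is often suggested is adding the new name to the zonegroup hostnames list; a rough sketch (domains are the placeholders from the thread, and a period commit plus an RGW restart is typically needed):
$ radosgw-admin zonegroup get > zonegroup.json
$ # edit zonegroup.json and append "E.C.D" to the "hostnames" array
$ radosgw-admin zonegroup set < zonegroup.json
$ radosgw-admin period update --commit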

[ceph-users] Re: multiple-domain for S3 on rgws with same ceph backend on one zone

2021-02-24 Thread Simon Pierre DESROSIERS
On Mon, Feb 22, 2021 at 14:50, Chris Palmer wrote: > I'm not sure that the tenant solution is what the OP wants - my reading > is that running under a different tenant allows you to have different > tenants use the same bucket and user names but still be distinct, which > wasn't what I thought

[ceph-users] Re: Question about per MDS journals

2021-02-24 Thread John Spray
On Wed, Feb 24, 2021 at 9:10 AM 조규진 wrote: > > Hi. > > I'm a newbie in CephFS and I have some questions about how per-MDS journals > work. > In Sage's paper (OSDI '06), I read that each MDS has its own journal and > it lazily flushes metadata modifications to the OSD cluster. > What I'm wondering is

[ceph-users] Re: kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay)

2021-02-24 Thread Ilya Dryomov
On Wed, Feb 24, 2021 at 4:09 PM Frank Schilder wrote: > > Hi all, > > I get these log messages all the time, sometimes also directly to the > terminal: > > kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay) > > The cluster is healthy and the MDS complaining is actually both, c

[ceph-users] Re: ceph slow at 80% full, mds nodes lots of unused memory

2021-02-24 Thread Simon Oosthoek
On 24/02/2021 12:40, Simon Oosthoek wrote: > Hi > > we've been running our Ceph cluster for nearly 2 years now (Nautilus) > and recently, due to a temporary situation the cluster is at 80% full. > > We are only using CephFS on the cluster. > > Normally, I realize we should be adding OSD nodes, b

[ceph-users] List number of buckets owned per user

2021-02-24 Thread Marcelo
Hello. I'm trying to list the number of buckets that users have for monitoring purposes, but I need to list and count the buckets per user. Is it possible to get this information somewhere else? Thanks, Marcelo
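A hedged sketch of counting buckets per user with radosgw-admin and jq (the uid is a placeholder; on large deployments this loop can be slow):
$ radosgw-admin bucket list --uid=someuser | jq length   # bucket count for one user
$ for u in $(radosgw-admin metadata list user | jq -r '.[]'); do
      echo "$u: $(radosgw-admin bucket list --uid=$u | jq length)"
  done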

[ceph-users] ceph slow at 80% full, mds nodes lots of unused memory

2021-02-24 Thread Simon Oosthoek
Hi, we've been running our Ceph cluster for nearly 2 years now (Nautilus) and recently, due to a temporary situation, the cluster is at 80% full. We are only using CephFS on the cluster. Normally, I realize, we should be adding OSD nodes, but this is a temporary situation, and I expect the cluster
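For tracking how full things really are per pool and per OSD, the usual starting points are (a sketch):
$ ceph df detail
$ ceph osd df tree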

[ceph-users] Question about per MDS journals

2021-02-24 Thread 조규진
Hi. I'm a newbie in CephFS and I have some questions about how per-MDS journals work. In Sage's paper (OSDI '06), I read that each MDS has its own journal and it lazily flushes metadata modifications to the OSD cluster. What I'm wondering is that some directory operations like rename work with multip