[ceph-users] Too strong permission for RGW in OpenStack

2022-10-18 Thread Michal Strnad
Hi. We have a Ceph cluster with a lot of users who use the S3 and RBD protocols. Now we need to give access to one user group with OpenStack, so they run RGW on their side, but we have to set "ceph caps" for this RGW. The documentation for OpenStack gives the following: ceph auth get-or-create client.ra
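A narrower cap profile than the broad one in the OpenStack docs is possible by restricting the OSD caps to the RGW pools. A sketch, where the client name and pool names are assumptions (a default single-zone setup) and must be adjusted to the actual zone's pools:

```shell
# Hypothetical client name; pool names assume a default zone layout.
ceph auth get-or-create client.rgw.openstack \
  mon 'allow rw' \
  osd 'allow rwx pool=.rgw.root,
       allow rwx pool=default.rgw.log,
       allow rwx pool=default.rgw.control,
       allow rwx pool=default.rgw.meta,
       allow rwx pool=default.rgw.buckets.index,
       allow rwx pool=default.rgw.buckets.data'
```

This keeps the external RGW from touching unrelated RBD or CephFS pools on the shared cluster.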

[ceph-users] Re: Cephadm - Adding host to migrated cluster

2022-10-18 Thread Eugen Block
That doesn't sound right. I had a single node cluster deployed with 16.2.5 and tried to reproduce. I only installed cephadm and copied the cephadm public key to the new node and added it to the cluster via dashboard. Then I added some disks to it and they were successfully deployed as OSDs.
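The steps Eugen describes can be sketched as the usual cephadm host-add sequence; hostname and IP below are placeholders:

```shell
# On an admin node: distribute the cluster's SSH key, add the host,
# then confirm its disks are visible to the orchestrator.
ssh-copy-id -f -i /etc/ceph/ceph.pub root@new-node
ceph orch host add new-node 192.168.1.20
ceph orch device ls new-node
```

Once the devices show up as available, an existing OSD service spec (or `ceph orch apply osd --all-available-devices`) will deploy OSDs on them.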

[ceph-users] Balancing MDS services on hosts

2022-10-18 Thread s . paulusma
I deployed a small cluster for testing/deploying CephFS with cephadm. I was wondering if it's possible to balance the active and standby daemons on hosts. The service configuration:
service_type: mds
service_id: test-fs
service_name: mds.test-fs
placement:
  count: 4
  hosts:
  - host1.example.com
  - host2

[ceph-users] Balancing MDS services on multiple hosts

2022-10-18 Thread Sake Paulusma
Another shot, my company mail server did something special to the first one. I deployed a small cluster for testing/deploying CephFS with cephadm. I was wondering if it's possible to balance the active and standby daemons on hosts. The service configuration: service_type: mds service_id: test-fs service_name: mds
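One way to spread MDS daemons evenly is a `count_per_host` placement instead of a bare `count`, so cephadm cannot stack two daemons on one host. A sketch of the spec, with hypothetical hostnames:

```yaml
service_type: mds
service_id: test-fs
placement:
  count_per_host: 1      # assumption: one MDS daemon per listed host
  hosts:
    - host1.example.com
    - host2.example.com
    - host3.example.com
    - host4.example.com
```

Which daemons become active vs. standby is then controlled by the filesystem itself, e.g. `ceph fs set test-fs max_mds 2` for two actives with the remaining daemons as standbys.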

[ceph-users] Re: cephadm error: add-repo does not have a release file

2022-10-18 Thread Murilo Morais
AFAIK there are no repositories for Ubuntu 22.04 yet, but if I'm not mistaken there are packages compiled by Canonical for Ubuntu 22.04; try running apt install ceph-common. On Mon, Oct 17, 2022 at 8:30 PM, Na Na wrote: > > > I followed
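The suggestion boils down to using the distro's own build rather than download.ceph.com:

```shell
# Install the Canonical-built client packages on Ubuntu 22.04
sudo apt update
sudo apt install -y ceph-common
ceph --version    # shows which Ceph release Canonical shipped
```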

[ceph-users] Re-install host OS on Ceph OSD node

2022-10-18 Thread Geoffrey Rhodes
Good day. I have a Ceph OSD node with an OS drive that has errors and may soon fail. There are 8 x 18TB drives installed in this node. The journals for each drive are co-located on each drive. I'd like to replace the failing OS drive, re-install the OS (same node name and IP addressing), push the
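Since the OSD data and journals live entirely on the 8 data drives, they can normally be reactivated after an OS reinstall. A sketch, assuming a ceph-volume/LVM deployment and that /etc/ceph (conf and keyrings) is restored first:

```shell
# Before taking the node down: stop rebalancing while it is offline
ceph osd set noout

# ... reinstall the OS, install the matching ceph packages,
# restore /etc/ceph and the bootstrap-osd keyring ...

# Rediscover and start the OSDs from the LVM metadata on the data drives
ceph-volume lvm activate --all

ceph osd unset noout
```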

[ceph-users] Re: monitoring drives

2022-10-18 Thread Kai Stian Olstad
On 17.10.2022 12:52, Ernesto Puerta wrote: - Ceph already exposes SMART-based health-checks, metrics and alerts from the devicehealth/diskprediction modules. I find this kind of high-level monitoring more di

[ceph-users] Announcing go-ceph v0.18.0

2022-10-18 Thread John Mulligan
We are happy to announce another release of the go-ceph API library. This is a regular release following our every-two-months release cadence. https://github.com/ceph/go-ceph/releases/tag/v0.18.0 Changes include additions to the nfs admin package. More details are available at the link above. Th

[ceph-users] Re: Cephadm migration

2022-10-18 Thread Jean-Marc FONTANA
Hello Adam, Just tried the command ceph orch redeploy (without "--image") and it works; the rgw image is the right version. The command we used is $ sudo ceph orch daemon redeploy rgw.testrgw.svtcephrgwv1.zlfzpx quay.io/ceph/ceph:v16.2.10 The old rgw service is still alive, but it seems that i
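If the old daemon lingers after the redeploy, the orchestrator can remove it explicitly. A sketch; the daemon name below is hypothetical and should be taken from the `ceph orch ps` output:

```shell
# List rgw daemons to find the stale one's full name
ceph orch ps --daemon-type rgw

# Remove the leftover daemon by name (hypothetical name shown)
ceph orch daemon rm rgw.testrgw.svtcephrgwv1.oldname
```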

[ceph-users] Noob install: "rbd pool init" stuck

2022-10-18 Thread Renato Callado Borges
Dear all, I am deploying a Ceph system for the first time. I have 3 servers where I intend to install 1 manager, 1 mon and 12 OSDs in each. Since they are used in production already, I selected a single machine to begin deployment, but got stuck when creating rbd pools. The host OS is Cen

[ceph-users] Re: Too strong permission for RGW in OpenStack

2022-10-18 Thread Casey Bodley
On Tue, Oct 18, 2022 at 4:01 AM Michal Strnad wrote: > > Hi. > > We have ceph cluster with a lot of users who use S3 and RBD protocols. > Now we need to give access to one use group with OpenStack, so they run > RGW on their side, but we have to set "ceph caps" for this RGW. In the > documentation

[ceph-users] Quincy 22.04/Jammy packages

2022-10-18 Thread Reed Dier
Curious if there is a timeline for when quincy will start getting packages for Ubuntu Jammy/22.04. It looks like quincy started getting builds for EL9 with 17.2.4, and now with the 17.2.5 there are still only bullseye and focal dists available. Canonical is publishing a 17.2.0 build in jammy-upd

[ceph-users] Re: Noob install: "rbd pool init" stuck

2022-10-18 Thread Eugen Block
Hi, the command doesn't return because your PGs are inactive. It looks like you're trying to use the default replicated_rule but it can't find a suitable placement. What does your 'ceph osd tree' look like? And also paste your ruleset ('ceph osd crush rule dump replicated_rule'). Regardin
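The diagnosis Eugen asks for, plus the usual workaround on a single-node cluster (the default replicated rule's failure domain is `host`, so with one host no placement satisfies it), can be sketched as:

```shell
# Diagnosis: topology, the rule in use, and which PGs are stuck
ceph osd tree
ceph osd crush rule dump replicated_rule
ceph pg dump_stuck inactive

# Single-node workaround: a rule with failure domain "osd",
# then point the pool at it (pool name assumed to be "rbd")
ceph osd crush rule create-replicated replicated_osd default osd
ceph osd pool set rbd crush_rule replicated_osd
```

Once PGs go active+clean, the stuck `rbd pool init` should complete. For production, revert to a host-level failure domain after the other nodes join.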

[ceph-users] Slow OSD heartbeats message

2022-10-18 Thread Frank Schilder
Hi all, I have flaky transceivers that sometimes lead to these messages: Slow OSD heartbeats on front from osd.412 [CON-161-A1,ContainerSquare,Risoe] to osd.706 [CON-161-A1,ContainerSquare,Risoe] 12913.173 msec After upgrade to octopus, this message now tries to be more helpful than before. Un
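Since Octopus, the raw per-peer ping times behind these heartbeat warnings can be inspected through the OSD admin socket, which helps pin down which link the flaky transceiver is on. A sketch, run on the host carrying osd.412:

```shell
# Entries exceeding the default 1 s threshold
ceph daemon osd.412 dump_osd_network

# Pass a threshold of 0 to dump all recorded peer ping times
ceph daemon osd.412 dump_osd_network 0
```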

[ceph-users] Re: Slow monitor responses for rbd ls etc.

2022-10-18 Thread Gregory Farnum
On Fri, Oct 7, 2022 at 7:53 AM Sven Barczyk wrote: > > Hello, > > > > we are encountering a strange behavior on our Ceph. (All Ubuntu 20 / All > mons Quincy 17.2.4 / Oldest OSD Quincy 17.2.0 ) > Administrative commands like rbd ls or create are so slow, that libvirtd is > running into timeouts and

[ceph-users] Ceph User + Dev Monthly Meeting coming up this Thursday

2022-10-18 Thread Laura Flores
Hello Ceph users, The *Ceph User + Dev Monthly Meeting* is happening this *Thursday, October 20th @ 2:00 pm UTC.* The meeting will be on this link: https://meet.jit.si/ceph-user-dev-monthly. Please feel free to add any topics you'd like to discuss to the monthly minutes etherpad! https://pad.ceph.

[ceph-users] Re: Getting started with cephfs-top, how to install

2022-10-18 Thread Xiubo Li
Hi Zach, On 18/10/2022 04:20, Zach Heise (SSCC) wrote: I'd like to see what CephFS clients are doing the most IO. According to this page: https://docs.ceph.com/en/quincy/cephfs/cephfs-top/ - cephfs-top is the simplest way to do this? I enabled 'ceph mgr module enable stats' today, but I'm a

[ceph-users] Quincy - Support with NFS Ganesha on Alma

2022-10-18 Thread Lokendra Rathour
Hi, I was trying to get NFS Ganesha installed on Alma 8.5 with the Ceph Quincy release and am getting errors about some packages not being available. For example 'librgw': nothing provides librgw.so.2()(64bit) needed by nfs-ganesha-rgw-3.5-3.el8.x86_64; nothing provides libcephfs.so.2()(64bit) needed by

[ceph-users] Re: Getting started with cephfs-top, how to install

2022-10-18 Thread Jos Collin
How many clients do you have? If you have several clients and issues viewing them, please check out the patch at [1]. [1] https://github.com/ceph/ceph/pull/48090 On 18/10/22 01:50, Zach Heise (SSCC) wrote: I'd like to see what CephFS clients are doing the most IO. According to this page: https://do
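The install-and-run path for cephfs-top is short; a sketch for an EL-family system (the package name may differ on other distros):

```shell
# cephfs-top consumes performance counters from the mgr "stats" module
ceph mgr module enable stats

# The tool ships in its own package, separate from ceph-common
sudo dnf install -y cephfs-top

# Run against the default cluster with the default client.fstop user
cephfs-top
```

If the default `client.fstop` user doesn't exist yet, create it first with `ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r'` as described in the cephfs-top docs.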

[ceph-users] Re: Quincy - Support with NFS Ganesha on Alma

2022-10-18 Thread Tahder Xunil
Hi Lokendra, It seems AlmaLinux doesn't ship the Quincy version. If you do 'rpm -qa | grep ceph', you will see that only Nautilus, Octopus and Pacific are supported for AlmaLinux 8.x, i.e. dnf install centos-release-cep
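One alternative to the distro's centos-release-ceph packages is pointing dnf at the upstream el8 Quincy repository, which provides the librgw/libcephfs versions Ganesha needs. A sketch, assuming matching nfs-ganesha builds are available for that combination:

```shell
# Add the upstream Quincy el8 repo (provides librgw2, libcephfs2, etc.)
sudo dnf install -y \
  https://download.ceph.com/rpm-quincy/el8/noarch/ceph-release-1-1.el8.noarch.rpm

# Then retry the Ganesha install against the newer libraries
sudo dnf install -y nfs-ganesha-ceph nfs-ganesha-rgw
```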