[ceph-users] Audit logs of creating RBD volumes and creating RGW buckets

2023-01-26 Thread Jinhao Hu
Hi, Is the creation of RBD volumes and RGW buckets audited? If so, what do the audit logs look like? Is there any documentation about it? I tried to find the related audit logs in the "/var/log/ceph/ceph.audit.log" file but didn't find any. Thanks, Jinhao
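
As a rough illustration (the image/bucket names are placeholders, and the client.rgw config section is an assumption about the deployment): the mon audit log only records commands that pass through the monitors, so a first check is simply to grep it, while RGW keeps a separate ops log behind its own switches.

  # look for a specific image or bucket name in the mon audit log
  grep -Ei 'myimage|mybucket' /var/log/ceph/ceph.audit.log

  # RGW bucket operations go to RGW's own ops log, which has its own options
  ceph config set client.rgw rgw_enable_ops_log true
  ceph config set client.rgw rgw_ops_log_rados true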

[ceph-users] Re: OSDs are not utilized evenly

2023-01-26 Thread Jeremy Austin
Thanks Stefan. I believe pgremapper won't work for me until pending operations are finished or canceled, but I'll keep on with it, e.g.: WARNING: pg 20.1a: conflicting mapping 1->6 found when trying to map 4->6
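
For reference, the upmap exceptions that pgremapper manipulates can also be inspected and set by hand with Ceph's own commands; a minimal sketch, with the PG id taken from the warning above and the OSD ids only as placeholders:

  # show existing upmap exceptions in the OSD map
  ceph osd dump | grep pg_upmap

  # manually map pg 20.1a from osd.4 to osd.6 (requires upmap-capable clients)
  ceph osd pg-upmap-items 20.1a 4 6

  # remove the exception again once backfill has settled
  ceph osd rm-pg-upmap-items 20.1a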

[ceph-users] Re: ceph 16.2.10 cluster down

2023-01-26 Thread Jens Galsgaard
I got the monitor started with cephadm and could run ceph-mon manually to get the mon service running again. Thanks for the hints along the way, Robert 😊 //Jens -Original message- From: Robert Sander Sent: Thursday, January 26, 2023 4:23 PM To: ceph-users@ceph.io Subject: [ceph-user

[ceph-users] Re: OSDs are not utilized evenly

2023-01-26 Thread Stefan Kooman
On 1/26/23 18:47, Jeremy Austin wrote: Are there alternatives to TheJJ balancer? I have a (temporary) rebalance problem, and that code chokes[1]. https://github.com/digitalocean/pgremapper Gr. Stefan

[ceph-users] Re: OSDs are not utilized evenly

2023-01-26 Thread Jeremy Austin
Are there alternatives to TheJJ balancer? I have a (temporary) rebalance problem, and that code chokes[1]. Essentially, I have a few pgs in remapped+backfill_toofull, but plenty of space in the parent's parent bucket(s). [1] https://github.com/TheJJ/ceph-balancer/issues/23
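
A hedged sketch of the usual first checks for remapped+backfill_toofull; the ratio below is only an example (the default backfillfull ratio is 0.90):

  # per-OSD utilization rolled up by CRUSH bucket
  ceph osd df tree

  # which PGs are stuck and why
  ceph pg dump_stuck

  # temporarily raise the backfillfull threshold so backfill can proceed
  ceph osd set-backfillfull-ratio 0.92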

[ceph-users] Re: Ceph rbd clients surrender exclusive lock in critical situation

2023-01-26 Thread Marc
> Hi all, we are observing a problem on a libvirt virtualisation cluster that might come from ceph rbd clients. Something went wrong during execution of a live-migration operation and as a result we have two instances of the same VM running on 2 different hosts, the source- and
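
To see which client currently holds the watch/exclusive lock on an image in a situation like this, something along these lines can help (pool and image names are placeholders):

  # list watchers (and, with exclusive-lock enabled, the lock owner)
  rbd status libvirt-pool/vm-disk-1

  # show the lock holder's client id and address
  rbd lock ls libvirt-pool/vm-disk-1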

[ceph-users] Re: Octopus mgr doesn't resume after boot

2023-01-26 Thread Renata Callado Borges
Hi all! I want to record in this thread the debugging and solution of this problem, for future reference. Thanks to Murilo Morais, who did all the debugging! The issue happened because when the machine rebooted, it applied a new sshd configuration that prevented root ssh connections. Specific
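
For reference, the sshd setting involved, assuming a cephadm-managed cluster (cephadm connects as root over ssh by default), so a hardened /etc/ssh/sshd_config needs to allow at least key-based root logins; exact policy is site-specific:

  # /etc/ssh/sshd_config
  PermitRootLogin prohibit-password   # root allowed, but only with keys
  # PermitRootLogin no                # this is the setting that locks the orchestrator out

followed by reloading sshd (e.g. systemctl reload sshd).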

[ceph-users] Re: cephadm upgrade 16.2.10 to 16.2.11: osds crash and get stuck restarting

2023-01-26 Thread Zakhar Kirpichenko
Hi Konstantin, Many thanks for your response! That is the funny part: the logs on both hosts do not indicate that anything happened to any devices at all, whether related to the OSDs which failed to start or otherwise. The only useful message was from the OSD debug logs: "debug -3> 2023-01-25T2
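
When OSDs crash-loop like this, the mgr crash module usually records a backtrace even if the journald output is thin; the crash id below is a placeholder:

  # list crashes not yet acknowledged
  ceph crash ls-new

  # full metadata and stack trace for a specific crash
  ceph crash info <crash-id>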

[ceph-users] Re: ceph 16.2.10 cluster down

2023-01-26 Thread Robert Sander
Hi Jens, On 26.01.23 16:17, Jens Galsgaard wrote: After removing the dead monitors with the monmaptool the mon container has vanished from podman. So this somehow made things worse. You have not mentioned that you are running Ceph in containers. The procedure to repair the MON map may look
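
A heavily abbreviated sketch of what the monmap surgery generally looks like in a cephadm/podman deployment; the mon names are placeholders, the mon must be stopped, and the mon store should be backed up first:

  # enter the surviving mon's container environment
  cephadm shell --name mon.host1

  # extract the current monmap from the (stopped) mon's store
  ceph-mon -i host1 --extract-monmap /tmp/monmap

  # inspect it and drop the dead monitors
  monmaptool --print /tmp/monmap
  monmaptool --rm deadmon1 /tmp/monmap
  monmaptool --rm deadmon2 /tmp/monmap

  # inject the trimmed map back and start the mon again
  ceph-mon -i host1 --inject-monmap /tmp/monmap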

[ceph-users] Cannot delete images in rbd_trash

2023-01-26 Thread Nikhil Shah
Hello, We have a couple of RBD images in a pool that cannot be deleted. The user attempted to delete these volumes while we were in the middle of a ceph minor version upgrade (where ceph processes restart). I suspect that during one of the service restarts (probably monitor?), th
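
For completeness, the trash commands involved; pool name and image id are placeholders, --force may be needed if the images sit behind an unexpired deferment period, and snapshots or linked clones can also block removal:

  # list trashed images with their ids
  rbd trash ls --long mypool

  # remove one by id; --force overrides a deferment period that has not expired
  rbd trash rm --force mypool/<image-id>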

[ceph-users] Re: ceph 16.2.10 cluster down

2023-01-26 Thread Jens Galsgaard
Hi Robert, After removing the dead monitors with the monmaptool, the mon container has vanished from podman. So this somehow made things worse. Is it possible to create and add new monitors? Re-bootstrap the cluster, for lack of a better term? //Jens -Original message- From: Robert
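
Once at least one mon has quorum again and a mgr is up, additional monitors can be re-deployed through the orchestrator instead of re-bootstrapping; hostnames and IPs below are placeholders:

  # let cephadm place mons on a fixed set of hosts
  ceph orch apply mon --placement="host1,host2,host3"

  # or add a single mon daemon explicitly
  ceph orch daemon add mon host2:10.0.0.2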

[ceph-users] January Ceph Science Virtual User Group

2023-01-26 Thread Kevin Hrpcek
Hey all, We will be having a Ceph science/research/big cluster call on Tuesday January 31st. If anyone wants to discuss something specific they can add it to the pad linked below. If you have questions or comments you can contact me. This is an informal open call of community members mostly

[ceph-users] Re: ceph 16.2.10 cluster down

2023-01-26 Thread Robert Sander
Hi, On 26.01.23 12:46, Jens Galsgaard wrote: Setup is: 3 hosts with 12 disks each (osd/mon) and 3 VMs with mon/mds/mgr. The VMs are unavailable at the moment and one of the hosts is online with osd/mon running. You have only one out of six MONs running. This MON is unable to form a quorum.
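
Without quorum, ceph -s just hangs, but the surviving mon can still be queried directly over its admin socket; a minimal sketch, with the mon name as a placeholder:

  # drop into the running mon's container
  cephadm shell --name mon.host1

  # query the daemon itself; works without quorum and shows rank, state and the monmap it holds
  ceph daemon mon.host1 mon_status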

[ceph-users] ceph 16.2.10 cluster down

2023-01-26 Thread Jens Galsgaard
Hello, I'm currently investigating a downed ceph cluster that I cannot communicate with. Setup is: 3 hosts with 12 disks each (osd/mon) and 3 VMs with mon/mds/mgr. The VMs are unavailable at the moment and one of the hosts is online with osd/mon running. When issuing the command ceph -s, nothing

[ceph-users] Re: Debian update to 16.2.11-1~bpo11+1 failing

2023-01-26 Thread Matthias Aebi
I can only confirm that. The file https://download.ceph.com/debian-pacific/pool/main/c/ceph/python3-rados_16.2.11-1~bpo11+1_amd64.deb is clearly missing from the ceph download server, which makes it impossible to install the upgrade on Debian. And as the previous 16.2.10 package definition has bee

[ceph-users] Re: Debian update to 16.2.11-1~bpo11+1 failing

2023-01-26 Thread Luke Hall
There are definitely at least one or two 16.2.11-1~bpo11+1_amd64.deb packages missing, actually; python3-rados, for example. On 26/01/2023 11:02, Luke Hall wrote: So it looks as though the python packages are in the pool OK, e.g. https://download.ceph.com/debian-pacific/pool/main/c/ceph/python3-cephf

[ceph-users] Re: Debian update to 16.2.11-1~bpo11+1 failing

2023-01-26 Thread Luke Hall
So it looks as though the python packages are in the pool OK, e.g. https://download.ceph.com/debian-pacific/pool/main/c/ceph/python3-cephfs_16.2.11-1~bpo11%2B1_amd64.deb and https://download.ceph.com/debian-pacific/pool/main/c/ceph/python3-rados_16.2.11-1~bpo10%2B1_amd64.deb, but apt is not seeing them
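
A quick client-side cross-check of what apt sees versus what is actually in the pool (package name and URL are the ones discussed in this thread):

  # what apt believes is available vs. installed
  apt-cache policy python3-rados

  # does the file referenced by the Packages index actually exist on the server?
  curl -sI https://download.ceph.com/debian-pacific/pool/main/c/ceph/python3-rados_16.2.11-1~bpo11+1_amd64.deb | head -n 1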

[ceph-users] Re: cephadm upgrade 16.2.10 to 16.2.11: osds crash and get stuck restarting

2023-01-26 Thread Stefan Kooman
On 1/26/23 02:33, Zakhar Kirpichenko wrote: Hi, Attempted to upgrade 16.2.10 to 16.2.11, 2 OSDs out of many started crashing in a loop on the very 1st host: I just upgraded a test cluster to 16.2.11 and did not observe this behavior. It all went smoothly (thx devs!). Just to add an upgrade

[ceph-users] Debian update to 16.2.11-1~bpo11+1 failing

2023-01-26 Thread Luke Hall
Hi, Trying to dist-upgrade an osd server this morning and lots of necessary packages have been removed! Start-Date: 2023-01-26 10:04:57 Commandline: apt dist-upgrade Install: linux-image-5.10.0-21-amd64:amd64 (5.10.162-1, automatic) Upgrade: librados2:amd64 (16.2.10-1~bpo11+1, 16.2.11-1~bpo11
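
Until the archive is consistent again, it can help to dry-run the upgrade and pin the ceph packages; a hedged example (the package list is illustrative, not exhaustive):

  # simulate only: show what would be removed/upgraded without changing anything
  apt-get -s dist-upgrade

  # hold the ceph packages at their current version until the missing .debs reappear
  apt-mark hold librados2 python3-rados ceph-osd

  # release the hold later
  apt-mark unhold librados2 python3-rados ceph-osd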