Both of the volumes mentioned above are showing a status of "error_deleting" in 
OpenStack. I'm probably going to have to remove them with openstack volume 
delete --force.
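
For the record, the cleanup I have in mind looks roughly like this (admin credentials assumed; the volume ID below is a placeholder):

    # find anything stuck in error_deleting
    openstack volume list --status error_deleting

    # force-delete the stuck volumes; --force skips the normal state checks
    openstack volume delete --force <volume-id>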

I'm guessing this might be a regression in DiffIterate that doesn't handle 
volumes/RBD images in an inconsistent state.
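
If anyone wants to poke at this outside the mgr: my understanding (an assumption on my part) is that rbd du and rbd diff exercise the same DiffIterate path, so something like the following against the affected image might reproduce it (pool/image names are placeholders):

    # rbd du walks the image with diff-iterate to compute per-image usage
    rbd du <pool>/<image>

    # rbd diff iterates changed extents the same way
    rbd diff <pool>/<image>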

It does crash the mgr:

Jul 25 09:13:15 ceph00.cecnet.gmu.edu systemd-coredump[3359647]: [🡕] Process 2293091 (ceph-mgr) of user 167 dumped core.
Jul 25 09:13:15 ceph00.cecnet.gmu.edu podman[3359656]: 2024-07-25 09:13:15.988246154 -0400 EDT m=+0.021510462 container died 963fb5be9e7008161a942a3d26fa629b570cb94f9cd615aa7d92293ef327e0df (image=quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906, na>
Jul 25 09:13:16 ceph00.cecnet.gmu.edu podman[3359656]: 2024-07-25 09:13:16.004247002 -0400 EDT m=+0.037511304 container remove 963fb5be9e7008161a942a3d26fa629b570cb94f9cd615aa7d92293ef327e0df (image=quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906, >
Jul 25 09:13:16 ceph00.cecnet.gmu.edu systemd[1]: ceph-4d3fb6ba-2c56-11ec-bd82-90b11c418084@mgr.ceph00.wwmtep.service: Main process exited, code=exited, status=134/n/a
Jul 25 09:13:16 ceph00.cecnet.gmu.edu systemd[1]: ceph-4d3fb6ba-2c56-11ec-bd82-90b11c418084@mgr.ceph00.wwmtep.service: Failed with result 'exit-code'.
Jul 25 09:13:16 ceph00.cecnet.gmu.edu systemd[1]: ceph-4d3fb6ba-2c56-11ec-bd82-90b11c418084@mgr.ceph00.wwmtep.service: Consumed 18min 11.892s CPU time.

I can probably live with this error while gathering any information needed to 
create a fix.
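
Since systemd-coredump caught the crash, pulling a backtrace for a tracker ticket should be something like the sketch below (PID taken from the log above; debuginfo matching the container's ceph build is assumed to be available):

    # list recent ceph-mgr dumps known to systemd-coredump
    coredumpctl list ceph-mgr

    # show metadata and, with matching debug symbols, a backtrace
    coredumpctl info 2293091

    # or load the dump straight into gdb
    coredumpctl gdb 2293091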