It appears that if the client or the OpenStack cinder service is in the
same network as Ceph, it works.
From the OpenStack network it fails, but only on this particular pool! It
was working well before the upgrade, and no changes have been made on
the network side.
Very strange issue. I checked the Ceph re
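One way to compare the two network paths is to check that the monitor and
OSD ports are reachable from both networks; note that Nautilus also listens
on the msgr2 port 3300 in addition to the legacy 6789. A minimal sketch
(mon-host and osd-host are placeholders, not names from this thread):

```shell
# List the OSD addresses a client must be able to reach
ceph osd dump | grep '^osd'

# From the OpenStack network, probe the monitor ports
# (6789 = msgr v1; 3300 = msgr v2, new in Nautilus)
nc -zv mon-host 6789
nc -zv mon-host 3300

# Probe an OSD port (OSDs typically bind in the 6800-7300 range)
nc -zv osd-host 6800
```

If the probes succeed from the Ceph network but hang or fail from the
OpenStack network, the problem is on the path rather than in the pool.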
Hi,
With --debug-objecter=20, I found that the rados ls command hangs,
looping on laggy messages:
2019-07-03 13:33:24.913 7efc402f5700 10 client.21363886.objecter _op_submit op 0x7efc3800dc10
2019-07-03 13:33:24.913 7efc402f5700 20 client.21363886.objecter _calc_target epoch 13146 bas
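For reference, the hanging command was presumably run along these lines
(the pool name "volumes" is a placeholder, not the pool from this thread):

```shell
# Run rados ls with objecter debugging raised to 20
rados -p volumes --id cinder --debug-objecter=20 ls
```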
Hi Eugen,
The cinder keyring used by the two pools is the same; the rbd command
works using this keyring and the ceph.conf used by OpenStack, while the
rados ls command stays stuck.
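The comparison above, assuming a pool named "volumes" and a client named
client.cinder (both placeholders), looks roughly like:

```shell
# Works: list RBD images in the pool with the cinder keyring
rbd -p volumes --id cinder ls

# Hangs: list raw objects in the same pool with the same keyring
rados -p volumes --id cinder ls
```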
I tried with the previously used ceph-common version, 10.2.5, and the
latest ceph version, 14.2.1.
With the Nautilus ceph-co
Hi,
did you try to use rbd and rados commands with the cinder keyring, not
the admin keyring? Did you check if the caps for that client are still
valid (do the caps differ between the two cinder pools)?
Are the Ceph versions on your hypervisors also Nautilus?
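The caps can be inspected with something like this (client.cinder is an
assumed client name):

```shell
# Show the caps currently granted to the cinder client;
# both cinder pools should appear in the osd caps line
ceph auth get client.cinder
```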
Regards,
Eugen
Quote from Adr
Hi all,
I'm facing a very strange issue after migrating my Luminous cluster to
Nautilus.
I have 2 pools configured for OpenStack cinder volumes with a
multiple-backend setup: one "service" Ceph pool with cache tiering and one
"R&D" Ceph pool.
After the upgrade, the R&D pool became inaccessible f