Hi,

I'm still testing my two-node (dedicated) iSCSI gateway with Ceph 12.2.12 before I dare to put it into production. I installed the latest tcmu-runner release (1.5.1) and, as before, I'm seeing both nodes switch the exclusive lock on the disk images every 21 seconds. The tcmu-runner logs look like this:

2019-08-05 12:53:04.184 13742 [WARN] tcmu_notify_lock_lost:222 rbd/iscsi.test03: Async lock drop. Old state 1
2019-08-05 12:53:04.714 13742 [WARN] tcmu_rbd_lock:762 rbd/iscsi.test03: Acquired exclusive lock.
2019-08-05 12:53:25.186 13742 [WARN] tcmu_notify_lock_lost:222 rbd/iscsi.test03: Async lock drop. Old state 1
2019-08-05 12:53:25.773 13742 [WARN] tcmu_rbd_lock:762 rbd/iscsi.test03: Acquired exclusive lock.

The old state can sometimes be 0 or 2.
Is this expected behaviour?

What may be of interest in my case is that I use a dedicated cluster_client_name in iscsi-gateway.cfg (not client.admin), and that I'm running two separate targets in different IP networks.
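For reference, the relevant part of my iscsi-gateway.cfg looks roughly like this (the user name, keyring file, and IPs below are placeholders, not my actual values):

[config]
cluster_name = ceph
# dedicated CephX user instead of client.admin (placeholder name)
cluster_client_name = client.iscsigw
gateway_keyring = ceph.client.iscsigw.keyring
# gateways from both IP networks (placeholder addresses)
trusted_ip_list = 192.168.1.10,192.168.1.11,192.168.2.10,192.168.2.11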

Thanks for any advice,
matthias

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
