Hi list,

  I wonder if someone can help with rbd kernel client fencing (aimed at
avoiding a simultaneous rbd map on different hosts).

I know the exclusive-lock image feature was added later to avoid manual rbd
lock CLIs, but I want to understand the earlier blacklist-based solution.

The official workflow I've got is listed below (without the exclusive-lock
feature); a concrete command sketch follows the list:

 - identify old rbd lock holder (rbd lock list <img>)
 - blacklist old owner (ceph osd blacklist add <addr>)
 - break old rbd lock (rbd lock remove <img> <lockid> <locker>)
 - lock rbd image on new host (rbd lock add <img> <lockid>)
 - map rbd image on new host
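
To make that concrete, here is a minimal sketch of the sequence as shell
commands; the image name (img), lock id (mylockid), locker id (client.4123)
and address are placeholders of my own, taken from a hypothetical
"rbd lock list" output:

  # 1. find the current lock holder; prints the locker id and its address
  rbd lock list img

  # 2. blacklist the old owner's address as reported above (ip:port/nonce)
  ceph osd blacklist add 192.168.0.10:0/3012345

  # 3. break the old lock, naming the lock id and the locker from step 1
  rbd lock remove img mylockid client.4123

  # 4. take the lock on the new host, then map the image
  rbd lock add img mylockid
  rbd map img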

A blacklisted entry is identified by entity_addr_t (ip, port, nonce).
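
For illustration (the address is made up by me), entries carry that full
ip:port/nonce triple and can be inspected or cleaned up with:

  # entries are listed as ip:port/nonce together with their expiry time
  ceph osd blacklist ls

  # a stale entry can be removed once the old client is known to be gone
  ceph osd blacklist rm 192.168.0.10:0/3012345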

However, as far as I know, the ceph kernel client reconnects its sockets when
a connection fails, so I wonder whether fencing breaks down in this scenario:

1. the old client's network goes down for a while
2. the failover steps above (lock list, blacklist, lock remove, lock add,
map) are performed on a new host
3. the old client's network comes back and it reconnects to the OSDs on a
newly created socket, i.e. with a new (ip, port, nonce) tuple that no longer
matches the blacklist entry

As a result, both the new and the old client can write to the same rbd image,
which could cause data corruption.
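
To illustrate the concern (both addresses are hypothetical): a blacklist
entry added with a full ip:port/nonce triple only matches that exact
instance, so a client that comes back with a fresh nonce would not be caught:

  blacklisted during failover:   192.168.0.10:0/3012345
  after the client reconnects:   192.168.0.10:0/987654   <- not blacklisted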

So does this mean that if the kernel client does not support the
exclusive-lock image feature, fencing is not possible?
