Dears,
Some days ago I read about the commands rbd lock add and rbd lock remove.
Will these commands still be maintained in future Ceph versions, or is
exclusive-lock the preferred way to do locking in Ceph, with these commands
becoming deprecated?
Thanks a lot,
Marcelo
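[For reference, the advisory lock commands being asked about work like this;
a minimal sketch, where the image name, lock id and locker are placeholders:]

    # add an advisory lock with an arbitrary lock id
    rbd lock add test-img my-lock-id

    # list current lockers; the output includes the locker (e.g. client.4123)
    rbd lock list test-img

    # remove the lock: takes the lock id and the locker from "rbd lock list"
    rbd lock remove test-img my-lock-id client.4123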
On Mon, Jul 24, 2017 at 2:15 PM, wrote:
> 2 questions:
>
> 1. At the moment I use kernel 4.10; the exclusive-lock does not work
> correctly in kernel versions below 4.12, right?
Exclusive lock should work just fine under 4.10 -- but you are trying
to use the new "exclusive" map option that is only available starting
with kernel 4.12.
2 questions:
1. At the moment I use kernel 4.10; the exclusive-lock does not work
correctly in kernel versions below 4.12, right?
2. Would the command with exclusive be this?
rbd map --exclusive test-xlock3
Thanks a lot,
Marcelo
On 24/07/2017, Jason Dillaman wrote:
> You will need to pass the "exclusive" option when running "rbd map"
> (and be running kernel >= 4.12).
You will need to pass the "exclusive" option when running "rbd map"
(and be running kernel >= 4.12).
On Mon, Jul 24, 2017 at 8:42 AM, wrote:
> I'm testing Ceph in my environment, but the exclusive-lock feature doesn't
> work correctly for me, or maybe I'm doing something wrong.
>
> I'm testing on two machines
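[To make that suggestion concrete, a minimal sketch of the usage on a
>= 4.12 kernel; the behaviour on the second machine is my reading of the
option, not quoted from this thread:]

    # machine A: map the image and acquire the exclusive lock at map time;
    # "exclusive" disables automatic lock transitions, so the lock is not
    # handed over for as long as the device stays mapped
    rbd map --exclusive test-xlock3

    # machine B: can no longer transparently steal the lock; the map
    # attempt is refused instead of silently allowing a second writer
    rbd map --exclusive test-xlock3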
I'm testing Ceph in my environment, but the exclusive-lock feature doesn't
work correctly for me, or maybe I'm doing something wrong.
I'm testing on two machines: I create one image with exclusive-lock enabled.
If I understood correctly, with this feature only one machine can mount and
write to the image at a time.
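[For anyone reproducing this, creating such an image looks roughly like the
following; size, image name and feature list are placeholders:]

    # create a test image with the exclusive-lock feature enabled
    rbd create --size 1G --image-feature layering,exclusive-lock test-xlock3

    # verify the feature list on the image
    rbd info test-xlock3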
Hello,
In this context my first question would also be: how does one wind up with
such lock contention in the first place?
And how does one safely resolve it?
Both of which are not Ceph problems, but those of the client stack being
used, or of knowledgeable, 24/7 monitoring and management.
Net-split
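[For the "safely resolve" part, the usual pattern -- an assumption on my
side, not spelled out in this thread -- is to fence the stale client before
breaking its lock:]

    # see which client currently holds the lock, and from which address
    rbd lock list test-xlock3

    # fence that client at the cluster level so it can no longer write;
    # the address comes from the "rbd lock list" output
    ceph osd blacklist add 192.168.0.10:0/123456789

    # only then break the stale lock (<lock-id> and <locker> as listed)
    rbd lock remove test-xlock3 <lock-id> <locker>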
Unfortunately that is correct -- the exclusive lock automatically
transitions upon request in order to handle QEMU live migration. There
is some on-going work to deeply integrate locking support into QEMU
which would solve this live migration case and librbd could internally
disable automatic lock transitions.
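[You can watch this transition happen from the command line; a small sketch,
assuming an image named test-xlock3 and my understanding that the managed
exclusive lock shows up in the advisory lock listing with an "auto" id:]

    # while client A holds the exclusive lock, it is listed with an "auto" id
    rbd lock list test-xlock3

    # start writing from client B, then list again: with the default
    # (non-exclusive) mapping the lock owner changes hands automatically
    rbd lock list test-xlock3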
Hello all,
I have been attempting to use the exclusive-lock rbd volume feature to try
to protect against having two QEMUs writing to a volume at the same time.
Specifically if one VM appears to fail due to a net-split, and a second
copy is started somewhere else.
Looking at various mailing list posts
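[For context, the scenario being described is roughly the following; the
image name and QEMU options are placeholders, and the failure mode is the
one explained in the reply above:]

    # machine A: VM runs with its disk on an rbd image
    qemu-system-x86_64 -m 1024 -drive format=raw,file=rbd:rbd/vm-disk,if=virtio

    # machine B: after a suspected net-split, a second copy of the VM is
    # started against the same image; with default exclusive-lock behaviour
    # the lock simply transitions over, so both QEMUs may end up writing
    qemu-system-x86_64 -m 1024 -drive format=raw,file=rbd:rbd/vm-disk,if=virtio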