10.2.3 is now released to updates - marking 'Fix Released'
** Changed in: ceph (Ubuntu)
Status: Fix Committed => Fix Released
--
Included in the 10.2.3 release, which is currently in xenial-proposed for
testing, so marking Fix Committed.
** Changed in: ceph (Ubuntu)
Status: Confirmed => Fix Committed
** Changed in: ceph (Ubuntu)
Importance: Undecided => Medium
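For anyone who wants to verify the fix from xenial-proposed before it reaches
updates, a rough sketch of the usual steps (the repository line and package
selection below are only examples; adjust them to your setup):

echo "deb http://archive.ubuntu.com/ubuntu xenial-proposed main universe" | sudo tee /etc/apt/sources.list.d/xenial-proposed.list
sudo apt-get update
sudo apt-get install -t xenial-proposed librbd1 ceph-common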
--
Thanks. Interesting note from my side: I'm currently debugging a
different hang that we're experiencing with XFS inside the guest. See
http://tracker.ceph.com/issues/17536, which will be updated with detailed
logs soon.
--
Until Monday I'd have said: yes, fixed. We have had 10.2.3 running for a few
weeks now and it looks good.
However, on Monday we had 2 VMs in one night that died with the error
message 'INFO: task jbd2/sda1-8:171 blocked for more than 120 seconds.'
This was during snapshot create/delete times.
Al
Now that 10.2.3 has been released: any news on whether this is fixed? Daniel?
--
The v10.2.2 branch with the cherry-picked fix is available under the
'wip-16950-jewel' branch name. See the instructions above for how to
install test/development packages.
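In case it saves someone a lookup, a rough sketch of pulling such test
packages on xenial; the repository URL is only a placeholder for whatever the
instructions above point you at:

echo "deb [trusted=yes] <repo-url-for-the-wip-16950-jewel-builds> xenial main" | sudo tee /etc/apt/sources.list.d/ceph-wip-16950-jewel.list
sudo apt-get update
sudo apt-get install librbd1 ceph-common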
--
@jdillaman That would be awesome; I'd test this version on our servers
to see if it resolves our problems.
--
If you are interested, I can build a Jewel 10.2.2 + this fix
"development" release package [1] that you can install to verify that
it resolves the issue. I am going out of town for a few days, so if it's
desired I would need to know ASAP in order to start the build.
[1] http://docs.ceph.com/docs/
Thanks for the possible workaround. But fast-diff is a very important
feature for us and unfortunately disabling it is not an option for us.
The virtual machine gets killed almost every night at the moment. Is
there an estimated time window for the fix?
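For reference, a quick way to see which features are enabled on an image
(the image spec below is just a placeholder):

rbd info rbd/vm-disk-1

The "features:" line of the output lists what is enabled, e.g. layering,
exclusive-lock, object-map, fast-diff.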
--
Thanks for the assistance. It appears that the issue you are hitting is
due to a failed watch:
2016-08-12 22:29:45.249895 7f6dc700 -1 librbd::ImageWatcher:
0x7f6db4003b60 image watch failed: 140096867143248, (107) Transport
endpoint is not connected
There is a heartbeat that your client is su
I should also mention that, in the meantime, you may want to disable the
exclusive-lock feature on the affected images via the rbd CLI:
rbd feature disable exclusive-lock,object-map,fast-diff
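For completeness, the command also takes an image spec and needs to be run
once per affected image; 'rbd/vm-disk-1' below is only a placeholder:

rbd feature disable rbd/vm-disk-1 exclusive-lock,object-map,fast-diff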
--
Got it. This is the log output leading up to the dead KVM process.
2016-08-12 22:29:44.499285 7f6dc700 10 librbd::ImageWatcher: 0x7f6db400b020
C_NotifyAck start: id=234071422725522, handle=140096867143248
2016-08-12 22:29:44.518673 7f6dc700 10 librbd::ImageWatcher: 0x7f6db4003b60
image
OK, it's logging now for a machine that died 3 times in the last 3 nights.
All deaths directly followed a backup run, where we do this:
1. create a snapshot
2. create an rbd fast-diff between this snapshot and the one from 24 hours earlier (roughly as sketched below)
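A rough sketch of what that looks like on the CLI; the image and snapshot
names are placeholders, and we assume rbd export-diff (which takes advantage
of fast-diff/object-map when enabled) is what produces the delta:

rbd snap create rbd/vm-disk-1@backup-2016-08-13
rbd export-diff --from-snap backup-2016-08-12 rbd/vm-disk-1@backup-2016-08-13 /backup/vm-disk-1-2016-08-13.diff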
The log file is growing *very* fast, so let's see if we can afford to
I think the config option is "log file" instead of "log path". Also,
make sure your QEMU process has access to the directory (both
permissions and AppArmor).
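Assuming that is right, the [client] section would then look something like
this (the directory is just an example and must be writable by the QEMU
process and permitted by its AppArmor profile):

[client]
    debug rbd = 20
    log file = /var/log/ceph/qemu/ceph-$pid.log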
--
I have added
[client]
debug rbd = 20
log path = /tmp/ceph-$pid.log
to /etc/ceph/ceph.conf
but this seems to have no effect on logging. At least, I can't find anything
relevant below /tmp.
We're using virsh/libvirt for KVM virtualization in a hybrid
configuration (OSD hosts == VM hosts). Any hi
As an alternative, could you provide the core dump? You can use ceph-
post-file to upload the file.
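Roughly like this; the path is just an example, and the tool should print an
upload tag that you can paste back here:

ceph-post-file /var/crash/qemu-core-dump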
--
Sure - but that's a bit heavy because these are production machines ;)
That'll take some time for a maintenance window.
Would this setting be active as soon as I `virsh restart` the machine?
--
Hmm -- any chance either one of you could reproduce with debug logging
enabled? In your ceph configuration file, add the following to the
"[client]" section:
debug rbd = 20
log path = /path/that/is/writable/by/qemu/process/ceph-$pid.log
--
No, we don't have any message like that.
On 4 Aug 2016 13:55, "Jason Dillaman" wrote:
> @Luis or Daniel: Are you seeing an "image watch failed" error message in
> your QEMU librbd logs?
@jdillaman nope, there's no such entry.
--
@Luis or Daniel: Are you seeing an "image watch failed" error message in
your QEMU librbd logs?
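Assuming a default libvirt setup where QEMU's stderr ends up in
/var/log/libvirt/qemu/, something like this should be enough to check:

grep -n "image watch failed" /var/log/libvirt/qemu/*.log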
--
We had this again last night. It hit the 14.04 machine again. This bug
seems to prefer this particular VM...
librbd/ExclusiveLock.cc: In function 'std::__cxx11::string
librbd::ExclusiveLock::encode_lock_cookie() const [with ImageCtxT =
librbd::ImageCtx; std::__cxx11::string = std::__cxx11::basic_st
We have had this on two VMs in the last 2 days, which run Ubuntu 12.04
and 14.04 via virtio-scsi with 8 queues. Around the time they were
killed, unattended-upgrades was also running on the host.
--
Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: ceph (Ubuntu)
Status: New => Confirmed
--