Thanks Falstaff, yeah we knew the fixes are in later releases - they just were
hard to backport while keeping the general regression risk low (for all other users).
UCA as you used it is a valid way to get fixes ahead of time onto the last LTS.
Thanks for verifying this again, Falstaff.
@bestpa - sad to hear that.
I’ve given up on qcow and on Ubuntu for my hypervisor needs. See ya!
On Thu, Apr 26, 2018 at 3:00 PM falstaff wrote:
> Observed the same issue on Ubuntu 16.04.4 with a Dell R440 and a RAID 5
> consisting of 3 10k SAS disks. Using 16.04+UCA-Pike resolved the issue
> just fine.
Observed the same issue on Ubuntu 16.04.4 with a Dell R440 and a RAID 5
consisting of 3 10k SAS disks. Using 16.04+UCA-Pike resolved the issue
just fine.
On "static" systems I'm usually a slow upgrader on others systems I use daily
cloud images right away.
So if you have a complex (custom/manual) setup I'd likely go with LTS+UCA.
It means less major changes to your system than doing a release upgrade every 6
months but would keep your virt stack
Thanks for pointing out that pike would be 3.6. To me it is still hard
to track which version each UCA release includes, because those resources
are actually quite hard to find.
Given the proposed solution [3], do you consider the usage of the Ubuntu
Cloud Archive package repository to be the recommended approach?
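For reference, one way to see which libvirt/qemu versions a given UCA pocket
would pull in is to enable it on a 16.04 test box and query apt without
installing anything (a sketch only; the pike pocket and the package names are
assumptions):
$ sudo add-apt-repository cloud-archive:pike
$ sudo apt update
# show candidate versions without upgrading anything
$ apt-cache policy libvirt-bin qemu-kvm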
pike would be 3.6 not 3.5 - other than that yes.
In general upgrading from 16.04+UCA-Pike -> 18.04 shouldn't be very different
from 17.10->18.04.
It is supposed to work.
As you know it is generally good advice to go with test systems,
backups, phased upgrades, ... as there always could be something unexpected.
Hi Christian,
thanks for your insights. The Ubuntu Cloud Archive is completely new to
me. Am I right in the assumption that adding the ocata cloud-archive
repository with 'sudo add-apt-repository cloud-archive:ocata' and a
subsequent 'apt update && apt upgrade' would effectively upgrade libvirt
to
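For illustration, the steps referred to above would look roughly like this on
16.04 (a sketch only; the libvirt version you end up with depends on which
cloud archive pocket is enabled):
# enable the Ocata cloud archive pocket
$ sudo add-apt-repository cloud-archive:ocata
$ sudo apt update && sudo apt upgrade
# check which libvirt ended up installed
$ virsh --version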
Hi Dominik,
thanks for the links.
Yes, 1.3.1 is ancient in the sense of "as old as the 16.04 Ubuntu release"
plus fix-backports, as far as they are identifiable and qualify for the SRU
process [1].
We have three options here, but atm not all are feasible:
1. Backport the fix to Xenial
I beg your pardon, but
We see this same issue in one of our production systems. The live backup
scripts fail every few days and it is necessary to manually run a
blockjob abort; a subsequent blockcommit then usually passes. The backup
scripts can be found here:
https://github.com/dpsenner/libvirt-administration-tools
On
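For context, the manual recovery described above boils down to something like
the following (a sketch with a hypothetical domain name and disk target):
# abort the stuck block job on the disk
$ virsh blockjob mail vda --abort
# then retry the active commit and pivot back onto the base image
$ virsh blockcommit mail vda --active --verbose --pivot --wait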
I thought that a few links to the mailing list archives could help so
here they are:
https://www.redhat.com/archives/libvirt-users/2017-August/msg00020.html
https://www.redhat.com/archives/libvirt-users/2017-October/msg00033.html
I increased the write load in my reproducer, but still can't trigger it
here :-/
Did you have any chance to try the Cloud Archive versions mentioned in
c#13 or (I know it is based on an older version, but you could force it in)
the ppa from c#15?
From the comments it seems this is your production environment
I can reproduce this reliably on Server 16.04.3 LTS.
$ virsh version
Compiled against library: libvirt 1.3.1
Using library: libvirt 1.3.1
Using API: QEMU 1.3.1
Running hypervisor: QEMU 2.5.0
I have 4 VMs; the one that consistently fails is write heavy, it's a
carbon/graphite server.
Steps to reproduce:
Thanks Patrick,
I unfortunately haven't found anything in there that I was lacking in my
attempt to recreate this :-/
Btw: I think this should be $i and not mail, although if the guests are all the
same it doesn't matter:
"IMAGE_DIR=`virsh domblklist mail" -> "IMAGE_DIR=`virsh domblklist $i"
What is the
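To illustrate the suggested fix, a per-domain loop along these lines would pick
up each guest's disk path instead of hard-coding one domain (a sketch only; the
awk filter and the backup steps are placeholders, not the actual script):
for i in $(virsh list --name); do
    # resolve the source image of the vda disk of domain $i
    IMAGE_DIR=$(virsh domblklist "$i" | awk '$1 == "vda" {print $2}')
    echo "backing up $i from $IMAGE_DIR"
    # ... snapshot / copy / blockcommit steps would go here ...
done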
Happy to share what I can.
I should have mentioned that the backup script goes through all my VMs, and my
ambiguous comment meant that it went through 3 of the VMs before stalling
on this, the fourth. The system has low utilisation of RAM, CPU and disk: a
ProLiant G5, dual chip, quad core (no HT).
Hi Patrick,
sorry to see you run into it again - but for now I consider it a great chance
to find something that allows us to reproduce and catch the issue.
You said:
"My backup script made it through 3 of these, one hda, and one vda as well.
Then this"
It seems your backup script does:
1. check old
What's the proper way to keep running the base LTS with a newer virt
stack? Do I need to point to a particular repo? I don't even know where
to start with this...
Happened again. Same VM, too. My backup script made it through 3 of
these, one hda, and one vda as well. Then this:
-BEGIN backup for VM called mail
Sat Apr 22 00:07:51 EDT 2017
current snapshots mail - should be empty
 Name                 Creation Time             State
------------------------------------------------------------
Thanks for reporting back,
there are a few races with block jobs that we are looking into atm, which might
affect this as well.
Unfortunately - as it always is with races - they are hard to trigger/confirm,
and I had hoped you might have found a way to trigger it reliably.
If you happen to find any coincid
At this point, I'll consider it a one-time occurrence, but I'm thinking
it may have happened other times, due to a jammed-up backup script I see
once in a while. I don't wish to pursue it further until it's too
infuriating; then I'll just run a host OS with a fresher version
available in a different
Well, I'll be. It seems to work no problem upon subsequent tries.
Here's my methodology and run-through.
virsh # list
 Id    Name                 State
 17    mail                 running
virsh #
virsh #
virsh #
vir
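For readers following along, the cycle such a run-through exercises is roughly
the following (a sketch with hypothetical image paths, not the exact script
used here):
# create a disk-only external snapshot so the base image becomes stable
$ virsh snapshot-create-as --domain mail backup-snap \
    --diskspec vda,file=/var/lib/libvirt/images/mail-overlay.qcow2 \
    --disk-only --atomic --no-metadata
# copy the now-quiescent base image away
$ cp /var/lib/libvirt/images/mail.qcow2 /backup/mail.qcow2
# merge the overlay back and pivot - the step that fails here with
# "disk not ready for pivot yet"
$ virsh blockcommit mail vda --active --verbose --pivot --wait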
For now I'll mark it as incomplete waiting for any further info you can provide.
To better triage and confirm your case, I'd like to understand if:
- you can reliably trigger this (if you have steps to do so please report them as
well)
- it was a one-time failure
- it happens every now and then in your
Info to repro I tried:
# create a simple system via uvtool-libvirt
$ uvt-kvm create [...]
$ virsh dumpxml > t1.xml
$ virsh undefine
# need to be transient for the blockcopy test
$ virsh create t1.xml
# Now we have a transient domain, and can copy them around:
$ virsh domblklist xenial-zfspool-
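The copy step this repro builds towards would look roughly like this (a sketch;
domain name and target path are placeholders):
# mirror the disk to a new image and pivot the transient domain onto it
$ virsh blockcopy DOMAIN vda /var/lib/libvirt/images/copy.qcow2 --wait --verbose --pivot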
You reported your issue on commit rather than copy as in the RH bug.
So I am looking into that more specifically.
$ virsh snapshot-create-as --domain testguest snap1 --diskspec
vda,file=/var/lib/uvtool/libvirt/images/vda-snap1.qcow2 --disk-only --atomic
--no-metadata
# touch a file in guest
$ virsh sna
On the changes:
The first set of patches is in 1.2.18, so we have that already:
faa14391 virsh: Refactor block job waiting in cmdBlockCopy
74084035 virsh: Refactor block job waiting in cmdBlockCommit
2e782763 virsh: Refactor block job waiting in cmdBlockPull
eae59247 qemu: Update state of block job to R
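To check whether a given upstream commit is contained in the libvirt release you
are running, something like the following against a libvirt git checkout can
help (illustrative only):
# list release tags that already contain the first refactoring commit
$ git tag --contains faa14391 | head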
many thanks!
On Wed, Apr 12, 2017 at 2:38 PM, Joshua Powers wrote:
> Hi and thanks for reporting this bug! I am going to see if someone else
> from the team can also take a look at this to see how big of a change
> this would require.
>
> Also, sorry for marking this as incomplete and then new
Hi and thanks for reporting this bug! I am going to see if someone else
from the team can also take a look at this to see how big of a change
this would require.
Also, sorry for marking this as incomplete and then new again as I was
on the wrong tab.
Bugfix mentioned: https://bugzilla.redhat.com/show_bug.cgi?id=1197592
https://bugs.launchpad.net/bugs/1681839
Title: libvirt - disk not ready for pivot yet